12 Comments
Filip Kozłowski:

It's worth mentioning the MCP standard here, which has been gaining popularity recently. It's amazing how much it has helped make the shift from user-driven to agent-driven systems a reality.
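For a concrete picture of what "agent-driven" means here, a minimal sketch using the official Python MCP SDK's FastMCP interface (the server name, tool, and inventory data are hypothetical, purely for illustration):

```python
# Minimal MCP server sketch. The FastMCP interface is from the official
# `mcp` Python SDK; the tool and its data are invented for illustration.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("inventory-demo")  # hypothetical server name

@mcp.tool()
def check_stock(sku: str) -> str:
    """Report how many units of a SKU are in stock."""
    inventory = {"WIDGET-1": 42, "GADGET-2": 0}  # stub; a real server would query a database
    return f"{sku}: {inventory.get(sku, 0)} units in stock"

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio for an agent to discover and call
```

The point is that the agent, not the user, decides when to call `check_stock`; the human only sets the goal.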

Tony Asdourian:

Your last two posts, in conjunction with Amodei’s “50% of entry-level white collar jobs will quite possibly be gone by 2027” the other day, just highlight the increasingly bimodal and heightened reactions that everyone, including people with deep AI expertise, is having towards the prospect of AGI/ASI coming soon. At least subjectively, it seems even more polarized, with more extreme rhetoric, than a year ago.

I appreciate how carefully you have examined the assumptions on both sides without resorting to hyperbole, boosterism, or doomism. But I have to be honest: it is sort of amazing that informed opinion is divided between thinking that a) the world will be unrecognizable in 2-5 years, or b) it's not going to be much different at all; maybe there will be job losses for white-collar workers on the order of what NAFTA was for blue-collar workers, and it'll take a few decades. We are living in bizarre times.

Steve Newman:

Yes, it is quite amazing!

That bifurcation is a windmill I plan to tilt at, and these last two posts are a reflection of the homework I'm doing in preparation. But, y'know, maybe don't bet against the windmill.

Tony Asdourian:

Hey Steve, check out this blog post I ran into, if you have a moment. I don't really understand any of the technical stuff, but it strikes me as very grounded and an interesting take that I thought you might find helpfully different from some of the more pie-in-the-sky stuff I've been seeing people write.

https://fly.io/blog/youre-all-nuts/

Chris L:

Epistemic status: Extremely quickly written hot take. I'd probably want to revise at least some aspects if I thought them through more deeply.

A few pieces of evidence against the normal technology view:

• Three Mile Island nuclear reactor to restart to power Microsoft AI operations - https://www.theguardian.com/environment/2024/sep/20/three-mile-island-nuclear-plant-reopen-microsoft

• "President Trump and @SecretaryWright are laser-focused on winning the global race for AI. And that means we must unleash our energy dominance and restore American competitiveness in collaboration with our National Labs and technology companies!" - https://x.com/ENERGY/status/1907829928672014420

• Ten-year moratorium on AI regulation proposed: https://www.dlapiper.com/en-au/insights/publications/ai-outlook/2025/ten-year-moratorium-on-ai

• Anthropic’s new AI model turns to blackmail when engineers try to take it offline - https://techcrunch.com/2025/05/22/anthropics-new-ai-model-turns-to-blackmail-when-engineers-try-to-take-it-offline/

• Turing test - https://www.livescience.com/technology/artificial-intelligence/open-ai-gpt-4-5-is-the-first-ai-model-to-pass-an-authentic-turing-test-scientists-say - what's most telling about this is not so much the result itself, but that there have been so many breakthroughs that the general reaction was "meh"

I'll admit that this comment mostly rebuts the casual use of "normal" rather than the various definitions of normal discussed in the paper; however, I still think his paper leans on these intuitions to a large extent.

After all, once you realise that there are so many ways in which AI is a deeply unusual technology, you then have the "generator function" that will allow you to rebut his individual points. But if you've been persuaded that AI is a normal technology, then you aren't going to take the extra step to think through how AI being weird would most likely break his assumptions.

Another key point to keep in mind is that you can prove anything by choosing metrics poorly. Take, for example, hours of use as a metric for the adoption of AI. If I ask the AI a complex question that would take me hours to resolve and it answers almost immediately, this metric would misleadingly suggest that this use of AI was not very impactful. Similarly, it doesn't make sense to adjust for cost when measuring adoption speed: when people claim that AI will be adopted extremely fast, cost (and how fast costs are falling) is precisely one of the reasons they believe that.
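To make the metric problem concrete, here's a toy illustration (all numbers invented) of how the same two interactions rank completely differently under an hours-of-use metric versus a work-replaced metric:

```python
# Toy illustration (all numbers invented): two AI interactions ranked
# under two different adoption metrics give opposite conclusions.
interactions = [
    # (description, minutes spent using the AI, minutes of human work replaced)
    ("casual back-and-forth chat", 60, 5),
    ("complex question answered instantly", 2, 180),
]

print("Ranked by hours of use:")
for desc, used, _ in sorted(interactions, key=lambda x: -x[1]):
    print(f"  {desc}: {used} min")

print("Ranked by human work replaced:")
for desc, _, saved in sorted(interactions, key=lambda x: -x[2]):
    print(f"  {desc}: {saved} min")
```

By hours used, the casual chat looks thirty times more significant; by work replaced, the instant answer dominates. Pick the wrong metric and you can "prove" whatever you like.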

There is some value to the AI as Normal Technology frame, but it is much more limited than he suggests. Perhaps this could be turned into a critique too? AI is clearly an extremely weird technology, about as weird as they get. Given all of AI's unusual properties, surely you'd expect at least some of that to flow through into its impact on society and how we need to govern it.

An argument that the impacts of AI and how we should govern it being surprisingly normal in a few different ways sounds plausible. But the claim that AI can basically just be treated as a 'normal technology'; really? It's almost as though the bottom line was written first in order to be maximally contrarian to the AGI folks and then the argument was written afterwards in order to justify it. Does this psychological line of thought constitute strong evidence? No. But is his argument suspicious and therefore deserving of close scrutiny? I think it is.

Alternatively, maybe it counts against his theory that his framings probably wouldn't have predicted the weirdness that we've seen already?

Steve Newman:

There's certainly room to debate all of this, and I personally don't have a clear view of how I expect things to play out. I'll just leave you with a single thought: how many similar statements, or at least similarly exotic-seeming at the time, could have been made about electricity in the late 1800s?

To be clear, I expect AI to ultimately have a bigger impact than electricity, which is to say very big indeed. But I am open to the idea that it may take quite some time before the course of AI's impact on the world truly diverges from some of the giant technological transformations of the past.

Chris L:

"How many similar statements, or at least similarly exotic-seeming at the time, could have been made about electricity in the late 1800s?"

I don't actually know. I suppose this is one area where a historian could add real value.

Jan Matusiewicz:

I recall that there was widespread optimism after the Second World War about the potential of cheap nuclear energy as a source of abundance. That optimism stopped after the Three Mile Island incident and amid the growing threat of nuclear war.

Jan Matusiewicz:

One common-sense argument: the AI 2027 vision that the ruling business and political elite would just hand power over to the AGI seems naive. As a general rule, people who have ascended to power don't like to relinquish it, or to be replaced by an AI whose goals were determined by eggheads at some tech company. An ideal subordinate should not have its own goals and moral limitations, but should fulfill any task it is given. (Sorry, I might have gotten too cynical after watching "Succession" :) )

Steve Newman:

For sure, people do and will hate to give up power. I think the idea in AI 2027 is some combination of:

1. Circumstances, such as a perceived race with China, may force leaders' hands. Given the apparent choice between delegating power and losing to a rival, they may consider delegation to be the lesser evil.

2. There is a fine line between relying on an AI to be your extremely helpful assistant and advisor, and outright handing power to the AI. It might not always be obvious which side of the line you're on. (I wrote about this idea a couple of years ago, and I think it holds up fairly well: https://secondthoughts.ai/i/129079922/ais-will-make-all-the-decisions.)

Jan Matusiewicz:

1. There are many conflicts of interest between corporations and the state. I rarely see CEOs put the state's interests first. On the contrary, there are many cases where corporations, through lobbying, shape state actions to their own interests, never mind the welfare of society. Even if a leader considered delegating power to AI the best thing for the country, the interests of the ruling elite would come first. Which seems reassuring in this case.

2. I think the speed of these processes would be limited by humans and their ability to process new content. I also find it very hard to predict the future shape of society and its politics. We may underestimate the willingness of people to undermine the current order. Disillusionment with their chances of achieving the American Dream, and discontent with the direction their country is headed, led many American voters to support the candidate who promised to undermine the world order. The current tariff war is one example of the rejection of a long-term consensus, and it happened due to competition from "Chinese peasants". How would voters react if threatened by competition from robots and an alien AGI? Who would they support to avoid being permanently displaced from the job market?

It is very easy for humans to fear others, even when they are not a real threat. Now imagine robots, not fellow humans from another country, working in stores and services. I expect a huge backlash against that; even now, only 17% of Americans think AI is going to be good for the country: https://www.pewresearch.org/internet/2025/04/03/how-the-us-public-and-ai-experts-view-artificial-intelligence/

Matt Bamberger:

Thank you for this excellent analysis.

It seems to me that a lot of this comes down to capability. If state-of-the-art AI systems remain roughly at their current level of capability, then a lot of the article's arguments are plausible. But if we get to AGI (defined as approximately human-level capabilities across the board), then everything changes.

To take the example of autonomous cars, I'd argue the problem isn't that diffusing technology is slow; it's that the technology in question just wasn't good enough. Human high schoolers routinely go from never having touched a steering wheel to being fully licensed drivers with just a few tens of hours of training and practice.

More generally, we know that smart-ish humans with college degrees take at most a few years to become fully productive at most jobs. It seems almost tautological that the same would be true for AGI (modulo regulatory barriers, of course).
