6 Comments
Filip Kozłowski

It's worth mentioning the MCP (Model Context Protocol) standard here, which has been gaining popularity recently. It's amazing how it has helped make the shift from user-driven to agent-driven systems a reality.
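
For a concrete sense of what that shift looks like, here is a minimal sketch of an MCP tool server using the official Python SDK's FastMCP interface; the server name and tool are illustrative, not from any real deployment:

```python
# Minimal MCP tool-server sketch using the official Python SDK
# (pip install mcp). The server name and tool are illustrative.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def lookup_weather(city: str) -> str:
    """Return a (stubbed) weather report for a city."""
    # A real server would query an actual data source here.
    return f"Weather in {city}: sunny, 22°C"

if __name__ == "__main__":
    # Serves over stdio by default, so an MCP-capable agent can
    # discover the tool via tools/list and call it via tools/call,
    # without a human driving each step.
    mcp.run()
```

The key shift is that tools are advertised to the model for it to discover and invoke, rather than being wired into a fixed, user-driven UI flow.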

Tony Asdourian

Your last two posts, in conjunction with Amodei's claim the other day that “50% of entry-level white collar jobs will quite possibly be gone by 2027,” just highlight the increasingly bimodal and heightened reactions that everyone, including people with deep AI expertise, is having toward the prospect of AGI/ASI coming soon. At least subjectively, it seems even more polarized, with more extreme rhetoric, than a year ago.

I appreciate how carefully you have examined the assumptions on both sides without resorting to hyperbole, boosterism, or doomism. But I have to be honest: it is sort of amazing that informed opinion is divided between thinking that a) the world will be unrecognizable in 2-5 years, and b) it's not going to be much different at all; maybe there will be job loss for white-collar people on the order of what NAFTA was for blue-collar workers, and it'll take a few decades. We are living in bizarre times.

Steve Newman

Yes, it is quite amazing!

That bifurcation is a windmill I plan to tilt at, and these last two posts are a reflection of the homework I'm doing in preparation. But, y'know, maybe don't bet against the windmill.

Chris L

Epistemic status: Extremely quickly written hot take. I'd probably want to revise at least some aspects if I thought them through more deeply.

A few pieces of evidence against the normal technology view:

• Three Mile Island nuclear reactor to restart to power Microsoft AI operations - https://www.theguardian.com/environment/2024/sep/20/three-mile-island-nuclear-plant-reopen-microsoft

• "President Trump and @SecretaryWright are laser-focused on winning the global race for AI. And that means we must unleash our energy dominance and restore American competitiveness in collaboration with our National Labs and technology companies!" - https://x.com/ENERGY/status/1907829928672014420

• Ten-year moratorium on AI regulation proposed: https://www.dlapiper.com/en-au/insights/publications/ai-outlook/2025/ten-year-moratorium-on-ai

• Anthropic’s new AI model turns to blackmail when engineers try to take it offline - https://techcrunch.com/2025/05/22/anthropics-new-ai-model-turns-to-blackmail-when-engineers-try-to-take-it-offline/

• Turing test - https://www.livescience.com/technology/artificial-intelligence/open-ai-gpt-4-5-is-the-first-ai-model-to-pass-an-authentic-turing-test-scientists-say - what's most telling about this is not so much the result itself, but that there have been so many breakthroughs that the general reaction was "meh"

I'll admit that this comment mostly rebuts the casual use of "normal" rather than the various definitions of normal discussed in the paper. However, I still think the paper leans on these intuitions to a large extent.

After all, once you realise that there are so many ways in which AI is a deeply unusual technology, you have the "generator function" that allows you to rebut his individual points. But if you've been persuaded that AI is a normal technology, then you aren't going to take the extra step of thinking through how AI being weird would most likely break his assumptions.

Another key point to keep in mind is that you can prove anything by choosing metrics poorly. Take, for example, hours of use as a metric for the adoption of AI. If I ask the AI a complex question that would take me hours to resolve and it answers almost immediately, this metric would misleadingly suggest that this use of AI was not very impactful; the toy calculation below makes this concrete. Similarly, it doesn't make sense to adjust for cost when measuring adoption speed: when people claim that AI will be adopted extremely fast, cost (and how fast costs are falling) is precisely one of the reasons they believe that.
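
A toy calculation (made-up numbers, purely for illustration) of how the hours-of-use metric shrinks exactly when the AI is most useful:

```python
# Toy illustration with made-up numbers: "hours of AI use" as an
# adoption metric understates impact when the AI is fast.
human_hours_per_task = 3.0        # time the question would take me to resolve
ai_hours_per_task = 30 / 3600     # the model answers in ~30 seconds
tasks_per_week = 10

hours_of_ai_use = tasks_per_week * ai_hours_per_task             # ~0.08 h
human_hours_displaced = tasks_per_week * human_hours_per_task    # 30 h

print(f"Adoption measured in hours of use: {hours_of_ai_use:.2f} h/week")
print(f"Human labour the same usage replaces: {human_hours_displaced:.0f} h/week")
```

By the hours-of-use measure, the better and faster the model gets, the less "adopted" it appears.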

There is some value to the AI as Normal Technology frame, but it is much more limited than he suggests. Perhaps this could be turned into a critique too? AI is clearly extremely weird as a technology, about as weird as they get. Given all the unusual properties of AI, surely you'd expect at least some of this to flow through to its impact on society or to how we need to govern it.

An argument that the impacts of AI, and how we should govern it, will be surprisingly normal in a few different ways sounds plausible. But the claim that AI can basically just be treated as a 'normal technology'? Really? It's almost as though the bottom line was written first, in order to be maximally contrarian to the AGI folks, and the argument was written afterwards to justify it. Does this psychological line of thought constitute strong evidence? No. But is his argument suspicious and therefore deserving of close scrutiny? I think it is.

Alternatively, maybe it counts against his theory that his framings probably wouldn't have predicted the weirdness we've already seen?

Steve Newman

There's certainly room to debate all of this, and I personally don't have a clear view of how I expect things to play out. I'll just leave you with a single thought: how many similar statements, or at least similarly exotic-seeming at the time, could have been made about electricity in the late 1800s?

To be clear, I expect AI to ultimately have a bigger impact than electricity, which is to say very big indeed. But I am open to the idea that it may take quite some time before the course of AI's impact on the world truly diverges from some of the giant technological transformations of the past.

Chris L

"How many similar statements, or at least similarly exotic-seeming at the time, could have been made about electricity in the late 1800s?"

I don't actually know. I suppose this is one area where a historian could add real value.
