Discussion about this post

Venkateshan K

One thing about your robot asteroid example is that attaining AGI isn't even necessary there. There are other developments required, of course, such as the ability to efficiently build new robots using other robots (not to mention the physical resources this needs) and the ability to error-correct without human intervention. It is entirely possible that all this happens without reaching human-level abilities for adaptation and generalization.

MD

Hard to argue against what *may* happen, but there are a few things I found unconvincing here:

- If you click through the GDP graph and look at it on a log scale, you see that it's not smoothly hyperexponential, but rather piecewise exponential. Since 1950 the graph "just" looks like a straight exponential (Link: https://ourworldindata.org/grapher/global-gdp-over-the-long-run?yScale=log&time=1900..latest). This still leaves lots of potential to go wild, but there's a difference between "GDP eventually surpasses any bound" and "GDP surpasses any bound by 2050" (see the sketch after this list for what the log-scale distinction looks like).

- Adaptability is an important benchmark, but it doesn't seem to be the way AI is improving. At least in my experience, when AI cannot do a given task, you can't do much retraining to help it out; you have to wait for newer models. E.g.: my dad wanted to try transcribing a lot of handwritten documents. GPT-4o produced utter gibberish (random letters), but now Gemini 3 can do it accurately enough to be useful. That doesn't seem like an improvement in adaptability so much as in overall ability? Which is nice, but it also suggests there might be bottlenecks once data becomes scarce or unreliable (e.g. medical tasks).

- The section on spending, and on computer technology in general, is about the effort put in, not the outcomes. The outcomes that are presented are benchmark-y: we have learned a lot about chess, but it's unclear how much we have learned about human intelligence. Similarly for competitions versus practical problems. "Erudite conversation on literally any subject" is notoriously hard to test for actual value, and the smart people I know seem pretty divided into two extreme camps in their usage of AI.
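
To make the log-scale point from the first bullet concrete, here's a minimal sketch with synthetic series (not the actual OWID data; the growth rates are made up for illustration). On a log scale, pure exponential growth is a straight line, while hyperexponential growth curves upward, so fitting a quadratic to log(GDP) and inspecting the curvature term separates the two:

```python
import numpy as np

t = np.arange(1950, 2021)  # years, as in the post-1950 stretch of the graph

# Two synthetic world-GDP-like series (made-up parameters):
# one pure exponential, one hyperexponential (growth rate rising over time).
exponential = 5e12 * np.exp(0.03 * (t - 1950))
hyperexponential = 5e12 * np.exp(0.03 * (t - 1950) + 3e-4 * (t - 1950) ** 2)

for name, gdp in [("exponential", exponential),
                  ("hyperexponential", hyperexponential)]:
    # Fit log(GDP) = a + b*t + c*t^2; c ~ 0 means the log-scale plot is straight.
    c, b, a = np.polyfit(t - 1950, np.log(gdp), deg=2)
    print(f"{name}: curvature c = {c:.2e}, growth rate b = {b:.3f}")
```

On the real series this is muddied by the piecewise structure noted above: a fit over 1900-2020 can show curvature even when each regime, taken alone, looks straight.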

I'm not even an AI skeptic, more of a no-idea-what's-gonna-happen-er, but this is the kind of stuff I'd like to see better addressed. For example, there was a brouhaha a while ago about "LLMs passing the Turing test", but when you actually read the paper, it turned out that the average conversation was something like 4 sentences long, and the human test subjects were psychology undergrads who just wanted course credit for participation and had no incentive to do well on the test. It would be interesting to get something more solid in this direction.
