Discussion about this post

Rob Middleton

As a non-technical reader, my only feedback is to compliment you on your engaging, rigorous and methodical analysis of the AGI landscape. Most commentators are vibe-based, which gets increasingly irritating each week as the field matures and the landscape becomes more granular.

Jacob Kelter

Excellent post! I have one small disagreement with potentially big implications.

> To get transformational AGI within three or four years, I expect that we’ll need at least one breakthrough per year on a par with the emergence of “reasoning models” (o1). I suspect we’ll specifically need breakthroughs that enable continuous learning and access to knowledge-in-the-world.

I agree with this, except that I think the breakthroughs needed are likely more difficult than "reasoning models." Very roughly, let's say the big four breakthroughs that got us here were: deep learning itself, training neural networks on GPUs, transformers, and "reasoning." Of these, I think "reasoning" was probably the easiest. It is plausible that solving continuous learning and/or long-term memory requires breakthroughs on par with deep learning itself. The longer we go without them being solved within the deep learning paradigm, the more likely it is that they can't be solved within that paradigm. And even if a non-deep-learning breakthrough is discovered, the new technique might not be trainable on GPUs (for example, last I looked into the work being done at Numenta, their brain-inspired techniques can't be trained on GPUs). In that case, the conceptual breakthrough might be dismissed for years or even decades, just as deep learning was largely dismissed until adequate compute could be thrown behind it. A LOT depends on whether all the remaining breakthroughs can happen within the deep learning paradigm or not.

2 more comments...
