Discussion about this post

Dan:

I can't say whether "AI" in general will foom, but I'm pretty sure LLMs can't.

My (maybe flawed?) reasoning is this:

LLMs are trained by minimizing a loss function. The lower the loss, the closer the model is to the "perfect" function: the one that best matches the distribution of the training set across its high-dimensional space.

Once the loss is close to its floor (the irreducible entropy of the text itself, so it never actually reaches zero), how is it going to foom? It can't. Diminishing returns.
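To make the diminishing-returns shape concrete, here's a toy sketch assuming a Chinchilla-style power-law loss curve; the constants L_floor, a, and b and the helper loss() are invented for illustration, not fit to any real model:

```python
# Toy illustration (assumed values, not a real model fit) of why
# loss-driven training saturates. Assume a power-law loss curve:
#   L(C) = L_floor + a * C**(-b)
# where L_floor is the irreducible loss (the entropy of the data itself).

L_floor = 1.7   # assumed floor in nats/token; the loss can't go below this
a, b = 10.0, 0.3

def loss(compute: float) -> float:
    """Training loss as a function of compute under the assumed power law."""
    return L_floor + a * compute ** (-b)

# Each extra 10x of compute buys a smaller absolute loss reduction.
for c in [1e3, 1e4, 1e5, 1e6, 1e7]:
    print(f"compute={c:.0e}  loss={loss(c):.3f}  gap_to_floor={loss(c) - L_floor:.3f}")
```

Under these made-up constants, the gap to the floor shrinks from about 1.26 to about 0.08 as compute grows 10,000x, so each order of magnitude buys less improvement than the last.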

Benjamin Todd:

You might find some of the figures in the relevant part of this episode useful:

https://www.dwarkeshpatel.com/p/carl-shulman

2 more comments...
