Discussion about this post

Dan:

I can't say if "AI" in general will foom but I'm pretty sure LLMs can't.

My (maybe flawed?) reasoning is this:

LLMs are trained using a loss function. The lower the loss, the closer they are to the "perfect" function that represents the true average over the high-dimensional distribution of the training set.

Once it's close to zero, how is it going to foom? It can't. Diminishing returns.
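
A toy numeric sketch of this diminishing-returns point (my own illustration, not real LLM training): if the loss can only fall toward an irreducible floor, each extra block of training buys less improvement. The floor, starting loss, and decay rate below are made-up numbers.

```python
# Toy sketch (not real training): loss decays toward an assumed
# irreducible floor, so each extra 10 "steps" improves it by less.

floor = 1.7    # hypothetical irreducible loss (entropy of the data)
start = 10.0   # assumed starting loss
decay = 0.9    # assumed per-step geometric decay of the reducible part

prev = None
for step in range(0, 60, 10):
    loss = floor + (start - floor) * decay ** step
    if prev is None:
        print(f"step {step:2d}: loss = {loss:.3f}")
    else:
        print(f"step {step:2d}: loss = {loss:.3f}  (gained {prev - loss:.3f} over 10 steps)")
    prev = loss
```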

Joseph Shawa:

"Feedback Loops Often Peter Out"... I imagine they do, but they don't have to. Program intelligently. Ask the right questions.

