13 Comments
Ron Cline:

Is there anything fundamental in terms of LLM scaling vs accuracy that keeps AI from being less homogeneous and more like humans from Lake Wobegon -- simply above average? E.g., after lots of multi-modal training with "real life". (This Q from a HW engineer.)
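For readers unfamiliar with the scaling-vs-accuracy relationship the question refers to: the standard empirical picture is a Chinchilla-style power law, in which loss falls smoothly but with diminishing returns in both parameter count and training data, and nothing in the formula itself speaks to homogeneity one way or the other. A minimal sketch, with constants loosely based on the Hoffmann et al. (2022) fits (treat the exact numbers as assumptions):

```python
# Chinchilla-style scaling law: predicted pretraining loss as a function
# of parameter count N and training tokens D. Constants are illustrative,
# loosely based on the Hoffmann et al. (2022) fits; treat as assumptions.

def loss(n_params: float, n_tokens: float,
         E: float = 1.69, A: float = 406.4, B: float = 410.7,
         alpha: float = 0.34, beta: float = 0.28) -> float:
    """An irreducible floor E plus two terms that shrink, with
    diminishing returns, as N and D grow."""
    return E + A / n_params**alpha + B / n_tokens**beta

for n, d in [(1e9, 20e9), (10e9, 200e9), (100e9, 2e12)]:
    print(f"N={n:.0e}, D={d:.0e} -> predicted loss {loss(n, d):.3f}")
```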

Jurgen Appelo:

Great work!

Here's another thing that many people seem to miss: AI makes the world increasingly messy, wicked, and uncertain. The paradox is that AI will be struggling against itself to achieve longer task-completion time windows as the world becomes less and less predictable.

https://substack.jurgenappelo.com/p/the-red-queen-says-no-to-ai-agents

asf32aa:

You don't need autonomous AI to solve very hard problems. The world becoming 'less and less predictable' has little relevance if my hermetically sealed company can take a pretrained model and have it build Windows 12 with a context window able to understand the entire codebase. I don't know if we'll make it there, but how messy the world is is irrelevant, assuming there's data to train on (or inference-level tricks can improve things). Now maybe you don't think there's any more good training data. OK, that's fair, but then the issue is that there's no good training data left, not that the world is becoming less predictable.

Jurgen Appelo:

Thanks, but I don't believe that. There is plenty of evidence of accelerated change and increased unpredictability. The training data of an irrelevant past doesn't help you in a wildly different future. Your AI agents won't know what problems they need to deal with tomorrow.

asf32aa:

What's the evidence of 'increased unpredictability'?

Take every task in the paper: making Windows 12, the next game, diagnosing diseases, writing papers, researching legal precedents. None of these will become 'less predictable' than they are now.

Jurgen Appelo:

Perhaps you can run your own Deep Research query to check the evidence of an increase in volatility, complexity, reflexivity, ambiguity, etc. in this world. My research says yes. Okay with me if you don't believe it.

asf32aa:

I absolutely don't believe it. Do you know of any evidence or do you just have faith that if you tried a Deep Research query, it would give you some?

Steve Newman:

Respectfully, I think this conversation has ceased to be productive. I suggest the two of you take a breather and/or move to another communication channel.

SorenJ:

1. AI labs do a lot of software engineering specifically devoted to making AI

2. AI labs can use their own data about that software engineering to train their AI models

3. Therefore, we can expect AI systems to get much better at AI-software-development *specifically* quite fast

4. Once you have an AI that is good at AI software development, you can use it to build AI for other areas faster

5. ...

6. Intelligence explosion

.... And so on. That's the basic idea in the AI 2027 forecast. Will it end up being true? Who knows, but something like this seems at least plausible.
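One way to see why step 5's ellipsis compounds: if each AI generation automates enough AI R&D to cut the development time of the next generation by a constant factor, the development times form a geometric series, so the total time to arbitrarily capable systems stays bounded. A minimal sketch of that loop (the generation count, base development time, and speedup factor are all illustrative assumptions, not numbers from AI 2027):

```python
# Toy model of the feedback loop in steps 1-4: generation k-1 automates
# part of AI R&D, so generation k is built faster by a constant factor.
# All constants are illustrative assumptions, not forecasts.

def generations(n: int = 8, base_months: float = 24.0,
                speedup: float = 1.5) -> None:
    elapsed = 0.0
    for k in range(n):
        dev_time = base_months / speedup**k  # each generation builds faster
        elapsed += dev_time
        print(f"gen {k}: built in {dev_time:5.1f} months "
              f"(elapsed {elapsed:5.1f})")
    # The series sums to base_months * speedup / (speedup - 1) = 72 months,
    # so capability keeps growing while total wall-clock time stays finite:
    # the "intelligence explosion" intuition in step 6.

generations()
```

Whether the speedup factor actually stays above 1 in the real world is, of course, the whole debate.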

Jim:

Thanks.
