Is there anything fundamental, in terms of LLM scaling vs. accuracy, that keeps AI from being less homogeneous and more like the humans of Lake Wobegon -- simply above average? E.g., after lots of multi-modal training with "real life". (This Q is from a HW engineer.)
Great work!
Here's another thing that many people seem to miss: AI makes the world increasingly messy, wicked, and uncertain. The paradox is that AI will be struggling against itself to achieve longer time windows of task completion as the world becomes less and less predictable.
https://substack.jurgenappelo.com/p/the-red-queen-says-no-to-ai-agents
You don't need autonomous AI to solve very hard problems. The world becoming 'less and less predictable' has little relevance if my hermetically sealed company can take a pretrained model and have it build Windows 12 with a context window able to take in the entire codebase. I don't know if we'll make it there, but how messy the world is doesn't matter as long as there's data to train on (or inference-level tricks can improve things). Now maybe you don't think there's any more good training data. OK, that's fair, but that's not about the world becoming less predictable; it's about there being no good training data left.
Thanks, but I don't believe that. There is plenty of evidence of accelerated change and increased unpredictability. The training data of an irrelevant past doesn't help you in a wildly different future. Your AI agents won't know what problems they need to deal with tomorrow.
What's the evidence of 'increased unpredictability'?
Every task in the paper -- making Windows 12, the next game, diagnosing diseases, writing papers, researching legal precedents -- none of these will become 'less predictable' than they are now.
Perhaps you can try your own Deep Research query to validate evidence of an increase in volatility, complexity, reflexivity, ambiguity, etc. in this world. My research says yes. Okay with me if you don't believe it.
I absolutely don't believe it. Do you know of any evidence or do you just have faith that if you tried a Deep Research query, it would give you some?
Respectfully, I think this conversation has ceased to be productive. I suggest the two of you take a breather and/or move to another communication channel.
1. AI labs do a lot of software engineering specifically devoted to making AI
2. AI labs can use their own data from that AI-focused software engineering to train their AI models
3. Therefore, we can expect AI systems to get much better at AI-software-development *specifically* quite fast
4. Once you have an AI that is good at AI software development, you can use that AI to build AI for other areas faster
5. ...
6. Intelligence explosion
.... And so on. That's the basic idea behind the AI-2027 forecast. Will it end up being true? Who knows, but something like this seems at least plausible.
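To make the feedback loop in steps 3-5 concrete, here's a toy simulation (purely illustrative; the specific growth rule, where each generation multiplies research speed by a factor tied to the previous generation's capability, is an assumption of mine, not something taken from AI-2027):

```python
# Toy model of the AI-2027-style feedback loop (illustrative only).
# Assumed rule: each AI generation multiplies the speed of AI R&D by a
# factor that grows with the capability of the previous generation.

def simulate(generations=6, base_speedup=1.5):
    """Simulate compounding AI R&D progress across generations."""
    research_speed = 1.0   # human-only baseline
    capability = 1.0       # arbitrary capability units
    for gen in range(1, generations + 1):
        # Step 3: better AI-for-AI software development -> faster research.
        research_speed *= base_speedup * capability ** 0.5
        # Step 4: a faster research cycle yields a more capable next generation.
        capability *= research_speed
        print(f"gen {gen}: research speed x{research_speed:,.1f}, "
              f"capability x{capability:,.1f}")

if __name__ == "__main__":
    simulate()
```

Run it and the numbers go from roughly 1.5x to astronomically large within a handful of generations, which is the 'explosion' part of the argument; whether real AI R&D actually compounds like this is exactly the open question.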
Thanks.