I wonder: is this an AI speaking? Rephrasing of the article, vague verbiage that wasn't originally there ("its own identity, momentum, and emergent structure"), em dashes, promotion of an AI company, several multiple-example lists, a summing-up conclusion.
Even if it isn't, I don't see what it adds to the discussion. I have the same feeling I get from AI text -- lots of words but I'm no wiser about anything.
(By the way, you can mitigate this in ChatGPT by selecting the Robot personality, which is concise by default. It's not more correct, but it is more pleasant to work with.)
My experience doing LLM research is that the average research project runs roughly 3-4 months, and even major projects rarely exceed 9 months; well-known examples like GPT-4, o1, etc. take a little longer, but even those are highly decomposable. In fact, I think AI research as currently practiced by industry labs is *unusually easy to automate* relative to fully automating software engineering.
So overall, while I think you could be correct about the SWE trend, I currently think it will be possible by 2028 to automate medium-sized research projects, the kind that a team of 2-3 researchers in an industry lab might be assigned today for 3-6 months. And it's hard to see how the cost would not be competitive, given that AI research talent is so scarce.
Very interesting!
It would be valuable for someone to write at length about the differences between "AI research as currently practiced by industry labs" and the broader practice of software engineering. If you'd be open to it, I'd love to chat / pick your brain about this – I'll DM you.
> There are (disputed) reports that AI tools are already writing 90% of code at one frontier AI developer
This may be completely true and at the same time lead to zero productivity gains. Very often, it simply means that instead of programming in a programming language, people start "programming" in prompts.
In my experience, AI writes more than 90 percent of my code, but that is simply because almost all the typing is automated by my AI IDE (Cursor). It hasn't sped up my work at all, because writing code is a small part of the job.
Fascinating insights into the software development process for larger-scale projects.
At the same time: humans didn't learn to fly the same way birds do, and AI won't learn to run large projects in exactly the same way as humans. It will take advantage of its distinctive strengths to overcome these challenges in a different way.
Thank you, great post. I'd love to hear your thoughts on how much having 1-month horizon engineers could speed up AI progress – seems like the crucial question if you accept the rest.
That is an important question, and one I wish I had more perspective on! I don't have much of a feel for the kind of work that goes on in the big AI labs.