12 Comments
Connor Clark Lindh

This is so interesting and thank you for balancing your post with both the opportunity/appeal and the obvious weaknesses and gaps. Balanced writing about AI is especially hard.

I kept thinking about my past experiences rolling out, or being involved in rollouts of, citizen development, workflow automation, data platforms, DevOps, and so on.

There is always this pull toward putting ever more effort into productivity and improvement. And it looks oh so appealing. The process gets faster and simpler, and you feel like you aren't doing the monkey work and are 'super smart'.

Yet it can also quickly become a trap, and an excuse to avoid entirely rethinking what you are doing. It could turn out that the best thing is to do nothing at all, but someone caught up in the cycle of ever more efficiency doesn't see that. Everything becomes an optimisation problem to solve.

Like the product team that has one-click deployment, a modular Terraform architecture, automated testing, and top-notch code quality, but no users and no revenue...

Ian Pytlarz

Excellent piece. I've been doing so much thinking on this topic for the past month or two, as I had the same thing happen to me. I haven't gone to the self-improvement extremes these folks have, but my actual coding work is going at least 3-4x faster than it was just weeks ago.

I think you correctly captured what has so plagued my mind the past few weeks, and that is the descriptor 'exhausting'. Allowing the AI to do all of the work I would normally call 'podcast work' (relatively easy, task-oriented work) leaves you with nothing but the higher-functioning work. We didn't evolve to make hard prudential decisions all day long; we evolved as hunter-gatherers who could go do tasks and turn their higher functioning off for long periods. It has left me often feeling like I could be doing more, but also exhausted from making smart decisions about what to create next. I've never had to do that at such a high pace, because it was never possible to do the rest of the work so quickly.

For some people, going that route may be the way. But it definitely isn't possible for everyone to work like that, and it probably isn't desirable either. I wonder if we're on the verge of another labor revolution that could take us from a standard 8-hour workday and 5-day workweek to something shorter, or just different, that is better suited to this style of work. But given that it is only truly viable (at least for now) for information-heavy jobs, the benefit would be spread highly unevenly across society. What would that mean? Does that make it impossible?

I hope you keep writing about this subject, because I am extremely interested in seeing it discussed more. Thanks for writing it!

Tony Asdourian

If it turns out that the AI improvement curve hits diminishing returns in the next few years, I have to admit that I will be quite surprised. I know the analogy with chess software is lacking in a number of ways, but in one respect, watching it improve from the 1980s through the 2010s seems very similar to the current surge of AI: the way we as humans react to its progress. With chess computers, at every stage there were many highly intelligent, thoughtful takes from grandmasters and programmers on why the programs would not improve to the next level soon. And then, when two years later they had inevitably moved from international master to strong grandmaster, the rationalizations would reappear in a different form: new, equally thoughtful objections explaining why the latest improvement made sense in retrospect, but why further progress was very unlikely in the near term, since a deep level of understanding inaccessible to machines would be necessary to achieve it. Only after Kasparov lost to Deep Blue did most chess players accept that the programs were stronger and would inevitably become stronger still with each passing year, and this DESPITE the fact that the programs continued to have known weaknesses (indeed, the impossibly strong chess programs of today STILL have known weaknesses, vaguely analogous to hallucinations in AI).

So while this post was of course more of a possible vision of how workers can use AI to improve tools, which improve AI, which improve tools, and so on, you also mention that you retain a healthy skepticism that AI will be able to do all the myriad things needed to actually be a software engineer. But as I look at the progress of the last 3 years, I just can't shake what I saw with chess software, as much as that was a toy problem in comparison. If AI acquires, in the next couple of years, many of the numerous skills it currently lacks, I think the reaction that you and I and everyone else will have will be in some ways the most shocking of all: we'll shrug our shoulders, say "oh, it turns out that wasn't as much mysterious secret sauce as we thought," and move on with our day. That's what happened with chess: endless debates about whether computers could ever execute long-term strategy were simply rendered moot. The computers didn't win in exactly the way humans do (unless it was purely tactical exchange stuff), but it turns out that computer "strategy" is just fine, thank you.

One thing Sam Altman, for all his flaws, is right about: once we get used to a new capability from an AI, we become inured to it within about 2-3 months and it stops impressing us. When the engineers at Amplify are rendered mostly moot, I don't think we'll be shocked at all. And when AI turns out to have enough "engineering project manager" skills to devise and run its own shop, we'll rationalize that none of those individual skills were all that amazing anyway.

Steve Newman

Agreed! I didn't mean to say that AI won't be able to fully master software engineering. I just suspect that there is still some time to go before that happens. If Deep Blue beat Kasparov in 1997, then I think we're in 1990, not 1995 – that's all.

Andrew

I find this fascinating, but I'm not a coder. Are there any applications of these concepts outside of coding?

Martin Black

I'd suggest you copy the entire post into ChatGPT and ask exactly that question. Good luck!

Christiana Stubblefield

This is super insightful! Thank you for sharing these curated thoughts. Also, I've been using Tasklet.ai, and it is absolutely INCREDIBLE. Love the shoutout!

Adi Pradhan

This really resonated and mirrors what I'm trying to do at my startup (Socratify): build the system that builds the product.

One thing that has helped recently is the release of Claude Code Skills, which are simply a markdown convention for dynamically loading prompts into Claude Code.

It means the agent can decide when certain skills are relevant and fully load them.

A lot of scripts that previously had several arguments and options are now exposed as Claude Code skills, and I can simply invoke them with natural language. A small gain, perhaps, but one of the thousands that compound.
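For readers who haven't tried the feature: a skill is a folder holding a SKILL.md file. Claude Code scans the `name` and `description` in the YAML frontmatter to decide when the skill is relevant, and only then loads the full body. Here is a minimal sketch of what wrapping a script might look like; the skill name, script path, and flags are hypothetical illustrations, not Socratify's actual setup:

```markdown
---
name: deploy-preview
description: Build and deploy a preview environment for the current branch. Use when the user asks to preview, stage, or demo in-progress changes.
---

<!-- Hypothetical example skill; adjust the script path and flags to your project. -->

# Deploy preview

1. Run `./scripts/deploy.sh --env preview --branch <current git branch>`.
2. If the build fails, summarize the error log instead of retrying.
3. If it succeeds, report the preview URL that the script prints.
---
```

Saved under something like `.claude/skills/deploy-preview/SKILL.md`, this turns "deploy a preview of this branch" into a working invocation, with the old command-line arguments folded into instructions the agent applies itself.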

Kenny Fraser

Love this: a fascinating deep dive into the potential for personal productivity. My own view is that this is incredibly useful, but it will never be the route to AI-driven transformation. Goods and services are produced by complex networks of collaboration and competition. The world truly changes when AI is used to reorder those webs of interaction and make the whole more productive. Simply making each person more productive doesn't get there.

Andy X Andersen

This is an exciting prospect for the future, but we are very early.

Daily work consists of millions of little details that tie together in non-obvious ways. I use an assistant when I have a clear task. Often, even pausing to formulate the task takes more time than just doing the work.

Then, one little bottleneck that takes time to sort out can have a giant impact on overall productivity.

So what we'll see over the next 10 years is humans gradually making use of more and more augmentation, with the speed of adoption depending on us, not on the tech itself.

Brian B

One of my colleagues did some research on a few of these developers and discovered that "Liu Xiaopai" from Beijing is on a $200 monthly Claude plan but consumes around $50,000 worth of model usage. That seems like an inefficiency that won't exist indefinitely.

Steve Newman

Yes, there are pockets of unsustainability like this, but (a) AI costs for a given level of capability are plunging at something like 10x per year, and (b) the "hyperproductive" teams I've heard about are often paying API rates (which are not susceptible to abuse like this) and are still happy with the value they're getting, even as they often spend hundreds of dollars per day.
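A rough back-of-envelope, assuming the figures above and a sustained 10x/year decline: two years of that decline is a 10 × 10 = 100x price drop, so the ~$50,000/month of model usage cited in the parent comment would price out around $500/month, comfortably below the hundreds of dollars per day that the API-rate teams already consider worthwhile.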
