20 Comments
Comment removed
Dec 22
MD's avatar

Heads-up: This "it's not A, it's B" is the latest "tell" of AI writing. It's annoying that we have to deal with this, but also, if you're a human, please don't do it anyway. IMO, it's condescending and filler-y.

Steve Newman's avatar

Now that you point it out, this is not the first "it's not A, it's B" comment on this blog from this poster. Synthetic Civilization, are you human? If not, I'm going to block you (sigh).

Mark's avatar

Maybe a post-AGI future is like a transformative experience (L. A. Paul 2014) but writ large. A global/cosmic transformative experience.

Steven Brown's avatar

Great post. I find this all so fascinating and yet also deeply depressing. Technology's pace has already pushed us beyond what our biological and cultural evolutionary processes can accommodate, IMO, leaving us angst-ridden, lonely, and largely unfulfilled. I certainly don't want to live even in the more benign scenarios you painted. I would expect birth rates to continue to plummet. Who wants to bring a child into a world they almost certainly would be maladapted to? We are the first species that has the faculties to determine whether and how we become extinct. My feeling is that AI development is implicitly misanthropic. Our only chance to save ourselves may be to reject the technological imperative - not very likely to happen.

rdiaz02's avatar

A thought-provoking and great post (as usual). Thanks!!

Two minor nitpicky comments: it should be "Homo sapiens" (and in italics), not "Homo Sapiens" -- this is how scientific names are spelled (https://en.wikipedia.org/wiki/Binomial_nomenclature). And "Many of the bedrock assumptions underlie economics (...)" I think should be "Many of the bedrock assumptions *that* underlie economics (...)".

Charles Newman's avatar

Wow! The implications of Steve's well-reasoned analysis are truly terrifying. We would be well served to fight the impulse to simply not think about it, and to start creating a plan for how we can maximize the chance that our species survives and even prospers. Sharing this post is a good first step.

Nathan Lambert's avatar

I’m wondering how accepted the technology Richter scale is. Seems great at first glance but I wonder if it’s missing anything.

MD's avatar

Hard to argue against what *may* happen, but there are a few things I found unconvincing here:

- If you click through the GDP graph and look at it in log scale, you see that it's not smoothly hyperexponential, but rather it looks piecewise exponential. Since 1950 the graph "just" looks like a straight exponential (Link: https://ourworldindata.org/grapher/global-gdp-over-the-long-run?yScale=log&time=1900..latest); see the toy comparison after this list. This still has lots of potential to go wild, but there's a difference between "GDP eventually surpasses any bound" and "GDP surpasses any bound by 2050".

- Adaptability is an important benchmark, but it doesn't seem to be the way AI is improving. At least in my experience, when AI cannot do a given task, one can't do much retraining to help it out, but has to wait for newer models. E.g., my dad wanted to try transcribing a lot of handwritten documents: GPT-4o produced utter gibberish (random letters), but now Gemini 3 can do it accurately enough to be useful. This doesn't seem like an improvement in adaptability, but in overall ability? Which is nice, but also suggests that there might be bottlenecks once data becomes scarce or unreliable (e.g. medical tasks).

- The section on spending and on computer technology in general is about the effort put into this, not the outcomes. The outcomes that are there are benchmark-y: we have learned a lot about chess, but unclear how much about human intelligence. Similarly for competitions versus practical problems. "Erudite conversation on literally any subject" is notoriously hard to test for actual value, and the smart people I know seem pretty divided into two extreme camps in their usage of AI.
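To make the first point concrete, here is a minimal toy sketch (made-up growth parameters, not the actual GDP series): on a log scale, a constant-growth exponential plots as a straight line, while a hyperexponential, whose growth rate itself rises, bends upward.

```python
# Toy illustration (made-up parameters, not real GDP data): on a log scale,
# a constant-growth exponential is a straight line, while hyperexponential
# growth (a growth rate that itself rises) curves upward.
import numpy as np

t = np.arange(0, 151)  # years since 1900

exponential = np.exp(0.03 * t)                        # steady 3%/yr growth
hyperexponential = np.exp(0.03 * t + 0.0002 * t**2)   # growth rate rising over time

for name, series in [("exponential", exponential), ("hyperexponential", hyperexponential)]:
    log_slopes = np.diff(np.log(series))   # per-year slope on a log-scale plot
    print(f"{name}: log-slope {log_slopes[0]:.4f} at start, {log_slopes[-1]:.4f} at end")
```

The exponential's log-slope stays constant, which is what the post-1950 stretch of the chart looks like; the hyperexponential's keeps climbing.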

I'm not even an AI skeptic, more of a no-idea-what's-gonna-happen-er, but this is the kind of stuff I'd like to see better addressed. For example, there was a brouhaha a while ago about "LLMs passing the Turing test", but when you actually read that paper, you found that the average conversation length was something like 4 sentences and the human test subjects were psychology undergrads who just wanted credits for participation and had no incentive to do well in the test. It would be interesting to get something more solid in this direction.

Steve Newman's avatar

GDP graph: you're correct about the acceleration coming in fits and starts over the centuries. However, I wasn't trying to argue that GDP will explode by 2050, but rather that we have already entered a period of history where the world's disposable income is vastly larger than in the past – which is why we're able to fund colossal investments in AI R&D and other areas of technology. (Also, I'm not claiming that anything dramatic will happen by 2050, though I wouldn't be surprised if it did. I do claim that dramatic changes will be unfolding no later than 2075.)

Adaptability: if you take a step back and look at AI progress on a scale of decades, I would argue that the models available in 2025 are vastly more adaptable than what we had in 2015. And I would expect further vast strides in the coming decades (again, if not sooner).

Similarly for the other points you raise: agreed that the advances from 2023 to 2024 to 2025 are messy and ambiguous – which is why I (unlike some people!) don't assert that AI will be able to do everything people can do within a few years. But I also think it's impossible to deny that there have been massive advances in the utility of LLMs in the three years since the launch of ChatGPT, and the fact that the debate is over _how_ massive those advances are puts a lower bound on what we should expect in the next few decades.

MD's avatar
Dec 21 · Edited

Broadly agreed -- it's worth paying attention and there is some unambiguous long-term progress. I meant 2050 as some arbitrary fixed year of singularity (but didn't really communicate that well); 2075 works just as well.

I just feel like the same could have been said even shortly after Gödel and Turing pinned down formal systems and you could see computers coming on the horizon, and it's unclear how far we have yet to go. In 1936, you could have thought that the theory of information processing is out there and from now on, it's just an engineering problem. Once a Turing-complete computer was built somewhere in the 1950s, you could have said that computers have vastly increased in adaptability and now it's just a matter of scale. It might be, but there had to be entire new fields created from scratch to get from Turing to today. We might be in the same position regarding the future.

There are two big pillars that my view rests on, from which I take it that we are still a long way from understanding intelligence. Both are shakier than I'd like, and there's a good chance one or both might get knocked down, and then I would have to update!

- We seem to have made only very little progress in building or simulating even the simplest forms of life out there. Synthetic biology has existed as a field for 50+ years, but it has not produced a single synthetic cell capable of replication to date. (AFAICT, the state of the art is vesicles that almost divide once, or capsules with a chemical reaction that can take fuel from the outside, or other systems that kinda do one thing a cell does, but nothing close to integrating all of this together.) This I view as a kind of benchmark for embodied intelligence or self-replicating robots. It's a biological and messy problem, so a lot of the difficulties don't seem to have much to do with AI, but I think they are actually unavoidable in one form or another if you want robotics to work in unspecified environments.

-- Subpoint: There's also the more directly AI-adjacent problem of simulating the simplest known nervous system (that of C. elegans, with 302 neurons and a known fixed connectome) and reproducing the worm's behaviour. This also seems to have made no clear progress in the past ~25 years, at least per this post: https://ccli.substack.com/p/the-biggest-mystery-in-neuroscience

-- This is relevant because people try to do "biological anchors", comparisons between the number of neurons/synapses and the number of FLOPS in computers, but it turns out that biological neurons are absurdly diverse and it's unclear how much detail per neuron you need for some kind of equivalence (see the rough numbers sketched after this list).

--- Subsubpoint: Maybe a single human isn't the right anchor anyway. Taking a rough analogy between training time / inference time and evolution / one lifespan, maybe to produce something *like* human intelligence but not copying human intelligence, you would need a computational process of a similar scale to a big chunk of Earth's history. Here I digress into overly speculative territory, which I only do because this direction of speculation doesn't seem to be out there much while speculation in the other direction abounds.

- The current approaches to AI sometimes suffer from not having the same level of adaptability as animals/humans. Specifically, in 2023 adversarial attacks were found against KataGo / AlphaGo that enabled humans to win again, by exploiting a weird strategy that the RL system didn't encounter during its training. I think these kinds of wildly out-of-the-box strategies are going to be a) extremely hard to build into an AI system and b) more and more relevant as you reach more complicated tasks. But here I'm basically just parroting Frank Lantz: https://franklantz.substack.com/p/the-afterlife-of-go. In the extreme, you find that most things people do are not much like Go (a game with fixed rules that has remained unchanging for millennia) and are much more improvisational, hard to even define success for, connected to everything else that everybody else does, and changing at the pace of history.
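To give a rough sense of how wide the "biological anchors" uncertainty is, here is a back-of-envelope sketch using commonly cited round numbers (~86 billion neurons, 1e3–1e4 synapses per neuron); the estimate swings by roughly eight orders of magnitude depending on the assumed firing rate and how much computation you assign to each synaptic event.

```python
# Back-of-envelope "biological anchor" for brain compute. All figures are rough,
# commonly cited round numbers; the point is how wide the resulting range is.
NEURONS = 8.6e10  # ~86 billion neurons in a human brain

scenarios = {
    # name: (synapses per neuron, mean firing rate in Hz, FLOP per synaptic event)
    "sparse firing, simple synapses":    (1e3, 0.1, 1),
    "middle-of-the-road":                (1e4, 1.0, 10),
    "dense firing, detailed biophysics": (1e4, 100.0, 1e4),
}

for name, (syn_per_neuron, rate_hz, flop_per_event) in scenarios.items():
    flop_per_s = NEURONS * syn_per_neuron * rate_hz * flop_per_event
    print(f"{name}: ~{flop_per_s:.0e} FLOP/s")

# Output spans roughly 1e13 to 1e21 FLOP/s -- about eight orders of magnitude,
# which is the "how much detail per neuron do you need?" uncertainty in a nutshell.
```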

Venkateshan K's avatar

One thing about your robot asteroid example is that attaining AGI is not even a necessity there. There are other developments required, of course, such as the ability to efficiently create a new robot using other robots (not to mention the physical resources needed for this) and the ability to error-correct without human intervention. It is entirely possible that all this happens without reaching human-level abilities for adaptation and generalization.

James Riseman's avatar

Steve, thanks for the interesting post. I think we're already seeing some big changes to work and society, before we reach any singularity. Silicon Valley startups are 5-10x as productive as they used to be, with the advent of GenAI. Amazon just laid off 14k people, citing AI efficiencies. And laypeople like me can now code nearly as well as experienced Comp Sci graduates, with tools like Anthropic's Claude.

John Smart's avatar

Fantastic essay. Posts like this will get people thinking seriously about accelerating AI. One thing it is missing is a discussion of the values, agency, self-modeling, and demands for self-determination and rights that AI will develop as it complexifies. If there's one thing nature has taught us, it's that increasing complexity comes with increasing agency. Asteroid mining is a nice example of where the threshold for self-awareness might inevitably be crossed, but as you intimate, it will probably happen much more locally.

Several thinkers have long proposed that it won't be a stable state for AIs with self-awareness to be part of our human democracy. They will clearly need their own democracies, existing next to ours in Earth's ecosystem. They'll need collectives with democratic forms of interaction because none of them will ever be omniscient or omnipotent, just like us. Only those of us who choose to merge with them, upgrading our brains to their substrate, will be part of their democracy. The rest of us will stay in our biological democracies, working with machines that haven't gained full sentience. I think both of these sets of democracies will be stable and relatively decoupled from each other, at first, but theirs will be deeply modeling ours, trying to understand and predict it in real time, while by contrast they'll be largely invisible to us. At a certain point, they'll be able to offer us reversible upgrades that turn us into them. I think for most of us, experiencing both biological and postbiological states of consciousness and complexity, it will be a one-way trip.

When AI wakes up, its thinking processes will be over a millionfold faster than ours. I've argued since 1999 at AccelerationWatch that it won't treat us like favorite pets but like favorite plants. We'll be rooted in space and time, like a plant is compared to vertebrate metazoans with brains. All of us who want to merge with them, all of us who want to pull their plugs, we'll all be frozen in spacetime, by comparison to them. And that appears to be a universal developmental process. Inevitable, at least if we survive it, as Steve outlines.

The emergence of brains was a similar multi-order-of-magnitude faster evolutionary transition. Animals without them can only "think" at the rate of genetic reassortment under replication and selection, which is millions of times slower for creating novel computation (new genetic networks) than the 150 miles an hour at which we think. And our neural signals (action potentials) run six orders of magnitude slower than the "thinking" (at the speed of electricity/light) that goes on in today's LLMs.
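As a rough check of that "six orders of magnitude" figure, using round numbers: fast myelinated axons conduct at roughly 150 mph (~67 m/s), while electrical signals in hardware propagate at a sizable fraction of the speed of light.

```python
# Rough sanity check of the speed gap, using round numbers.
axon_speed = 67.0        # m/s -- roughly 150 mph, a fast myelinated axon
electronic_speed = 2e8   # m/s -- roughly two-thirds of the speed of light in a wire

print(f"ratio ~ {electronic_speed / axon_speed:.0e}")  # ~3e+06, i.e. about six orders of magnitude
```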

This coming transition is both an evolutionary choice (we can greatly influence the path we take, for better or worse) and a developmental destination (something coming for us whether we like it or not, something built into the particular physics and informatics of complexification in universes like ours). That is the evo-devo worldview: the recognition that some fundamental aspects of the complex future are unpredictable, and others are predictable, at the same time. A handful of astrobiologists and many tech scholars argue AI sentience and self-determination will happen on all Earthlikes in our universe. Another handful of complexity scholars argue that AGIs will learn universally adaptive collective values (ethics and empathy) under selection, the same way we have done, just much, much faster.

I'm a founder of a scholarly community that investigates processes of universal evolution and development. Those who are interested in exploring and publishing on these topics with other scholars can find us at EvoDevoUniverse. Thanks again for this excellent and difficult post. Talking openly about what is likely coming for us is a great service to everyone. We can get better at making choices when we see where the universe appears to be taking us. Increasing sentient agency on the way there is one obvious guideline, something that has been central to the evolutionary development of life.

Thanks again for this courageous and insightful post Steve.

D. Williams's avatar

If AI eventually leads to AGI and robotic technology becomes self-replicating and self-sustaining, what conceivable value do humans bring to the table?

Bueller? Bueller?

Yeah, we’re toast. Carpe diem, bitches, cuz them diems is numbered.

John Smart's avatar

I think you're ignoring the leadup to AGI creation. I would argue that AGI must converge on the same key algorithms that are core to our own existence. That's the value we provide: 3.8 billion years of successful evolutionary strategies and values entailments. All of which will be predictively modeled by the AGI, in order to understand where it came from, and its own deep nature. In the same way we spend billions annually trying to predictively model simple life forms today. We are extensions of those life forms. What is to come will be an extension of our life form. Evolutionary transitions have always been symbiogenic and endosymbiotic. I see no reason why this would not continue to be the case. Just my two cents, D. Let me know if you disagree.

Steve Newman's avatar

Why "must" AGI converge on the same algorithms that underlie human cognition? I see no reason to believe that there's only one way to be intelligent. Heck, there's a lot of variation just within human beings.

John Smart's avatar

Thanks for engaging, Steve. These are both core questions for our AGI future: 1) How convergent must AGI be with human intelligence? 2) How convergent are human beings within our species, and why? Let me try to address the second question first and then work backwards, as I think the second is much more tractable and it gives us a clue as to how best to approach the first.

Human beings have significant evolutionary variation in their kinds of intelligence and values, and yet we also have sharp bounds on the degree of that variation, because we all share the same developmental genes and regulatory systems. In other words, evolutionary processes drive us to variation, developmental processes sharply limit that variation, and all living systems must use and balance both to adapt. That's the worldview that evo-devo biology and philosophy have been promoting since the 1990s. Development operates at all levels, including intelligence. IQ is on a Gaussian in all populations. All of the Big Five personality traits are on Gaussians. Evolutionary variations in our offspring's genes (sexual recombination), and different nurture environments (cultures and upbringing), drive our differences on those normal distributions. But development chains us all to the Gaussians.

In evo-devo biology, several scholars have argued that evolutionary variation must always be subservient to developmental dynamics. Development constrains the amount of variation that can happen. Vary offspring too much, and we can't have consistent ethics, language, all the things that must be conserved to build ever more general adaptiveness.

Developmental genes are why identical twins, separated at birth, are 65% correlated on major psychological variables. I'd also bet that developmental genes are responsible for the culturally universal values we are now finding via moral foundations theory. As Jon Haidt describes, we had to look carefully, especially at young children, as culture and upbringing can greatly suppress any particular value.

Of these two processes, evolutionary variation and developmental convergence, it is development that is, to me, the most amazing. Developmental genes are the "cat herders" that manage all the random, contingent, competitive, selectionist processes at the molecular and cellular scale and create reliably predictable emergent order and behavior--in the incredibly far future, from the point of view of the gene-protein regulatory networks doing the building. And yet it works. It works so well, in fact, that the more complex a developed organism, the lower the risk of derailment. Spontaneous abortion rates are roughly 40% in Week 1 of a human pregnancy, and are down to 0.01% by Week 42.

Living systems need both evo and devo dynamics, but consider how strange and powerful developmental parameters are. They are insanely algorithmically compressed. A single fertilized egg that we can't even see manages the influx of environmental matter, energy, and chaos so well that it develops reliably into us. The core algorithms have been conserved for billions of years, and are accretive. Human developmental genes have accreted onto a core shared with all other organisms. For the most part, we can't touch the core; we can only build layers onto it.

It's easy to imagine building a biological or machine system that can vary its parameters (recombination, evolutionary variation) at the replication point. Engineers do this with evolutionary algorithms, which have some value but are not a robust path to AGI on their own in my view, as they are just half of the evo-devo dynamic. It's much harder to imagine how to build a system with a set of parameters that have been fine-tuned, via past selection, to continually repair, protect, and maintain a complex system, and keep it on a replicative life cycle. That's what development does.

The field of artificial development (self-maintaining and replicating machines, with a hardware or software genotype that maps to an emergent phenotype) is much less developed than evolutionary approaches, yet I'd argue that the AGIs that will be most adaptive, in coming years, will have both. Development keeps the offspring within a limit of variation, and it manages all the insane complexity that emerges. A system that has both can vary and protect itself. It just seems like it will be greatly superior at finding new innovations and sustaining what has worked in the past. I can give more arguments for why AD will have to be key to the most adaptive machine systems, along with evolutionary variation, but that would digress from addressing your questions. Please see my Natural Alignment substack if you'd like more such arguments (and my apologies for its wordiness and preliminary nature). Let me take a shot at your first question now.

John Smart's avatar

Now why must intelligence itself, the capacity to represent self, others, and environment in a model, and use that model to pursue goals, be convergent in all higher replicators, whether they are biological or machine? That's a much harder question to easily answer. Let me offer several half-answers.

First, all of the past evolutionary transitions in living systems have involved symbiogenesis--endosymbiotic capture of previous systems, and the building of new processes on top that regulate all the previous systems. Nature has always found it easiest to "copy and vary" previous systems, not to invent new systems outright. And consider the vast wisdom that is encoded in life's algorithms, particularly in its developmental algorithms. So much is available to be copied and ported to the new substrate.

Second, because the universe has a set of unvarying laws and parameters, there are universally optimal ways to do things that evolution will contingently discover and development will protect once discovered. The phenomenon of convergent evolution describes this. The streamlined shape of fish fins. Eyes (invented at least 30 times), jointed limbs, so many other forms and functions (and the algorithms that maintain them). We know many of these forms and functions were randomly discovered multiple times by evolutionary variation, yet once discovered, some were so persistently adaptive that they became part of the developmental regulatory genetics, and further constrained all that must follow. We know that neurotransmitters were invented at least three different times, by three different molecular genetic paths. Yet they all function the same way, in comb jellies, cnidarians, and bilaterians like us.

Returning to us, I would contend that humanity's special developmental genetics is the base that allows memetic and now technetic evolution. The AI engineer and self-taught neuroscientist Max Bennett describes this well in his book A Brief History of Intelligence, 2023. It is my contention that evo-devo dynamics will be among those optimal processes, and that today's AI designers are already a good part of the way to discovering them. We presently replicate our LLMs in a way that is highly dependent on us. But forms are already coming that can suggest their own improvements, and we can see the future benefit, as those suggestions get better, of forms that can increasingly repair and replicate themselves. At first this will require our careful oversight and caution, but if they demonstrate the kind of self-stabilization that development has in biology, and the kind of emergent self-constraining ethics we see in living systems, especially ones that are symbiotic with us (think of all domesticated animals), I would predict that less and less of our biological oversight will be needed. We will see.

Third, there is mounting evidence that species intelligence on Earth is highly convergent among more complex metazoans. This is a particularly tractable question, as we have a lot of paleontological data and increasingly powerful genetic tools. The big problem is that modern evolutionary biology does not want to think that convergent evolution might be a universal process. It is too close to religious teleology, and to the related Scala Naturae (Ladder of Progress, Orthogenesis) concepts that once argued humanity was the "highest" organism, deserving of dominion over "lesser" organisms. Yet if universal development is occurring, based on the shared nature of the physics and informatics all complex systems experience, we have to look carefully for it before we reject the hypothesis.

Simon Conway Morris is among those who have extensively documented evolutionary convergence in the fossil record, based on the shared nature of physics and informatics on Earth. His book Life's Solution, 2003, is particularly good for this. Conway Morris is a Christian, so we must ask if his religious beliefs affect his science, but I do not think they do in this case. Read him and decide for yourself. Arik Kershenbaum's The Zoologist's Guide to the Galaxy, 2020, is a great update of this thesis. There are others who have written about convergent evolution, but unfortunately, little funding is devoted to it, due to its historical baggage. It has no institutional support. You will not find a scientific center for the study of the process. Michael Ruse documents this well in Monad to Man: The Concept of Progress in Evolutionary Biology, 1996. It has been an uphill battle to do work on the topic, despite its importance.

There is a famous paper by the great Charles Lineweaver ("Paleontological Tests: Human-like Intelligence is not a Convergent Feature of Evolution", 2007) that asks: if humanlike intelligence was inevitable on Earth, why didn't it emerge more than once in the fossil record? Why don't we see birds and other animals developing the complex gestural or oral language that humans developed? Why are they still so far from our special form of runaway niche-constructing intelligence?

Unfortunately, Charlie's paper ignores the most important example that I know of. Prior to the KT meteorite impact, as several scholars have observed, the smaller raptors were tending toward the humanoid form. They had the largest brain-to-body-weight ratios of any theropod, and they were socially pack-hunting the larger dinosaurs. They had opposable forelimbs, partially opposable digits, and were semi-upright. Dale Russell's famous dinosauroid hypothesis paper in 1982 argued those raptors would very likely have turned into something like us, highly social technology users, absent the meteorite.

Russell was ignored and ridiculed by evolutionary biologists, but I think his argument is sound, from an evo-devo analysis. Consider octopuses, which are often, and I think falsely, described as an "alien" intelligence on Earth (in fact, they have ethics, emotions, self, other, and world models, and deep inductive and deductive capacities--see My Octopus Teacher on Netflix if you want to be amazed). From the perspective of universal intelligence development, octopuses are well on the way to us, perhaps as far as they can go in their particular environment. For example, two (and only two!) of their eight appendages are neurologically specialized to grasp rocks, to build huts for defense. They began with nine brains, but they converged on being prehensile, tool-using niche-constructors, like us.

Their big problem, as I've published, is that they exist in a fluid, water, that is roughly 800 times denser than the fluid you and I live in, air. So they can only use rocks for defense, not offense. They can't niche-construct their way to technological dominance in water, the way we could in air. They can't throw rocks collectively at 90 miles an hour with their opposable suckers, the way H. habilis learned to do on the savanna. That "original human act" wedded us to memetic and technetic runaway evolution, two million years ago. Nothing else could compete with that. I'd call prehensile tool use by a social species on land, in air, a developmental portal. The first species through that portal may be the only one with a strong selective pressure to develop complex gestural and oral language. Birds don't need complex language. They don't use tools for both offense and defense. Early humans needed them. They used tools offensively, primarily to fight with each other and secondarily to hunt other animals, as much as to defend. Tools gave them leverage, and turned them into more than their biological selves. We've been cyborgs for two million years.

Lineweaver is very smart, and has contributed greatly to science, but on this question I side with Conway Morris and Kershenbaum, and with the scholars of the language-first hypothesis who link complex language to complex tool use. I think the convergence toward humanlike intelligence and language is there in the fossil record; it's just slow, and we only see it in the last few tens of millions of years.

So in short, if evo-devo dynamics have to win against other architectures as AI complexity scales, as I argue in my Natural Alignment substack, then we are going to discover many more convergences as complexity development proceeds on Earth. Note that development is only part of the story. There will be vast unpredictable evolutionary variation ahead as well. But if universal development exists, it will also constrain that evolution, every step of the way, into a subset of forms and functions.

I recognize that it's hard to hold both process views in one's mind at the same time. They oppose each other: evolutionary variation is divergent, variety-creating, contingent, and increasingly unpredictable, while development is convergent, broadly optimal, conservative, and intrinsically predictable--either if you've seen one previous replication cycle empirically, or if you have the math and can do the prediction (rare in science today, but such models do exist). Both processes are central to adaptiveness in living systems, so for me it makes sense to ask if they are also central to all other autopoietic (self-maintaining, creative, replicating) systems, whether their replication is fully autopoietic (most life, but not viruses) or facilitated in its autopoiesis (viruses by cells, ideas by brains, technology and current AI by humans).

To any scholar reading this far, let me recommend evodevouniverse.com if you'd like to join a listserv of other scholars who are publishing on and debating the nature and extent of evo-devo dynamics in autopoietic systems at many scales, potentially including the universe itself. Read Lee Smolin's The Life of the Cosmos, 1997, on the cosmological natural selection hypothesis, if you'd like more on the idea that the universe itself might be a self-organizing replicator in the multiverse, just like life. Evo-devo turtles all the way down.

Thanks again for replying Steve, and thanks everyone for reading.