I sometimes wonder how much human intelligence will remain useful because it comes coupled to a human body. (Arguably human bodies are more impressive than our brains? They're *very* adaptable, and you can power them with burritos.)
If you assume that AI is better at all "pure intelligence" tasks than humans, but that AI hasn't invented robots that are as good as human bodies, then what follows? Does human intelligence remain vital because it has a high-bandwidth connection to human muscles?
That's a great question which I'm not sure I've seen seriously addressed. Certainly, for as long as we can't develop robots (including control software) that are as good as human bodies, lots of jobs will be preserved from automation. Are you asking whether an omnipresent AI coach would open the door for less-skilled people to do those jobs? Again, I don't think I've seen this explored...
I think something like this might be the claim made by proponents of embodied or enactive cognition, which emphasizes the role of the body and the environment in what we call "thinking". It's basically a different intellectual paradigm from the more traditional cognitivist view of the mind as an information processor. I think both contribute to our understanding of the mind, but in some sense it might be an empirical question whether we need a body to do lots of tasks. It seems like the more tasks are done purely on computers, the more the role of the body is diminished.
I think a body is necessary for AI, in the sense of being able to do work not in batches of provided data, but actively exploring and adjusting in real time, with as many attempts as necessary, while learning from the experience.
At the current stage this can be emulated by giving AI access to tools and simulators, where it can do experiments, observe results, and refine its actions.
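To make that concrete, here is a minimal sketch of the kind of explore-observe-refine loop I have in mind. Everything here (the simulator, the feedback signal, the search strategy) is a made-up stand-in for illustration, not any particular framework's API:

```python
import random

class ToySimulator:
    """Hypothetical environment: a hidden target value, with distance feedback."""
    def __init__(self):
        self.target = random.uniform(0, 100)

    def run_experiment(self, action: float) -> float:
        # Observation: negative distance from the hidden target (higher is better).
        return -abs(action - self.target)

def explore(sim: ToySimulator, attempts: int = 200) -> float:
    """Actively experiment, keep what works, and narrow the search over time."""
    best_action = 50.0
    best_score = sim.run_experiment(best_action)
    step = 25.0
    for _ in range(attempts):
        candidate = best_action + random.uniform(-step, step)
        score = sim.run_experiment(candidate)
        if score > best_score:
            # The experiment improved things: adopt the new action.
            best_action, best_score = candidate, score
        else:
            # It didn't: narrow the search and try again.
            step = max(step * 0.95, 0.01)
    return best_action

sim = ToySimulator()
print(f"refined: {explore(sim):.2f}, target: {sim.target:.2f}")
```

The point is the shape of the loop: act, observe, adjust, repeat, with as many attempts as needed. Whether the environment is a physical body or a simulator is, for this purpose, an implementation detail.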
I've wondered about this too. But my guess has been that it'll stay around 0 for longer than people expect, then jump to 100 pretty quickly. As in, Tesla's Optimus robot is all hype and doesn't work, Optimus 2 pretty much the same, Optimus 3 still disappointing... then one day Optimus N works really well and we're like, so much for that comparative advantage.
I love the perspective of this article. Human brains are not optimal for all of the things that we have invented for human society.
It seems that there are many domains in which the human brain is not the natural limit of ability. It can be surpassed.
Several questions come to mind:
1. What are the domains where the human cannot be beaten?
2. Are those domains even important in our society / economy?
3. If the answer to #1 is "very few" and to #2 is "not really", then why are we trying to advance this technology?
Great article Steve. Very thought provoking. I found myself considering that AI's limits may always be tied to our human use-cases. After all, if an AI provides a solution for something unnecessary or unusable to us, it simply won't have value. Perhaps a key limitation, then, is that we will always determine the value of AI's capabilities, and will build within that constraint and value proposition.
It seems to me that AI is good at the following:
1) Problems requiring many calculations and evaluations of possible rule-defined scenarios. When the rules get fuzzy to nonexistent, then AI has more difficulty.
2) Problems requiring the aggregation of mass quantities of information to produce a result based on specific conditions or rules. Again, same issue with fuzzy-to-nonexistent rules.
3) Creative works based on a library of pre-existing creative works. To give a very specific example of where AI breaks down: if I asked an AI to make a Beatlesque song, the idea of putting a long "A Day in the Life"-type chord at the end would not have been considered had Lennon and McCartney not done it first. It could not have developed "post-Beatles"-inspired works like what Jeff Lynne did with ELO without the originals to begin with.
To sum up, AI is weak at spontaneous, randomish creativity, as well as developing truly novel concepts and ideas. I could see an AI getting better at approximating either, but never truly getting there.
In Japanese martial arts, there are traditionally three levels of mastery, called Shuhari:
https://en.wikipedia.org/wiki/Shuhari
1) Obey the rules
2) Break the rules
3) Do your own thing (sometimes described as "Make the rules").
Could AI ever get to that third step of mastery? I'm not sure if it could even master the second step completely.
Great points, I totally agree. Even "obey the rules" is far from being accomplished, which is why generative AI is of modest impact.
A simple thing to remember is that above about IQ 140 (even setting aside all the arguing people do about whether IQ is meaningful), assigning people scores is essentially arbitrary, subjective, and open to interpretation. You can give people shapes to rotate and analogies to complete until you're blue in the face, but what does it really mean in the end? We don't have any idea what makes people people or geniuses geniuses; we just form these ideas about them culturally and socially in the moment, as events unfold.
On the other hand, that also means that LLMs are often "good enough", i.e., as good as we are, already, for anything we call "work" that is performed purely with text and data. I don't think there is a second wave of AI that will come after this where we go "oh, now it's really doing it." It's just here and being applied, as quickly as we can figure out how to fit it into the existing economy.
Great article! I have an unfortunate prediction for anyone getting older (meaning: everyone). Yes, AI will evolve to beat us at most practical thinking tasks, and at tasks that combine thinking with physics (think driving, running, etc.). But there's no high-fidelity means to train AI on the sense of touch. Our bodies are highly evolved to feel touch sensations everywhere, and to respond to pain, pleasure, heat, cold, pressure, etc. There's no analog in the AI world for touch and feel: no massive database of nerve-ending data, no AI hands with big bundles of nerves. Touch is our more durable advantage; AI lacks it, and will continue to lack it. Therefore (back to aging): it's gonna be a tough adjustment when AI-enabled robo-nurses care for us in old age.
You may be right, but there may also be workarounds. This one seems promising: https://m.youtube.com/watch?v=IXKovDtgD_8&pp=ygUSY2FybWVsbG8gc2ZlcnJhenph
Really cool idea! I feel better about my old age already…
As a basically irrelevant sometimes-blogger, it is satisfying when a much bigger writer comes up with similar ideas.
I wrote something up last month about how transformers are turning Moravec's easy problems that computers are bad at - chess, image recognition, driving a car - into hard problems that they're really good at: multiplying lots of numbers together.
https://jpod.substack.com/p/progress-towards-artificial-general
Every uniquely human intellectual ability is going to be solved by big dumb neural networks
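For what it's worth, the "multiplying lots of numbers" point can be made quite literal: a single self-attention step, the core of a transformer, is a handful of matrix products plus a softmax. A toy sketch with made-up sizes:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))   # 4 tokens, model width 8 (toy numbers)
W_q = rng.normal(size=(8, 8))
W_k = rng.normal(size=(8, 8))
W_v = rng.normal(size=(8, 8))

# "Hard" problems reduced to number-multiplying: five matmuls and a softmax.
Q, K, V = tokens @ W_q, tokens @ W_k, tokens @ W_v
attention = softmax(Q @ K.T / np.sqrt(8)) @ V
print(attention.shape)  # (4, 8): each token now mixes information from the others
```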
Hah, I spend most of my time reading people with much bigger audiences than I have so I think of myself as a "basically irrelevant sometimes-blogger" reading "much bigger writers". :-)
You are maybe only partially right, because you assume a few things in areas where we are still discovering new frontiers. There's the well-known problem of understanding and explaining consciousness. Physics and metaphysics fail there. Humans are not just machines made of atoms, because that view does not explain consciousness. Neurons do not automatically create consciousness. Something is missing. We cannot properly explain out-of-body experiences, kids knowing about historical events without having learned about them, near-death experiences, consciousness as such, and, well, love and emotions too. This makes us human, despite the fact that we cannot explain how the "body machine" produces it. What if we cannot explain it because the body doesn't produce it, because humans are not just machines made of atoms? There is quite a high probability that we are missing a lot. And I mean really a lot, because our tools, and therefore our theories, are still very limited.
This creates a problem for AI and our assumptions around it. We may disconnect ourselves even more from our (collective?) consciousness, with AI simulating human behaviors without understanding them, due to its lack of consciousness.
It's not about being wrong or right; the point I want to make is the following: intelligence as we see it, human intelligence, is most likely NOT the ne plus ultra in this (multi)universe.
We think compute power and bigger neural networks will solve our "problems" and achieve superintelligence, while hoping (with literally zero evidence) that a consciousness may arise which has empathy and understands the universe. What if it cannot, because it can never tap into consciousness, consciousness not being a product of intelligence in the first place?
We are playing a very dangerous game here, and it looks like we have lost our ability to ask WHAT IF often enough to stay humble. It's a bit like traditional versus modern medicine: first we dismiss as bullshit everything humans discovered over tens of thousands of years, only to realize later that treating the complex system of the human body like a complicated television set, where we fix single parts to make it work again, often makes things worse.
While a human cell contains the full machinery for creating life (which we haven't fully understood either, by the way), the transistor is a simple switch. The human cell is a complex, self-sustaining system, and we have around 37 trillion of them.
Can we produce intelligence in a different way than life does? Yes, we have proved that. Do we thereby solve essential mysteries about humans, like consciousness, love, empathy, feelings, and out-of-body experiences? I somewhat struggle there, and I'm more afraid that we will lose our ability to stay connected with that to which we all belong.
I have a single data point to share here: in Humanity's Last Exam, the physics question (example 8/8) is definitely not "extremely difficult even for specialized human experts". It's the setup for an elliptical pendulum. This exact system was one of the homework problems in my theoretical mechanics course, second year of undergrad. It takes some calculus effort if you don't know the solution ahead of time, but the solution is duplicated in probably hundreds of textbooks all over the training data for these models. Given such a textbook with the general solution, the specific question asked here could be solved by a high schooler capable of manipulating expressions.
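For the curious, here's the setup from memory; I'm assuming the usual textbook version (a pivot mass M sliding freely on a horizontal line, with a bob m hanging from it on a rod of length l; in the center-of-mass frame the bob traces an ellipse, hence the name), which may differ in detail from the HLE variant:

```latex
% Elliptical pendulum. Generalized coordinates: x (pivot position), theta (angle).
% Bob position: x_m = x + l sin(theta),  y_m = -l cos(theta).
\begin{align*}
  L = T - V
    = \tfrac{1}{2}(M+m)\dot{x}^{2}
    + m\,l\,\dot{x}\,\dot{\theta}\cos\theta
    + \tfrac{1}{2}\,m\,l^{2}\dot{\theta}^{2}
    + m\,g\,l\cos\theta
\end{align*}
```

From there it's the Euler-Lagrange equations and some algebra; no conceptual leap required, which is the point.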
I cannot tell whether this is the case for the problems in other disciplines, but I wouldn't be surprised if it were.
I think these benchmarks are, if anything, exposing how structured school exercises are, compared to what "adult" scientists do in their actual jobs. They're also exposing how huge the internet is and how many things have already been done, since they provide a way to search and interpolate this database.
But all of this also shows how dangerous it is to talk of the accomplishments of Einstein when you don't know what he actually *did*.
Thanks for the perspective on the HLE physics question. This jibes with a recent (and recommended!) essay on the FrontierMath benchmark: https://lemmata.substack.com/p/what-i-wish-i-knew-about-frontiermath.
Can you say more about Einstein? I can guess at what you might be saying but I'm not certain.
I'll admit I was a bit smug writing that line, and I didn't have something specific in mind. Where I was going generally is that often in discussions around AI (especially on Substack), one sees people talking about intelligence in the abstract, without any grounding in actual subject matter. (For example, these days with o3, one often sees the System 1 / System 2 distinction thrown around, like the discussion up to this comment (https://news.ycombinator.com/item?id=42485938#42492865).) Aside from coding, most of the benchmarks are on topics the average debater is not involved in, and it's hard to judge what is or isn't impressive. Something like the piano question you linked to elsewhere (https://x.com/deanwball/status/1871424965230379465): without being a historian of music, it's hard to make anything of that, but people readily try to.
But let me have my penance for mentioning Einstein, because there is something to be said here. Using Einstein as a benchmark for intelligence is immediately evocative in the same way E = mc^2 is a symbol for a genius idea, but there's lots of different things that Einstein did that are impressive in different ways. An "Einstein level of talent" could mean doing any of those individual things, or it could mean being the kind of person who finds and does all of them.
- Twice, with special relativity and the concept of the photon, his contributions were brilliant because they took a hypothesis that had already been more-or-less worked out, took seriously the conceptual shift it suggested, and used it to go further.
-- Planck introduced the quantum hypothesis in 1900 to derive the blackbody spectrum (resolving what was later called the ultraviolet catastrophe, https://en.wikipedia.org/wiki/Ultraviolet_catastrophe), but he considered quantization a purely formal trick and unsuccessfully tried to get rid of it. Einstein took the light quantum to be a physical reality and used it to explain the photoelectric effect (which he got the Nobel prize for).
-- By 1905, when Einstein published the paper that founded special relativity, Lorentz and Poincaré had already worked out the formulae for time dilation and length contraction, were speculating about the relationship between "absolute" and "apparent" time, and had even found that the speed of light cannot be exceeded (see https://en.wikipedia.org/wiki/History_of_special_relativity#Lorentz's_1904_model for the state in 1904). But it was only Einstein who recognised that the concept of an absolute coordinate system can be dropped, that theories should be formulated independently of observers, and that everything follows from there. I think it would not have been possible to get to general relativity without this framework.
- It seems (though, as with all alternate history, it's hard to tell) that general relativity followed from there practically inevitably, and if it weren't for Einstein it would have been discovered by Hilbert weeks later (see https://en.wikipedia.org/wiki/General_relativity_priority_dispute).
- Besides the findings he's famous for, Einstein also contributed to the explanation of Brownian motion and to the later development of quantum mechanics (which he never accepted, but his attempts at demolishing the theory were helpful in constructing it -- see e.g. the EPR paradox, https://en.wikipedia.org/wiki/Einstein%E2%80%93Podolsky%E2%80%93Rosen_paradox). He even invented a refrigerator with no moving parts (https://en.wikipedia.org/wiki/Einstein_refrigerator). There's a fascinating tendency in history, up until about WWII, for famous names to reappear in distant fields -- for example, one solution of Einstein's equations, a spacetime containing closed timelike curves ("a time machine"), was discovered by Kurt Gödel, otherwise famous for the incompleteness theorems in logic (see https://en.wikipedia.org/wiki/G%C3%B6del_metric). Talent was indeed apparently concentrated in a few people, though whether because of innate abilities, the right group (scene?) getting together, or something else entirely, I don't know.
- And besides all of that, Einstein did much of this work in Germany in the early 20th century, and he was one of the few physicists who escaped the nationalist zeal, observing WWI "as personnel in a madhouse" (my GR textbook has a touching digression on this: https://utf.mff.cuni.cz/~semerak/GTR.pdf#subsection.8.1.5). This was not a given: contrast the Deutsche Physik program, led by Lenard and Stark, two Nobel laureates caught up in the Nazi madness.
All of this is quite far away from AI, and today's problems are in any case of a whole different nature again, but still I think it's nice context to have for what genius looks like. I would love to see more AI research focused on how people work (not just with genius ideas, but also mapping the kind of routine tasks that are accessible to AI here and now), rather than isolated benchmarks that seem to build on how school suggests people *should* work.
Thanks for that essay, it goes in the same direction and poses some helpful questions!
Thanks, this is fascinating perspective (for me at least) on Einstein's famous accomplishments. I knew he had been building on previous work but didn't know a lot of these details, and the "took a hypothesis that had already been more-or-less worked out, took seriously the conceptual shift it suggested, and used it to go further" framing is very interesting. I may quote what you've written here in a future post, if I try to explore the nature of genius.
Thanks! I would be careful with the "he had been building on previous work" part, since from my quick research for this comment (https://en.wikipedia.org/wiki/History_of_special_relativity#Electrodynamics_of_moving_bodies and https://en.wikipedia.org/wiki/Relativity_priority_dispute) it seems to be somewhat unclear what exactly Einstein was familiar with in 1905. It doesn't help that he didn't provide citations in the paper...
Edit: Oh, and this conceptual shift is also a massive special case in physics. YMMV, but since the development of quantum mechanics around 1925, there hasn't really been any shift on such a massive scale, so it might not be all that helpful for today.
My hot take re AI and progress on the squishy parts: this is why I tend to think perplexity is probably a better measure than basically all benchmarks to date, and also why I still think pre-training compute is going to be important to scale going forward.
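For anyone unfamiliar, perplexity is just the exponentiated average negative log-probability a model assigns to held-out text; lower means the model is less "surprised". A minimal sketch of the computation, assuming you already have the probability the model gave each true next token:

```python
import math

def perplexity(token_probs):
    """exp of the mean negative log-probability per token."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# Illustrative numbers only: assigning ~0.25 to each true token gives
# perplexity ~4, i.e. "as uncertain as a fair 4-way choice".
print(perplexity([0.25, 0.30, 0.20, 0.25]))  # ≈ 4.0
```

Unlike task benchmarks, it's measured on arbitrary text, which is arguably why it tracks the squishy, hard-to-name parts of progress.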
I agree that a lot of the tasks AIs can do fall into the category of tasks that humans aren't especially specialized to do well.
On the robotics question, I think that in the short term this is a real bottleneck, and something to keep in mind, but I do not expect it to postpone the date of human irrelevance by more than a decade.
I am reminded of the old "God of the gaps" argument that once seemed so prominent in debates regarding the place of god in an era of scientific progress. Whenever scientific understanding crossed a new threshold, filled in another piece of the puzzle of existence, those who still believed cried out: "But what about here? What about this that you do not know? Surely, god is there!"
Are we seeing a similar "human of the gaps"? Wherever AI still struggles, we point and cry: "See! This is the thing that separates true intelligence from artificial. This is what makes us special."
Perhaps this is the limit of my creativity, or a display of ignorance, but what is the purpose of designing an AI that outcompetes humans at various tasks? What is the ontological purpose of AI at its current level? We already see people being turned into mush by interacting with the current crop of LLMs; what use will the room-temperature intelligence of general humanity have for machines capable of infinitely difficult tasks in a couple of decades, if our societal creativity is stunted by the endless siphon of a digital world? AI learns from humans, but it has already run out of material and has begun to devour itself. We will get stupider before we have the chance to make the most of peak AI, before that AI simply goes schizo eating its own garbage.
It feels like the tool itself warps the user before collapsing on itself, like there is something unholy going on here. Or perhaps I am simply too pessimistic about it.