Discussion about this post

skybrian

> The implications range from “curing cancer” to “100% unemployment rate”, and go on from there. It sounds crazy as I type it, but I can’t look at current trends and see any other trajectory.

If you want a counterexample to get your imagination going, a good one is driverless cars. Somehow, they're better (safer) than human drivers but still not good enough for widespread use? We have high standards for machines in safety-related fields, while people are grandfathered in, even though we're often bad drivers. And there's no physical reason driverless cars can't work, no fundamental barrier.

Going from AI to "curing cancer" seems like an absurd overreach? There are new treatments, often very good, but they didn't need AI, and it's not clear how useful AI will be. Also, I would put medicine in the physical realm, which you've said you want to exclude?

It seems like this is easier to think about if we ban the word "intelligence" (poorly defined for machines) and just talk about tasks. It's often true that, for a given well-defined task, once a machine can do it as well as a person, the machine can be improved to do the task better. Or if not better, cheaper. Lots of tasks have already been automated using computers, and I expect that to continue. We can also change the task to make it more feasible for a machine. That happens all the time.

But beware survivorship bias. The machines you can think of survived in the marketplace because they were better along enough dimensions to be worth keeping. But there are also many failures.

Lukas Bergstrom

Architecturally, we don't know how to get really adaptive, goal-oriented behavior. I don't think this is a transformer-sized problem; it's the problem that took hundreds of millions of years of evolution to solve. Language, by contrast, took only a few hundred thousand years once the hard part, adaptive organisms, was solved. Or as Hans Moravec put it, abstract thought "is a new trick, perhaps less than 100 thousand years old … effective only because it is supported by this much older and much more powerful, though usually unconscious, sensorimotor knowledge." See the NeuroAI paper from Yann LeCun and others: https://www.nature.com/articles/s41467-023-37180-x

That doesn't mean AI won't surpass us at many tasks, but general-purpose agents (give them a very high-level goal and walk away) would likely require more than one breakthrough.
