Discussion about this post

Player1:

As a mathematician, I am annoyed by the common assumption that proving the Riemann hypothesis *doesn't* require managing complexity, metacognition, judgement, learning+memory, and creativity/insight/novel heuristics. Certainly, if a human were to establish a major open conjecture, in the process of doing so they would demonstrate all of these qualities. I think people underestimate the extent to which a research project (in math, or in science) differs from an exam question that is written by humans with a solution in mind.

Perhaps AI will be able to answer major open questions through a different, more brute-force method, as it did in chess. But chess is qualitatively very different from math: playing chess well demands far more raw calculation than many areas of math do. (At the end of the day, chess has no deep structure.)

Also, prediction timelines for the Riemann Hypothesis or any specific conjecture are absurd. For all we know, we could be in the same situation as Fermat in the 1600s, where proving that the equation a^n + b^n = c^n has no positive integer solutions for n > 2 might require inventing modular forms, étale cohomology, the deformation theory of Galois representations, and a hundred other abstract concepts Fermat had no clue about. (Of course, there is likely some alternate proof out there, but is it really much simpler?) It is possible that we achieve ASI and complete a Dyson sphere before all the Millennium Prize Problems are solved: math can be arbitrarily hard.

Vaughn Tan:

I've been thinking about this question for a while!

When we consider what humans "actually" do, we often look at tasks and their outputs. A different way to approach the question is to look at the subjective valuations placed on tasks and their outputs. I believe this alternative is superior because it discriminates more clearly between [essentially human actions] and [actions humans currently do that machines could do better].

I call this act of deciding and assigning subjective value "meaningmaking."

A writer choosing this word (and not that word) to achieve the correct tone for a blogpost is engaging in an act of meaningmaking — the choice of word is the result of deciding that one word is subjectively better than another in conveying the chosen tone for the intended audience.

These meaningmaking acts are everywhere in daily life, corporate life, and public life.

Deciding that this logo (and not that logo) is a better vehicle for corporate identity — meaningmaking. Choosing to hire this person (but not that person) because they are a better culture fit — meaningmaking. Ruling that this way of laying off lots of government employees (and not that way of doing it) is unlawful — meaningmaking.

Humans do 4 types of meaningmaking all the time:

Type 1: Deciding that something is subjectively good or bad. “Diamonds are beautiful,” or “blood diamonds are morally reprehensible.”

Type 2: Deciding that something is subjectively worth doing (or not). “Going to college is worth the tuition,” or “I want to hang out with Bob, but it’s too much trouble to go all the way to East London to meet him.”

Type 3: Deciding what the subjective value-orderings and degrees of commensuration of a set of things should be. “Howard Hodgkin is a better painter than Damien Hirst, but Hodgkin is not as good as Vermeer,” or “I’d rather have a bottle of Richard Leroy’s ‘Les Rouliers’ in a mediocre vintage than six bottles of Vieux Telegraphe in a great vintage.”

Type 4: Deciding to reject existing decisions about subjective quality/worth/value-ordering/value-commensuration. “I used to think the pizza at this restaurant was excellent, but after eating at Pizza Dada, I now think it is pretty mid,” or “Lots of eminent biologists believe that kin selection theory explains eusociality, but I think they are wrong and that group selection makes more sense.”

At the moment, I cannot see a way for an AI system to do meaningmaking work.

I've quoted a lot from an article I wrote on the problem AI systems (and machines more generally) have with meaningmaking: https://uncertaintymindset.substack.com/p/ai-meaningmaking.

It's part of a longer series of essays about how the meaningmaking lens helps us understand what AI can and should be used for (and what it can't do and should not be used for): https://vaughntan.org/meaningmakingai

Very much a work in progress, so I would love comments and suggestions from this community.
