Discussion about this post

Koen van den Heuvel:

Wow. Thank you, Steve. For someone who is not in the field of AI, this is an incredible read. Thank you so much.

persona non-sequitur:

Very interesting point on LLM limitations. It would be interesting to see whether there are good implementations of architectures that go beyond predicting the next word. But barring that, I can imagine workarounds that might be somewhat useful for automated agents and decrease their error rate. Lots of people are already trying things like giving the model an internal dialogue, long-term memory, and coding and calculation abilities via APIs. On top of these, instead of directly answering the question, the model could ask itself: what are the necessary (and maybe sufficient) conditions for a correct result? Is there a way for me to check my result, for example through an API call to Wolfram Alpha? It could ask these questions for each step it generates, and then only accept a step whose result satisfies the conditions. Or better yet, maybe it could choose previous steps based on the conditions as well, working backwards.
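The generate-then-verify loop described in this comment can be sketched roughly as follows. This is a minimal illustration, not anyone's actual implementation: `propose_steps` and `check` are hypothetical stand-ins for an LLM sampling candidate steps and for an external verification call (such as a Wolfram Alpha API query), replaced here with toy functions so the loop is runnable.

```python
# Sketch of the comment's idea: only accept a candidate step that
# satisfies an externally checked condition.

def propose_steps(question):
    """Stand-in for an LLM proposing candidate intermediate results.

    A real agent would sample several candidates from the model;
    here we just return a fixed list, only one of which is correct.
    """
    return [question * 2 + 1, question * 2, question + 2]

def check(question, candidate):
    """Stand-in for an external verification call (e.g. a CAS API).

    Encodes the 'necessary condition' for this toy task:
    the answer must equal twice the input.
    """
    return candidate == question * 2

def solve(question):
    """Accept only a candidate that passes verification."""
    for candidate in propose_steps(question):
        if check(question, candidate):
            return candidate
    return None  # no candidate satisfied the conditions

print(solve(21))  # prints 42, the only verified candidate
```

In a multi-step version, the same check would run after every generated step, and a failed check could trigger resampling of earlier steps, which is the "working backwards" variant the comment suggests.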

