Discussion about this post

Tom Dietterich

It is great to have these examples laid out in one place -- thank you! All of them reveal an inability to identify and track the relevant state of the world, such as net profit, cost of inventory, and so on. As you point out, they also reveal, yet again, the absence of meta-cognition: in this case, the system's failure to manage its own time (as well as to keep track of its own identity and task). Finally, we see that these systems may know many things (e.g., that discounts are a bad idea) but cannot reliably apply that knowledge when taking action.

Abhay Ghatpande

Hi Steve, I've been reading posts by Gary Marcus, Peter Voss, Srini P., and others, and their view, as far as I can tell, is that agents need "world models" to operate. (Not the world models recently introduced for gaming and video generation, but an actual "model" of the world.) I haven't been able to find more detail on exactly what they mean, because the supposed leader in this space, aigo.ai, has zero information on its site. If this is true, it suggests that agents would need to be highly specialized and narrowly focused on a task, because building a broad, general-purpose model (of concepts and their relationships) is close to impossible.

I would love to see a hybrid agent that combines LLMs and Cognitive AI. If you are aware of any such efforts, please point them out. Thank you for your (second) thoughtful post and for your efforts to educate us.

