Discussion about this post

Matthew Stone

I’m quite excited about the near-term ways AI can augment and enhance individuals. I personally get a lot of value from GPT in its current form. One of my favorite use cases is to give it a problem or situation along with the things I’m considering and ask it for additional areas, risks, or considerations I may have missed. While the majority of the output is predictable, it will usually surface one or two worthwhile things I would otherwise have missed.
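That "what am I missing?" workflow is easy to turn into a reusable script. A minimal sketch below; the `build_review_prompt` helper and the prompt wording are my own illustration, not from the comment, and the resulting string could be sent to any chat-completion API:

```python
def build_review_prompt(problem, considerations):
    """Assemble a blind-spot review prompt from a problem statement
    and the considerations already on the table."""
    listed = "\n".join(f"- {c}" for c in considerations)
    return (
        f"Problem/situation:\n{problem}\n\n"
        f"Considerations I already have:\n{listed}\n\n"
        "List additional areas, risks, or considerations I may have missed. "
        "Skip anything that restates the items above."
    )

# Hypothetical example inputs for illustration.
prompt = build_review_prompt(
    "Migrating our billing service to a new payment provider",
    ["API compatibility", "downtime window", "refund handling"],
)
print(prompt)
```

Keeping the already-considered items in the prompt is what pushes the model past the predictable majority of suggestions and toward the one or two genuinely missed ones.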

GPT is already useful, but I look forward to it having a bit more...agency (for lack of a better term) to speak up on its own. As an example, if I have a particular goal I’m trying to achieve, say losing weight, and I frequently stumble on my discipline in the evenings by overeating (this example is starting to hit too close to home 😅), it could craft a message drawing on its understanding of not only my habits and tendencies but also its body of psychology “knowledge.” Even at small percentages of effectiveness this could be a meaningful enhancement.

I can think of a few others in a similar vein, and honestly none of this requires any additional AI advancement, just using the software systems surrounding it more cleverly.

These types of augmentations raise the usual suspects of concern. Specifically on my mind is skill atrophy, even for myself and even in the near term. I find myself letting Copilot write a full function for me, which, from experience, requires vigilance to ensure it’s doing what you actually want and not in some fraught way. Humans suck at vigilance over the long haul, and AI is _just_ reliable enough to trust it.

