Slow Adoption Applies To Evil AI, Too, The Market for AI Coding Tools Could Be Dwarfed by Health Care Administration, Why Do Radiologists Still Have Jobs?, and More
Thanks for sharing my post about radiology. Just to clarify, are your hypotheses in addition to the one I proposed (the hard parts are the boundaries between tasks) or are you saying that that hypothesis does not seem plausible to you? Cheers.
On re-reading, I think I did not express myself well. My second bullet was meant to encompass the idea that what's missing may be the boundaries between tasks ("messy gap-filling work that’s hard to study").
I do find it very plausible that the hard parts, at least for AI, are the boundaries between neatly-defined tasks – and that this likely is a significant, perhaps primary, reason that radiology is not more automated.
Ah, got it. Thanks for clarifying.
I absolutely think there is a lot of room for AI to improve healthcare administration. But the larger question is: are the incentives there?
An example that's fresh in my mind: I recently received a bill in the mail for a $10 copay. The bill mentioned an option for paying online, but it was broken, so I had to pay over the phone. The phone system was entirely automated and probably doesn't cost the provider much to maintain, so there may not be much incentive to fix the website. The result is terrible UX that the customer has no choice but to deal with: typing in all of my information digit by digit rather than using autocompleted fields online.
Ultimately though, I wonder what led the process to be the way it was. Why couldn't I have just quickly paid the $10 at the office rather than receiving multiple reminders in the mail and going through this whole process?
I wonder how much innovation is held back by a lack of competition and accountability to end consumers of healthcare.
Sadly, I'm not referring to the paperwork burden on consumers. I'm referring to the costs incurred by health care providers, insurers, and others involved in health care delivery and payment. Apparently this alone is a trillion-dollar-per-year proposition in the US, *separately* from the enormous consumer burden you mention.
Though it's also conceivable that within the next few years, you'll have access to an agent that can successfully and reliably pay your bill over the phone on your behalf.
>"Perhaps I’m missing something, or perhaps the AI industry is just focusing much harder on coding tools than medical paperwork."
I think this explains much of what's going on. From my experience as a lawyer who codes and works freelance for legal tech startups, real familiarity with the problems lawyers face (the ones AI could solve) is rare in the industry. Ultimately, you need to make something that people will pay for, and that requires a ton of work downstream of the LLM itself: good product-market fit, yes, but also an understanding of the relevant workflows and benchmarks for that industry, so that the LLM is orchestrated correctly and its outputs are trustworthy and reliable.
It happens that the domain most coders understand is...coding. So I'm not surprised that this is where most of the action is in the first wave of LLM apps. I don't think this is a reflection of market-driven behavior so much as it is feedback in the networks of people actually building these tools.
Amara's Law could be another explanation for radiologists not losing their jobs. "We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run."
Personally, I believe people underestimate the complexity of jobs when they try to automate tasks.