I enjoyed reading it, and wanted to share this:
A U.S. Department of the Treasury report states that personal auto insurance premiums in 2023 were about USD 318 billion (i.e. ~35.8% of the U.S. property & casualty insurance market). 
• Another source (AgencyChecklists) puts U.S. direct premiums written for private passenger auto in 2024 at USD 344.11 billion 
That's a great metric! Presumably it includes some unrelated costs (e.g. theft), but I imagine this is greatly outweighed by the full human toll of accidents not being covered by insurance – as well as vehicles other than personal automobiles.
Not exactly. These numbers represent private passenger auto insurance, which insurance companies are required to report every year. Drivers who do not carry insurance are breaking the law. When we think about this in the context of autonomous vehicles, the situation becomes even more complex. If I am not actively driving my car, I should not be held personally responsible for accidents that occur. In that case, liability might reasonably shift to the manufacturer or to the vehicle's operating system. Theft could also become a manufacturer or insurer issue, since modern vehicles are packed with sensors, while vandalism might be handled through public programs funded by vehicle-related taxes collected at purchase.
The total premiums collected each year are enormous (larger than the GDP of many countries), which likely creates pressure from large insurance companies to resist regulatory changes that could reduce their revenue.
Finally, if I purchase a new autonomous vehicle, I should only have to pay insurance premiums when I disable autopilot and assume control. Insurance could be billed by the hour or by usage rather than as a flat annual rate. This approach would encourage the adoption of safer autonomous and electric vehicles, supporting national goals to transition toward full electrification of the fleet. China is already moving in this direction: companies such as BYD include advanced autopilot features as a standard part of the vehicle package.
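One way to picture that usage-based billing idea: premiums accrue only while the human is driving. The per-mile rates below are made-up placeholders for illustration, not anything an insurer actually charges:

```python
# Hypothetical per-mile rates (illustrative assumptions, not real pricing).
MANUAL_RATE = 0.10      # $/mile while the human drives
AUTONOMOUS_RATE = 0.01  # $/mile residual charge while autopilot drives

def monthly_premium(manual_miles: float, autonomous_miles: float) -> float:
    """Premium billed by actual usage instead of a flat annual rate."""
    return manual_miles * MANUAL_RATE + autonomous_miles * AUTONOMOUS_RATE

# A driver who hands 90% of their miles to the autopilot pays far less
# than one who drives the same total mileage manually:
print(round(monthly_premium(100, 900), 2))   # mostly autonomous
print(round(monthly_premium(1000, 0), 2))    # fully manual
```

The point of the sketch is just the incentive structure: every mile moved from the manual column to the autonomous column directly lowers the bill.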
https://home.treasury.gov/news/press-releases/jy2797
"So far, no other AV maker has demonstrated a safety record similar to Waymo's."
I think that's largely due to a lack of mileage data. Once the other AV providers get enough miles under their belts, hopefully they can boast similar figures.
This should give them an easy lobbying campaign. Few other technologies could save so many lives while being profitable in their own right. AV providers can reach out to policymakers and city councils and say, 'Look, we're going to reduce the leading killer of teenagers in your city.' How compelling is that?
It's also really encouraging that this technology is gaining momentum on its own merits, and that the OEM industry is not lobbying against it but joining the winds of change.
Hopefully so!
I have to say, my impression is that Waymo cars today are in fact safer than most (possibly all) other AVs. Waymo has spent decades and many billions of dollars on a patient, thorough, thoughtful, multi-layered effort to develop a safe driver. They've also invested in a thorough sensor package that (for the moment) costs on the order of $100,000 per vehicle. Hopefully other manufacturers will get there on safety, and costs should come down, but I'm not aware of anyone else being there yet.
No notes. Rarely an article that says exactly what I believe. Thank you for taking the time to make this point.
In very round figures, there are roughly 1.5 fatalities per 100 million miles driven on US roads. So the expected number of fatalities to compare Waymo against would be about 1 or 2; at present the actual number seems to be 0. That's good news, but not yet nearly good enough to give the degree of statistical significance that might support a claim that driverless cars are on track "to virtually eliminate traffic injuries and fatalities".
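A quick Poisson back-of-envelope makes the significance point concrete. The 100-million-mile figure here is an assumption chosen to match the "about 1 or 2 expected fatalities" estimate above, not Waymo's reported mileage:

```python
import math

# Assumed figures for illustration only.
human_rate = 1.5 / 100e6   # fatalities per mile on US roads (round figure)
driverless_miles = 100e6   # assumed driverless mileage

expected = human_rate * driverless_miles  # expected fatalities at the human rate

# Under a Poisson model, the probability of observing zero fatalities
# even if the driverless fleet were exactly as dangerous as human drivers:
p_zero = math.exp(-expected)
print(f"expected = {expected:.1f}, P(0 fatalities | human rate) = {p_zero:.2f}")
```

With roughly a 1-in-4 chance of seeing zero fatalities even under the human baseline, zero observed fatalities is consistent with that baseline, which is exactly the significance problem.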
Presumably there are much better peer-reviewed statistics than these headline figures, but I would guess that accidents and fatalities should be assessed per interaction rather than per mile. If so, what we're seeing so far is data on the accident rate per interaction between a driverless car and a human-driven car, or between a driverless car and another road user. There are so few driverless cars on the road that there will be very few driverless-car/driverless-car interactions. Distinguishing those interactions is not just a formality; it has a genuine causal basis. The AI inside a driverless car is trained on data, and so far there has been little opportunity for those AIs to be trained in live situations involving many interactions with other driverless cars. I claim that nobody actually knows how these autonomous systems will behave when interacting together in large numbers.
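The scarcity of driverless/driverless interactions can be made concrete with a simple mixing model. This assumes vehicles encounter each other at random (well-mixed traffic), which is of course a simplification:

```python
# Share of each encounter type when a fraction p of vehicles are driverless,
# under a well-mixed-traffic assumption (illustrative only).
def interaction_mix(p: float) -> dict:
    return {
        "driverless/driverless": p * p,
        "driverless/human": 2 * p * (1 - p),
        "human/human": (1 - p) ** 2,
    }

# At 1% driverless penetration, only 0.01% of encounters are driverless/driverless:
print(interaction_mix(0.01))
# At 50% penetration, a quarter of all encounters are driverless/driverless:
print(interaction_mix(0.5))
```

Because the driverless/driverless share grows quadratically with penetration, today's data tells us almost nothing about that regime, yet the regime arrives quickly once penetration rises.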
It's true that there isn't enough data to draw direct conclusions regarding fatality rates. What we do know is that Waymo is *vastly* less likely to be the cause of mild to moderate collisions. I am making the assumption that a system which is very very very good at avoiding milder collisions is also likely to be good at avoiding severe / fatal collisions. It is conceivable that there are rare edge cases under which Waymos will trigger a fatal collision, but I would expect those edge cases to also trigger a larger number of non-fatal collisions, and that would likely have shown up in the data by now. It just seems like a stretch to postulate that Waymos will have fatal collisions but no similar-but-not-fatal incidents.
It also seems reasonable to assume that its safety record will only improve from here, especially with regard to serious collisions, because (a) this is the type of tech that tends to improve rapidly over time, (b) Waymo has strong incentives to continue avoiding fatalities, and (c) they have access to lots of data about any near misses that occur. ("C" is really a sub-point of "A".)
> there isn't enough data to draw direct conclusions regarding fatality rates
I rather think that's exactly what this article does, with language like "virtually eliminate traffic injuries and fatalities" and "road deaths [...] would be almost as rare, and seem almost as bizarre, as cases of scurvy". The appropriate language to use here is data, not rhetoric.
> this is the type of tech that tends to improve rapidly over time
Tech doesn't just improve all by itself like a fine wine maturing in a cool cellar. It is improved by human effort and the use of resources, and in the case of AI by the use of more and better data. I've suggested a reason to believe that the operating environment for driverless cars will move away from the one on which they were trained (driverless cars in a tiny minority, all interactions between isolated driverless cars and mainly human agents) to one for which they have not yet been trained (driverless cars in the majority, many interactions between driverless cars en masse). To the extent that the environment moves in this way, the tech will get worse, not better, as its training sets drift further from reality. Fixing that will require effort and money. Is that going to be forthcoming, and if so, where from?
"But none of this can excuse delay. Once an AV manufacturer has demonstrated a safety record like Waymo’s, we should roll out the red carpet."
I think part of the reason they have such a good safety record is exactly *because* they're careful when rolling things out. For example, if we roll out self-driving too quickly to new cities and crash rates go up, I imagine this will only make people more skeptical.
Absolutely. The companies that are designing, building, and deploying these systems should exercise caution. But the rest of the paragraph you quoted reads as follows:
> Fast-track planning approval for maintenance depots and grid connections for charging stations. Figure out how to support displaced workers, instead of supergluing them to their current jobs. Treat issues as problems to be solved, not obstacles to be placed in the path of this lifesaving technology.
There are hurdles in the path of AV deployment that have nothing to do with careful management of the safety / deployment curve. I am referring, for instance, to proposals to effectively ban driverless vehicles in Boston (https://www.universalhub.com/2025/boston-city-council-consider-requiring-driverless-cars-have-human).