Is it really plausible that human driver inattention just doesn’t matter here? Sleepiness, drug use, personal issues, eyes were on something interesting rather than the road, etc. I’d guess something like that is involved in a majority of collisions, and that Just Shouldn’t Happen to AI drivers.
Of course AI drivers do plausibly have new failure modes, like maybe the sensors fail sometimes (maybe more often than human eyes just suddenly stop working). But there should be plenty of data about that sort of thing from just testing them a lot.
The only realistic way I can see for AI drivers, that have been declared street-legal and are functioning in a roadway and regulatory system that humans (chose to) set up, to be less safe than human drivers is if there’s some kind of coordinated failure. Like if they trust data coming from GPS satellites or cell towers, and those start spitting out garbage and throw the AIs off distribution; or a deliberate cyber-attack / sabotage of some kind.
“have been declared street-legal and are functioning in a roadway and regulatory system that humans (chose to) set up”

This does not rule out the regulator screwing up the standards by not setting them high enough, or a builder flubbing the implementation.
For instance, pure imitation learning is liable to start driving like a bad driver if it was trained on bad drivers and then does one thing that a bad driver would do. (We have seen this failure mode in LLMs, where one bug in the output begets further bugs.)
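To make that failure mode concrete, here is a minimal toy sketch (my own illustration in Python, not any real driving stack): the cloned policy copies the expert on familiar states, but the moment it reproduces one bad-driver manoeuvre from its training data, it lands in states where “imitate the data” means “keep driving badly”, and the error compounds.

```python
# Toy sketch of compounding error in pure imitation learning (hypothetical, not
# any vendor's system). The "expert" always steers back toward the lane centre
# (pos = 0). The clone mimics the expert on the states it was trained on, but its
# training data also contained bad drivers, so it occasionally reproduces one of
# their manoeuvres; after that, the state looks like a bad driver's state, so it
# keeps imitating a bad driver.

def expert(pos, step):
    return -0.5 * pos              # always correct back toward centre

def clone(pos, step):
    if abs(pos) <= 0.8:            # inside the expert's state distribution
        if step == 10:             # one imitated bad-driver action (a swerve)
            return 1.0
        return -0.5 * pos          # otherwise mimic the expert
    return 0.3 * pos               # off-distribution: keeps acting like a bad driver

def worst_deviation(policy, steps=40):
    pos = 0.1
    worst = abs(pos)
    for step in range(steps):
        pos += policy(pos, step)
        worst = max(worst, abs(pos))
    return worst

print("expert worst deviation:", round(worst_deviation(expert), 2))
print("clone  worst deviation:", round(worst_deviation(clone), 2))
# The expert stays near the centre; the clone's single swerve pushes it into
# states where its learned behaviour is "bad driver", and it never recovers --
# the same bugs-beget-bugs dynamic seen with LLMs generating code.
```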
Similarly, the best way to push down crash rates for self-driving cars is by reconstructing every accident and training the vehicle to avoid them.
But if you mess up that training process, or if there are weird regulations in the way, you don’t get this benefit, and you can end up with a system that consistently crashes in particular situations because of how it generalizes from its training data.
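As a hedged sketch of that loop (hypothetical names, not any vendor’s pipeline): whatever gets filtered out of the reconstruction step, whether by a bug or by a rule about which logs may be used, is exactly what the retrained system still cannot handle.

```python
# Hedged sketch of an accident-replay training loop (illustrative only).
# Scenarios that never make it into the reconstructed set -- because of a
# pipeline bug or a restriction on which logs may be used -- are exactly the
# ones the system keeps failing on, release after release.

from dataclasses import dataclass

@dataclass
class Scenario:
    kind: str          # e.g. "unprotected_left", "emergency_vehicle"
    log: str           # pointer to the recorded sensor data

def reconstruct(crash_reports, excluded_kinds):
    # Rebuild each crash as a training scenario, unless it is filtered out.
    return [Scenario(r["kind"], r["log"])
            for r in crash_reports
            if r["kind"] not in excluded_kinds]

def retrain(model, scenarios):
    # Stand-in for the real training step: just record which kinds were covered.
    model["covered"] |= {s.kind for s in scenarios}
    return model

# Toy fleet data: two crash kinds, one of which a buggy filter drops.
crash_reports = [
    {"kind": "unprotected_left", "log": "log_001"},
    {"kind": "emergency_vehicle", "log": "log_002"},
]
model = {"covered": set()}
model = retrain(model, reconstruct(crash_reports,
                                   excluded_kinds={"emergency_vehicle"}))

print("trained on:", model["covered"])
# -> trained on: {'unprotected_left'}
# "emergency_vehicle" crashes never enter training, so the system keeps
# generalizing from data that does not contain them -- and keeps crashing there.
```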
A good example of this kind of misimplementation is the timeouts after collision warnings on Waymo vehicles, which have caused multiple crashes without getting fixed. If something like that just slides, you don’t end up safer.
If the only defense against this is activity from the regulator, then arguing that the regulator should get out of the way to make things safer does not work.
The complexity of getting declared street-legal is exactly why safety remains an open question.