obviously the self-driving trucks will overall be vastly safer
I don’t think that is obvious.
I mean, it is obvious from scaling laws that you can make a self-driving truck with accidents at the noise/problem-entropy floor. But that entropy is unknown, human drivers might already be at it, and reaching it is sensitive both to getting good training data on as many accidents, near-accidents, and non-accidents as possible, and to your traditional code: any failures at all in it (think rates like 1 per million vehicle miles) will show up. Fatal accidents are already down to roughly 1.4 deaths per 100 million vehicle miles, with most of the deaths in the smaller vehicles involved in the collisions rather than in the trucks, though not having a driver around to get hurt will still reduce risk.
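To put those two numbers side by side, here is a back-of-the-envelope sketch (Python; the 1-per-million code-failure rate is the rough assumption from above, not a measured figure):

```python
# Back-of-the-envelope comparison of the two rates mentioned above.
software_failures_per_mile = 1 / 1_000_000      # assumed rate of traditional-code failures
human_fatalities_per_mile = 1.4 / 100_000_000   # current fatal-accident rate, roughly

print(f"code failures per 100M miles:    {software_failures_per_mile * 1e8:.0f}")
print(f"human fatalities per 100M miles: {human_fatalities_per_mile * 1e8:.1f}")
print(f"ratio: {software_failures_per_mile / human_fatalities_per_mile:.0f}x")
# Roughly 100 code failures vs 1.4 fatalities per 100M miles: even if only a
# small fraction of code failures lead to a crash, they show up in the totals.
```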
In short, self-driving trucks will only be safer if the company making them actually executes. In most industries, at least one company fails to execute.
(There is a lingering policy question in that we would want fully person-free trucks to do some things that regular vehicles might not, like taking the bad side of a collision if it would reduce the odds of injury.)
Is it really plausible that human driver inattention just doesn’t matter here? Sleepiness, drug use, personal issues, eyes were on something interesting rather than the road, etc. I’d guess something like that is involved in a majority of collisions, and that Just Shouldn’t Happen to AI drivers.
Of course AI drivers do plausibly have new failure modes, like maybe the sensors fail sometimes (maybe more often than human eyes just suddenly stop working). But there should be plenty of data about that sort of thing from just testing them a lot.
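For a sense of scale on "testing them a lot": a quick sketch using the standard rule-of-three bound (the mileage figures below are illustrative, not from any real test program):

```python
# Rule of three: after N trials with zero observed failures, an approximate
# 95% upper confidence bound on the per-trial failure rate is 3 / N.
def upper_bound_95(failure_free_miles: float) -> float:
    return 3.0 / failure_free_miles

# Illustrative failure-free test mileages.
for miles in (1e6, 1e7, 1e8):
    bound = upper_bound_95(miles)
    print(f"{miles:>12,.0f} miles -> rate < {bound * 1e8:.1f} per 100M miles (95% bound)")
```

At the 1.4-deaths-per-100M-miles scale, that means on the order of a couple hundred million failure-free miles before the test data alone pins the rate down below the human baseline.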
The only realistic way I can see for AI drivers, which have been declared street-legal and are functioning in a roadway and regulatory system that humans (chose to) set up, to be less safe than human drivers is if there’s some kind of coordinated failure. Like if they trust data coming from GPS satellites or cell towers, and those start spitting out garbage and throw the AIs off distribution; or a deliberate cyber-attack / sabotage of some kind.
have been declared street-legal and are functioning in a roadway and regulatory system that humans (chose to) set up
That does not rule out the regulator screwing up the standards by not setting them high enough, or a builder flubbing the implementation.
For instance, pure imitation learning is liable to start driving like a bad driver if it was trained on bad drivers and it does even one thing that a bad driver would do. (We have seen this failure mode of bugs begetting more bugs in LLMs.)
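A toy illustration of that feedback loop (made-up probabilities, nothing to do with any real driving stack): once the policy conditions on its own previous action, a single bad-looking action makes further bad actions much more likely.

```python
import random

# Toy imitation policy that conditions on its own previous action. Trained on
# a mix of good and bad drivers, the learned conditional effectively says:
# "if the last action looked like a bad driver's, the next one probably will too."
P_BAD_GIVEN_GOOD = 0.02  # assumed: occasional bad-looking action even while driving well
P_BAD_GIVEN_BAD = 0.80   # assumed: bad drivers keep driving badly, and imitation copies that

def rollout(steps: int) -> float:
    """Return the fraction of steps spent in 'bad driver' mode during one drive."""
    bad = False
    bad_steps = 0
    for _ in range(steps):
        p = P_BAD_GIVEN_BAD if bad else P_BAD_GIVEN_GOOD
        bad = random.random() < p
        bad_steps += bad
    return bad_steps / steps

random.seed(0)
drives = [rollout(1000) for _ in range(200)]
print(f"mean time in bad-driver mode: {sum(drives) / len(drives):.1%}")
# ~9% of the time, even though a policy without the feedback loop would be near 2%.
```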
Similarly, the best way to push down crash rates for self-driving cars is by reconstructing every accident and training the vehicle to avoid them.
But if you mess up your training mix, or if there are weird regulations, you don’t get this, and you can end up with a system that consistently crashes in particular situations because of how it generalizes from its training data.
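A minimal sketch of what that training mix might look like (hypothetical scenario classes, counts, and weights): any class whose weight ends up at zero, whether by regulation or by a config mistake, simply never gets trained on, and that is where the consistent crashes come from.

```python
import random

# Hypothetical training pool with oversampling of reconstructed incidents.
TRAINING_POOL = {
    "nominal_driving":           {"count": 1_000_000, "weight": 1.0},
    "reconstructed_accident":    {"count": 5_000,     "weight": 50.0},
    "near_miss":                 {"count": 40_000,    "weight": 10.0},
    # Excluded by policy or by mistake -- the model never sees it in training.
    "stopped_emergency_vehicle": {"count": 800,       "weight": 0.0},
}

def sample_batch(pool: dict, batch_size: int) -> list[str]:
    """Draw a batch with each class weighted by count * weight."""
    classes = list(pool)
    probs = [pool[c]["count"] * pool[c]["weight"] for c in classes]
    return random.choices(classes, weights=probs, k=batch_size)

random.seed(0)
batch = sample_batch(TRAINING_POOL, 10_000)
for c in TRAINING_POOL:
    print(f"{c:28s} {batch.count(c):6d} samples")
```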
A good example of this kind of misimplementation is the timeout after a collision warning on Waymo vehicles, which has caused multiple crashes without getting fixed. If something like that just slides, you don’t end up safer.
If the only defense against this is activity from the regulator, then arguing that the regulator should get out of the way to make things safer does not work.
The complexity of getting declared street-legal is exactly why the safety is an open question.
“Self-driving trucks will be measurably safer than the status quo” is probably true, but it is not obvious.