The Tesla auto-driver accident was truly an accident. I didn’t realize it was a semi crossing the divider and two lanes to hit him.
https://www.teslamotors.com/blog/misfortune
Here (copy) is a diagram.
Tesla’s algorithm is supposed to be autonomous for freeways, not for highways with intersections like this one. Even if the algorithm had done exactly what it was supposed to do, it would not have prevented this crash. But it was supposed to eventually apply the brakes, and its failure to do so was a real failure of the algorithm. The driver also erred in failing to brake, probably because he was inappropriately relying on the algorithm. Maybe this was a difficult situation and he could not be expected to prevent the crash, but his failure to brake at all is a bad sign.
It was obvious when Tesla first released this that people were using it inappropriately. I think they have since released updates to encourage better use, but I don’t know how successful those have been.
Yep, according to the truck driver, the Model S driver was watching Harry Potter, and it was still playing even after the car came to a stop. He probably had his eyes completely off the road.
The truck pulled in front of the Model S. The Model S had enough time to brake and stop, but it didn’t recognize the truck against the brightly lit sky.
What is unclear is whether the driver would likely have seen it in time if the car had had no autonomous mode. Humans, even when they have been paying attention for long periods of time, are still way better at recognizing objects than computers.
My expectation is that this is exactly the problematic case for statistical vs. concrete risk analysis. The automated system as it exists today is generally safer than humans, since it is more predictable and reliable. However, there are individual situations where even a less statistically safe system, such as a human forced to pay attention by having only limited automation, can avoid an accident that the automated system cannot.
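To make that statistical-vs-concrete distinction concrete, here is a toy back-of-the-envelope sketch in Python. All the scenario shares and accident rates are made-up numbers chosen purely for illustration, not real Tesla or NHTSA data; the point is only that a system can win on the miles-weighted average while losing badly in one rare scenario class like cross traffic.

# Toy numbers, all hypothetical, illustrating statistical vs. concrete risk:
# an automated system can be safer on the miles-weighted average while
# being clearly worse in one rare scenario class.

# scenario: (share of miles driven, human accidents per 1M miles,
#            automated accidents per 1M miles)
SCENARIOS = {
    "routine freeway": (0.90, 1.0, 0.2),  # automation helps most here
    "cross traffic":   (0.01, 2.0, 8.0),  # automation is worse here
    "everything else": (0.09, 1.5, 1.0),
}

def weighted_rate(column):
    """Miles-weighted accident rate; column 1 = human, 2 = automated."""
    return sum(vals[0] * vals[column] for vals in SCENARIOS.values())

human, automated = weighted_rate(1), weighted_rate(2)
print(f"human:     {human:.3f} accidents per million miles")     # 1.055
print(f"automated: {automated:.3f} accidents per million miles")  # 0.350

# With these made-up numbers the automated system is about 3x safer
# overall, even though it is 4x worse in the cross-traffic scenario,
# which is exactly the kind of case where an attentive human could
# have done better.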