Some context: In the past I had a job as a quality assurance inspector. Very soon after starting, I realized a machine could easily do my job with fewer errors and for less than I was being paid, so I wondered, “Why do they pay a human to do this job?” My conclusion was that when a machine makes a mistake, as it is bound to do eventually, they can’t really fire it or yell at it the way they can a human. A human can be blamed.
So I agree with you. In the future I can see robots doing all the jobs except being the scapegoats.
Self-driving cars have a similar problem. Even if a self-driving car caused 100 times fewer accidents than a human driver, when an accident does happen, we need a human to blame.
How will we determine who goes to jail? Elon Musk? The poor programmer who wrote the piece of software identified as having caused the bug? Or maybe someone like you, who “should have checked that the car is 100% safe,” even though everyone knows that is impossible. Most likely, it will be someone at the bottom of the corporate structure.
For now, as far as I know, the solution is to require a human driver in every self-driving car. In the event of an accident, that human will be blamed for failing to take over the controls and avoid it.
But I suppose that shifting the blame from the customer to some low-wage employee of the manufacturer would be better for sales, so legislation will likely move in that direction someday. We just need to find the proper scapegoat.