These are all solved problems or nearly so, as far as I can tell.
Details / citations? I conjecture that they are not, based on my beliefs about the quality of demos, the limited circumstances in which Google’s vehicles have been demonstrated, and my perception of the difficulty of solving the AI problems involved.
From the little I know, it looks like Google cars are at least able to tell when they need a human to take over (and to stop when that human doesn’t). One could also imagine a semi-automatic mode where the driver helps the car in unusual situations.
I for one would be very happy to have a car that would drive itself most of the way, calling for my attention only once in a while. Solving only the common case (highway and traffic jam) is already very useful. Now I can read in my car!
From the little I know, it looks like Google cars are at least able to tell when they need a human to take over (and to stop when that human doesn’t).
Okay, this makes perfect sense, thanks.
It does feed into another conjecture I heard (I forget where), that most deaths from robot cars are going to happen when they unexpectedly hand control to a human, who needs time to transition into the right mental context to drive.
Yep. It already happens with airliners, where pilots make errors specifically because they are too used to automation and fall prey to boredom-induced sleepiness.
If there are sufficiently few special cases, we could have the car stop before it switches to manual mode. The goal is for the human to take control at leisure, not in the midst of motion.
Or, switch to a semi-manual mode, where the safeties are still on. Even when lost, the car can still see nearby obstacles, sense whether the tires grip the road, etc.
We could also keep a fully manual (and unsafe!) mode, activated only by a pull on a lever followed by a push on a red button beneath the lever. Hopefully that should eliminate most “context switching” errors. (And the human could be blamed, the car insurance might not apply, etc., making you think thrice before you actually switch.)
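The handoff policy sketched over the last few comments (stop first, then semi-manual with safeties on, then a deliberate two-step switch to full manual) could be modeled as a small state machine. This is purely a toy illustration of the idea — every name and transition here is my own invention, not anything Google has described:

```python
from enum import Enum, auto

class DriveMode(Enum):
    AUTONOMOUS = auto()
    STOPPED_AWAITING_DRIVER = auto()  # car has pulled over, waiting for the human
    SEMI_MANUAL = auto()              # driver steers, safeties still active
    FULL_MANUAL = auto()              # safeties off, driver fully responsible

class HandoffController:
    """Toy model of the mode transitions discussed above."""

    def __init__(self):
        self.mode = DriveMode.AUTONOMOUS

    def encounter_special_case(self):
        # Instead of dumping control on the driver mid-motion,
        # the car stops first and waits.
        self.mode = DriveMode.STOPPED_AWAITING_DRIVER

    def driver_takes_over(self):
        # The driver resumes at leisure; safeties remain on.
        if self.mode is DriveMode.STOPPED_AWAITING_DRIVER:
            self.mode = DriveMode.SEMI_MANUAL

    def engage_full_manual(self, lever_pulled: bool, red_button_pushed: bool):
        # A deliberate two-step action (lever, then red button) guards
        # against an accidental or hasty switch to the unsafe mode.
        if (self.mode is DriveMode.SEMI_MANUAL
                and lever_pulled and red_button_pushed):
            self.mode = DriveMode.FULL_MANUAL
        return self.mode
```

The point of the structure is that the dangerous transition (to `FULL_MANUAL`) is only reachable from the already-attentive `SEMI_MANUAL` state and requires two distinct deliberate inputs, which is exactly the “context switching” safeguard being proposed.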