This is mildly offtopic, but: I’m curious what LessWrongers’ thoughts are on just how near robot cars are. The general impression I get from people is “they’re going to be here any day now, and Google already has some”. I wonder/suspect that Google’s are actually very much demo-quality devices, meaning they only work under controlled conditions. If so, then the first 80% of the engineering is done, but the last 80% is still to go, and we may not see consumer robot cars for quite a while.
In particular, I’m curious about how they intend to solve problems like: navigating in the presence of work zones and human flaggers, navigating two-way streets too narrow for two cars to pass, 5/6/7-way intersections with specific lanes restricted in which way they can turn, etc.
I imagine some of these problems will be solved by blacklisting particular places for the robot cars to go, or even, at first, whitelisting a small set of places they’re allowed (e.g. major highways).

Thoughts?
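The whitelisting idea could be as simple as the planner refusing to engage autonomous mode unless every segment of the route has been explicitly approved. A toy sketch (the segment names and function are hypothetical, purely for illustration):

```python
# Hypothetical sketch of the "whitelist" approach: autonomous mode is
# only offered when every road segment on the route has been approved
# (e.g. major highways that were mapped and validated in advance).

APPROVED_SEGMENTS = {"I-280", "US-101", "I-880"}  # assumed identifiers

def can_drive_autonomously(route):
    """Return True only if every segment of the route is approved."""
    return all(segment in APPROVED_SEGMENTS for segment in route)

print(can_drive_autonomously(["I-280", "US-101"]))   # True: highways only
print(can_drive_autonomously(["I-280", "Main St"]))  # False: surface street
```

A blacklist would be the same check inverted: reject the route if any segment (a known work zone, a too-narrow street) appears on the banned list.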
Some of the equipment for Google’s car is still expensive:
But Google’s lidar is far more complex, consisting of 64 infrared lasers that spin inside a housing atop the car to take measurements in all horizontal directions. (Lidar systems like this are also very expensive — about $70,000 a unit — so cost and complexity will have to come down before they can be widely used.)
The lidar is the most expensive component, and several companies have promised to bring much cheaper lidars to market over the next couple of years. I expect the typical drastic reductions in cost as they start being produced in large numbers.
I’m curious about how they intend to solve problems like: navigating in the presence of work zones and human flaggers, navigating two-way streets too narrow for two cars to pass, 5/6/7-way intersections with specific lanes restricted in which way they can turn, etc.
These are all solved problems or nearly so, as far as I can tell. Robot drivers are safer than humans in most cases.
These are all solved problems or nearly so, as far as I can tell.
Details / citations? I conjecture that they are not, based on my beliefs about the quality of demos, the limited circumstances in which Google’s vehicles have been demonstrated, and my perception of the difficulty of solving the AI problems involved.
From the little I know, it looks like Google cars are at least able to tell when they need a human to take over (and stop when said human doesn’t). One could also imagine a semi-automatic mode where the driver helps the car in unusual situations.
I for one would be very happy to have a car that would drive itself most of the way, calling for my attention only once in a while. Solving only the common case (highway and traffic jam) is already very useful. Now I can read in my car!
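The “ask the human, and stop if they don’t respond” behavior described above amounts to a small fallback policy. A toy model (class and parameter names are my own invention, not anything Google has published):

```python
class Autopilot:
    """Toy model of a takeover request with a safe-stop fallback:
    when the car hits an unusual situation it asks the human to take
    over, and if no response arrives within the timeout it stops."""

    def __init__(self, takeover_timeout=10.0):
        self.timeout = takeover_timeout  # seconds the human has to respond
        self.mode = "autonomous"

    def on_unusual_situation(self, human_responds_after=None):
        """Request a takeover; fall back to a safe stop on no response."""
        self.mode = "requesting_takeover"
        if human_responds_after is not None and human_responds_after <= self.timeout:
            self.mode = "manual"
        else:
            self.mode = "stopped"  # pull over / come to a safe stop
        return self.mode

ap = Autopilot()
print(ap.on_unusual_situation(human_responds_after=3.0))   # manual
print(ap.on_unusual_situation(human_responds_after=None))  # stopped
```

The interesting design question is what “stopped” means in practice: stopping in a live lane is itself unsafe, so the fallback presumably has to include pulling over.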
From the little I know, it looks like Google cars are at least able to tell when they need a human to take over (and stop when said human doesn’t).
Okay, this makes perfect sense, thanks.
It does feed into another conjecture I heard (I forget where), that most deaths from robot cars are going to happen when they unexpectedly hand control to a human, who needs time to transition into the right mental context to drive.
Yep. It already happens with airliners, where pilots make errors specifically because they are too used to automation and fall prey to boredom-induced sleepiness.
If there are sufficiently few special cases, we could have the car stop before it switches to manual mode, so that the human takes control at leisure, and not in the midst of motion.
Or, switch to a semi-manual mode, where the safeties are still on. Even when lost, the car can still see nearby obstacles, sense whether the tires stick to the road, etc.
We could also keep a fully manual (and unsafe!) mode, activated only by a pull on a lever followed by a push on a red button beneath the lever. Hopefully that should eliminate most “context switching” errors. (And the human could be blamed, the car insurance might not apply, etc., making you think thrice before you actually switch.)
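The scheme in the last few comments is essentially a small state machine: stop before a normal handoff, keep safeties on in semi-manual mode, and require a deliberate two-step action before going fully manual. A sketch, with all names and states invented for illustration:

```python
class ControlModes:
    """Toy state machine for the handoff scheme discussed above:
    stop before a normal handoff, keep safeties on in semi-manual
    mode, and require two deliberate actions (lever, then red
    button) before fully manual mode with safeties off."""

    def __init__(self):
        self.mode = "autonomous"
        self.lever_pulled = False

    def request_manual(self, vehicle_moving):
        # Normal path: come to a stop first, then hand over at leisure.
        if vehicle_moving:
            self.mode = "stopping"
        else:
            self.mode = "semi_manual"  # human drives, safeties stay on

    def pull_lever(self):
        self.lever_pulled = True

    def press_red_button(self):
        # Fully manual requires both deliberate actions in sequence,
        # which should rule out accidental context switches.
        if self.lever_pulled:
            self.mode = "fully_manual"

car = ControlModes()
car.request_manual(vehicle_moving=False)
print(car.mode)          # semi_manual
car.press_red_button()   # ignored: lever not pulled yet
print(car.mode)          # semi_manual
car.pull_lever()
car.press_red_button()
print(car.mode)          # fully_manual
```

The two-step activation is the same idea as guarded switches on dangerous equipment: a single accidental input can never disable the safeties.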