I’d first like to see the problem of safe navigation on real roads solved before we move into 3D space.
Not to mention, even if you somehow make a 100% reliable guiding mechanism, it requires drivers to completely trust the automatic controls. Are they going to do that? And the consequences of deliberate abuse would be far worse than even those with real cars.
Google’s Eric Schmidt thinks that “it’s a bug that cars were invented before computers.” It’s an interesting viewpoint, given Google’s largely successful experiments with automated driving.
Maybe. Relevant part of the article: 1,000 miles were driven without human intervention; 140,000 with occasional human intervention. I’d love to know more detail on what prompted people to intervene and when they did it but I’m surprised at even that amount of trust in a technology at its level.
Not to mention, even if you somehow make a 100% reliable guiding mechanism, it requires drivers to completely trust the automatic controls. Are they going to do that?
Allow me to rephrase this.
Not to mention, even if you somehow made a 100% reliable plane autopilot that can even land the plane safely, it requires the pilot and co-pilot to trust said autopilot. Are they really going to do that?
For one, airplane pilots are generally far more qualified than car drivers.
For two, civilian airplane pilots don’t usually have to deal with other planes entering their space and having to execute demanding maneuvers in real time, thanks to strict airspace regulations.
It could well be safer, assuming appropriate navigation technology. Spreads out the congestion.