Self-driving cars are already inspiring discussion of AI ethics in mainstream media.
Driving is something that most people in the developed world feel familiar with — even if they don’t themselves drive a car or truck, they interact with people who do. They are aware of the consequences of collisions, traffic jams, road rage, trucker or cabdriver strikes, and other failures of cooperation on the road. The kinds of moral judgments involved in driving are familiar to most people — in a way that (say) operating a factory or manipulating a stock market is not.
I don’t mean to imply that most people make good moral judgments about driving — or that they will reach conclusions about self-driving cars that an AI-aware consequentialist would agree with. But they will feel entitled to have opinions on the issue, rather than writing it off as something for programmers or lawyers to figure out. And some of those people will actually become more aware of the issue than they otherwise (i.e. in the absence of self-driving cars) would have.
So yeah, people will become more and more aware of AI ethics. It’s already happening.
Self-driving cars will also inevitably catalyze discussion of the economic morality of AI deployment. Or rather, self-driving trucks will, as they put millions of truck drivers out of work over the course of five to ten years — long-distance truckers first, followed by delivery drivers. As soon as self-driving retrofits for existing trucks become available, it would be economic idiocy for any given trucking firm not to adopt them as quickly as possible. Robots don’t sleep or take breaks.
So, who benefits? The owners of the trucking firm and the folks who make the robots. And, of course, everyone whose goods are now being shipped twice as fast because robots don’t sleep or take breaks. (The AI does not love you, nor does it hate you, but you have a job that it can do better than you can.)
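To put a rough number on that “twice as fast”: a quick back-of-envelope sketch, assuming the US hours-of-service rule that caps a solo driver at 11 hours of driving per day and granting the robot about 22 hours a day of drive time. All the specific figures below are my own illustrative assumptions, not numbers from this thread.

```python
# Back-of-envelope: daily range of a human-driven vs. self-driving truck.
# Assumed figures (illustrative only):
#   - 11 h/day driving limit for human drivers (US hours-of-service rule)
#   - 22 h/day for an autonomous truck (2 h/day lost to fueling, loading)
#   - 55 mph average highway speed for both

human_hours_per_day = 11
robot_hours_per_day = 22
avg_speed_mph = 55

human_miles_per_day = human_hours_per_day * avg_speed_mph  # 605 miles
robot_miles_per_day = robot_hours_per_day * avg_speed_mph  # 1,210 miles

print(robot_miles_per_day / human_miles_per_day)  # 2.0 -> "twice as fast"
```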
As this level of AI — not AGI, but application-specific AI — replaces more and more skilled labor, faster and faster, it will become increasingly impractical for the displaced workers to retrain into the fewer and fewer remaining jobs.
This is also a moral problem of AI …
Whether we should do otherwise-obviously-suboptimal things solely because it’d result in more jobs is a question that long predates self-driving cars...
Well, I want to end up in the future where humans don’t have to labor to survive, so I’m all for automating more and more jobs away. But in order to end up in that future, the benefits of automation have to also accrue to the displaced workers. Otherwise you end up with a shrinking productive class, a teeny-tiny owner class, and a rapidly growing unemployable class — who literally can’t learn a new trade fast enough to work at it before it is automated away by accelerating AI deployment.
As far as I can tell, the only serious proposal that might make the transition from the “most adult humans work at jobs to make a living” present to the “robots do most of the work and humans do what they like” future — without the sort of mass die-off of the lower class that someone out there probably fantasizes about — is something like Friedman’s basic income / negative income tax proposal. If you want to end up in a future where humans can screw off all day because the robots have the work covered, you have to let some humans screw off all day. May as well be the displaced workers.
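For concreteness, here is a minimal sketch of the mechanics of a Friedman-style negative income tax. The 50% phase-out rate and $30,000 break-even point are made-up parameters for illustration; they are not Friedman’s numbers or anyone’s actual policy proposal.

```python
def negative_income_tax(earned: float,
                        breakeven: float = 30_000.0,
                        rate: float = 0.5) -> float:
    """Net transfer to the taxpayer (negative means they pay tax).

    Below the break-even income the government pays out `rate` times the
    shortfall; in this simplified model, income above break-even is taxed
    at the same flat rate. Earning zero yields rate * breakeven, which
    acts as a guaranteed income floor (the basic-income analogy).
    """
    return rate * (breakeven - earned)

for earned in (0, 10_000, 30_000, 60_000):
    transfer = negative_income_tax(earned)
    print(f"earned {earned:>6,} -> transfer {transfer:>+9,.0f}, "
          f"total {earned + transfer:>7,.0f}")
```

The property worth noticing is that total income rises monotonically with earned income, so unlike a hard welfare cutoff, the floor never makes working a losing proposition.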
I agree. (Yvain wrote about that in more detail here, and a follow-up here.)
I’d prefer something like Georgism to negative income tax, but the former has less chance of actually being implemented any time soon.
It long predates Milton Friedman, too.