We have Brooks' answer to many of these questions here, at 17:20.
Essentially, I think Brooks is wrong: robots are highly likely to take over. He only addresses the “standard scenario” of a Hollywood-style hostile robot takeover.
One big possibility he fails to address is a cooperative machine takeover, with the humans and the machines on the same side.
I agree with Brooks that consumer pressure will mostly create “good” robots in the short term. Consumer-related forces will drive the extraction of human preferences into machine-readable formats, much as we are seeing privacy-related preferences being addressed by companies today. Brooks doesn’t really look into later future scenarios where forces applied by human consumers are relatively puny, though. There’s eventually going to be a bit of a difference between a good company, and a company that is pretending to be good for PR reasons.
I agree with Brooks that a major accident is relatively unlikely. Brooks gives a feeble reason for thinking that, though, comparing an accident to a “lone guy” building a 747. That is indeed unlikely, but it is surely only one of the possible accident scenarios.
Brooks is a robot guy. Those folk are not going to build intelligent machines first: they are typically too wedded to systems with slow build-test cycles. So Brooks may be muddled about all this, but that doesn't seem too important: it isn't really his area.