Yes—I’ve seen him talk a couple of times, and everyone still loves to hear him, but he’s no longer influential.
I also recently saw Rodney Brooks giving the standard “rapture of the nerds” answer to a singularity question. Brooks is influential, I think, so maybe a good target.
To help XiXiDu’s task, we should put together a list of useful targets.
That would be great. I don’t know of many AGI researchers. I am not going to ask Hugo De Garis, we know what Ben Goertzel thinks, and there is already an interview with Peter Voss that I will have to watch first.
More on Ben Goertzel:
He recently wrote ‘Why an Intelligence Explosion is Probable’, but with the caveat (see the comments):
Look—what will prevent the first human-level AGIs from self-modifying in a way that will massively increase their intelligence is a very simple thing: they won’t be smart enough to do that!
Every actual AGI researcher I know can see that. The only people I know who think that an early-stage, toddler-level AGI has a meaningful chance of somehow self-modifying its way up to massive superhuman intelligence—are people associated with SIAI.
But I have never heard any remotely convincing arguments in favor of this odd, outlier view of the easiness of hard takeoff!!!
Jurgen Schmidhuber is one possibility.
Thanks, emailed him.
I watched it; check 9:00 (first video) for the answer on friendly AI. He seems to agree with Ben Goertzel?
ETA: More here.
We have Brooks’ answer to many of these questions here—at 17:20.
Essentially, I think Brooks is wrong: robots are highly likely to take over. He only addresses the “standard scenario” of a Hollywood-style hostile robot takeover.
One big possibility he fails to address is a cooperative machine takeover, with the humans and the machines on the same side.
I agree with Brooks that consumer pressure will mostly create “good” robots in the short term. Consumer-related forces will drive the extraction of human preferences into machine-readable formats, much as we are seeing privacy-related preferences being addressed by companies today. Brooks doesn’t really look into longer-term scenarios where the forces applied by human consumers are relatively puny, though. There’s eventually going to be a bit of a difference between a good company and a company that is pretending to be good for PR reasons.
I agree with Brooks that a major accident is relatively unlikely. Brooks gives a feeble reason for thinking that, though—comparing an accident with a “lone guy” building a 747. That is indeed unlikely—but it is surely only one of the possible accident scenarios.
Brooks is a robot guy. Those folk are not going to build intelligent machines first. They are typically too wedded to systems with slow build-test cycles. So Brooks may be in a muddle about all this, but that doesn’t seem too important: it isn’t really his area.