To help with XiXiDu’s task, we should put together a list of useful targets.
That would be great. I don’t know of many AGI researchers. I am not going to ask Hugo de Garis, we know what Ben Goertzel thinks, and there is already an interview with Peter Voss that I will have to watch first.
More on Ben Goertzel:
He recently wrote ‘Why an Intelligence Explosion is Probable’, but with the caveat (see the comments):
Look—what will prevent the first human-level AGIs from self-modifying in a way that will massively increase their intelligence is a very simple thing: they won’t be smart enough to do that!
Every actual AGI researcher I know can see that. The only people I know who think that an early-stage, toddler-level AGI has a meaningful chance of somehow self-modifying its way up to massive superhuman intelligence—are people associated with SIAI.
But I have never heard any remotely convincing arguments in favor of this odd, outlier view of the easiness of hard takeoff!!!
Jürgen Schmidhuber is one possibility.
Thanks, emailed him.
I watched it; check 9:00 in the first video for his answer on friendly AI. He seems to agree with Ben Goertzel?
ETA: More here.