BTW, have you read The Hanson-Yudkowsky AI-Foom Debate? I just read through it again last night (after rereading your previous post). Robin Hanson is the economist (with two degrees in physics) who writes the Overcoming Bias blog linked in the sidebar, and the LessWrong sequences were first posted at OB. Hanson absolutely doesn’t buy the FOOM scenario either.
Yeah, Hanson’s newest article argues that you don’t need some weird ‘intelligence explosion’ to get a Kurzweilian singularity: you just need standard economic growth rates to continue. He seems to think the most likely scenario is takeover of the economy and everything else by ems and eventually “hardscrabble hell,” which might sound terrible to us but, what the hell: it’s just another intergenerational conflict. So if somebody is looking for solace in Hanson’s own particular doubts about intelligence explosion, I doubt they’ll find it. :)
Hanson’s newest article argues that you don’t need some weird ‘intelligence explosion’ to get a Kurzweilian singularity: you just need standard economic growth rates to continue.
He seems to think the most likely scenario is takeover of the economy and everything else by ems and eventually “hardscrabble hell,” which might sound terrible to us but, what the hell: it’s just another intergenerational conflict.
Brain emulations coming first, or mattering much, should be assigned a low probability by clued-in folks, IMO.
Natural selection on people does seem like a possible outcome. However, like the SIAI folk, I’m more inclined towards thinking that there will be unified rule and self-directed evolution—broadly along the lines that Pierre Teilhard de Chardin foresaw.
The good thing about natural selection directing things is that it might help to keep us from going off the rails—and eventually getting assimilated by aliens. At least a competitive universe won’t wirehead itself. Pure self-directed evolution by one dominant agent could lead to a big, fat—but ultimately screwed-up—future.
Hanson doesn’t think it likely that a small group will “take over the world”.
He does picture pretty rapid progress being caused by machine intelligence fairly soon.
It sounds a teensy bit like my own “The Intelligence Explosion Is Happening Now”.