David Chalmers has written up a paper based on the talk he gave at the 2009 Singularity Summit:

The Singularity: A Philosophical Analysis

From the blog post where he announced the paper:

“The main focus is the intelligence explosion that some think will happen when machines become more intelligent than humans. First, I try to clarify and analyze the argument for an intelligence explosion. Second, I discuss strategies for negotiating the singularity to maximize the chances of a good outcome. Third, I discuss issues regarding uploading human minds into computers, focusing on issues about consciousness and personal identity.”
Rather sad to see Chalmers embracing the dopey “singularity” terminology.
He seems to have toned down his ideas about development under conditions of isolation:
“Confining a superintelligence to a virtual world is almost certainly impossible: if it wants to escape, it almost certainly will.”
Still, the ideas he expresses here are not very realistic, IMO. People want machine intelligence to help them attain their goals, and machines can’t do that if they are isolated away in virtual worlds. Sure, there will be test harnesses—but of course we won’t keep these things permanently restrained out of sheer paranoia; that would stop us from using them.
53 pages with only 2 mentions of zombies—yay.
We can’t test for correct values—we don’t know what the right values are. A negative test might be possible (“this thing surely has wrong values”) as a precaution, but not a positive test.
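The shape of such a negative test can be sketched in code. The following is only a toy illustration—every name in it is hypothetical—but it shows the asymmetry: screening an agent’s proposals against a blacklist of clearly unacceptable outcomes can demonstrate that values are wrong, while a pass demonstrates nothing about them being right.

```python
# Toy "negative value test": we cannot verify that an agent's values are
# correct, but we can sometimes show that they are clearly wrong.
# All names here are hypothetical illustrations, not any real API.

CLEARLY_BAD_OUTCOMES = {"harm_humans", "seize_resources", "disable_oversight"}

def negative_value_test(proposed_actions):
    """Fail if the agent proposes a known-bad outcome.

    A pass is weak evidence only: it rules out these specific failures,
    not wrong values in general -- there is no corresponding positive test.
    """
    bad = CLEARLY_BAD_OUTCOMES & set(proposed_actions)
    return ("fail", bad) if bad else ("pass", set())
```

For example, `negative_value_test(["plant_crops", "seize_resources"])` fails on `"seize_resources"`, while `negative_value_test(["plant_crops"])` passes—without telling us anything about whether the agent’s values are actually right.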
Testing often doesn’t identify all possible classes of flaw—e.g. see:
http://en.wikipedia.org/wiki/Unit_testing#Unit_testing_limitations
It is still very useful, nonetheless.
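That limitation is easy to demonstrate with a toy example (the function and test values below are mine, chosen purely for illustration): a buggy leap-year check passes a plausible-looking unit-test suite, yet is wrong for an entire class of inputs the suite never exercises.

```python
def is_leap_year(year):
    # Buggy: ignores the century rule (1900 is NOT a leap year,
    # although 2000 is).
    return year % 4 == 0

# A plausible unit-test suite -- all of these pass:
assert is_leap_year(2020) is True
assert is_leap_year(2019) is False
assert is_leap_year(2024) is True

# ...yet a whole class of inputs (century years) is handled wrongly,
# and no test above could have caught it:
assert is_leap_year(1900) is True  # passes, but the correct answer is False
```

The tests only ever tell us about the inputs we thought to try; the flaw lives in a class of inputs we didn’t.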