Rather sad to see Chalmers embracing the dopey “singularity” terminology.
He seems to have toned down his ideas about development under conditions of isolation:
“Confining a superintelligence to a virtual world is almost certainly impossible: if it wants to escape, it almost certainly will.”
Still, the ideas he expresses here are not very realistic, IMO. People want machine intelligence to help them attain their goals, and machines can't do that if they are isolated away in virtual worlds. Sure, there will be test harnesses, but of course we won't keep these things permanently restrained out of sheer paranoia; that would stop us from using them.
We can't test for correct values, since we don't know what they are. A negative test might be possible ("this thing surely has wrong values") as a precaution, but not a positive one.
53 pages with only 2 mentions of zombies—yay.
Testing often fails to catch whole classes of flaw; see, e.g.:
http://en.wikipedia.org/wiki/Unit_testing#Unit_testing_limitations
It is very useful nonetheless.
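A minimal sketch of that limitation, using a hypothetical Python helper of my own invention: the unit tests below all pass, yet a whole class of inputs the author never thought to test still misbehaves.

```python
# Hypothetical helper written with positive inputs in mind.
# The intent was truncation toward zero, but // is floor division.
def halve(n):
    return n // 2

# Unit tests covering the cases the author considered: all pass.
assert halve(4) == 2
assert halve(5) == 2
assert halve(0) == 0

# An untested class of input exposes the flaw: floor division rounds
# toward negative infinity, so halve(-5) gives -3, not the -2 that
# truncation toward zero would produce.
print(halve(-5))  # -3
```

The tests are not useless: they pin down the behavior on the inputs they cover. They just can't vouch for the inputs nobody wrote a test for, which is the point being made above.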