An interesting thought-experiment. But I don’t follow this part:
So in theory we could hand it off to human philosophers or some other human-based procedure, thus dealing with “complexity of value” without much risk.
The complexity of value has to do with how the border delineating good outcomes from all possible outcomes cannot be specified in a compact way. Granted, the space of possible polygon arrangements is smaller than the space of possible atom arrangements. That does make the space of possible outcomes relatively more manageable in your VR world. But the space of outcomes is still Vast. It seems Vast enough that the border separating good from bad is still complex beyond our capacity to specify.
It’s certainly vast. People can’t write the best possible immersive videogame, but they can write one that would be pretty good for most people, and use that as a formal goal for UFAI. My idea isn’t any more ambitious than that; it certainly wastes tons of potential utility. I’m just trying to make something that’s better than an apocalypse.