Discussion: Which futures are good enough?

Thirty years from now, a well-meaning team of scientists in a basement creates a superintelligent AI with a carefully hand-coded utility function. Two days later, every human being on earth is seamlessly scanned, uploaded, and placed into a realistic simulation of their old life, such that no one is aware that anything has changed. Further, the AI has so much memory and processing power to spare that it gives every single living human being their own separate simulation.

Each person lives an extremely long and happy life in their simulation, making what they perceive to be meaningful accomplishments. For those who are interested in acquiring scientific knowledge and learning the nature of the universe, the simulation is accurate enough that everything they learn and discover is true of the real world. Every other pursuit, occupation, and pastime is equally fulfilling. People create great art, find love that lasts for centuries, and create worlds without want. Every single human being lives a genuinely excellent life, awesome in every way. (Unless you mind being simulated, in which case at least you’ll never know.)

I offer this particular scenario because it seems conceivable that, with no possibility of competition between people, the AI could avoid doing interpersonal utility comparisons, which might make Mostly Friendly AI (MFAI) easier. I don't think this is likely, or even worthy of serious consideration, but it might make some of the discussion questions easier to swallow.


1. Value is fragile. But is Eliezer right in thinking that if we get just one piece wrong the whole endeavor is worthless? (Edit: Thanks to Lukeprog for pointing out that this question completely misrepresents EY’s position. Error deliberately preserved for educational purposes.)

2. Is the above scenario better or worse than the destruction of all earth-originating intelligence? (This is the same as question 1.)

3. Are there other values (besides affecting-the-real-world) that you would be willing to trade off?

4. Are there other values that, if we traded them off, might make MFAI much easier?

5. If the answers to 3 and 4 overlap, how do we decide which direction to pursue?