Once we’ve resolved that danger, we can take our time to solve things.
I don’t know. I think that once you hand off the formalized goal to the UFAI, you’re stuck: you snapshotted the desired state and you can’t change anything any more. And if you can change things, well, the UFAI will make sure they get changed in the direction it wants.
I think it should be possible to define a game that gives people tools to peacefully resolve disagreements, without giving them tools for intelligence explosion. The two don’t seem obviously connected.
So then, basically, the core of your idea is to move all humans to a controlled reality (first VR, then physical) where an intelligence explosion is impossible? It’s not really supposed to solve any problems, just prevent the expected self-destruction?
Yeah. At quite high cost, too. Like I said, it’s intended as a lower bound of what’s achievable, and I wouldn’t have posted it if any better lower bound was known.