The Peerless

Link post

In the post linked above, I propose a plan for addressing risks from superintelligence.

The core idea is a three-step plan for buying us more time to figure out alignment:

  1. Scanning human brains, or extracting them from simulations of Earth (perhaps found in the universal distribution).

  2. Creating a safe, deterministic virtual environment in which they have all the time they need either to figure out alignment or to design a better virtual environment in which to do so.

  3. Giving an AGI the following formalized goal: “implement whatever goal is determined by this deterministic computation”.

The deterministic aspect is key: if the simulation is deterministic, the AGI should want to run it with full accuracy rather than meddle with what we do inside it.
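To make the shape of step 3 concrete, here is a minimal Python sketch (my own illustration, not from the linked post; `deliberation` and `agi_goal` are placeholder names) of a goal defined as the output of a deterministic computation:

```python
# Illustrative sketch only: the real deliberation would be a vast deterministic
# simulation of the uploaded researchers, not a toy function.

def deliberation() -> str:
    """Stand-in for the deterministic simulation in step 2.

    It takes no inputs and uses no randomness, so its output is fully
    determined by the program itself.
    """
    return "goal specification produced by the simulated researchers"


def agi_goal() -> str:
    """The goal handed to the AGI in step 3: implement whatever goal is
    determined by this deterministic computation."""
    return deliberation()


if __name__ == "__main__":
    # Re-running the computation always yields the same answer; that fixedness
    # is what removes the AGI's incentive to meddle with the simulation.
    assert agi_goal() == agi_goal()
    print(agi_goal())
```

The point is only that the output is a fixed mathematical fact about the program: the AGI cannot change what the deliberation concludes, only discover it.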

Every part of this plan has complications and open questions to investigate, but given the current state of alignment research and AI risk, I think it is a reasonable plan to start working on.

See the linked post for full details.