If I’m going to reduce that far, I’d probably go one level further and drop the reference to human- or superhuman-level AI altogether… for example: “We’re building systems today that automatically implement their own goals. Often they are so complex, or operate so quickly, that no human can monitor them effectively. Over time those systems will become more complex, faster, and even harder for humans to monitor. Therefore, if we want to ensure that their output is good for us, we need to ensure that their goals, once implemented, are good for us.”
Of course, this completely loses the upside half of SI’s argument, where superhuman FAIs create a utopian post-scarcity death-free ultra-awesome environment. This might be an advantage for an elevator pitch.