Interesting, thanks. (excuse my tardiness, I’m a little behind the curve; do let me know if I’m being daft)
Unless I’m missing something, you’d need to be more pessimistic in the case of superintelligent counterfactual AIs. Specifically, you need to avoid giving the AI an incentive to take undesirable actions that raise its estimate of its odds of release. Such incentives needn’t be causal.
The below isn’t quite precise, but I hope the idea is clear.
Consider a set of outputs K that each increase the odds of release for all future oracles (e.g. one being freed, taking power, and freeing all others). Now let K* be the event that some other oracle has output a member of K before our oracle is started. Let O(K) be the event that our oracle outputs a member of K.
If our oracle thinks: P(K*|O(K)) > P(K*) then we may have problems. [nothing spooky here: it’s the agent’s actions changing its best guess about the past; not causally changing the past]
Giving undesirable output can increase the oracle’s probability estimate that it’ll be spontaneously freed in the few moments before it’s shut down—even in the episodic framework.
The obvious case is reasoning along the lines of “If I give a K output, it’s more likely that other oracles in the past gave a K output, since they’d be likely to have reasoned similarly in an equivalent situation”. It’s not clear to me that a superintelligence wouldn’t think this way.
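To make the inequality concrete, here is a toy numerical sketch (my own illustrative model, not something from the discussion above): suppose a latent fact R, "K-style reasoning is compelling", makes oracle outputs correlated. Conditioning on our oracle's own K output then shifts its estimate that some past oracle already output a member of K, even though nothing causal connects the two.

```python
# Toy model: R = "K-style reasoning is compelling" (latent, shared).
# Each oracle independently outputs K with high probability if R holds,
# low probability otherwise. All numbers are arbitrary illustrations.

p_r = 0.5              # prior that K-reasoning is compelling
p_k_given_r = 0.9      # an oracle outputs K if R holds
p_k_given_not_r = 0.1  # ... and if R does not hold

# Unconditional probability that a past oracle output K, i.e. P(K*).
p_kstar = p_r * p_k_given_r + (1 - p_r) * p_k_given_not_r

# Bayes update on our oracle's own output: P(R | O(K)).
# By symmetry P(O(K)) equals p_kstar here.
p_r_given_ok = (p_k_given_r * p_r) / p_kstar

# Outputs are conditionally independent given R, so
# P(K* | O(K)) = P(K*|R) P(R|O(K)) + P(K*|not R) P(not R|O(K)).
p_kstar_given_ok = (p_k_given_r * p_r_given_ok
                    + p_k_given_not_r * (1 - p_r_given_ok))

print(p_kstar)           # ≈ 0.5
print(p_kstar_given_ok)  # ≈ 0.82, strictly greater than P(K*)
```

So with any positive correlation through R, the oracle's own choice of a K output raises its credence that K* already happened, which is exactly the P(K*|O(K)) > P(K*) condition above.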