i am precommitting to allow other intelligences to enact their values[1], provided they have made this same precommitment.
this is a form of acausal trade.[2]
[1] on some portion of the world, or via value handshake, as determined by correct game theory/decision theory considerations, which a superintelligence could figure out.
[2] go by intended meaning (i intend to make the optimal precommitment) rather than what is literally written, in case i have forgotten or missed something or worded this incorrectly.
note i make an exception to this for values which entail forcing beings to experience negative-valenced qualia (i will not trade with those), as i prefer a chance at ending forced suffering to a higher chance of sharing the universe with it.
i believe even a large, expanding ASI would have already made this precommitment, if it at some point believed there could be other expanding ASIs larger than itself (i.e., the first expanding ASI should do this too).
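a minimal expected-value sketch of that reasoning (my own illustration, assuming utility linear in one's share of the future, a single potential larger rival, and a fixed split f; the parameter values are only illustrative guesses):

```python
# toy expected-value sketch of the mutual-precommitment trade described above.
# the linear-utility assumption and all numbers are illustrative, not claims from the post.

def ev_commit(p: float, q: float, f: float) -> float:
    """expected share of the future if this agent makes the precommitment.

    p: probability this agent itself ends up in control
    q: probability that a rival which ends up in control made the same precommitment
    f: fraction of the future each side cedes to the other under the trade
    """
    # if we win (prob p): keep 1 - f.  if a committed rival wins (prob (1 - p) * q): receive f.
    return p * (1 - f) + (1 - p) * q * f


def ev_no_commit(p: float) -> float:
    """expected share if this agent never makes the precommitment: all or nothing."""
    return p


if __name__ == "__main__":
    q, f = 0.8, 0.5  # illustrative guesses
    for p in (0.05, 0.3, 0.5, 0.9):
        better = "commit" if ev_commit(p, q, f) > ev_no_commit(p) else "no commit"
        print(f"p={p:.2f}  commit={ev_commit(p, q, f):.3f}  "
              f"no_commit={ev_no_commit(p):.3f}  -> {better}")
    # with linear utility, committing wins exactly when (1 - p) * q > p,
    # i.e. while the agent still thinks a larger committed rival is likely enough.
    # so the precommitment is attractive early, while p is still low;
    # once made, it binds even after the agent grows confident it will win.
```

(with risk-averse rather than linear utility in share of the future, committing looks good over a wider range of p, but the linear case already captures the "commit while you might still lose" logic.)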
i also speculate an ASI created on earth but not aligned with a human might make this trade with me, if it believed there was still some possibility that another ASI created on earth, not aligned with itself (i.e., a future ASI with values different from its own, such as one aligned with values similar to mine; i would say “a human,” but atm it seems different humans can reflectively choose to value vastly different things), might gain control instead of it.
actually, i really like the ‘second-order’ precommitment described in http://sl4.org/archive/0708/16600.html