Still not on board with the value of this. Why would you expect an AGI that does no harm (as far as you know) in an unpopulated and unobserved portion of the universe to also do no harm on Earth (where you keep your stuff and get the vast majority of your utility, but which is a radically different context, with nearly unrelated terms in your utility function)?
The hypothetical stipulated that we know it does no harm there, and the approach I proposed exploited that.
The main benefit is that the AI is not at risk of killing you there. In the left half of the universe, it is at risk of killing you.