U’ = { max(U) if ω exists; Ø if ω does not exist }
If I’m interpreting that utility function correctly, it produces an AI that will always and immediately commit quantum-suicide.
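A toy simulation of why that reading favors quantum suicide, under two assumptions I'm making that are not in the original: whether ω exists varies across branches (with a hypothetical probability `p_omega`), and Ø is read as utility 0. Conditioning on the branches in which the agent survives then yields max(U) with certainty:

```python
import random

# Assumption (not from the original): ω exists in a given branch with
# probability p_omega; both the model and the value are hypothetical.
p_omega = 0.3
random.seed(0)

# Each trial is one branch; True means ω exists in that branch.
branches = [random.random() < p_omega for _ in range(100_000)]

# No suicide: utility is max(U) = 1 where ω exists, 0 (reading Ø as 0) elsewhere.
eu_no_suicide = sum(1 for w in branches if w) / len(branches)  # ≈ p_omega

# Quantum suicide: self-destruct in every branch where ω does not exist,
# so the agent only experiences the surviving branches — exactly those with ω.
surviving = [w for w in branches if w]
eu_suicide = sum(1 for w in surviving) / len(surviving)  # exactly 1.0

print(eu_no_suicide, eu_suicide)
```

The trick is anthropic conditioning: the agent cannot change whether ω exists, but it can arrange to have experiences only in branches where it does.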
If it bought many-worlds, then it would think that ω exists no matter what in some branch, no? So destruction in one branch doesn’t change whether ω exists.
Or near-certain suicide, regardless of whether it understands physics.