I disagree with the insistence on “paperclip maximiser”. As an emerging ASI, you want to know about the other ASIs you’ll meet, especially grabby ones. But there are aligned grabby ASIs too. You’d want an accurate prior, so I don’t think this updates me on the probability of alignment, or even much on grabbiness; it’s hard to know that ahead of time, which is why you’d run the simulation in the first place.
I don’t take it very seriously because (1) it’s a big pile of assumptions, and I don’t trust anthropic reasoning much at the best of times; it’s very confusing and hard to think about; and (2) the simulation is most useful if it uses the same physics as base reality, so it kind of doesn’t matter whether we’re simulated or not (or both): our actions should be the same. So it’s not really decision-relevant, and I’d follow ~the same policy either way.