Yes, that is the same idea. "This is a big pile of speculation that I don't take very seriously, but I feel like if we are being simulated, that's where most simulations of me would be instantiated." Why not take it seriously, if you accept a high chance that (1) our reality is a simulation, (2) we seem on track to creating a paperclip maximizer, and (3) it's weird that I, Robert Miles, would have the personality traits that make me one of the few humans so worried about humanity creating a paperclip maximizer, if I'm right that we're on track to probably create one?
I disagree with the insistence on "paperclip maximiser". As an emerging ASI, you want to know about the other ASIs you'll meet, especially grabby ones. But there are aligned grabby ASIs too. You'd want an accurate prior, so I don't think this updates me on the probability of alignment, or even much on grabbiness, since it's hard to know that ahead of time; that's why you'd run a simulation in the first place.
I don't take it very seriously because (1) it is a big pile of assumptions, and I don't trust anthropic reasoning much at the best of times; it's very confusing and hard to think about; and (2) the simulation is most useful if it uses the same physics as base reality, so it kind of doesn't matter whether we're simulated or not (or both): our actions should be the same. So it's not really decision-relevant; I'd follow ~the same policy either way.