# Wei_Dai comments on Less Wrong Q&A with Eliezer Yudkowsky: Video Answers

• In my posts, I’ve argued that indexical uncertainty like this shouldn’t be represented using probabilities. Instead, I suggest that you consider yourself to be all of the many copies of you, i.e., both the ones in the ancestor simulations and the one in 2010, making decisions for all of them. Depending on your preferences, you might consider the consequences of the decisions of the copy in 2010 to be the most important and far-reaching, and therefore act mostly as if that was the only copy.

• BTW, I agree with this.

• Coming back to this comment, it seems to be another example of UDT giving a technically correct but incomplete answer.

Imagine you have a device that will tell you, tomorrow at 12am, whether you are in a simulation or in the base layer. (It turns out that all simulations are required by multiverse law to have such devices.) There’s probably not much you can do before 12am tomorrow that can cause important and far-reaching consequences. But fortunately you also have another device that you can hook up to the first. The second device generates moments of pleasure or pain for the user. More precisely, it gives you X pleasure/pain if you turn out to be in a sim, and Y pleasure/pain if you are in the base layer (presumably X and Y have different signs). Depending on X and Y, how do you decide whether to turn the second device on?
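To make the puzzle concrete, here is the naive expected-utility framing of the choice, a minimal sketch assuming you assign a credence q to being in a sim (the very move the comment suggests UDT calls into question). The parameters q, X, and Y are hypothetical, not from the thread:

```python
# Naive expected-utility framing of the second-device choice. The comment
# argues this framing may be wrong under UDT; this just makes it explicit.
# q, x, y are hypothetical illustration parameters.

def turn_device_on(q: float, x: float, y: float) -> bool:
    """Return True iff switching the device on has positive naive expected utility.

    q: credence that you are in a simulation
    x: pleasure/pain delivered if you are in a sim
    y: pleasure/pain delivered if you are in the base layer
    """
    expected_utility = q * x + (1 - q) * y
    return expected_utility > 0

# Example: 90% credence in being simulated, mild sim pleasure vs. base-layer pain.
print(turn_device_on(0.9, 1.0, -5.0))  # 0.9*1 + 0.1*(-5) = 0.4 > 0 -> True
```

The question in the comment is precisely whether the single number q is a legitimate input here at all.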

• Have you pulled it all together anywhere? I’ve sometimes seen, and entertained, this Pascal’s-wager-like logic before (act as if your choices matter, because if they don’t...), but I’ve always been suspicious of it precisely because it looks too much like Pascal’s wager.

• I’ve thought about writing a post on the application of TDT/UDT to the Simulation Argument, but I couldn’t think of much to say. But to expand a bit on what I wrote in the grandparent: in the Simulation Argument, the decision of the original you interacts with the decisions of the simulations. If you make the wrong decision, your simulations might end up not existing at all, so it doesn’t make sense to put a probability on “being in a simulation”. (This is like the absent-minded driver problem, where your decision at the first exit determines whether you ever reach the second exit.)
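The absent-minded driver dynamic mentioned above can be sketched with the standard payoffs from Piccione and Rubinstein’s formulation (exit at the first intersection: 0, exit at the second: 4, continue past both: 1). Because the driver can’t tell the intersections apart, he commits to one exit probability p, and his behavior at the first exit determines whether the second exit is reached at all:

```python
# Absent-minded driver: a single commitment p governs both intersections,
# since the driver cannot distinguish them. Payoffs: exit first = 0,
# exit second = 4, continue past both = 1 (Piccione & Rubinstein).

def expected_payoff(p: float) -> float:
    """Planning-stage expected payoff for exit probability p."""
    return p * 0 + (1 - p) * p * 4 + (1 - p) ** 2 * 1

# Grid search for the optimal commitment; analytically p = 1/3, payoff 4/3.
best_p = max((i / 1000 for i in range(1001)), key=expected_payoff)
print(round(best_p, 3), round(expected_payoff(best_p), 3))  # 0.333 1.333
```

The structural point is the same as in the Simulation Argument case: the “earlier” decision controls whether the “later” decision point exists, so the situations can’t be cleanly separated by a probability over “where am I?”.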

I’m not sure I see what you mean by “Pascal’s wager-like logic”. Can you explain a bit more?

• A top-level post on the application of TDT/UDT to the Simulation Argument would be worthwhile even if it was just a paragraph or two long.

• A top level post telling me whether TDT and UDT are supposed to be identical or different (or whether they are the same but at different levels of development) would also be handy!

• I’ve thought about writing a post on the application of TDT/UDT to the Simulation Argument, but I couldn’t think of much to say.

I think that’s enough. I feel I understand the SA very well, but not TDT or UDT much at all; approaching the latter from the former might make things click for me.

I’m not sure I see what you mean by “Pascal’s wager-like logic”. Can you explain a bit more?

I mean that I read Pascal’s Wager as basically: ‘p implies x reward for believing in p, and ~p implies no reward (either positive or negative); thus, it is best to believe in p regardless of the evidence for p’. (Clumsy phrasing, I’m afraid.)

Your example sounds like that: ‘believing you-are-not-being-simulated implies x utility (motivation for one’s actions & efforts), and if ~you-are-not-being-simulated then your utility to the real world is just 0; so believe you-are-not-being-simulated.’ This seems to be a substitution of ‘not-being-simulated’ into the PW schema.
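The schema being described can be made explicit as a dominance argument, a minimal sketch with hypothetical parameters q (credence in p) and x (the reward for believing when p holds):

```python
# The commenter's reading of the Pascal's Wager schema: believing yields x
# if p is true and nothing otherwise, while not believing yields nothing
# either way. For any positive reward x and any credence q > 0, belief
# (weakly) dominates, regardless of the evidence for p.
# Illustrative only; q and x are hypothetical parameters.

def ev_believe(q: float, x: float) -> float:
    return q * x + (1 - q) * 0  # reward x only if p turns out true

def ev_not_believe(q: float) -> float:
    return 0.0  # no reward either way

for q in (0.5, 0.01, 0.000001):
    assert ev_believe(q, x=10.0) >= ev_not_believe(q)
print("believing weakly dominates for any q > 0")
```

This is exactly why the argument feels suspicious: the conclusion is insensitive to the evidence, since it holds for arbitrarily small q.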