# DanielFilan

Karma: 4,524
• This is the (long-term, but no longer) lurker friend of whom Solvent speaks! I would like to have enough karma to post these myself, so if people could upvote this comment, that would be useful.

ETA: have learnt more about the mechanics of the site, now realise that this is not necessary. Thanks to whoever did upvote me though!

• Hi! My name is Daniel. I’m an undergraduate student, currently studying physics and mathematics at the Australian National University. I discovered Less Wrong about two years ago, and I’ve been regularly lurking ever since. I’m starting a meetup in Canberra—see http://lesswrong.com/meetups/wc. I hope that I see some of you there!

# Meetup : Second Canberra Meetup—Paranoid Debating

19 Feb 2014 4:00 UTC
2 points

# Meetup : Canberra: Meta-meetup + meditation

7 Mar 2014 1:04 UTC
4 points

# Meetup : Canberra Meetup: Life hacks part 1

31 Mar 2014 7:28 UTC
1 point

# Meetup : Canberra: Life Hacks Part 2

14 Apr 2014 1:11 UTC
1 point

# Meetup : Canberra: Rationalist Fun and Games!

1 May 2014 12:44 UTC
1 point
• I’m not sure that the proof can be summarised in a comment, but the theorem can:

Suppose you are an agent that knows that you are living in an Everettian universe. You have a choice between unitary transformations (the only type of evolution that the world is allowed to undergo in MWI), which will in general cause your ‘world’ to split and give you various rewards or punishments in the various resulting branches. Your preferences between unitary transformations satisfy a few constraints:

• Some technical ones about which unitary transformations are available.

• Your preferences should be a total ordering on the set of the available unitary transformations.

• If you currently have unitary transformation U available, and after performing U you will have unitary transformations V and V’ available, and you know that you will later prefer V to V’, then you should currently prefer (U and then V) to (U and then V’).

• If there are two microstates that give rise to the same macrostate, you don’t care about which one you end up in.

• You don’t care about branching in and of itself: if I offer to flip a quantum coin and give you reward R whether it lands heads or tails, you should be indifferent between me doing that and just giving you reward R.

• You only care about which state the universe ends up in.

• If you prefer U to V, then changing U and V by some sufficiently small amount does not change this preference.

Then, you act exactly as if you have a utility function on the set of rewards, and you are evaluating each unitary transformation based on the weighted sum of the utility of the reward you get in each resulting branch, where you weight by the Born ‘probability’ of each branch.
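The theorem's conclusion can be sketched numerically. In this illustrative snippet (the amplitudes, rewards, and utility function are my own toy choices, not from the paper), a state is a list of (amplitude, reward) branches, and the agent values it by the Born-weighted sum of branch utilities:

```python
# Born-weighted expected utility of a superposition sum_i a_i |reward_i>.
# Amplitudes, rewards, and the utility function are illustrative only.

def born_expected_utility(branches, utility):
    """branches: list of (amplitude, reward) pairs for a normalised state."""
    return sum(abs(a) ** 2 * utility(r) for a, r in branches)

# Indifference to branching: a quantum coin flip that pays reward R on both
# branches should be valued the same as just receiving R outright.
u = lambda dollars: dollars  # linear utility, purely for illustration
coin = [(2 ** -0.5, 10), (2 ** -0.5, 10)]
assert abs(born_expected_utility(coin, u) - u(10)) < 1e-9
```

The final assertion is exactly the "you don't care about branching in and of itself" constraint from the list above, which Born weighting satisfies automatically.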

• Equations 13, 14 and 15 introduce notation that isn’t used in the axioms, so they don’t really constitute an assumption that maximising Born-expected utility is the only rational strategy.

Your second paragraph has a subtle problem: the argument of u is which reward you get, but the argument of p might have to do with the coefficients of the branches in superposition.

To illustrate, suppose that I only care about getting Born-expected dollars. Then, letting $\psi_n$ denote the world where I get $n, my preference ordering includes

$\frac{1}{\sqrt{2}} \psi_0 + \frac{1}{\sqrt{2}} \psi_3 \prec \frac{1}{\sqrt{2}} \psi_0 + \frac{1}{\sqrt{2}} \psi_4$

and

$\frac{1}{\sqrt{3}} \psi_0 + \sqrt{\frac{2}{3}} \psi_3 \sim \frac{1}{\sqrt{2}} \psi_0 + \frac{1}{\sqrt{2}} \psi_4$

You might wonder if my preferences could be represented as maximising utility with respect to uniform branch weights: you don’t care at all about branches with Born weight zero, but you care equally about all branches with non-zero coefficient, regardless of what that coefficient is. Then, if the new utility function is $U'$, we require

$U'(\$4) > U'(\$3)$

and

$\frac{1}{2} U'(\$0) + \frac{1}{2} U'(\$3) = \frac{1}{2} U'(\$0) + \frac{1}{2} U'(\$4)$

However, this is a contradiction, so my preferences cannot be represented in this way.
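The Born-weight arithmetic behind this example can be checked directly. The code below (illustrative only, with linear utility in dollars) evaluates the three superpositions from the example:

```python
# Born-expected dollars for the superpositions in the example above.
# Utility is taken to be linear in dollars, purely for illustration.

def born_value(branches):
    """branches: list of (amplitude, dollar reward) pairs."""
    return sum(abs(a) ** 2 * r for a, r in branches)

A = [(2 ** -0.5, 0), (2 ** -0.5, 3)]       # (1/sqrt2) psi_0 + (1/sqrt2) psi_3
B = [(2 ** -0.5, 0), (2 ** -0.5, 4)]       # (1/sqrt2) psi_0 + (1/sqrt2) psi_4
C = [(3 ** -0.5, 0), ((2 / 3) ** 0.5, 3)]  # (1/sqrt3) psi_0 + sqrt(2/3) psi_3

assert born_value(A) < born_value(B)              # strict preference A < B
assert abs(born_value(C) - born_value(B)) < 1e-9  # indifference C ~ B

# Under uniform branch weights (1/2 each), the first relation would force
# U'($3) < U'($4), while the second would force U'($3) = U'($4): no such
# utility function of rewards alone exists.
```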

• They are used in the last theorem.

I agree that the notation that they introduce is used in the last two theorems (the Utility Lemma and the Born Rule Theorem), but I don’t see where in the proof that they assume that you should maximise Born-expected utility. If you could point out which step you think does this, that would help me understand your comment better.

I think this violates indifference to microstate/branching.

I agree. This is actually part of the point: you can’t just maximise utility with respect to any old probability function you want to define on superpositions, you have to use the Born rule to avoid violating diachronic consistency or indifference to branching or any of the others.

• It is used to define the expected utility in the statement of these two theorems, eq. 27 and 30.

Yes. The point of those theorems is to prove that if your preferences are ‘nice’, then you are maximising Born-expected utility. This is why Born-expected utility appears in the statement of the theorems. They do not assume that a rational agent maximises Born-expected utility, they prove it.

The issue is that the agent needs a decision rule that, given a quantum state, computes an action, and this decision rule must be consistent with the agent’s preference ordering over observable macrostates (which has to obey the constraints specified in the paper).

Yes. My point is that maximising Born-expected utility is the only way to do this. This is what the paper shows. The power of this theorem is that other decision algorithms don’t obey the constraints specified in the paper.

If the decision rule has to have the form of expected utility maximization, then we have two functions which are multiplied together, which gives us some wiggle room between them.

No: the functions are of two different arguments. Utility (at least in this paper) is a function of what reward you get, whereas the probability will be a function of the amplitude of the branch. You can represent the strategy of maximising Born-expected utility as the strategy of maximising some other function with respect to some other set of probabilities, but that other function will not be a function of the rewards.

Even if it turns out that it is, the result would be interesting but not particularly impressive, since macrostates are defined in terms of projections, which naturally induces an L2 weighting. But defining macrostates this way makes sense precisely because there is the Born rule.

A macrostate here is defined in terms of a subspace of the whole Hilbert space, which of course involves an associated projection operator. That being said, I can’t think of a reason why this doesn’t make sense if you don’t assume the Born rule. Could you elaborate on this?

# [LINK] Scott Aaronson on Integrated Information Theory

22 May 2014 8:40 UTC
38 points

# Meetup : Canberra: Decision Theory

26 May 2014 14:44 UTC
2 points

# Meetup : Canberra: Many Worlds + Paranoid Debating

17 Jun 2014 13:44 UTC
2 points