Found the newest welcome thread, posted there instead.
DanielFilan
Hi! My name is Daniel. I’m an undergraduate student, currently studying physics and mathematics at the Australian National University. I discovered Less Wrong about two years ago, and I’ve been regularly lurking ever since. I’m starting a meetup in Canberra—see http://lesswrong.com/meetups/wc. I hope that I see some of you there!
Of likely interest to anyone coming: http://lesswrong.com/r/discussion/lw/k0p/less_wrong_australia_weekend_retreat/
I’m not so sure that this is actually true. It has been shown that, given a fairly minimal set of constraints that don’t mention probability, decision-makers in an MWI setting maximise expected utility, where the expectation is taken with respect to the Born rule: http://arxiv.org/abs/0906.2718
I’m not sure that the proof can be summarised in a comment, but the theorem can:
Suppose you are an agent that knows that you are living in an Everettian universe. You have a choice between unitary transformations (the only type of evolution that the world is allowed to undergo in MWI), which will in general cause your ‘world’ to split and give you various rewards or punishments in the various resulting branches. Your preferences between unitary transformations satisfy a few constraints:
Some technical ones about which unitary transformations are available.
Your preferences should be a total ordering on the set of the available unitary transformations.
If you currently have unitary transformation U available, and after performing U you will have unitary transformations V and V’ available, and you know that you will later prefer V to V’, then you should currently prefer (U and then V) to (U and then V’).
If there are two microstates that give rise to the same macrostate, you don’t care about which one you end up in.
You don’t care about branching in and of itself: if I offer to flip a quantum coin and give you reward R whether it lands heads or tails, you should be indifferent between me doing that and just giving you reward R.
You only care about which state the universe ends up in.
If you prefer U to V, then changing U and V by some sufficiently small amount does not change this preference.
Then, you act exactly as if you have a utility function on the set of rewards, and you are evaluating each unitary transformation based on the weighted sum of the utility of the reward you get in each resulting branch, where you weight by the Born ‘probability’ of each branch.
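In symbols (my own paraphrase, not the paper’s notation): if performing U leads to branches with amplitudes α_i in which you receive rewards r_i, and V leads to branches with amplitudes β_j and rewards s_j, the theorem says there is a utility function u on rewards such that

```latex
U \succeq V \iff \sum_i |\alpha_i|^2\, u(r_i) \;\ge\; \sum_j |\beta_j|^2\, u(s_j).
```

That is, your preference ordering over unitary transformations coincides with ordering them by Born-weighted expected utility.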
Equations 13, 14 and 15 introduce notation that isn’t used in the axioms, so they don’t really constitute an assumption that maximising Born-expected utility is the only rational strategy.
Your second paragraph has a subtle problem: the argument of u is which reward you get, but the argument of p might have to do with the coefficients of the branches in superposition.
To illustrate, suppose that I only care about getting Born-expected dollars. Then, letting |$n⟩ denote the world where I get $n, my preference ordering includes

|$4⟩ ≻ |$3⟩

and

√(1/3) |$0⟩ + √(2/3) |$3⟩ ∼ √(1/2) |$0⟩ + √(1/2) |$4⟩,

since both sides have a Born-expected value of $2. You might wonder if my preferences could be represented as maximising utility with respect to the uniform branch weights: you don’t care at all about branches with Born weight zero, but you care equally about all branches with non-zero coefficient, regardless of what that coefficient is. Then, if the new utility function is U′, we require

U′($4) > U′($3)

and

½ U′($0) + ½ U′($3) = ½ U′($0) + ½ U′($4), i.e. U′($3) = U′($4).

However, this is a contradiction, so my preferences cannot be represented in this way.
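Here is a quick numerical check of the kind of example in play (a sketch of my own; the amplitudes are chosen so that both lotteries have a Born-expected value of $2):

```python
import math

def born_value(state):
    """Expected dollars under the Born rule.
    `state` is a list of (amplitude, reward) pairs; the Born weight
    of a branch is the squared modulus of its amplitude."""
    return sum(abs(a) ** 2 * r for a, r in state)

# Two lotteries with the same Born-expected value ($2):
lottery_a = [(math.sqrt(1 / 3), 0), (math.sqrt(2 / 3), 3)]  # sqrt(1/3)|$0> + sqrt(2/3)|$3>
lottery_b = [(math.sqrt(1 / 2), 0), (math.sqrt(1 / 2), 4)]  # sqrt(1/2)|$0> + sqrt(1/2)|$4>

print(born_value(lottery_a), born_value(lottery_b))  # both ~2.0, so the agent is indifferent

# Under *uniform* branch weights the same two lotteries come apart:
uniform_a = sum(r for _, r in lottery_a) / len(lottery_a)  # 1.5
uniform_b = sum(r for _, r in lottery_b) / len(lottery_b)  # 2.0
print(uniform_a, uniform_b)
```

The Born-indifference between the two lotteries forces U′($3) = U′($4) under uniform weighting, which contradicts strictly preferring |$4⟩ to |$3⟩.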
They are used in the last theorem.
I agree that the notation they introduce is used in the last two theorems (the Utility Lemma and the Born Rule Theorem), but I don’t see where in the proof they assume that you should maximise Born-expected utility. If you could point out which step you think does this, that would help me understand your comment better.
I think this violates indifference to microstate/branching.
I agree. This is actually part of the point: you can’t just maximise utility with respect to any old probability function you want to define on superpositions, you have to use the Born rule to avoid violating diachronic consistency or indifference to branching or any of the others.
It is used to define the expected utility in the statement of these two theorems, eqs. 27 and 30.
Yes. The point of those theorems is to prove that if your preferences are ‘nice’, then you are maximising Born-expected utility. This is why Born-expected utility appears in the statement of the theorems. They do not assume that a rational agent maximises Born-expected utility, they prove it.
The issue is that the agent needs a decision rule that, given a quantum state, computes an action, and this decision rule must be consistent with the agent’s preference ordering over observable macrostates (which has to obey the constraints specified in the paper).
Yes. My point is that maximising Born-expected utility is the only way to do this. This is what the paper shows. The power of the theorem is that other decision algorithms don’t obey the constraints specified in the paper.
If the decision rule has to have the form of expected utility maximization, then we have two functions which are multiplied together, which gives us some wiggle room between them.
No: the functions are of two different arguments. Utility (at least in this paper) is a function of what reward you get, whereas the probability will be a function of the amplitude of the branch. You can represent the strategy of maximising Born-expected utility as the strategy of maximising some other function with respect to some other set of probabilities, but that other function will not be a function of the rewards.
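To spell the point out with a toy identity (my notation): Born-expected utility over N branches can always be rewritten as an expectation under, say, uniform probabilities,

```latex
\sum_{i=1}^{N} |\alpha_i|^2\, u(r_i) \;=\; \sum_{i=1}^{N} \frac{1}{N}\, v_i,
\qquad v_i := N\,|\alpha_i|^2\, u(r_i),
```

but the new ‘utility’ v_i depends on the amplitude α_i as well as the reward r_i, so it is not a utility function over rewards.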
Even if it turns out that it is, the result would be interesting but not particularly impressive, since macrostates are defined in terms of projections, which naturally induces an L2 weighting. But defining macrostates this way makes sense precisely because of the Born rule.
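For concreteness (my notation): if a macrostate M corresponds to a subspace of the Hilbert space with projector P_M, then the natural L2 weight of M in state |ψ⟩ is

```latex
w(M) \;=\; \big\| P_M |\psi\rangle \big\|^2 \;=\; \langle \psi | P_M | \psi \rangle,
```

which is exactly the Born weight of that macrostate. This is the sense in which the subspace definition ‘naturally induces’ the L2 weighting.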
A macrostate here is defined in terms of a subspace of the whole Hilbert space, which of course involves an associated projection operator. That being said, I can’t think of a reason why this doesn’t make sense if you don’t assume the Born rule. Could you elaborate on this?
By an amazing coincidence, Giulio Tononi, leading proponent of the Integrated Information Theory of consciousness, is giving a talk at ANU just before the meetup. It is at 4 pm in the Coombs building, seminar room A—you will want to arrive early, because Coombs is known for being rather maze-like. Full description of the talk is here: http://pastebin.com/dshkvZhi
Learning German atm via Duolingo + Anki, already speak Esperanto and am reasonably good at Japanese.
You might be interested in Remembering the Kanji, a guide to using mnemonics to systematically memorise the meaning of all the kanji. I found it helpful while reinforcing it with flashcards + going to high school Japanese class. Wikipedia page for Remembering the Kanji
I seem to have high karma, but don’t know why. Looking through my contribution history, I seem to only have a total of 47 net upvotes on anything I’ve ever posted, but have 74 karma points, including 10 in the last 30 days. Looking at the LW wiki FAQ, it says that you can get 10 karma per upvote if you post in main, but I haven’t done that. Does anyone know why this might be happening?
No, but I just realised that everything adds up if I assume that meetup posts also get 10 karma for every upvote. Given that this sort of makes sense but that I can’t find it mentioned anywhere, I’m not sure whether it’s a feature or a bug.
That doesn’t seem likely in my case, since the only non-meetup things I’ve posted before today have been about the MWI and Scott Aaronson’s take on integrated information theory.
For the last few weeks, I’ve been using an alarm app that forces me to take a picture of my front door before it turns off. Previously, I had been using one that forced me to do two difficult arithmetic problems. This meant that I woke up mentally, but was still unwilling to leave my bed, and instead spent half an hour checking fb and browsing the net on my phone. Now, the design of the clock forces me to leave my room, which makes it much easier for me to start my day more quickly. The photo recognition is not great, so normally I need to take 2 or 3 photos before it recognises the door, but this helps me wake up even more. I would highly recommend the app, or something with the same functionality, for people who have difficulty leaving bed in the morning.
Formatting issues:
The title “Part Eight: Slightly More Complicated Questions” appears twice.
Question requests:
Ability to solve the Schrödinger equation for the hydrogen atom.
OCEAN personality test results
Split “no” option in meetups into “no, because there are no meetups near where I live” and “no, there may be meetups near where I live but I don’t want to go to them”
Other comments:
I like the multiple calibration questions
If you care about that in order to know which respondents know what they’re talking about when answering the MWI question, that’s a very poor choice.
Fair enough. In that case, I’ll request a question as to whether you can prove Bell’s theorem. I guess I was lucky that in my university, interpretational issues were discussed a fair bit in later-year theoretical physics classes.
A question on romantic orientation would be good.
What gender/s you are romantically attracted to, and also how strongly you feel that attraction, see the Wikipedia page. It is mainly useful for asexuals (and also, I imagine, people who answer ‘other’), but it’s certainly possible to have a romantic orientation that doesn’t match your sexual orientation. Maybe it could be included as an optional write-in box, or at the end?
This is the (long-term, no longer) lurker friend of which Solvent speaks! I would like to be able to have enough karma to post these myself, so if people could upvote this comment, that would be useful.
ETA: have learnt more about the mechanics of the site, now realise that this is not necessary. Thanks to whoever did upvote me though!