Why do you refer to the difference between a prior and the uniform prior as a bias, rather than the difference from the optimal prior? This doesn’t agree with how you previously defined a bias.
simon2
Well then, what’s the point of discussing it on the blog, if the similarity is only due to the names?
As for the optimal prior, if the universe is non-deterministic, or if there are “many worlds”, or multiple universes in general, or other ways in which a given observer can have multiple different futures, then the optimal prior is a distribution over all those futures.
I shouldn’t have included non-deterministic, since that only leads to one actual outcome.
OK, that clears it up then.
The point about the optimal prior was that, to the extent that a prior can be considered biased (in the sense I understood the word “bias”, not inductive bias), the optimal prior is the unbiased prior it should be compared to. I didn’t mean to imply that finding the optimal prior is realistic.
Of course few to no people will read this but...
1. yes, they are different
2-4. empirical questions
5. no
6. n/a
7. no sense
8. 2+2=4 is true given the commonly accepted definitions of the terms involved. Given an assumed systematization of morality, moral statements could be “true” relative to that systematization in the same sense that 2+2=4 is true relative to commonly accepted arithmetic. However, I don’t consider this a particularly useful way of thinking about morality.
9. any ought-statement can be converted (in principle) into a “pure” ought-statement by rephrasing it as an implication of the original statement from a sufficiently detailed set of factual assumptions.
10. same as 9
To say that the concepts of true and false do not apply to moral statements is not the same thing as saying that ethics is meaningless. For one thing, one can be committed personally to a particular ethical view without necessarily believing that there is any objective criterion by which it is superior to others. Also, ethics serves a real-world purpose in co-ordinating the behaviour of agents with different goals; one can judge the efficacy of moral systems in fulfilling this purpose without necessarily either approving of that purpose or making the mistake of confusing utility with truth.
Putting these together (actually the first is sufficient, but I threw the second one in anyway), one can both be in favor of some set of real-world consequences, and judge moral systems on how well they promote those consequences (i.e. be a consequentialist), without making the mistake of attributing objective truth (or whatever) to the moral systems you therefore favor. There is thus no contradiction in being a consequentialist and denying the existence of any objective morality.
Daniel, the foundational problem with meta-ethics (as done by philosophers) is that they start from the presumption that morality is something “out there”.
For non-consequentialists, this usually seems to result in them simply relying on a combination of intuition (not as much a fault in ethics as in other subjects, but we should try to do better) and axiomatic systems. When intuition collides with an axiomatic system, or different axiomatic systems contradict one another, they have no way to resolve the issue.
A moral prescription can be judged by how well it satisfies some goal. The goal is ultimately “arbitrary”: it is up to any person making a judgement about a prescriptive system. Separating out prescriptions from goals is perhaps not logically necessary, but I think it is useful to distinguish between moral disagreements that can be eliminated through gaining and spreading knowledge (any disagreement assuming common goals) and those that can’t (goal disagreements).
Even when philosophers correctly recognise that a goal is necessary to judge prescriptions, they tend to think of some way of deriving a goal (typically some form of utilitarianism) as being objectively right. This leads to a tendency to deny evidence that their own personal judgements of prescriptive systems (and those of others) in fact derive from different goals. It seems to me, however, that most consequentialists haven’t properly distinguished between prescriptions and goals by which to judge prescriptions, which leads to more confusion (rule consequentialism is a clumsy attempt to get around this, but as commonly understood it is not very general, as a moral prescription need not be a set of simple general rules).
Daniel: philosophers are not all wrong about everything but between them they seem to support every theory that a reasonable person could hold and many more, so they aren’t very useful as a guide as to what to believe. In principle their arguments could still be useful, but in practice I am not impressed by, for example, the arguments against moral skepticism, nor do I find that the arguments for it add anything particularly useful to my knowledge that I could not think of myself.
Questions like “How are amplitudes converted to subjective probabilities?” are not automatically dictated by the theory.
You might find this paper by David Deutsch interesting. That said, equation 14 bugs me: it seems to me that |Psi_2> as defined doesn’t necessarily exist.
Matthew C: My criticisms of Kent’s criticisms of MWI (as formulated by Everett), in the paper you link to:
A Hilbert space has an inner product by definition, so mu is already an entity of the theory without needing any extra postulates.
In the example given, decoherence will result in the two terms of the RHS of (2) not being able to interfere with one another, which justifies considering them to be independent worlds, no intuition required.
Kent’s talk about bases seems confused: the dimensionality of a basis is fixed by the dimensionality of the state space. What he refers to as a 1-dimensional basis is in fact a 2-dimensional basis (the two terms being added together are the basis vectors).
In practice, one can choose a basis as follows: when a measurement is made, decoherence results in the system separating into a noninterfering subsystem for each outcome. If there is a unique state for each measurement outcome, those states together form a basis; otherwise, choose a basis for each outcome’s subsystem and combine the bases into one for the whole system. The ambiguity has no effect on the measurement outcome, because the different choices only mix together states with the same outcome. This doesn’t need to be made an axiom; any basis can be used in principle, but some are a lot more useful in practice than others.
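To make that last point concrete, here is a toy sketch (my own example; the state and all numbers are made up): the Born probability of an outcome is the sum of squared moduli over the basis states sharing that outcome, and rotating the basis within a single outcome’s subspace leaves those probabilities unchanged.

```python
# Toy example: outcome probabilities are insensitive to the choice of
# basis *within* each outcome's subspace (all numbers are made up).
import math

# Amplitudes over the product basis states |00>, |01>, |10>, |11>,
# where the first bit is the measurement outcome.
amps = [0.6, 0.0, 0.48j, 0.64]  # |0.6|^2 + |0.48|^2 + |0.64|^2 = 1

def outcome_probs(amplitudes):
    """Born probability of each outcome: sum of |amplitude|^2 over the
    basis states that share that outcome (first bit of the index)."""
    probs = [0.0, 0.0]
    for i, a in enumerate(amplitudes):
        probs[i >> 1] += abs(a) ** 2
    return probs

# Rotate the basis inside outcome 1's subspace (mixing |10> and |11> only).
theta = 0.7
c, s = math.cos(theta), math.sin(theta)
rotated = [amps[0], amps[1],
           c * amps[2] + s * amps[3],
           -s * amps[2] + c * amps[3]]

# Both give outcome probabilities of about [0.36, 0.64].
print(outcome_probs(amps))
print(outcome_probs(rotated))
```

Only a rotation mixing the two outcome-1 basis vectors is applied, so the sum of squared moduli in that block is preserved, which is exactly why the basis ambiguity is harmless.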
Of course, Everett didn’t know about decoherence, but we do now.
As for determining probabilities, I suggest you read the paper I linked earlier. It might be flawed, as I mentioned, but if so I think it can probably be amended to work.
If you won’t explicitly state your analysis, maybe we can try 20 questions?
I have suspected that supposed “paradoxes” of evidential decision theory occur because not all the evidence was considered. For example, the fact that you are using evidential decision theory to make the decision.
Agree/disagree?
Hmm, changed my mind; I should have thought more before writing… suppose the EDT virus’s early symptom is causing people to use EDT, before it progresses to terrible illness and death. It seems EDT would then recommend not using EDT.
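The calculation behind that retraction can be sketched with toy numbers (all of them my own invention): since the hypothetical virus causes its hosts to use EDT, an agent’s own use of EDT is evidence of infection, and conditioning on the action makes “use EDT” look worse.

```python
# Toy numbers (made up) for the hypothetical "EDT virus": infection
# makes people likely to use EDT, so using EDT is evidence of infection.
p_virus = 0.01                     # base rate of infection
p_edt = {True: 0.9, False: 0.1}    # P(agent uses EDT | infected / healthy)

def p_virus_given(uses_edt):
    """Bayes' rule, conditioning on the agent's own decision procedure."""
    num = p_virus * (p_edt[True] if uses_edt else 1 - p_edt[True])
    den = num + (1 - p_virus) * (p_edt[False] if uses_edt else 1 - p_edt[False])
    return num / den

death_utility = -1000.0
eu_use = death_utility * p_virus_given(True)    # roughly -83
eu_not = death_utility * p_virus_given(False)   # roughly -1
# Evidentially, not using EDT looks better: EDT recommends against itself.
```

The action doesn’t cause infection at all here; the asymmetry comes purely from treating one’s own choice as evidence, which is the feature of EDT being discussed.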
MIND IS FUNDAMENTAL AFTER ALL! CONSCIOUS AWARENESS DETERMINES OUR EXPERIMENTAL RESULTS!
You can still read this kind of stuff. In physics textbooks.
I hope this is just a strawman of the Copenhagen interpretation. If not, what textbooks are you reading?
In classical configuration spaces, you can take a single point in the configuration space, and the single point describes the entire state of a classical system. So you can take a single point in classical configuration space, and ask how the corresponding system develops over time. You can take a single point in classical configuration space, and ask, “Where does this one point go?”
The development over time of quantum systems depends on things like the second derivative of the amplitude distribution. Our laws of physics describe how amplitude distributions develop into new amplitude distributions. They do not describe, even in principle, how one configuration develops into another configuration.
Instead of viewing the wavefunction as some kind of structure encompassing many points in configuration space, you can view the wavefunction as a whole as a single point in a larger state space. Then the evolution does indeed depend only on the point itself, not on its neighbourhood.
anonymous, the rate of change in amplitude at a location depends only on the derivatives at that location (and the derivative of a function at a point depends only on the values near that point).
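Here is a minimal sketch of that locality (my own toy discretization, not anything from the thread): in a discretized free-particle Schrödinger equation, the rate of change at a site is proportional to the second difference there, so it depends only on the amplitude at that site and its immediate neighbours.

```python
# Toy discretization (units with hbar = 2m = dx = 1, values made up):
# d(psi_j)/dt = i * (psi_{j-1} - 2*psi_j + psi_{j+1}),
# i.e. the local update depends only on nearby amplitudes.

def local_update(psi, j, dt=0.001):
    """One forward-Euler step of the amplitude at site j."""
    second_diff = psi[j - 1] - 2 * psi[j] + psi[j + 1]
    return psi[j] + 1j * dt * second_diff

psi = [0j, 0.1 + 0j, 0.5 + 0j, 0.1 + 0j, 0j, 0j, 0.3 + 0j]
before = local_update(psi, 2)

# Changing the amplitude at a distant site leaves the update at site 2
# unchanged: the evolution is local.
psi[6] = 0.9 + 0j
after = local_update(psi, 2)
assert before == after
```

Forward Euler is chosen only for brevity; any standard integrator would show the same locality, since the right-hand side involves only neighbouring sites.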
Eliezer, I am on the whole inclined to agree with Psy-Kosh, but I sometimes suspect (wild unsupported speculation) that perhaps a locality rule is fundamental and spacetime itself is not, but derived from the locality.
I haven’t yet devised a way to express my appreciation of the orderliness of the universe that doesn’t involve counting people in orderly states as compared to disorderly states.
What do you mean by that?
Frankly, I’m not sure what it is that you’re complaining about. Even in ordinary life humans have number ambiguity: if you split the connection between the halves of the brain, you get what seems to be two minds, but why should this be some great problem?
But unfortunately there’s that whole thing with the squared modulus of the complex amplitude giving the apparent “probability” of “finding ourselves in a particular blob”.
I hope you will at least acknowledge the existence of the Wallace/Saunders/Deutsch point of view, on which the Born rule can be derived from quantum mechanics itself plus only very reasonable outside assumptions, even if you won’t agree with it.
Sorry for the impulsive unhelpful bit of my previous comment. Of course if you have a number ambiguity between subjectively identical minds, then you might have problems if you apply an indifference principle to determine probabilities. But please explain if you have any other problem with this.
Eliezer: OK, so you object to branching indifference.
Here is what I was going to reply until I remembered that you support mangled worlds:
“So, I guess I’ll go buy a lottery ticket, and if I win, I’ll conduct an experiment that branches the universe 10^100 times (e.g. a single-electron Stern-Gerlach measurement repeated fewer than 1000 times). That way I’ll be virtually certain to win.”
Now, I suppose with mangled worlds and a low cutoff you can’t quite rule out your point of view experimentally this way. But you’re still proposing a rule in which, if a world splits into world A and world B, they have probability 1/2 each; then when world B splits into B1 and B2, the probability of A changes to 1/3, until an unobserved physical process turns the probability of A back to 1/2. Seems a little odd, no?
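The bookkeeping can be made explicit with a toy calculation (my own sketch; “branch counting” here just means assigning equal probability to every branch, ignoring amplitudes):

```python
# Toy illustration of the oddity above: under naive branch counting,
# an event that splits world B retroactively changes the probability
# assigned to world A. Born weights are unaffected.
from fractions import Fraction

def branch_count_probs(worlds):
    """Assign equal probability to every branch, ignoring amplitudes."""
    n = len(worlds)
    return {w: Fraction(1, n) for w in worlds}

print(branch_count_probs(["A", "B"]))        # A and B get 1/2 each

# World B splits into B1 and B2 (nothing at all happens in A)...
print(branch_count_probs(["A", "B1", "B2"]))  # A drops to 1/3

# ...whereas equal-amplitude Born weights for A stay at 1/2 throughout:
born = {"A": Fraction(1, 2), "B1": Fraction(1, 4), "B2": Fraction(1, 4)}
assert born["A"] == Fraction(1, 2)
```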
I guess I was too quick to assume that mangled worlds involved some additional process. Oops.
Repeating an experiment with systematic error is a special case of non-independent evidence.
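For example, here is a small Bayesian sketch (all numbers my own): if the instrument may have a single shared systematic fault, then n identical positive readings provide only bounded evidence, whereas n independently erring readings would be nearly conclusive.

```python
# Toy Bayesian comparison (numbers made up): systematic vs independent
# error. A broken instrument always reads "positive"; a working one is
# always correct.
from fractions import Fraction

prior_H = Fraction(1, 2)     # prior that the hypothesis H is true
p_broken = Fraction(1, 10)   # chance the instrument is broken

def posterior_systematic(n):
    """Posterior for H after n positive readings with a *shared* fault:
    for any n >= 1, all readings are explained by one broken instrument,
    so the likelihood under not-H stays at p_broken."""
    like_H, like_not = Fraction(1), p_broken
    return prior_H * like_H / (prior_H * like_H + (1 - prior_H) * like_not)

def posterior_independent(n):
    """Same error rate, but independent per reading: evidence compounds."""
    like_H, like_not = Fraction(1), p_broken ** n
    return prior_H * like_H / (prior_H * like_H + (1 - prior_H) * like_not)

print(posterior_systematic(10))   # stuck at 10/11, however many repeats
print(posterior_independent(10))  # approaches 1
```

The systematic posterior saturates because the repeated readings share one common cause, which is exactly what makes them non-independent evidence.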