If your ontology implies quantum mechanics then I think the measure of the universes (m(u) in step 1) must involve wave functions somehow, but my understanding of QM doesn’t allow me to think it through much.
This looks like a mistake to me. QM says a lot about E, but there are logically possible universes where quantum mechanics is false. Presumably we want to be able to assign probabilities to the truth of quantum mechanics. Ontological questions in general seem like delineators of E and not m. Thus I’m confused by
If your ontology implies a computable universe (thus you only need to consider those in E)
as well. Obviously QM, and physics generally, are entangled with information theory and probability in all kinds of important ways, and hopefully an eventual theory of physics will clarify all this. For the time being, it seems more worthwhile to describe the mental operation of assigning subjective probabilities (and thus understanding possible worlds in this sense) and avoid conflating rationality with physics.
Unless you have something else motivating that part of the discussion that I’m not familiar with.
Oh, another thing: you can do approximations, as long as you’re aware that you’re doing it. So, upon noticing that your world looks like QM, you can try to just do all your calculations with a QM world. That is, you say that your observations of QM exclude non-QM worlds from E, even though that’s not strictly true (it may all be just a simulation, for example).
If your approximation was a bad one, you’ll notice. AFAICT, all the discussions about probability are just such approximations. When you explain the solution to a balls-in-urns problem by saying “look, there are X possible outcomes, N red and P blue”, what you’re saying is “there are X possible universes; in N of them the drawn ball is red and in P of them it’s blue”, and then you proceed to do exactly the “integral” above, with each universe having equal measure. If the problem says the balls’ colors are monochromatic, with wavelength uniformly distributed over the spectrum, but the question uses human color words, you might end up with a measure based on the perceptual width of each “color” in wavelength terms.
Even though the problem describes urns and balls and, presumably, you picking things up via EM interactions between your atoms and the balls’, you approximate by saying “that’s just irrelevant detail”. In terms of the schema, this just means that “other details will add the same factor to the cardinalities of each color class”.
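To make the counting picture concrete, here’s a minimal sketch in Python of that “integral” over universes restricted to E; the names (prob, in_E) and the specific counts are mine, purely for illustration, not anything from the schema itself:

```python
# Probability as a measure-weighted sum over the possible universes
# compatible with observation (E). Names and numbers here are illustrative.

def prob(worlds, measure, in_E, proposition):
    """P(proposition | E): sum of m(u) over worlds in E where the
    proposition holds, divided by the sum of m(u) over all worlds in E."""
    total = sum(measure(u) for u in worlds if in_E(u))
    hits = sum(measure(u) for u in worlds if in_E(u) and proposition(u))
    return hits / total

# Balls-in-urn version: one "universe" per ball that could be drawn,
# each with equal measure.
N, P = 3, 7                                   # N red balls, P blue balls
worlds = [("red", i) for i in range(N)] + [("blue", i) for i in range(P)]

p_red = prob(worlds,
             measure=lambda u: 1.0,           # equal measure per universe
             in_E=lambda u: True,             # nothing observed yet
             proposition=lambda u: u[0] == "red")
print(p_red)                                  # 0.3, i.e. N / (N + P)

# For the monochromatic-wavelength variant, only `measure` changes: it
# would return the perceptual width of that ball's color word instead of 1.
```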
I misread the computability bit; it makes sense. I’m still confused about what you’re trying to say about QM, though. I start with m. Then Omega tells me “QM is true”. Now I have E, the set of worlds in which QM is true. But you’re saying that QM being true affects m(u), and that’s what I don’t grok.
I think it was a mistake to define m on U (the set of possible worlds, before restricting it to E); it can work even if you do it like that, but it’s less intuitive.
Try this: suppose your U can be partitioned into two classes of universes: those in Q are quantum, and those in K are not. That is, U is the union of Q and K, and the intersection of Q and K is empty.
A reasonable strategy for defining m (reasonable in the sense that it’s what I was aiming for) could go like this:
You define m(u) as the product of g(u) and s(u).
s is a measure specific to the class of universes u belongs to. For (“logically possible”) universes that come from the Q class, s might depend on the amplitude of u’s wave-function. For those that come from K, it’s a completely different function, e.g. the Kolmogorov complexity of the bit-string defining u. Note that the two “branches” of s are completely independent: the wave-function doesn’t make any sense for K, nor does Kolmogorov complexity make sense for universes in Q (supposing that Q implies exact real-valued functions).
The g(u) part of the measure reflects everything that’s not related to this particular partitioning of U. It may be just a constant 1, or it may be a complex function based on other possible partitionings (e.g. finite/infinite universes).
The important part is that your m includes, from the start, a term for the QM-related measure of universes, but only for those universes where it makes sense.
When Omega comes, you just remove K from U, so your E is a subset of Q. As a result, the rest of the calculations never depend on the s branch for K, and always depend on the s branch for Q. The effect is not that Omega changed m; it just made part of it irrelevant to the rest of the calculations.
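Here’s a toy sketch of that decomposition; the concrete weights below (a Born-style amplitude-squared for Q, a complexity-based weight for K) are stand-ins I made up just to give the two branches of s something to compute:

```python
# m(u) = g(u) * s(u) over a partitioned U, with hypothetical class-specific
# measures. The only point illustrated is the last step: restricting to Q
# makes the K branch of s irrelevant without changing m itself.

def s(u):
    """Class-specific part of the measure: one branch per partition class."""
    if u["class"] == "Q":                     # quantum universes
        return abs(u["amplitude"]) ** 2       # e.g. a Born-style weight
    if u["class"] == "K":                     # non-quantum universes
        return 2.0 ** -u["complexity"]        # e.g. a complexity-based weight
    raise ValueError("unknown class")

def g(u):
    """Everything unrelated to the Q/K split; a constant 1 for simplicity."""
    return 1.0

def m(u):
    return g(u) * s(u)

U = [
    {"class": "Q", "amplitude": 0.6, "label": "q1"},
    {"class": "Q", "amplitude": 0.8, "label": "q2"},
    {"class": "K", "complexity": 12, "label": "k1"},
]

# Omega says "QM is true": E becomes a subset of Q, so the K branch of s
# is simply never evaluated from here on.
E = [u for u in U if u["class"] == "Q"]
total = sum(m(u) for u in E)
print({u["label"]: round(m(u) / total, 2) for u in E})   # {'q1': 0.36, 'q2': 0.64}
```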
As I said in the first sentence of this answer, you can just define m only on E. But E is much more complex than U (it depends on every experience you’ve had, so it’s harder to specify, even if it’s “smaller”), which makes it harder to define a function only for its values.
Conceptually it’s easier to pick a somewhat vague m on U, and then “fill in details” as your E becomes more exact. But, to be clear, this is just because we’re handwaving without actually being able to do the calculations; since the calculations seem impossible in the general case, it’s kind of a moot point which one is “really” easier.
My motivation for those comments was observing that you don’t need the measure except for worlds in E, those that are compatible with observation.
Say you have among the original “possible worlds” both some that are computable (e.g. Turing machine outputs) and some that are not (continuous, real-valued space and time coordinates). Now, suppose it’s possible to distinguish between those two, and one of your observations eliminates one of them (say it rules out the non-computable worlds). Then you can certainly use a measure that only makes sense for the remaining class (computable things).
There might be other tricks. Suppose again, as above, that you have two classes of possible worlds, and you have an idea of how to assign measures within each class but not across the whole “possible worlds” set. Now, if you do the rest of the calculations within both classes, you’ll obtain two “probabilities”, one for each class of possible worlds. If the two agree on a value (within “measurement error”), you’re set; use that. If they don’t, then it means you have a way to test which class of worlds you’re in.
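A toy sketch of that cross-check, with made-up worlds and measures just to show the shape of the comparison:

```python
# Two classes of possible worlds, each with its own internal measure, and
# one question asked of both. The concrete worlds and weights are made up.

def prob_within(worlds, measure, proposition):
    total = sum(measure(u) for u in worlds)
    return sum(measure(u) for u in worlds if proposition(u)) / total

# Class A: three equally weighted worlds; the coin lands heads in two of them.
class_a = [{"heads": True}, {"heads": True}, {"heads": False}]
p_a = prob_within(class_a, lambda u: 1.0, lambda u: u["heads"])

# Class B: two worlds with an unequal, class-specific measure.
class_b = [{"heads": True, "w": 2.0}, {"heads": False, "w": 1.0}]
p_b = prob_within(class_b, lambda u: u["w"], lambda u: u["heads"])

if abs(p_a - p_b) < 0.05:                 # "within measurement error"
    print("classes agree, use", p_a)
else:                                     # disagreement: the question itself
    print("disagreement:", p_a, p_b)      # can serve as a test between classes
```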
For the time being, it seems more worthwhile to describe the mental operation of assigning subjective probabilities (and thus understanding possible worlds in this sense) and avoid conflating rationality with physics.
I certainly didn’t intend the schema as an actual algorithm. I’m not sure what your comment about subjective probabilities means. (I can parse it both as “this is a potentially useful model of what brains ‘have in mind’ when thinking of probabilities, but not really useful for computation”, and as “this is not a useful model in general; try to come up with one based on how brains assign probability”.)
What is interesting to me is that I couldn’t find any model of probability that doesn’t match that schema after formalizing it. So I conjectured that any formalization of probability, if general enough* to apply to real life, will be an instance of it. Of course, it may just be that I’m not imaginative enough, or maybe I’m just the guy with a new hammer seeing nails everywhere.
(*: by this I mean not just claiming, say, that coins fall with equal probability on each side. You can do a lot of probability calculation with that, but it’s not useful at all for which alien team wins the game, what face a die falls on, or even how real coins work.)
I’m really curious to see a model of probability that seems reasonable and general, and that I can’t reduce to the shape above.
Upvoted for tackling the issue, at least.