After reflecting on this a bit, I think my P(H) is around 33%, and I’m pretty confident Q is true (coherence only requires 0 ≤ P(Q) ≤ 67%, but I think I put it on the upper end).
Thanks for clarifying your view this way. I guess my question at this point is why your P(Q) is so high, given that it seems impossible to reduce P(H) further by updating on empirical observations (do you agree with this?), and we don’t seem to have even an outline of a philosophical argument for “taking H seriously is a philosophical mistake”. Such an argument seemingly has to include that having a significant prior for H is a mistake, but it’s hard for me to see how to argue for that, given that the individual hypotheses in H like “the universe is a dovetailed simulation on a UTM” seem self-consistent and not too complex or contrived. How would even a superintelligence be able to rule them out?
Perhaps the idea is that a SI, after trying and failing to find a computable theory of everything, concludes that our universe can’t be computable (otherwise it would have found the theory already), thus ruling out part of H, and maybe does the same for mathematical theories of everything, ruling out H altogether? (This seems far-fetched: how could even a superintelligence confidently conclude that our universe can’t be described by a mathematical theory of everything, given the infinite space of such theories? But this is my best guess of what you think will happen.)
Beyond the intuition that platonic belief in mathematical objects is probably the mind projection fallacy
Can you give an example of a metaphysical theory that does not seem like a mind projection fallacy to you? (If all such theories look that way, then platonic belief in mathematical objects looking like the mind projection fallacy shouldn’t count against it, right?)
It seems presumptuous to guess that our universe is one of infinitely many dovetailed computer simulations when we don’t even know that our universe can be simulated on a computer!
I agree this seems presumptuous and hence prefer Tegmark over Schmidhuber, because the former is proposing a mathematical multiverse, unlike the latter’s computable multiverse. (I talked about “dovetailed computer simulations” just because it seems more concrete and easy to imagine than “a member of an infinite mathematical multiverse distributing reality-fluid according to simplicity.”)
Do you suspect that our universe is not even mathematical (i.e., not fully describable by a mathematical theory of everything or isomorphic to some well-defined mathematical structure)?
ETA: I’m not sure if it’s showing through in my tone, but I’m genuinely curious whether you have a viable argument against “superintelligence will probably take something like L4 multiverse seriously”. It’s rare to see someone with the prerequisites for understanding the arguments (e.g. AIT and metamathematics) trying to push back on this, so I’m treasuring this opportunity. (Also, it occurs to me that we might be in a bubble and plenty of people outside LW with the prerequisites do not share our views about this. Do you have any observations related to this?)
Thanks for clarifying your view this way. I guess my question at this point is why your P(Q) is so high, given that it seems impossible to reduce P(H) further by updating on empirical observations (do you agree with this?), and we don’t seem to have even an outline of a philosophical argument for “taking H seriously is a philosophical mistake”. Such an argument seemingly has to include that having a significant prior for H is a mistake, but it’s hard for me to see how to argue for that, given that the individual hypotheses in H like “the universe is a dovetailed simulation on a UTM” seem self-consistent and not too complex or contrived. How would even a superintelligence be able to rule them out?
I think that you’re leaning too heavily on AIT intuitions to suppose that “the universe is a dovetailed simulation on a UTM” is simple. This feels circular to me—how do you know it’s simple? You’re probably thinking it’s described by a simple program, but that’s exactly the circularity: of course, if we’re already judging things by how hard they are to implement on a UTM, dovetailing all programs for that UTM is simple. We’d probably need a whole dialogue to get to the root of this, but basically, I think you need some support from outside of AIT to justify your view here. Why do you think you can use AIT in this way? I’m not sure that the reasons we arrived at AIT justify it—we have some results showing that it’s a best-in-class predictor (sort of), so I take the predictions of the universal distribution seriously. But it seems you want to take its ontology literally. I don’t see any reason to do that—actually, I’m about to drop a post, and hopefully soon a paper, closely related to this point (EDIT: the post, which discusses the interpretation of AIXI’s ontology).
Perhaps the idea is that a SI, after trying and failing to find a computable theory of everything, concludes that our universe can’t be computable (otherwise it would have found the theory already), thus ruling out part of H, and maybe does the same for mathematical theories of everything, ruling out H altogether? (This seems far-fetched: how could even a superintelligence confidently conclude that our universe can’t be described by a mathematical theory of everything? But this is my best guess of what you think will happen.)
Experiments might cast doubt on these multiverses: I don’t think a superintelligence would need to prove that the universe can’t have a computable theory of everything—just ruling out the simple programs that we could be living in would seem sufficient to cast doubt on the UTM theory of everything. Of course, this is not trivial, because some small computable universes will be very hard to “run” for long enough that they make predictions disagreeing with our universe! I haven’t thought as much about uncomputable mathematical universes, but does this universe look like a typical mathematical object? I’m not sure.
However, I suspect that a superintelligence rules these huge multiverses out mostly through “armchair” reasoning based on the same level of evidence we have available.
Can you give an example of a metaphysical theory that does not seem like a mind projection fallacy to you? (If all such theories look that way, then platonic belief in mathematical objects looking like the mind projection fallacy shouldn’t count against it, right?)
This is an interesting point to consider; I am very conservative about making claims about the “absolute reality” of things as opposed to the effectiveness of models (I suppose I’m following Kant). Generally I’m on board with materialism, naturalized induction, and the claims about causal structure made by Eliezer in “Highly Advanced Epistemology 101.” An example of a wrong metaphysical theory that is NOT really the mind projection fallacy is theism in most forms. But animism probably is making the fallacy.
Do you suspect that our universe is not even mathematical (i.e., not fully describable by a mathematical theory of everything or isomorphic to some well-defined mathematical structure)?
I don’t know.
ETA: I’m not sure if it’s showing through in my tone, but I’m genuinely curious whether you have a viable argument against “superintelligence will probably take something like L4 multiverse seriously”. It’s rare to see someone with the prerequisites for understanding the arguments (e.g. AIT and metamathematics) trying to push back on this, so I’m treasuring this opportunity. (Also, it occurs to me that we might be in a bubble and plenty of people outside LW with the prerequisites do not share our views about this. Do you have any observations related to this?)
I’m glad! I think Daniel Herrmann maybe agrees with me here—but he’s not exactly a mainstream academic decision theorist. So I’m not sure whether there’s a large group of scholars who think about AIT and reject the UTM theory of everything.
I think that you’re leaning too heavily on AIT intuitions to suppose that “the universe is a dovetailed simulation on a UTM” is simple. This feels circular to me—how do you know it’s simple?
The intuition I get from AIT is broader than this, namely that an infinite collection of things can be very “simple”, i.e., simpler than most or all finite collections, and this seems likely to be true for any formal definition of “simplicity” that does not explicitly penalize size or resource requirements. (Our own observable universe already seems very “wasteful” and does not seem to be sampled from a distribution that penalizes size / resource requirements.) Can you perhaps propose or outline a definition of complexity that does not have this feature?
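To make the dovetailing intuition concrete, here is a minimal sketch (a toy of my own, using Brainfuck as a stand-in for a real UTM since a real one would be longer; nothing here is anyone’s actual proposal). The only point it illustrates is that the generator of the entire infinite ensemble of computations is itself a short program:

```python
# Toy sketch only: a "dovetailer" over all programs of a tiny language
# (Brainfuck without input, standing in for a UTM).  The point is that this
# single short program generates the entire infinite ensemble of computations,
# which is why AIT-style simplicity can rate the ensemble as simpler than
# almost any particular finite thing.

from itertools import count, product

SYMS = "+-<>[].,"  # the eight Brainfuck instructions ("," is a no-op here)

def programs():
    """Enumerate all programs in length-lexicographic order."""
    for n in count(1):
        for p in product(SYMS, repeat=n):
            yield "".join(p)

def run(prog, max_steps):
    """Run prog for at most max_steps steps; return whatever it has output."""
    # Pre-match brackets; silently skip unbalanced programs.
    stack, match = [], {}
    for i, c in enumerate(prog):
        if c == "[":
            stack.append(i)
        elif c == "]":
            if not stack:
                return []
            j = stack.pop()
            match[i], match[j] = j, i
    if stack:
        return []
    tape, ptr, pc, out = [0] * 30000, 0, 0, []
    for _ in range(max_steps):
        if pc >= len(prog):
            break
        c = prog[pc]
        if c == "+": tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-": tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ">": ptr = (ptr + 1) % len(tape)
        elif c == "<": ptr = (ptr - 1) % len(tape)
        elif c == ".": out.append(tape[ptr])
        elif c == "[" and tape[ptr] == 0: pc = match[pc]
        elif c == "]" and tape[ptr] != 0: pc = match[pc]
        pc += 1
    return out

def dovetail(stages):
    """Stage n: run the first n programs for n steps each."""
    gen, progs = programs(), []
    for n in range(1, stages + 1):
        progs.append(next(gen))
        for p in progs:
            run(p, n)  # a real dovetailer would stream/record each program's output

dovetail(20)  # tiny finite demo; conceptually this loop never stops
```

The interpreter plus the enumerator is the entire description of the ensemble; individual universes inside it can be arbitrarily complex, but the generator stays short.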
I don’t think a superintelligence would need to prove that the universe can’t have a computable theory of everything—just ruling out the simple programs that we could be living in would seem sufficient to cast doubt on the UTM theory of everything. Of course, this is not trivial, because some small computable universes will be very hard to “run” for long enough that they make predictions disagreeing with our universe!
Putting aside how easy it would be to show, you have a strong intuition that our universe is not or can’t be a simple program? This seems very puzzling to me, as we don’t seem to see any phenomenon in the universe that looks uncomputable or can’t be the result of running a simple program. (I prefer Tegmark over Schmidhuber despite thinking our universe looks computable, in case the multiverse also contains uncomputable universes.)
I haven’t thought as much about uncomputable mathematical universes, but does this universe look like a typical mathematical object? I’m not sure.
If it’s not a typical computable or mathematical object, what class of objects is it a typical member of?
An example of a wrong metaphysical theory that is NOT really the mind projection fallacy is theism in most forms.
Most (all?) instances of theism posit that the world is an artifact of an intelligent being. Can’t this still be considered a form of mind projection fallacy?
I asked AI (Gemini 2.5 Pro) to come up with other possible answers (metaphysical theories that aren’t mind projection fallacies), and it gave Causal Structuralism, Physicalism, and Kantian-Inspired Agnosticism. I don’t understand the last one, but the first two seem to imply something similar to “we should take MUH seriously”, because the hypothesis “the universe contains the class of all possible causal structures / physical systems” probably has a short description in whatever language is appropriate for formulating hypotheses.
In conclusion, I see you (including in the new post) as trying to weaken the arguments/intuitions for taking AIT’s ontology literally or too seriously. But without positive arguments against the universe being an infinite collection of something like mathematical objects, or against the broader principle that reality might arise from a simple generator encompassing vast possibilities (which seems robust across different metaphysical foundations), I don’t see how we can reduce our credence in that hypothesis to a negligible level, such that we no longer need to consider it in decision theory. (I guess you have a strong intuition in this direction and expect superintelligence to find arguments for it, which seems fine, but naturally not very convincing for others.)
Putting aside how easy it would be to show, you have a strong intuition that our universe is not or can’t be a simple program? This seems very puzzling to me, as we don’t seem to see any phenomenon in the universe that looks uncomputable or can’t be the result of running a simple program. (I prefer Tegmark over Schmidhuber despite thinking our universe looks computable, in case the multiverse also contains uncomputable universes.)
I don’t see conclusive evidence either way, do you? What would a phenomenon that “looks uncomputable” look like concretely, other than mysterious or hard to understand? It seems many aspects of the universe are hard to understand. Maybe you would expect uncomputable universes to contain things at higher levels of the arithmetical hierarchy, so that the fact that we can’t build a halting oracle implies to you that our universe is computable? That seems plausible but questionable to me. Also, the standard model is pretty complicated—it’s hard to assess what this means, because the standard model is wrong (is there a simpler or more complicated true theory of everything?).
The intuition I get from AIT is broader than this, namely that an infinite collection of things can be very “simple”, i.e., simpler than most or all finite collections, and this seems likely to be true for any formal definition of “simplicity” that does not explicitly penalize size or resource requirements. (Our own observable universe already seems very “wasteful” and does not seem to be sampled from a distribution that penalizes size / resource requirements.) Can you perhaps propose or outline a definition of complexity that does not have this feature?
Yes, in some cases ensembles can be simpler than any element in the ensemble. If our universe is a typical member of some ensemble, we should take seriously the possibility that the whole ensemble exists. Now it is hard to say whether that is decision-relevant; it probably depends on the ensemble.
Combining these two observations, a superintelligence should take the UTM multiverse seriously if we live in a typical (≈ simple) computable universe. I put that at about 33%, which leaves it consistent with my P(H).
My P(Q) is lower than 1 - P(H) because the answer may be hard for a superintelligence to determine. But I lean towards betting on the superintelligence to work it out (whether the universe should be expected to be a simple program seems like not only an empirical but a philosophical question), which is why I put P(Q) fairly close to 1 - P(H). Though I think this discussion is starting to shift my intuitions a bit in your direction.
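(To spell out the arithmetic behind the 67% figure, under the assumption that Q entails ¬H:)

$$Q \Rightarrow \neg H \quad\text{so}\quad P(Q) \le P(\neg H) = 1 - P(H) \approx 1 - 0.33 = 0.67.$$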
What would a phenomenon that “looks uncomputable” look like concretely, other than mysterious or hard to understand?
There could be some kind of “oracle”, not necessarily a halting oracle, but any kind of process or phenomenon that can’t be broken down into elementary interactions that each look computable, or otherwise explainable as a computable process. Do you agree that our universe doesn’t seem to contain anything like this?
If the universe contained a source of ML-random (Martin-Löf random) bits, they might look like uniformly random coin flips to us, even if they actually had some uncomputable distribution. For instance, perhaps spin measurements are not i.i.d. Bernoulli, but since their distribution is not computable, we aren’t able to predict it any better than that model?
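A loose toy illustration of this (mine, not a proof, and neither source below is actually uncomputable): the cheap computable tests we would actually run report “fair coin” for quite different underlying generators, which is the sense in which an ML-random source would just look like coin flips to us.

```python
# Toy illustration: two different bit sources pass the same simple computable
# tests.  Neither is actually uncomputable; the point is only that such tests
# don't reveal where the bits "really" come from.

import os, random

def bits_from_prng(n, seed=0):
    rng = random.Random(seed)              # a small, fully computable generator
    return [rng.randint(0, 1) for _ in range(n)]

def bits_from_os(n):
    return [b & 1 for b in os.urandom(n)]  # stand-in for an "unknown" source

def frequency(bits):
    return sum(bits) / len(bits)           # ~0.5 for a fair coin

def longest_run(bits):
    best = cur = 1
    for a, b in zip(bits, bits[1:]):
        cur = cur + 1 if a == b else 1
        best = max(best, cur)
    return best                            # ~log2(n) for a fair coin

n = 100_000
for name, bits in [("prng", bits_from_prng(n)), ("urandom", bits_from_os(n))]:
    print(name, round(frequency(bits), 3), longest_run(bits))
```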
I’m not sure how you’re imagining this oracle would act. Nothing like what you’re describing seems to be embedded as a physical object in spacetime, but I think that’s the wrong thing to expect: failures of computability wouldn’t act like Newtonian objects.
It’s rare to see someone with the prerequisites for understanding the arguments (e.g. AIT and metamathematics) trying to push back on this
My view is probably different from Cole’s, but it has struck me that the universe seems to have a richer mathematical structure than one might expect given a generic AIT-ish view (e.g. continuous space/time, quantum mechanics, diffeomorphism invariance/gauge invariance), so we should perhaps update that the space of mathematical structures instantiating life/sentience might be narrower than it initially appears (that is, if “generic” mathematical structures supported life/agency, we should expect to find ourselves in a generic universe; instead we seem to be in a richly structured universe, so this is an update that maybe we can only be in a rich/structured universe [or that life/agency is just much more likely to arise in such a universe]). Taken to an extreme, perhaps it’s possible to derive a priori that the universe has to look like the standard model. (Of course, you could run the standard model on a Turing machine, so the statement would have to be about how the universe relates/appears to agents inhabiting it, not its ultimate ontology, which is inaccessible since any Turing-complete structure can simulate any other.)
Yes.
For the interested reader, this line of reasoning is what @Vanessa Kosoy calls metacosmology.
It’s also what ultimately underlies Christiano’s malign Solomonoff prior argument.