I have no basis to even ascribe existence to anything that has no causal effect on me, is not causally affected by me, shares no knowable or even respectably guessable causes with me, and shares no knowable or even respectably guessable effects with me. Not if I want a usable concept of “existence”.
My (lack of) acausal trade is mostly determined by my bigoted preference for entities that exist.
Even if I were inclined to ignore their nonexistence, for any “acausal trading partner” who wants me to do X, there might be another who specifically wants me to not do X, and a bunch of others who don’t care about X, but want various other things incompatible with X. Not to mention the ones whose desires are internally inconsistent.
That’s an especially bad problem for me in an infinite multiverse where everything is allowed. In that case, those sets will all have the same cardinality, say aleph-null. Which I think is probably what you’re assuming, since any finite multiverse large enough to contain a bunch of within-epsilon Boltzmann copies and a bunch of potential acausal trading partners should really be asked to explain why it doesn’t just give up the whole pretense and go infinite.
I might be able to come up with some probability measure, based on something like limits of infinite sequences of random draws, to let me say that one of those aleph-null sets was in some non-cardinal sense “bigger” than another. I don’t know enough about measure theory and probability to know. It seems like I’d at least need the axiom of choice, which is “optional” math. But, although I intuitively feel that I ought to want to use some such measure to say whether I was one of “the majority of Boltzmann brains”, it seems less intuitive that using that measure would be more “right” than using cardinality for acausal trade. The construct of probability seems to be more “for” thinking about individual, but unknown, cases than for thinking about the whole ensemble of something like “potential trading partners”.
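For what it’s worth, one standard candidate for that kind of non-cardinal “bigger” is natural density: the fraction of the integers up to N that land in a set, in the limit as N grows. It doesn’t need the axiom of choice, though it’s only finitely additive and isn’t defined for every set, which is part of why choosing a measure here is a judgment call. A minimal sketch of the idea (the function names are just for illustration):

```python
# Natural density: a non-cardinal way to call one countably infinite set
# "bigger" than another, even though both have cardinality aleph-null.
# density(S) = lim_{N -> inf} |S intersect {1..N}| / N, when the limit exists.

def density_up_to(predicate, n):
    """Fraction of the integers 1..n satisfying the predicate."""
    return sum(1 for k in range(1, n + 1) if predicate(k)) / n

evens = lambda k: k % 2 == 0
mult_of_ten = lambda k: k % 10 == 0

for n in (100, 10_000, 1_000_000):
    print(n, density_up_to(evens, n), density_up_to(mult_of_ten, n))
# The running fractions approach 1/2 and 1/10: two sets of identical
# cardinality that this measure nonetheless ranks differently.
```

The catch, relevant to the argument above, is that natural density is tied to a particular enumeration of the underlying set; reorder the integers and the densities change, so “which measure” smuggles in exactly the kind of arbitrary choice being complained about.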
You, at least, seem to be thinking in terms of cardinality even for the Boltzmann brain case, so I assume you’d agree, and want me to use cardinality for the trading partners. So I have no basis at all to do any one thing over another. No matter what I do I will always both “satisfy” an infinite number of partners and “betray” an infinite number of others.
That kind of chaos and confusion is what leads me to believe that, above and beyond the whole problem of what can actually be said to exist, there’s something else fundamentally wrong with the whole idea of acausal trade. There is a feeling of brokenness somewhere deep in the whole framework. Probably comes from unlicensed messing around with infinities.
And what do Turing machines have to do with anything? I appear to live in a quantum universe full of arbitrary complex numbers and vectors. A Turing machine can’t simulate such a universe; at best it can approximate it. If for some reason I choose to abandon the obvious simplifying assumption that I live in the “base reality”—and then feel compelled to make unverifiable guesses about what’s underneath—then I think I should at least favor the hypothesis that the substrate can easily process the kinds of rules that seem to generate my universe. It makes my universe a simpler program in the substrate’s language.
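The approximate-versus-exact distinction can be made concrete. A computable process can emit a quantum amplitude like e^{iθ} to any requested precision, but for generic θ it only ever holds a sequence of rational approximations, never the exact complex number. A toy sketch (exact rational arithmetic, so truncation of the Taylor series is the only error source; the function name is mine):

```python
# Sketch: a computable process approximating the amplitude e^{i*theta}
# = cos(theta) + i*sin(theta) via partial sums of its Taylor series,
# using exact rational arithmetic so the only error is truncation.

from fractions import Fraction

def exp_i_approx(theta, terms):
    """Return (re, im) partial sums of e^{i*t} for a rational stand-in t."""
    t = Fraction(theta).limit_denominator(10**6)  # rationalize the input
    re, im = Fraction(0), Fraction(0)
    power = Fraction(1)  # holds t^n / n! as the loop runs
    for n in range(terms):
        # i^n cycles through 1, i, -1, -i, routing terms to re/im with signs
        if n % 4 == 0:
            re += power
        elif n % 4 == 1:
            im += power
        elif n % 4 == 2:
            re -= power
        else:
            im -= power
        power = power * t / (n + 1)
    return re, im

re, im = exp_i_approx(1.0, 20)
print(float(re), float(im))  # near cos(1) and sin(1), but never exactly equal
```

Each call gets arbitrarily close, which is exactly the sense in which a Turing machine approximates rather than simulates a state space built from arbitrary complex amplitudes.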
But probably I should just avoid that particular speculative swamp to begin with.