The future—what will happen—is necessarily “fixed”. To say that it isn’t implies that what will happen may not happen, which is logically impossible.
Pablo, I think the debate is over whether there is such a thing as “what will happen”; maybe that question doesn’t yet have an answer. In fact, I think any good definition of libertarian free will would require that it not have an answer yet.
So, can someone please explain just exactly what “free will” is such that the question of whether I have it or not has meaning?
As I see it, the real issue is whether it’s possible to “have an impact on the way the world turns out.” For example, imagine that God is deciding whether or not to punish you in hell. “Free will” is the hope that “there’s still a chance for me to affect God’s decision” before it happens. If, say, he’s already written down the answer on a piece of paper, there’s nothing to be done to change your fate.
What I said above shouldn’t be taken too literally—I was trying to convey an intuition for a concept that can’t really be described well in words. ‘Having your fate written down on a piece of paper’ is somewhat misleading if interpreted to imply that ‘since the answer has been decided, I can now do anything and my fate won’t change.’ In the scenario where we lack free will, the physical actions taking place right now in our heads and the world around us are the writing down of the answer on the paper, because those are precisely what produce the results that happen.
“Free will” is the idea that there’s some sort of “us” whose choices could make it the case that the question of “What will happen?” doesn’t yet have an answer (even in a Platonic realm of ‘truth’) and that this choice is somehow nonarbitrary. I actually have no idea how this could work, or what this even really means, but I maintain some probability that I’m simply not smart enough to understand it.
I do know that if the future is determined, then whether I believe the right answer about free will (or, perhaps, whether I accede to an incoherent concept people call “free will”) is fixed, in the sense of being ‘already written down’ in some realm of Platonic knowledge. But if not, might there be something I can do (where the ‘I’ refers to something whose actions aren’t yet decided even in a Platonic realm) to improve the truth / coherence of my beliefs?
Pascal’s-wager-type arguments fail because of their symmetry (which is preserved in finite cases).
Even if our priors are symmetric for equally complex religious hypotheses, our posteriors almost certainly won’t be. There’s too much evidence in the world, and too many strong claims about these matters, for me to imagine that the posteriors would come out even. Besides, even if two religions are equally probable, there may well be non-epistemic reasons to prefer one over the other.
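As a minimal sketch of why conditioning breaks the symmetry (the hypotheses \(H_1\), \(H_2\) and evidence \(E\) are illustrative, not from the original comment): if the priors are equal, \(P(H_1) = P(H_2)\), then after observing \(E\),

\[
\frac{P(H_1 \mid E)}{P(H_2 \mid E)} = \frac{P(E \mid H_1)}{P(E \mid H_2)},
\]

so the posteriors stay even only if every piece of evidence is exactly as likely under both hypotheses, which is implausible for religions that make many strong claims about the world.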
However, if after chugging through the math it still didn’t balance out, and the expected disutility from the threat remained greater, then perhaps allowing oneself to be vulnerable to such threats is genuinely correct, however counterintuitive and absurd that seems.
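To make the relevant computation concrete, here is a minimal sketch of the expected-disutility comparison, ignoring the downstream incentive effects discussed below (the symbols \(p\), \(D\), and \(c\) are illustrative, not from the original discussion): paying the mugger is favored exactly when

\[
p \cdot D > c,
\]

where \(p\) is the (tiny) probability that the threat is genuine, \(D\) is the disutility if it is carried out (on the order of 3^^^^3 lives), and \(c\) is the cost of complying. Because \(D\) can be made arbitrarily large by the person making the threat, even an astronomically small \(p\) can dominate; that is the asymmetric case being entertained here.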
I agree. If we really trust the AI doing the computations and don’t have reason to think that it’s biased, and if the AI has considered all of the points that have been raised about the future consequences of showing oneself vulnerable to Pascalian muggings, then I feel we should go along with the AI’s conclusion. 3^^^^3 people is too many to get wrong, and if the probabilities come out asymmetric, so be it.
Maybe the origin of the paradox is that we are extending the principle of maximizing expected return beyond its domain of applicability.
In addition to a frequency argument, one can in some cases make a different argument for maximizing expected value even in one-time-only scenarios. For instance, if you knew you would become a randomly selected person in the universe, and if your only goal was to avoid being murdered, then minimizing the expected number of people murdered would also minimize the probability that you personally would be murdered. Unfortunately, arguments like this assume that your utility function on outcomes takes only one of two values (“good,” i.e., not murdered, and “bad,” i.e., murdered); they don’t capture the fact that being murdered in one way may be twice as bad as being murdered in another.
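As a sketch of why the random-selection argument works (the symbols \(N\) and \(M\) are illustrative): suppose there are \(N\) people, \(M\) of whom will be murdered, and you will become one of the \(N\) uniformly at random, independently of who the victims are. Then by linearity of expectation,

\[
P(\text{you are murdered}) = \mathbb{E}\!\left[\frac{M}{N}\right] = \frac{\mathbb{E}[M]}{N},
\]

so with \(N\) fixed, minimizing the expected number of murders is exactly minimizing your personal probability of being murdered. This reduction to a plain probability is what depends on the two-valued utility function; once outcomes come in degrees of badness, your personal goal is itself an expectation, and the argument no longer grounds expected-value maximization in anything more basic.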