Both boxes might be transparent. In this case, you would see the money in both boxes only if you are rational enough to understand that you have to pick just B.
Wouldn’t that be an irrational move? Not at all! You have to understand that to be rational.
That’s brilliant! (I’m not sure what you mean by “understand”, though.)
In other words, Omega does one of two things: it either offers you $1000 + $1, or only $10. It offers you the $1000 + $1 only if it predicts that you won’t take the $1; otherwise it only gives you $10.
This is a variant of counterfactual mugging, except that there is no chance involved. Your past self prefers to precommit to not taking the $1, while your present self, faced with that situation, prefers to take the $1.
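For concreteness, here is a minimal sketch in Python of the payoff structure (my own illustration, not from the thread), assuming Omega is a perfect predictor of your disposition:

```python
# Minimal sketch of the variant above, assuming a perfect predictor.
# "takes_the_dollar" is your disposition: what you would do if
# actually offered the $1000 + $1 deal.

def payoff(takes_the_dollar: bool) -> int:
    if not takes_the_dollar:
        # Omega predicts you won't take the $1, so it offers
        # $1000 + $1; you leave the $1 and collect the $1000.
        return 1000
    # Omega predicts you would take the $1, so the big offer
    # never happens and you only get the $10.
    return 10

print(payoff(takes_the_dollar=False))  # 1000 -- the precommitted agent
print(payoff(takes_the_dollar=True))   # 10   -- the dollar-grabber

# Inside the offered situation itself, grabbing everything pays
# $1001 > $1000 -- but an agent with that disposition never
# actually finds itself in that situation.
```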
You have to understand this twist to be able to call yourself rational, in my book.
You understood the twist, I see.
This reply is too mysterious to reveal whether you got the criterion right.
Hmmm… It looks like the decision to take the $1 erases from reality the very situation in which you make that decision. Effects of precommitment being restricted to counterfactual branches are a usual thing, but in this problem they stare you right in the face, which is rather daring.
Another variation, playing only on the real/counterfactual distinction, without motivating the real decision: Omega comes to you and offers $1 if and only if it predicts that you won’t take it. What do you do? It looks neutral, since the expected gain in both cases is zero. But the decision to take the $1 sounds rather bizarre: if you take the $1, then you don’t exist!
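A similar sketch for this variant (again my own illustration, assuming a perfect predictor) makes both points explicit: either disposition nets $0, and the taker’s decision situation is never real:

```python
# Minimal sketch: Omega offers the $1 only if it predicts that
# you won't take it.

def play(would_take_it: bool) -> int:
    offer_is_made = not would_take_it  # Omega's prediction gates the offer
    if not offer_is_made:
        return 0   # the would-be taker is never offered anything
    return 0       # only the refuser is offered the $1 -- and refuses

print(play(would_take_it=True))   # 0
print(play(would_take_it=False))  # 0

# Both dispositions net $0. The difference: the situation "being
# offered the $1" is real only for the refuser. A taker who finds
# itself holding the offer can only be inside Omega's prediction,
# a counterfactual -- hence "if you take the $1, you don't exist".
```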
Agents self-consistent under reflection are counterfactual zombies, indifferent to whether they are real or not.
Seems roughly as disturbing as Wikipedia’s article on Gaussian adaptation:

Gaussian adaptation as an evolutionary model of the brain obeying the Hebbian theory of associative learning offers an alternative view of free will due to the ability of the process to maximize the mean fitness of signal patterns in the brain by climbing a mental landscape in analogy with phenotypic evolution.

Such a random process gives us lots of freedom of choice, but hardly any will. An illusion of will may, however, emanate from the ability of the process to maximize mean fitness, making the process goal seeking. I. e., it prefers higher peaks in the landscape prior to lower, or better alternatives prior to worse. In this way an illusive will may appear. A similar view has been given by Zohar 1990. See also Kjellström 1999.
If you want your source code to be self-consistent under reflection, you know what you have to do.