I was somewhat confused by the language of this post. I resolved my confusion by applying more quantitative reasoning to it. Yet my revised view is the largest deviation from (the appearance of) Eliezer's message I've yet had that has stuck around; past such differences revealed themselves to be flawed under further reflection. (If this view had been internalized it could generate the wording in the post, but the wording of the post doesn't seem to strongly imply it.) I know that going forward I'm going to hold my revised interpretation unless some other evidence presents itself, so I'm sticking my neck out here in case I am mistaken and someone can correct me.
But so long as we have a lawful specification of how counterfactuals are constructed—a lawful computational procedure—then the counterfactual result of removing Oswald, depends entirely on the empirical state of the world.
A counterfactual does not depend (directly) on the actual state of the world, but on one's model of the world. Given a model of the world and a method of calculating counterfactuals, we can say whether a counterfactual is mathematically or logically correct. But just as with the phrase "'snow is white' is true," or "the bucket is true," we can also put forward the proposition that the actual world corresponds to such a state (that our models match up to reality) and hold probabilistic beliefs about how likely that proposition is. So we can assign a probability to the proposition "'If Oswald hadn't shot Kennedy, Kennedy would not have died' is true."
From the post Qualitatively Confused:

To make a long story short, it turns out that there's a very natural way of scoring the accuracy of a probability assignment, as compared to reality: just take the logarithm of the probability assigned to the real state of affairs.
(emphasis added)
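The scoring rule quoted above takes only a few lines to write down; here is a minimal sketch (the specific probabilities are just illustrative):

```python
import math

def log_score(p_assigned_to_actual: float) -> float:
    """Log scoring rule: the score of a probability assignment is the
    logarithm of the probability it gave to the real state of affairs.
    Scores are negative; closer to 0 is better, and only assigning
    probability 1 to the truth would score exactly 0."""
    return math.log(p_assigned_to_actual)

# Assigning 0.9 to what actually happened beats assigning 0.5:
print(log_score(0.9))  # ≈ -0.105
print(log_score(0.5))  # ≈ -0.693
```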
We never actually receive “confirmation” that “snow is white,” at least in the sense of obtaining a probability of exactly 1 that “‘snow is white’ is true”. Likewise, we never receive confirmation that a counterfactual is true; we just increase the probabilities we assign to it being true.
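A small sketch of why confirmation never arrives, assuming (as seems safe) that each piece of evidence carries only a finite likelihood ratio; exact rational arithmetic makes the point cleanly:

```python
from fractions import Fraction

def update(prior, likelihood_ratio):
    """One Bayesian update in odds form:
    posterior odds = prior odds * likelihood ratio."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

p = Fraction(1, 2)
for _ in range(50):               # 50 pieces of 10:1 evidence that snow is white
    p = update(p, Fraction(10))

print(float(p))                   # indistinguishable from 1.0 at display precision...
print(p < 1)                      # ...yet exactly 1 is never reached: True
```

Floating-point arithmetic would round the posterior to 1.0 here, which is exactly the confusion at issue; `Fraction` keeps the gap visible.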
(I worked out some simple math that allows you to score a probabilistic expectation even if you can't gain full confirmation of what happened, which has nice properties like yielding the same expected score as the basic model, but I won't go into it here, assuming that such math already exists and wasn't the point. I don't disagree with presenting the simpler model, as it works just fine.)
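One natural construction with that property, sketched here for concreteness (this is an illustration, not necessarily the math alluded to above): score the assignment against your beliefs over the unobserved outcome, weighting each possible log score by how likely you think that outcome is.

```python
import math

def log_score(p, outcome):
    """Basic model: the log of the probability assigned to the outcome."""
    return math.log(p[outcome])

def expected_log_score(p, q):
    """Score assignment p without observing the outcome, using beliefs q
    over which outcome occurred: the q-weighted average of log scores."""
    return sum(q_i * math.log(p_i) for p_i, q_i in zip(p, q))

p = [0.7, 0.2, 0.1]   # the probability assignment being scored
q = [0.6, 0.3, 0.1]   # scorer's beliefs, assumed to match outcome frequencies

# When outcomes are in fact drawn from q, the basic score's expectation
# is the same weighted sum, so the two scores agree in expectation:
basic_expectation = sum(q[i] * log_score(p, i) for i in range(3))
print(expected_log_score(p, q), basic_expectation)
```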