“Suppose Omega (the same superagent from Newcomb’s Problem, who is known to be honest about how it poses these sorts of dilemmas) comes to you and says: “I just flipped a fair coin. I decided, before I flipped the coin, that if it came up heads, I would ask you for $1000. And if it came up tails, I would give you $1,000,000 if and only if I predicted that you would give me $1000 if the coin had come up heads. The coin came up heads—can I have $1000?” Obviously, the only reflectively consistent answer in this case is “Yes—here’s the $1000″, because if you’re an agent who expects to encounter many problems like this in the future, you will self-modify to be the sort of agent who answers “Yes” to this sort of question—just like with Newcomb’s Problem or Parfit’s Hitchhiker.”
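For reference, here is a minimal sketch of the expected-value comparison the quoted answer relies on. It assumes a fair coin, the payoffs as stated, and that Omega’s prediction simply matches whatever policy you commit to in advance; the function name and structure are only illustrative.

```python
# Rough sketch, not a canonical formulation: compare the two precommitted
# policies from the quoted dilemma, assuming a fair coin and that Omega's
# prediction exactly matches the committed policy.

def expected_value(pays_when_asked: bool) -> float:
    p_heads = 0.5
    # Heads: Omega asks for $1000; we lose it only if our policy is to pay.
    heads_payoff = -1000 if pays_when_asked else 0
    # Tails: Omega pays $1,000,000 only if it predicted we would pay on heads.
    tails_payoff = 1_000_000 if pays_when_asked else 0
    return p_heads * heads_payoff + (1 - p_heads) * tails_payoff

print(expected_value(True))   # 499500.0 for the precommitted "Yes" policy
print(expected_value(False))  # 0.0 for the "refuse" policy
```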
To me, handing over the $1000 here seems like losing. I disagree that a reflectively consistent agent would give Omega money in this situation.
Since Omega always tells the truth, we should program ourselves to give him $1000 if and only if we don’t know that the coin came up heads. Once we know that the coin came up heads, it no longer makes sense to give him the $1000.
This in no way prevents us from maximizing utility. An opponent of this strategy would contend that it costs us the larger cash prize. That assertion is false, because the refusal only occurs in situations where we have literally zero possibility of receiving any money from Omega. Not giving Omega $1000 in instance H (where we know the coin came up heads) does not mean we wouldn’t give Omega $1000 in instances where we don’t yet know whether H holds.
Once Omega has told us that the coin came up heads, any possibility of a reward is already gone. Choosing not to pay in those situations in no way precludes us from giving him $1000 in scenarios where we still have a chance at the larger cash prize; refusing to pay under H sets no precedent that forfeits the reward. Therefore we should give Omega $1000 if and only if we do not know that the coin landed on heads.
Yay. That seems too easy, I’m kind of worried I made a super obvious logical mistake. But I think it’s right.
Sorry for not using the quote feature, but I’m awful at editing. I even tried using the sandbox and couldn’t get it right.
EDIT: So, unfortunately, I don’t think this solves the issue. It technically does, but it really just amounts to a reason that Omega phrased the offer carelessly and should rephrase his statement. Instead of Omega saying “I would give you $1,000,000 if and only if I predicted that you would give me $1000 if the coin had come up heads”, he should say “I would give you $1,000,000 if and only if I predicted that you would give me $1000 if the coin had come up heads and I told you that the coin came up heads”.
So it’s a minor improvement but it’s nothing really important.
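To make the EDIT concrete, here is a rough sketch of why the conditional rule forfeits the prize once Omega’s prediction is conditioned on the heads announcement. The function names and the way the predictor is modelled are my own assumptions, not anything specified in the problem.

```python
# Hypothetical sketch of the EDIT above: if Omega predicts what the agent
# would do after being told the coin came up heads, then the rule
# "pay iff we don't know it's heads" is predicted to refuse, so it never
# earns the $1,000,000. All names here are illustrative assumptions.

def conditional_rule(knows_heads: bool) -> bool:
    # The strategy proposed above: pay iff we don't know the coin came up heads.
    return not knows_heads

def always_pay(knows_heads: bool) -> bool:
    return True

def tails_reward(policy) -> int:
    # Under the rephrased condition, the predicted scenario is one where
    # Omega has announced heads, so the agent knows the coin came up heads.
    return 1_000_000 if policy(knows_heads=True) else 0

print(tails_reward(conditional_rule))  # 0: the conditional rule loses the prize
print(tails_reward(always_pay))        # 1000000
```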
Just prefix the quote with a single greater-than sign.
I did, but I don’t know how to stop quoting. I can start but I don’t know how to stop.
Also, one of the times I tried to quote it I ended up with an ugly horizontal scroll bar in the middle of the text.
A blank line between the quoted and non-quoted text will end the quote. For example:
>quoted text
more quoted text

non-quoted text