(After talking via PM) Oh, you mean I beat OpenPhil because I was able to predict the direction of their future beliefs! Right.
I mean, I think that I didn’t beat the grantmakers with this bet. Nick (Beckstead) writes in the 2016 recommendations for personal donations that you should donate to MIRI, and one of his reasons is:
My impressions about potential risks from advanced AI have grown closer to Eliezer Yudkowsky’s over time, and I don’t think it would be too surprising if that movement on my end continues. I see additional $ to MIRI as an appropriate response to potential/anticipated future updates.
It’s not obvious to me that four months ago Nick wouldn’t have taken the same side of the bet as I did, against Greg (although that would be weird bettor incentives/behaviour for obvious reasons).
Added: I do think that my reasons were not time-dependent, and I should’ve been able to make the bet in 2016. However, note that the above link, where Nick B and Daniel D both recommend giving to MIRI, is also from 2016, so I’m still not sure I beat them to it.
I don’t think it would be too surprising if that movement on my end continues.
I’m very confused about the notion of fitting expected updating within a Bayesian framework. Consider phenomena like the fact that a Bayesian agent should expect, on net, to never change any particular belief (the expected change in any given credence is zero), even though they might have high credence that they’ll change some belief; or the fact that a Bayesian agent can recognize a median belief change ≠ 0 but not a mean belief change ≠ 0.
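To make this concrete, here’s a toy worked example (the numbers are my own, purely illustrative). Conservation of expected evidence says a Bayesian’s expected posterior equals their prior:

\mathbb{E}\left[P(H \mid E)\right] \;=\; \sum_{e} P(E = e)\, P(H \mid E = e) \;=\; P(H)

Say my prior is P(H) = 0.5, and with probability 0.9 I expect evidence that would move me to 0.55, while with probability 0.1 I expect evidence that would move me to 0.05. The mean change is

0.9 \times (0.55 - 0.5) \;+\; 0.1 \times (0.05 - 0.5) \;=\; 0.045 - 0.045 \;=\; 0,

but the median change is +0.05 ≠ 0: I’m certain my belief will move, and I can even name its most likely direction, yet I can’t gain in expectation by pre-emptively updating in that direction.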
On the theoretical level, I believe that it’s consistent to say “further movement is unsurprising, but I can’t predict in which direction”.
On the practical level, it’s probably also consistent to say “If you forced betting odds out of me now, I’d probably bet that I’ll increase funding to MIRI, so if you trust my view you should donate there yourself. But my process for increasing a grant size involves more steps and deliberation, so I’m not going to immediately decide to increase funding for MIRI; wait for my next report”.
I think I understand this a bit better now, given also Rob’s comment on FB.
On the theoretical level, that’s a very interesting belief to have, because sometimes it doesn’t pay rent in anticipated experience at all. Given that you cannot predict the direction of the change, it seems rational to act as if your belief will not change, even though you are very confident that it will.
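One way to formalise that (my own framing, not anything from the thread): for a fixed action a, expected utility is linear in your credence p,

EU_p(a) \;=\; p \cdot U(a, H) \;+\; (1 - p) \cdot U(a, \neg H),

and since conservation of expected evidence gives \mathbb{E}[p_{\text{future}}] = p_{\text{now}}, the action that looks best under your current credence also looks best when averaged over your possible future credences. The exception is when you can delay the decision until after the update; then waiting can do strictly better, which is one place where the anticipated change does pay rent.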
Your practical example is not a change of belief. It’s rather saying, “I now believe I’ll increase funding to MIRI, but my credence is still <70%, as the formal decision process usually uncovers many surprises.”