It doesn’t measurably update my probability of the ship sinking
When you say it doesn’t update “measurably,” do you mean that it doesn’t update at all, or that it doesn’t update much? I’m not saying you should update much. I’m just saying you should update some. Like, I’m nodding along at your example, but I draw simply the opposite conclusion.
Like suppose we’ve been worried about the imminent unaligned MacGyver threat. Some people say there’s no way he can sink the ship; other people say he can. So the people who say he can confer and try to offer 10 different plausible ways he could sink the ship.
If we found out all ten didn’t work, then—considering that these examples were selected for being the clearest ways he could destroy the ship—it’s hard for me to think this shouldn’t move you down at all. And if finding out all ten didn’t work would move you down, then presumably finding out that just one didn’t work should move you down by some lesser amount.
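The one-versus-ten intuition above can be sketched as a toy Bayes calculation. All of the likelihood numbers here are invented for illustration—the failure probabilities of a proposed plan under each hypothesis, and the independence of the ten plans given capability, are assumptions, not claims from the discussion:

```python
def posterior_after_failures(prior, n_failures,
                             p_fail_if_capable=0.8,
                             p_fail_if_not=0.95):
    """P(he can sink the ship | n proposed plans all failed).

    Assumes plan outcomes are independent given capability (a strong
    assumption), and that a given plan is somewhat more likely to fail
    if he can't sink the ship than if he can. All numbers hypothetical.
    """
    w_capable = prior * p_fail_if_capable ** n_failures
    w_not = (1 - prior) * p_fail_if_not ** n_failures
    return w_capable / (w_capable + w_not)

prior = 0.564  # the reported credence from the thread
one_failure = posterior_after_failures(prior, 1)
ten_failures = posterior_after_failures(prior, 10)
```

With these made-up likelihoods, one failed plan nudges the credence down only a few points, while ten failures drag it down substantially—consistent with the claim that each single failure should move you down by some lesser amount than the whole batch would.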
Imagine a counterfactual world where people had asked, “how can he sink the ship?” and others had responded, “You don’t need to know how; that was just a concrete example, and concrete examples are irrelevant to the principle, which is simply that MacGyver’s superior improvisational skills are sufficient to sink the ship.” I would have lower credence in MacGyver’s ship-sinking ability in the world without concrete examples; I think most people would; I think it would be weird not to. So I think moving in the direction of such a world should similarly lower your credence.
I think the chess analogy is better: if I predict that, from some specific position, MacGyver will play some sequence of ten moves that will leave him winning, and then try to demonstrate that by playing from that position and losing, would you update at all?
I meant “measurably” in a literal sense: nobody can measure the change in my probability estimate, including myself. If my reported probability of MacGyver Ruin after reading Alice’s post was 56.4%, after reading Bob’s post it remains 56.4%. The size of a measurable update will vary based on the hypothetical, but it sounds like we don’t have a detailed model that we trust, so a measurable update would need to be at least 0.1%, possibly larger.
You’re saying I should update “some” and “somewhat”. How much do you mean by that?