Thanks for the clarification.
I’m not sure I quite agree with you about strategic ambiguity, though. Again, imagine that you’d said “I am 80% confident that the human race will still be here in 100 years, because 2+2=5”. If someone says “I don’t know anything about existential risk, but I know that 2+2 isn’t 5, and that, ex falso quodlibet aside, basic arithmetic like this obviously can’t tell us anything about it”, then I am perfectly happy for them to claim that they knew you were wrong, even though they stand to lose nothing if your overall prediction turns out to be right.
(My own position, not that anyone should care: my gut agrees with lsusr’s overall position “but I try not to think with my gut”; I don’t think I understand all the possible ways AI progress could go well enough for any prediction I’d make by explicitly reasoning it out to be worth much; accordingly I decline to make a concrete prediction; I mostly agree that making such predictions is a virtuous activity because it disincentivizes overconfident-looking bullshitting, but I think admitting one’s ignorance is about equally virtuous; the arguments mentioned in the OP seem to me unlikely to be correct, but I could well be missing important insights that would make them more plausible. And I do agree that the comments that prompted the replies I’m gently objecting to would have been improved by adding “and therefore I think our chance of survival is below 20%” or “but I do agree that we will probably still be here in 100 years” or “and I have no idea about the actual prediction lsusr is making” or whatever.)
I think the most virtuous solution to your hypothetical is to say “I don’t know anything about existential risk, but I’d bet at 75% confidence that a mathematician will prove that 2+2≠5” (or something along those lines).
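(Tangentially: the arithmetic half of that bet is already machine-checkable today, so the residual 25% would mostly be pricing in logistics rather than mathematics. A minimal sketch in Lean 4, over the standard natural numbers:

```lean
-- A machine-checked proof that 2 + 2 ≠ 5 over the natural numbers.
-- `decide` evaluates the decidable proposition and confirms it holds.
example : 2 + 2 ≠ 5 := by decide
```

)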