Your comment is contingent on several binary possibilities about my intentions. I appreciate your attempt to address all leaves of the decision tree. Here I will help limit the work you have to do by pinning things down.
To clarify,
My post serves one purpose: to register a public prediction. I am betting reputation. But it makes no sense to bet reputation on something everyone agrees on. It only makes sense to bet on things people disagree on. I’m hoping people will make counter-predictions because that can help verify, in the future, that the claims I made were disputed at the time.
These responses give me the very strong impression—and I would bet fairly heavily that I am not alone in this—that lsusr considers those comments [comments without specific predictions] inappropriate, and is trying to convey something along the lines of “put up or shut up; if you disagree with me then you should be able to make a concrete prediction that differs from mine; otherwise, no one should care about your arguments”.
Not exactly. The comments are perfectly appropriate. I don’t plan on engaging with them in either direction because the purpose of this post isn’t (for me) to debate. It’s to register a public prediction.
So why ask if people want to make a counter-prediction? Because when someone argues against me without making a concrete prediction, right after I have made a public one, their comment lands in a frustratingly ambiguous state: it is not obvious whether it constitutes a counter-prediction. I want to avoid ambiguity in future evaluation of these threads.
To put it another way, someone could claim “I knew lsusr was wrong—see this comment I made” if I turn out to be wrong. That same person could also claim “I don’t lose reputation because I didn’t make an explicit counter-prediction”. I want to avoid this potential for strategic ambiguity.
First, maybe lsusr considers that (especially given the last paragraph in the OP) any discussion of the arguments at all is inappropriate. In that case, I think either (1) the arguments should simply not be in the OP at all or else (2) there should be an explicit statement along the lines of “I am not willing to discuss my reasoning and will ignore comments attempting to do so”.
They’re not arguments intended to convince anyone else of anything. They’re personal reasons for my conclusion. I think it’s better to have them than not to have them, because my post is part of a collaborative effort to find the truth, and more transparency is better toward achieving this end.
As for an explicit statement, here’s something I could try the next time I make a similar post:
This post primarily serves as a public prediction. Please begin all comments with either a counter-prediction or [no prediction]. You are welcome to debate the logic of my reasoning, but do not expect me to engage with you. Right now I am putting skin in the game, not grandstanding.
Second, maybe lsusr is happy to discuss the arguments, but only with people who demonstrate their seriousness by making a concrete prediction that differs from lsusr’s.
Nope, but I appreciate you considering the possibility. I am happy to consider the arguments elsewhere, but the arguments presented here are too thin to defend. For each tiny point I’d have to write a whole post, and if/when I do that I’d rather make an actual top-level post.
Third, maybe the impression I have is wrong, and lsusr doesn’t in fact disapprove of, or dislike, comments addressing the arguments without a concrete disagreement with the conclusions.
I “[don’t] in fact disapprove of, or dislike, comments addressing the arguments without a concrete disagreement with the conclusions.” I enjoy them, actually. I just want to clarify whether the comments are counter-predictions or not.
I appreciate the feedback. I will consider it in the future so as not to give a misleading impression.
I’m not sure I quite agree with you about strategic ambiguity, though. Again, imagine that you’d said “I am 80% confident that the human race will still be here in 100 years, because 2+2=5”. If someone says “I don’t know anything about existential risk, but I know that 2+2 isn’t 5 and that aside from ex falso quodlibet basic arithmetic like this obviously can’t tell us anything about it”, then I am perfectly happy for them to claim that they knew you were wrong even though they didn’t stand to lose anything if your overall prediction turns out right.
(My own position, not that anyone should care: my gut agrees with lsusr’s overall position “but I try not to think with my gut”; I don’t think I understand all the possible ways AI progress could go well enough for any prediction I’d make by explicitly reasoning it out to be worth much; accordingly I decline to make a concrete prediction; I mostly agree that making such predictions is a virtuous activity because it disincentivizes overconfident-looking bullshitting, but I think admitting one’s ignorance is about equally virtuous; the arguments mentioned in the OP seem to me unlikely to be correct but I could well be missing important insights that would make them more plausible. And I do agree that the comments lsusr replied to in the way I’m gently objecting to would have been improved by adding “and therefore I think our chance of survival is below 20%” or “but I do agree that we will probably still be here in 100 years” or “and I have no idea about the actual prediction lsusr is making” or whatever.)
I think the most virtuous solution to your hypothetical is to say “I don’t know anything about existential risk, but I’d bet at 75% confidence that a mathematician will prove that 2+2≠5” (or something along those lines).
Thanks for the clarification.