So I think what I’m getting here is that you have an object-level disagreement (not as convinced about doom), but you are also reinforcing that object-level disagreement with signalling/reputational considerations (this will just alienate people). This pattern feels ugh and worries me. It seems highly important to separate the question of what’s true from the reputational question. It furthermore seems highly important to separate arguments about what makes sense to say publicly on-your-world-model vs on-Eliezer’s-model. In particular, it is unclear to me whether your position is “it is dangerously wrong to speak the truth about AI risk” vs “Eliezer’s position is dangerously wrong” (or perhaps both).
I guess that your disagreement with Eliezer is large but not that large (i.e., you would call it a disagreement between reasonable people, not insanity). It is of course possible to consistently maintain that (1) Eliezer’s view is reasonable, (2) on Eliezer’s view, it is strategically acceptable to speak out, and (3) it is not in fact strategically acceptable for people with Eliezer’s views to speak out about those views. But this combination of views does imply endorsing the silencing of reasonable disagreements, which seems unfortunate and anti-epistemic.
My own guess is that the maintenance of such anti-epistemic silences is itself an important factor contributing to doom. But this could be incorrect.
Yeah, so just to clarify a few things:
This was posted on the day of the open letter and I was indeed confused about what to think of the situation.
I think something I failed to properly communicate is that I was worried this was a bad time to pull the lever, even though I’m concerned about risks from AGI. I was worried the public wouldn’t take alignment seriously because the letter would cause a panic much sooner than people were ready for.
I care about being truthful, but I care even more about not dying, so my comment was mostly trying to communicate that I didn’t think this was the best strategic decision for not dying.
I was seeing a lot of people write negative statements about the open letter on Twitter, and it fed my fears that this was going to backfire as a strategy and hurt all of our work to get AI risk taken seriously.
In the end, the thing that matters is that we win (i.e., not dying from AGI).
I’m not fully sure what I think now (mostly because I don’t know about higher-order effects that will play out 2-3 years from now), but I think it turned out a lot better strategically than I initially expected.