Thanks. I’ll probably reply to different parts in different threads.
For the first bit:
My guess is that the parts of the core leadership of Anthropic which are thinking actively about misalignment risks (in particular, Dario and Jared) think that misalignment risk is like ~5x smaller than I think it is while also thinking that risks from totalitarian regimes are like 2x worse than I think they are. I think the typical views of opinionated employees on the alignment science team are closer to my views than to the views of leadership. I think this explains a lot about how Anthropic operates.
The rough numbers you give are helpful. I’m not 100% sure I see the dots you’re intending to connect with “leadership thinks 1/5-ryan-misalignment and 2x-ryan-totalitarianism” / “rest of alignment science team closer to ryan” → “this explains a lot.”
Is this just the obvious “welp, leadership isn’t bought into this risk model and calls most of the shots, even while in conversation with several employees who engage more with misalignment”? Or was there a more specific dynamic you thought it explained?
Yep, just the obvious. (I’d say “much less bought in” than “isn’t bought in”, but whatever.)
I don’t really have dots I’m trying to connect here, but this feels more central to me than what you discuss. Like, I think “alignment might be really, really hard” (which you focus on) is less of the crux than “is misalignment that likely to be a serious problem at all?” in explaining how Anthropic operates. Another way to put this is that I think “is misalignment the biggest problem” is maybe more of the crux than “is misalignment going to be really, really hard to resolve in some worlds”. I see why you went straight to your belief though.