As an aside, I think it’s good for people and organizations (especially AI labs) to clearly state their views on AI risk, see e.g., my comment here. So I agree with this aspect of the post.
Stating clear views on what ideal government/international policy would look like also seems good.
(And I agree with a bunch of other miscellaneous specific points in the post, like “we can maybe push the Overton window far” and “avoiding saying true things to retain respectability in order to get more power is sketchy”.)
(Edit: from a communication best practices perspective, I wish I had noted where I agree in the parent comment rather than here.)