There are some good thoughts here; I like this enough that I am going
to comment on the effective-strategies angle. You state that
The wider AI research community is an almost-optimal engine of apocalypse.
and
AI capabilities are advancing rapidly, while our attempts to align it proceed at a frustratingly slow pace.
I have to observe that, even though certain people on this forum
definitely do believe the above two statements, even on this forum
this extreme level of pessimism is a minority opinion. Personally, I
have been quite pleased with the pace of progress in alignment
research.
This level of disagreement, which is almost inevitable as it involves estimates about the future,
has important implications for the problem of convincing people:
As per above, we’d be fighting an uphill battle here. Researchers
and managers are knowledgeable on the subject, have undoubtedly
heard about AI risk already, and weren’t convinced.
I’d say that you would indeed be facing an uphill battle if you wanted
to convince most researchers and managers that the recent late-stage
Yudkowsky estimates about the inevitability of an AI apocalypse are
correct.
The effective framing you are looking for, even if you yourself believe
that Yudkowsky is fully correct, is that more work is needed
on reducing long-term AI risks. Researchers and managers in the AI
industry might agree with you on that, even if they disagree with you
and Yudkowsky about other things.
Whether these researchers and managers will change their whole career just because they agree with you is a different matter. Most will not. This is a separate problem, and should be treated as such. Trying to solve both problems at once by making people deeply afraid about the AI apocalypse is a losing strategy.