Addressing the post directly: a focus on AI risk feels like something worth experimenting with.
My rough model suggests that the main downside is brand risk. If so, experimenting with AI risk in the CFAR context seems like a potentially high-value avenue of exploration, since brand damage can be mitigated.
This doesn’t seem like a controversial claim (“be specific, not vague” is one of the most timeless heuristics out there), but it does seem worth highlighting.
I would also add that my favorite claim so far (“Effective Altruism’s current message discourages creativity”) was itself not particularly well-specified: “creativity” and “EA’s current message” are not very specific, imo.