Effective Altruist and Rationalist Views & the Case for Using Marketing to Highlight AI Risks

Link post

The link points to a particular timestamp in a much longer podcast episode. The segment plays immediately after the interview with Kat Woods (co-founder of Nonlinear), skipping past the part about requesting donations. In it, the podcast host John Sherman specifically calls out the apparent lack of instrumental rationality on the part of the Rationalist and Effective Altruism communities when it comes to stopping our impending AI doom: in particular, our reluctance to use the Dark Arts, or at least symmetric weapons (like “marketing”), in the interest of maintaining our epistemic “purity”.

(For those not yet aware, Sherman was persuaded by Yudkowsky’s TIME article and created the For Humanity Podcast in an effort to spread the word about AI x-risk and thereby reduce it. This is an excerpt from Episode #24, the latest at the time of writing.)

I have my own thoughts about this, but I’m not fully aware of trends in the broader community, so I thought I’d create a space for discussion. Is the criticism fair? Are there any Rationalist/​EA projects Sherman is unaware of that might change his mind? Have we failed? Are we just not winning hard enough? Should we change? If so, what should we change?

My (initial) Thoughts

I’m less involved with the EA side, but I feel that LessWrong in particular is a bastion of sanity in a mad world, and that this is worth protecting, even if it means LessWrong proper doesn’t get much done. Maxims like “Aim to explain, not persuade” are good for our collective epistemics, but they also seem to prohibit the persuasion that collective action requires.

I think this is fine? Politics easily becomes toxic; it risks poisoning the well. There’s no prohibition on rationalists building action- or single issue–focused institutions outside of LessWrong, and there have been reports of people doing this. (I even kind of co-founded one, starting from LessWrong, but it’s not super active.) Announcing what they’re starting, doing postmortems on how things went, or explaining organizational principles all seem totally kosher for LessWrong to me. I feel like I’m seeing some of this happening too, but maybe not enough?

What I’m not seeing is any kind of pipeline for skilling up our group rationality, especially of the instrumental flavor. That’s not to say there’s been zero effort.

Also, I’m personally not a marketer, or even very skilled socially. The kind of action Sherman seems to be asking for is probably not my comparative advantage. Should I be doing something else to contribute? Or should I skill up in whatever seems the most important? I’m not sure, and I expect my answer won’t be the same for everyone.