Campaign for AI Safety: Please join me

I will start by saying that I generally agree with Yudkowsky’s position on AI. We must proceed with extreme caution. We must radically slow down AI capability advancement. We must invest unfathomable amounts of resources in AI alignment research. We need to enact laws and treaties that will help keep it all together for as long as possible and hopefully we figure things out in time.

The laughter at the recent White House press conference, in response to a question about Yudkowsky’s argument, indicates how far the public debate is from a sensible position of caution.

But I am hopeful that we can change that. Few people laugh at nuclear weapons now. We are a species capable of cooperation and of taking things seriously. As the saying goes:

“First they ignore you, then they laugh at you, then they fight you, then you win.”

What is missing is public understanding of the dangers of misaligned or unaligned AI. Democracy does not work in darkness. People must know the dangers, the uncertainty, and the ways they can contribute.

That’s why I am proposing a campaign to raise public awareness of x-risk from AI. So far, it’s just me and my wife. Please join me, especially if you work in advertising, marketing, PR, activism, politics, law, etc., if you know how to make a website, or if you want to create PR materials, meet journalists, do accounting, fund-raising, and so on.

Please share this with people who do not read Less Wrong but who are freaked out and want to do something.

I do not know exactly how this campaign will run, or which countries to focus on. I am only human myself and can contribute very little of the total required effort. My background is in consulting and market research, and I run a market research company. Personally, at this stage, I can best contribute by coordinating and facilitating operations.

We need people, money, expertise, patience, etc. Please join: https://campaignforaisafety.org/.