[Question] Your specific attitudes towards AI safety

Hi everyone! We would love your responses to this survey about attitudes toward AI risk, given your personal context: Google Forms link.

Edit: We have updated the survey based on the feedback given in the comments. If you already answered it, you don't need to do so again.

Further information

The unique value proposition is to capture attitudes toward AI safety in the context of your initial exposure point, knowledge level, and demographics, whereas other surveys often focus on pure forecasting and field overviews. It sprang from this article and further discussion about how people first came to know about AI safety. This survey is aimed at people already engaged with AI safety, while the next survey will be reformulated for a non-EA, non-rationalist audience to help inform outreach strategy design.

We have shared the survey in an array of groups and forums (the EA Forum, the AI Safety Discussion Facebook group, and more) and expect ~100 responses. The specific contrasts we're hoping to infer and analyze (where A <> B denotes an association between A and B) are:

  • Prior knowledge <> (% AGI developed <> % AGI dangerous)

  • (First exposure point <> Initial impressions) <> (Currently convincing arguments <> Current impressions)

  • (Occupation + Age + Country) <> [*]

We expect to publish the results on LessWrong and the EA Forum with in-depth exploratory analyses in light of the contrasts above.

Subjectivity is part of the survey and one of the reasons we made it, so be prepared for ambiguous questions.