What are all the high-level answers to “What should you, a layperson, do about AI x-risk?” Happy to receive a link to an existing list.
Mine, from 5 minutes of recalling answers I’ve heard:
Don’t work for OpenAI
Found or work for an AI lab that gains a lead on capabilities, while remaining relatively safe
Maybe work for Anthropic, they seem least bad
Don’t work for any AI lab
Don’t take any action which increases revenue of any AI lab
Mourn
Do technical AI alignment
Don’t do technical AI alignment
Do AI governance & advocacy
Donate to AI x-risk funds
Cope
Don’t perform domestic terrorism
Create educational content to sway opinions of large voting demographics. Especially when you can successfully signal that you are a part of that demographic.
Do what Zvi is doing but for a lower IQ audience
Form organisations in your local area
Try your best to avoid making AI a culture war
Stay grounded
Make humans (who are) better at thinking (imo maybe continuing this way ≈forever, not just until humans can “solve AI alignment”)
Think well: do math, philosophy, etc.; learn stuff; become better at thinking
Live a good life
Your link to “don’t do technical AI alignment” does not argue for that claim. In fact, it appears to assume the opposite is true, but that there are many distractor hypotheses for how to do it that will turn out to be an expensive waste of time.