I don’t think this is a very helpful approach, and you’re not doing yourself justice by taking it. Calling/writing Congress might be several orders of magnitude more effective than voting, but it’s still at least one order of magnitude below what most people could be doing with the same amount of effort.
There are tons of examples of things to do that aren’t this. I’ve been interested in using imagery to make it much easier to explain AI risk to someone for the first time. Raemon has repeatedly endorsed Scott Alexander’s Superintelligence FAQ post as the best known layperson-friendly introduction to AI safety (he also endorsed this post for ML engineers). Anyone can research and write a post like Lessons learned from talking with 100 academics about AI safety to similar effect, because everyone has access to a sample of people who don’t yet know about AI safety (talk to them in person; don’t use social media, or your data will be crap and you’ll drag everyone down with you).
Is the goal of all this persuasion to get people to fire off a letter like the one above?
The goal of the explanation is to give people a fair chance of understanding AI risk. You can either give someone a fair chance to model the world correctly, or you can fail to give them that chance. More fairness is better.
I could tell from the post that Omid did not feel confident in their ability to give someone a fair chance at understanding AI risk.