The goal of the explanation is to give people a fair chance of understanding AI risk. You either give someone a fair chance to model the world correctly, or you fail to give them that chance. More fairness is better.
I could tell from the post that Omid did not feel confident in their ability to give someone a fair chance at understanding AI risk.
Is the goal of all this persuasion to get people to fire off a letter like the one above?