Thanks! I would also add that the OP ended up advertising a project which would require Dustin Moskovitz to succeed in becoming POTUS. I don’t understand how that is even possible.
Suppose, by analogy, that we have three candidates: Trump, Biden, and Yudkowsky, and that before our efforts Yudkowsky was unelectable, while Trump and Biden each had a 50% probability of being elected but were known not to do anything about AI safety. Also suppose that Plan A gives Trump or Biden a 10% chance of accepting the anti-ASI treaty, while Plan B elects Yudkowsky with probability 1% and fails to change Trump's or Biden's opinions. Under these conditions, I wouldn’t vote for Plan B.
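A minimal sketch of the expected-value comparison this implies, under the assumptions that the treaty happens only if the elected candidate supports it and that electing Yudkowsky counts as guaranteed support (both are my reading of the setup, not stated outright above):

```python
# Plan A vs Plan B, as described in the hypothetical above.
# Assumption: one of Trump/Biden is elected by default (50% + 50% = 100%),
# and an elected Yudkowsky would support the treaty with certainty.

p_status_quo_elected = 1.0   # baseline: Trump or Biden wins
p_accept_given_plan_a = 0.10 # Plan A: winner accepts the treaty with 10% chance
p_yudkowsky_plan_b = 0.01    # Plan B: Yudkowsky elected with 1% chance

p_treaty_plan_a = p_status_quo_elected * p_accept_given_plan_a  # 0.10
p_treaty_plan_b = p_yudkowsky_plan_b * 1.0                      # 0.01

# Under these numbers, Plan A is 10x more likely to produce the treaty.
assert p_treaty_plan_a > p_treaty_plan_b
```

The comparison only supports preferring Plan A if the two plans are mutually exclusive, which is exactly the premise the reply below disputes.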
You act like Plan A and Plan B are mutually exclusive, when they are not (let’s say Plan B is Dustin Moskovitz running for president; Yudkowsky would be a far worse candidate given his lack of a college or high school diploma, weaker public speaking skills, and relative lack of executive leadership). Furthermore, the whole point is that there is no credible Plan A without at least the media coverage that implementing a Plan B would generate. Enough politicians won’t care until it’s abundantly clear that voters do.
I am arguing against an attitude on this forum that political engagement is too difficult to model and ultimately pointless to engage in. I have definitely encountered this attitude in the past.
I’m confused who you’re arguing against. There have already been posts arguing that people who want certain AI policies should support / donate to specific candidates (Alex Bores, Dustin Moskovitz, Scott Wiener), plus a bunch of AI Safety orgs trying to influence politics more directly (ControlAI, MIRI, CAIP, more ControlAI), direct action by individuals (comments on the Whitehouse AI plan), and a bunch of meta posts.
Edited to add: the post which argued for Dustin Moskovitz was written by Kuperman himself.