More productive would be exploiting tensions: if someone claims voting is a fantastic idea because of 1 to millions odds of affecting the outcome, why don’t they accept this same reasoning in other cases like existential risks?
Among other reasons, because:
they believe the odds estimates for voting, but not those for existential risk (voting and polling data are more solid evidence)
it’s socially popular, and voting can be good for your social standing; people tend to live in politically segregated localities and communities, so for most people voting means affiliating with the groups your associates belong to
there are huge marketing campaigns for voting, since politicians and advocates have a vested interest in convincing people to vote for them, and in the process they collectively produce generic pro-voting spillovers
voting is a cheap and bounded commitment (although political donations can be arbitrarily large)
voting affects current generations and fellow citizens more, relative to future generations
voting invokes coalitional thinking and group loyalty/morality more
consequences of votes, conditional on decisiveness, are revealed to a degree relatively soon
voting justifies reading about or watching politics for political junkies and policy wonks
I think the last reason is illegitimate, because it is symmetrical with the existential risk case. Just as voting justifies following politics, so does trying to decrease existential risk justify soaking up X-risk information. Therefore, someone who accepts it as a reason to vote should accept it as a reason to try to mitigate X-risks.
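The symmetry argument can be made concrete with a back-of-the-envelope expected-value calculation. Every number below is an illustrative assumption chosen only to show the structure of the comparison, not an estimate drawn from the discussion:

```python
# Expected value of an action = probability of mattering * value if it matters.
# All figures here are illustrative assumptions, not real estimates.

p_decisive_vote = 1e-7       # assumed ~1-in-10-million chance one vote is decisive
value_if_decisive = 1e11     # assumed social value ($) of the better candidate winning

p_avert_xrisk = 1e-9         # assumed chance one person's effort averts a catastrophe
value_if_averted = 1e15      # assumed value ($) of avoiding an existential catastrophe

ev_vote = p_decisive_vote * value_if_decisive   # tiny odds * large stakes
ev_xrisk = p_avert_xrisk * value_if_averted     # even tinier odds * even larger stakes

print(f"EV of voting:          ${ev_vote:,.0f}")
print(f"EV of X-risk work:     ${ev_xrisk:,.0f}")
```

Under these (assumed) numbers, the X-risk effort has the higher expected value despite its far lower probability of mattering, which is exactly the tension the argument exploits: anyone who endorses voting on expected-value grounds cannot dismiss X-risk mitigation merely because the odds are long.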
Vote Eliezer for President!
Great, now HPMOR will never get finished.
Hmm, how would that world look, assuming he had his way? Billions spent on FAI research and cryonics? Mandatory basic rationality training? Legalizing polyamory marriage? Erecting statues of Bayes?
To give a boring answer:
If we assume there wouldn’t be any other major changes to the political structure (e.g. no Bayesian party in Congress), then the effect on policy outcomes would be fairly minor. For better or worse, the president doesn’t have that much direct power and has to work with a lot of other interested groups.
Also, I think people underestimate the domain-specific knowledge involved in politics; there’s no reason to believe that being rational would make Eliezer a particularly effective politician, any more than it would make him a good doctor or lawyer.
The main distinctive power the president has is publicity, so Eliezer could probably increase attention on existential risk and FAI issues, but how much concrete change that would produce, I don’t know.