Like, the most important thing to estimate when evaluating a political candidate is their trustworthiness and integrity! It’s the thing that would flip the sign on whether supporting someone is good or bad for the world.
I agree that this is an important thing that deserved more consideration in Eric’s analysis (I wrote a note about it on Oct 22 but then forgot to include it in my post yesterday). But I don’t think it’s too hard to put into a model (although it is hard to find the right numbers to use). The model I wrote down in my note is:

- 30% chance Bores would oppose an AI pause / strong AI regulations (b/c it’s too “anti-innovation” or something)
- 40% chance Bores would support strong regulations and advocate for them
- 30% chance Bores would vote for strong regulations but not advocate for them
- 90% chance Bores would support weak/moderate AI regulations

(The first three outcomes are mutually exclusive and sum to 100%; the 90% is a separate estimate for weak/moderate regulations.)
My guess is that 2/3 of the EV comes from strong regulations and 1/3 from weak regulations (I came up with a justification for this split earlier today, but it’s too complicated to fit in this comment). On those numbers, these considerations reduce the EV to about 37% of what it would otherwise be (i.e., they roughly divide the EV by 3).
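Spelled out, here is one weighting of the cases that reproduces the ~37% figure (a rough sketch, not the full justification: the weights for opposition and for voting-without-advocating are illustrative assumptions on my part):

```python
# Sketch of the EV multiplier implied by the model above. The case
# weights are illustrative assumptions: full support realizes the
# strong-regulation value (+1), opposition flips its sign (-1), and
# voting without advocating is treated as roughly neutral (0).

p_oppose = 0.30     # opposes an AI pause / strong AI regulations
p_support = 0.40    # supports strong regulations and advocates for them
p_vote_only = 0.30  # votes for strong regulations but doesn't advocate

p_weak = 0.90       # supports weak/moderate AI regulations

ev_share_strong = 2 / 3  # guessed share of EV from strong regulations
ev_share_weak = 1 / 3    # guessed share of EV from weak regulations

# Net fraction of the strong-regulation value realized in expectation
strong_net = (+1) * p_support + (0) * p_vote_only + (-1) * p_oppose  # 0.10

ev_multiplier = ev_share_strong * strong_net + ev_share_weak * p_weak
print(f"EV multiplier: {ev_multiplier:.0%}")  # EV multiplier: 37%
```

On this weighting the multiplier is 0.367, and 1/0.367 ≈ 2.7, which is where “roughly divide EV by 3” comes from.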
FWIW I wouldn’t say “trustworthiness” is the most important thing; it’s more like “can be trusted to take AI risk seriously”, and my model is more about the latter. (A trustworthy politician who is honest about the fact that they don’t care about AI safety will not be getting any donations from me.)
Yeah, I pretty much agree with what you’re saying. But I think I misunderstood your comment before mine: the thing you’re talking about was not captured by the model in my last comment, so I have some more thinking to do.
I didn’t mean “can be trusted to take AI risk seriously” as “indeterminate trustworthiness but cares about x-risk”; I meant it as the conjunction of trustworthy + cares about x-risk.