I work at the Alignment Research Center (ARC). I write a blog on stuff I’m interested in (such as math, philosophy, puzzles, statistics, and elections): https://ericneyman.wordpress.com/
Eric Neyman
Yeah. I guess another piece of it is that it’s straightforwardly unpleasant to have attack ads being run against you, regardless of how much they affect your chances of winning.
Update on the Alex Bores campaign
Some examples (epistemic status: not very thought-through; I’m more confident that there are uses than I am in any specific one):
If there’s a massive increase in cyberattacks, prediction markets could help predict the scale.
AI tools might result in a bioengineered pandemic; here, the value is similar to what it would have been for Covid. But you might also have useful markets like “Will there be a consensus that AI helped engineer this virus?” in situations where no consensus has formed.
If we lose control of some (non-superhuman) AIs, in a way that turns out to be hard to shut down, it may be useful to predict what kinds of things those AIs will try to do.
There’s been increased discourse on whether prediction markets are net-positive for the world.
My take: so far, they haven’t been clearly net-positive. However, when I think back to Covid, I sure wish that prediction markets had been as mature a technology as they are today; prediction markets in February 2020 on how many cases there would be in April 2020 would probably have made the world marginally more sane.
Prediction markets are probably most useful in a crisis, where decisions need to be made quickly based on uncertain information. I find it plausible that we’ll have such a crisis within the next decade, particularly in the context of AI. And I think that the benefits prediction markets would provide in such a crisis will likely outweigh the negatives incurred thus far through things like increased sports gambling.
I think that, while many LessWrong readers do believe that one party is way better than the other, such that the inter-party quality variation is far larger than the intra-party quality variation, this is not true of all readers.
And I think it’s a reasonable move to write a post that says “Assuming that these are your values/beliefs, you should do X” without taking a position on whether those values/beliefs are correct: it can be valuable and action-guiding for such people!
This consideration is meant to be included in the evaluation of Donna and Randy. As in, I am supposing that they are of similar quality after taking into account the dynamic you mention.
I don’t understand how what you’re saying is in tension with what I’m saying. My post makes no object-level claims about the relative goodness of Democrats and Republicans. I’m merely positing a hypothetical in which you think Donna and Randy would be equally good as president, despite being nominated by two different parties, one of which you prefer to the other.
Is it fair to assume that Obama-McCain and Obama-Romney were the background thoughts that led to this post?
Nope. I was thinking about this in the context of imagining hypothetical nominees in the 2028 presidential election (I probably won’t say who specifically I was imagining).
Suppose that you generally prefer Democrats to Republicans, but Republicans nominate Randy, who’s an above-average Republican, for president, while Democrats nominate Donna, a below-average Democrat, for president, such that you’re actually roughly neutral between them.
Even though you think Randy and Donna would be about equally good as president, I claim that you should vote for Randy. That’s because, if Randy becomes president, he’s “locked in” as his party’s nominee in the next presidential election, which is great from your perspective. You’d much rather the next presidential election be contested between Randy and a generic Democrat than between Donna and a generic Republican.
This difference can be important enough that you might sometimes want to vote for Randy even if you actually prefer Donna as president by a small-to-medium amount.
(This of course works symmetrically if you switch the two parties in my example.)
Huh, I guess I’m not familiar with the connotations? I’m used to seeing it used literally.
LW react suggestion: “big if true”
Ah I see. I think the analogous thing would be if Harris but not Trump could appear on PBS. Which I think would be quite bad. But maybe not so bad that it would tempt me into calling the US “not a democracy”.
Sorry, but the analogous situation is clearly if Biden had banned Trump from appearing on TV. Further, the reason it was hard for RFK Jr. to get on TV was financial decisions made by TV channels, rather than political decisions made by the federal government.
I dunno man, not letting your opposition appear on TV is pretty far into “not a democracy” territory.
I don’t think that’s the cause. I think there are two main causes:
Incumbent governments around the world have had a super tough time since around 2023. The chart below only goes up to 2024, but I think it has held up: the party in power has consistently been doing really poorly across the world.
Separately, I think that some of Trump’s aggressive foreign policy actions (such as tariffs against allies) have made right-wing parties do worse since he became president. Most famously, this led to the Liberal Party unexpectedly holding onto power in Canada, despite looking like it was going to lose in a landslide before Trump took power.
Often (but not always) I can distill my confusion down to two things that I believe to be true that seem to be in contradiction (or in tension).
I grimly predict that they would basically behave like OpenBrain does in AI 2027.
Whether this is true or not seems like a critically important question.
My understanding is that the “Anthropic consensus”, to the extent that such a thing exists, is that catastrophic misalignment is pretty unlikely, and that other kinds of risks stemming from powerful actors misusing AI account for most of the ways that humanity might fail to achieve its long-term potential.
I’m curious whether you consider that to be a crux: if you agreed with the “Anthropic consensus” on this point, do you think you would act in a way that is similar to the way that you’re predicting they will in fact act?
Riffing off Friendly Fire, and with apologies to the non-Russian speakers:
We talk existential risk like it’s casual TV drama
Кто кого — московский «Спартак» или киевское «Динамо»? (Who beats whom: Moscow’s Spartak or Kyiv’s Dynamo?)
Ah, thanks! I didn’t understand that it meant “…to go bad”.
Thanks! I meant to make the narrower point that my probability that the race will be decided by a small number of votes has gone up. I’ve expanded on footnote 8 to clarify how I’ve updated / what has changed.