I work at the Alignment Research Center (ARC). I write a blog on stuff I’m interested in (such as math, philosophy, puzzles, statistics, and elections): https://ericneyman.wordpress.com/
Eric Neyman
Sorry but the analogous situation is clearly if Biden had banned Trump from appearing on TV. Further, the reason it was hard for RFK Jr. to get on TV was financial decisions made by TV channels, rather than political decisions made by the federal government.
I dunno man, not letting your opposition appear on TV is pretty far into “not a democracy” territory.
I don’t think that’s the cause. I think there are two main causes:
Incumbent governments around the world have had a super tough time since around 2023. The chart below only goes up to 2024, but I think it has held up: the party in power has consistently been doing really poorly across the world.
Separately, I think that some of Trump’s aggressive foreign policy actions (such as tariffs against allies) have made right-wing parties do worse since he became president. Most famously, this led to the Liberal Party unexpectedly holding onto power in Canada, despite looking like it was going to lose in a landslide before Trump took power.
Often (but not always) I can distill my confusion down to two things that I believe to be true that seem to be in contradiction (or in tension).
I grimly predict that they would basically behave like OpenBrain does in AI 2027.
Whether this is true or not seems like a critically important question.
My understanding is that the “Anthropic consensus”, to the extent that such a thing exists, is that catastrophic misalignment is pretty unlikely, and that other kinds of risks stemming from powerful actors misusing AI account for most of the risk that humanity fails to achieve its long-term potential.
I’m curious whether you consider that to be a crux: if you agreed with the “Anthropic consensus” on this point, do you think you would act in a way that is similar to the way that you’re predicting they will in fact act?
Riffing off Friendly Fire, and with apologies to the non-Russian speakers:
We talk existential risk like it’s casual TV drama
Кто кого — московский «Спартак» или киевское «Динамо»? (Who will beat whom: Moscow’s Spartak or Kyiv’s Dynamo?)
Ah, thanks! I didn’t understand that it meant “...to go bad”.
Loosely inspired by this post, I believe!
Excited to listen to the album!
I understand what Friday’s Far Enough for Milk is about, but what does the title mean?
I continue to believe that donations to Bores are the #1 best donation opportunity, and that donations to Wiener are the #2 donation opportunity!
+1 to Zach’s comment. It’s true that politicians always ask for more money; however, we don’t always ask for more money from donors. For almost all fundraisers, there’s a target amount we’re trying to hit. The only exceptions are the rare fundraisers that we think are good enough that we won’t be able to saturate them to our funding bar.
So yeah, politicians don’t say “we have enough money, save it for the next guy”, but we do.
My threat model is actually
More of the lightcone will be controlled by the Chinese government (or its successor).
My current guess is that the long-term future looks better if American actors have more bargaining power over the long-term future than Chinese actors. If space is subdivided amongst the key actors (including the Chinese government, the U.S. government, and possibly others), I worry that the parts controlled by the Chinese government would be illiberal, in the same kinds of ways that China is now. One particularly bad version of this is an AI-enabled surveillance state that locks in something like current Chinese ideology, curtailing the potential for moral improvement. I think this is less likely in the parts of space controlled by the U.S. government, because I think those are reasonably likely to be founded on fairly liberal values, perhaps similar in spirit to the U.S. constitution.
I’m not sure if this is the answer you’re looking for, but: most things that could exist don’t. The space of ideas is wide, and few of them are implemented in practice. Is this idea particularly privileged in the space of possible governance ideas, in such a way where you would have expected it to have been tried?
A couple of other things that stand out to me as particularly egregious:
My understanding is that Trump is far more corrupt than past presidents (including Trump in his first term). An example of this is relaxing export controls to allow sales of Nvidia’s AI chips to China in exchange for gifts from Jensen Huang.
The Trump administration has launched criminal investigations against political opponents at an unprecedented rate, most recently against Jerome Powell yesterday.
And of course, the fake electors plot to steal the 2020 presidential election (not to be confused with January 6th—I think his conduct on January 6th was really bad, but the fake electors plot is a much greater indictment of Trump’s character and much stronger evidence of his authoritarianism).
We finally did it, we found the median voter!
I think that by far the most important thing in this space is for a Democrat to win the 2028 presidential election. And I think the most important thing for making that happen is to nominate a Democrat whose positions on the issues are relatively close to the median voter.
We can get a sense of this by seeing how much potential Democratic candidates outperformed fundamentals (i.e. what you would have predicted given the state they were running in and the political environment that year). Some candidates who have done well on this metric include:
Andy Beshear (governor of Kentucky, a really red state)
Josh Shapiro (governor of Pennsylvania, a swing state, where he won his election by a large margin)
Amy Klobuchar (senator from Minnesota)
Ruben Gallego (senator from Arizona)
Mark Kelly (senator from Arizona)
Raphael Warnock (senator from Georgia)
Some candidates who have not done well on this metric include:
Gavin Newsom
Kamala Harris
Tim Walz
AOC
[Edited to add] Elizabeth Warren fares particularly badly on this metric, though I don’t think she’ll run in 2028.
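To make the metric above concrete, here is a minimal sketch of what “outperforming fundamentals” could look like as arithmetic. This is my own illustrative simplification, not the actual model: the baseline and all numbers below are hypothetical, and real fundamentals models include more inputs than state lean plus national environment.

```python
# Illustrative sketch of "outperforming fundamentals" (hypothetical
# baseline and made-up numbers, not real election data or a real model).

def fundamentals_margin(state_partisan_lean: float,
                        national_environment: float) -> float:
    """Expected Democratic margin (in points) given the state's partisan
    lean and the national political environment that year."""
    return state_partisan_lean + national_environment

def overperformance(actual_margin: float,
                    state_partisan_lean: float,
                    national_environment: float) -> float:
    """How much a candidate beat (positive) or trailed (negative) the
    fundamentals baseline."""
    return actual_margin - fundamentals_margin(state_partisan_lean,
                                               national_environment)

# Hypothetical example: a Democrat wins by 5 points in a state that
# leans 15 points Republican, in a neutral national environment.
print(overperformance(actual_margin=5.0,
                      state_partisan_lean=-15.0,
                      national_environment=0.0))  # -> 20.0
```

On this toy version of the metric, a candidate like Beshear winning a deep-red state would score far better than a candidate winning a deep-blue state by a similar margin.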
Anthropic: “we expect powerful AI systems will emerge in late 2026 or early 2027… Intellectual capabilities matching or exceeding that of Nobel Prize winners across most disciplines… The ability to navigate all interfaces available to a human doing digital work today… The ability to autonomously reason through complex tasks over extended periods—hours, days, or even weeks… The ability to interface with the physical world”
This is kind of annoyingly phrased, because it sounds like they’re saying that AIs will be making Nobel Prize-level discoveries by late 2026 or early 2027. But in fact that’s not what they’re saying. They’re only claiming that AIs will be capable of doing tasks that Nobel Prize winners could do in hours or days (or maybe even weeks), whereas Nobel Prize-level discoveries take years.
Bores repeatedly addressed concerns about regulatory burden by saying that frontier AI developers’ own memos said this bill would add 1 full-time employee, and so wasn’t that burdensome.
I’d be surprised if this were true, and very surprised if it’s what frontier developers said even if it were true, given their incentives. I’m waiting to hear back on the memo.
As far as I can tell, Bores never said that frontier AI developers’ own memos said this; rather, it was that an opposition memo said this. Bores mentions this memo a few times during the 90 minutes; here’s a typical quote:
I’ll note that what came in as an opposition memo said that they estimated that this would require one full-time employee to comply with.
I believe that this is the memo that Bores was talking about. It was written by Will Rinehart of the American Enterprise Institute, which opposed the bill.
I just found myself here (two years later) because of a discussion about Control AI. I feel conflicted but am closer to agreeing than disagreeing with your comment. I think that my original comment was somewhat written in soldier mindset.
Ah I see. I think the analogous thing would be if Harris but not Trump could appear on PBS. Which I think would be quite bad. But maybe not so bad that it would tempt me into calling the US “not a democracy”.