This is all related to something Buck recently wrote: “I spend most of my time thinking about relatively cheap interventions that AI companies could implement to reduce risk assuming a low budget, and about how to cause AI companies to marginally increase that budget”. I’m sure Buck has thought a lot about his strategy here, and I’m sure that you’ve thought a lot about your strategy as laid out in this post, and so on. But a part of me is sitting here thinking: man, everyone sure seems to have given up. (And yes, I know it doesn’t feel like giving up from the inside, but from my perspective that’s part of the problem.)
Thanks for pointing this out.
I’ve been thinking lately about how folks around here more or less dismiss the idea of an AI pause as unrealistic, on the grounds that we’re not going to get that much political buy-in.
I (speculatively) think that this mindset is assuming its own conclusion. Big political changes like that one have happened in the past, and they have often seemed impossible before they happened and inevitable in retrospect. When something that big changes, part of the process is a cascade, in which whole deferral structures change their minds, attitudes, and preferences about something. How much buy-in you have before that cascade happens may not be very indicative of where the cascade can end up.
I, personally, don’t feel like I know how to “call it” when big changes are on the table and when they’re not. But it sure does seem like people are counting us out much too early, given the fundamentals of the situation. We all think that the world is going to change very radically in the next few years. It’s not clear what kinds of cascades are on the table.
I provisionally think that we should feel less bashful about advocating for an AI pause, and more agnostic about how likely that is to come to pass.
I agree with you, but also think you’re not going far enough. In a world where things are changing radically, the space of possibilities opens up dramatically. And so it’s less a question of “does advocating for policy X become viable?”, and more a question of “how can we design the kinds of policies that our past selves wouldn’t even have been able to conceive of?”
In other words, in a world that’s changing a lot, you want to avoid privileging your hypotheses in advance, which is what it feels like the “pro AI pause vs anti AI pause” debate is doing.
(And yes, in some sense those radical future policies might fall into a broad category like “AI pause”. But that doesn’t mean that our current conception of “AI pause” is a very useful guide for how to make those future policies come about.)