Ok. I don’t think your original post is clear about which of these many different theses it has, or which points it thinks are evidence for other points, or how strongly you think any of them.
I don’t know how to understand your thesis other than “in politics you should always pitch people by saying how the issue looks to you, Overton window or personalized persuasion style be damned”. I think the strong version of this claim is obviously false. Though maybe it’s good advice for you (because it matches your personality profile) and perhaps it’s good advice for many/most of the people we know.
I think that making SB-1047 more restrictive would have made it less likely to pass, because it would have made it easier to attack and fewer people would agree that it’s a step in the right direction. I don’t understand who you think would have flipped from negative to positive on the bill based on it being stronger—surely not the AI companies and VCs who lobbied against it and probably eventually persuaded Newsom to veto?
I feel like the core thing that we’ve seen in DC is that the Overton window has shifted, almost entirely as a result of AI capabilities getting better, and now people are both more receptive to some of these arguments and more willing to acknowledge their sympathy.
To be clear, my recommendation for SB-1047 was not “be basically the same bill but talk about extinction risks and levy a few more restrictions on the labs”, but rather “focus very explicitly on the extinction threat; say ‘this bill is trying to address a looming danger described by a variety of scientists and industry leaders’ or suchlike, and shape the bill differently to actually address the extinction threat straightforwardly”.
I don’t have a strong take on whether SB-1047 would have been more likely to pass in that world. My recollection is that, back when I attempted to give this advice, I said I thought it would make the bill less likely to pass but more likely to have good effects on the conversation (in addition to it being much more likely to matter in the cases where it did pass). But that could easily be hindsight bias; it’s been a few years. And post facto, the modern question of what is “more likely” depends a bunch on things like how stochastic you think Newsom is (we already observed that he vetoed the real bill, so I think there’s a decent argument that a bill with different content has a better chance, even if it’s lower than our a priori odds on SB-1047), though that’s a digression.
I do think that SB-1047 would have had a substantially better effect on the conversation if it had been targeted at the “superintelligence is on track to kill us all” stuff. I think this is a pretty low bar because I think that SB-1047 had an effect that was somewhere between neutral and quite bad, depending on which follow-on effects you attribute to it. Big visible bad effects that I think you can maybe attribute to it are Cruz and Vance polarizing against (what they perceived as) attempts to regulate a budding normal tech industry, and some big Dems also solidifying a position against doing much (e.g. Newsom and Pelosi). More insidiously and less clearly, I suspect that SB-1047 was a force holding the Overton window together. It was implicitly saying “you can’t talk about the danger that AI kills everyone and be taken seriously” to all who would listen. It was implicitly saying “this is a sort of problem that could be pretty well addressed by requiring labs to file annual safety reports” to all who would listen. I think these are some pretty false and harmful memes.
With regard to the Overton window shifting: I think this effect is somewhat real, but I doubt it has as much importance as you imply.
For one thing, I started meeting with various staffers in the summer of 2023, and the reception I got is a big part of why I started pitching Eliezer on the world being ready for a book (a project that we started in early 2024). Also, the anecdote in the post is dated to late 2024, but before o3 or DeepSeek. To be clear, it did seem to me like the conversation changed markedly in the wake of DeepSeek, but it changed from a baseline of elected officials being receptive in ways that shocked onlookers.
For another thing, in my experience, anecdotes like “the AI cheats and then hides it” or experimental results like “the AI avoids shutdown sometimes” are doing as much of the lifting as capabilities advances, if not more. (Though I think that’s somewhat of a digression.)
For a third thing, I suspect that one piece of the puzzle you’re missing is how much the Overton window has been shifting because courageous people have been putting in the legwork for the last couple of years. My guess is that the folks putting these demos and arguments in front of members of Congress are a big part of why we’re seeing the shift, and my guess is that the ones who are blunt and courageous are causing more of the shift (and are causing it to happen in a better direction).
I’m worried about the people who go in and talk only about (e.g.) AI-enabled biorisk while avoiding saying a word about superintelligence or loss of control. I think this happens pretty often, that it comes with a big opportunity cost in the best cases, and that it’s actively harmful in the worst cases—when (e.g.) it reinforces a silly Overton window, or when it shuts down some member of Congress’s budding thoughts about the key problems, or when it orients them towards silly issues. I also think it spends down future credibility; it risks exasperating them when you try to come back next year and say that we’re on track to all die. I also think that the lack of earnestness is fishy in a noticeable way (per the link in the OP).
[edited for clarity and to fix typos, with apologies about breaking the emoji-reaction highlights]
Ok. I agree with many particular points here, and there are others that I think are wrong, and others where I’m unsure.
For what it’s worth, I think SB-1047 would have been good for AI takeover risk on the merits, even though (as you note) it isn’t close to all we’d want from AI regulation.