Huh, I thought my reasoning was relatively clear. To expand, the OP says:
non-proliferation strategies roughly as light-touch as the IAEA—e.g., bans on AI data centers, or powerful GPUs—might suffice to seriously slow AI progress
and:
at least absent non-proliferation regimes drastically more costly/invasive than the IAEA;
I am proposing a set of non-proliferation regimes that are not “drastically more invasive” than the IAEA (the matter of “costliness” feels confusing given that there is an unclear counterfactual, so I’ll footnote that instead[1]).
The “just” here means “you can just do X using non-proliferation regimes that are not drastically more invasive than the IAEA”. For example the IAEA has heavily curtailed research into how to build nuclear weapons more cheaply and efficiently, which seems like it applies pretty straightforwardly to algorithmic progress. A simple code-signing regime where high-performance chips are limited to only run signed code seems like it would also not be “drastically more invasive than the IAEA”. Of course the IAEA has also drastically curtailed the buildout of nuclear energy and weapons production, and indeed nuclear energy has been getting more expensive per watt over time, which seems to imply that engineering a reversal of Moore’s law would likewise require nothing more invasive than the IAEA.
I am not saying this means it will be easy for decision-makers to get buy-in for any of these policies. But inasmuch as we are evaluating policies on the dimension of “how privacy/control-invasive do these policies have to be in order to achieve their non-proliferation goals, even over the course of decades”, I think there are many policies in that set, less invasive than what we’ve done with nuclear reactors and nuclear weapons, that we could “just” implement.
Of course this is ignoring most of the political tradeoffs for many of these policies, and I am quite uncertain which things will actually end up in the political Overton window, and be feasible. But I am pretty sure that “we will have no options that do not involve very serious violation of civil liberties for getting this done” will not actually be the reason this fails, which I feel like the OP is pretty clearly arguing.
[1] Of course, compared to a magical counterfactual where you instead got to reap the benefits of ASI without the risks of ASI, doing anything in this space will be “costly”, but I think all the things I list meet the lower standard of “implementing them will not leave you worse off than if you had never built the chips at all” in terms of economic cost.
For example the IAEA has heavily curtailed research into how to build nuclear weapons more cheaply and efficiently, which seems like it applies pretty straightforwardly to algorithmic progress.
IIUC, it’s legal everywhere on Earth to do basic research that might eventually lead to a new, much more inexpensive and hard-to-monitor method to enrich uranium to weapons grade.
I’m thinking mainly of laser isotope enrichment, which was first explored in the 1970s. No super-inexpensive method has turned up, thankfully. (The best-known approach seems to be in the same ballpark as gas centrifuges in terms of cost, specialty parts etc., or if anything somewhat worse. Definitely not radically simpler and cheaper.) But I think there’s a big space of possible techniques, and meanwhile people in academia keep inventing new types of lasers and new optical excitation and separation paradigms. I don’t think there’s any general impossibility proof that kg-scale uranium enrichment in a random basement with only widely-available parts can’t ever get invented someday by this line of research.
(If it did, it probably wouldn’t be the death of nonproliferation, because you can still try to monitor and control the un-enriched uranium. But it would still make nonproliferation substantially harder. By the way, once you have a lot of weapons-grade uranium, making a nuclear bomb is trivial; the fancy implosion design is only needed for plutonium bombs, not uranium ones.)
AFAICT, if someone is explicitly developing a system for “kg-scale uranium enrichment via laser isotope separation”, then the authorities will definitely go talk to them. But for every step prior to that last stage, where you’re doing “basic R&D”, building new types of lasers, etc., my impression is that people can freely do whatever they want, and publish it, and nobody will ask questions. I mean, it’s possible that there’s someone in some secret agency who is on the ball, five steps ahead of everyone in academia and industry, and who knows where problems might arise on the future tech tree and is ready to quietly twist arms if necessary. But I dunno man, that seems pretty over-optimistic, especially when the research can happen in any country.
My former PhD advisor wrote a book in the 1980s with a whole chapter on laser isotope separation techniques, and directions for future research. The chapter treats it as completely unproblematic! Not even one word about why this might be bad. I remember feeling super weirded out by that when I read it (15 years ago), but I figured, maybe I’m the crazy one? So I never asked him about it.
(Low confidence on all this.)
Yep, that’s roughly my model. So the equivalent here would be that if you are doing related work in computer science or whatever, nobody would show up at your door, but if you tried to release a paper being like “we improved training performance by 2x using this fancy convolution” the authorities would show up at your door.
I think this is sufficient to prevent most large-scale investment into this kind of research (at least within a nation, internationally this all is a bit trickier and my models are more uncertain).
prevent most large-scale investment into this kind of research
Seems plausible. At a guess, the effect of that would be to barely, or at best somewhat, slow down the actual core research leading to AGI, while greatly slowing down any visible impacts and last-mile research. (I’m unsure, because IDK how much of the current influx of resources gets directed to [real capabilities research, as defined presuming my mainline model where learning programs as of 2026 aren’t all that relevant to AGI], and I’m unsure what the effect on research is of current big labs blundering around and showing more clearly what the limits of current learning programs are.)
On the one hand, you would get more warning of AGI coming, in the sense that people who deeply understand the related fields (e.g. let’s say, vaguely: cognitive science, “technical philosophy/epistemology”, algorithms) might be able to call that we’re actually getting most of the relevant understanding. On the other hand, you would get less warning, in the sense that you wouldn’t be getting big new coalescences of economic productivity or scary demos.
Cf. Red vs. Blue here: https://www.lesswrong.com/posts/K4K6ikQtHxcG49Tcn/hia-and-x-risk-part-2-why-it-hurts#Red_vs__Blue_AGI_capabilities_research
I certainly think your world model in which learning programs as of 2026 are not very relevant to AGI should predict different things here! I do think that’s likely the underlying crux.
For example the IAEA has heavily curtailed research into how to build nuclear weapons more cheaply and efficiently, which seems like it applies pretty straightforwardly to algorithmic progress.
I assume very few people are interested in doing independent research into improving nuclear weapons. If institutional AI algorithmic research were effectively banned, all else equal, I assume many more people would be interested in independently researching it, which would require more in-practice restriction on speech to curtail. (Based on your tweets I’m guessing you think that curtailing independent research wouldn’t be necessary and aren’t considering it here; but this may be a background disagreement with people saying invasive restrictions would be needed.)
A simple code-signing regime where high-performance chips are limited to only run signed code seems like it would also not be “drastically more invasive than the IAEA”.
Controlling widely-used hardware and only allowing approved code to run on it does seem drastically more invasive, sufficiently obviously so that I have no idea where you’re coming from here. If this only applied to the largest supercomputers I might not call it more invasive, but the whole premise of this thread is not-that.
(Based on your tweets I’m guessing you think that curtailing independent research wouldn’t be necessary and aren’t considering it here; but this may be a background disagreement with people saying invasive restrictions would be needed.)
Yeah, my current guess (with like 85% confidence?) is that if you curtail large industrial-scale investment, that would be enough. I don’t think independent research would get there any time soon; you would need some kind of industrial level of investment at some point in the coming decades.
I agree there is more motivation to build more efficient AI algorithms than to design more effective nuclear reactors or nuclear weapons, though I think a lot of that is downstream of the societal stigma around nuclear weapons, and there is a lot of uncertainty about how such a stigma will develop in the case of AI.
A proper treatment of this topic would start from the top with the role of social stigma: how it will affect talent allocation and resource investment, and how it interfaces with regulation. I think the default outcome here is that there won’t be much of a social stigma around AI development, that this will consistently keep investment into developing more competent AI systems high, and that the absence of stigma will itself be the force that prevents regulation from interfering with that investment.
In the worlds where you actually have widespread buy-in to do something IAEA-like, you probably have quite a bit of stigma, and this will be largely responsible for keeping the level of investment low (though the regulations themselves will also help). And if you manage to keep it that way, then I think you are probably fine for a very long time. Of course the stigma might break at any point, similar to how nuclear agreements might break at any point, and then you are back on a clock (and the clock will of course be shorter, because you will have made at least some relevant progress in other scientific fields). All my statements here are about how, if you keep the stigma and regulations up, you will be fine. In most worlds the stigma and regulations will break at some point, and you will have much less time, but this isn’t because the alternative would be some kind of Orwellian or invasive regime.
Controlling widely-used hardware and only allowing approved code to run on it does seem drastically more invasive, sufficiently obviously so that I have no idea where you’re coming from here. If this only applied to the largest supercomputers I might not call it more invasive, but the whole premise of this thread is not-that.
I don’t super understand why “AI chips that cost $1k+ can only run signed code” would be invasive in any meaningful way. I don’t really think it would change anyone’s life in any particularly meaningful way. Both Android phones and iPhones can only run signed code, and the vast majority of gaming happens on game consoles that can only run signed code.
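To make the mechanism concrete, here is a minimal sketch of the check such a regime would push down into an accelerator’s firmware or driver loader. This is purely illustrative: the names, the key-handling flow, and the choice of Ed25519 via the Python `cryptography` package are assumptions for the sketch, not a description of any real vendor’s boot chain.

```python
# Minimal sketch of the check a code-signing regime would push into an AI
# accelerator's firmware/loader. Illustrative assumptions only: key handling,
# names, and the use of Ed25519 (third-party `cryptography` package) are not
# any real vendor's design.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Signing-authority side (vendor or regulator): sign each approved kernel blob.
authority_key = ed25519.Ed25519PrivateKey.generate()
approved_kernel = b"...compiled accelerator kernel bytes..."
signature = authority_key.sign(approved_kernel)

# Device side: in a real design the public key would be burned into fuses/ROM.
trusted_key = authority_key.public_key()

def device_will_run(blob: bytes, sig: bytes) -> bool:
    """The chip agrees to load a blob only if its signature verifies."""
    try:
        trusted_key.verify(sig, blob)
        return True
    except InvalidSignature:
        return False

assert device_will_run(approved_kernel, signature)                # signed code runs
assert not device_will_run(b"unsigned training loop", signature)  # anything else is refused
```

The relevant point for the invasiveness question is that, as with iPhones and game consoles, the whole check lives on the vendor/device side; users would only notice it as an inability to run unapproved code on that particular class of chip.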
but the whole premise of this thread is not-that.
What do you mean by “the whole premise of this thread”? I am arguing against the claim (paraphrased) “it is inevitable that you will eventually be able to train superintelligence on a consumer laptop unless we take some kind of drastic and invasive measures”.
Of course, by the time you have actually made it so that you can train AI on a random laptop (from 2026?), then you do indeed have no choice but to take some kind of drastic and invasive measures, but my whole point is that you can just avoid getting there. I don’t think the OP is saying “conditional on superintelligence being trainable on a 2026 laptop, you have to curtail social freedoms”; the OP is saying “you will be able to train superintelligence on a laptop within a few decades unless you drastically curtail social freedom”, and that is the statement I am objecting to.
It sounds like I have more expectation of a much more efficient paradigm (a la e.g. Steven Byrnes) being feasibly discovered through purely theoretical work (though not necessarily single-2026-laptop efficient, or discovered on any particular schedule), which is coloring my takes here.
I agree that stigma is important and would reduce the level of intervention needed to shut down independent research. It’s only very recently that I’ve seen any discussion of stigma as load-bearing in pause scenarios, so I wasn’t thinking of it.
I don’t super understand why “AI chips that cost $1k+ can only run signed code” would be invasive in any meaningful way. I don’t really think it would change anyone’s life in any particularly meaningful way.
I was thinking of it as more invasive in that it affects (by limiting what code they can run) far more actors (as opposed to, what, reactor operators and uranium handlers in the nuclear case?). If unrestricted general-purpose CPUs are still readily available, it does seem like nothing much would change in practice and the important freedoms would be preserved; combined with only a few chipmakers actually being liable for compliance, I can see calling this not more invasive.
Both Android phones and iPhones can only run signed code, and the vast majority of gaming happens on game consoles that can only run signed code.
(I do think it’s probably meaningful that these aren’t legal mandates, and more meaningful that unrestricted platforms are also readily available.)