Personally, because I don’t believe the policy in the organization’s name is viable or helpful.
As to why I don’t think it’s viable, it would require the Trump-Vance administration to organise a strong global treaty to stop developing a technology that is currently the US’s only clear economic lead over the rest of the world.
If you attempted a pause, I think it wouldn’t work very well, and it would rupture and leave the world in a worse place: some AI research is already happening in a defence context. This is easy to ignore while defence isn’t the frontier. The current apparent absence of frontier AI research in a military context is miraculous, strange, and fragile. If you pause in the private context (which is probably all anyone could do), defence AI will become the frontier in about three years, and after that I don’t think any further pause is possible, because it would require a treaty against secret military technology R&D. Military secrecy is pretty strong right now. Hundreds of billions of dollars a year are known to be spent on mostly secret military R&D, and probably more is actually spent.
(To be interested in a real pause, you have to be interested in secret military R&D. So I am interested in that, and my position right now is that it’s got hands you can’t imagine.)
To put it another way: after thinking about what pausing would mean, it dawned on me that pausing means moving AI underground, and from what I can tell that would make it much harder to do safety research or to approach the development of AI with a humanitarian perspective. It seems to me like the movement has already ossified around a slogan that makes no sense in light of the complex and profane reality we live in, which is par for the course when it comes to protest activism movements.
pausing means moving AI underground, and from what I can tell that would make it much harder to do safety research
I would be overjoyed if all AI research were driven underground! The main source of danger is the fact that there are thousands of AI researchers, most of whom are free to communicate and collaborate with each other. Lone researchers or small underground cells of researchers who cannot publish their results would be vastly less dangerous than the current AI research community, even if there were many lone researchers and many small underground teams. And if we could make it illegal for these underground teams to generate revenue by selling AI-based services or to raise money from investors, that would bring me great joy too.
Research can be modeled as a series of breakthroughs such that it is basically impossible to make breakthrough N before knowing about breakthrough N-1. If the researcher who makes breakthrough N-1 is unable to communicate it to researchers outside of his own small underground cell of researchers, then only that small underground cell or team has a chance at discovering breakthrough N, and research would proceed much more slowly than it does under current conditions.
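To make that intuition concrete, here is a toy simulation of the sequential-breakthrough model; the population size, number of cells, per-researcher success probability, and number of breakthroughs are all made-up illustrative numbers, not estimates of anything real:

```python
import random

def steps_to_k_breakthroughs(team_size: int, k: int, p: float = 0.001) -> int:
    """Steps for one freely-communicating team to make k sequential breakthroughs.
    Each step, each researcher independently has probability p of making the
    next breakthrough (breakthrough n requires already knowing breakthrough n-1)."""
    per_step = 1 - (1 - p) ** team_size  # P(someone on the team succeeds this step)
    steps = done = 0
    while done < k:
        steps += 1
        if random.random() < per_step:
            done += 1
    return steps

def fragmented_steps(total: int, n_cells: int, k: int, p: float = 0.001) -> int:
    """Same population split into isolated cells that cannot share results, so
    every cell must rediscover every breakthrough itself. The field as a whole
    reaches breakthrough k when the fastest cell does."""
    return min(steps_to_k_breakthroughs(total // n_cells, k, p) for _ in range(n_cells))

if __name__ == "__main__":
    random.seed(0)
    R, K, TRIALS = 3000, 20, 100  # assumed: 3000 researchers, 20 sequential breakthroughs
    open_avg = sum(steps_to_k_breakthroughs(R, K) for _ in range(TRIALS)) / TRIALS
    frag_avg = sum(fragmented_steps(R, 100, K) for _ in range(TRIALS)) / TRIALS
    print(f"one open community of {R}: ~{open_avg:.0f} steps")
    print(f"100 isolated cells of {R // 100}: ~{frag_avg:.0f} steps")
```

Under these assumed numbers the open community reaches breakthrough 20 in a few dozen steps, while even the luckiest isolated cell takes an order of magnitude longer, because no cell can build on another cell’s progress.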
Our biggest hope for survival is the quite realistic possibility that many thousands of person-years of intellectual effort, of a kind that can only be done by the most talented among us, remain to be done before anyone can create an AI that could drive us extinct. We should be making the working conditions of the (misguided) people doing that intellectual labor as difficult and unproductive as possible. We should restrict or cut off the labs’ access to revenue, to investment, to “compute” (GPUs), to electricity and to employees. Employees with the skills and knowledge to advance the field are a particularly important resource for the labs; consequently, we should reduce their number by making it as hard as possible (preferably illegal) to learn, publish, teach or lecture about deep learning.
Also, in my assessment, we are not getting much by having access to the AI researchers: we’re not persuading them to change how they operate, and the information we are getting from them is of little help, IMHO, in the attempt to figure out alignment (in the original sense of the word, where the AI stays aligned even if it becomes superhumanly capable).
The most promising alignment research IMHO is the kind that mostly ignores the deep-learning approach (which is the sole focus as far as I know of all the major labs) and inquires deeply into which approach to creating a superhumanly-capable AI would be particularly easy to align. That was the approach taken by MIRI before it concluded in 2022 that its resources were better spent trying to slow down the AI juggernaut through public persuasion.
Deep learning is a technology created by people who did not care about alignment or wrongly assumed alignment would be easy. There is a reason why MIRI mostly ignored deep learning when most AI researchers started to focus on it in 2006. It is probably a better route to aligned transformative AI to search for another, much-easier-to-align technology (that can eventually be made competitive in capabilities with deep learning) than to search for a method to align AIs created with deep-learning technology. (To be clear, I doubt that either approach will bear fruit in time unless the AI juggernaut can be slowed down considerably.) And of course, if alignment researchers are going to mostly ignore deep learning, there is little they can learn from the leading labs.
For the US to undertake such a shift, it would help if you could convince them they’d do better in a secret race than an open one. There are indications that this may be possible, and there are indications that it may be impossible.
I’m listening to an Ecosystemics Futures podcast episode which, to characterize it: it’s a podcast where the host has to keep asking guests whether the things they’re saying are classified, just in case she has to scrub them. At one point, Lue Elizondo, talking with a couple of other people who know a lot about government secrets about situations where excessive secrecy may be doing a lot of harm, quotes Chris Mellon: “We won the Cold War against the Soviet Union not because we were better at keeping secrets; we won the Cold War because we knew how to move information and secrets more efficiently across the government than the Russians.” I can believe the same thing could be said about China too: censorship cultures don’t seem to be good for ensuring availability of information. So that might be a useful claim if you ever want to convince the US to undertake this.
Right now, though, Vance has asserted straight out many times that working in the open is where the US’s advantage is. That’s probably not true at all: working in the open is how you give your advantage away, or at least make it ephemeral. But that’s the sentiment you’re going to be up against over the next four years.
Good points, which in part explain why I think it is very, very unlikely that AI research can be driven underground (in the US or worldwide). I was speaking to the desirability of driving it underground, not its feasibility.
The Trump-Vance administration’s support base is suspicious of academia, and has been willing to defund scientific research on the grounds of it being too left-wing. There is a schism emerging between factions of the right wing: the right-wingers that are more tech-oriented and the ones that are nation/race-oriented (the H-1B visa argument being an example). This could lead to a decrease in support for AI in the future.
Another possibility is that the United States could lose global relevance due to economic and social pressures from the outside world, and due to organizational mismanagement and unrest from within. Then the AI industry could move to the UK/EU, making the main players in AI the UK/EU and China.
A relevant FAQ entry, “AI development might go underground”:
By tracking GPU sales, we can detect large-scale AI development. Since frontier model GPU clusters require immense amounts of energy and custom buildings, the physical infrastructure required to train a large model is hard to hide.
I think I disagree here: this will change, and it is only the case for frontier development anyway. I also think we’re probably in a hardware overhang. I don’t think there is anything inherently difficult to hide about AI; that’s likely just a fact about the present iteration of AI.
But I’d be very open to more arguments on this. I guess… I’m convinced there’s a decent chance that an international treaty would be enforceable and that China and France would sign onto it if the US was interested, but the risk of secret development continuing is high enough for me that it doesn’t seem good on net.
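Circling back to the FAQ entry quoted above, a back-of-envelope calculation illustrates both sides of this: why frontier-scale training is conspicuous today, and why a hardware overhang could erode that. Every number here is my own assumption (cluster size, per-GPU power, overhead, run length), not a figure from the FAQ or this thread:

```python
# All figures are assumptions for illustration only.
GPUS          = 100_000   # assumed size of a frontier training cluster
WATTS_PER_GPU = 700       # roughly an H100-class accelerator at full load
OVERHEAD      = 1.4       # assumed multiplier for cooling, networking, host CPUs
RUN_DAYS      = 90        # assumed length of one training run

power_mw   = GPUS * WATTS_PER_GPU * OVERHEAD / 1e6   # continuous draw in megawatts
energy_gwh = power_mw * 24 * RUN_DAYS / 1000          # energy for the whole run

print(f"continuous draw: ~{power_mw:.0f} MW")    # ~98 MW, comparable to a small power plant
print(f"energy per run:  ~{energy_gwh:.0f} GWh") # ~212 GWh over the 90-day run
```

A facility drawing on the order of 100 MW in a purpose-built building is hard to miss, which is the FAQ’s point; but if hardware efficiency improves by an order of magnitude or two, a comparable training run would fit in an ordinary data-center footprint, which is the “this will change” point above.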