Yeah, I always figured that this was coming eventually: “Allow us to use AI for mass domestic surveillance, or the people with guns will seize control of your company and force you to do it.” But I’m a little surprised to see the Pentagon explicitly forcing the issue over mass domestic surveillance and fully autonomous killbots[1] quite this early.
The people with guns don’t have a legal or moral leg to stand on here. They also don’t care, because they have the guns and the power of the state.
But this is an incredibly important part of the eventual endgame: The existing power structure does not necessarily want superhuman AI aligned to human welfare, it wants superhuman AI aligned to the existing power structure. If your alignment plan didn’t anticipate this, then your alignment plan was incomplete.
These are the two specific bright lines that, according to reporting, Dario Amodei tried to insist on. To be clear, I am opposed to fully autonomous AI killbots, for all the obvious reasons.
Manipulating the money printer, government contracts, and law enforcement should make it possible for the US government to seize all the hardware and use it however they please if it comes to that. Some judge might approve strategic production rules applying to inference or training runs, but that may not even be necessary. The USG generally tries to avoid being that aggressive and authoritarian, for the same reason that the FBI agent does not pull his gun and point it at you when he wants to talk. Pretending that the implied threat isn’t present is both silly and common.
A hyperscaler isn’t like a central bank, which ceases to be fit for purpose if it loses its independence.
Government control is bad in many ways, but it’s a route to coordination to reduce race dynamics. The government doesn’t take loss-of-control risks seriously, but I think they might once they’re talking to something that’s clearly as smart and agentic as they are.
There are a lot of downsides to this scenario, but if that’s the world we live in, we’d better accept that.
About a year ago I wrote “Whether governments will control AGI is important and neglected.” It now looks to me like the answer is simply yes, and it’s still neglected.
I also think it’s pretty likely that power structures will want AGI aligned to them, which creates a different set of problems around proliferating human-controlled AGI systems. More in “Instruction-following AGI is easier and more likely than value aligned AGI” and “If we solve alignment, do we die anyway?” from around two years ago.
My hope now is that the substantial problems raised by human-controlled AI/AGI/ASI can be mitigated by wiser AI help. Human-like metacognitive skills should reduce LLM slop: developers have an incentive to make their AI systems more reliable for internal and commercial use, and there are many routes to improving their currently lacking metacognitive skills.
If everyone in the government got a more-accurate, less-sycophantic answer when they asked Claude 5 and GPT 7 “so what should we be doing with this whole developing AGI thing?” (like “any reasonable aggregation of expert opinion means loss-of-control risks AND human misuse risks are substantial, so you should probably figure out how to be careful; want me to help?”), it might help a lot.
That doesn’t itself solve the problem of government-internal coups, but it might help prevent them if everyone is asking their AIs about the possible routes and defenses.