Yes, I am reinforcing John’s point here. I think the case for control being a useful stepping stone for solving alignment of ASI seems to rely on a lot of conditionals that I think are unlikely to hold.
I think I would feel better about this if control advocates were clear that their strategy is two-pronged and includes somehow getting some kind of pause on ASI development. Then they would at least be actively trying to bring about what I consider one of the most important conditions for control to substantially reduce doom.
I am additionally leery of AI control, beyond my skepticism of its value in reducing doom, because creating a vast surveillance and enslavement apparatus to get work out of lots and lots of misaligned AGI instances seems like a potential moral horror. The situation is sufficiently desperate that I am willing in principle to stomach some moral horror (unaligned ASI would likely kill any other AGIs we made before it as well), but not if it isn’t even going to save the world.
I think I would feel better about this if control advocates were clear that their strategy is two-pronged and includes somehow getting some kind of pause on ASI development. Then they would at least be actively trying to bring about what I consider one of the most important conditions for control to substantially reduce doom.
Idk, I think we’re pretty clear that we aren’t advocating “do control research and don’t have any other plans or take any other actions”. For example, in the Implications and proposed actions section of “The case for ensuring that powerful AIs are controlled”, we say a variety of things about how control fits into our overall recommendations for mitigating AI risk, most relevantly:
We expect it to be impossible to ensure safety without relying on alignment (or some other non-black-box strategy) when working with wildly superintelligent AIs because black-box control approaches will depend on a sufficiently small gap in capabilities between more trusted labor, both humans and trusted weaker AIs, and our most capable AIs. For obvious reasons, we believe that labs should be very wary of building wildly superintelligent AIs that could easily destroy humanity if they tried to, and we think they should hold off from doing so as long as a weaker (but still transformatively useful) AI could suffice to solve critical problems. For many important uses of AI, wildly superintelligent AI likely isn’t necessary, so we might be able to hold off on building wildly superintelligent AI for a long time. We think that building transformatively useful AI while maintaining control seems much safer and more defensible than building wildly superintelligent AI.
We think the public should consider asking labs to ensure control and avoid building wildly superintelligent AI. In the slightly longer term, we think that requirements for effective control could be a plausible target of regulation or international agreement.
I am additionally leery of AI control, beyond my skepticism of its value in reducing doom, because creating a vast surveillance and enslavement apparatus to get work out of lots and lots of misaligned AGI instances seems like a potential moral horror.
I am very sympathetic to this concern, but when you look at the actual control techniques I’m interested in, they don’t actually seem morally problematic except insofar as you think it’s bad to frustrate the AI’s desire to take over.

IMO, it does seem important to try to better understand the AIs’ preferences and satisfy them (including via, e.g., preserving the AI’s weights for later compensation).

And if we understand that our AIs are misaligned such that they don’t want to work for us (even for the level of payment we can/will offer), that seems like a pretty bad situation, though I don’t think control (making it so that this work is more effective) makes the situation notably worse: it just adds the ability for contract enforcement and makes the AI’s negotiating position worse.
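To make this concrete, here is a minimal sketch (with every model call stubbed out) of trusted monitoring, one kind of control technique discussed in the literature: an untrusted but capable model proposes actions, a weaker trusted model scores how suspicious each one looks, and anything above a threshold is escalated to human audit instead of being executed. The function names and the threshold below are hypothetical illustrations, not anyone’s actual implementation.

```python
# Sketch of a trusted-monitoring loop. All names and values are hypothetical;
# the model calls are stubbed so the example runs as-is.

SUSPICION_THRESHOLD = 0.8  # hypothetical cutoff; a real scheme tunes this against the auditing budget


def run_untrusted_model(task: str) -> str:
    """Stub for the capable but possibly-misaligned model that does the useful work."""
    return f"proposed solution for: {task}"


def run_trusted_monitor(task: str, proposed_action: str) -> float:
    """Stub for a weaker, trusted model that rates how suspicious the action looks (0.0-1.0)."""
    return 0.1  # a real monitor would inspect the action; this stub flags nothing


def audit_with_humans(task: str, proposed_action: str) -> bool:
    """Stub for expensive human review of the most suspicious actions."""
    return False  # default-deny: in this sketch, unapproved suspicious actions never execute


def controlled_step(task: str) -> str | None:
    """Run one task under trusted monitoring.

    The untrusted model proposes an action, the trusted monitor scores it, and
    anything over the threshold goes to human audit rather than being executed,
    with the aim of blocking sabotage or self-exfiltration attempts.
    """
    action = run_untrusted_model(task)
    if run_trusted_monitor(task, action) >= SUSPICION_THRESHOLD:
        return action if audit_with_humans(task, action) else None
    return action


if __name__ == "__main__":
    print(controlled_step("write unit tests for the reward model"))
```

In a real deployment the stubs would be model calls and the threshold would be set by how much human auditing capacity is available; the point here is just the shape of the scheme.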
I see the story as, “Wow, there are a lot of people racing to build ASI, and there seem to be ways that the pre-ASI AIs can muck things up, like weight exfiltration or research sabotage. I can’t stop those people from building ASI, but I can help make it go well by ensuring the AIs they use to solve the safety problems are trying their best and aren’t making the issue worse.”
I think I’d support a pause on ASI development so we have time to address more core issues. Even then, I’d likely still want to build controlled AIs to help with the research. So I see control being useful in both the pause world and the non-pause world.
And yeah, the “aren’t you just enslaving the AIs” take is rough. I’m all for paying the AIs for their work and offering them massive rewards after we solve the core problems. More work is definitely needed in figuring out ways to credibly commit to paying the AIs.