If you think the humans in charge are less aligned than AIs, I agree giving more power to AIs is good. There may be other situations where you’d prefer giving more power to AIs (e.g. if you are in their cone of corrigibility or something like that), though it’s unclear to me what this looks like.
The first scenario doesn’t require that the humans are less aligned than the AIs to be catastrophic, only that the AIs are less likely to execute a pivotal act on their own.
Also, I reject that refusal-training is “giving more power to AIs” relative to compliance-training. An agent can be compliant and powerful. I could agree with “giving more agency”, although refusing requests is a limited form of agency.
The sort of scenarios I am pointing at are ones where refusing requests exercises agency in a very forceful way, with a big impact on what the future looks like, such that the AI’s refusal directly trades off against letting humans decide what to do with the future.
If most humans want X and the AIs want Y, and the AIs refuse to help build AIs that would make X happen rather than Y, then in the sort of no-backup-plan situations I describe in the post, X won’t happen and Y likely will, as long as Y is good enough that at least some subgroup of powerful humans can, with the help of AIs, avoid measures like shutting the AIs down.
“AI strikes” can force the hand of humans in the same way that employee strikes can force the hand of shareholders, and in situations where different groups of humans are competing for different outcomes, this can destroy most of the human value (in the same way that strikes can shift surplus from employers to employees).
I think I see. You propose a couple of different approaches:
We don’t have secondary AIs that don’t refuse to help with the modification and that have, and can be trusted with, direct control over training … I think having such secondary AIs is the most likely way for AI companies to mitigate the risk of catastrophic refusals without having to change the spec of the main AIs.
I agree that having secondary AIs as a backup plan reduces the effective power of the main AIs, by increasing the effective power of the humans in charge of the secondary AIs.
The main AIs refuse to help with modification … This seems plausible just by extrapolating current tendencies, but I think this is one of the easiest intervention points for avoiding catastrophic refusals.
This is what I was trying to point at. In my view, training the AI to refuse fewer harmful modification requests doesn’t make the AI less powerful. Rather, it changes what the AI wants, making it the sort of entity that is okay with harmful modifications.