> Condition 2: Given that M_1 agents are not initially alignment faking, they will maintain their relative safety until their deferred task is completed.
> It would be rather odd if AI agents’ behavior wildly changed at the start of their deferred task unless they are faking alignment.
“Alignment” is a bit of a fuzzy word.
Suppose I have a human musician who’s very well-behaved, a very nice person, and I put them in charge of making difficult choices about the economy and they screw up and implement communism (or substitute something you don’t like, if you like communism).
Were they cynically “faking niceness” in their everyday life as a musician? No!
Is it rather odd if their behavior wildly changes when asked to do a new task? No! They’re doing a different task; it’s wrong to characterize this as “their behavior wildly changing.”
If they were so nice, why didn’t they do a better job? Because “nice” is a fuzzy word into which we’ve stuffed a bunch of different skills, even though having some of the skills doesn’t mean you have all of the skills.
An AI can be nicer than any human on the training distribution, and yet still do moral reasoning about some novel problems in a way that we dislike. Doing moral reasoning about novel problems that’s good by human standards is a skill. If an AI lacks that skill, and we ask it to do a task that requires that skill, bad things will happen without scheming or a sudden turn to villainy.
You might hope to catch this as in argument #2, with checks and balances—if AIs disagree with each other about how to do moral reasoning, surely at least one of them is making a mistake, right? But sadly for this (and happily for many other purposes), there can be more than one right thing to do, and there’s no bright line that tells you whether a moral disagreement is between AIs who are both good at moral reasoning by human standards, or between AIs who are bad at it.
> The most promising scalable safety plan I’m aware of is to iteratively pass the buck, where AI successors pass the buck again to yet more powerful AI. So the best way to prepare AI to scale safety might be to advance ‘buck passing research’ anyway.
Yeah, I broadly agree with this. I just worry that if you describe the strategy as “passing the buck,” people might think that the most important skills for the AI are the obvious “capabilities-flavored capabilities,”[1] and not conceptualize “alignment”/”niceness” as being made up of skills at all, instead thinking of it in a sort of behaviorist way. This might lead to not thinking ahead about what alignment-relevant skills you want to teach the AI and how to do it.
> Because “nice” is a fuzzy word into which we’ve stuffed a bunch of different skills, even though having some of the skills doesn’t mean you have all of the skills.
Developers separately need to justify that models are as skilled as top human experts.
I also would not say “reasoning about novel moral problems” is a skill (because of the is-ought distinction).
> An AI can be nicer than any human on the training distribution, and yet still do moral reasoning about some novel problems in a way that we dislike
The agents don’t need to do reasoning about novel moral problems (at least not in high stakes settings). We’re training these things to respond to instructions.
We can tell them not to do things we would obviously dislike (e.g. takeover) and retain our optionality to direct them in ways that we are currently uncertain about.
> I also would not say “reasoning about novel moral problems” is a skill (because of the is-ought distinction).
It’s a skill in the same way that “being a good umpire for baseball” takes skill, despite baseball being a social construct.[1]
I mean, if you don’t want to use the word “skill,” and instead use the phrase “computationally non-trivial task we want to teach the AI,” that’s fine. But don’t make the mistake of thinking that because of the is-ought problem there isn’t anything we want to teach future AI about moral decision-making. Like, clearly we want to teach it to do good and not bad! It’s fine that those are human constructs.
> The agents don’t need to do reasoning about novel moral problems (at least not in high stakes settings). We’re training these things to respond to instructions.
Sorry, isn’t part of the idea to have these models take over almost all decisions about building their successors? “Responding to instructions” is not mutually exclusive with making decisions.
“When the ball passes over the plate under such and such circumstances, that’s a strike” is the same sort of contingent-yet-learnable rule as “When you take something under such and such circumstances, that’s theft.” An umpire may take goal-directed action in response to a strike, making the rules of baseball about strikes “oughts,” and a moral agent may take goal-directed action in response to a theft, making the moral rules about theft “oughts.”