Because “nice” is a fuzzy word into which we’ve stuffed a bunch of different skills, even though having some of those skills doesn’t mean you have all of them.
> Developers separately need to justify models are as skilled as top human experts
I also would not say “reasoning about novel moral problems” is a skill (because of the is-ought distinction).
> An AI can be nicer than any human on the training distribution, and yet still do moral reasoning about some novel problems in a way that we dislike
The agents don’t need to do reasoning about novel moral problems (at least not in high-stakes settings). We’re training these things to respond to instructions.
We can tell them not to do things we would obviously dislike (e.g. takeover) and retain our optionality to direct them in ways that we are currently uncertain about.
> I also would not say “reasoning about novel moral problems” is a skill (because of the is-ought distinction).
It’s a skill in the same way that “being a good umpire for baseball” takes skill, despite baseball being a social construct.[1]
I mean, if you don’t want to use the word “skill,” and instead use the phrase “computationally non-trivial task we want to teach the AI,” that’s fine. But don’t make the mistake of thinking that because of the is-ought problem there isn’t anything we want to teach future AI about moral decision-making. Like, clearly we want to teach it to do good and not bad! It’s fine that those are human constructs.
> The agents don’t need to do reasoning about novel moral problems (at least not in high-stakes settings). We’re training these things to respond to instructions.
Sorry, isn’t part of the idea to have these models take over almost all decisions about building their successors? “Responding to instructions” is not mutually exclusive with making decisions.
[1] “When the ball passes over the plate under such and such circumstances, that’s a strike” is the same sort of contingent-yet-learnable rule as “When you take something under such and such circumstances, that’s theft.” An umpire may take goal-directed action in response to a strike, making the rules of baseball about strikes “oughts,” and a moral agent may take goal-directed action in response to a theft, making the moral rules about theft “oughts.”