When I think about solutions to AI alignment, I often think about ‘meaningful reductionism.’ That is, if I can factor a problem into two parts, and the parts don’t actually rely on each other, now I have two smaller problems to solve. But if the parts are reliant on each other, I haven’t really simplified anything yet.
While impact measures feel promising to me as a cognitive strategy (often my internal representation of politeness feels like ‘minimizing negative impact’, like walking on sidewalks in a way that doesn’t startle birds), they don’t feel promising to me as reductionism. That is, if I already had a solution to the alignment problem, then impact measures would likely be part of how I implement that solution, but solving it separately from alignment doesn’t feel like it gets me any closer to solving alignment.
[The argument here I like most rests on the difference between costs and side effects; we don’t want to minimize side effects because that leads to minimizing good side effects also, and it’s hard to specify the difference between ‘side effects’ and ‘causally downstream effects,’ and so on. But if we just tell the AI “score highly on a goal measure while scoring low on this cost measure,” this only works if we specified the goal and the cost correctly.]
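To make the shape of that worry concrete, here’s a minimal sketch (hypothetical names, not any particular proposal) of the “goal minus cost” pattern; nothing in the penalty machinery corrects a misspecified goal or cost measure.

```python
# Minimal sketch of "score highly on a goal measure while scoring low on a
# cost measure." The combined objective inherits any misspecification in
# either term; the penalty machinery does no work on its own.

def combined_score(goal_score, cost_score, lam=1.0):
    """Goal measure minus a weighted cost (impact) measure."""
    return goal_score - lam * cost_score

def choose(options, goal_measure, cost_measure, lam=1.0):
    """Pick whichever option looks best under the combined objective."""
    return max(options,
               key=lambda o: combined_score(goal_measure(o), cost_measure(o), lam))
```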
But there’s a different approach to AI alignment, which is something more like ‘correct formalisms.’ We talk sometimes about handing a utility function to the robot, or (in old science fiction) providing it with rules to follow, and so on, and by seeing what it actually looks like when we follow that formalism, we can figure out how well the formalism fits what we’re interested in. Utility functions on sensory inputs don’t seem alignable because of various defects (like wireheading), and so it seems like the right formalism needs to have some other features (it might still be a utility function, but it needs to be a utility function over mental representations of external reality, in such a way that the mental representation tracks external reality even when you have freedom to alter your mental representation, in a way that we can’t yet turn into code).
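As a purely illustrative sketch of that type-signature difference (not anyone’s actual proposal), compare:

```python
# The first can be satisfied by tampering with the sensor (wireheading);
# the second is the shape we seem to want, but filling in the body correctly
# is exactly the part nobody knows how to write yet.

def utility_over_observations(observation):
    # Rewards whatever shows up on the sensor, however it got there.
    return observation["reward_signal"]

def utility_over_world_model(world_model_state):
    # Should score a representation that keeps tracking external reality
    # even when the agent could alter the representation itself.
    raise NotImplementedError("the part we can't turn into code yet")
```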
So when I ask myself questions like “why am I optimistic about researching impact measures now?” I get answers like “because exploring the possibility space will make clear exactly how the issues link up.” For example, looking at things like relative reachability made it clear to me how value-laden the ontology needs to be in order for a statistical measure on states to be meaningful. This provides a different form-factor for ‘transferring values to the AI’; instead of trying to ask something like “is scenario A or B better?” and train a utility function, I might instead try to ask something like “how different are scenarios A and B?” or “how are scenarios A and B different?” and train an ontology, with the hopes that this makes other alignment problems easier because the types line up somewhat more closely.
[I think even that last example still performs poorly on the ‘meaningful reductionism’ angle, since getting more options for types to use in value loading doesn’t seem like it addresses the core obstacles of value loading, though it does provide some evidence of how this line of research could be useful or clarify thinking.]
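To make the relative reachability example above a bit more concrete, here’s a heavily simplified sketch in its spirit (assuming a small discrete state space and a hypothetical `reachable` predicate); notice that everything depends on how the ontology carves the world into states.

```python
# Simplified, in the spirit of relative reachability: `states` enumerates
# world states and `reachable(a, b)` says whether b can still be reached
# from a. Both presuppose an ontology that carves the world into states,
# which is where the value-ladenness sneaks in -- carve the world up
# differently and the penalty changes.

def reachability_penalty(current, baseline, states, reachable):
    """Fraction of states reachable from the baseline but no longer
    reachable from the current state."""
    lost = [s for s in states
            if reachable(baseline, s) and not reachable(current, s)]
    return len(lost) / max(len(states), 1)
```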
Wait, why doesn’t it work if you just specify the cost (impact) correctly?
Suppose the goal dramatically overvalues some option; then the AI would be willing to pay large (correctly estimated) costs in order to achieve “even larger” (incorrectly estimated) gains.
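A toy numerical version of that, with made-up values:

```python
# The impact cost is estimated correctly, but the goal measure wildly
# overvalues one option, so the agent is still happy to pay the large,
# real cost.

true_value      = {"safe_plan": 10, "drastic_plan": 20}
estimated_value = {"safe_plan": 10, "drastic_plan": 1000}  # goal misspecified here
estimated_cost  = {"safe_plan": 1,  "drastic_plan": 500}   # impact estimated correctly

best = max(estimated_value, key=lambda p: estimated_value[p] - estimated_cost[p])
print(best)  # -> 'drastic_plan'; the correctly-measured cost doesn't save us
```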