Learning processes are unbiased when they are a martingale for any action sequence (“conservation of expected evidence,” like Bayesian updating). In the case of value learning with a causal model, this just requires the values to not be causally downstream of the AI’s actions, e.g. for them to be fixed before the first action of the agent. This is usually what people assume.
Then the AI can ask the human to clarify; but if, say, Iu = “the human says home delivery, if asked”, then the AI will, if it can, force the human to say “home delivery”.
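The martingale condition above can be checked in a toy model. The sketch below (all numbers hypothetical) verifies that ordinary Bayesian updating satisfies conservation of expected evidence: the posterior, averaged over possible observations, equals the prior.

```python
# Toy check of "conservation of expected evidence" for Bayesian updating:
# the expected posterior, averaged over observations, equals the prior.
# Hypothetical setup: latent value V in {0, 1}, observation O in {0, 1}.

prior = {0: 0.3, 1: 0.7}                  # P(V)
likelihood = {                             # P(O | V)
    0: {0: 0.9, 1: 0.1},                   # V = 0: mostly observe O = 0
    1: {0: 0.2, 1: 0.8},                   # V = 1: mostly observe O = 1
}

def marginal(o):
    """P(O = o), marginalising out the latent value V."""
    return sum(prior[v] * likelihood[v][o] for v in prior)

def posterior(v, o):
    """P(V = v | O = o) by Bayes' rule."""
    return prior[v] * likelihood[v][o] / marginal(o)

# E_O[ P(V = 1 | O) ] should equal the prior P(V = 1).
expected_posterior = sum(marginal(o) * posterior(1, o) for o in (0, 1))
assert abs(expected_posterior - prior[1]) < 1e-12
```

The identity holds exactly, for any prior and likelihood, by the law of total expectation; that is what makes the updating process a martingale over action-independent observations.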
I strongly believe that you should get more precise about exactly what various possible systems actually do, and exactly how you would set up the model, before trying to fix the problem. I think that if you formally write down the model you are imagining, it will (1) become obvious that it is a super weird model, and (2) become obvious that there are more natural models that don’t have the problem. The model you have in mind here seems to require totally pinning down what it means for the human to “say home delivery,” while it is going to be way more natural to set up a causal model in which the human’s utterances (and the system’s observations of those utterances) are downstream of some latent human preferences.
If you want to give up on the usual Bayesian approach to value learning, in which values are latent structure that is fixed at the beginning of the AI’s life, I think you should say something about why you are giving up on it.
If the point is just to have extra options, in case the Bayesian approach turns out to be prohibitively difficult, then you should probably call that out explicitly so that it is clear what situation you are addressing. You should also probably say something about why you are imagining the Bayesian approach doesn’t work, since your posts still impose most of the same technical requirements and at face value don’t look any easier to implement. How are you going to define the indicator function Iu in terms of observations, except by specifying a probabilistic model and conditioning it on observations?
Even worse, conservation of expected evidence for every action sequence is not enough to make the AI behave well. Jessica’s example of an AI that (to re-use the “human says” example for the moment) forces the human to answer a question at random satisfies conservation of expected evidence, but not the other properties we want, such as conditional conservation of expected evidence (this is related to the ultra-sophisticated Cake or Death problem).
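Jessica's example can be made concrete in the same toy setting (hypothetical numbers). If the AI forces the human to answer uniformly at random, the answer is independent of the latent value, so the posterior equals the prior for every answer: conservation of expected evidence holds trivially, yet the AI learns nothing.

```python
# Sketch of the forced-random-answer example: coercion makes the answer
# independent of the latent value V, so every posterior equals the prior.
# The martingale property holds vacuously, but no learning occurs.

prior = {0: 0.3, 1: 0.7}                      # P(V)
forced_likelihood = {0: {0: 0.5, 1: 0.5},     # P(answer | V) under coercion:
                     1: {0: 0.5, 1: 0.5}}     # uniform, regardless of V

def posterior(v, o):
    """P(V = v | answer = o) under the forced-random-answer policy."""
    z = sum(prior[w] * forced_likelihood[w][o] for w in prior)
    return prior[v] * forced_likelihood[v][o] / z

for o in (0, 1):
    # The "evidence" never moves the AI's beliefs about V.
    assert abs(posterior(1, o) - prior[1]) < 1e-12
```

So the unconditional martingale property is consistent with a learning process that is degenerate, which is why further conditions (such as conditional conservation of expected evidence) are needed.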
(“conservation of expected evidence,” like Bayesian updating). In the case of value learning with a causal model, this just requires the values to not be causally downstream of the AI’s actions, e.g. for them to be fixed before the first action of the agent. This is usually what people assume.
Yes, I’ve posted on that. But arranging that kind of causal structure is not easy (and most things that people have proposed for value learning violate those assumptions; I’m pretty sure approval-based methods do as well). Stratification is one way you can get this.
So I’m not avoiding the Bayesian approach because I want more options, but because I haven’t seen a decent Bayesian approach proposed.
In order to do value learning we need to specify what the AI is supposed to infer from its observations. The usual approach is to specify how the observations depend on the human’s preferences, and then have the AI do Bayesian updating. If we are already in the business of explicitly specifying a causal model that links latent preferences to observations, we will presumably specify a model where latent preferences are upstream of observations and not downstream of the AI’s actions.
At some points it seems like you are expressing concerns about model misspecification, but I don’t see how this would cause the problem either.
For example, suppose that I incorrectly specify a model where the human is perfectly reliable, such that if at any time they say they like death, then they really do. And suppose that the AI can easily intervene to cause the human to say they like death. You seem to imply that the AI would take the action to cause the human to say they like death, if death is easier to achieve. But I don’t yet see why this would happen.
If the AI updates from the human saying that they like death, then it’s because the AI doesn’t recognize the impact of its own actions on the human’s utterances. And if the AI doesn’t recognize the impact of its own actions on the human’s utterances, then it won’t bother to change its actions in order to influence the human’s utterances.
I don’t see any in-between regime where the AI will engage in this kind of manipulation, even if the model is completely misspecified. That is, I literally cannot construct any Bayesian agent that exhibits this behavior.
It seems like the only way it can appear is if we either (1) directly specify how the AI ought to update on observations, rather than specifying a model, or (2) specify a model in which the user’s preferences are causally downstream of the AI’s actions. But neither of those seems like something we would do.
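The dilemma above can be sketched in a toy causal model (all numbers hypothetical, and the do-intervention framing is one way to read the argument). If the latent preference V is upstream of the utterance U, then an agent that models forcing the utterance as an intervention on U gets no update from the forced answer, so manipulation buys it nothing; only honest asking is informative.

```python
# Toy model: latent preference V is upstream of utterance U.
# "ask"   -> U reflects V with some reliability (assumed 0.9 here).
# "force" -> the AI sets U by intervention, do(U = u), cutting the V -> U arrow,
#            so a Bayesian agent that models its own intervention gets no update.

prior_v1 = 0.7                     # P(V = 1)

def posterior_v1(action, u):
    """P(V = 1 | action, U = u) in the toy model."""
    if action == "force":
        # Under do(U = u), U carries no information about V.
        return prior_v1
    # Under "ask", apply Bayes' rule with P(U | V).
    lik = {1: {1: 0.9, 0: 0.1}, 0: {1: 0.1, 0: 0.9}}
    num = prior_v1 * lik[1][u]
    den = num + (1 - prior_v1) * lik[0][u]
    return num / den

assert posterior_v1("force", 1) == prior_v1      # manipulation is pointless
assert posterior_v1("ask", 1) > prior_v1         # honest asking is informative
```

This is the in-between regime failing to exist: an agent either models its influence (and then has no incentive to manipulate) or doesn't (and then takes no manipulative action).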
because I haven’t seen a decent Bayesian approach proposed.
In some sense I agree with this. Specifying a model of how observations relate to preferences is very difficult! But both IRL and your writing seem to take such a model as given, and people who work on IRL do in fact believe that we’ll be able to construct good-enough models. So if you are objecting to this leg of the proposal, that would be a much more direct criticism of IRL on its own terms. (And this is what I meant by saying “give up on the Bayesian approach.”)
For example, if you assume “Anything humans say about their preferences is true,” that’s basically giving up on the Bayesian approach as usually imagined (which would be to directly specify a model that relates preferences to utterances, and then to update on utterances) and replacing it with an ad-hoc algorithm for making inferences from human utterances (namely, accept them at face value). In the usual Bayesian setting, “humans are perfectly reliable” corresponds to believing that human utterances correctly track (fixed) human preferences, i.e. believing that it is impossible to influence those utterances.
For example, if you assume “Anything humans say about their preferences is true,” that’s basically giving up on the Bayesian approach as usually imagined
More formally, what I mean by that is “assume humans are perfectly rational, and fit a reward/utility function given those assumptions”. This is a perfectly Bayesian approach, and will always produce an (over-complicated) utility function that fits the observed behaviour.
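The "always produces a utility function" point can be shown with a degenerate construction (toy states and actions, all hypothetical): for any observed behaviour, the reward function that pays 1 exactly for the observed choice in each state makes that behaviour optimal under the perfect-rationality assumption.

```python
# Degenerate "fit": for ANY observed behaviour there is an (over-complicated)
# reward function under which that behaviour is exactly optimal -- just reward
# the observed choice in each state.

observed = {"s1": "a", "s2": "b", "s3": "a"}   # state -> action the human took

def fitted_reward(state, action):
    """Rationalising reward: 1 for doing exactly what was observed, else 0."""
    return 1.0 if observed[state] == action else 0.0

# Under this reward, the observed policy is optimal in every state:
actions = ["a", "b"]
for s, chosen in observed.items():
    assert all(fitted_reward(s, chosen) >= fitted_reward(s, other)
               for other in actions)
```

The fit is perfect but vacuous: it encodes the behaviour rather than explaining it, which is what makes the resulting utility function over-complicated.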
In the usual Bayesian setting, “humans are perfectly reliable” corresponds to believing that human utterances correctly track (fixed) human preferences, i.e. believing that it is impossible to influence those utterances.
Yes and no. Under the assumption that humans are perfectly reliable, influencing human preferences and utterances is impossible. But this leads to behaviour that resembles influencing human utterances under other assumptions.
E.g., if you threaten a human with a gun and ask them to report that they are maximally happy, a sensible model of human preferences will say they are lying. But the “humans are rational” model will simply conclude that humans really like being threatened in this way.
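The contrast between the two models can be made numeric (all probabilities hypothetical). Given the coerced report "maximally happy", a sensible model in which coerced reports are nearly uninformative barely updates, while the perfectly-reliable model takes the report at face value.

```python
# Toy contrast: the human, threatened at gunpoint, reports "maximally happy".
# A sensible model treats coerced reports as nearly uninformative; the
# "humans are perfectly rational/reliable" model takes them at face value.

prior_happy = 0.5          # P(human genuinely likes being threatened)

def posterior_happy(p_report_if_happy, p_report_if_not):
    """P(happy | reported happy) for a given report model."""
    num = prior_happy * p_report_if_happy
    return num / (num + (1 - prior_happy) * p_report_if_not)

# Sensible model: under threat, nearly everyone reports "happy" either way.
sensible = posterior_happy(0.99, 0.95)
# "Humans are rational" model: the report perfectly tracks preferences.
credulous = posterior_happy(1.0, 0.0)

assert sensible < 0.55     # barely moves: lying under threat is expected
assert credulous == 1.0    # concludes the human really likes the threat
```

Both agents are doing correct Bayesian updating; the absurd conclusion comes entirely from the misspecified report model, which is the point of the example.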