A brief note on factoring out certain variables

Jessica Taylor and Chris Olah have a post on “Maximizing a quantity while ignoring effect through some channel”. I’ll briefly present a different way of doing this, and compare the two.

Essentially, the AI’s utility is given by a function $U$ of a variable $X$. The AI’s actions are a random variable $A$, but we want to ‘factor out’ another random variable $V$.

If we have a probability distribution $P$ over actions, then, given background evidence $e$, the standard way to maximise $U$ would be to maximise:

  • $\sum_{x,a,v} U(x)\, P(x \mid v, a, e)\, P(v \mid a, e)\, P(a)$.
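To make the standard objective concrete, here is a toy discrete computation. The symbol names ($U$ for utility, $A$ for the action, $V$ for the factored-out variable, $X$ for the outcome) and all the numbers are my own illustrative choices, not from either post; the evidence $e$ is implicit in the choice of the conditional tables.

```python
# Toy discrete model (all names and numbers are illustrative).
U = {0: 0.0, 1: 1.0}                 # utility as a function of the outcome x

P_a = {0: 0.5, 1: 0.5}               # the action distribution being optimised
P_v_given_a = {0: {0: 0.8, 1: 0.2},  # P(v | a): v depends on the action
               1: {0: 0.3, 1: 0.7}}
P_x_given_va = {(0, 0): {0: 0.9, 1: 0.1},  # P(x | v, a)
                (0, 1): {0: 0.5, 1: 0.5},
                (1, 0): {0: 0.4, 1: 0.6},
                (1, 1): {0: 0.1, 1: 0.9}}

def expected_utility(P_a):
    """Standard objective: sum over x, a, v of U(x) P(x|v,a) P(v|a) P(a)."""
    return sum(U[x] * P_x_given_va[(v, a)][x] * P_v_given_a[a][v] * P_a[a]
               for a in P_a for v in P_v_given_a[a] for x in U)

print(expected_utility(P_a))  # ≈ 0.49
```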

The most obvious idea, for me, is to replace $P(v \mid a, e)$ with $P(v \mid e)$, making $V$ artificially independent of $A$ and giving the expression:

  • $\sum_{x,a,v} U(x)\, P(x \mid v, a, e)\, P(v \mid e)\, P(a)$.

If $V$ is dependent on $A$ (if it isn’t, then factoring it out is not interesting), then $P(v \mid e)$ needs some implicit probability distribution over $A$, one which is independent of the distribution the agent is optimising. So, in essence, this approach relies on two distributions over the possible actions: one that the agent is optimising, the other that is left unoptimised. In terms of Bayes nets, this just seems to be cutting the link from $A$ to $V$.
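The cut-link objective can be sketched on the same kind of toy model. Here the implicit, unoptimised distribution appears as a fixed reference distribution `Q_a` that defines $P(v \mid e)$, while the agent varies `P_a`; again, all names and numbers are my own illustration, not taken from the post.

```python
# Sketch of the 'cut link' objective (illustrative names and numbers).
# P(v | e) is defined from a fixed reference distribution Q over actions,
# so v is artificially independent of the action being optimised.
U = {0: 0.0, 1: 1.0}
P_v_given_a = {0: {0: 0.8, 1: 0.2},
               1: {0: 0.3, 1: 0.7}}
P_x_given_va = {(0, 0): {0: 0.9, 1: 0.1},
                (0, 1): {0: 0.5, 1: 0.5},
                (1, 0): {0: 0.4, 1: 0.6},
                (1, 1): {0: 0.1, 1: 0.9}}

def factored_objective(P_a, Q_a):
    """sum over x, a, v of U(x) P(x|v,a) P(v|e) P(a),
    with P(v|e) = sum over a' of P(v|a') Q(a') -- Q held fixed."""
    P_v = {v: sum(P_v_given_a[ap][v] * Q_a[ap] for ap in Q_a) for v in (0, 1)}
    return sum(U[x] * P_x_given_va[(v, a)][x] * P_v[v] * P_a[a]
               for a in P_a for v in P_v for x in U)

# The optimised distribution and the implicit one can differ:
print(factored_objective({0: 0.0, 1: 1.0}, {0: 0.5, 1: 0.5}))  # ≈ 0.68
```

Note that the agent’s choice of `P_a` no longer moves `P_v`, which is exactly the severed $A \to V$ link.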

Jessica and Chris’s approach also relies on two distributions. But, as far as I understand their approach, the two distributions are taken to be the same, and instead it is assumed that $E(U)$ cannot be improved by changes to the distribution of $A$, if one keeps the distribution of $V$ constant. This has the feel of being a kind of differential condition: the infinitesimal impact on $E(U)$ of changes to $P(A)$, but not $P(V)$, is non-positive.
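One way to write this down (my own rendering of the condition, with assumed notation, not a quote from their post) is: perturb the action distribution towards any alternative while freezing the distribution of $V$ at the value the current distribution induces, and require the directional derivative to be non-positive.

```latex
% My rendering of the differential condition (notation assumed):
% P_0 is the current action distribution, P' any alternative, and
% P_0(v | e) is the distribution of V induced by P_0, held frozen.
P_\epsilon(a) = (1-\epsilon)\,P_0(a) + \epsilon\,P'(a), \qquad
\left.\frac{d}{d\epsilon}\right|_{\epsilon=0}
\sum_{x,a,v} U(x)\,P(x \mid v, a, e)\,P_0(v \mid e)\,P_\epsilon(a)
\;\le\; 0 \quad \text{for all } P'.
```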

I suspect my version might have some odd behaviour (defining the implicit distribution behind $P(v \mid e)$ does not seem entirely natural), but I’m not sure of the transitive properties of the differential approach.