Improved formalism for corruption in DIRL

We give a treatment of advisor corruption in DIRL that is more elegant and general than our previous formalism.

The following definition replaces the original Definition 5.

Definition

Consider a meta-universe and . A metapolicy is called -rational for when there exists s.t. the following conditions hold. (As opposed to before, we assume is an -metapolicy rather than an -metapolicy; this is purely for notational convenience, and it is straightforward to generalize the definition.)

i. For any and , there is s.t. .

ii.

iii. For any and


In condition ii, is understood to mean . Conditions i+ii can be seen as the definition of given . A notable special case of condition iii is when for any
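
For concreteness, here is a sketch of what such a special case can look like, in soft-max form (writing Q for the function whose existence the Definition postulates, β for the rationality parameter and \(\mathcal{A}\) for the set of actions; this is a generic model, and the exact condition in the Definition may differ):

\[
\alpha(a \mid x) \;=\; \frac{\exp\big(\beta\, Q(x,a)\big)}{\sum_{b \in \mathcal{A}} \exp\big(\beta\, Q(x,b)\big)}
\]

Under such a condition, the probability the advisor assigns to an action decays exponentially, at rate β, in the action's suboptimality as measured by Q.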

As a simple example, we can have a set of corrupt states in which the behavior of the advisor becomes arbitrary, but for each there is s.t. and (i.e., to corrupt the advisor, one has to take an action that the advisor would never take). As opposed to before, this formalism can also account for partial corruption. For example, suppose that for each and we have (as in strict -rationality), whereas for we only have for some constant . Then, to ensure -rationality, it is sufficient that for each :
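
Returning to the simple example, one way to formalize it (writing C for the set of corrupt histories and α for the advisor metapolicy; this notation is ours) is to require

\[
x \notin C,\ xao \in C \;\Longrightarrow\; \alpha(a \mid x) = 0,
\]

so that a history can enter C only through an action to which the advisor assigns probability zero; corruption then requires the agent itself to take such an action.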

Theorem

Consider a countable family of -meta-universes and s.t. . Let be a family of -metapolicies s.t. for every , is -rational for . Define . Then, is learnable.
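
Recall, informally, that learnability of the family means a single metapolicy achieves asymptotically optimal expected utility in each universe of the family as the geometric discount parameter γ goes to 1; schematically (a paraphrase, omitting the precise conditions of the original definition):

\[
\exists \pi^{*}\ \forall k:\ \lim_{\gamma \to 1} \left( \mathrm{EU}^{*}_{\bar{\upsilon}^{k}}(\gamma) - \mathrm{EU}^{\pi^{*}}_{\bar{\upsilon}^{k}}(\gamma) \right) = 0
\]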

Proof of Theorem

We don’t spell out the proof in detail; we only describe the modifications with respect to the original proof.

As in the proof of the original theorem, we can assume without loss of generality that is finite. Define the same way as in Lemma A, but with redefined as

Similarly, define the same way as in the proof of Lemma A, but with redefined as

As in the proof of Lemma A, we have

Using condition iii in the Definition, we conclude that for some function with

We can now repeat the same arguments as in the proof of Lemma A to get

The desired result follows.
