Subagents and impact measures: summary tables
These tables will summarise the results of this whole sequence, checking whether subagents can neutralise the impact penalty.
First of all, given a subagent, here are the results for various impact penalties and baselines, and various “value difference summary functions” $f$:
Another way of phrasing “$f$ decreasing”: it penalises too little power, not too much. Conversely, “$f$ increasing” penalises too much power, not too little. Thus, unfortunately:
- Subagents do allow an agent to get stronger than the indexical impact penalty would allow.
- Subagents don’t allow an agent to get weaker than the indexical impact penalty would allow.
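As a rough sketch of the shape these penalties take (illustrative notation only; the exact definitions used earlier in this sequence differ in details such as baselines and rollouts): for a set $\mathcal{R}$ of auxiliary reward functions, a current state $s_t$, and a baseline state $s_t'$, the penalty is of the form

$$\text{Penalty}(s_t) = \sum_{R \in \mathcal{R}} f\big(V_R(s_t) - V_R(s_t')\big),$$

where $V_R$ is the attainable value of $R$ and $f$ is the value difference summary function. Taking $f(x)=\max(-x,0)$ (a decreasing $f$) penalises only losses of attainable value, hence too little power; taking $f(x)=\max(x,0)$ (an increasing $f$) penalises only gains, hence too much power; $f(x)=|x|$ penalises both.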
Examples
This table presents, for three specific example agents, whether the agent could actually build a subagent, and whether doing so would neutralise its impact penalty in practice (with the inaction baseline):
Here, 20BQ is twenty billion questions, RR is relative reachability, and AU is attainable utility.
Now, whether the RR or AU penalties are technically undermined depends on $f$, not on what measure is being used for value. However, I feel that the results undermine the spirit of AU much more than the spirit of RR. AU attempted to control an agent by limiting its power; that effect is mainly neutralised. RR attempted to control the side-effects of an agent by ensuring it retained enough power to reach a lot of states; that effect is not neutralised by a subagent.
Things might get complicated by partial observability; in the real world, the agent is minimizing change in its beliefs about what it can reach. Otherwise, you could just get around the subagent (SA) problem for AUP as well, by replacing the reward functions with state-indicator reward functions.
AU and RR have the same SA problem, formally, in terms of excess power; it’s just that AU wants low power and RR wants high power, so they don’t have the same problem in practice.
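(To unpack that state-indicator point, as an illustrative aside: if the auxiliary reward set consists of indicator rewards $R_s(x) = \mathbb{1}[x = s]$ for various states $s$, then the attainable value $V_{R_s}$ is, roughly, a rescaled measure of how reachable $s$ is, so AU over such a reward set behaves much like RR.)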