I would expect minimizing empowerment across the board to impede the agent's ability to achieve its objectives. You do want the agent to have large effects on the parts of the environment that are relevant to its objectives, without being incentivized to negate those effects in weird ways just to achieve low impact overall.
I think we need something like a sparse empowerment constraint, where you minimize empowerment over most (but not all) dimensions of the future outcomes.
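To make that concrete, here is a minimal sketch using the standard definition of empowerment as the channel capacity from action sequences to future states, with a hypothetical mask $M$ (my notation, not anything standard) picking out the dimensions the constraint applies to. Ordinary $k$-step empowerment at state $s_t$ is

$$\mathcal{E}(s_t) = \max_{p(a_{t:t+k-1})} I\!\left(A_{t:t+k-1};\, S_{t+k} \mid s_t\right),$$

and a sparse variant would restrict the mutual information to the masked components of the future state,

$$\mathcal{E}_M(s_t) = \max_{p(a_{t:t+k-1})} I\!\left(A_{t:t+k-1};\, S_{t+k}[M] \mid s_t\right),$$

with the agent optimizing something like $\mathbb{E}[R] - \lambda\, \mathcal{E}_M$. The unmasked dimensions, the ones relevant to the objective, are left out of the penalty, so the agent can still have large effects there without any pressure to undo them.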