To your point, one of the premises of the paper is that for a moral hijacking scenario to be meaningful, there must be no “easy outs,” such as simply equipping the AIs with a purple light filter (Section 3.1). These AIs may have legitimate instrumental goals, such as the desire to understand the true nature of their surroundings, that conflict with the need to hide behind such a filter.
I found it challenging to come up with a clear example of purely aesthetic harm to use as the running example in the work. I think it’s important for the harm to be minor, since these are the situations where we have the most reason to actually accommodate the AI’s suffering. Purple is just a color, but as you mention, we could imagine more abstract concepts as sources of suffering that are harder to simply cover up.
Clare Palmer has put forward some interesting duty-of-care arguments in animal ethics that could be applied here. There is an argument to be made that whoever benefits from such AIs is also on the hook for alleviating their suffering, even if they didn’t cause it in the first place. For example, if you benefit from a violet-averting household helper, you have a duty of care toward it.
I think some of the other interesting questions concern the tradeoffs in moral hijacking scenarios. What novel sources of suffering is it acceptable to create in AIs in the first place?