[Question] Since figuring out human values is hard, what about, say, monkey values?

So, human values are fragile, vague, and possibly not even a well-defined concept, yet figuring them out seems essential for an aligned AI. It seems reasonable that, faced with a hard problem, one would start instead with a simpler one that has some connection to the original. For someone not working in ML or AI alignment, it seems obvious that researching simpler-than-human values might be a way to make progress. But maybe this is one of those falsely obvious ideas that non-experts tend to push after only a cursory look at a complex research topic.

That said, assuming that value complexity scales with intelligence, studying less intelligent agents and their version of values may be something to pursue. Dolphin values. Monkey values. Dog values. Cat values. Fish values. Amoeba values. Sure, we lose the inside view in this case, but the trade-off seems at least worth exploring. Is there any research going on in that area?