What’s wrong with simplicity of value?

In the Wiki article on complexity of value, Eliezer wrote:

The thesis that human values have high Kolmogorov complexity—our preferences, the things we care about, don't compress down to one simple rule, or a few simple rules.

Thou Art Godshatter describes the evolutionary psychology behind the complexity of human values—how they got to be complex, and why, given that origin, there is no reason in hindsight to expect them to be simple.

But in light of Yvain's recent series of posts (i.e., if we consider our "actual" values to be the values we would endorse in reflective equilibrium, instead of our current apparent values), I don't see any particular reason, whether from evolutionary psychology or elsewhere, that they must be complex either. Most of our apparent values (which admittedly are complex) could easily be mere behavior, which we would discard after sufficient reflection.

For those who might wish to defend the complexity-of-value thesis, what reasons do you have for thinking that human value is complex? Is it from an intuition that we should translate as many of our behaviors into preferences as possible? If other people do not have a similar intuition, or perhaps even have a strong intuition that values should be simple (and therefore would be more willing to discard things that are on the fuzzy border between behaviors and values), could they think that their values are simple, without being wrong?