“Taste for variety” seems to be a pretty fundamental preference of mine (and of many other people; all people, maybe?). If this preference is widely shared among intelligent agents, I wonder if it could lead to a surprising amount of convergence among the things they end up optimizing for, as they might want to “try” the things the other agents would do. Perhaps you could imagine that there are “universal values” (à la the “universal distribution”) that are within some constant factor of optimality for a very wide range of other values, including the other universal values.[1] Variety-preference also seems like it would be pretty convergently useful. Reason for optimism?

Lack of variety also seems to be a common thing that’s perverse about some of the imagined AGI takeover worlds: paperclips, tiling the world...
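To spell out the analogy (the distributions half is the standard Solomonoff fact; the values half is pure speculation on my part): the universal distribution dominates every computable distribution up to a multiplicative constant,

$$\xi(x) \;=\; \sum_{\mu}\, 2^{-K(\mu)}\,\mu(x) \;\;\ge\;\; 2^{-K(\mu)}\,\mu(x) \quad \text{for every computable } \mu \text{ and all } x,$$

where $K(\mu)$ is the length of the shortest program computing $\mu$. The hoped-for analogue would be a value system $V^*$ such that, for every value system $V$ in some wide class, a $V^*$-optimal policy achieves at least $c_V$ times as much $V$-value as a $V$-optimal policy does, with $c_V$ depending only on $V$ and not on the world.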
> “Taste for variety” [...] could lead to a surprising amount of convergence among the things they end up optimizing for
Wouldn’t this be tautologically untrue? Speaking as a variety-preferrer, I’d rather my values not converge with all the other agents going around preferring varieties. It’d be boring! I’d rather have the meta-variety where we don’t all prefer the same distribution of things.
> Speaking as a variety-preferrer, I’d rather my values not converge with all the other agents going around preferring varieties.
Maybe you could try to verify this by writing a long list of things you would like to experience… and then marking each item on the list either “I invented this myself” or “I heard of someone else doing it, and it inspired me”.
I guess it depends on whether you have a preference for variety in the world in general, or in your own actions/experiences. But even in the world-in-general case, there would be a force towards convergence in the things that overall get optimized for, compared across different worlds. (Unless your preference is over variety across possible worlds, but that starts to seem a bit unnatural/hard to optimize for.)
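Here’s a minimal toy sketch of that convergence force, under assumptions I’m choosing purely for illustration (utility that’s concave in how much of each activity you do, a fixed time budget, greedy allocation, randomly drawn base tastes):

```python
import numpy as np

rng = np.random.default_rng(0)
n_activities, budget = 20, 100

def greedy_portfolio(taste):
    # Allocate `budget` unit steps greedily to maximize
    # u(counts) = sum_i taste[i] * sqrt(counts[i])  (diminishing returns per activity).
    counts = np.zeros(n_activities)
    for _ in range(budget):
        gains = taste * (np.sqrt(counts + 1) - np.sqrt(counts))  # marginal gain per activity
        counts[np.argmax(gains)] += 1
    return counts

# Two agents with independently drawn base tastes (a toy assumption).
taste_a = rng.uniform(0.5, 1.5, n_activities)
taste_b = rng.uniform(0.5, 1.5, n_activities)

pa, pb = greedy_portfolio(taste_a), greedy_portfolio(taste_b)
overlap = np.minimum(pa, pb).sum() / budget
print(f"portfolio overlap: {overlap:.0%}")  # large overlap despite different tastes
```

Because the utility is separable and concave, greedy allocation is optimal here, and both agents spread across nearly all activities; their portfolios overlap heavily even though their base tastes are independent. That’s the world-in-general version of the convergence force: what overall gets optimized for looks similar across agents.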
[1] Yes, obviously the devil is in the details of this “very wide range of other values” and “constant factor”.