I think the conflict here reflects some of the issues of consciousness vs. cancer. A basic concern is uncertainty about whether agents following short-description-length decision/optimization procedures might turn out to be much more competitive after all, and whether the complexity of values we got out of evolution is just a lucky happenstance. I’m unsure what sorts of evidence we could look for, one way or the other, on that question.
I’m not sure why “complexity of values” is itself supposed to be valuable. Framing the question as which values are valuable is perhaps confused, but on a consequentialist account it’s at least possible to compare one’s own values to another set of values. Even assuming human values are complex (which I’m still not sure of), I don’t see why one would in general expect complex value-sets to be closer to human values than simple value-sets are, since complex value-sets also differ widely from each other.
Very enjoyable!
There’s an intuitive concern that too simple a specification destroys things we might care about, via lossy compression.
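To make that intuition concrete, here is a minimal, hypothetical sketch (not from the original discussion): a value specification modeled as weights over many considerations, with “compression” keeping only the highest-weight term. The names and weights are invented for illustration; the point is just that the low-weight terms the compression drops are exactly what distinguished the richer outcome.

```python
# Hypothetical toy model: a value specification as weights over considerations,
# and a lossy compression that keeps only the k largest weights.

def compress(values, k):
    """Keep the k highest-weight considerations; drop the rest (lossy)."""
    top = sorted(values, key=values.get, reverse=True)[:k]
    return {name: values[name] for name in top}

def score(values, outcome):
    """Score an outcome (a set of satisfied considerations) under a value set."""
    return sum(weight for name, weight in values.items() if name in outcome)

# A toy complex value set: one dominant term plus many small ones.
complex_values = {"survival": 10.0, "novelty": 1.0, "humor": 0.9,
                  "friendship": 0.8, "art": 0.7, "fairness": 0.6}
simple_values = compress(complex_values, k=1)  # short description: survival only

bland = {"survival"}
rich = {"survival", "novelty", "humor", "friendship", "art", "fairness"}

# The full specification prefers the rich outcome; the compressed one is
# indifferent between them, because the distinctions it dropped were doing the work.
print(score(complex_values, rich) > score(complex_values, bland))   # True
print(score(simple_values, rich) == score(simple_values, bland))    # True
```

Under this toy model, an agent optimizing the compressed specification is exactly indifferent to everything the compression threw away, which is the lossy-compression worry in miniature.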