Here we were talking about a superintelligent agent whose “fondest desire is to fill the universe with orgasmium”. About the only way such an agent would fail to produce enormous complexity is if it died—or was otherwise crippled or imprisoned.
Or if the agent has a button that, through simple magic, directly fills the universe with (stable) orgasmium. Did you even read what I wrote?
Whether humans would want to live in (or would survive) the same universe as an orgasmium-loving superintelligence seems like a totally different issue to me, and rather irrelevant to the point under discussion.
Human morality is the point under discussion, so of course it’s relevant. It seems clear that the chief kind of “complexity” that human morality values is that of conscious (whatever that means) minds and societies of conscious minds, not complex technology produced by unconscious optimizers.
Re: Did you even read what I wrote?
I think I missed the bit where you went off into a wild and highly-improbable fantasy world.
Re: Human morality is the point under discussion
What I was discussing was the “tendency to assume that complexity of outcome must have been produced by complexity of value”. That is not specifically to do with human values.