There is something I am missing here.
Get rid of enough constraints, and you’ll get the equivalent of a Spiegelman’s monster, no longer even remotely human.
And this is bad how?
Human value is definitely the something to protect, and business as usual will destroy us.
What do you mean by “destroy us”? Change 21st-century human animals into something better adapted to survive in the new Universe?
EDIT: I guess I should articulate my confusion better: what’s wrong with gradually becoming one of Egan’s jewelheads (sounds like an equivalent of uploading to me) or growing an earring-based prosthetic neocortex?
I guess I should articulate my confusion better: what’s wrong with gradually becoming one of Egan’s jewelheads (sounds like an equivalent of uploading to me) or growing an earring-based prosthetic neocortex?
I don’t think those outcomes would be particularly bad: they’re still keeping most constraints in place. If all that remained of humanity were replicators who only cared about making more copies of themselves and might not even be conscious, now that sounds much worse.
Adopting a somewhat external view: would not an alien looking at the earthlings describe them exactly like that?
No, why do you think so? The alien might of course be simply mistaken about the consciousness, but unless you’re going to assert that humans are not in fact conscious, an alien who did say that would actually be making a mistake. And it seems clear that humans care about a lot of things besides reproduction, or birth rates would not fall in wealthy countries.
The alien might of course be simply mistaken about the consciousness
What behavior would unambiguously tell an alien that humans are conscious?
birth rates would not fall in wealthy countries
This could simply be an instinctive reaction related to the saturation of some resource, or a chemical reaction due to the presence of some inhibitor (e.g. auto emissions).
What behavior would unambiguously tell an alien that humans are conscious?
I have no idea, but there needn’t be one. The alien may just be out of luck. He’ll still be mistaken. My point is that you cannot use an outside view that you know to be mistaken as an argument for anything in particular.
This could simply be an instinctive reaction related to the saturation of some resource, or a chemical reaction due to the presence of some inhibitor (e.g. auto emissions).
Well yes, it could; but are you genuinely asserting that this is in fact the case? If not, what’s your point?
I don’t understand what you’re trying to argue here. You presumably do not actually believe that humans are non-conscious and care only about replication. So where are you going with the alien?
My point was that, were we to see the “future of humanity”, what looks to us now like “replicators who only cared about making more copies of themselves and might not even be conscious” could be nothing of the sort, just as current humanity, which may look like “replicators” to an alien, is nothing of the sort. We are the alien here, with no capacity to judge the future.
Ok, but we are discussing hypothetical scenarios and can define the hypotheticals as we like; we are not directly observing the posthumans, and thus are not liable to be misled by what we see. You cannot be mistaken about something you’re making up! In short, you’re just fighting the hypothetical. I suggest that this is not productive.
Am I? Fighting the hypothetical is only unproductive when you challenge the premises of the hypothetical scenario. Kaj Sotala’s hypothetical was “If all that remained of humanity were replicators who only cared about making more copies of themselves and might not even be conscious”. I pointed out that we are in no position to judge the future replicators based on our current understanding of humanity and its goals, or of what “being conscious” might mean. Does this count as challenging the premises?
People are somewhat flexible. If they’re highly optimized for a particular set of constraints, then the human race is more likely to get wiped out.
This seems like the least of our concerns here. I think a far-flung, spacefaring strain of highly efficient mindless replicators well-protected against all forms of existential risk is still a horrifying future for humanity.
I probably have a stronger belief in unknown unknowns than you do, but I agree that either outcome is undesirable.
Ah, I see. I did not read the original post or Yvain’s examples as necessarily resulting in the loss of flexibility, but I can see how this can be a fatal side effect in some cases. I guess this would be akin to sacrificing far mode for near mode, though not as extreme as wireheading.
Second thought: Is there any conceivable way of increasing human flexibility, or would it get borked by Goodhart’s Law?
Increase average total human wealth significantly, such that a greater proportion of the total population has more ability to meaningfully try new things or respond to novel challenges in a stabilizing manner.
(The caveats pretty much write themselves.)