Page 102, “Many more orders of magnitudes of human-like beings could exist if we countenance digital implementations of minds—as we should.” I’d like to hear others’ thoughts about that, especially why he writes “as we should.”
I think Bostrom wrote it that way to signal that while his own position is that digital mind implementations can carry the same moral relevance as e.g. minds running on human brains, he acknowledges that there are differing opinions about the subject, and he doesn’t want to entirely dismiss people who disagree.
He’s right about the object-level issue, of course: Solid state societies do make sense. Mechanically embodying all individual minds is too inefficient to be a good idea in the long run, and there’s no overriding reason to stick to that model.
If by ‘countenance’ we mean support normatively (I think it is sometimes used as ‘accept as probable’), then, aside from the possible risks of the transition, digital minds seem more efficient in many ways (e.g. they can reproduce much more cheaply, run on power from many sources, live forever, and be copied in an educated state), and so seem likely to improve progress on many things we care about. They seem likely to be conscious if we are, but even if they aren’t, it would plausibly be useful to have many of them around alongside conscious creatures.
So, what is going into the bloodstream of these “digital minds”? That will change the way they function completely.
What kind of sensory input are they being supplied?
Why would they have fake nerves running to non-existent hands, feet, hearts and digestive systems?
Will improving them be voluntary? Are they allowed to request improvements?
I would certainly request a few improvements, if I were one.
Point being: what you end up with if you go down this road is not a copy of a human mind: It is almost immediately a neuromorphic entity.
A lot of analysis in this book imagines that these entities will continue to be somewhat human-like for quite some time. That direction does not parse for me.
People who have thought about this seem to mostly think that a lot of things would change quickly—I suspect any disagreement you have with Bostrom is about whether this creature derived from a human is close enough to a human to be thought of as basically human-like. Note that Bostrom thinks of the space of possible minds as being vast, so even a very weird human descendant might seem basically human-like.