There is significant progress in genetic modification of humans and in physical modification/augmentation of humans. It is plausible we will have genetically modified and/or physically modified human intelligence before we have artificial intelligence.
FAI is the pursuit of artificial intelligence constrained in a way that it will not be a threat to unmodified humans. Or at least, that is how it appears to me as an observer of discussions here; is this a reasonable description of FAI?
It occurs to me that natural human intelligence has certainly not developed with any such constraints. Indeed, if humanity can develop UAI, then that is essentially proof that human intelligence is not Friendly in the sense we wish FAI to be.
Presumably we have been more worried about how to constrain AI to be friendly because AI could learn to self-modify, experience exponential growth, and thus overwhelm human intelligence. But what of modified human intelligence, genetic or physical? These ARE examples of self-modification. And both appear capable of inducing exponential growth.
Is the threat from unfriendly human intelligence any less or any different, or worthy of consideration as an existential risk? If an intelligence arises from a modified human, is it a threat to unmodified humans, or an enhancement of them? How do we define natural and artificial when our purpose in defining them is to protect the one from the other?
Human intelligence has already chosen to maximize the burning of oil with no regard for the viability of our biosphere, so we’re already living under an Unfriendly Human Intelligence scenario.
Bostrom discusses this possibility in Superintelligence, both in the form of enhanced biological cognition and in brain/machine interfaces. Ultimately he argues that a superintelligent singleton is more likely to be a machine than an enhanced biological brain: increases in cognitive ability should be much faster with a machine intelligence than through biological enhancement, and machine intelligence is more scalable (I believe he makes the point that, while a human brain the size of a warehouse is not practical, a computer the size of a warehouse is).
human intelligence is not Friendly in the sense we wish FAI to be.
Well, of course it’s not. Nobody ever said it is.
capable of inducing exponential growth.
Biologically, on the wetware substrate? I don’t think that’s possible. And if you mean uploads/ems, the distinction between human and AI becomes somewhat vague at this point...
Currently, I’d say the threat from unfriendly natural intelligence is many orders of magnitude higher than that from AI.
There is a valid question of the shape of the improvement curve, and it’s at least somewhat believable that technological intelligence outstrips puny humans very rapidly at some point, and shortly thereafter the balance shifts by more than is imaginable.
Personally, I’m with you—we should be looking for ways to engineer friendliness into humans as the first step toward understanding and engineering it into machines.
we should be looking for ways to engineer friendliness into humans
No. That’s a really bad idea.
First, no one even knows what “friendliness” is. Second, I strongly suspect that attempts to genetically engineer “friendly humans” will end up creating genetic slaves.
Perhaps. Don’t both of those concerns apply to AI as well?
Humans are the bigger threat, are more easily studied, and are (currently) changing slowly enough that we can be more deliberate with them than we can be with a near-foom AI (presuming post-foom is too late).
I don’t have anything in my moral framework that makes it acceptable to tinker with future conscious AIs and not with future conscious humans. Do you?
I don’t have anything in my moral framework that makes it acceptable to tinker with future conscious AIs and not with future conscious humans. Do you?
Sure I do. I’m a speciesist :-)
Besides, we’re not discussing what to do or not to do with hypothetical future conscious AIs. We’re discussing whether “we should be looking for ways to engineer friendliness into humans”. Humans are not hypothetical and “ways to engineer into humans” are not hypothetical either. They are usually known by the name of “eugenics” and have a… mixed history. Do you have reasons to believe that future attempts to “engineer humans” will be much better?
For the most part, eugenics does not have a mixed history. Eugenics has a bad name because it has historically been performed by eliminating people from the gene pool—through murder or sterilization. As far as I am aware, no significant eugenics movement has avoided this, and therefore the history would not qualify as mixed.
We should assume that future attempts will be better when those future attempts involve well developed, well understood, well tested, and widely (preferably universally) available changes to humans before they are born—that is, changes that do not take anyone out of the gene pool.
I probably am too, but I don’t much like it. I want to be a consciousness-ist.
Most humans are hypothetical, just like all AIs are. They haven’t existed yet, and may not exist in the forms we imagine them. Much like MIRI is not recommending termination of any existing AIs, I am not recommending termination of existing humans.
I am merely pointing out that most of what I’ve read about FAI goals seems to apply to future humans as much or more as to future AIs.
Personally, I’m with you—we should be looking for ways to engineer friendliness into humans as the first step toward understanding and engineering it into machines.
As far as I understand, engineering humans to be more friendly is a concern for the Chinese. They also happen to be more likely to do genetic engineering than the West.