I use the term “biological superintelligence” to refer to superhuman intelligences that have a functional architecture that closely resembles that of the natural human brain. A biological superintelligence does not necessarily have an organic substrate.
I would use a slightly refined definition, according to which biological superintelligence per se does necessarily have an organic substrate. A superintelligent mind upload could be called a biologically descended superintelligence, and a neuromorphic AI could be called a biologically inspired superintelligence.
(“Neuromorphic” is itself a broad, fuzzy term. To some extent, all artificial neural networks are already brain-inspired.)
Biological superintelligence is the holy grail of AI safety because it solves the alignment problem and the control problem by avoiding them entirely.
This isn’t true, for reasons which illuminate the nature of problems pertaining to superintelligence.
Natural human brains do not contain superintelligence. Therefore, to make them superintelligent, you either have to add something to them, or change something in them. If you change something in them, you’re potentially changing the parts of the individual or the parts of human nature that matter for alignment. If you add something to them, it’s the same situation as a human with an external AI.
Biohacking, neurohacking, and uploading carry their own risks. The obvious risk is that you kill or injure yourself. A more subtle risk is that you change yourself in a way that you wouldn’t actually have wanted. This second kind of “risk” spans a continuum, from clearly undesirable outcomes (e.g. after the change, you end up believing things that you would never have wanted to believe) through to very subtle effects on a par with the ambiguous decisions of ordinary everyday life.
But considering these topics is far from a waste of time. Neuroscience matters for at least two reasons. One is that knowledge of the human brain has often been considered essential to the creation of deeply aligned AI, all the way from the days of CEV through to more recent proposals like Metaethical AI. The other is that the theory and practice of safely aligning brainlike AI is a natural source of ideas for the presently unknown theory and practice of safely modifying and enhancing biological human brains.