In all this research by SI into making a friendly super intelligence, how much effort is being expended on making us the friendly super intelligence? Is there any institute particularly looking into that?
From the limited amount I’ve seen, SI seems to be planning on the super AI as more a descendant of a PC than a descendant of Homo Sapiens.
Do you mean whole brain emulation as opposed to what Anna and I called “de novo” AI? E.g. the sort of thing Carl discussed in Whole Brain Emulation and the Evolution of Superorganisms?
I mean neither, although whole brain emulation would be one avenue for what I mean. If you could upload yourself into silicon, then interfacing and merging with other programs would be simplified, and effectively you (now) could more easily transform into the you (later) Super Intelligence.
I mean improving the wetware through drugs, training, genetic modification, and improving wetware/hardware interfaces. Watson beat humans, but Watson + Human would beat Watson. Currently, the most capable intelligent system is human + machine, and that will remain so for a while. How do we extend that window until we have effectively merged with machines and the Super Intelligence is literally us? Making existing human friendliness the base of the solution seems more promising than trying to build friendliness from scratch.
I’m skeptical of our ability to use Mathemagic to constrain a Super Intelligence. I wish SingInst luck in the endeavor, but I expect any Super Intelligence to display emergent properties likely to subvert our best-laid plans—not woo-woo emergence, just functional interactions creating capabilities that we won’t anticipate and don’t understand.
If you think silicon will win, that’s fine, but improving the wetware is at least a risk-mitigation strategy versus a silicon SI—the smarter we are, the more likely we can keep up, and the more capable we’ll be of creating an SI that doesn’t turn us into paper clips.
Maybe the same thing happens to us if we merge with technology; we change in ways we don’t anticipate, and become paper clip maximizers. Oh well. Better a paper clip maximizer than a paper clip.
“Merging” is vague. And homo sapiens aren’t Friendly.
I like my chances with homo sapiens better than an alien intelligence designed by us.
If it’s an alien intelligence and doesn’t have a global association table like Starmap-AI, we are already doomed.