I think this analysis assumes or emphasizes a false distinction between humans and “AI”. For example, Searle’s Room is an artificial intelligence built partly out of a human. It is easy to imagine intelligences built strictly out of humans, without paperwork. When humans behave like humans, we naturally form supervening entities (groups, tribes, memes).
I tried to rephrase Chalmers’ four-point argument without making a distinction between humans acting “naturally” (whatever that means) and “artificial intelligences”:
There is some degree of human intelligence and capability. In particular, human intelligence and capabilities have always involved manipulating the world indirectly (mediated by other humans or by nonhuman tools). “There is I”
Since intelligence and capabilities are currently useful for modifying both ourselves and our tools, applying our intelligence and capabilities to ourselves and our tools will make us grow in intelligence and capabilities. “If there is I, there will be I+”
If this self-applicability continues for many cycles, we will become very smart and capable. “If there is I+, there will be I++.”
Therefore, we will become very smart and very capable. “There will be I++.”
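The four steps above can be caricatured as a toy compounding model. This is purely my own illustration, not anything from Chalmers: the constant per-cycle amplification factor `gain` is an assumption I'm introducing, and nothing in the argument fixes its value.

```python
# Toy sketch of the I -> I+ -> I++ argument: each cycle, current
# capability is applied to improving ourselves and our tools, which is
# modeled (as an assumption) as multiplying capability by a constant
# factor `gain` per cycle.

def capability_after(cycles: int, base: float = 1.0, gain: float = 1.1) -> float:
    """Capability after `cycles` rounds of self-application."""
    capability = base
    for _ in range(cycles):
        capability *= gain  # "If there is I, there will be I+"
    return capability

# Many cycles of even a modest gain compound: "There will be I++."
print(capability_after(100))  # ~13780x the starting capability
```

The point of the sketch is only that the conclusion follows from iteration, not from any single dramatic jump, which is exactly why it applies to humans-plus-tools as much as to anything labeled “AI”.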
I’m not trying to dismiss the dangers involved in this process; all I’m saying is that the language used feeds a Skynet “us versus them” mentality that isn’t helpful. Admitting that “We have met the enemy and he is us” focuses attention where it ought to be.
A lot of AI-risk dialogue is a blend of: foolish people focusing on Skynet scenarios, foolish rhetoric that alludes to Skynet scenarios (whatever the author actually intends), and straightforward, sensible policies that could and should be separated from the bad science fiction.
This is what I mean by straightforward, sensible, non-sf policies: we have always made mistakes when using tools. Software tools let us make more mistakes faster, especially “unintended consequences” mistakes. We should put effort into developing more safety techniques that guard against the unintended consequences of our software tools.
What mentality other than “us versus them” would be even remotely helpful for dealing with a UFAI?
We have met the enemy and we are paperclips.
“Us versus them” presupposes the existence of a “them”, i.e. a UFAI, which means we have probably already lost. So really, no mentality would be even remotely helpful for dealing with an existing UFAI.
Sci-fi policies can’t be good policies?