Hell, even if some group wanted to make big money off of predicting AI
doom in particular, they could do it a lot better than SIAI does [...]
People have tried much the same plan before, you know. Hugo de Garis was using much the same fear-mongering marketing strategy to draw attention to himself before the Singularity Institute came along.
Hugo de Garis predicts a future war between AI supporters and AI opponents that will cause billions of deaths. That is a highly inflammatory prediction, because it fits neatly with human instincts about ideological conflicts and science-fiction-style technology.
The prediction that AIs will be dangerously indifferent to our existence unless we take great care to make them otherwise is not an appeal to human intuitions about conflict or important causes. Eliezer could talk about uFAI as if it were approximately like Skynet and draw substantially more (useless) attention, while still advocating for his preferred course of research. That he has not done so is evidence that he is more concerned with representing his beliefs accurately than with attracting media attention.
People have tried that too. In 2004 Kevin Warwick published "March of the Machines", an apocalyptic view of what the future holds for mankind, with superior machines out-competing obsolete humans and crushing them like ants.
Obviously some DOOM mongers will want their vision of DOOM to be as convincing and realistic as possible. The more obviously fake the visions of DOOM are, the fewer people believe them, and the poorer the associated marketing. Making DOOM seem as plausible as possible is a fundamental part of the DOOM monger's trade.
The Skynet niche, the Matrix niche, the 2012 niche, the “earth fries” niche, the “alien invasion” niche, the “asteroid impact” niche, the “nuclear apocalypse” niche, and the “deadly plague” niche are all already being exploited by other DOOM mongers—in their own way. Humans just love a good disaster, you see.