Can AI X-risk be effectively communicated by analogy to climate change? That is, the threat isn’t manifesting itself clearly yet, but experts tell us it will if we continue along the current path.
Though there are various disanalogies, this specific comparison seems both honest and likely to be persuasive to the left?
I don’t like it. Among various issues, people already muddy the waters by erroneously calling climate change an existential risk (rather than what it was, a merely catastrophic one, before AI timelines made any worries about climate change in the year 2100 entirely irrelevant), and it’s extremely partisan-coded. And you’re likely to hear that any mention of AI x-risk is a distraction from the real issues, namely whatever people cared about previously.
I prefer an analogy to gain-of-function research. As in, scientists grow viruses/AIs in the lab, with promises of societal benefits, but without any commensurate acknowledgment of the risks. And you can’t trust the bio/AI labs to manage these risks, e.g. even high biosafety levels can’t entirely prevent outbreaks.
I agree that there is a consistent message here, and I think it is one of the most practical analogies, but I get the strong impression that tech experts do not want to be associated with environmentalists.
I think it would be persuasive to the left, but I’m worried that comparing AI x-risk to climate change would make it a left-wing issue to care about, which would make right-wingers automatically oppose it (upon hearing “it’s like climate change”).
Generally it seems difficult to make comparisons/analogies to issues that (1) people are familiar with and think are very important and (2) are not already politicized.
I’m looking at this not from a CompSci point of view but from a rhetoric point of view: isn’t it much easier to make tenuous or even flat-out wrong links between climate change and highly publicized natural-disaster events that have lots of dramatic, visceral footage than it is to ascribe danger to a machine that hasn’t been invented yet, and whose nature and inclinations we don’t know?
I don’t know about nowadays, but for me the two main pop-culture touchstones for “evil AI” are Skynet in Terminator and HAL 9000 in 2001: A Space Odyssey (and, by inversion, the Butlerian Jihad in Dune). Wouldn’t it be more expedient to leverage those? (Expedient: I didn’t say accurate.)