So how on Earth can it be difficult to convince people that UFAI is an issue?
Well, there are a couple of prongs to that. For one thing, it’s tagged as fiction in most people’s minds, as might be suggested by the fact that it’s so easily described in tropes. That’s bad enough by itself.
Probably more importantly, though, there’s a ferocious tendency to anthropomorphize this sort of thing, and you can’t really grok UFAI without burning a good bit of that tendency out of your head. Sure, we ourselves aren’t capital-F Friendly, but we’re a far cry yet from a paperclip maximizer or even most of the subtler failures of machine ethics; a jealous or capricious machine god is bad, but we’re talking Screwtape here, not Azathoth. HAL and Agent Smith are the villains of their stories, but they’re human in most of the ways that count.
You may also notice that we tend to win fictional robot wars.
Also, note that the tropes tend to work against people who say “we have a systematic proof that our design of AI will be Friendly”. In fact, in general the only way a fictional AI will turn out ‘friendly’ is if it is created entirely by accident—ANY fictional attempt to intentionally create a Friendly AI will result in an abomination, usually through some kind of “jackass genie” interpretation of its Friendliness rules.
Yeah. I think I’d consider that a form of backdoor anthropomorphization by way of vitalism, though. Since we tend to think of physically nonhuman intelligences as cognitively human, and since we tend to think of human ethics and cognition as something sacred and ineffable, fictional attempts to eff them tend to be written as crude morality plays.
Intelligence arising organically from a telephone exchange or an educational game or something doesn’t trigger the same taboos.