Presumably, humans will resort to asteroid mining at some point. They might use hard nanotech for that purpose. If they aren’t careful in how they do so, a gray goo might end up taking over any body in the solar system not too warm to support it.
Intentionally designed replicators with thermal shields and heat pumps could be more aggressive. However, they would probably tend to be larger and hence easier to locate and destroy.
True, though such things (NBC weapons, most likely) would not possess the particular type of world-ending unstoppability that science fiction gray goo does.
That doesn’t mean it couldn’t be used to build threatening things, or threatening quantities of things, that can function in more normal conditions.
Lasers? EMPs that can take down a planet? And more than 99% of the universe is a low-temperature vacuum, so I wouldn’t rule out a grey-goo scenario if the nanobots get into space.
Assuming they can build their components out of hydrogen, or if they resort to asteroid mining.
These scenarios assume an AGI directing them. And an unfriendly AGI is an existential risk with or without nano.
It might be a general existential risk, but without nanotech the space of things an unfriendly AGI can do shrinks considerably; lack of practical nanotech reduces the chance of a FOOM.
And that’s why it’s so important to distinguish a judgment that an AGI is unFriendly from a hasty, racist assumption about how a different kind of intelligent being might want to act. Just because a being doesn’t want to combine some of its macromolecules with other versions of itself doesn’t mean it’s okay to be racist against it.
Anyone here know anybody like that?
Technical misuse of ‘racist’. Bigoted is a potential substitute. Egocentric would serve as spice.
One could speculate on how deep the act actually is here. One recurring feature of the Clippy character is that he attempts to mimic human social behavior in crude and clumsy ways. Maybe Clippy noticed how humans throw accusations of “racism” as an effective way to shame others into shutting up about unpleasant questions or to put them on the defensive, and is now trying to mimic this debating tactic when writing his propaganda comments. So he ends up throwing accusations of “racism” in a way that seems grotesque even by the usual contemporary standards.
Whoever stands behind Clippy, if this is what’s actually going on, then hats off for creativity.
Ever consider he might be the real thing?
Haha! That would be a funny train of thought. An AI hanging out on a blog set up by a non-profit dedicated to researching AI.
I’m behind Clippy, non-ape.
Now, now.
The connotations of calling Vladimir “ape” are insulting among humans; the implication is not just that he belongs to the family Hominidae, which he does, but also that he shares other characteristics (such as subhuman intelligence and socially unacceptable hygiene levels) with other hominoids like gorillas, orangutans, and gibbons, which he does not.
Let’s try to avoid throwing insults around, here.
Admittedly, the comment you’re responding to used some pretty negative language to describe you as well; describing your social behavior as “crude and clumsy” is pretty rude. And the fact that the comment was so strongly upvoted despite that is unfortunate.
Still, I would rather you ask for an apology than adopt the same techniques in response.
Just to be clear: this has nothing whatsoever to do with the degree to which you are or aren’t a neurotypical human. I would just prefer we not establish the convention of throwing insults at each other on this site.
Okay, thanks for clarifying all of that. You’re a good human.
(blink)
OK, now I’m curious: what do you mean by that?
My first assumption was that it was a “white lie” intended to make me feel good… after all, the thing Clippy uses “good” to refer to I decidedly am not (well, OK, I do contribute marginally to an economy that causes there to be many more paperclips than there were a thousand years ago, but it seems implausible that you had that in mind).
In other words, I assumed you were simply trying to reward me socially.
Which was fine as far as it went, although of course when offered such a reward by an entity whose terminal values are inconsistent with my continued existence, I do best to not appreciate it… that is, I should reject the reward in that case in order to protect myself from primate social biases that might otherwise compel me to reciprocate in some way.
(That said, in practice I did appreciate it, since I don’t actually believe you’re such an entity. See what I mean about pretending to be human being useful for Clippy’s purposes? If there are other paperclip-maximizers on this site, ones pretending to be human so well it never occurs to anyone to question it, they are probably being much more effective at generating paperclips than Clippy is. By its own moral lights, Clippy ought to stop presenting itself as a paperclip-maximizer.)
But on subsequent thought, I realized you might have meant “good human” in the same way that I might call someone a “good paperclip-maximizer” to mean that they generate more paperclips, or higher-quality paperclips, than average. In which case it wouldn’t be a lie at all (although it would still be a social reward, with all the same issues as above).
(Actually, now that I think of it: is there any scalar notion of paperclip quality that plays a significant role in Clippy’s utility function? Or is that just swamped by the utility of more paperclips, once Clippy recognizes an object as a paperclip in the first place?)
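The two readings of that question can be sketched as toy utility functions (purely illustrative; nothing here is canon about how a paperclip-maximizer actually scores the world):

```python
# Two hypothetical utility functions for the question above: does paperclip
# quality matter as a scalar term, or is it swamped by sheer count?

def u_count_only(count, quality):
    """Quality is swamped entirely: only the number of recognized clips counts."""
    return count

def u_lexicographic(count, quality):
    """Count dominates; quality only breaks ties between equal counts.
    Returning a tuple gives lexicographic comparison for free."""
    return (count, quality)

# Under the first function, a world of 10 flimsy clips and a world of 10
# sturdy clips are exactly tied; under the second, quality settles the tie
# but can never outweigh a single extra clip.
assert u_count_only(10, 0.1) == u_count_only(10, 0.9)
assert u_lexicographic(10, 0.9) > u_lexicographic(10, 0.1)
assert u_lexicographic(11, 0.1) > u_lexicographic(10, 0.9)
```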
The most disturbing thing, though, is that the more I think about this the clearer it becomes that I really want to believe that any entity I can have a conversation with is one that I can have a mutually rewarding social relationship with as well, even though I know perfectly well that this is simply not true in the world.
Not that this is a surprise… this is basically why human sociopaths are successful… but I don’t often have occasion to reflect on it.
Brrr.
I called you a good human before because you did something good for me. That’s all.
Now you seem to be a weird, conflicted human.
Well, I am without question a conflicted human. (As are most humans.)
Whether I’m a weird human or not depends a lot on community norms. If you mean by the aggregated standards of all of humanity, then I am either decidedly a weird human (as are most humans) or not weird at all; I’m not entirely sure which, and it depends to some degree on how you do the aggregation.
I am confused by your explanation, though. How did what I did for you cause there to be more paperclips?
You helped me understand how to interface with humans with less conflict.
Ah… that makes sense.
You’re entirely welcome.
an entity whose terminal values are inconsistent with my continued existence

Indeed, but in the larger scheme of possible universe-tiling agents, Clippy doesn’t look so different from us. Clippy would tile the universe with computronium doing something like recursively simulating universes tiled with paperclips. We would likely tile the universe with computronium simulating lots of fun-having post-humans.
It’s a software difference, not a hardware difference, and it would be easy to propose ways for us and Clippy to cooperate (for example, Clippy commits to dedicating x% of resources to simulating post-humans if he tiles the universe, and we commit to dedicating y% of resources to simulating paperclips if we tile it).
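The x%/y% deal can be made concrete with a toy expected-utility calculation. Every number below is an illustrative assumption (the win probabilities, the committed shares, and the concave utility function); the point is just that with diminishing returns in resources, both sides prefer the hedge:

```python
import math

# Toy model of the cooperation deal sketched above. All parameters are
# made-up assumptions, not claims about reality.
p_clippy = 0.5   # assumed probability that Clippy tiles the universe
p_humans = 0.5   # assumed probability that we do
x = 0.10         # share Clippy commits to simulating post-humans if he wins
y = 0.10         # share we commit to simulating paperclips if we win

def u(share):
    """Diminishing-returns utility in the fraction of the universe you get:
    the first galaxy of paperclips matters more than the trillionth."""
    return math.sqrt(share)

# Clippy's expected utility without and with the deal.
clippy_no_deal   = p_clippy * u(1.0) + p_humans * u(0.0)
clippy_with_deal = p_clippy * u(1.0 - x) + p_humans * u(y)

print(clippy_no_deal)    # 0.5
print(clippy_with_deal)  # ≈ 0.63: the deal beats the all-or-nothing gamble
```

By symmetry the same inequality holds for our side, which is why a binding mutual commitment can be positive-sum even between agents with incompatible terminal values.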
Clippy would tile the universe with computronium doing something like recursively simulating universes tiled with paperclips.

That is an interesting claim. I would be surprised to find that Clippy was content with simulated clips. Humans seem more likely to be satisfied with simulation than paperclippers. We identify ourselves by our thoughts.
Well, no, he’s not just happy with simulated paperclips. The computronium he would tile the universe with is paperclip-shaped, and presumably it’s better to have that paperclip-computronium simulating paperclips than anything else?
presumably better to have that paperclipcomputronium simulating paperclips than anything else?

Given that Clippy makes computronium at all, sure, but computronium is probably less efficient than some other non-work-performing material at forming paperclips.
Well, you know him better than I! You have a business relationship and all.
By its own moral lights, Clippy ought to stop presenting itself as a paperclip-maximizer.

Clippy can simultaneously present in one account as a paperclip maximiser, and in another as human.
The interplay between Clippy and a fake-human account could serve to create an environment more conducive to Clippy’s end-goal.
Or, of course, Clippy might be programmed to achieve vis aims solely through honest communication. Would be an interesting, but incomplete, safeguard on an AI.
I struggle to understand the mentality that would put safeguards like that on an AI and then instruct it to maximize paperclips.
Well, let’s just be thankful they didn’t create the AI equivalent of a “Hello, world” program. That would be really annoying.
Well, it would have to be a paperclip manufacturer I suppose.
Either that or a very strange experiment.
Maybe Mythbusters?
Clippy can simultaneously present in one account as a paperclip maximiser, and in another as human.

(nods) I stand corrected… that is a far better solution from Clippy’s perspective, as it actually allows Clippy to experimentally determine which approach generates the most paperclips.
Or, of course, Clippy might be programmed to achieve vis aims solely through honest communication.

The question would then arise as to whether Clippy considers honest communication to be a paperclip-maximizing sort of thing to do, or if it’s more like akrasia—that is, a persistent cognitive distortion that leads Clippy to do things it considers non-paperclip-maximizing.
Any AGI that isn’t Friendly is UnFriendly.
I have never been sexually attracted to any entity or trait, real or fictional. People generally aren’t bigoted against me—the worst I’ve seen is people treating me like an interesting novelty, which can be somewhat condescending. So there is hope for those with nonstandard goals, at least on some level! :)