That’s a kind of giant cheesecake fallacy. Capability increases the risk caused by some people, but it also increases the power of other people to mitigate the risks. Knowing that both of these factors increase in capability doesn’t help you decide which of them wins.
And I will suggest in turn that you are guilty of the catchy fallacy name fallacy. The giant cheesecake fallacy was originally introduced as applying to those who anthropomorphize minds in general, often slipping from capability to motivation because a given motivation is common in humans.
I’m talking about a certain class of humans, and I am not suggesting that they are actually motivated to bring about bad effects. Rather, all it takes is for there to be problems where it is significantly easier to mess things up than to get them right.
I agree that this doesn’t fall clearly under the original concept of the giant cheesecake fallacy, but it points to a good, non-specious generalization of that concept, for which I gave a self-contained explanation in my comment.
Aside from that, your reply addresses issues irrelevant to my critique of your assertion. It sounds like a soldier-argument.
It’s not the giant cheesecake fallacy, but Vladimir Nesov is completely correct when he says:

Capability increases the risk caused by some people, but it also increases the power of other people to mitigate the risks. Knowing that both of these factors increase in capability doesn’t help you decide which of them wins.
Anyone arguing that existential risks are elevated by increasing intelligence must also account for the mitigating factor against existential risk that intelligence also plays.
That is rather easily accounted for, I would think. Attack is easier than defense. It is easier to build a bomb than to defend against bomb attacks; it is easier to build a laser than to defend against laser attacks—and so on.
This is true. Yet capability to attack isn’t the same thing as actually attacking.
Even at our current level of intelligence, the world is not ravaged by nuclear or biological weapons. Maybe we have just been lucky so far.
All else being equal, smarter people are probably less likely to attack with globally threatening weapons, particularly when mutually assured destruction is a factor. Under MAD, attack isn’t exactly “easy” when you are ensuring your own destruction as well. There are some crazy people with nukes, but you have to be both crazy and stupid to attack under MAD, and so far nobody has had that combination of craziness and stupidity. MAD is an IQ test that every human with nukes has passed so far (the US bombing of Japan did not happen under MAD).
I propose a study:
The participants are a sample of despots randomly assigned to two conditions. The control condition is given an IQ test and some nukes. The experimental condition is given intelligence enhancement, an IQ test, and some nukes. At the end of the experiment, scientists stationed on the moon will measure the effect of the intelligence manipulation on nuke usage.
But the US did bomb Japan. For each new existentially threatening tech, the first power to develop it won’t be bound by MAD.
And notice that it didn’t provoke a nuclear war; the human race still exists. Nuclear weapons weren’t an existential threat until multiple parties obtained them. If MAD isn’t a concern in using a given weapon, it doesn’t sound like much of an existential threat.
If MAD isn’t a concern in using a given weapon, it doesn’t sound like much of an existential threat.
I don’t understand the logic of this sentence. If I create an Earth-destroying bomb in my basement, MAD doesn’t apply, but it’s still an existential threat. Similar reasoning works for nanotech, biotech, and AI.
There could be cases when an older-generation technology can be used to assure destruction. Say, if the new tech doesn’t prevent ICBMs and nuclear explosions, both sides will still be bound by MAD.
This is a problem, but not necessarily an existential risk, which is the topic under discussion. Existential risk has a particular meaning: it must be global, whereas the US bombing of Japan was local.
If we assume that causing a risk requires a certain intelligence level and mitigating it requires a certain (higher) level, then changing the distribution of intelligence in a way that enlarges both groups will not, in general, enlarge both by the same factor.
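A toy numerical sketch of that point, purely for illustration: the normal model of “intelligence”, the two thresholds (130 to be able to cause a catastrophe, 145 to be able to mitigate one), and the +10 across-the-board shift are all assumptions invented for this example, not anything claimed in the thread.

```python
# Toy sketch: how a shift in an assumed intelligence distribution changes the
# size of two threshold-defined groups by different factors.
from scipy.stats import norm

MEAN, SD = 100.0, 15.0        # assumed baseline distribution
CAUSE_RISK = 130.0            # assumed level needed to cause a catastrophe
MITIGATE_RISK = 145.0         # assumed (higher) level needed to mitigate one
SHIFT = 10.0                  # assumed uniform enhancement of the whole population

def tail(threshold, mean):
    """Fraction of the population above `threshold` when the mean is `mean`."""
    return norm.sf(threshold, loc=mean, scale=SD)

for label, threshold in [("can cause risk", CAUSE_RISK),
                         ("can mitigate risk", MITIGATE_RISK)]:
    before = tail(threshold, MEAN)
    after = tail(threshold, MEAN + SHIFT)
    print(f"{label}: {before:.4%} -> {after:.4%} (x{after / before:.1f})")
```

Under these particular assumptions both groups grow, but by quite different factors; with a different distribution, or a different kind of change to it, the comparison can come out the other way, which is the point of the comment above.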
Obviously. A coin is also going to land on exactly one of its sides (but you don’t know which one). Why do you point this out?
That statement shows a way in which the claim that increasing the number of intelligent people will increase rather than decrease risk might be supported.
How the heck is that a giant cheesecake fallacy?
Both are special cases of the following fallacy. A certain factor increases the strength of some possible positive effect, and also the strength of some possible negative effect, with the consequences of these effects taken in isolation being mutually exclusive. An argument is then given that since this factor increases the positive effect (negative effect), the consequences are going to be positive (negative), and therefore the factor itself is instrumentally desirable (undesirable). The argument doesn’t recognize the other side of the possible consequences, ignoring the possibility that the opposite effect is going to dominate instead.
Maybe it has another existing name; the analogy seems useful.
The giant cheesecake fallacy is about the jump from capability to motive, usually in the presence of anthropomorphism or some other reason to assume the preference without thinking.
This sounds more like a generic problem of technophilia (or technophobia): mostly just confirmation bias or standard filtering of arguments. It probably does need a name, though, like “Appeal to Selected Possibilities” or something like that.