I think many of the most pressing existential risks (e.g. nanotech, biotech and AI accidents) come from the likely actions of moderately intelligent, well-intentioned, and rational humans (compared to the very low baseline). If that is right then increasing the number of such people will increase rather than decrease risk.
And also, this argument is vulnerable to the reversal test. If you think that higher IQ increases existential risk, then you think that lower IQ decreases it. Presumably you don’t believe that putting lead in the water supply would decrease existential risks?
If you decreased the intelligence of everyone to 100 IQ points or lower, I think overall quality of life would decrease but that it would also drastically decrease existential risks.
Edit: On second thought, now that I think about nuclear and biological weapons, I might want to take that back while pointing out that these large threats were predominantly created by quite intelligent, well-intentioned and rational people.
If you decreased the intelligence of everyone to 100 IQ points or lower, that would probably eliminate all hope for a permanent escape from existential risk. Risk in this scenario might be lower per time unit in the near future, but total risk over all time would approach 100%.
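The point about total risk is just compounding: any constant nonzero extinction probability per period drives long-run survival toward zero. A minimal sketch (the per-period rates are purely illustrative, not estimates):

```python
# Survival probability after n periods with a constant per-period
# extinction risk p is (1 - p) ** n, which tends to 0 as n grows.
def survival(p: float, n: int) -> float:
    return (1 - p) ** n

# Illustrative numbers only: even a "low" 0.1% risk per century
# leaves only about a 37% chance of surviving 1000 centuries,
# while a 1% risk per century leaves essentially none.
print(survival(0.001, 1000))  # ~0.37
print(survival(0.01, 1000))   # ~0.00004
```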
Consider a world without nuclear weapons. What would there be to prevent World War I ad infinitum? As a male of conscriptable age, I would consider such a scenario to be so bad as to be not much better than global thermonuclear war.
Why do you think it’s the nuclear weapons that keep the current peace, and not the memory of past wars, and more generally/recently cultural moral progress? This is related to your prediction in the resource depletion scenario.
There’s little evidence for the theory that the threat of global thermonuclear war creates global peace.
Even during the world wars, the percentage of people who died of violence seems vastly smaller than in typical hunter-gatherer societies.
There were long periods of peace before, most notably 1815-1914, when military technology was essentially equivalent to that of World War I. Before that, the 18th century was relatively bloodless too.
One of the countries with massive nuclear weapons stockpiles suffered total collapse. This might happen again in the future, in the near future most likely to Pakistan or North Korea, but in the longer term to any country.
Countries with nuclear weapons have engaged in plenty of conventional wars, mostly on a smaller scale, and have fought each other by proxy.
Also, on a more pragmatic and personal level, increasing average human intelligence increases the probability of immortality and other “surprisingly good” outcomes of humans or other intelligences optimizing our world, such as universal beauty, health, happiness and better quality of life. This needn’t be through superintelligence, it could just be through the intelligence/wealth production correlation.
I don’t see why this being an epistemic probe makes risk per near future time unit more relevant than total risk integrated over time.
The whole thing is kind of academic, because for any realistic policy there’d be specific groups who’d be made smarter than others, and risk effects depend on what those groups are.
You seem to be assuming that the relation between IQ and risk must be monotonic.
I think existential risk mitigation is better pursued by helping the most intelligent and rational efforts than by trying to raise the average intelligence or rationality.
This claim is false. The reversal test does not require the function risk(IQ) to be monotonic; it only requires that the function be locally monotonic around the current IQ value of 100.
I think many of the most pressing existential risks (e.g. nanotech, biotech and AI accidents) come from the likely actions of moderately intelligent, well-intentioned, and rational humans (compared to the very low baseline).
Could you elaborate a bit more on why you think this? Are there any historical examples you are thinking of?
Could you answer my first question, too? Which are the intelligent, well-intentioned, and relatively rational humans you are thinking of? Scientists developing nanotech, biotech, and AI? Policy-makers? Who? How would an example disaster scenario unfold in your view?
Are you saying that the very development of nanotech, biotech, and AI would create an elevated level of existential risk? If so, I would agree. A common counter-argument I’ve heard is that whether we like it or not, someone is going to make progress in at least one of those areas, and that we should try to be the first movers rather than someone less scrupulous.
In terms of safety, using AI as an example:
World with no AI > World where relatively scrupulous people develop an AI > World where unscrupulous people develop an AI
Think about how the world would be if Russia or Germany had developed nukes before the US.
Global nuclear warfare and biological weapons would be the best candidates I can think of.
Intelligence did allow the development of nukes. Yet given that we already have them, greater global intelligence would probably decrease the risk of their being used.
Let’s assume, for the sake of argument, that the mere development of future nanotech, biotech, and AI doesn’t go horribly wrong and create an existential disaster. If so, then the existential risk will lie in how these technologies are used.
I will suggest that there is a certain threshold of intelligence greater than ours where everyone is smart enough not to pull globally harmful stunts with nuclear weapons, biotech, nanotech, and AI, and/or smart enough to create safeguards so that small numbers of intelligent crazy people can’t do so either. The trick will be getting to that level of intelligence without mishap.
I was reading the Wikipedia Cuban Missile Crisis article, and it does seem that intelligence helped avert catastrophe. There are multiple points where things could have gone wrong but didn’t, due to people being smart enough not to do something rash. I suggest that even greater intelligence might ensure that situations like this never develop, or are resolved when they do.
Here are some interesting parts:
That morning, a U-2 piloted by USAF Major Rudolf Anderson, departed its forward operating location at McCoy AFB, Florida, and at approximately 12:00 p.m. Eastern Standard Time, was shot down by an S-75 Dvina (NATO designation SA-2 Guideline) SAM launched from an emplacement in Cuba. The stress in negotiations between the USSR and the U.S. intensified, and only later was it learned that the decision to fire was made locally by an undetermined Soviet commander on his own authority.
If this guy had been smarter, maybe this mistake would never have been made.
We had to send a U-2 over to gain reconnaissance information on whether the Soviet missiles were becoming operational. We believed that if the U-2 was shot down that—the Cubans didn’t have capabilities to shoot it down, the Soviets did—we believed if it was shot down, it would be shot down by a Soviet surface-to-air-missile unit, and that it would represent a decision by the Soviets to escalate the conflict. And therefore, before we sent the U-2 out, we agreed that if it was shot down we wouldn’t meet, we’d simply attack. It was shot down on Friday [...]. Fortunately, we changed our mind, we thought “Well, it might have been an accident, we won’t attack.” Later we learned that Khrushchev had reasoned just as we did: we send over the U-2, if it was shot down, he reasoned we would believe it was an intentional escalation. And therefore, he issued orders to Pliyev, the Soviet commander in Cuba, to instruct all of his batteries not to shoot down the U-2.
Luckily, Khrushchev and McNamara were smart enough not to escalate. Their intelligence protected against the risk caused by the stupid Soviet commander.
Arguably the most dangerous moment in the crisis was unrecognized until the Cuban Missile Crisis Havana conference in October 2002, attended by many of the veterans of the crisis, at which it was learned that on October 26, 1962 the USS Beale had tracked and dropped practice depth charges on the B-39, a Soviet Foxtrot-class submarine which was armed with a nuclear torpedo. Running out of air, the Soviet submarine was surrounded by American warships and desperately needed to surface. An argument broke out among three officers on the B-39, including submarine captain Valentin Savitsky, political officer Ivan Semonovich Maslennikov, and chief of staff of the submarine flotilla, Commander Vasiliy Arkhipov. An exhausted Savitsky became furious and ordered that the nuclear torpedo on board be made combat ready. Accounts differ about whether Commander Arkhipov convinced Savitsky not to make the attack, or whether Savitsky himself finally concluded that the only reasonable choice left open to him was to come to the surface.[29]
At the Cuban Missile Crisis Havana conference, Robert McNamara admitted that nuclear war had come much closer than people had thought. Thomas Blanton, director of the National Security Archive, said that “a guy called Vasili Arkhipov saved the world.”
Basically, a stupid dude on the sub wanted to use the nuclear torpedo, but a smart dude stopped him.
Yes, existential risk ultimately came from the intelligent developers of nuclear weapons. Yet once the cat was out of the bag, existential risks came from people being stupid, and those risks were counteracted by people being smart. I would expect that more intelligence would be even more helpful in potential disaster situations like this.
The real risk seems to be from weapons developed by smart people falling into the hands of stupid people. Yet if even the stupidest people were smart enough not to play around with mutually assured destruction, then the world would be a safer place.
I agree with Annoyance here. My guess is that a higher IQ may help the individuals in the situations Hughristik describes, but this is not the type of evidence we should consider very convincing. In this example, I would guess that differences in the individual’s desire and ability to think through the consequences of their actions is far more important than differences in their IQ. This may be explained by the incentives facing each individual.
In this example, I would guess that differences in the individual’s desire and ability to think through the consequences of their actions is far more important than differences in their IQ.
This may be true, but “ability to think through the consequences of actions” is probably not independent of general intelligence. People with higher g are better at thinking through everything. This is what the research I linked to (and much that I didn’t link to) shows.
This graph from one of the articles shows that people with higher IQ are less likely to be unemployed, have illegitimate children, live in poverty, or be incarcerated. These life outcomes seem potentially related to considering consequences and planning for the long-term. If intelligence is related to positive individual life outcomes, then it would be unsurprising if it is also related to positive group or world outcomes.
In the case of avoiding use of nuclear weapons, there is probably only a certain threshold of intelligence necessary. Yet from the historical example of the Cuban Missile Crisis, the thinking involved wasn’t always trivial:
We had to send a U-2 over to gain reconnaissance information on whether the Soviet missiles were becoming operational. We believed that if the U-2 was shot down that—the Cubans didn’t have capabilities to shoot it down, the Soviets did—we believed if it was shot down, it would be shot down by a Soviet surface-to-air-missile unit, and that it would represent a decision by the Soviets to escalate the conflict. And therefore, before we sent the U-2 out, we agreed that if it was shot down we wouldn’t meet, we’d simply attack. It was shot down on Friday [...]. Fortunately, we changed our mind, we thought “Well, it might have been an accident, we won’t attack.” Later we learned that Khrushchev had reasoned just as we did: we send over the U-2, if it was shot down, he reasoned we would believe it was an intentional escalation. And therefore, he issued orders to Pliyev, the Soviet commander in Cuba, to instruct all of his batteries not to shoot down the U-2.
Both sides were constantly guessing the reasoning of the other.
In short, we do have reasons to suspect a relationship between intelligence and restraint with existentially risky technologies. People with higher intelligence don’t merely have greater “book smarts,” they have better cognitive performance in general and better life and career outcomes on an individual level, which may also extrapolate to a group/world level. Will more research be necessary to make us confident in this notion? Of course, but our current knowledge of intelligence should establish it as probable.
Furthermore, people with higher intelligence probably have a better ability to guess the moves of other people with existentially risky technologies and navigate Prisoners’ Dilemmas of mutually assured destruction, as we see in the historical example of the Cuban Missile Crisis. We don’t have rigorous scientific evidence for this point yet, though I don’t think it’s a stretch, and hopefully we will never have a large sample size of existential crises.
I’m not sure we have serious disagreements on this. Research on intelligence enhancement sounds like a good idea, for many reasons. I’m just choosing to emphasize that there are probably other much more effective approaches to reducing existential risks, and it’s by no means impossible that intelligence enhancement could increase existential risks.
When I said “smartness,” I was thinking of general intelligence, the g-factor. As it happens, g does have a high correlation with IQ (0.8 as I recall, though I can’t find the source right now). g is a highly general factor related to better performance in many areas including career and general life tasks, not just in academic settings (see p. 342 for a summary of research), so we should hypothesize that nuclear missile restraint is related to g also.
As it happens, g does have a high correlation with IQ
Someone who knows the details of this is welcome to correct me if I’m wrong, but as I understand it g is a hypothetical construct derived via factor analysis on the components of IQ tests, so it will necessarily have a high correlation with those tests (provided the results of the components are themselves correlated).
Correct. g is the degree to which performances on various subtypes of IQ tests are statistically correlated—the degree that performance on one predicts performance on another.
It’s a very crude concept, and one that has not been reliably identified as being detectable without use of IQ tests, although several neurophysiologic properties have been suggested as indicating g.
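The construction being described can be shown in a few lines: simulate subtests that share one latent factor, and the leading eigenvector of their correlation matrix recovers it, along with the necessarily high correlation. This uses PCA on the correlation matrix as a simple stand-in for factor analysis, with made-up loadings:

```python
import numpy as np

# Toy illustration of how g is extracted: simulate subtest scores that
# share one latent factor, then recover it from the correlation matrix.
rng = np.random.default_rng(0)
n = 5000
g = rng.standard_normal(n)                    # latent general factor
loadings = np.array([0.8, 0.7, 0.6, 0.75])    # hypothetical loadings
scores = np.outer(g, loadings) + rng.standard_normal((n, 4)) * 0.5

corr = np.corrcoef(scores, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)       # ascending eigenvalues
top_share = eigvals[-1] / eigvals.sum()       # variance explained by "g"

# Estimated g: projection onto the leading eigenvector. Because the
# subtests were built to correlate, the recovered factor correlates
# strongly with the latent one, as the comment above predicts.
g_hat = scores @ eigvecs[:, -1]
r = abs(np.corrcoef(g, g_hat)[0, 1])
print(top_share, r)
```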
That’s a kind of giant cheesecake fallacy. Capability increases risk caused by some people, but it also increases the power of other people to mitigate the risks. Knowing about the increase in the capability of these factors doesn’t help you in deciding which of them wins.
And I will suggest in turn that you are guilty of the catchy fallacy name fallacy. The giant cheesecake fallacy was originally introduced as applying to those who anthropomorphize minds in general, often slipping from capability to motivation because a given motivation is common in humans.
I’m talking about a certain class of humans and not suggesting that they are actually motivated to bring about bad effects. Rather all it takes is for there to be problems where it is significantly easier to mess things up than to get it right.
I agree, this doesn’t fall clearly under the original concept of giant cheesecake fallacy, but it points to a good non-specious generalization of that concept, for which I gave a self-contained explanation in my comment.
Aside from that, your reply addresses issues irrelevant to my critique of your assertion. It sounds like a soldier-argument.
It’s not the giant cheesecake fallacy, but Vladimir Nesov is completely correct when he says:
Capability increases risk caused by some people, but it also increases the power of other people to mitigate the risks. Knowing about the increase in the capability of these factors doesn’t help you in deciding which of them wins.
Anyone arguing that existential risks are elevated by increasing intelligence must also account for the mitigating factor against existential risk that intelligence also plays.
That is rather easily accounted for, I would think. Attack is easier than defense. It is easier to build a bomb than to defend against bomb attacks; it is easier to build a laser than to defend against laser attacks—and so on.
This is true. Yet capability to attack isn’t the same thing as actually attacking.
Even at our current level of intelligence, the world is not ravaged by nuclear weapons or biological weapons. Maybe we have just been lucky so far.
All else being equal, smarter people are probably less likely to attack with globally threatening weapons, particularly when mutually assured destruction is a factor. In cases of MAD, attack isn’t exactly “easy” when you are ensuring your own destruction as well. There are some crazy people with nukes, but you have to be crazy and stupid to attack in the case of MAD, and nobody so far has that combination of craziness and stupidity. MAD is an IQ test that all humans with nukes have passed so far (the US bombing Japan was not under MAD).
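The claim that attacking under MAD fails a basic payoff comparison can be put as a toy two-player game (the payoff numbers are invented for illustration): when retaliation is assured, “attack” is strictly worse than “hold” against either opponent move.

```python
# Toy symmetric MAD game. Payoffs are illustrative, not estimates.
# Keys: (our move, their move); values: (our payoff, their payoff).
HOLD, ATTACK = "hold", "attack"
payoffs = {
    (HOLD, HOLD): (0, 0),          # status quo
    (HOLD, ATTACK): (-100, -90),   # we are hit; retaliation hits them
    (ATTACK, HOLD): (-90, -100),
    (ATTACK, ATTACK): (-110, -110),
}

def best_reply(their_move):
    """Our payoff-maximizing move against a fixed opponent move."""
    return max((HOLD, ATTACK), key=lambda m: payoffs[(m, their_move)][0])

# With assured retaliation, holding is the best reply to everything,
# so mutual restraint is the only equilibrium for payoff-comparers.
print(best_reply(HOLD), best_reply(ATTACK))  # hold hold
```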
I propose a study:
The participants are a sample of despots randomly assigned to two conditions. The control condition is given an IQ test and some nukes. The experimental condition is given intelligence enhancement, an IQ test, and some nukes. At the end of the experiment, scientists stationed on the moon will measure the effect of the intelligence manipulation on nuke usage.
And notice that it didn’t provoke a nuclear war, and the human race still exists. Nuclear weapons weren’t an existential threat until multiple parties obtained them. If MAD isn’t a concern in using a given weapon, it doesn’t sound like much of an existential threat.
If MAD isn’t a concern in using a given weapon, it doesn’t sound like much of an existential threat.
I don’t understand the logic of this sentence. If I create an Earth-destroying bomb in my basement, MAD doesn’t apply but it’s still an existential threat. Similar reasoning works for nanotech, biotech and AI.
There could be cases when an older-generation technology can be used to assure destruction. Say, if the new tech doesn’t prevent ICBMs and nuclear explosions, both sides will still be bound by MAD.
This is a problem, but not necessarily an existential risk, which is the topic under discussion. Existential risk has a particular meaning: it must be global, whereas the US bombing Japan was local.
If we assume that causing risk requires a certain intelligence level and mitigating risks requires a certain (higher) level, changing the distribution of intelligence in a way that enlarges both groups will not, in general, enlarge both by the same factor.
That statement shows a way in which the claim that increasing the number of intelligent people will increase rather than decrease risk might be supported.
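A toy calculation with a normal IQ distribution illustrates why the two groups need not scale by the same factor; the thresholds (130 for causing risk, 145 for mitigating it) are purely hypothetical placeholders:

```python
from math import erf, sqrt

def frac_above(threshold: float, mean: float, sd: float = 15.0) -> float:
    """Fraction of a normal(mean, sd) population above a threshold."""
    z = (threshold - mean) / sd
    return 0.5 * (1 - erf(z / sqrt(2)))

# Shift the population mean from 100 to 110 and compare how much each
# tail grows. The further-out tail grows by a larger multiple, so the
# two groups are not enlarged by the same factor.
growth_cause = frac_above(130, 110) / frac_above(130, 100)
growth_fix = frac_above(145, 110) / frac_above(145, 100)
print(growth_cause, growth_fix)
```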
Both are special cases of the following fallacy. A certain factor increases the strength of some possible positive effect, and also the strength of some possible negative effect, with the consequences of these effects taken in isolation being mutually exclusive. An argument is then given that since this factor increases the positive effect (negative effect), the consequences are going to be positive (negative), and therefore the factor itself is instrumentally desirable (undesirable). The argument doesn’t recognize the other side of the possible consequences, ignoring the possibility that the opposite effect is going to dominate instead.
Maybe it has another existing name; the analogy seems useful.
Giant cheesecake is about the jump from capability to motive, usually in the presence of anthropomorphism or other reasons to assume the preference without thinking.
This sounds more like a generic problem of technophilia (phobia) - mostly just confirmation bias or standard filtering of arguments. It probably does need a name, though, like Appeal to Selected Possibilities or something like that.
believing lead in the water supply would decrease existential risks != advocating putting lead in the water supply
See correction
List of wars by death toll is very interesting.
One of the top ten deadliest wars happened just a few years ago. So even accepting the premise that the thermonuclear threat prevents war, we face either wide proliferation, or it won’t really do much to stop wars.
I had exactly the same thought.
That’s a good point, but it would be more relevant if this were a policy proposal rather than an epistemic probe.
To answer your second question: No, there aren’t any historical examples I am thinking of. Do you find many historical examples of existential risks?
Edit: Global nuclear warfare and biological weapons would be the best candidates I can think of.
What relationship does the kind of ‘smartness’ possessed by the individuals in question have with IQ?
I don’t think there are good reasons for thinking they’re one and the same.
What about the inherent incentive that motivates people even in the absence of strong external factors?
I’m not sure I understand you. Are you referring to the distinction between intrinsic and extrinsic motivation?
More like a distinction between different types of intrinsic factors.
I still have no idea what you’re talking about and how it relates to my comment.
When I said “smartness,” I was thinking of general intelligence, the g-factor. As it happens, g does have a high correlation with IQ (0.8 as I recall, though I can’t find the source right now). g is a highly general factor related to better performance in many areas including career and general life tasks, not just in academic settings (see p. 342 for a summary of research), so we should hypothesize that nuclear missile restraint is related to g also.
Someone who knows the details of this is welcome to correct me if I’m wrong, but as I understand it g is a hypothetical construct derived via factor analysis on the components of IQ tests, so it will necessarily have a high correlation with those tests (provided the results of the components are themselves correlated).
Correct. g is the degree to which performances on various subtypes of IQ tests are statistically correlated—the degree that performance on one predicts performance on another.
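A quick simulation can make this point concrete (a toy sketch, not a psychometric model; the loadings and noise level here are arbitrary assumptions): if subtest scores all load on one latent factor, then a factor extracted from those same scores is bound to correlate strongly with the total test score.

```python
# Toy illustration: when subtest scores share a latent factor, the first
# principal component extracted from them (the "g" analogue) necessarily
# correlates highly with the total score built from the same subtests.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_subtests = 5000, 8

g = rng.normal(size=n_people)                       # latent general factor
loadings = rng.uniform(0.5, 0.9, size=n_subtests)   # arbitrary loadings on g
noise = rng.normal(size=(n_people, n_subtests))
scores = g[:, None] * loadings + 0.6 * noise        # simulated subtest scores

# Standardize, then take the first principal component as the extracted factor.
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
_, _, vt = np.linalg.svd(z, full_matrices=False)
pc1 = z @ vt[0]

total = scores.sum(axis=1)                          # "full-scale IQ" analogue
r = abs(np.corrcoef(pc1, total)[0, 1])
print(f"correlation between extracted factor and total score: {r:.2f}")
```

The high correlation here is guaranteed by construction, which is exactly the point: it tells us the subtests hang together statistically, not that the factor was validated independently of the tests.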
It’s a very crude concept, and one that has not been reliably detected without the use of IQ tests, although several neurophysiological properties have been suggested as indicators of g.
That’s a kind of giant cheesecake fallacy. Capability increases the risk caused by some people, but it also increases the power of other people to mitigate those risks. Knowing that capability strengthens both factors doesn’t help you decide which of them wins.
And I will suggest in turn that you are guilty of the catchy fallacy name fallacy. The giant cheesecake fallacy was originally introduced as applying to those who anthropomorphize minds in general, often slipping from capability to motivation because a given motivation is common in humans.
I’m talking about a certain class of humans, and not suggesting that they are actually motivated to bring about bad effects. Rather, all it takes is for there to be problems where it is significantly easier to mess things up than to get things right.
I agree, this doesn’t fall clearly under the original concept of giant cheesecake fallacy, but it points to a good non-specious generalization of that concept, for which I gave a self-contained explanation in my comment.
Aside from that, your reply addresses issues irrelevant to my critique of your assertion. It sounds like an arguments-as-soldiers move.
It’s not the giant cheesecake fallacy, but Vladimir Nesov is completely correct when he says:
Anyone arguing that existential risks are elevated by increasing intelligence must also account for the mitigating factor against existential risk that intelligence also plays.
That is rather easily accounted for, I would think. Attack is easier than defense. It is easier to build a bomb than to defend against bomb attacks; it is easier to build a laser than to defend against laser attacks—and so on.
This is true. Yet capability to attack isn’t the same thing as actually attacking.
Even at our current level of intelligence, the world is not ravaged by nuclear or biological weapons. Maybe we have just been lucky so far.
All else being equal, smarter people are probably less likely to attack with globally threatening weapons, particularly when mutually assured destruction is a factor. In cases of MAD, attack isn’t exactly “easy” when you are ensuring your own destruction as well. There are some crazy people with nukes, but you have to be crazy and stupid to attack in the case of MAD, and nobody so far has had that combination of craziness and stupidity. MAD is an IQ test that all humans with nukes have passed so far (the US bombing of Japan was not under MAD).
I propose a study:
The participants are a sample of despots randomly assigned to two conditions. The control condition is given an IQ test and some nukes. The experimental condition is given intelligence enhancement, an IQ test, and some nukes. At the end of the experiment, scientists stationed on the moon will measure the effect of the intelligence manipulation on nuke usage.
But the US did bomb Japan. For each new existentially threatening tech, the first power to develop it won’t be bound by MAD.
And notice that it didn’t provoke a nuclear war, and the human race still exists. Nuclear weapons weren’t an existential threat until multiple parties obtained them. If MAD isn’t a concern in using a given weapon, it doesn’t sound like much of an existential threat.
If MAD isn’t a concern in using a given weapon, it doesn’t sound like much of an existential threat.
I don’t understand the logic of this sentence. If I build an Earth-destroying bomb in my basement, MAD doesn’t apply, but it’s still an existential threat. Similar reasoning applies to nanotech, biotech, and AI.
There could be cases when an older-generation technology can be used to assure destruction. Say, if the new tech doesn’t prevent ICBMs and nuclear explosions, both sides will still be bound by MAD.
This is a problem, but not necessarily an existential risk, which is the topic under discussion. Existential risk has a particular meaning: it must be global, whereas the US bombing Japan was local.
If we assume that causing risk requires a certain intelligence level and mitigating risks requires a certain (higher) level, changing the distribution of intelligence in a way that enlarges both groups will not, in general, enlarge both by the same factor.
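A toy calculation illustrates this (the thresholds and the normal-distribution model are made-up assumptions, not empirical claims): shifting the whole distribution upward multiplies the two tail groups by different factors, with the higher-threshold group growing faster.

```python
# Toy illustration: under a normal IQ distribution, raising the mean does not
# scale the "can cause risk" and "can mitigate risk" groups by the same factor.
from math import erf, sqrt

def frac_above(threshold, mean=100.0, sd=15.0):
    """Fraction of a normal(mean, sd) population above the threshold."""
    return 0.5 * (1 - erf((threshold - mean) / (sd * sqrt(2))))

CAUSE, MITIGATE = 130, 145   # hypothetical ability thresholds

for shift in (5, 10):
    cause_growth = frac_above(CAUSE, mean=100 + shift) / frac_above(CAUSE)
    mitigate_growth = frac_above(MITIGATE, mean=100 + shift) / frac_above(MITIGATE)
    print(f"mean +{shift}: causers grow x{cause_growth:.1f}, "
          f"mitigators grow x{mitigate_growth:.1f}")
```

With these (invented) thresholds the rarer, higher-threshold group grows by a larger factor, which is the general point: the two groups need not scale together.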
Obviously. A coin is also going to land on exactly one of its sides (but you don’t know which one). Why do you point this fact out?
That statement shows a way in which the claim that increasing the number of intelligent people will increase rather than decrease risk might be supported.
How the heck is that a giant cheesecake fallacy?
Both are special cases of the following fallacy. A certain factor increases the strength of some possible positive effect, and also the strength of some possible negative effect, with the consequences of these effects taken in isolation being mutually exclusive. An argument is then given that since this factor increases the positive effect (negative effect), the consequences are going to be positive (negative), and therefore the factor itself is instrumentally desirable (undesirable). The argument doesn’t recognize the other side of the possible consequences, ignoring the possibility that the opposite effect is going to dominate instead.
Maybe it has another existing name; the analogy seems useful.
Giant cheesecake is about the jump from capability to motive, usually in the presence of anthropomorphism or other reasons to assume the preference without thinking.
This sounds more like a generic problem of technophilia (or technophobia): mostly just confirmation bias or standard filtering of arguments. It probably does need a name, though, like Appeal to Selected Possibilities or something like that.