I personally am optimistic about the world’s elites navigating AI risk as well as possible, subject to the inherent human limitations that I would expect everybody to have, and to the inherent risk. Some points:
I’ve been surprised by people’s ability to avert bad outcomes. Only two nuclear weapons have been used since nuclear weapons were developed, despite the fact that there are 10,000+ nuclear weapons around the world. Political leaders are assassinated very infrequently relative to how often one might expect a priori.
AI risk is a Global Catastrophic Risk in addition to being an x-risk. Therefore, even people who don’t care about the far future will be motivated to prevent it.
The people with the most power tend to be the most rational people, and the effect size can be expected to increase over time (barring disruptive events such as economic collapses, supervolcanoes, climate change tail risk, etc.). The most rational people are the people who are most likely to be aware of and to work to avert AI risk. Here I’m blurring “near mode instrumental rationality” and “far mode instrumental rationality,” but I think there’s a fair amount of overlap between the two things. For example, China is pushing hard on nuclear energy and on renewable energies, even though they won’t be needed for years.
Availability of information is increasing over time. At the time of the Dartmouth conference, information about the potential dangers of AI was not very salient, now it’s more salient, and in the future it will be still more salient.
In the Manhattan project, the “will bombs ignite the atmosphere?” question was analyzed and dismissed without much (to our knowledge) double-checking. The amount of risk checking per hour of human capital available can be expected to increase over time. In general, people enjoy tackling important problems, and risk checking is more important than most of the things that people would otherwise be doing.
I should clarify that with the exception of my first point, the arguments that I give are arguments that humanity will address AI risk in a near-optimal way – not necessarily that AI risk is low.
For example, it could be that people correctly recognize that building an AI will result in human extinction with probability 99%, and so implement policies to prevent it, but that sometime over the next 10,000 years, these policies will fail, and AI will kill everyone.
But the actionable question is how much we can reduce the probability of AI risk, and if by default people are going to do the best that one could hope for, then we can’t reduce the probability substantially.
What?
Rationality is systematized winning. Chance plays a role, but over time it’s playing less and less of a role, because of more efficient markets.
There is lots of evidence that people in power are the most rational, but there is a huger prior to overcome.
Among people for whom power has an unsatiated major instrumental or intrinsic value, the most rational tend to have more power, but I don’t think that very rational people are common, and I think that they are less likely to want more power than they have.
Particularly since the previous generation of power-holders used different factors when they selected their successors.
I agree with all of this. I think that “people in power are the most rational” was much less true in 1950 than it is today, and that it will be much more true in 2050.
Actually that’s a badly titled article. At best “Rationality is systematized winning” applies to instrumental, not epistemic, rationality. And even for that you can’t make rationality into systematized winning by defining it so. Either that’s a tautology (whatever systematized winning is, we define that as “rationality”) or it’s an empirical question. I.e. does rationality lead to winning? Looking around the world at “winners”, that seems like a very open question.
And now that I think about it, it’s also an empirical question whether there even is a system for winning. I suspect there is—that is, I suspect that there are certain instrumental practices one can adopt that are generically useful for achieving a broad variety of life goals—but this too is an empirical question we should not simply assume the answer to.
I agree that my claim isn’t obvious. I’ll try to get back to you with detailed evidence and arguments.
The problem is that politicians have a lot to gain from really believing the stupid things they have to say to gain and hold power.
To quote an old thread:
Every politician I’ve ever met has in fact been a completely sincere person who considers themselves to do what they do with the aim of doing good in the world. Even the ones that any outsider would say “haha, leave it out” to the notion. Every politician is completely sincere. I posit that this is a much more frightening notion than the comfort of a conspiracy theory.
Cf. Stephen Pinker: historians who’ve studied Hitler tend to come away convinced he really believed he was a good guy.
To get the fancy explanation of why this is the case, see “Trivers’ Theory of Self-Deception.”
It’s not much evidence, but the two earliest scientific investigations of existential risk I know of, LA-602 and the RHIC Review, seem to show movement in the opposite direction: “LA-602 was written by people curiously investigating whether a hydrogen bomb could ignite the atmosphere, and the RHIC Review is a work of public relations.”
Perhaps the trend you describe is accurate, but I also wouldn’t be surprised to find out (after further investigation) that scientists are now increasingly likely to avoid serious analysis of real risks posed by their research, since they’re more worried than ever before about funding for their field (or for some other reason). The AAAI Presidential Panel on Long-Term AI Futures was pretty disappointing, and, like the RHIC Review, seems like pure public relations, with a pre-determined conclusion and no serious risk analysis.
Why would a good AI policy be one which takes as a model a universe where world-destroying weapons in the hands of incredibly unstable governments controlled by glorified tribal chieftains is not that bad of a situation? Almost but not quite destroying ourselves does not reflect well on our abilities. The Cold War as a good example of averting bad outcomes? Eh.
This is assuming that people understand what makes an AI so dangerous—calling an AI a global catastrophic risk isn’t going to motivate anyone who thinks you can just unplug the thing (and even worse if it does motivate them, since then you have someone who is running around thinking the AI problem is trivial).
I think you’re just blurring “rationality” here. The fact that someone is powerful is evidence that they are good at gaining a reputation in their specific field, but I don’t see how this is evidence for rationality as such (and if we are redefining it to include dictators and crony politicians, I don’t know what to say), and especially of the kind needed to properly handle AI—and claiming evidence for future good decisions related to AI risk because of domain expertise in entirely different fields is quite a stretch. Believe it or not, most people are not mathematicians or computer scientists. Most powerful people are not mathematicians or computer scientists. And most mathematicians and computer scientists don’t give two shits about AI risk—if they don’t think it worthy of attention, why would someone who has no experience with these kinds of issues suddenly grab it out of the space of all possible ideas he could possibly be thinking about? Obviously they aren’t thinking about it now—why are you confident this won’t be the case in the future? Thinking about AI requires a rather large conceptual leap—“rationality” is necessary but not sufficient, so even if all powerful people were “rational” it doesn’t follow that they can deal with these issues properly or even single them out as something to meditate on, unless we have a genius orator I’m not aware of. It’s hard enough explaining recursion to people who are actually interested in computers. And it’s not like we can drop a UFAI on a country to get people to pay attention.
It seems like you are claiming that AI safety does not require a substantial shift in perspective (I’m taking this as the reason why you are optimistic, since my cynicism tells me that expecting a drastic shift is a rather improbable event); rather, we can just keep chugging along because nice things can be “expected to increase over time”, and this somehow will result in the kind of society we need. These statements always confuse me; one usually expects to be in a better position to solve a problem 5 years down the road, but trying to describe that advantage in terms of out-of-thin-air claims about incremental changes in human behavior seems like a waste of space unless there is some substance behind it. They only seem useful when one has reached that 5-year checkpoint and can reflect on the current context in detail—for example, it’s not clear to me that the increasing availability of information is always a net positive for AI risk (since it could be the case that potential dangers are more salient as a result of unsafe AI research—the more dangers uncovered could even act as an incentive for more unsafe research depending on the magnitude of positive results and the kind of press received. But of course the researchers will make the right decision, since people are never overconfident...). So it comes off (to me) as a kind of sleight of hand where it feels like a point for optimism, a kind of “Yay Open Access Knowledge is Good!” applause light, but it could really go either way.
Also I really don’t know where you got that last idea—I can’t imagine that most people would find AI safety more glamorous than, you know, actually building a robot. There’s a reason why it’s hard to get people to write unit tests, and why software projects get bloated and abandoned. Something like what Haskell is to software would be optimal. I don’t think it’s a great idea to rely on the conscientiousness of people in this case.
Thanks for engaging.
The point is that I would have expected things to be worse, and that I imagine that a lot of others would have as well.
I think that people will understand what makes AI dangerous. The arguments aren’t difficult to understand.
Broadly, the most powerful countries are the ones with the most rational leadership (where here I mean “rational with respect to being able to run a country,” which is relevant), and I expect this trend to continue.
Also, wealth is skewing toward more rational people over time, and wealthy people have political bargaining power.
Political leaders have policy advisors, and policy advisors listen to scientists. I expect that AI safety issues will percolate through the scientific community before long.
I agree that AI safety requires a substantial shift in perspective — what I’m claiming is that this change in perspective will occur organically substantially before the creation of AI is imminent.
You don’t need “most people” to work on AI safety. It might suffice for 10% or fewer of the people who are working on AI to work on safety. There are lots of people who like to be big fish in a small pond, and this will motivate some AI researchers to work on safety even if safety isn’t the most prestigious field.
If political leaders are sufficiently rational (as I expect them to be), they’ll give research grants and prestige to people who work on AI safety.
Things were a lot worse than everyone knew: in the 1950s, Russia almost invaded Yugoslavia, which, according to newly declassified NSA journals, would have triggered a war. The Cuban Missile Crisis could easily have gone hot, and several times early warning systems were triggered by accident. Of course, estimating what could have happened is quite hard.
I agree that there were close calls. Nevertheless, things turned out better than I would have guessed, and indeed, probably better than a large fraction of military and civilian people would have guessed.
World War Three seems certain to significantly decrease the human population. From my point of view, I can’t rule out anthropic reasoning as the explanation for why there wasn’t such a war before I was born.
We still get people occasionally who argue the point while reading through the Sequences, and that’s a heavily filtered audience to begin with.
There’s a difference between “sufficiently difficult so that a few readers of one person’s exposition can’t follow it” and “sufficiently difficult so that after being in the public domain for 30 years, the arguments won’t have been distilled so as to be accessible to policy makers.”
I don’t think that the arguments are any more difficult than the arguments for anthropogenic global warming. One could argue that the difficulty of these arguments has been a limiting factor in climate change policy, but I believe that by far the dominant issue has been misaligned incentives, though I’d concede that this is not immediately obvious.
Only two nuclear weapons have been used since nuclear weapons were developed...
And I have the impression that relatively low-ranking people helped produce this outcome by keeping information from their superiors. Petrov chose not to report a malfunction of the early warning system until he could prove it was a malfunction. People during the Korean War and possibly Vietnam seem not to have passed on the fact that pilots from Russia or America were cursing in their native languages over the radio (and the other side was hearing them).
This in fact is part of why I don’t think we ‘survived’ through the anthropic principle. Someone born after the end of the Cold War could look back at the apparent causes of our survival. And rather than seeing random events, or no causes at all, they would see a pattern that someone might have predicted beforehand, given more information.
This pattern seems vanishingly unlikely to save us from unFriendly AI. It would take, at the very least, a much more effective education/propaganda campaign.
As I remark elsewhere in this thread, the point is that I would have expected substantially more nuclear exchange by now than actually happened, and in view of this, I updated in the direction of things being more likely to go well than I would have thought. I’m not saying “the fact that there haven’t been nuclear exchanges means that destructive things can’t happen.”
I was using the nuclear war thing as one of many outside views, not as direct analogy. The AI situation needs to be analyzed separately — this is only one input.
It may be challenging to estimate the “actual, at the time” probability of a past event that would quite possibly have resulted in you not existing. Survivor bias may play a role here.
Nuclear war would have to be really, really big to kill a majority of the population, and probably even if all weapons were used the fatality rate would be under 50% (with the uncertainty coming from nuclear winter). Note that most residents of Hiroshima and Nagasaki survived the 1945 bombings, and that fewer than 60% of people live in cities.
It depends on the nuclear war. An exchange of bombs between India and Pakistan probably wouldn’t end human life on the planet. However, an all-out war between the U.S. and the U.S.S.R. in the 1980s most certainly could have. Fortunately that doesn’t seem to be a big risk right now. Thirty years ago it was. I don’t feel confident in any predictions one way or the other about whether this might be a threat again 30 years from now.
Why do you think this?
Because all the evidence I’ve read or heard (most of it back in the 1980s) agreed on this. Specifically, in a likely exchange between the U.S. and the USSR, the northern hemisphere would have been rendered completely uninhabitable within days. Humanity in the southern hemisphere would probably have lasted somewhat longer, but still would have been destroyed by nuclear winter and radiation. Details depend on the exact distribution of targets.
Remember that the Hiroshima and Nagasaki bombs were two relatively small fission weapons. By the 1980s the USSR and the US each had enough much bigger fusion bombs to destroy the planet on their own. The only question was how many each would use in an exchange and where they would target them.
This is mostly out of line with what I’ve read. Do you have references?
I’m not sure what the correct way to approach this would be. I think it may be something like comparing the number of people in your immediate reference class—depending on preference, this could be “yourself precisely” or “everybody who would make or have made the same observation as you”—and then ask “how would nuclear war affect the distribution of such people in that alternate outcome”. But that’s only if you give each person uniform weighting of course, which has problems of its own.
Sure, these things are subtle — my point was that the number of people who would have perished isn’t very large in this case, so that under a broad class of assumptions, one shouldn’t take the observed absence of nuclear conflict to be a result of survivorship bias.
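To make this concrete, here is a minimal sketch of the kind of anthropic bookkeeping being discussed. It assumes a toy model with uniform weighting over observers (one of the weightings mentioned above); the hypothetical probabilities q_safe and q_risky and the survival fractions are made-up numbers chosen purely for illustration, not estimates from this thread.

    # Toy model of the survivorship-bias worry (illustrative numbers only).
    # q = per-history probability that an all-out nuclear war occurs; s = fraction of
    # people who would survive such a war. Under uniform weighting over observers,
    # the chance that a randomly chosen observer finds themselves in a no-war history is
    #     P(observe "no war" | q) = (1 - q) / ((1 - q) + q * s)

    def p_observe_no_war(q: float, s: float) -> float:
        """Probability that a randomly selected observer lives in a history with no war."""
        return (1.0 - q) / ((1.0 - q) + q * s)

    def evidence_ratio(q_safe: float, q_risky: float, s: float) -> float:
        """How strongly 'no war observed' favours the safe hypothesis over the risky one."""
        return p_observe_no_war(q_safe, s) / p_observe_no_war(q_risky, s)

    q_safe, q_risky = 0.1, 0.9  # hypothetical "war was unlikely" vs. "war was likely"
    for s in (1.0, 0.5, 0.1, 0.0):
        print(f"survival fraction {s:.0%}: evidence ratio (safe:risky) = "
              f"{evidence_ratio(q_safe, q_risky, s):.1f}")

With s = 1.0 (a war that kills no observers) the observed absence of war favours the safe hypothesis 9:1; with s = 0.5, roughly the fatality bound suggested above, it still favours it about 5:1; only as s approaches 0 does the evidence wash out entirely. That is the sense in which survivorship bias matters little when most people would have survived.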