It is one thing to say “Something must be done!” with a tone of righteous superiority. It is another thing entirely to specify what must be done. Many of these risks do not seem existential to me, some (like dystopia) should really be properly buried as ideas (Bostrom actually dismisses this idea in that paper). The ones that do seem realistically existential seem almost impossible to prepare against on any realistic scale—aliens, gray goo, uploads, and massive global warfare/conquest don’t seem like they’re going to be sensitive to many investments we make now, since they’re either too small and specific or too large and non-specific to address generally.
You also forgot to list the biggest problem: “Something Unforeseen.”
It’s not terribly constructive to say we lack good self-preservation mechanisms without at least hand-waving what good self-preservation mechanisms might look like and how we could theoretically start to build them. The mere fact that we could all die at any moment is not much of a cause for alarm if there’s really nothing we can do about it.
Edit: My general point is clarified in a response to a response to this post.
It is one thing to say “Something must be done!” with a tone of righteous superiority. It is another thing entirely to specify what must be done.
Truths about the world should be stated if they seem important to our collective utility function, and I resent being criticised for tone, righteousness, etc. I would expect that here I can freely state important truths and expect to be criticised based only upon the accuracy and utility-relevance of those statements. If we start criticising people for “sounding righteous”, we are incentivizing people to write posts that are pleasant sounding over posts that are accurate. This is suboptimal for rational group behaviour.
Furthermore, you shouldn’t criticise me for not saying what to do about these problems. If LW implements a general policy of punishing anyone who posts a problem without also posting a solution, on the grounds that doing so constitutes “righteous superiority”, we are incentivizing people to minimize the time they spend thinking about solutions: they will write down the first solution that comes into their heads and publish it right next to the problem, priming everyone on LessWrong with that likely poor-quality solution as a reference point with the special status of being in the post. We are also giving people an incentive not to publish what they think are important problems that they can’t think of solutions to, biasing LW to not even tackle the hard problems.
The mere fact that we could all die at any moment is not much of a cause for alarm if there’s really nothing we can do about it.
I didn’t say that we can’t do anything about these risks. Absence of evidence of an ability to risk-mitigate is not evidence of absence of an ability to risk-mitigate.
Our western civilization lacks an effective long-term (order of 50 years plus) self-preservation system. Hence we should reasonably expect to either build one, or get wiped out.
This is a huge claim. You’re claiming first of all that the odds of succumbing to a truly existential event are higher than not. You don’t (IMO) provide evidence to support this: you provide some evidence that we may have had really catastrophic events in the past, but, again, only 100% destruction is existential, and you do not finish off your examples—“Hitler could have won” and “Hitler could have won and created a repressive regime that lasted for the remainder of human history” are two very different claims, and the former is not existential. Second, you claim that if we take some steps, we can expect such events not to happen—it is because we lack an “effective long-term preservation system” that we can expect to be destroyed completely.
Thus, you have, to my understanding, made two claims: one about the likelihood of existential events, and one about the likelihood of us being able to mitigate them. Again, by my evaluation, you have provided compelling evidence for neither of these conclusions; indeed you’ve provided virtually no evidence for either of them (probability, not possibility). That is the root of my criticism of not providing solutions: you claim solutions are possible, desirable, and effective, and you do not provide any evidence to support this claim.
Thus, my criticism of your tone as “righteous” is because you seem to be making a strong, “deep” claim without providing adequate supporting evidence or argument. It is not a criticism of your word choice. I have absolutely no problem with people posting about problems that occur to them that they don’t know how to solve. I do have a problem with people making strong claims with a definitive tone without providing adequate supporting evidence.
I admit this may all hinge on a disagreement in definition over “existential.” I take existential to require true obliteration. Gray goo would reach this, as would the-simulation-loses-power or every-atom-splits or humanity-is-enslaved-by-something-forever. “Nuclear holocaust kills billions and it takes ten thousand years to recover” does not count in my mind, as it is not terminal. Similarly, “Hitler reigns for ten thousand years” is also non-existential (at least for humanity as a whole); if recovery occurs, even after a fairly large gap, it does not seem to count as existential. This view is consistent with Bostrom’s definition in the linked paper. With a weaker definition of existential, it is quite possible that there is no disagreement here, in which case I have the (smaller) criticism that you should have clarified this at the beginning.
This is a huge claim. You’re claiming first of all that the odds of succumbing to a truly existential event are higher than not.
Given that most societies that ever existed were wiped out, often violently or otherwise catastrophically, and that we have a list of 6 near misses, and that almost all homeostatic complex systems that are loosely analogous to civilization—such as ecosystems, long-lived organisms like coral reefs, or even organisms in general—have existed and then been wiped out again, I think that this is a reasonable claim.
If we had actually had 6 “near misses”, then that would be pertinent evidence. In that case, maybe they should be listed and their probabilities and potential impacts estimated.
I now get what led to this confusion. You’ve referred to both “existential” and “major civilizational-level catastrophes” without much effort to distinguish between the two, though they differ in both extent and probability by a few orders of magnitude. I assumed from the Bostrom paper citation and the long list of existential threats that the article in general was about existential risks, which, on a rereading, it isn’t.
My concern over showing that something could reasonably be done remains, but you do provide appropriate evidence regarding civilization-level catastrophes. It might be worth a sentence or two clarifying that your concern is civ-level or greater, rather than specifically existential, though I may be the only one who misread the focus here.
without much effort to distinguish between the two
Well, I used two different phrases. I drew the distinction in the first sentence, and several other times throughout the article. What else did I not do that I should have done?
though they differ in probability by a few orders of magnitude
What probability do you assign to human civilization being wiped out over the next, say, 10,000 years? Less than 0.1% or less than 1%, I presume, since it must be a few orders of magnitude less than 100%?
It might be worth a sentence or two clarifying that your concern is civ-level or greater
How about this:
“The prospect of a dangerous collection of existential risks and risks of major civilizational-level catastrophes … ” ?
It is one thing to say “Something must be done!” with a tone of righteous superiority. It is another thing entirely to specify what must be done.
Ah, ok, now I understand why this post is being binned: I sound righteous. Can you give me some hints as to what about this post triggered your righteousness detector? That was not intended…
I don’t think that’s the main thrust of his complaint. Lack of specifics is the main problem. If you say “Something must be done!” but not what, then the tone of the writing is moot, so far as righteousness-detectors go.
But at the end of the day, this is supposed to be a rationalist community. All I did was communicate a true fact, without attempting to “sound righteous”—which is a form of social signalling.
If we cannot state true facts without false accusations of social signalling being levelled—well, then we have a long way to go as a rationalist group.
Telling someone off because they tripped your righteousness detector when all they are trying to do is present an accurate piece of the map is not good group epistemic rationality.
This comment is a much more useful criticism than the previous one. I will be making some changes to the article.
As discussed, I screwed this post up. I wanted to split the two tasks up logically:
1. establish that there really is a problem
2. say what to do about it—IN THE NEXT POST!
I could have rolled it all into one big post, but that’s a lot of material.