Last month, doomers predicted that the Rapture would happen. The doomers were wrong, as they have been on the dozens of other notable occasions they have predicted this.
Doomers predicted that the Y2K bug would cause massive death and destruction. They were wrong.
Moving to your social examples, doomers predicted that the legalisation of gay marriage would lead to the legalisation of bestiality. They were wrong.
You even provide an example yourself where people claimed that D&D leads to Satanism. This didn’t happen! Having an orc hero is not Satanism! Satanism has remained a niche religion. The doomers were wrong.
Cherry-picking examples where doomerist predictions were wrong and examples where they were right is a very poor way to figure out whether a particular doomerist prediction is wrong or right.
Skeptics are not arguing that no technology or social movement has ever had a negative effect, they are arguing that humans are biased towards apocalyptic thinking and overestimating the threat of new stuff. If you want to rebut this, you have to actually collect some unbiased data to test the claim.
While I agree at a basic level, this also seems like a motte-and-bailey.
There is clearly a vibe that all doomers have obviously always been wrong. The author is clearly trying to push back against that vibe. I too prefer arguing at the ‘motte’ level, but vibes (baileys) matter, and pushing back against one should not require a long airtight argument that stands up to the stronger version of the claims being made. Even though I agree the stronger version would be better, that’s true for both sides of any debate.
I sort of see your argument here, but similarly, just on vibes, associating AI-risk concepts with other doom predictions feels like it does more harm than good to me. The vibe that doomers are always wrong doesn’t feel countered by cherry-picking examples of smaller predicted harms, because (as illustrated in the comment) the body of doom predictions is much larger than the subset with nuggets of foresight.
That’s comparing apples to oranges. There are doomers and doomers. I don’t think the “doomers” predicting the Rapture or some other apocalypse are the same thing as the “doomers” predicting the moral decline of society. The two categories overlap in many people, but they are distinct, and I think it’s misleading to conflate them. (Which is kind of a critique of the premise of the article as a whole—I would put the AI doomers in the former category, but the article only gives examples from the latter.)
Historically, existential-risk doomers have usually been crazy, and they’ve never been right yet (in the context of modern society, anyway—I suppose if you were an apocalypse doomer in 1300s China saying that the Mongols were going to come and wipe out your entire society, you were pretty spot on), but that doesn’t mean they are always wrong or totally off base. It’s completely rational to be concerned about doom from a nuclear war, for example, even though it hasn’t happened yet. Whether AI risk is crazy “Y2K/Rapture” doom or feasible “nuclear war” doom is the real debate, and this article doesn’t really contribute anything to it.
What this article does a good job of is illustrating how “moral decline” doomers, as opposed to “apocalypse” doomers, are often proved technically correct by history. I think what both they and this article miss is that they often see events as causes of the so-called decline, when they’re actually milestones in an already-existing trend. Legalizing gay marriage didn’t cause other “degenerate” sexual behavior to become more accepted in society—we legalized gay marriage because we had already been moving away from the Puritanical sexual mores of the past towards a more liberated attitude, and this was just one more milestone in that process. Now, that’s not always true—the invention of the book, and later the smartphone, absolutely did cause a devaluing of the ability to memorize and recite knowledge. And sometimes it’s a little of both, where an event is both a symptom of an underlying trend and also helps accelerate it. But I really like how the article acknowledges that the doomers could be right even if “doom” as we think of it today did not occur, because the values that were important to them were lost:

Probably the ancients would see our lives as greatly impoverished in many ways downstream of the innovations they warned against. We do not recite poetry as we once used to, sing together for entertainment, roam alone as children, or dance freely in the presence of the internet’s all-seeing eyes. Less sympathetic would be the ancients’ sadness at our sexual deviances, casual blasphemy, and so on. But those were their values.
We laugh at how prudish they were, and at how appalled they would be at our society, where homosexuality, polyamory, weird fetishes, etc. are all more or less openly discussed and acceptable. But think what it would feel like if you saw your own society trending towards one where, say, pedophilia was becoming less of a taboo. It doesn’t matter whether that’s right or wrong; it’s the visceral response most people have to the idea that you need to understand. That’s what it feels like to be a culturally conservative doomer watching their society experience value drift.

People today like to think that our values are somehow backed up by reality in a way that isn’t true of other past or present value systems, but guess what? That’s what it feels like to have a value system. Everyone, everywhere, in all times and places has believed that, and the human mind excels at nothing so much as coming up with rationalizations for why your values are the right ones and opposing values are wrong.
Overall I think this article is pretty insightful about the “moral decline” type of doomers, just completely unrelated to the question of AI existential risk that brought it up in the first place.
Small historical nit: China was actually ruled by the Mongols for most of the 1300s.
Correct, my mistake. 1200s. I was just reaching for a historical example of when a real “apocalypse” did in fact come to pass—when not only are you and everyone you know going to get killed but also your entire society as you know it will come to an end—and the brutal Mongol conquest of China was the first one that came to my mind, probably thanks to Dan Carlin’s excellent Hardcore History podcast on the subject. I didn’t take the 2 seconds on Wikipedia I should have to make sure I was talking about the right century.
I was thinking of other contenders like the smallpox epidemic in North America following the Columbian exchange, but in that scenario you didn’t really have “doomers” who were predicting that outcome, because their epidemiology at the time wasn’t quite up to understanding the problem they were facing. But in China at the time, it’s feasible that some individuals would have had access to enough news and information to make doom predictions about the Mongol apocalypse that turned out to be unfortunately correct.
This seems like a misleading example of doomers being wrong (agree denotationally, disagree connotationally), since I think it’s plausible that Y2K was not a big deal (to such an extent that “most people think it was a myth, hoax, or urban legend”) precisely because of the mitigation efforts spurred by the doomsayers’ predictions.
However, IIRC I’ve heard it said that the Y2K bug didn’t cause serious problems even in countries where there wasn’t much effort to deal with it, and hence that the doomsayers’ predictions were exaggerated (in that much smaller mitigation efforts would have served almost as well). I don’t know if this is true, though.
Even if that were true, it might not mean anything. Why might a country not invest in Y2K prevention? Well, maybe it’s not a problem there! You don’t decide on investments at random, after all.
And this is clearly a case where (1) USA/Western investments would save a lot of other countries the need to invest in Y2K prevention because that is where most software comes from; and (2) those countries might not have the problem in the first place because they computerized later (and skipped the phase of hardwiring in dangerously short data types), or hadn’t computerized at all. (“We don’t have a Y2K problem because we don’t have any computers” doesn’t imply Y2K prevention is a bad idea.)
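To make point (2) concrete, here is a minimal sketch of the kind of hardwired short date field in question. It is hypothetical (the names are invented, not from any real system), but it shows how a two-digit year makes even a simple date comparison fail once one of the dates lands past 1999:

```c
/* Hypothetical sketch of a hardwired two-digit year field -- not from
 * any real system. Years are stored as 0-99 to save space, which makes
 * simple date comparisons break once dates cross into 2000. */
#include <stdio.h>

typedef unsigned char yy_t;  /* year stored as two digits (year % 100) */

/* Naive ordering check: is year a earlier than year b? */
static int year_before(yy_t a, yy_t b)
{
    return a < b;
}

int main(void)
{
    /* Fine while both years are in the same century... */
    printf("98 before 99? %d\n", year_before(98, 99));  /* 1: correct */

    /* ...but a card valid until 2005 looks long-expired in 1999,
     * because "05" compares as less than "99". */
    yy_t card_expiry = 5;   /* meant as 2005 */
    yy_t today       = 99;  /* 1999 */
    printf("already expired? %d\n", year_before(card_expiry, today));  /* 1: wrong */
    return 0;
}
```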
Indeed, I had similar thoughts but didn’t type them up.
In any case, I suspect it was a situation in which a cost-benefit analysis would show that high risk-aversion (hence probable over-reaction, to avoid under-reaction) was justified.
The consensus view is that they were shielded by those who did invest in it.
I’ve written more about Y2K at https://www.lesswrong.com/posts/zvQdgfFEDFQQhDDuS/y2k-successful-practice-for-ai-alignment
Yep. Put another way: With Y2K, the higher-quality “predictions of doom” were sufficiently specific that they were also a road map to preventing the doom.
(If nothing else, you could frequently test a system by running the system clock ahead to 1999-12-31 23:59:59 and waiting a moment to see if anything caught fire.)
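A rough illustration of that test, as a hypothetical sketch rather than anything from a real remediation effort: instead of touching a real machine’s clock, it constructs 1999-12-31 23:59:59, steps one second forward, and feeds both instants to a legacy-style formatter that prints a hardcoded “19” century prefix, the source of the famous “19100” dates.

```c
/* Hypothetical sketch of a Y2K rollover test: build 1999-12-31 23:59:59,
 * step one second forward, and watch a legacy-style date formatter fail. */
#include <stdio.h>
#include <time.h>

/* Legacy-style formatter: tm_year is years since 1900, and old code
 * often printed it after a hardcoded "19" century prefix. */
static void format_date_legacy(time_t t, char *buf, size_t n)
{
    struct tm *tm = localtime(&t);
    snprintf(buf, n, "19%02d-%02d-%02d",
             tm->tm_year, tm->tm_mon + 1, tm->tm_mday);
}

int main(void)
{
    /* 1999-12-31 23:59:59, interpreted in local time -- close enough
     * for a sketch. */
    struct tm eve = {0};
    eve.tm_year  = 99;  /* 1999 */
    eve.tm_mon   = 11;  /* December */
    eve.tm_mday  = 31;
    eve.tm_hour  = 23;
    eve.tm_min   = 59;
    eve.tm_sec   = 59;
    eve.tm_isdst = -1;  /* let the C library determine DST */
    time_t t = mktime(&eve);

    char buf[32];
    format_date_legacy(t, buf, sizeof buf);
    printf("before rollover: %s\n", buf);  /* 1999-12-31 */

    format_date_legacy(t + 1, buf, sizeof buf);
    printf("after rollover:  %s\n", buf);  /* "19100-01-01": fire */
    return 0;
}
```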