That’s comparing apples to oranges. There are doomers and doomers. I don’t think the “doomers” predicting the Rapture or some other apocalypse are the same thing as the “doomers” predicting the moral decline of society. The two categories overlap in many people, but they are distinct, and I think it’s misleading to conflate them. (Which is kind of a critique of the premise of the article as a whole—I would put the AI doomers in the former category, but the article only gives examples from the latter.)
The existential-risk doomers have historically usually been crazy, and they’ve never been right yet (in the context of modern society, anyway—I suppose if you were an apocalypse doomer in 1300s China saying that the Mongols were going to come and wipe out your entire society, you were pretty spot on), but that doesn’t mean they are always wrong or totally off base. It’s completely rational to be concerned about doom from nuclear war, for example, even though it hasn’t happened yet. Whether AI risk is crazy “Y2K/Rapture” doom or feasible “nuclear war” doom is the real debate, and this article doesn’t really contribute anything to it.
What this article does a good job of is illustrating how “moral decline” doomers, as opposed to “apocalypse” doomers, are often proved technically correct by history. I think what both they and the article miss is that they often see events as causes of the so-called decline, when those events are actually milestones in an already-existing trend. Legalizing gay marriage didn’t cause other “degenerate” sexual behavior to become more accepted in society—we legalized gay marriage because we had already been moving away from the Puritanical sexual mores of the past toward a more liberated attitude, and this was just one more milestone in that process. Now, that’s not always true—the invention of the book, and later the smartphone, absolutely did cause a devaluing of the ability to memorize and recite knowledge. And sometimes it’s a bit of both, where an event is both a symptom of an underlying trend and also helps accelerate it. But I really like how the article acknowledges that the doomers could be right even if “doom” as we think of it today never occurred, because the values that were important to them were lost:
Probably the ancients would see our lives as greatly impoverished in many ways downstream of the innovations they warned against. We do not recite poetry as we once used to, sing together for entertainment, roam alone as children, or dance freely in the presence of the internet’s all-seeing eyes. Less sympathetic would be the ancients’ sadness at our sexual deviances, casual blasphemy, and so on. But those were their values.
We laugh at them as prudes, imagining how appalled they would be at our society, where homosexuality, polyamory, weird fetishes, etc. are all more or less openly discussed and accepted. But think about how it would feel if you saw your own society trending toward one where, say, pedophilia was becoming less of a taboo. It doesn’t matter whether that’s right or wrong; it’s the visceral response most people have to the idea that you need to understand. That’s what it feels like to be a culturally conservative doomer watching their society undergo value drift. People today like to think our values are somehow backed up by reality in a way that isn’t true of other past or present value systems, but guess what? That’s just what it feels like to have a value system. Everyone, everywhere, in all times and places has believed the same thing, and the human mind excels at no task more than coming up with rationalizations for why its own values are right and opposing values are wrong.
Overall I think this article is pretty insightful about the “moral decline” type of doomer, but completely unrelated to the question of AI existential risk that prompted it in the first place.
Small historical nit: China was actually ruled by the Mongols for most of the 1300s.
Correct, my mistake. 1200s. I was just reaching for a historical example of a real “apocalypse” that did in fact come to pass—one where not only you and everyone you know will be killed, but your entire society as you know it will come to an end—and the brutal Mongol conquest of China was the first that came to mind, probably thanks to Dan Carlin’s excellent Hardcore History podcast on the subject. I didn’t take the two seconds on Wikipedia I should have to make sure I was talking about the right century.
I was thinking of other contenders, like the smallpox epidemics in North America following the Columbian exchange, but in that scenario you didn’t really have “doomers” predicting the outcome, because epidemiology at the time wasn’t up to understanding the problem they were facing. In China, though, it’s plausible that some individuals had access to enough news and information to make doom predictions about the Mongol apocalypse that turned out, unfortunately, to be correct.