Agreed: turning 40 or 20 now need not make a bit of difference for those aware of the weirdness of the time.
But it seems like a stretch to say it was already like that a few decades ago. The sheer uncertainty now seems objectively different, qualitatively incomparable, to 20 years ago (well, at least if the immediacy of the potential changes is considered too).
Nuclear war/winter was the expected form of destruction in my youth (I’m now in my 50s). Then Malthusian resource exhaustion, then resource failure through climate change, then supply chain fragility causing (and resulting from) all of the above. There really have been good reasons to expect species failure on a timeframe of a few decades. I watched the world go from paper ledgers and snail mail to fax machines and then electronic shared spreadsheets and actual apps/databases for most important things, and human society seemed incapable of coping with those changes at the time.
And none of it compares to the current and near-future rate of change, with all the above risks being amplified by human foibles related to the uncertainty, IN ADDITION to the direct risk of AI takeover.
Living in the USSR, I never felt a sense of impending apocalypse because, at the end of the Two Minutes Hate, Emmanuel Goldstein always showed up and saved the world. Although genuinely dramatic films about the end of the world were made in the USSR—such as Dead Man’s Letters (1986)—the expectation of doomsday is not the most constructive life stance, especially if one doesn’t seek a way out of the situation, which, to be honest, has arisen more than once throughout human history.
It seems to me the situation is painfully simple: Emmanuel Goldstein will soon step onto the stage and give clear instructions to those gripped by panic over the approaching techno-apocalypse—and that will be truly terrifying.
As someone also born in the USSR (and still occasionally pinching myself to make sure I haven’t gone back), I confirm: I’ve seen this pattern before.
Fear is a resource. Someone always shows up to monetize it. But here’s the good news: a risk that’s explicitly named is harder to exploit.
As it’s sung in a well-known song:
Mister Reagan says “We will protect you”…
Emmanuel Goldstein usually appears in the guise of a benevolent politician offering a strategic defense initiative, or a simple way to wipe out all the world’s terrorists by bombing country X; or as a successful businessman offering a “reliable” operating system for housewives; or a hastily tested vaccine; or “green” energy in exchange for nuclear power plants that are supposedly very bad.
Was the risk in these cases explicitly named, or was it deliberately overstated in order to extract political or economic benefit?
I see that someone is exaggerating the risks of a techno-apocalypse, but I don’t deny that those risks really exist.
There are two powers developing frontier AI, China and America. Do you see the elite of either country abandoning the pursuit of ever-more-potent AI?
It’s like landing a man on the moon: the USSR didn’t give up until 1974, blowing up four unmanned rockets at launch. Probably everyone wants it, but it requires such a huge amount of resources that only the US and China can do it.
Yes, true, the level and timeline are very very different, whether we call the difference qualitative or quantitative.
I guess I considered it quantitative because when I was 20 I was already thinking there was at least a possibility of seeing human extinction or immortality in my lifetime, though my probabilities and timelines are now hugely different. Extinction has seemed like a possibility since the Cold War, and IIRC Kurzweil started talking about the singularity in the 90s.
People who want to fear an imminent apocalypse had plenty of options in previous decades too. Runaway global warming, peak oil, hitting global carrying capacity, etc. There was even a while where they could’ve feared nuclear war! That’s plenty immediate and dramatic, IMO.
With the possible exception of nuclear war, none of those apocalypses were as imminent, nor as dire, as this one, according to a reasonable assessment of the evidence at the time.
I agree with this 100%.
I’m not sure this matters for the lived experience of the humans living through those other times, given the worse information environments they/we faced. Unless you happened to actually be an expert in the relevant fields (and sometimes even then) the types of warnings and fearmongering going on, however wrong compared to the actual risks we now face from AI, were just as dire. There are still large communities of otherwise seemingly intelligent people utterly convinced that climate change and resource depletion are imminent apocalyptic threats that will cause collapse of civilization and/or human extinction by mid-century. In other words: “A reasonable assessment of the evidence at the time” is a much higher bar than most people ever attain about almost anything anywhere near this complex and novel.