Nuclear war/winter was the expected form of the destructor in my youth (I’m now in my 50s). Then Malthusian resource exhaustion, then resource failure through climate change, then supply chain fragility causing/resulting from all of the above. There really have been good reasons to expect species failure on a few decades timeframe. I watched the world go from paper ledgers and snail mail to fax machines and then electronic shared spreadsheets and actual apps/databases for most important things, and human society seemed incapable of coping with those changes at the time.
And none of it compares to the current and near-future rate of change, with all the above risks being amplified by human foibles related to the uncertainty, IN ADDITION to the direct risk of AI takeover.
Living in the USSR, I never felt a sense of impending apocalypse because, at the end of the Two Minutes Hate, Emmanuel Goldstein always showed up and saved the world. Although genuinely dramatic films about the end of the world were made in the USSR—such as Dead Man’s Letters (1986)—the expectation of doomsday is not the most constructive life stance, especially if one doesn’t seek a way out of the situation, which, to be honest, has arisen more than once throughout human history.
It seems to me the situation is painfully simple: Emmanuel Goldstein will soon step onto the stage and give clear instructions to those gripped by panic over the approaching techno-apocalypse—and that will be truly terrifying.
As someone also born in the USSR (and still occasionally pinching myself to make sure I haven’t gone back), I confirm: I’ve seen this pattern before.
Fear is a resource. Someone always shows up to monetize it. But here’s the good news: a risk that’s explicitly named is harder to exploit.
As it’s sung in a well-known song:
Mister Reagan says “We will protect you”…
Emmanuel Goldstein usually appears in the guise of a benevolent politician offering a strategic defense initiative, or a simple way to wipe out all the world’s terrorists by bombing country X; or as a successful businessman offering a “reliable” operating system for housewives; or a hastily tested vaccine; or “green” energy in exchange for nuclear power plants that are supposedly very bad.
Was the risk in these cases explicitly named, or was it deliberately overstated in order to extract political or economic benefit?
I see that someone is exaggerating the risks of a techno-apocalypse, but I don’t deny that they really exist.
There are two powers developing frontier AI, China and America. Do you see the elite of either country abandoning the pursuit of ever-more-potent AI?
It’s like the Moon landing, of course: the USSR didn’t give up until 1974 and blew up four unmanned rockets at launch. But it requires such a huge amount of resources that, while probably everyone wants it, only the US and China can actually do it.