I’ve suspected for a while now that the democratic, egalitarian and feminist (DEF) era we’ve lived in represents a kind of unsustainable drunkard’s walk away from longer-term, more stable social norms, one which has fooled our pattern-recognition heuristics into imposing a vector on this deviation and calling it “progress.” It wouldn’t surprise me if future societies descended from ours, for example the ones in which we might be reanimated from cryostasis (assuming that could even happen), looked noticeably more aristocratic, hierarchical and patriarchal than our departure society. That might suck for the feminist women who have signed up for cryosuspension and survive the ambulance ride across time, but I think I could handle it. ; )
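The drunkard’s-walk intuition can be illustrated with a toy simulation (a hedged sketch, not a model of any real society; the function name and step counts are mine): an unbiased one-dimensional random walk has zero drift by construction, yet it reliably wanders far from its starting point, and any finite stretch of it can look like a directed trend to a pattern-seeking observer.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def random_walk(steps):
    """Unbiased +/-1 random walk; there is no drift term at all."""
    pos, path = 0, [0]
    for _ in range(steps):
        pos += random.choice((-1, 1))
        path.append(pos)
    return path

path = random_walk(10_000)
# Even with zero drift, the typical final displacement grows like
# sqrt(steps), so a long stretch of the path can masquerade as "progress".
print("final displacement:", path[-1])
print("max excursion:", max(abs(p) for p in path))
```

The point of the sketch is only that distance from the origin is not evidence of a direction-imposing force; it is what an undirected process looks like by default.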
Interestingly enough, much American science fiction written during the mid-20th century presents a similarly skeptical view of the current DEF ideology. How many science fiction stories postulate noble houses, monarchies and feudal-looking societies with advanced sciences and technologies, yet set them “in the future”? These writers might have followed the lead of their predecessor H.G. Wells, who advocated in his works that an aristocracy of the mind should run things.
Spengler notoriously predicted that a Caesar-like figure could, within his lifetime, reinvigorate European civilization and end its weakness and complacency, at least for a while. I think we all know how well that turned out. (Although he criticized Hitler’s lack of refinement, sophistication and aristocratic character, that was after voting for him.)
> That might suck for the feminist women who have signed up for cryosuspension and survive the ambulance ride across time, but I think I could handle it. ; )
Do you think you’d be able to persuade many of today’s people (not just women who don’t like the idea of patriarchy) to support building a society that they find cruel and morally abhorrent, on the sole argument that it might be “sustainable” and colonize the stars? [Edit: couldn’t make heads or tails of my grammar upon revision; simplified this.]
Unless you personally want to participate in it and are confident you’d enjoy it… how is optimizing for a powerful, self-sustaining fascist/baby-eating/etc. society (e.g. Ancient Rome as it would look to us, with its genocides, crucifixions and slave fights) different from just building a computronium-paving AI and loading a memory of our culture and knowledge into it? That would also last a long time and build big things. It might even be programmed to derive utility from making and comprehending our kind of art, texts, etc. Would it be a good deal to let it destructively assimilate/enslave/whatever “our” branch of humanity, just because we are too fragile and might not last long?
> These writers might have followed the lead of their predecessor H.G. Wells, who advocated in his works that an aristocracy of the mind should run things.
You do understand that even in his own day Wells became a byword for naive liberalism and belief in progressivist technocracy? His so-called “aristocracy of the mind”, and the manner in which it was supposed to rule, were worlds apart from “future feudalism” (although I think both look tyrannical upon closer inspection). See Orwell.
As to the (sickening and perverse IMO) idea of a Hari Seldon—with all that it implies—here’s a quote from Chesterton, the great crusader against “nihilism” and anti-humanism:
In the July 10, 1920 issue of The Illustrated London News, G. K. Chesterton took issue with both pessimists (such as Spengler) and their optimistic critics, arguing that neither took into consideration human choice: “The pessimists believe that the cosmos is a clock that is running down; the progressives believe it is a clock that they themselves are winding up. But I happen to believe that the world is what we choose to make it, and that we are what we choose to make ourselves; and that our renascence or our ruin will alike, ultimately and equally, testify with a trumpet to our liberty.”

http://en.wikipedia.org/wiki/Oswald_Spengler#Aftermath
Re: Stability:

I don’t get your position. Are you arguing that we should support a “moral” society even if it’s unstable, and hope (pray?) that it doesn’t collapse into something much worse than the stable society we could create if we actively attempted to?
> In the July 10, 1920 issue of The Illustrated London News, G. K. Chesterton took issue with both pessimists (such as Spengler) and their optimistic critics, arguing that neither took into consideration human choice: “The pessimists believe that the cosmos is a clock that is running down; the progressives believe it is a clock that they themselves are winding up. But I happen to believe that the world is what we choose to make it, and that we are what we choose to make ourselves; and that our renascence or our ruin will alike, ultimately and equally, testify with a trumpet to our liberty.”
Weren’t you arguing earlier that treating humans as capable of morally significant choices was a cardinal sin?
I’ve found an excellent negative-utilitarian critique of the “stability / x-risk reduction” mindset. Brian Tomasik argues that human extinction might be greatly preferable to creating lasting supercivilizations more tolerant of suffering and torture than ours, and more willing to induce them. He therefore argues that we should devote far less effort to averting x-risks in themselves, and far more to improving our current society while simultaneously increasing the odds of a future that isn’t abhorrent to our values:

> Targeted interventions to change society in ways that will lead to better policies and values could be more cost-effective than increasing the odds of a future-of-some-sort that might be good but might be bad.

I’m inclined to agree.
Existential risk is far from the only risk of unstable societies. In fact, devolving into a lasting supercivilization based on torture is closer to what I had in mind in the parent.
> In fact, devolving into a lasting supercivilization based on torture is closer to what I had in mind in the parent.
And note that Western liberalism/progressivism has pretty much created the first culture in history with strong norms against torture (extending to things like child discipline). It’s inconsistent and hypocritical in applying those norms to itself, true (especially regarding imprisonment), but I’d still consider it a kind of moral progress that a Western citizen is more likely to lose sleep and make some noise about police brutality, waterboarding, etc. than a Russian, Chinese or, say, Singaporean one. To say nothing of the subjects of past empires.
This recent aversion to torture seems to endure despite heightened perceptions of crime, terrorist threats, etc. (see the latest scandal over Zero Dark Thirty). And wouldn’t it be a very convenient thing for a “rational”, non-squeamish social engineer to optimize away? And then where would the slippery slope end?
> And note that Western liberalism/progressivism has pretty much created the first culture in history with strong norms against torture
I agree that Western civilization has many unique accomplishments; I would argue that it is therefore worth defending.
> (extending to things like child discipline). It’s inconsistent and hypocritical in applying those norms to itself, true (especially regarding imprisonment)
I’d argue that these are examples of taking the prohibition too far. In any case, if Western civilization collapses because parents failed to adequately pass it on to their children, or because it is no longer capable of dealing with crime (for example), its replacement will likely have far fewer prohibitions on torture, and probably no free speech or free inquiry, nor anything resembling democracy.
> and wouldn’t it be a very convenient thing for a “rational”, non-squeamish social engineer to optimize away? And then where would the slippery slope end?
This is actually my biggest issue with “progressives”: you destroy traditional Schelling points on the grounds that they’re arbitrary and “irrational”, and then discover you have no way of taking non-extreme positions.
Even if the “moral” society is doomed to fail and bring a horrible disaster tomorrow, you can still get fuzzies and social status for promoting it today.
On the other hand, it is also important to ask how much certainty you have that it is doomed, and what your evidence is. In the real world, though, promoting the “moral” society would include actively destroying all such evidence. But reversed stupidity is not intelligence: just because someone is trying to destroy the evidence, we should not automatically conclude that the evidence was overwhelming. Also… it’s complicated.
Vox Day suggests that we can find another Hari Seldon in Oswald Spengler:
http://voxday.blogspot.com/2013/01/spenglerian-decline.html
Which discusses:
http://nationalinterest.org/article/spenglers-ominous-prophecy-7878?page=show