We really should consider the possibility of having to stomp on Islam as an existential risk, or at least stopping immigration from Muslims’ weak and dysfunctional countries until Muslims can show good citizen behavior, if ever.
If you engage in Long Game thinking, you should also consider the need for prophylaxis against the emergence of new nuisance religions in the coming centuries. I think randomness has fooled us into thinking that undesirable things from the Before-Times will never come back because we have “progressed beyond that,” when I wouldn’t assume anything of the sort.
http://tvtropes.org/pmwiki/pmwiki.php/Main/LongGame
I don’t know if it’s an existential risk, but if technology keeps enabling fewer and fewer people to kill more and more people for less and less money (hint: it will), and Islamic countries continue to produce as many people who want to kill us as they do now, then at some point, perhaps 50 years from now, “we” will have to kill everyone in every country where radical Islam has a hold. (That’s about 400 million people at present.) That would radicalize much of the rest of the world against us.
I don’t know what the right response is, but it probably isn’t continuing to insist that people have the right to preach hatred and violence as long as it’s out of a sufficiently old book. I suspect, though, that America is more ready to sacrifice 400 million foreigners on the altar of religious liberty than to change that.
Do you see any clear linear increase in bodies per attacker in that database?
I don’t know about the data in that database, but there’s other evidence for a similar conclusion. See e.g. this paper, which argues that the ratio of dollars of damage done to dollars of cost has been going up. (Disclaimer: one of the authors is my twin.) In citation 64 they cite another paper, which I have not read, arguing that over the last 70 years smaller and smaller groups have been killing more people.
I wonder if that would be considered evidence for a much broader (and scarier) claim—that modern society is becoming increasingly fragile and brittle.
Or simply that societal wealth is growing. If each person is worth their lifetime output, then in societies with higher output per person, even with fixed terrorist effectiveness and cost of terrorism*, the monetary cost will increase. But this has no relevance to any sort of x-risk claim, because society is also wealthier and better able to absorb the damage without x-risk-level collapse.
* there’s not really much reason to expect terrorism costs to increase over time. Guns are cheap.
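This wealth effect is simple enough to put in a toy arithmetic sketch (my own illustration, not from the cited paper; every constant here is a made-up assumption): hold the attacker’s cost and effectiveness fixed, and the dollar damage-to-cost ratio still rises in lockstep with output per person.

```python
# Toy model: the dollar damage-to-cost ratio of an attack rises with
# societal wealth even when the attacker's cost and effectiveness are
# held constant. All numbers below are illustrative assumptions.

ATTACK_COST = 10_000      # assumed fixed dollar cost of mounting an attack
LIVES_AFFECTED = 10       # assumed fixed "effectiveness" (victims per attack)
YEARS_LOST_EACH = 40      # assumed years of lost output per victim

def damage_to_cost_ratio(annual_output_per_person):
    """Dollar damage (victims' lost lifetime output) per dollar of attack cost."""
    damage = LIVES_AFFECTED * YEARS_LOST_EACH * annual_output_per_person
    return damage / ATTACK_COST

for output in (5_000, 20_000, 80_000):
    print(output, damage_to_cost_ratio(output))  # 200.0, 800.0, 3200.0
```

Quadrupling output per person quadruples the ratio with no change at all in the attackers’ technology, which is the sense in which rising dollar damage per dollar of cost needn’t by itself signal rising fragility.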
And as James points out, technology does not necessarily have to be a one-way ratchet.
That’s certainly true, but the interesting question remains—is that all there is to it?
Also note that I am not talking about x-risk: if the contemporary Western society turns out to be too specialized to survive a change in the environment (a very common scenario in evolution), that doesn’t mean the humans die out—all it means is that this particular form of society didn’t work out well.
This is a very interesting and important question. Is there a general trend for how robustness scales with complexity? For evolved species, there will probably be an answer that depends on their population size and reproductive strategy. For constructed things like civilizations, the answer will probably be different. Gotta run but I’ll edit this comment later.
I think the general answer is the useless one: “it depends”, but for the particular case of the contemporary high-tech society I get a feeling (a “prior” in local lingo) that the robustness is negatively correlated with complexity.
A notable recent example: the 2008 financial crisis.
In general, economic crashes after World War II have been small compared to many in the 19th century. For example, it is sometimes estimated that the panic of 1819 had an unemployment rate of around 20% at its height. Serious economic catastrophes have also been less common.
I should have been clearer—my point isn’t its severity, but rather what was seen as the greatest danger to be avoided at all costs. That greatest danger was the domino collapse of the entire global financial system, and it is precisely that which led the US Fed to adopt rather unconventional methods in the aftermath of the Lehman Brothers bankruptcy.
What was widely seen (correctly or not is a different and complicated issue) as the major issue was the possibility of all the world’s big banks freezing up in a chain of defaults or maybe-defaults, as all of them are interlinked and hold each other’s debt. That didn’t exist as a problem in the 19th century.
Domino effects among banks were definitely a thing in the 19th and even the 18th century. Moreover, even if a collapse had happened to the extent that the worst-case scenarios envisioned, it isn’t clear it would have been worse than 1819. And even if total unemployment did get worse, it is likely that the overall standard of living would still remain far better than at any time in the 19th century. Larger events can occur, but the ratchet is still slowly moving in the same direction.
Nationally. But not globally.
Nothing is clear since we’re dealing with counterfactuals, but why do you believe so?
Well, I never saw an estimate for a worst-case scenario with an unemployment rate as high as that in 1819, but now that I state my reasoning explicitly, it sounds pretty weak.
What was the unemployment rate in 1819? A brief look at the web gave me nothing.
There have been a bunch of papers on this, and I’ll have to track them down, but if memory serves me, the low estimate is around 15% and the high is around 22 or 23%. Unfortunately, precise economic data from before the 1920s is generally hard to come by.
Given that right now unemployment in Spain is about 25%, that doesn’t sound too horrible.
That’s a good point. I suppose one could argue that Spain is a relatively small part of Europe as a whole, but that seems like a pretty weak argument. I think I’m going to have to update on this general position to substantially reduce my confidence that the economic situation has been becoming more stable. Thanks.
Great Depression.
Hey guys, defection has gotten really easy… screw cooperation, let’s start defecting all over the place!
If defection becomes cheaper, then you would expect more defections. All else being equal.
In the very long-run there’s going to be a problem that if the trends keep continuing it will be possible for single crazy people to do nearly indefinite levels of damage. That’s a problem whether the individuals are motivated by religion, ideology, or are just wacko. Charlie Stross in one set of novels imagined a world where there’s a real problem that every crazy person can build their own nuclear weapon, but I don’t think he took it to its logical conclusion. The solution here may be simply to spread out so much to other planets that isolated incidents cannot do enough damage.
No. In 50 years it will probably be possible for the U.S. to have total drone surveillance of a country. We could assign everyone their own drone that monitors their behavior and alerts us if they do something we don’t like. And even if this proves not to be the case, rather than resort to genocide couldn’t we just cut off electricity to dangerous areas?
Your solution appears to require first conquering the entire world. Also, drones can’t tell what’s happening inside a building, or what’s in the packages or trucks going in and out of a building. Unless you mean micro-drones too small to detect, which is possible.
General point taken: It is very difficult to talk about what would be necessary 50 years from now.
Much of the world would likely support total drone surveillance of certain countries. Also, in fifty years we could probably put recording devices in people’s brains that tell us everything they say and hear, and combine this with AI to immediately identify any terrorist threats.
If we’re talking about brain implants and advanced AI, the singularity would occur by the time we reach this level of development. The problem is: what if superweapons arrive before superintelligence?
Like, say, in 1945?
I don’t think what I described would require a super-intelligence.
No, but the scenario you’re describing reminds me very much of the post on the definition of existential threat. In particular,
Networking loads of brains together is one of the more eclectic proposals on how to create a super-intelligence.
The simpler proposal of panopticon surveillance plus AI to interpret the data might be doable without AGI however.
Doesn’t the US have some sort of “fourth amendment” which prevents surveillance of its own citizens (who might become terrorists)? And, unlike spying on internet usage, people are going to be really aware of drones buzzing them.
No, it does not. The Fourth Amendment prevents “unreasonable searches and seizures”—there is no explicit right to privacy in the US Constitution. The Supreme Court managed to find one, though (via a “penumbra of rights”), for a specific politicized purpose, but hasn’t been willing to take it seriously otherwise.
There are a few current court cases against the NSA surveillance in the US, but none have gotten anywhere so far.
Yes, but, as they say “the constitution isn’t a suicide pact” and if the only way to stop mass terrorist attacks in the U.S. is by trashing the fourth amendment, the fourth amendment will get trashed.
Unfortunately, if this is the case, people will probably only realise it after the first serious mass terrorist attack.
I place a high probability on the NSA already doing things that pre-9/11 would have been considered gross violations of the fourth amendment.
As in, like, 99%? :-D That seems to be a “well, duh” observation.