Yes. Have you considered any reasonable strategy to provoke a nuclear war, as a worst-case plan to stop AI progress?
Honestly, some of these arguments are laughable, like the ones about aliens and Everett branches. I don’t think that an AI would believe this.
Not public data, at least.
You are contemplating suicide. Seek help.
They are looking to China, perhaps, as a new patron.
We will see in Afghanistan whether banning contraception is one of those viable policies for raising fertility.
Admitted.
Biowarfare won’t kill everyone. Nuclear war won’t kill everyone. Anthropogenic global warming won’t kill everyone. At worst, these will destroy civilization, which, counterintuitively, makes Homo sapiens more likely to survive in the short term (a century). The same goes for minor natural disasters like volcanic eruptions.
Natural disasters like giant meteors, or perhaps a gamma-ray burst, are unlikely. The last time something like that happened was 66 million years ago. The odds of something similar happening in the next century are on the order of one in a million. That’s small enough to ignore for the purposes of this bet. The only way everyone dies is via AI.
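A rough back-of-the-envelope check, assuming the ~66-million-year gap mentioned above can be treated as the recurrence interval (an assumption, since the true base rate is uncertain):

\[
P(\text{such an event in the next century}) \approx \frac{100~\text{years}}{66{,}000{,}000~\text{years}} \approx 1.5 \times 10^{-6},
\]

which is roughly the one-in-a-million order of magnitude used here.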
Strongly agreed with this. The only non-infinitesimal probability of human extinction comes from an alien intelligence (like AI) actively pursuing the goal of human extinction.
It appears that Microsoft and Google are now in a high-stakes race. That must somehow be increasing the likelihood of catastrophe.
Ethnic violence is rarely one-sided.
What? When? Where?
The vast majority of chickens would not have existed but for the meat industry. Would you accept that argument?
So, did it happen due to atheism?
With God, 9/11 was permitted for Mohammed Atta, the Inquisition for medieval Catholics, and so on. With Him were permitted the brutal massacres by the Crusaders in the Middle East (against Muslims) and in France (against heretics). With Him were permitted the pogroms against Jews in medieval Europe.
You see the evils of WW2, but what caused the evils of the Thirty Years’ War, part of the European Wars of Religion? What caused the evils of the brutal Arab-Islamic conquests?
Hypothetically, if you knew an AI lab was very close to developing an AGI that was very likely to destroy humanity, and the only way to stop it was to carry out a bomb blast on the organization’s premises, would you not do it?
From a longtermist viewpoint, even the tactic of terrorism to prevent X-risk cannot and should not be ruled out. If the future of humanity is at stake, any law or deontological rule can be overridden.
I didn’t understand your purpose.
Ever heard of scorched earth?
Any data for India?
In my understanding, this is only possible by rote memorization.