Unless you just consider it a Mouse That Roared scenario in which no one dares commit a terrorist attack under threat of global annihilation.
(just read the book, it’s well worth it)
Tiiba—I believe Nate’s suggesting that part of the reason non-rationalists feel hostile towards rationalists could be that they fear the rationalists are not rationalists at all, but Clever Arguers.
That is, they fear that a superior intelligence is attempting to manipulate their beliefs through rationalization.
How easily could you distinguish between a Friendly AI trying to help you discover the truth and an unfriendly AI concocting a clever argument to lead you to the conclusion it (for whatever reason) wants you to reach, assuming both are vastly more intelligent than you?
I’m confused by your last comment—what use would the LHC be in a global economic crisis or nuclear war? I don’t suppose you mean something like “rig the LHC to activate if the market does not recover by date X according to measure Y, and then we will only be able to observe the scenario in which the market does recover” or something like that, do you?
Caledonian: I assume he means that, for all X, if X is true, he wishes to know X. This as opposed to “if the universe is made of puppies and unicorns, tell me about it; otherwise I don’t want to know.”
Jennifer—He doesn’t seriously want us to lock up our science libraries for good. He’s using fiction to make a point about how people react to scarcity and to mysterious information: