I’d just note that if you believe in a deity, it actually isn’t particularly less rational to believe that it can be three and one at the same time. How would you prove the invisible, incorporeal, floating dragon who spits heatless fire isn’t simultaneously one and three?
Hmm. To clarify my meaning:
Since anyone who applies Occam’s Razor in the correct form will reject theism to start with, I strongly doubt that any such person has, in fact, wasted the time to actually work out whether the vast convolutions necessary to “rationalize” theism are ultimately made more or less simple by the introduction of a variant of multiple personality disorder into the theistic godhead.
So, I doubt anybody is actually in a position to say that unitarian theism is, in fact, simpler than trinitarian theism. A rational person would never spend the time and effort to work out which ridiculously convoluted theory is actually simpler, because he’s already discarded both of them, and there’s no point in debating which is more ridiculous. The irrational can’t be trusted to do the reasoning correctly, and thus the rational can’t leverage their results.
Therefore, it’s optimal when making the case for rationality to avoid comment on trinitarianism. A rationalist is unlikely to be able to demonstrate that it is actually inferior to unitarian theism, and he wouldn’t get any benefit from bolstering the relative case for unitarian theism anyway.
If we work from assumptions that make it likely for the universe to contain a “large number” of natural intelligences that go on to build UFAIs that assimilate on an interstellar or intergalactic level, then Earth would almost certainly have already been assimilated millions, even billions of years ago, and we accordingly would not be around to theorize.
After all, it takes only one species making a single intergalactic assimilating UFAI somewhere in the Virgo Supercluster more than 110 million years ago for the whole supercluster to have been assimilated by now, using no physics we don’t already know.
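A back-of-the-envelope sketch of that timing claim, assuming a diameter of roughly 110 million light-years for the Virgo Supercluster and a few illustrative probe speeds (both figures are my own assumed inputs, not anything established above):

```python
# Rough check of how long a replicating probe front would need to cross the
# Virgo Supercluster at various fractions of lightspeed. The ~110 million
# light-year diameter is an assumed round figure, not a measured input.

def crossing_time_years(diameter_ly: float, speed_fraction_of_c: float) -> float:
    """Years needed to traverse `diameter_ly` light-years at the given fraction of c."""
    return diameter_ly / speed_fraction_of_c

DIAMETER_LY = 110e6  # assumed diameter of the Virgo Supercluster, in light-years

for speed in (0.5, 0.9, 0.99):
    print(f"At {speed:.2f}c: about {crossing_time_years(DIAMETER_LY, speed):,.0f} years to cross")
```

Even at half of lightspeed the crossing takes only about 220 million years, so the qualitative conclusion doesn’t depend on the exact speed assumed.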
Hmm. This one I laughed at. Orange-Head, I didn’t.
I note that by 1933, the SA had already been committing violent crimes under Hitler’s command for over a decade. So the edited puzzle presented is fundamentally unrelated to the question, unless you think that a $40 fine and a ruined evening is an excessive punishment for a decade of violent crimes.
Upon reading this, I immediately went,
“Well, General Relativity includes solutions that have closed timelike curves, and I certainly am not in any position to rule out the possibility of communication by such. So I have no actual reason to rule out the possibility that which strategy I choose will, after I make my decision, be communicated to Omega in my past and then the boxes filled accordingly. So I better one-box in order to choose the closed timelike loop where Omega fills the box.”
I understand, looking at Wikipedia, that in Nozick’s formulation he simply declared that the box won’t be filled based on the actual decision. Fine. How would he go about proving that to someone actually faced with the scenario? Rational people do not risk a million dollars based on an unprovable statement by a philosopher. Same with claims that, for example, Omega didn’t set up the boxes so that two-boxing actually results in the annihilation of the contents of box B. Or that Omega doesn’t teleport the money in B somehow after the decider makes the decision to one-box. Those declarations may have a truth value of 1 for an observer outside the scenario, but unless they are empirically testable within the scenario, they cannot be valued as approximating 1 by the person making the decision.
Every “given” that the decision-maker can’t verify is a “given” that is not usable for making the decision. The whole argument for two-boxing depends on a boundary violation: it assumes that knowledge available to the reader, but unknowable to the character in the scenario, can somehow be used by that character to make a decision.
Your “ADDED” bit is nonsense.
The odds of intelligent, tool-using life developing could easily be so low that in the entire observable universe (all eighty billion galaxies of it) it will happen only once. This gives us at least ten orders of magnitude of difference in the possible probabilities, which is not even remotely “precisely balanced”.
Earth being “lucky” is meaningless in this context. The whole point of the anthropic principle is that anyone capable of considering the prior improbability of a particular planet giving origin to intelligent life is absolutely certain to trace his origin to a planet that has a posterior probability of 1 of giving origin to intelligent life. If the conditions of the universe are such that only 1 planet in the lifetime of every 1 million galaxies will develop intelligent tool-using life, then we can expect about 80,000 intelligent tool-using species in the observable universe to observe that rarity. In none of those 80,000 cases will any member of those species be able to correctly conclude that the prior improbability of intelligence arising on his planet somehow proves that there should be more than 80,000 planets with intelligent species, based on the posterior observation that his planet did indeed give origin to intelligent life.
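To make the arithmetic behind those figures explicit (the numbers are the illustrative ones above, not measurements):

```python
# The anthropic arithmetic spelled out, using the illustrative numbers above:
# 80 billion observable galaxies, 1 intelligent species per million galaxy
# lifetimes. These are hypothetical figures, not measured values.

observable_galaxies = 80e9
species_per_galaxy_lifetime = 1e-6

expected_species = observable_galaxies * species_per_galaxy_lifetime
print(f"Expected tool-using species in the observable universe: {expected_species:,.0f}")
# -> 80,000. Each of them, looking back, sees a posterior probability of 1
# that its own planet produced intelligence, however small the prior was.
```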
Finally, your second option doesn’t actually explain why Earth is not assimilated. If UFAI is highly improbable while natural tool-using intelligence is reasonably frequent, then Earth still should have been assimilated by unfriendly natural intelligence already. A hundred million years is more than enough time for a species to have successfully colonized the whole Milky Way, and the more such species that exist, the higher the probability that one actually would.
I would expect that “I’m on the wagon”, with any further questioning deflected by “I’d rather not talk about it”, would be enough explanation for any but the most impolite people, with “Medical reasons, I’d rather not go into detail” as the last-resort deflector.
Which is, in fact, absolutely true. You’re not drinking (you’re on the wagon), you don’t want to talk about it, and ultimately it’s the pharmacological effects of alcohol that are why you don’t want to drink and you don’t want to go into detail.
If a person jumps to the assumption that you’re a recovering alcoholic (and not, say, on a medication that reacts badly with alcohol), he might keep a slightly closer eye on you for a little while. But since you’re not drinking and not an alcoholic, you’re not going to show any signs of “relapsing”, and the vigilance will be relaxed.
Granted, it’s possible that someone might actually obsess over why you don’t drink, but my experience is that it’s highly unlikely. People just don’t care that much about trivia about other people, in general.
“(Hermione was starting to worry about what exactly the impressionable youths of the Chaos Legion were learning from Harry Potter.)”
Hah!
A major problem with many future existential threat charities is evaluating whether they actually are reducing existential risk, or whether they will actually increase it. The evidence of history, for example, indicates that even the best foreign policy experts are not very good at evaluating a policy’s secondary effects and perverse incentives. The result is that it is very hard to evaluate whether the net effect of spending money on what is supposed to be “reducing the risk of global thermonuclear war” will actually increase or decrease the risk of global thermonuclear war. The very same multipliers that yield massive utility on the assumption of intended consequences yield massive disutility on the assumption of perverse consequences.
On the other hand, it’s rather easy to evaluate the net value of a heavily-studied vaccination against an endemic disease, and you can be reasonably certain you’re not actually spreading the very disease you’re trying to fight.
If you assigned a 0.2% probability to a social intervention producing a specific result, I’d mostly be highly skeptical that you have enough sufficiently good data to put that precise a number on it. Once probabilities get small enough, they’re too small for the human brain to estimate accurately.
To be neutral in reality, yes, the probability must be in a very narrow range. To be neutral within the ability of a human brain to evaluate without systematic quantitative study, it just needs to be small enough that you can’t really tell if you’re in case 1 or case 2.
The orders of magnitude of scope can only matter if you know which way they fall. If a donation to the Nuclear Threat Initiative increases the risk of global nuclear war (say, by a reduction in arsenals deceiving a leader into believing he can make a successful first strike), the orders of magnitude of negative result make it a vastly worse choice than burning a hundred-dollar bill just to see the pretty colors.
The second.
Specifically, I am of the opinion that it is well-demonstrated that calculating adverse consequences of social policy is both sufficiently complicated and sufficiently subject to priming and biases that it is beyond human capacity at this time to accurately estimate whether the well-intentioned efforts of the Nuclear Threat Initiative are more likely to reduce or increase the risk of global thermonuclear war.
If I were forced to take a bet on the issue, I would set the odds at perfectly even. Not because I expect that a full and complete analysis by, say, Omega would come up with the probability being even, but because I have no ability to predict whether Omega would find that the Nuclear Threat Initiative reduces or increases the chance of global thermonuclear war.
Sure. Now, show me the detailed analysis of how you got those very precise numbers: your specific proposed intervention in existential risk having a 1.00% chance of working and a 0.80% chance of backfiring, instead of the opposite numbers of 0.80% working and 1.00% backfiring.
Because, see, if the odds are the second way, then the expected utility of your intervention is massively, massively negative. Existential risk is so important that while reducing it is better than many other things by much more than a factor of five, increasing it is much, much worse than many evils by much more than a factor of five.
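A minimal sketch of that sign flip, using the two small probabilities from above and placeholder utility magnitudes chosen purely for illustration:

```python
# Illustration of how swapping the two small probabilities flips the sign of
# the expected utility when the stakes are enormous. The utility magnitudes
# are arbitrary placeholders, not estimates of anything real.

def expected_utility(p_works: float, p_backfires: float,
                     u_works: float = 1e15, u_backfires: float = -1e15) -> float:
    """Expected utility of an intervention that either works or backfires."""
    return p_works * u_works + p_backfires * u_backfires

print(expected_utility(0.0100, 0.0080))  # +2e12: massively positive
print(expected_utility(0.0080, 0.0100))  # -2e12: massively negative
```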
The real universe has no “good intentions” exception, but there’s a cognitive bias which causes people to overestimate the likelihood that an act taken with good intentions will produce the intended good result and underestimate the risks of negative results. When uncorrected for in matters of existential risk, the result could be, because of the mathematics of scope, an unintentional atrocity.
Now, my back-of-the-envelope calculation is that SIAI doesn’t actually increase the risk of an unfriendly AI by actively trying to create friendly AI. There are so many people doing AI anyway, and the default result is so likely to be unfriendly, that SIAI is a decent choice of existential risk charity. If it succeeds, we have upside; if it creates an unfriendly AI, we were screwed anyway.
On the other hand, the Nuclear Threat Initiative is not merely fucking around with what has seemingly been shown to be a fairly stable system in a quest to achieve an endpoint that is itself unlikely to be stable (total nuclear disarmament; the official goal of the NPT), with all sorts of very-hard-to-calculate scenarios which mean it might on net increase the risk of nuclear annihilation of humanity. No, it also might be increasing the existential threat of, say, runaway greenhouse warming, by secondary discouraging effects on (for example) nuclear power production. There is nobody on the planet who understands human society and economics and power production and everything else involved well enough to say with any confidence whatsoever that a donation to the Nuclear Threat Initiative will have positive expected utility. All we have to go on is the good intentions of the NTI people, which is no more a guarantee than assurances from a local newspaper horoscope that “Your efforts will be rewarded.”
Yes, that’s what you do. And my analysis is that the best decision under the available uncertainty is that the probability of donating to NTI doing massive good is not distinguishable from the probability of it doing massive harm. The case for 1.0 vs. 0.8 is not any more convincing to me than the case for 0.8 vs. 1.0. Given a hundred questions on the level of whether the Nuclear Threat Initiative is a good thing to do or not, I would not expect my answers to have any more chance of being right than if I answered based entirely on the results of a fair coin. I would, as I said elsewhere in this discussion, take an even-money bet on either side of reality, in the fullness of time, proving the result to be either massive weal or massive woe. The massiveness on either side is meaningless because both sides cancel out. The expected utility of a donation to the NTI is, by my estimates, accordingly zero.
Furthermore, I am of the opinion that the question is, given the current state of human knowledge, such that no human expert could do better than a fair coin, any more than any Babylonian expert in astronomy could say whether Mars or Sirius was the larger, despite the massive actual difference in their size. Anyone opining on whether the NTI is a good or bad idea is, in my opinion, just as foolish as Ptolemy opining on whether the Indian Ocean was enclosed by land in the south. I don’t know, you don’t know, nobody on Earth knows enough to privilege any hypothesis about the value of NTI above any other.
When you don’t know enough to privilege any particular hypothesis over any other, the sheer scale of the possible results doesn’t magically create a reason to act.
Yes, if there are any observations that do constitute significant evidence, they are unlikely to balance out. But when a question is of major potential importance, people tend to engage emotionally, which often causes them to take perfectly meaningless noise and interpret it as evidence with significance.
This general cognitive bias toward overestimating the significance of evidence on important issues is a major component of the mind-killing nature of politics. Having misinterpreted noise as evidence, people find it harder to believe that others can honestly evaluate the balance of evidence on such an important issue differently, and find the hypothesis that their opponents are evil more and more plausible, leading to fanaticism.
And, of course, the results of political fanaticism are often disastrous, which means the stakes are high, which means, of course, I may well be being pushed by my emotional reaction to the stakes to overestimate the significance of the evidence that people tend to overestimate the significance of evidence.
There is almost certainly real evidence at some level; human beings (and thus human society) are fundamentally deterministic physical systems. I don’t know any method to distinguish the evidence from the noise in the case of, for example, the Nuclear Threat Initiative . . . except handing the problem to a friendly superhuman intelligence. (Which probably will use some method other than the NTI’s to ending the existential threat of global thermonuclear war anyway, rendering such a search for evidence moot.)
It doesn’t apply to the SIAI, because I can’t think of an SIAI high-negative failure mode that isn’t more likely to happen in the absence of the SIAI. The SIAI might make a paperclip maximizer or a sadist . . . but I expect anybody trying to make AIs without taking the explicit care SIAI is using is at least as likely to do so by accident, and I think eventual development of AI is near-certain in the short term (the next thousand years, which against billions of years of existence is certainly the short term). Donations to SIAI accordingly come with an increase in existential threat avoidance (however small and hard-to-estimate the probability), but not an increase in existential threat creation (AI is coming anyway).
(So why haven’t I donated to SIAI? Akrasia. Which isn’t a good thing, but being able to identify it as such in the SIAI donation case at least increases my confidence that my anti-NTI argument isn’t just a rationalization of akrasia in that case.)
Do you know of any cases in which philosophers suggested ways of thinking about things which then got used productively?
Sure. Roger Bacon. Sir Francis Bacon.
Both of whom, I note, greatly respected Aristotle while deriding the “Aristotelian” philosophers of their days, which suggests the problem in the Galileo and Copernicus cases was less defects of Aristotle and more defects of Scholasticism (during Roger’s day) and Second Scholasticism (during Francis’s day).
I was at school, and started a conversation with another kid about Santa, and he said, “Santa’s just your parents.” And that made sense to me, and I said, “Oh.” And I didn’t believe in Santa any more. I don’t remember any particular emotional reaction; it just seemed like an obvious answer I hadn’t previously noticed.