I do believe you run into an interesting confusion when talking about the blue tentacle example. When you’re asked to answer why, you’re not prompted for scenarios that were likely beforehand; you’re prompted for argmax_i P(Ai|T) (where T is the situation of waking up that day with a blue tentacle, and the argmax chooses among the i’s), which equals, by Bayes’ theorem, argmax_i P(T|Ai)*P(Ai)/P(T) = argmax_i P(T|Ai)*P(Ai) = argmax_i P(Ai&T). And this—despite the fact that ALL the P(Ai&T) are very small, and P(T), being their sum, is also too small to worry about in everyday life—is a legitimate mathematical task in itself. Asking “who is taller, John or Mary?” does not entail that either John or Mary is tall—in fact, they could both be dwarves. The same logic applies here.
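Spelled out in display form (the same chain as above; P(T) drops out because it does not depend on i):

```latex
\operatorname*{arg\,max}_i P(A_i \mid T)
  = \operatorname*{arg\,max}_i \frac{P(T \mid A_i)\,P(A_i)}{P(T)}
  = \operatorname*{arg\,max}_i P(T \mid A_i)\,P(A_i)
  = \operatorname*{arg\,max}_i P(A_i \wedge T).
```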
“Anyone who really believed in a vague deity would have recognized their strange inhuman creator when Darwin said ‘Aha!’”—too strong a statement. Inability to recognize something you believe in is not improbable. Suppose I believe that Harry Potter mages infiltrate Earth, and suppose they really do. If I see a guy waving a wand, should I actually believe it’s a real Harry Potter mage and not someone pretending to be one? No: even if Harry Potter mages are real, I am much more likely to see a Muggle pretender than a real breaker of the Statute of Secrecy. For similar reasons, believers may wait for a “cause beyond the cause”, judging (wrongly) that evolution does not suffice.
Well, bodiless does not necessarily mean abstract; here it’s more about omnipresence and the lack of defined boundaries. You can’t run a stake through evolution’s heart or poke out its eye, since every cell uses it.
“In general” does not mean “always”; it means “by default”, and those are not the same thing. Rectangles, in general, do not have equal sides meeting at a shared vertex—except for squares, which do. However, there must be reasons for exempting something from a default—and a random false belief is unlikely to find such reasons (not to mention that going from the belief to finding the reasons is backwards).
“If you once tell a lie...” should, of course, read “If you once tell a lie then, until you give it up...”.
Attributing Obi-Wan’s highly emotional statement, made as the Order was being destroyed, to all the Jedi is a no-go. They did have problems with their actions, but more of the kind of being, so to speak, “too careful”.
Are the gamblers actually told that they will play several times? Because I, seeing both options side by side, still prefer 29/36 over 7/36 by the easy reasoning “better to win $2 than nothing”, and, if I’m only playing once, I claim it *is* a good strategy (otherwise you should play a lottery offering a billion at odds of one in a million instead of working a one-time job for a guaranteed $500—and again, I claim it is sensible to choose the job).
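For what it’s worth, a minimal sketch (mine, not from the post) of how the one-shot case can diverge from the expected-value answer. The $9 prize for the 7/36 gamble and the $1000 wealth level are assumptions, since the comment names neither:

```python
import math

def ev(p, prize):
    # Expected value: the right yardstick for many repeated plays.
    return p * prize

def eu_log(p, prize, wealth):
    # Log utility of wealth: a standard stand-in for one-shot risk aversion.
    return p * math.log(wealth + prize) + (1 - p) * math.log(wealth)

# The dice gambles: on expected value alone, repeated play favors 7/36.
print(ev(29/36, 2), ev(7/36, 9))   # ~1.61 vs 1.75

# The lottery versus the guaranteed one-time job: expected value favors
# the lottery, but a risk-averse one-shot utility favors the job.
wealth = 1000.0                      # assumed current wealth
print(ev(1e-6, 1e9), 500)            # 1000.0 vs 500: EV says lottery
print(eu_log(1e-6, 1e9, wealth) > math.log(wealth + 500))  # False: take the job
```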
Then how come we see utilitarian libertarianism, as in *The Machinery of Freedom*?
Also, the general claim that “institutions are completely unnecessary” seems suspicious. The capability of communities to provide help does not ensure that any help will actually be provided, whereas an institution, given a certain amount of outside oversight, is unlikely to outright ignore the people in its care.
“You’re also denying Martine Rothblatt’s founding of United Therapeutics to seek a cure for her daughter’s pulmonary hypertension”—if I were defending a mind/science division, I would say you are conflating the possibilities of science with the motivations for doing science. She may have been motivated by her parental (ahem, maternal… funny how “parental” and “paternal” consist of the same letters…) love to seek Science’s help, but that tells us no more about the possibilities of science than seeking a shaman’s help would tell us about the possibilities of shaman rituals.
But both the rule “have no seemingly random exceptions” and the passage in the Virtues are special cases of Occam’s razor. So the argument does become circular (or, at best, is pushed one step back to the “low-entropy universe” and becomes circular there).
Do write the PhD thesis and get the PhD, the lack of which you complain about a bit too often)))
On a more serious note: the same thing Musashi says is all too often said about chess (“always think about how to deliver checkmate”). And in both cases it seems to be a heuristic at best. We do not have the chess programming that the best chess-playing computers have (nor the fencing equivalent). And we do seem to think about the next move better than about the moves after it. So it seems plausible that sometimes we should forget the enemy’s king/body and defend our own, for we, being imperfect, would otherwise lose ours well before getting to the enemy.
Please restore the apostrophes…
“our probability of surviving”—probably extrapolated from other, similar objects going through black holes. The Enterprise, thanks to the laws of fiction, beats the odds, but that may only mean that some other ships get destroyed even somewhat more frequently, while the Enterprise gets “five points… for sheer dumb luck!”
The vast majority of people are both incapable of and uninterested in creating new technology OR doing science (and their incapability reinforces their lack of interest). So, if the nerds move to nerdtopia, taking some already-deadly technologies with them, the remaining world will never create anything AI-like… well, provided that newborns with nerds’ skills are taken away early. People are generally stupid—not only in the sense of exhibiting the specific biases Eliezer discusses, but also in the sense that the majority lack both curiosity and a larger-than-three working memory (or a larger-than-120 IQ, whichever you prefer), and a big group lacks even larger-than-two/larger-than-100. Having intelligence—an IQ above roughly 125, or any isomorphic measure—is so rare that from the standard p<0.05 view it is nonexistent (the Bell curve: mean 100, SD 15).
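As a quick check of that rarity figure (my own illustration; mean 100 and SD 15 are the standard IQ norming):

```python
# P(IQ > 125) under the usual normal model of IQ scores.
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)
print(1 - iq.cdf(125))  # ~0.048, i.e. just under the conventional 0.05
```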
People also tend to believe that some changes, especially to their intelligence, somehow “destroy their integrity”. So they may actually believe that, if you raise that girl’s IQ, some human being will live on, but it will not be HER in some (admittedly incomprehensible to me) sense. So their answer to “either it is better to have IQ X than IQ Y, or not” is “No, it is better to have the IQ (or whatever relevant measure is more constant with age) you start with—so it IS good to heal the boy and it IS bad to enhance the girl”.
(Yes, I am playing advocatus diaboli and do not endorse such a position myself—pace “Present your own perspective”: it is my perspective on “what a clever skeptic would say”, not my perspective on “what should be said”.)
If a Bayesian derivation is a frequentist derivation, it does not follow that any frequentist derivation is equivalent to a Bayesian one. Mr. Yudkowsky claims, more or less, that the Bayesian derivation is equivalent to the ideal frequentist derivation.
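A toy illustration of that claim (my own sketch, not the post’s argument): under a uniform Beta(1,1) prior, the Bayesian posterior mean for a coin’s bias approaches the plain relative frequency as data accumulate, which is one concrete sense in which the Bayesian answer matches an idealized long-run frequentist one.

```python
import random

random.seed(0)
true_p = 0.7   # made-up bias of the coin
heads = 0
for n in range(1, 10001):
    heads += random.random() < true_p
    if n in (10, 100, 1000, 10000):
        frequentist = heads / n            # relative frequency
        bayesian = (heads + 1) / (n + 2)   # posterior mean under Beta(1,1)
        print(n, round(frequentist, 4), round(bayesian, 4))
```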
A time to turn off the advocatus diaboli and really *be* a clever (hopefully) skeptic… One could say that there is still a difference between probabilities so high/low that you can write them as ~1/~0 and probable-but-not-THAT-probable situations such as 98:2 (there is the obvious question of where the threshold lies, but please bear with me here). You suggest roughly the same course of action for, say, a scientific theory of the first type and of the second, while I believe that the first type is in practice to be equated with 1/0 and thus called (quasi-)deterministic, whereas the second is probably wrong unless you can find a (quasi-)deterministic explanation for the rogue 2 (and thus change the theory)—so in that sense there is no longer a place for intrinsically non-deterministic theories like quantum theory.
One may respond that some intrinsically non-deterministic theories do *work* (whether we are speaking of statistics or of quantum theory)—but it is a difficult question whether that means they are true or merely close to an (unknown) deterministic theory. Do we have actual reasons *besides* “working” to believe the world is non-(quasi-)deterministic? Threshold uncertainty may be one—but then the unknown deterministic theory might derive the threshold itself, so that the uncertainty is only a property of our faulty map.
The answer to this seems to be the same as for the sound example, and for most philosophical debates in general:
1) Different categorization patterns—or, simply put, different meanings of a word. In this situation, of two words, even: people can disagree on what “will” is (in the context of “free”) and on what “free” is (in the context of “will”; let us assume a Frege-Heimian world where, if you know the two nodes, you always know their combination, so that we can ignore the “context” addenda).
2) Politicization of the question. In a world where “free is good”, having free will is good. In a world where “determinism is good” and “free will is incompatible with determinism”, having free will is bad. And people want to be good. Also, we may want to agree with someone (say, Gandhi) and disagree with someone else (say, Fomenko) no matter what they say. Hence affective death spirals, one-sided politics, and whatnot.
Such readers may believe that they lack some skills needed in this world (scientific or otherwise) and actually dream of being more skillful—but imagining yourself with a wand, flinging magic around (which you don’t have to understand in order to use), is easier than imagining yourself smarter, or more socialized, or, you know, anything that would actually help in the real world.
Or wait—not just anything. Imagine a late-medieval world where some guy desperately wants to become a great warrior but is too weak to wield a sword or a longbow… and then you give him a crossbow or a firearm. The effect is similar: you give him a way to achieve his dream without, you know, exhausting exercise and all that. The same applies to exercising the mind—it is difficult (and its practitioners know it, which is why SF&F is actually more popular among smart people).
Hymns to the non-existence of God may make sense if one really believes that a world where a God existed would be worse than a world without one. Needless to say, that does not seem to be what motivates their usual authors.
I would expect that libertarians’ utility comes unexpectedly close to what Mr. Yudkowsky calls morality.