Irrationality game
0 and 1 are probabilities. (100%)
Least optimal truths are probably really scary and to be avoided at all costs. At the risk of helping everyone here generalize from fictional evidence, I will point out the similarity to the Cthaeh in The Wise Man’s Fear.
On the other hand, a reasonably okay falsehood to end up believing is something like “35682114754753135567 is prime”, which I don’t expect to affect my life at all if I suddenly start believing it. The optimal falsehood can’t possibly be worse than that. Furthermore, if you value not being deceived about important things then the optimality of the optimal falsehood should take that into account, making it more likely that the falsehood won’t be about anything important.
Edit: Would the following be a valid falsehood? “The following program is a really cool video game: "
The physical universe doesn’t need to “solve” protein folding in the sense of having a worst-case polynomial-time algorithm. It just needs to fold proteins. Many NP-complete problems are “mostly easy” with a few hard instances that rarely come up. (In fact, it’s hard to find an NP-complete problem for which random instances are hard: if we could do this, we would use it for cryptography.) It’s reasonable to suppose protein folding is like this.
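The "mostly easy" point is easy to see empirically: random 3-SAT formulas well below the satisfiability threshold (clause/variable ratio around 4.27) are almost always dispatched instantly even by a naive solver. A minimal sketch in Python, with the solver and the parameters purely illustrative:

```python
import random

def random_3sat(n, m, rng):
    """Generate m random 3-literal clauses over variables 1..n (literals are +/-v)."""
    return [frozenset(v if rng.random() < 0.5 else -v
                      for v in rng.sample(range(1, n + 1), 3))
            for _ in range(m)]

def dpll(clauses, assignment=frozenset()):
    """Naive DPLL: return a satisfying set of literals, or None if unsatisfiable."""
    simplified = []
    for clause in clauses:
        if clause & assignment:          # clause already satisfied
            continue
        clause = frozenset(lit for lit in clause if -lit not in assignment)
        if not clause:                   # every literal falsified
            return None
        simplified.append(clause)
    if not simplified:
        return assignment
    unit = next((c for c in simplified if len(c) == 1), None)
    if unit:                             # unit propagation: forced choice
        candidates = [next(iter(unit))]
    else:                                # branch on an arbitrary literal
        lit = next(iter(simplified[0]))
        candidates = [lit, -lit]
    for lit in candidates:
        result = dpll(simplified, assignment | {lit})
        if result is not None:
            return result
    return None

rng = random.Random(0)
n = 60
formula = random_3sat(n, 2 * n, rng)     # ratio 2.0, well below the hard region
model = dpll(formula)
assert model is not None and all(clause & model for clause in formula)
print("satisfiable, solved instantly")
```

Near the threshold ratio the same solver starts hitting exponential blowups, which is exactly the "few hard instances" regime.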
Of course, if this is the case, maybe the AI doesn’t care about the rare hard instances of protein folding, either.
It’s rather obnoxious of guys at your college to misspell “your” even while talking.
Once you describe “feminine” as “nurturing, compassionate, cooperative and socially-conscious” and define feminism as a movement to protect all things feminine, I think you have gone far beyond what most people mean by either word.
Then for the first time it dawned on him that classing all drowthers together made no more sense than having a word for all animals that can’t stand upright on two legs for more than a minute, or all animals with dry noses. What possible use could there be for such classifications? The word “drowther” didn’t say anything about people except that they were not born in a Westil Family. “Drowther” meant “not us,” and anything you said about drowthers beyond that was likely to be completely meaningless. They were not a “class” at all. They were just… people.
Orson Scott Card, The Lost Gate
An empirical statement, even a true one, can place undue emphasis on a particular fact. There are a hundred things in the same reference class that the father could have said; this particular one isn’t being picked out because it is more true than the others, but because it conforms to gender stereotypes.
To keep the limits of the log argument in mind: log 50k is 10.8, log (50k+70k) is 11.69, and log 1 billion is 20.7.
Comparing these numbers tells you pretty much nothing. First of all, taking log($50k) is not a valid operation; you should only ever take logs of a dimensionless quantity. The standard solution is to pick an arbitrary dollar value $X, and compare log($50k/$X), log($120k/$X), and log($10^9/$X). This is equivalent to comparing 10.8 + C, 11.69 + C, and 20.7 + C, where C is an arbitrary constant.
This shouldn’t be a surprise, because under the standard definition, utility functions are translation-invariant. They are only compared in cases such as “is U1 better than U2?” or “is U1 better than a 50⁄50 chance of U2 and U3?” The answer to this question doesn’t change if we add a constant to U1, U2, and U3.
In particular, it’s invalid to say “U1 is twice as good as U2”. For that matter, even if you don’t like utility functions, this is suspicious in general: what does it mean to say “I would be twice as happy if I had a million dollars”?
It would make sense to say, if your utility for money is logarithmic and you currently have $50k, that you’re indifferent between a 100% chance of an extra $70k and an 8.8% chance of an extra $10^9 -- that being the probability for which the expected utilities are the same. If you think logarithmic utilities are bad, this is the claim you should be refuting.
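That 8.8% figure can be checked in a couple of lines. Natural log is used below, but the base is irrelevant since it cancels out of the ratio:

```python
import math

wealth = 50_000          # current wealth
sure_gain = 70_000
big_gain = 10**9
u = math.log             # logarithmic utility (any base gives the same answer)

# Indifference condition:
#   u(wealth + sure_gain) = p * u(wealth + big_gain) + (1 - p) * u(wealth)
p = (u(wealth + sure_gain) - u(wealth)) / (u(wealth + big_gain) - u(wealth))
print(round(p, 3))  # → 0.088
```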
...62535796399618993967905496638003222348723967018485186439059104575627262464195387.
Boo-yah.
Edit: obviously this was not done by hand. I used Mathematica. Code:
TowerMod[base_, m_] := If[m == 1, 0, PowerMod[base, TowerMod[base, EulerPhi[m]], m]];
TowerMod[3, 10^80]
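For anyone without Mathematica, here is a Python sketch of the same recursion. One caveat worth flagging: reducing the exponent mod φ(m) without the usual "+ φ(m)" correction is only valid when the base is coprime to the modulus, which happens to hold here because 3 is coprime to 10^80 and to every iterated totient in the chain (they are all of the form 2^a·5^b).

```python
def euler_phi(m):
    """Euler's totient by trial division (fast enough for the smooth moduli here)."""
    result, n, p = 1, m, 2
    while p * p <= n:
        if n % p == 0:
            e = 0
            while n % p == 0:
                n //= p
                e += 1
            result *= (p - 1) * p ** (e - 1)
        p += 1
    if n > 1:
        result *= n - 1
    return result

def tower_mod(base, m):
    """base^(base^(base^...)) mod m, mirroring the Mathematica TowerMod."""
    if m == 1:
        return 0
    return pow(base, tower_mod(base, euler_phi(m)), m)

print(tower_mod(3, 10**80) % 1000)  # → 387, matching the trailing digits above
```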
Edit: this was all done to make up for my distress at only having an Erdős number of 3.
Dumbledore’s Army is a good example of canon Hermione taking the initiative; Harry just went along with the idea, if I recall correctly.
And how, exactly, would we discover that?
It turns out that what you’ve thought of as consciousness or self-awareness is a process in the shadow-particle world. The reason you find yourself talking about your experiences is that the real world contains particles that duplicate the interactions of your shadow particles. They do not actually interact with your thoughts, but because of the parallel structure maintained in the real world and the shadow-particle world, you don’t notice this. Think of the shadow particles as your soul, which corresponds exactly to the real-world particles in your brain, with the only difference being that the shadow particle interactions are the only ones you actually experience.
You conduct a particularly clever physics experiment that somehow manages to affect the shadow-particle world but not the real-particle world. Suddenly the shadow particles that make up your soul diverge from the real-world particles that make up your brain! This is a novel experience, but you find yourself unable to report it. It is the brain that determines your body’s actions, and for the first time in your life, this actually matters. The brain acts as though the experiment had done nothing.
Once your brain and soul diverge, the change never cancels out and you find yourself living a horrific existence. Because real-world particles do affect shadow particles, you still receive sensory input from your body. However, your brain is now thinking subtly different thoughts from your soul. To you, this feels as though something has hijacked your body, leaving you unable to cry out for help.
Of course, you never have, and never could have, found out about shadow particles. But you are a brilliant physicist, so your soul eventually figures out what happened. Your brain never does, of course; it lives in the real world, where your clever experiment had absolutely no effect, and was written off as a failure.
Voldemort is the last known Parselmouth, so it would be highly suspicious for Quirrell to also be one.
I am reminded of:
“Arf arf arf! Not because arf arf! But exactly because arf NOT arf!” GK Chesterton’s dog
In trying to find the above quote by wildcard searching on Google, I stumbled upon another quote of this nature by the dog’s owner himself: “I want to love my neighbour not because he is I, but precisely because he is not I.” There appears to be another one about science being bad not because it encourages doubt, but because it encourages credulity, but I’m unable to find the exact quote.
I very much like “Abortion is a medical procedure”. It’s actually a believable WAitW to make, and has the admirable feature that it completely ignores every aspect of abortion relevant to the debate.
I think the “free speech” examples don’t quite have the right form: the central question probably is whether or not pornography or flag burning is free speech, and the conclusion “Flag burning is free speech, therefore it should be legal” is valid if you accept the premise.
Which calls back to this bit in Chapter 51:
But then Professor Quirrell had also seen Harry taught Occlumency, he had taught Harry how to lose… if the Defense Professor wanted to make some use of Harry Potter, it was a use that required a strengthened Harry Potter, not a weakened one. That was what it meant to be used by a friend, that they would want the use to make you stronger instead of weaker.
The main use I put Fermi estimates to is fact-checking: when I see a statistic quoted, I would like to know if it is reasonable (especially if I suspect that it has been misquoted somehow).
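As a worked illustration of the kind of sanity check I mean, here is the classic piano-tuner estimate. Every input is a made-up round number; the point is that the product still lands within a factor of a few of reality:

```python
# Toy Fermi estimate: roughly how many piano tuners work in Chicago?
# All inputs below are rough guesses, not measured data.
population = 3e6                  # people in Chicago
people_per_household = 2.5
pianos_per_household = 1 / 20     # guess: 1 in 20 households owns a piano
tunings_per_piano_per_year = 1
tunings_per_tuner_per_year = 2 * 5 * 50   # 2 a day, 5 days a week, 50 weeks

pianos = population / people_per_household * pianos_per_household
tuners = pianos * tunings_per_piano_per_year / tunings_per_tuner_per_year
print(round(tuners))  # → 120
```

If a quoted statistic disagrees with a back-of-the-envelope product like this by two orders of magnitude, something has probably been misquoted.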
There is a correlation of 0.13 between non-responses and N.
Of course, there’s also a correlation of −0.13 between C and the random number generator.
We do get:
Then the Dark Lord tapped his finger upon Hermione Granger’s forehead, and said, in a voice so low Harry almost did not hear, “Requiescus.”
And later:
Hermione Granger slept on peacefully, whatever spell of repose Voldemort had cast on her being sufficient to the task.
So as much as I like your theory, I don’t buy it.
Shorter (but not necessarily more legible): ∀x∀y∀z: (R(x, 0, z)↔(x=z)) ∧ (R(x, Sy, z)↔R(Sx, y, z)).
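Unpacking the formula: R(x, y, z) holds exactly when x + y = z, with the second clause peeling one successor off y and pushing it onto x. A direct transcription into Python, purely for illustration:

```python
def R(x, y, z):
    """R(x, y, z) <-> x + y = z over the naturals, via the two clauses."""
    if y == 0:
        return x == z           # R(x, 0, z) <-> x = z
    return R(x + 1, y - 1, z)   # R(x, Sy, z) <-> R(Sx, y, z)

print(R(2, 3, 5), R(2, 3, 6))   # → True False
```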
Don’t feed the troll.