My origin story began when I stumbled across Probability Theory: The Logic of Science. From it, I learned that I didn’t have to make up an ad hoc method for grappling with uncertainty each time I encountered a new data-analytic problem, and that the general rules encompass a great deal more than data analysis.
Cyan
P( whole argument is wrong ) = P( first subargument is wrong ) * P( second subargument is wrong | first subargument is wrong )
P( whole argument is wrong ) is not P( first subargument is wrong AND second subargument is wrong), so the above conditional probability decomposition is incorrect.
The logical operator between the two subarguments was ambiguous—I assumed the total argument would be something like a lemma and a theorem that depends on the lemma, not a disjunction of propositions.
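The point can be made concrete with a quick calculation. This is a minimal sketch with made-up illustrative probabilities: for a conjunctive argument (a lemma plus a theorem that depends on it), the whole argument fails if *either* subargument fails, so the failure probability is one minus the probability that both parts are right, not the product of the two failure probabilities.

```python
# Made-up numbers for illustration only.
p_first_wrong = 0.1                      # P(first subargument is wrong)
p_second_wrong_given_first_right = 0.2   # P(second wrong | first right)
p_second_wrong_given_first_wrong = 0.2   # P(second wrong | first wrong)

# Correct for a conjunctive argument: it survives only if both
# subarguments are right.
p_whole_wrong = 1 - (1 - p_first_wrong) * (1 - p_second_wrong_given_first_right)

# The quoted decomposition multiplies the two failure probabilities,
# which is P(first wrong AND second wrong) -- a different, smaller event.
p_quoted = p_first_wrong * p_second_wrong_given_first_wrong

print(round(p_whole_wrong, 2))  # 0.28
print(round(p_quoted, 2))       # 0.02
```

With these numbers the quoted formula understates the failure probability by more than a factor of ten, because it counts the argument as sound whenever at least one subargument happens to hold.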
The lie of Santa Claus may be a learning experience, but it isn’t actually a test per se—every child finds out the truth one way or another.
The prosecutor’s fallacy seems to be in play here—Pr(rationalist | lied to as a child) is not necessarily equal to Pr(lied to as a child | rationalist). (And that’s not even getting into the swamp of causality...)
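A toy Bayes calculation, with entirely made-up numbers, shows how far apart the two conditional probabilities can be when the conditioning event is common in both groups:

```python
# Made-up illustrative numbers: nearly everyone was told the Santa
# story as a child, whether or not they became a rationalist.
p_rationalist = 0.01              # P(rationalist) -- assumed base rate
p_lied_given_rationalist = 0.95   # P(lied to as a child | rationalist)
p_lied_given_not = 0.95           # P(lied to as a child | not rationalist)

# Total probability of having been lied to as a child.
p_lied = (p_lied_given_rationalist * p_rationalist
          + p_lied_given_not * (1 - p_rationalist))

# Bayes' theorem: P(rationalist | lied to as a child).
p_rationalist_given_lied = p_lied_given_rationalist * p_rationalist / p_lied

print(round(p_rationalist_given_lied, 3))  # 0.01
```

Even though Pr(lied to | rationalist) is 0.95 here, Pr(rationalist | lied to) stays at the 1% base rate, because being lied to about Santa carries essentially no evidence either way.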
“how often do you beat your wife?”
I never beat my wife. The notorious question you’re looking for is, “when did you stop beating your wife?”
If your reasoning works on edge cases, you can be more confident of reasoning correctly in less difficult cases.
Unfortunately, they can’t consider that you have held the position since 1992—all they can consider is that you claim to have done so. You could get your handful of friends to testify, I suppose...
Scientists also have highly unrepresentative personalities, high in openness to experience, and tend not to care about conservative values like respect for authority, group loyalty, and various taboos.
What evidence is this assertion based on?
Excellent. Thank you for the link.
The advice isn’t about your attitude towards your seatmate’s stupidity and irrationality. It’s directed at your rationalist buddy sitting on your other side—she’s being advised not to be annoyed at you if you choose to be tolerant.
I don’t disagree with the above post—I just wanted to make a pedantic distinction between claims and facts in evidence. (Also, my choice of the pronoun “they” rather than “we” was deliberate.)
Which rather begs the question of why people [in hunter-gatherer societies] move to subsistence farming.
Population pressure is probably the reason. Once population density reaches a certain threshold, hunter-gatherer lifestyles aren’t sustainable anymore, and cultivated land supports a lot more people per unit area.
a mixture of two parts Red Bull to one part Eliezer Yudkowsky creates a universal question solvent.
Eliezer Yudkowsky experiences all paths through configuration space because he only constructively interferes with himself.
Eliezer Yudkowsky’s mental states are not ontologically fundamental, but only because he chooses so of his own free will.
A charitable reading would be that mathematical expertise is necessary but not sufficient for rationality.
You’re right that the existence of Omega is information relevant to the existence of other omniscient beings, but in the least convenient world Omega tells you that it is not the Catholic version of God, and you still need to decide if that being exists. (And you really do have to decide that specific question, because eternal damnation is in the payoff matrix.)
Omniscience is almost a side issue.
...we instinctively shy away from using this strategy… often we’re using it in practice anyway.
Is this not contradictory?
The ideas are interesting, but I’m finding the use of italics and especially bold font somewhat distracting—I feel like I’m being harangued. (Not as bad as all caps, but still.)
I’m going to go with “Knowing About Biases Can Hurt People”, but only because I got the Mind Projection Fallacy straight from Jaynes.