(There seems to be a sort of assumption ’round these parts that high status is better than low status and that dominance is better than submission. I think that this should not be unquestioningly assumed. There are many goals that can usually be more easily achieved by someone in a lower status position, e.g. discovering truth or learning from people. There are many exceptions, but high status tends to make people prideful, petty, unreflective, stupid, unwilling to change, unwilling to compromise, incautious, overconfident, &c. The benefits of material wealth, better mating options, better ally options, &c., are not obviously worth the costs; sometimes there are ways to get those things without risk. One would be wise to worry about slippery slopes and goal distortion.)
Will_Newsome
This is an Irrationality Game comment; do not be too alarmed by its seemingly preposterous nature.
We are living in a simulation (some agent’s (agents’) computation). Almost certain. >99.5%.
(ETA: For those brave souls who reason in terms of measure, I mean that a non-negligible fraction of my measure is in a simulation. For those brave souls who reason in terms of decision theoretic significantness, screw you, you’re ruining my fun and you know what I mean.)
Sorry for being unclear. I meant that any subculture that is allergic to parody of itself is just inviting less fair and less jocular criticism. Eliezer has already greatly damaged LessWrong’s reputation by making it seem cultish. Commenting that people are sensitive to appearances of cultishness, and that it is therefore good for parody of that alleged cultishness to be banned, is just sowing the wind. I think that there are many interesting and independent intellectuals on LessWrong and I don’t want them to be tarred as discreditable cultists. And that’s why I would like it to be known that LessWrong is capable of self-parody and isn’t going to pathetically grasp at credibility it never had in the first place.
For the Greek philosophers, Greek was the language of reason. Aristotle’s list of categories is squarely based on the categories of Greek grammar. This did not explicitly entail a claim that the Greek language was primary: it was simply a case of the identification of thought with its natural vehicle. Logos was thought, and Logos was speech. About the speech of barbarians little was known; hence, little was known about what it would be like to think in the language of barbarians. Although the Greeks were willing to admit that the Egyptians, for example, possessed a rich and venerable store of wisdom, they only knew this because someone had explained it to them in Greek.
— Umberto Eco, The Search for the Perfect Language
No. Unreflective happy death spirals get people killed. Shame on all of you for being bad people.
Addressed to general audience, not katydee specifically.
Michael Vassar, 5th comment down: “It’s important to pay attention to what people’s words actually say. …” I am going to attempt an exercise in trying that out and see what happens. I’ll also poorly echo Yvain.
Eliezer and others were looking for exercises to aid aspiring rationalists in developing generally applicable social/conversational skills/attributes. For aspiring rationalist males, the common perception is that PUA has demonstrated large positive effects. Eliezer or Amy suggested that succeeding at “getting a handsome guy to buy you a drink without promising him anything else” would build skills for female rationalists. Eliezer then relays Amy’s suggestion that PUA and “getting a handsome guy to buy you a drink” might be of equivalent difficulty. Note that “equivalent” was used as an adjective, not a noun. This is the only flavor of equivalence suggested by Eliezer or Amy. It is a bad comparison. Female attractiveness is harder to substantially improve than male attractiveness, and “handsome” is vague.
The thread goes downhill immediately. Manfred writes: “The female possible-equivalent kind of skeeves me out, and doesn’t seem to exercise the same skills.” What does ‘equivalent’ mean here? It does not appear to be a reference to Eliezer/Amy’s suggestion of equivalence of difficulty, and Manfred notes that the exercises utilize somewhat different skills. The most likely explanation is that Manfred misinterpreted Eliezer’s unfortunate use of “equivalent” as a much stronger claim. He thus probably-accidentally-automatically designated the female rationalist exercise the “possible-equivalent” of the male exercise without finding a concrete and exclusively interesting relationship between the two clusters in conceptspace, simply because he felt no need to do what he perceived as flatly disagreeing with Eliezer’s claim of PUA-equivalence. His skepticism that “getting a handsome guy to buy you a drink” is PUA-equivalent is apparent. No one would have independently thought that up.
Also worth noting is Manfred’s use of “skeeves”. He later clarifies that he is not sure which aspects of the man-buying-drink-for-woman scenario are off-putting, but “Maybe it’s the injection of money/commodification, or maybe it’s just that I dislike many people in that culture so the badness gets associated.” Additionally it should be noted that the rest of Manfred’s comment was okay, and the subtly introduced and subtly misleading ‘equivalence’ blunder could theoretically have been patched.
Then, Alicorn, quoting Manfred’s “The female possible-equivalent kind of skeeves me out”, replies “There is something to notice here.” katydee explained Alicorn’s comment: “Alicorn is suggesting that, just as the female equivalent of PUA skeeves Manfred (presumably a male) out, the traditional version of PUA skeeves her (and presumably other females) out.” Alicorn essentially confirmed this interpretation. The comment, interpretation, and confirmation picked up 14 karma as of my writing this. I really like Alicorn’s posts and think she is awesome. And even here, if you squint your brain a little, her comment seems reasonable. But if you actually read the words, it’s insane troll logic.
Manfred is not skeeved by the female equivalent of PUA. No one ever talked about a female equivalent of PUA. Manfred incorrectly called something a “female possible-equivalent” due to what really looks like a combination of a misinterpretation, an accident of doxastic language, and social norms.
One could attempt careful, complex arguments about social psychology intending to show that “getting a handsome guy to buy you a drink without promising him anything” and PUA use cognitive machinery or social capital in similar fashions or have other concrete similarities, but it wouldn’t work, and it would be the result of rationalizing a misleading artifact of LW social epistemology to make a demonstrably false point about the preferences of females generally. (If not about youngish American females generally, then a speculative claim about whatever reference class Alicorn thinks she is in.)
For the sake of argument, even given a powerful connection in the territory between “PUA-related behaviors” and “beta-sapping-related behaviors including gold digging” (which, sufficiently generalized, cover most humans), enough to make them “equivalent”, it is still not clear that Manfred is averse to either, as long as they are not happening in crass bars with crass beers. PUA skills and wealth-absorbing/gold-digging/beta-exploiting skills can be used anywhere, anytime. And again if there was such a connection, it would still be incorrect to make an argument for skeeving symmetry between PUA (a large class of general purpose social interaction skills/attributes used in many ways towards many ends) and a single bar skill that is known for being particularly easy to use unvirtuously.
If I am wrong to think that Manfred’s use of “female possible-equivalent” was unintentional, he still used the word incorrectly, and thus the symmetry arguments still do not apply. And if Alicorn had picked up on clues I had not, and noticed that Manfred had used “female possible-equivalent” intentionally, and furthermore decided to reply to the comment implicitly only addressing those with conceptual schemas sufficiently like Manfred’s seemingly accidental one (i.e. no one, I think), then I admit my criticism does not apply as strongly. I find such a scenario to be very unlikely.
Summary: katydee put 95% on a total breakdown of sanity within 4 sentences (including Eliezer’s 2-word sentence) written by 3 people, 2 of whom are number 1 and number 3 on Less Wrong’s Top Contributors list, and ended up guessing correctly, as if the reasoning was obvious. And I find it kinda funny...
I think Less Wrong is a pretty cool guy. eh writes Hary Potter fanfic and doesnt afraid of acausal blackmails.
I often tried plays that looked recklessly daring, maybe even silly. But I never tried anything foolish when a game was at stake, only when we were far ahead or far behind. I did it to study how the other team reacted, filing away in my mind any observations for future use.
— Ty Cobb
Some people can perform surgery to save kittens. Eliezer Yudkowsky can perform counterfactual surgery to save kittens before they’re even in danger.
You’re using a Roko algorithm! Well, you might be, anyway. Specifically, trying to resolve troubling internal tension by drumming up social drama in the hopes that some decisive external event will knock you into stability. However, you don’t seem to be going out of your way to appear discreditable like he did, maybe because you don’t yet identify with the “x-rationalist” memeplex to as great an extent as Roko did.
Similarly, the message you might be trying to send, once made explicit and reflected upon for a bit, might be something like the following:
“A large number of people on this site (Less Wrong) could be held in contempt by a reasonably objective outside observer, e.g. a semi-prestigious academic or a smart Democratic senator or an exemplary member of a less contemptible fraction of Less Wrong. I would like to point this out because it is a very bad sign both epistemically and pragmatically. I want to make sure that people keep this in mind instead of shrugging it off or letting it become an ugh field. However the social pragmatics of the community have made it such that I cannot directly talk about the most representative plausibly-contemptible local beliefs, and furthermore I am discouraged from even talking about how it is plausibly-contemptible that I can’t even talk about the plausibly-contemptible beliefs. I am thus forced to make what appear to be snide side-remarks about the absurdity of the situation in order to have a chance at refocusing the attention of the plausibly-contemptible fraction of Less Wrong—of which I am worried I might be a member—on this obviously important and distractingly disturbing meta-level epistemic question/conflict.
(Potentially ascending the reflective meta-level ladder to the moral high-ground:) Unfortunately I still cannot go meta here by pointing out the absurdity of my only being able to communicate distress with what appear to be snide side-remarks, because Less Wrong members—like all humans—only really respond to the tone of sentences and what that tone implies about the moral virtue of the writer. That is, they don’t respond to the reasonableness of the actual sentences, and definitely not to the reasonableness of the cognitive algorithms that would make the strategy of writing such sentences feel appealing. And they definitely definitely definitely do not reason about the complex social pragmatics that would cause those cognitive algorithms to deem that strategy a reasonable one, or that would differentially cause a mind or mind-mode or mind-parts-coalition to differentially emphasize those cognitive algorithms as a reasonable adaptation to the local environment. And they definitely don’t reflect on any of that, because there’s no affordance. Sometimes they will somewhat usefully (often uselessly) taboo a word, or at the very most they’ll dissolve it; but never will a sentence be deconstructed such that it can be understood and thoughtfully analyzed, nor will a sentence-generator. Thus I am left with no options and will only become more distressed over time, without any tools to point out how insane everyone in the world is being, and am forced to use low-variance small-negative-reward strategies in the hopes that somehow they will catalyze something.”
Maybe I’m partially projecting. I’m pretty sure I’m ranting at least.
Edit: Here’s a simplified concrete example of this (insightfully reported by Yvain so you know you want to click the link, it’s a comment with 74 karma, for seriously), but it’s everywhere, implicitly, constantly, without any reflection or any sense that something is terrifyingly disgustingly insanely wrongly completely barking mad. Or a subtler example from Less Wrong.
Will: “Damn, how do people make things like that?”
Nick: “I’m guessing a very high visuo-spatial intelligence.”
Will: “I bet a superintelligence could create a whole universe like that.”
Both, after thinking for a few seconds: “Gyahh!!”
Will: “Everything would have an infinite amount of interpretations, all of them equally correct. Everyone’s beliefs would be right and everyone would be happy! Yay!”
Nick: “But whether or not everyone’s interpretations were equally correct would depend on one’s interpretation.”
Will: “Doesn’t this look like the world we live in?”
This argument for postmodernism brought to you by the Singularity Institute Visiting Fellows Program.
Here is a hand. How do I know? Look closely, asshole, it’s clearly a hand.
Look, if you really insist on doubting that here is a hand, or anything else, there’s nothing really I can say to convince you otherwise. What the tits would the world even look like if this weren’t a hand? What sort of system is your doubt endorsing? After all, you can’t just say “It’s not true that here is a hand.” You have to be endorsing some other picture of the world. [...]
So it turns out when I say things like “Here is a hand” I’m not really making a claim about the world, I’m laying down some rules for discussion. If you doubt there’s a hand here, then fuck you and that’s all there is to it. We can’t really talk about anything now, because we can’t even agree on something as simple as a goddamn hand. When we all agree here is a hand, then we can go about discussing our world in meaningful ways. Skepticism just undermines a foundation and replaces it with nothing; it’s paralyzing. The grounds for such radical skepticism don’t exist; it presupposes and relies on the very certainty it tries to undermine.
This is more practical than you realize. There are people who actually believe that the world is only 6,000 years old. What the fuck, right? But if you’ve ever talked with one of them, you know that they’re fucking impossible to have what you consider a ‘reasonable’ discussion with. It’s not like they don’t have answers for everything, it’s just that those answers don’t make any fucking sense to you. It’s the sort of gibberish that makes you want to scream. The problem is that you don’t even play the game by the same goddamn rules. You’re both certain of your positions, because those positions are logically derived from the worldview each of you endorses as your starting point, and you both look at each other’s foundations and say, “Seriously, what the fuck are you talking about?” You don’t even know how you would go about convincing them that you’re right and they’re wrong; you don’t even agree on a method by which to do that.
If you flew to some part of the world where they’d never heard of an airplane or even a bird, how the fuck could you convince them you flew? They don’t even know what that means. They would have all sorts of questions, and would consider your answers nonsensical or magical. When a non-believer is told that God exists, he reacts in the same way; also, a believer when he is told there is no God.
So everything we believe about the world is built on some sort of foundation. Sure, that foundation can change, but there is always something there at the base, and it is that base that enables us to talk about the world. Not everyone has the same base you do, and that has to be okay. Just know that some of your beliefs are just as unsupported as everyone else’s. It’s just the way it is, bro.
— Philosophy Bro summarizing Wittgenstein’s “On Certainty”. (I’m not sure the summary is very true to the original but it’s interesting nonetheless.)
Condensed Less Wrong Wisdom: Yudkowsky Edition, Part I
Mysterious Answers to Mysterious Questions
Ask “What experiences do I anticipate?”, not “What statements do I believe?”
Your strength as a rationalist is your ability to be more confused by fiction than by reality. If you are equally good at explaining any outcome, you have zero knowledge.
The strength of a model is not what it can explain, but what it can’t, for only prohibitions constrain anticipation.
There’s nothing wrong with focusing your mind, narrowing your categories, excluding possibilities, and sharpening your propositions.
For every expectation of evidence, there is an equal and opposite expectation of counterevidence. (See the numerical sketch at the end of this list.)
You can only ever seek evidence to test a theory, not to confirm it.
Write down your predictions in advance.
Hindsight bias devalues science: we need to make a conscious effort to be shocked enough.
Be consciously aware of the difference between an explanation and a password.
Fake explanations don’t feel fake. That’s what makes them dangerous.
What distinguishes a semantic stopsign is failure to consider the obvious next question.
Ignorance exists in the map, not in the territory. If I am ignorant about a phenomenon, that is a fact about my own state of mind, not a fact about the phenomenon itself. A phenomenon can seem mysterious to some particular person. There are no phenomena which are mysterious of themselves. To worship a phenomenon because it seems so wonderfully mysterious, is to worship your own ignorance.
What you must avoid is skipping over the mysterious part; you must linger at the mystery to confront it directly.
You have to feel which parts of your map are still blank, and more importantly, pay attention to that feeling.
When you run into something you don’t understand, say “magic”, and leave yourself a placeholder, a reminder of work you will have to do later, and one that prevents an illusion of understanding.
Much of a rationalist’s skill is below the level of words.
Avoid positive bias: look for negative examples.
If a hypothesis does not today have a favorable likelihood ratio over “I don’t know”, it raises the question of why you today believe anything more complicated than “I don’t know”.
If you don’t know, and you guess, you’ll end up being wrong.
You need one whole hell of a lot of rationality before it does anything but lead you into new and interesting mistakes.
Never forget that there are many more ways to worship something than lighting candles around an altar.
Why should your curiosity be diminished because someone else, not you, knows how the light bulb works? Is this not spite? It’s not enough for you to know; other people must also be ignorant, or you won’t be happy?
The world around you is full of puzzles. Prioritize, if you must. But do not complain that cruel Science has emptied the world of mystery.
Inverted stupidity looks like chaos. Something hard to handle, hard to grasp, hard to guess, something you can’t do anything with.
Saying “I’m ignorant” doesn’t make you knowledgeable. But it is, at least, a different path than saying “it’s too chaotic”.
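A minimal numerical sketch, in Python, of the conservation-of-expected-evidence line above; the prior and likelihoods are toy numbers of my own choosing, not anything from the original posts:

```python
# Conservation of expected evidence: before looking, your expected
# posterior must equal your prior. If evidence E would shift you up,
# its absence ~E must shift you down by a compensating amount.

p_h = 0.3              # toy prior P(H)
p_e_given_h = 0.8      # toy likelihood P(E | H)
p_e_given_not_h = 0.1  # toy likelihood P(E | ~H)

# The likelihood ratio P(E|H) / P(E|~H) = 8 is what makes E evidence at all.
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Posteriors by Bayes' theorem for each possible observation.
p_h_given_e = p_e_given_h * p_h / p_e
p_h_given_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)

# Weighting each posterior by how likely you are to see it recovers
# the prior exactly: you cannot expect to talk yourself into anything.
expected = p_e * p_h_given_e + (1 - p_e) * p_h_given_not_e

print(f"P(H|E)  = {p_h_given_e:.3f}  (shifted up)")
print(f"P(H|~E) = {p_h_given_not_e:.3f}  (shifted down)")
print(f"expected posterior = {expected:.3f} = prior = {p_h}")
```

This is also why a hypothesis with no favorable likelihood ratio over “I don’t know” gives you nothing to update on.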
A Human’s Guide to Words
http://lesswrong.com/lw/od/37_ways_that_words_can_be_wrong/
If you’re trying to go anywhere, or even just trying to survive, you had better start paying attention to the three or six dozen optimality criteria that control how you use words, definitions, categories, classes, boundaries, labels, and concepts.
Everything you do in the mind has an effect, and your brain races ahead unconsciously without your supervision.
Logic stays true, wherever you may go,
So logic never tells you where you live.
Before you can question your intuitions, you have to realize that what your mind’s eye is looking at is an intuition—some cognitive algorithm, as seen from the inside—rather than a direct perception of the Way Things Really Are.
Definitions don’t need words.
Words do not have intrinsic definitions.
Playing the game of Taboo—being able to describe without using the standard pointer/label/handle—is one of the fundamental rationalist capacities.
Where you see a single confusing thing, with protean and self-contradictory attributes, it is a good guess that your map is cramming too much into one point—you need to pry it apart and allocate some new buckets.
Categorizing has consequences.
People insist that “X, by definition, is a Y!” on those occasions when they’re trying to sneak in a connotation of Y that isn’t directly in the definition, and X doesn’t look all that much like other members of the Y cluster.
Just because there’s a word “art” doesn’t mean that it has a meaning, floating out there in the void, which you can discover by finding the right definition.
The way to carve reality at its joints, is to draw your boundaries around concentrations of unusually high probability density in Thingspace.
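And a minimal sketch of the Thingspace line above, assuming a made-up one-dimensional attribute with two clusters; the data and the histogram method are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

# A made-up 1-D "thingspace": observations drawn from two concentrations
# (two natural categories whose members cluster on some attribute).
cluster_a = rng.normal(loc=2.0, scale=0.5, size=500)
cluster_b = rng.normal(loc=6.0, scale=0.5, size=500)
things = np.concatenate([cluster_a, cluster_b])

# Estimate the probability density with a histogram...
density, edges = np.histogram(things, bins=60, density=True)
centers = (edges[:-1] + edges[1:]) / 2

# ...and draw the category boundary at the density minimum between the
# two peaks: the joint where reality is easiest to carve.
between = (centers > 2.0) & (centers < 6.0)
boundary = centers[between][np.argmin(density[between])]

print(f"Category boundary drawn near x = {boundary:.2f}")  # in the gap between the peaks
```

The boundary lands in the low-density gap between the two concentrations, not at any point stipulated in advance by a definition.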
Reductionism
Reality is laced together a lot more tightly than humans might like to believe.
Since the beginning not one unusual thing has ever happened.
Many philosophers share a dangerous instinct: If you give them a question, they try to answer it.
If there is any lingering feeling of a remaining unanswered question, or of having been fast-talked into something, then this is a sign that you have not dissolved the question.
If you keep asking questions, you’ll get to your destination eventually. If you decide too early that you’ve found an answer, you won’t.
When you can lay out the cognitive algorithm in sufficient detail that you can walk through the thought process, step by step, and describe how each intuitive perception arises—decompose the confusion into smaller pieces not themselves confusing—then you’re done.
Be warned that you may believe you’re done, when all you have is a mere triumphant refutation of a mistake.
Those who dream do not know they dream, but when you wake you know you are awake.
One good cue that you’re dealing with a “wrong question” is when you cannot even imagine any concrete, specific state of how-the-world-is that would answer the question.
To right a wrong question, compare: “Why do I have free will?” with “Why do I think I have free will?”
Probabilities express uncertainty, and it is only agents who can be uncertain. A blank map does not correspond to a blank territory. Ignorance is in the mind.
Hug the query.
Joy in the Merely Real
Want to fly? Don’t give up on flight. Give up on flying potions and build yourself an airplane.
If I’m going to be happy anywhere,
Or achieve greatness anywhere,
Or learn true secrets anywhere,
Or save the world anywhere,
Or feel strongly anywhere,
Or help people anywhere,
I may as well do it in reality.
If you only care about scientific issues that are controversial, you will end up with a head stuffed full of garbage.
If we cannot take joy in the merely available, our lives will always be frustrated.
If we cannot learn to take joy in the merely real, our lives shall be empty indeed.
The novice goes astray and says “The art failed me”; the master goes astray and says “I failed my art.”
I probably missed a lot in my cursory glances. I chose things based on no objective criteria. Sometimes I paraphrased, perhaps incorrectly. There are a few other big sequences to do.
You seem to mostly disagree in spirit with all of Grognor’s points but the last, though on that point you didn’t share your impression of the H&B literature.
I’ll chime in and say that at some point about two years ago I would have more or less agreed with all six points. These days I disagree in spirit with all six points and with the approach to rationality that they represent. I’ve learned a lot in the meantime, and various people, including Anna Salamon, have said that I seem like I’ve gained fifteen or twenty IQ points. I’ve read all of Eliezer’s posts maybe three times over and I’ve read many of the cited papers and a few books, so my disagreement likely doesn’t stem from not having sufficiently appreciated Eliezer’s sundry cases. Many times when I studied the issues myself and looked at a broader set of opinions in the literature, or looked for justifications of the unstated assumptions I found, I came away feeling stupid for having been confident of Eliezer’s position: often Eliezer had very much overstated the case for his positions, and very much ignored or fought straw men of alternative positions.
His arguments and their distorted echoes lead one to think that various people or conclusions are obviously wrong and thus worth ignoring: that philosophers mostly just try to be clever and that their conclusions are worth taking seriously more-or-less only insofar as they mirror or glorify science; that supernaturalism, p-zombie-ism, theism, and other philosophical positions are clearly wrong, absurd, or incoherent; that quantum physicists who don’t accept MWI just don’t understand Occam’s razor or are making some similarly simple error; that normal people are clearly biased in all sorts of ways, and that this has been convincingly demonstrated such that you can easily explain away any popular beliefs if necessary; that religion is bad because it’s one of the biggest impediments to a bright, Enlightened future; and so on. It seems to me that many LW folk end up thinking they’re right about contentious issues where many people disagree with them, even when they haven’t looked at their opponents’ best arguments, and even when they don’t have a coherent understanding of their opponents’ position or their own position. Sometimes they don’t even seem to realize that there are important people who disagree with them, like in the case of heuristics and biases. Such unjustified confidence and self-reinforcing ignorance is a glaring, serious, fundamental, and dangerous problem with any epistemology that wishes to lay claim to rationality.
This isn’t a fault of the post per se, but I wish there wasn’t so damn much equivocation on the word “happiness”. I know what sadness, contempt, contentment, rapture, &c. are—introspectively they strike me as rather distinct states. But “happiness” means like ten or fifteen different things that are only somewhat related to each other. (FWIW smiling makes me feel bittersweet, not happy, so this might be an undue generalization from one example.)
Also, at least many kinds of happiness are measures of value, not ends in themselves, and so chasing after them specifically is getting dangerously close to wireheading or the problems of Goodhart’s law more generally.
Note that someone just gave a confidence level of 10^4478296 to one and was wrong. This is the sort of thing that should never ever happen. This is possibly the most wrong anyone has ever been.
I was in some discussion at SIAI once and made an estimate that ended up being off by something like three hundred trillion orders of magnitude. (Something about giant look-up tables, but still.) Anyone outdo me?
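For concreteness, a back-of-the-envelope sketch of how big those odds are; the comparison point is my own:

```python
import math

# Stated odds: 10**4478296 to 1, i.e. log10-odds of 4,478,296.
log10_odds = 4_478_296

# In bits of claimed confidence (log2 of the odds ratio):
bits = log10_odds * math.log2(10)
print(f"{bits:,.0f} bits of claimed confidence")  # about 14.9 million bits

# For scale: there are roughly 10**80 atoms in the observable universe,
# so this claim overshoots "one in (number of atoms)" odds by more than
# 4.4 million orders of magnitude. Being wrong even once at such odds
# implies a calibration error of about that size.
print(f"Overshoot vs. one-in-10^80: {log10_odds - 80:,} orders of magnitude")
```

(Three hundred trillion orders of magnitude would, by the same arithmetic, be a far larger miss than the 4,478,296 above, which I suppose was my point.)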
An arrogant atheist Jewish conspiracy leader born on September 11th intent on taking over the world with an artificial god because of his longing for immortality and obsession with doing things for the greater good, on record as saying humanity’s ahem ‘second’ greatest need is a supervillain and that he wants to go into that line of work? Seriously? Transhuman screenwriters have no sense of subtlety. Who’s the main character, Ray Comfort?
Teasing, of course. Happy birthday Eliezer. Good luck with ahem ‘reorganizing’ the universe.
Is there any way to go more meta-contrarian? Like, by eating liquid fluoride thorium instead and generating power with genetically modified giant venus flytraps? Fuck it, let’s generate power with chocobos and get all our nutrition from LSD.
(Commenters: talking about the ‘supernatural’ in terms of metaphysics is metaphysically interesting but phenomenologically speaking it just clouds the issue unnecessarily. The way most people actually use the concept is just ‘weird things happening that would require human or transhuman agency, in situations where there’s no good reason to suspect human agency’. Talking about reductionism &c. is missing the point—it doesn’t matter whether the agency comes from an engineered superintelligence or an “ontologically fundamental” god, what matters is there’s non-human agency around. Note that all reports of supernatural phenomena can be explained “naturally” by superintelligences, simulators, highly advanced aliens, &c., all of which seem not-unlikely in a big universe. The improbability stems from the necessity of their having seemingly bizarre motivations; the mechanisms themselves, however, aren’t fantastically improbable.)
Hm...