I suffer the same symptom. (and have an excessive amount of body hair, not that that’s more than negligibly indicative of high testosterone levels)
What’s the cheapest/easiest way to get tested? (more out of curiosity than anything else)
A satire of the mahou shoujo genre? Complete with costumes!
As an aside, the way you used “feminist critique” isn’t the standard meaning of the phrase. A feminist critique would be a critique from a feminist framework, not a critique of feminism, much like a Bayesian critique of something would argue that it reasons fallaciously about probabilities.
Bludgers are still magical and therefore could still “get at” the ‘mind’ regardless of the physical brain.
Flip-flops are excellent for that reason. I have really sweaty, weird-sized feet, so they’re especially nice, and if, like me, you already have a reputation for being eccentric, people won’t mind when you show up to nicer events in a blazer, jeans and flip-flops.
Extraordinary claims....
It’s a way to cash out on your reputation. The team that won the Netflix Prize may have ended up with a net gain, if you count the value of having “won the Netflix Prize” on their resume (in terms of both job opportunities and higher salaries afforded), and in order to offer such a reputation boost, Netflix had to have built up a reputation for itself.
With the exception of people on Less Wrong and people in the mathematical community, I’ve almost never seen high-functioning people use the “one relatively strong argument” approach.
I think it’s more general than that (depending on your definition of the ‘mathematical community’). For example, I rarely see physicists attempt to argue something based on many weak arguments, and I think you would find the same to be true of engineers. More generally, I think that anyone who’s used to formalism is used to being presented with extremely strong arguments, and to ending the search for arguments there. Consider a Bayesian actor who happens to be in a quantitative field of study:
I decide proposition A is true, and sketch out a proof on some scratch paper. The probability that I made a mistake is significantly smaller than the probability that I didn’t. I go home and write the proof out formally and carefully, and the probability of me being wrong drops further. I ask a peer to look over it, and the probability that I made a mistake is vanishingly small. If prop A is important, then I may publish it, and after peer review, I can say that I have a strong argument for A: I have a proof P, and if P is correct, then so is A, with probability 1. The probability that P is incorrect is small, thanks to the formalism and many levels of peer review.
Since most of the arguments we believe are thus strong arguments, this trains our intuition with a heuristic not to bother looking for arguments that aren’t extremely strong. This effect would probably scale with the rigor of the field (e.g. be much stronger in mathematicians, where proofs are essentially the only form of argument written down).
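A toy numerical sketch of that update process (every number below is invented purely for illustration, not taken from the comment): if each checking stage independently catches most errors, the probability that a mistake survives every stage is the product of the per-stage miss rates.

```python
# Toy model (illustrative numbers only): each verification stage
# independently catches most errors, so the chance that a mistake
# survives all of them is the product of the per-stage miss rates.
p_error_initial = 0.05        # chance the scratch-paper proof is wrong
miss_rates = [0.2, 0.1, 0.1]  # careful write-up, peer check, peer review

p_survives = p_error_initial
for miss in miss_rates:
    p_survives *= miss

print(p_survives)  # roughly 1e-4
```

The independence assumption is doing real work here: correlated checks (e.g. reviewers sharing the same blind spot) would shrink the error probability far more slowly.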
Hmm, I think I may be misunderstanding what you mean by “many weak arguments.” As in, I don’t think it’s uncommon for physicists to make multiple arguments in support of a proposition, but each of those arguments, IME, is strong enough to bet at least a year of one’s career on (e.g. the old arguments for renormalization), by contrast with, say, continental drift, where you probably wouldn’t be taken seriously if you’d produced merely one or two lines of evidence. What this shares with the “one strong argument” position is that we’re initially looking for a sufficiently convincing argument, discarding lines of thought that would lead to insufficiently strong arguments. It differs mostly in that we go back and find more arguments “to be extra sure,” but you’re still screening your arguments for sufficient strength as you make them.
Though admittedly, as a student, I may be biased towards finding my professors’ arguments more convincing than they ought to be.
Paul Graham suggests keeping your identity as small as sustainable. [1] That is, it’s beneficial to keep your identity to just “rationalist” or just “scientist”, since labels like those are incompatible with accreting a large identity. He puts it better than I do:
There may be some things it’s a net win to include in your identity. For example, being a scientist. But arguably that is more of a placeholder than an actual label—like putting NMI on a form that asks for your middle initial—because it doesn’t commit you to believing anything in particular. A scientist isn’t committed to believing in natural selection in the same way a biblical literalist is committed to rejecting it. All he’s committed to is following the evidence wherever it leads.
Considering yourself a scientist is equivalent to putting a sign in a cupboard saying “this cupboard must be kept empty.” Yes, strictly speaking, you’re putting something in the cupboard, but not in the ordinary sense.
aspell handles tex just fine.
things like episodic memories (separated from believing the information contained in them)
I’m not sure what you’re saying here; you think of your memories as part of your identity?
realtionship[sic] in neutral groups such as a family or a fandom, precommitments, or mannerisms?
These memberships are all heuristics for expected interactions with people. Nothing actionable is lost if you bayes-induct for each situation separately, save the effort you’re using to compute and the cognitive biases and emotional reactions you get from claiming “membership”. Alternately you could still use the membership heuristic, but with a mental footnote that you’re only using it because it’s convenient, and there are senses in which the membership’s representation of you may be misleading.
Dibs on ‘A Utilful Mind’ if you don’t take it?
To clarify, I’m not talking about “your identity” here as in the information about what you consider your identity, but rather the referent of that identity.
Ah, it appears we’re talking about different things. I’m referring to ideological identity (“I’m a rationalist”, “I’m a libertarian”, “I’m pro-choice”, “I’m an activist”), which I think is distinct from “I’m my mind” identity. In particular, you can be primed psychologically and emotionally by the former more than the latter.
No, it doesn’t.
Rather, I meant that it works fine with math mode.
Specifically, it fails on words that have accents.
It doesn’t yet understand tex accents, but if you set the encoding using the tex package, you can directly enter è, é, ê, …
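For instance, a minimal file sketch (I’m assuming the package meant is inputenc, the standard way to declare the input encoding in a tex file):

```latex
% Minimal sketch, assuming the encoding package meant is inputenc.
% With UTF-8 input declared, accented characters can be typed directly
% (café) instead of via TeX accent macros (caf\'e), which spell
% checkers do not expand into letters.
\documentclass{article}
\usepackage[utf8]{inputenc}
\begin{document}
Café, naïveté, déjà vu.
\end{document}
```

Something like `aspell -t --encoding=utf-8 check file.tex` should then see whole words; the `-t` flag enables aspell’s TeX filter mode, which skips commands and math.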
And if Yudkowsky’s going to make a Fullmetal Alchemist reference: we know how to make a philosopher’s stone, or at least crude approximations of one, but only using human sacrifice.
“”is a thing” is a thing” is a thing in sense C.
I think “Spontaneous Duplication” was made up by Minerva or someone as an explanation to wave off anyone who might see multiple Harrys running around due to the Time-Turner.
a natural, self-protective response to what seems like an impossible demand. Sometimes the demand actually is impossible, sometimes the demand is understood correctly and falsely believed to be impossible, and sometimes the demand is defensively interpreted as impossible because the reasonable part is felt to be not worth doing but it doesn’t feel safe to just refuse it.
I’m not sure I follow. What demand?
i.e. unceremoniously killed offscreen
nitpick: Hermione wasn’t just killed onscreen, she was front and center.
Even if you place literally infinite value on being immortal, I imagine you’d rather spend the time wasted on praying on something more likely to make you immortal, e.g. minimizing your chance of heart disease.