Steven K
steven0461
I like my women the way I like my coffee: detrimental to hippocampal neurogenesis, but conducive to short term memory and attentional control.
You know how sometimes when you’re falling asleep you start having thoughts that don’t make sense, but it takes some time before you realize they don’t make sense? I swear that last night while I was awake in bed my stream of thought went something like this, though I’m not sure how much came from layers of later interpretation:
“… so hmm, maybe that has to do with person X, or with person Y, or with the little wiry green man in the cage in the corner of the room that’s always sitting there threatening me and smugly mocking all my endeavors but that I’m in absolute denial about, or with the dog, or with… wait, what?”
Having had my sanity eroded by too much rationalism and feeling vaguely that I’d been given an accidental glimpse into an otherwise inaccessible part of the world, I actually checked the corner of the room. I didn’t find anything, though. (Or did I?)
Not sure what moral to draw here.
Here’s the main thing that bothers me about this debate. There’s a set of many different questions involving the degree of past and current warming, the degree to which such warming should be attributed to humans, the degree to which future emissions would cause more warming, the degree to which future emissions will happen given different assumptions, what good and bad effects future warming can be expected to have at different times and given what assumptions (specifically, what probability we should assign to catastrophic and even existential-risk damage), what policies will mitigate the problem how much and at what cost, how important the problem is relative to other problems, what ethical theory to use when deciding whether a policy is good or bad, and how much trust we should put in different aspects of the process that produced the standard answers to these questions and alternatives to the standard answers.

These are questions that empirical evidence, theory, and scientific authority bear on to different degrees, and a LessWronger ought to separate them out as a matter of habit, and yet even here some vague combination of all these questions tends to get mashed together into a vague question of whether to believe “the global warming consensus” or “the pro-global warming side”, to the point where when Stuart says some class of people is more irrational than theists, I have no idea if he’s talking about me.

If the original post had said something like, “everyone whose median estimate of climate sensitivity to doubled CO2 is lower than 2 degrees Celsius is more irrational than theists”, I might still complain about it falling afoul of anti-politics norms, but at least it would help create the impression that the debate was about ideas rather than tribes.
It sounds like they meant they used to work at CFAR, not that they currently do.
The interpretation of “I’m a CFAR employee commenting anonymously to avoid retribution” as “I’m not a CFAR employee, but used to be one” seems to me so strained and non-obvious that, given the commenter’s choice not to use clearer language, we should treat them as having deliberately intended readers to believe they’re a current CFAR employee.
I asked my caveman friend to translate. He’s a paleoanthropics expert.
Big chunk of space! It has three parts. Is there some guy in each part? Let’s say no. Not unless the part is very lucky!
Now think of many such chunks of space that could have been! Whoa! Sense of wonder! Let’s pick some guy in some chunk. That’ll be us!
First let’s pick some random chunk. Self-Sampling Assumption says we’re a random guy in the chunk! (What if there is no guy in the chunk? Don’t think about it!) Are we alone? Probably yes! Most chunks with a guy don’t have a second guy. Because we said guys are rare! Math!
But now let’s not pick a random chunk. Let’s pick a random guy, in any chunk. Say there’s two guys in a chunk. Then we’ll pick a guy in the chunk twice as often! Self-Indication Assumption! (Maybe they meet and live happily ever after. Just because I’m caveman doesn’t mean I heteronormatize!) Now are we alone? Still probably yes! Most guys are in their own chunk. Yes, if there’s two guys in a chunk it has two chances to be picked. But there’s just so few chunks with two guys. Because we said guys are rare! So this Self-Indication business hardly matters at all! Math!
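To check the caveman’s arithmetic, here’s a minimal Monte Carlo sketch in Python. The observer probability, number of parts per chunk, and chunk count are illustrative assumptions of mine, not anything from the comment. It estimates the probability that we’re alone in our chunk under chunk-weighted sampling (the Self-Sampling Assumption) and under observer-weighted sampling (the Self-Indication Assumption); with observers rare, both estimates come out close to 1, which is the caveman’s point.

```python
import random

# Illustrative parameters (my assumptions, not from the comment):
P_OBSERVER = 0.01        # chance that any one part of a chunk contains an observer ("guy")
PARTS_PER_CHUNK = 3      # "Big chunk of space! It has three parts."
N_CHUNKS = 1_000_000

ssa_inhabited = ssa_alone = 0   # chunk-weighted counts (Self-Sampling Assumption)
sia_observers = sia_alone = 0   # observer-weighted counts (Self-Indication Assumption)

for _ in range(N_CHUNKS):
    observers = sum(random.random() < P_OBSERVER for _ in range(PARTS_PER_CHUNK))
    if observers == 0:
        continue  # "What if there is no guy in the chunk? Don't think about it!"
    # SSA: pick a random inhabited chunk, then a random guy inside it.
    ssa_inhabited += 1
    ssa_alone += (observers == 1)
    # SIA: pick a random guy across all chunks, so a chunk with k guys is picked k times as often.
    sia_observers += observers
    sia_alone += observers if observers == 1 else 0

print("P(we're alone | SSA) ≈", ssa_alone / ssa_inhabited)
print("P(we're alone | SIA) ≈", sia_alone / sia_observers)
```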
There seems to be a lot of interest in abstract decision theory, but is there interest in more practical decision analysis? That’s the sort of thing I suspect I could write a useful primer on.
Please do! This is exactly the sort of topic that should be LessWrong’s specialty.
Good picture. Together, we can punch the sun!
I’d be hesitant to generalize from normal people’s motivations for giving to those of optimal philanthropists.
Do you think advocating optimal philanthropy is likely to yield greater returns than more direct ways to reduce existential risk? I could see it going either way, and it’s hard to figure out what calculations to do to find out.
The project was initially described as synthesizing some of the comments on Karnofsky’s post into a response mentioning counterintuitive implications of the approach, or into whichever synthesis of responses I thought was accurate.
this is your warning that Crocker’s Rules apply to the following content
That’s not how Crocker’s Rules work; they’re supposed to be declared by the listener, who thereby takes responsibility for any hurt feelings caused by the content. You can’t declare Crocker’s Rules on behalf of others.
I for one would like to applaud the 20 members of the LessWrong community who just applauded Eliezer for applauding SarahC for applauding the LessWrong community.
I wonder if it’s accurate to say that for hacks, it’s the means that’s considered “cheating”, whereas for cryonics, it’s the end itself that’s considered “cheating”.
In both your GTD example and Kaj’s posting example, virtue doesn’t seem to affect what you think you should do, just how you motivate yourself to do it, so “virtue psychology” might be a more accurate description than “virtue ethics”.
When an author of a work of fiction has run out of elements that everyone will like, he or she still has the option to put in high-variance elements that some people will love and some people will hate. Could it be that the objects of fandom are just those that went for these high-variance choices?
Yes, my experience of “nobody listened 20 years ago when the case for caring about AI risk was already overwhelmingly strong and urgent” doesn’t put strong bounds on how much I should anticipate that people will care about AI risk in the future, and this is important; but it puts stronger bounds on how much I should anticipate that people will care about counterintuitive aspects of AI risk that haven’t yet undergone a slow process of climbing in mainstream respectability, even if the case for caring about those aspects is overwhelmingly strong and urgent (except insofar as LessWrong culture has instilled a general appreciation for things that have overwhelmingly strong and urgent cases for caring about them), and this is also important.
My memory of LW 1.0 is that it had a lot of mediocre content that made me not want to read it regularly.
Here’s a different hypothesis that also accounts for opinions reverting in the direction of the original uneducated position. Suppose “uneducated” and “contrarian” opinion are two independent random (e.g. normal) variables with the same mean representing the truth (but maybe higher variance for “uneducated”); and suppose what you call “meta-contrarian” opinion is just the truth. Then if you start from “contrarian” it’s more likely that “meta-contrarian” opinion will be in the direction of “uneducated” than in the opposite direction, simply because “uneducated” contains nonzero information about where the truth is. I think you can also see this as a kind of regression to the mean.
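A quick simulation makes the claim concrete. This is a rough sketch under assumed parameters (truth at 0, with the “uneducated” opinion noisier than the “contrarian” one; the specific numbers are mine, not the comment’s): it estimates how often the step from the contrarian opinion to the meta-contrarian one points back toward the uneducated opinion, and the answer comes out above one half.

```python
import random

# Illustrative parameters (my assumptions, not from the comment):
N = 200_000
TRUTH = 0.0              # the shared mean; "meta-contrarian" opinion is taken to be the truth itself
SIGMA_UNEDUCATED = 2.0   # "uneducated" opinion: a noisier estimate of the truth
SIGMA_CONTRARIAN = 1.0   # "contrarian" opinion: less noisy, but still imperfect

toward_uneducated = 0
for _ in range(N):
    uneducated = random.gauss(TRUTH, SIGMA_UNEDUCATED)
    contrarian = random.gauss(TRUTH, SIGMA_CONTRARIAN)
    meta = TRUTH
    # Does the move from the contrarian opinion to the meta-contrarian one
    # point in the same direction as the uneducated opinion?
    toward_uneducated += (meta - contrarian) * (uneducated - contrarian) > 0

print("P(meta-contrarian shift is toward the uneducated view) ≈", toward_uneducated / N)
# Comes out above 0.5, because the uneducated opinion carries nonzero information about the truth.
```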
Upvoted for not being about gender.
If you ask me, the term “instrumental rationality” has been subject to inflation. It’s not supposed to mean better achieving your goals, it’s supposed to mean better achieving your goals by improving your decision algorithm itself, as opposed to by improving the knowledge, intelligence, skills, possessions, and other inputs that your decision algorithm works from. Where to draw the line is a matter of judgment but not therefore meaningless.
Not only are people nuts, nuts are people, and they scream when we eat them.
When you have eliminated the impossible, whatever remains is often more improbable than your having made a mistake in one of your impossibility proofs.