Yes! In fact I was just reading the lbod a week ago!
adamisom
… Which is fucking awesome. The dude’s been my inspiration for at least two years and I remember reading the announcement on his blog a year ago. In fact, it’s likely that reading his blog led me to other blogs which led me to LessWrong. (I don’t remember exactly how I found LessWrong.)
Somebody sounds grouchy :/ In fact, it would be completely unsurprising if I had read the other comments. Oops.
Results: 4+16+2+16+1+27(last option) = 144? WTF?
The book will be written before he feels his work is done on FAI.
You know it’s true because, intuitively, you need a non-work outlet for creativity and flow, and it makes sense to write about rationality in an entertaining way and to write a character (Harry) whom he can take as an inspiration.
Of course, you also know it’s false because the man prioritizes and FAI matters far more.
… I find a lot of rationalizations are like that. One of the most useful quick-n-dirty rationality heuristics LessWrong has given me is to ‘consider the opposite.’
Cool. I should have specified ‘I’m intrigued; can you move down a level of specificity (as to how)?’
Or both: isn’t intelligence correlated with size of social circle?
If so it really could be that the average friend is smarter than the average person.
Perhaps the problem here is simply that we think average intelligence is dumber than it really is.
Everyone considers themselves to be “above average,” and so merely “average” surely cannot be (gasp!) not so bad! (Obviously it depends on your perspective. To a LessWronger, pretty much everything else looks stupid.)
I nearly did too, which makes me wonder if a few people did; the only difference is an ‘a,’ and I guess I assumed atheism would be at the top.
How (do you use it to mock/undermine theism)?
… which explains why there is actually a Mormon Transhumanist group, and why there was even a conference on the subject in April where I live (Salt Lake City; unfortunately I couldn’t attend).
Thanks for the ultimately encouraging comment. Agreed that there is such a great quantity of possible papers to read that some care must be taken in what one recommends. To some extent, I think we’d have to wait and see how conscientious/well-targeted fellow LessWrongers are in their recommendations.
Here’s another consideration: figuring out how to better target articles to those who could use them for research. For example, a particular FAI researcher may find the first useful. This would require research profiles of some kind, of course, which is getting too complicated… unless there is already a highly-used website like Mendeley that many LWers (at least, those who do a lot of research) use?
How to help:
* Do you know of a good website for implementing this idea? (if the idea is sufficiently clear)
* Do you know if something already exists in the LessWrong community and I’m just ignorant of it?
* Do you have a few particularly interesting articles you’d like to share?
Why aren’t we pooling our knowledge-resources yet?
We are indeed in a “vanishingly-unlikely future,” and (obviously) if you ask what P(me existing | no contingencies except the existence of the Universe) is, it’s so small as to be ridiculous.
I’ve often wondered at this. In my darker moments I’ve thought “what if some not-me who was very like me but more accomplished and rational had existed instead of me?”
You seem to have a deep understanding of this. Could you expand on it?
“On the margin, you should consider having more kids. If you were planning to have zero kids, consider having one. If you were planning to have 3 kids, consider 4, etc.” Wait, I thought happiness research indicates that the step from zero to one is a decrease in happiness whereas the step from, say, 3 to 4 would be only a negligible decrease in happiness. So there’s that asymmetry, if I remember one of Caplan’s blog posts correctly.
Holy shit, even today only 1 in 10,000 articles are retracted for fraud.
I am assuming these retracted articles are a tiny fraction of the articles that actually contain fraud, and such a tiny fraction as to not give reliable evidence for the proportion increasing; so the graph’s data isn’t particularly useful.