See something I’ve written which you disagree with? I’m experimenting with offering cash prizes of up to US$1000 to anyone who changes my mind about something I consider important. Message me our disagreement and I’ll tell you how much I’ll pay if you change my mind + details :-) (EDIT: I’m not logging into Less Wrong very often now, it might take me a while to see your message—I’m still interested though)
John_Maxwell
TANSTAAFL.
I’m not sure that makes much sense in this context, because you were offering a free lunch last year...
Lara Foster, I would be interested to hear a really solid example of nontransitive utility preferences, if you can think of one.
One idea is to tell the AI not to expend a portion of its resources greater than the chance of the mugger’s statement being true.
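A minimal sketch of that heuristic, purely illustrative (the function name and the probability estimate are my own assumptions, not any actual AI design):

```python
def max_expenditure(total_resources: float, p_claim_true: float) -> float:
    """Hypothetical anti-mugging heuristic: cap spending on a claim at
    its estimated probability times total resources, no matter how
    large the promised payoff is."""
    return total_resources * p_claim_true

# A mugger's claim judged one-in-a-trillion true gets only a sliver of a
# million-unit resource pool, however astronomical the promised reward:
budget = max_expenditure(1_000_000.0, 1e-12)
```

The point of the cap is that the mugger can inflate the payoff arbitrarily, but not the probability, so the allowed spending stays bounded.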
Should I think the universe is probably a coarse-grained simulation of my mind rather than real quantum physics, because a coarse-grained human mind is fifty(?) orders of magnitude cheaper than real quantum physics? Should I think the galaxies are tiny lights on a painted backdrop, because that Turing machine would require less space to compute?
I think a large universe full of randomly scattered matter is much more probable than a small universe that consists of a working human mind and little else.
Well, I’d say the most important thing I learned was to be less confident when taking a stand on controversial topics. So to that end, I’ll nominate
Twelve Virtues of Rationality
Politics is the Mind-Killer
Having some sort of acknowledgment when I fail to log in properly might be nice.
(If someone actually does come up with a new teachable supertrick, so that civilization itself is about to take another lurching step forward, then you should expect to have a lot of fellow superstars by the time you’re done learning!)
Not necessarily. There are a lot of people who claim to have the next supertrick. I wouldn’t be surprised if the next actual supertrick isn’t as heavily promoted as the fakes. So it might be worthwhile to do research in areas that seem promising but neglected.
Bayesian statistics used to be pretty obscure, I hear.
Let’s say we strive to vote according to our personal judgment. Should we vote strategically or not?
For example, let’s say I read a post that seems marginally good. It has a score that’s significantly higher than other posts which seem superior. Should I downvote the post to indicate that I think its score should be lower, or upvote to indicate that I think the post was marginally good?
Or is there a hidden variable rating R that visualizes as max(R,0)?
That’s how things work on reddit, so my guess is that’s how it is here too.
Edit: What’s up with Marshall’s karma score? Perhaps karma is being stored as an unsigned integer?
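Both guesses can be sketched in a few lines (purely illustrative; I don't know how Reddit or Less Wrong actually store scores):

```python
def displayed_score(hidden_rating: int) -> int:
    """Display-rule guess: a hidden rating R shows as max(R, 0),
    so negative scores are clamped to zero."""
    return max(hidden_rating, 0)

def as_unsigned_32(signed_value: int) -> int:
    """Storage-bug guess: if karma were stored as an unsigned 32-bit
    integer, a small negative value would wrap to a huge positive one."""
    return signed_value % 2**32

# A hidden rating of -7 displays as 0 under the clamping rule,
# while -1 stored unsigned wraps to 4294967295.
```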
For the argument to be wrong, only one of its subarguments has to be wrong. So the correct equation is
P(whole argument is wrong) = 1 - P(first subargument is right) * P(second subargument is right | first subargument is right)
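Numerically, with illustrative probabilities (the second factor is assumed to already be conditioned on the first subargument holding):

```python
def p_argument_wrong(p_first_right: float, p_second_right_given_first: float) -> float:
    """The argument is right only if every subargument is right, so by
    the chain rule P(wrong) = 1 - P(A1 right) * P(A2 right | A1 right)."""
    return 1 - p_first_right * p_second_right_given_first

# Two subarguments at 90% confidence each still leave roughly a 19%
# chance that the whole argument fails somewhere.
chance_wrong = p_argument_wrong(0.9, 0.9)
```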
No, I’m sure the Top Contributors sidebar just leaves off anyone with karma higher than Eliezer’s.
Perhaps a better response to the forays of amateurs would be to define a formal model that represents your understanding of their argument, explain it to them, and see if they agree that it’s accurate.
I vote in favor of equal rights for all here on Less Wrong. Yudkowsky and Hanson should stick with Overcoming Bias if they want exclusive privileges.
Really? Maybe I’m just naive. Could you give me an example of an argument and its formalization?
Alrighty, I gotcha.
I don’t like it. Maybe you could give the submitters the option of allowing their post to be delayed in exchange for a better chance at promotion?
I take the argument seriously. Please explain why you think the content is stupid.
Yes, but there are ways to save lives for not very much money. See http://givewell.net/psi
Yep, there is definitely a difference between killing and not saving. However, I think a reasonable definition of an asshole is someone who gives the experiences of others too little weight in their decision-making, i.e. whose coefficient on others' welfare falls below some threshold. (An example of how to compute the coefficient: say I want a car. If n is the number of people like me such that I would be indifferent between giving a car to each of those people and giving a car to myself, then my coefficient is 1/n.) So it's possible to be an asshole without breaking any specific moral rules.
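The coefficient computation from the parenthetical can be written out directly (hypothetical numbers, and the function name is my own):

```python
def altruism_coefficient(n_indifference: float) -> float:
    """If you're indifferent between one car for yourself and one car
    each for n strangers, you weigh a stranger's welfare at 1/n of
    your own, per the definition in the comment above."""
    return 1 / n_indifference

# Indifferent at 100 strangers -> you weigh a stranger's welfare at 1%
# of your own; an "asshole threshold" would be some cutoff on this value.
coef = altruism_coefficient(100)
```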
Also, presumably in a society where you lived a very long time, problems like starvation, AIDS, and malaria would have been solved.
Re: Misanthropic’s quote
The internet is a modern human’s greatest tool. Use it wisely and you can read thousands of books free, take hundreds of college courses free, find instructions to do almost anything under the sun, and learn about anything you desire instantly. Use it unwisely and you will find a distraction far more potent than television.