The source is Jonathan Haidt, right?
Counter-signalling?
I’m in.
You’ve also got to consider the size of the outcome. Winning a lottery gives you, what, a few hundred thousand to a few million dollars? The expected value of buying a lottery ticket is then around 2 cents, less than the price of the ticket, so buying one is a terrible decision.
Elections have much larger impacts. The 2000 election is a classic example: it’s been argued that Gore would not have invaded Iraq, and that the Iraq invasion has cost around a trillion dollars. In that case, your vote has an expected value to the US of around 16,000 USD, that trillion-dollar cost multiplied by the small probability that a single vote decides the election. (More if you live in a swing state, more if you believe Gore would have been better in other ways. Less otherwise.)
Not that I’m saying this is a closed question; in fact, my point is that it isn’t.
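A minimal back-of-the-envelope sketch of the arithmetic behind both figures. The jackpot size, the odds of winning, and the probability of casting the decisive vote are illustrative assumptions chosen to reproduce the rough magnitudes above, not numbers from the original discussion.

```python
def expected_value(payoff, probability):
    """Expected value of a single action: payoff times the probability it matters."""
    return payoff * probability

# Lottery: assume a ~$2 million jackpot and ~1-in-100-million odds of winning.
lottery_ev = expected_value(2_000_000, 1 / 100_000_000)
print(f"Lottery ticket EV: ${lottery_ev:.2f}")  # about $0.02, i.e. ~2 cents

# Voting: assume a ~$1 trillion difference in outcomes (the Iraq-war argument)
# and roughly a 1-in-60-million chance that a single vote decides a US
# presidential election (higher in swing states, lower elsewhere).
vote_ev = expected_value(1_000_000_000_000, 1 / 60_000_000)
print(f"Vote EV to the US: ~${vote_ev:,.0f}")  # on the order of $16,000
```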
Suppose that you and somebody else shoot the same person at exactly the same time. Because the victim would have died anyway, your shot hasn’t caused any harm, and so it is morally OK by your theory. However, the other shooter is also morally blameless under your theory, because if they hadn’t killed the victim, you would have.
I believe that this requires a modification.
“This also explains why rewarding success may be more useful than punishing it in the long run: if a kid does his homework because otherwise he doesn’t get dessert, it’s labor. If he gets some reward for getting it done, it becomes a positive.”
Shouldn’t we expect loss aversion to partially or completely counteract this, and doesn’t receiving a reward for something qualify as labor too?
(Side note: This reads as ‘rewarding success is better than punishing success.’)
A recent study found that one effective way to resist procrastination on future tasks is to forgive previous procrastination, because the negative emotions that would otherwise remain create an ugh field around that task.
I only came across the study recently, but I had already found this effective in my own experience. Forcing your way through an ugh field isn’t sustainable, because our supply of willpower is limited (this is hardly a new idea, but I haven’t seen it referenced in my limited reading on LW).
It’s more like ~5%, really.
For instrumental rationality, yes; for epistemic rationality, no. If the EU-maximizer loses money because it believes the encoding will be different from what it actually is, then it is irrational.
The subjective way we experience things.
“Any sufficiently analyzed magic is indistinguishable from SCIENCE!”
~Girl Genius
By experience I mean anything which we detect with one of our senses.
The subjective part is, IMHO, the key to qualia.
Suppose that you’ve never seen red light, and that you are then told all of its properties in perfect detail. You would still gain new information by actually seeing red light, because you still wouldn’t know “what it feels like” to see it. The qualia are not the objective facts, but rather what seeing the light “feels like”: your perception of the effect the light produces on your brain.
(Qualia are usually taken to be an argument against materialism, because even after you know every objective fact about something, you still gain new information (qualia) by experiencing it.)
I read it as stating that there was a curse, that everybody had noticed it, and that they acted accordingly (McGonagall trying to keep Quirrell for the entire year, Quirrell refusing to stay the next year).
Hedwig/Quirrelmort. Dog!Sirius/Firenze. Cat!McGonagall/Nagini.
Per Rudi Hoffman, he’s “open to the idea, but has not done anything about it.”
Skeptic magazine? I’d guess that it would be unwilling to publish anything very positive about cryonics, given Founder, Publisher, and Editor-in-Chief Michael Shermer’s very public anti-cryonics stance.
My memory of the books isn’t perfect, but wasn’t Voldemort’s main goal immortality, not conquest?
If so, then Voldemort could be trying to manipulate Harry into merging science and magic to create a means of immortality before he takes over. This would also fit with the author-tract nature of the story: it would give the author a chance to reference transhumanism, SENS, or cryonics.
As I recall, Eliezer’s definition of consciousness is borrowed from GEB: essentially, it’s the mind examining itself. That has very real physical consequences, so the idea of a non-conscious AGI doesn’t support the idea of zombies, which require consciousness to have no physical effects.
I generally like this post, and am unsure why it was voted down. However, I think you need to separate “not useful” from “not true”: while it may or may not be the case that neither is particularly useful in real life, under the definitions accepted on LW both are almost certainly true.
Hi!