In accordance with ancient tradition, I took the survey.
Ben_Welchner
This all seems to have more to do with rule consequentialism than deontology. This isn’t necessarily a bad thing, and rule consequentialism has indeed been considered a halfway point between deontology and act consequentialism, but it’s worth noting.
Take, for example, an agent facing the Prisoner’s Dilemma. Such an agent might originally tend to cooperate and only decide to defect, gaining a greater payoff, after learning about game theory. Was it rational for the agent to learn about game theory, in the sense that it helped the agent achieve its goal, or in the sense that it deleted one of its goals in exchange for an allegedly more “valuable” goal?
The agent’s goals aren’t changing due to increased rationality, but just because the agent confused themselves. Even if this is a payment-in-utilons and no-secondary-consequences Dilemma, it can still be rational to cooperate if you expect the other agent will be spending the utilons in much the same way. If this is a more down-to-earth Prisoner’s Dilemma, shooting for cooperate/cooperate to avoid dicking over the other agent is a perfectly rational solution that no amount of game theory can dissuade you from. Knowledge of game theory here can only change your mind if it shows you a better way to get what you already want, or if you confuse yourself reading it and come to think defecting is the ‘rational’ thing to do without entirely understanding why.
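To make the “defecting gains a greater payoff” point concrete, here is a minimal sketch of the Dilemma’s payoff structure. The specific numbers and the `best_response` helper are my own illustrative assumptions, not anything from the discussion above:

```python
# Illustrative Prisoner's Dilemma payoffs in utilons (higher is better).
# These particular numbers are an assumption chosen to satisfy the
# standard ordering: temptation > reward > punishment > sucker's payoff.
PAYOFFS = {  # (my move, their move) -> (my utilons, their utilons)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def best_response(their_move):
    """My move that maximizes my own payoff against a fixed opponent move."""
    return max("CD", key=lambda mine: PAYOFFS[(mine, their_move)][0])

# Defecting strictly dominates: it is the best response to either move...
assert best_response("C") == "D"
assert best_response("D") == "D"
# ...yet mutual cooperation still beats mutual defection for both players.
assert PAYOFFS[("C", "C")][0] > PAYOFFS[("D", "D")][0]
```

The last two assertions are the whole tension of the Dilemma: per-move dominance points one way, the joint outcome the other.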
You describe a lot of goals as terminal that I would describe as instrumental, even in their limited context. While it’s true that our ideals will be susceptible to culture up until (if ever) we can trace and order every evolutionary desire in an objective way, not many mathematicians would say “I want to determine if a sufficiently-large randomized Conway board would converge to an all-off state so I will have determined if a sufficiently-large randomized Conway board would converge to an all-off state”. Perhaps they find it an interesting puzzle or want status from publishing it, but there’s certainly a higher reason than ‘because they feel it’s the right thing to do’. No fundamental change in priorities needs to occur between feeding one’s tribe and solving abstract mathematical problems.
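For readers unfamiliar with the Conway question being gestured at, here is a minimal Game of Life sketch. The code and the `converges_to_empty` helper are my own illustration; a bounded step-count check like this can only ever give a crude answer to the actual mathematical question:

```python
from collections import Counter

def life_step(board):
    """One Game of Life step on a set of live (x, y) cells, unbounded grid."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in board
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next step if it has 3 live neighbors,
    # or 2 live neighbors and was already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in board)}

def converges_to_empty(board, max_steps=100):
    """Crude check: does the board die out within max_steps?
    (The general question is far harder than brute force.)"""
    for _ in range(max_steps):
        if not board:
            return True
        board = life_step(board)
    return not board

# A lone pair of cells dies of underpopulation in one step.
assert converges_to_empty({(0, 0), (0, 1)})
# A blinker oscillates forever, so it never reaches all-off.
assert not converges_to_empty({(0, 0), (1, 0), (2, 0)})
```

“Converge to an all-off state” just means `converges_to_empty` would return `True` given unlimited steps; whether that happens for large random boards is the open-ended puzzle the hypothetical mathematician cares about.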
I won’t extrapolate my arguments farther than this, since I really don’t have the philosophical background it needs.
In the above examples, there may well be more net harm than gain from staying in an unpleasant relationship or firing a problematic employee. It’s pretty case-by-case in nature, and you’re not required to ignore your own feelings entirely. If there isn’t, then yes, utilitarianism would say you’d be “wrong” for indulging yourself at the expense of others.
I know I’ll probably trigger a flamewar...
Nitpick: LW doesn’t actually have a large proportion of cryonicists, so you’re not that likely to get angry opposition. As of the 2011 survey, 47 LWers (or 4.3% of respondents) claimed to have signed up. There were another 583 (53.5%) ‘considering it’, but comparing that to the current proportion makes me skeptical they’ll sign up.
I figured it was because it was a surprising and more-or-less unsupported statement of fact (that turned out to be, according to the only authority anyone cited, false). When I read ‘poor people are better long-term planners than rich people due to necessity’ I kind of expect the writer to back it up. I would have considered downvoting if it wasn’t already downvoted, and my preferences are much closer to socialist than libertarian.
I don’t have an explanation for the parent getting upvoted beyond a ‘planning is important’ moral and some ideological wiggle room for being a quote, so I guess it could still be hypocrisy. Of course, as of the 2011 survey LW is 32% libertarian (compared to 26% socialist and 34% liberal), so if there is ideological bias it’s of the ‘vocal minority’ kind.
It was grammar nitpicking. “The authors where wrong”.
He also notes that the experts who’d made failed predictions and employed strong defenses tended to update their confidence, while the experts who’d made failed predictions but didn’t employ strong defenses did update.
I assume there’s a ‘not’ missing in one of those.
Disliking meetings and reading in a crowded environment doesn’t seem like much evidence that you’re neither introverted nor extroverted (except that you’re not one of Those Nasty Extroverts who supposedly keep fawning over meetings), which in turn doesn’t seem like much evidence that the introvert/extrovert split isn’t helpful. I can’t enjoy parties or meetings, and prefer to read in silence and work alone.
Depends on if you’re hallucinating everything or your vision has at least some bearing in the real world. I mean, I’d rather see spiders crawling on everything than be blind, since I could still see what they were crawling on.
If you know of any illusions that give inevitably ceasing to exist negative utility to someone leading a positive-utility life, I would love to have them dispelled for me.
Judging by the recent survey, your cryonics beliefs are pretty normal with 53% considering it, 36% rejecting it and only 4% having signed up. LW isn’t a very hive-mindey community, unless you count atheism.
(The singularity: yes, you’re very much in the minority, with the most skeptical quartile expecting it in 2150.)
I recall another article about optimization processes or probability pumps being used to rig elections; I would imagine it’s a lighthearted reference to that, but I can’t turn it up by searching. I’m not even sure if it came before this comment.
(Richard_Hollerith2 hasn’t commented for over 2.5 years, so you’re not likely to get a response from him)
As a psychology student, I can say with some certainty that Watson is a behaviorist poster boy.
And would newer readers know what “EY” meant?
Given it’s right after an anecdote about someone whose name starts with “E”, I think they could make an educated guess.
You pretty much got it. Eliezer’s predicting that response and saying, no, they’re really not the same thing. (Tu quoque)
EDIT: Never mind, I thought it was a literal question.
I sympathize. One of my professors jokes about having discovered a new optical illusion, then going to the literature and having the incredible good luck that for once nobody else discovered it first.
On a similar note, what should be 13.9’s solution links to 13.8’s solution.
I’m also finding this really interesting and approachable. Thanks very much.
I’m not talking about SI (which I’ve never donated money to), I’m talking about you. And you’re starting to repeat yourself.
The same reason fat people can derail trolleys and businesspeople have lifeguard abilities, I’d imagine.