On 1, both candidates suck, and not because someone on the margin votes or doesn’t, but because of a thousand upstream causes: the personality type required to succeed in politics, the voting system that ensures a two-party lock-in, the inability of citizens to comprehend the complexity of modern national governments, etc.
On 2, let me make my general argument very particular:
1. Polls show that polarization on politics (“Would you let your child marry a Democrat?”) is stronger than polarization on any other major alignment.
2. Unlike other things, political party affiliation is mostly a symbolic thing with few physical implications (compared to a job, a sexual orientation, or even being a rationalist). This makes one’s interaction with political parties consist mostly of signaling virtue and loyalty by vilifying the other party.
3. Unlike other things, there’s an entire industry (news media) that fans the flames of political party mind-kill 24/7.
Some people are willing to die on the Batman-v-Superman-was-better-than-Avengers hill, but not a lot. On the other hand, the Romney-was-better-than-Obama hill is covered in dead bodies ten layers deep. Myside bias and tribalism are bad everywhere, but party politics is the area where they’re observably already causing immense harm.
I’m a huge Sixers fan, but I don’t hate Celtics fans. We bond over our mutual love of basketball. That’s not how party politics works.
I’ve given my own reasons against voting before. I specifically addressed the “altruistic” justification for voting, since nobody thinks they can make a case for selfish voting anymore. My two main arguments:
1. You shouldn’t expect to know who the better candidate will be with any confidence, since the policies actually implemented are unpredictable, let alone their effects.
2. Voting contributes to your own mind-kill and to disliking your friends. You will think less clearly about a politician and their supporters once you cast a vote for/against them because of consistency bias, myside bias, confirmation bias etc.
With that said, I actually enjoyed this essay. The X-risk-EA argument presented here is both novel and would make my two main objections irrelevant. However, there’s some evidence that it’s not very applicable to real life.
In summer 2016 I heard from several prominent EAs that they think EA orgs should recommend Hillary’s campaign as a key cause, and that EAs should donate to it. I have also seen zero attempts at rigorous analysis showing that Trump is a bigger X-risk than Hillary. If we convince ourselves that elections are an EA cause, the false-positive rate for “important” elections will quickly approach 100%, and the chance that EAs decide that the Republican candidate is actually safer will approach 0%. The only effect would be losing a lot of resources, friends and mental energy to this nonsensical theater.
I can understand the frustrations of people like Zvi who don’t want to invest in local rationality communities, but I don’t think that reaction is inevitable.
I went to a CFAR mentor’s workshop in March and it didn’t make me sad that the average Tuesday NYC rationality meetup isn’t as awesome. It gave me the agency-inspiration to make Tuesdays in NYC more awesome, at least by my own selfish metrics. Since March we’ve connected with several new people, established a secondary location for meetups in a beautiful penthouse (and have a possible tertiary location), hosted a famous writer, and even forced Zvi to sit through another circle. The personal payoff for investing in the local community isn’t just in decades-long friendships, it’s also in how cool next Tuesday will be. It pays off fast.
And besides, on a scale of decades people will move in and out of NYC/Berkeley/anywhere else several times anyway as jobs, schools, and residential zoning laws come and go. Several of my best friends, including my wife, came to NYC from the Bay Area. Should the Areans complain that NYC is draining them of wonderful people?
One of my favorite things about this community is that we’re all geographically diverse rootless cosmopolitans. I could move to a shack in Montana next year and probably find a couple of people I met at NYC/CFAR/Solstice/Putanumonit to start a meetup with. Losing friends sucks, but it doesn’t mean that investing in the local rationality community is pointless.
I guess it makes sense. I was coming to this from the selfish perspective of someone who’s kinda established as a writer, not the perspective of someone submitting their first post to LW with trembling fingers (which was me four years ago).
Moderators will move it to the frontpage if it seems appropriate.
Not a big fan of this, as writers now have zero input on whether their posts make it to the frontpage. I suggest at least letting writers choose one of three options for their posts:
1. Submit for frontpage consideration.
2. Allow on frontpage, but not really promoted.
3. Disallow moving to frontpage.
This way moderators could just sift through the queue of things marked #1 (and the occasional #2 post if they stumble upon it and really love it). And if someone really wants their own writing kept off the frontpage, they can choose so with #3.
Oh man, I really don’t want to be on the other side of that debate. But, I swore allegiance to the cause of local validity, and I must uphold that.
Let’s use a simplified model: Total intergroup variance = P*(genetic component of variance) + (1-P)*(environmental component). This is very simplified because genes and environment interact, but it will suffice.
Your logic only works if our prior was: 50% that all the difference is environmental (P=0), 50% that all the difference is genetic (P=1). In this case, finding an environmental difference would disprove the existence of a genetic difference.
But that’s not what our prior is. The prior for biological group differences in a highly heritable trait is some bell curve of P, with its peak probably somewhere around P=0.5 (or at least neither P=0 nor P=1). The fact that IQ-affecting environmental differences exist only rules out P=1, which maybe changes the posterior expectation of P from 0.5 to 0.45, but not to 0.
After all, our prior was that we would almost certainly find environmental differences that affect IQ, so finding them can’t cause that much of an update.
And if we haven’t even proven that the environmental differences affect IQ, only that they exist, then we shouldn’t update at all. Our prior for that was basically 1: any two groups will have some environmental difference (food, language, location...), so the existence of those differences can’t be evidence either way.
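The update math above can be sketched numerically. This is a toy illustration with my own assumed numbers (a bell-shaped prior on P peaked at 0.5, as the comment suggests), showing that an observation which only rules out P=1 barely shifts the expected value of P:

```python
import math

# Discretize P, the genetic share of intergroup variance, on [0, 1].
ps = [i / 100 for i in range(101)]

# Bell-curve prior peaked at P = 0.5 (assumed width 0.2, purely illustrative).
weights = [math.exp(-((x - 0.5) ** 2) / (2 * 0.2**2)) for x in ps]
prior_mean = sum(w * x for w, x in zip(weights, ps)) / sum(weights)

# Finding an IQ-affecting environmental difference only excludes P = 1:
# zero out that single point and renormalize.
post = [w if x < 1 else 0.0 for w, x in zip(weights, ps)]
post_mean = sum(w * x for w, x in zip(post, ps)) / sum(post)

print(round(prior_mean, 4), round(post_mean, 4))  # posterior mean stays near 0.5
```

Because the prior already put almost no mass exactly at P=1, conditioning on “not purely genetic” moves the expectation by a rounding error, nowhere near zero.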
Yo, the Karate Kid post is awesome. You buried the lede.
From the OP:
You know what feels crappy? 3% improvement. You busted your ass for a year, trying to get better at dating, at being less of an introvert, at self-soothing your anxiety – and you only managed to get 3% better at it.
The fact that 3% a year feels dispiritingly slow is precisely the point. It’s never going to feel good. If the change was visible day-to-day or even month-to-month, people wouldn’t have trouble sticking with it. Part of the thing with 3% improvement is that for the first year or two, you just have to trust in the process and what you’re doing. Only after a couple of years do you start noticing results, and then you become motivated and keep the habit for life. But getting through the first year for just 3% is the hardest part.
The psychology is the same for investments, where you probably shouldn’t expect a lot more than 3-4% a year. Some people see no difference between $10,000 now and $10,300 next year. Other people start investing when they’re 25, and are a lot richer than the first group when they’re 50.
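A quick sketch of the arithmetic, using the comment’s own numbers (a 3% annual return on $10,000, held from age 25 to 50):

```python
principal = 10_000
rate = 0.03  # the 3% annual return from the comment

one_year = principal * (1 + rate)            # the $10,300 that "feels like no difference"
twenty_five_years = principal * (1 + rate) ** 25  # compounded from age 25 to 50

print(round(one_year), round(twenty_five_years))
```

The one-year gain looks negligible, but over 25 years the same rate roughly doubles the money, which is the gap between the two groups in the comment.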
I don’t have much to add to gjm’s description, but I’ll add a little bit of flavor to get at Said’s situational vs. dispositional dichotomy.
“Having a bad day” means something like experiencing a day in such a way that it causes mental suffering, and being an “angry person” means being someone who reacts to mental suffering with violence. My claim is that those things aren’t clean categories: they are hard to separate from each other, and they are both situational and dispositional.
If you experience a lot of suffering from some external misfortune, you are more likely to react in a way that makes it worse, and also to build up a subconscious habit of reacting in this way, which in turn creates more chances for you to suffer and get angry and react and reinforce the pattern… eventually you will end up kicking a lot of vending machines.
It doesn’t make a lot of sense to draw a circle around something called “bad day” or “angry person” and blame your machine kicking on that. These two things are causes and effects of each other, and of a million other situational and dispositional things. That’s what I mean by “bad day” and “angry person” being fake, and the definition of FAE that I googled doesn’t quite address this.
Oops, just realized that. Let me try again:
In what way can the process of discovering or realizing these truths about how people work, be reasonably described...
In the way that I just did.
You asked me if this is just FAE; I answered, “Kinda, but I like my description better. FAE doesn’t capture all of it.”
You asked if this is just getting closer to the truth; I answered, “Kinda, but I like my description better. Getting closer to the truth doesn’t tell you what mental movement is actually taking place.”
If you think you know what I mean but I’m explaining it poorly, you probably won’t be able to squeeze a better explanation out of me. This isn’t a factual claim, it’s a metaphor for a complex mental process. If 4,000 words weren’t enough to make this make sense in your head, then go read someone else—the point of non-expert explanation is that everyone can find the one explanation that makes sense for them.
...can the process of discovering or realizing these truths about how people work, be reasonably described as...
I mean—yes, I think so, otherwise I would not have written this post.
I’m not sure where this conversation is going. We’re not talking about whether X is true, but whether Y is the optimal metaphor that can be conceived of to describe X. While I always want to learn how to make my writing more clear and lucid, I don’t find this sort of discussion particularly productive.
it makes you shy away from situations that might disprove those wrong beliefs
This is another good reason. I was gesturing roughly in that direction when talking about the Christian convert being blocked from learning about new religions.
I think that there’s a general concept of being “truth aligned”, and being truth aligned is the right choice. Truth-seeking things reinforce each other, and things like lying, bullshitting, learning wrong things, avoiding disconfirming evidence etc. also reinforce each other. Being able to convince yourself of arbitrary beliefs is an anti-truth skill, and Eliezer suggests you should dis-cultivate it by telling yourself you can’t do it.
Your point about spirituality is a major source of conflict about those topics, with non-believers saying “tell us what it is” and the enlightened saying “if I did, you’d misunderstand”. I do think that it’s at least fair to expect that the spiritual teachers understand the minds of beginners, if not vice versa. This is why I’m much more interested in Val’s enlightenment than in Vinay Gupta’s.
Not quite, I think that either of those talks about only a small piece of misunderstanding people’s behaviors.
Learning about FAE tells me that the other person kicked the vending machine not because he’s an “angry person” but because he had a bad day. But really, “bad day” isn’t any more of a basic entity than “angry person” is. A zen master has no “bad days” and also isn’t an “angry person”, so which one is the reason why a zen master doesn’t kick vending machines?
Also, the reason I kicked a vending machine isn’t just because I had a bad day, but also because 5 minutes ago I was thinking about soccer, and 5 weeks ago I kicked a machine and it gave me a can, and 5 years ago I read a book about the benefits of not suppressing emotions. The causes of a simple act like that are enormously complicated, and FAE is just a step in that direction.
Good question! Also hard to give a clear cut example of, but I think this is somewhat true of how I understand people’s behavior.
When I was little, I saw people as having an unchanging character: good person, angry person, mean person.
When I grew up I realized that “character” isn’t really an immutable part of a person, just the way I see them. I started understanding behavior in terms of following incentives and executing strategies: this person wants X, so he does Y.
Now, I have a sense that “a person wants something” is really just an abstraction. People look like they’re following goals, but at any given moment we are executing a bunch of routines that are very context-dependent. We do some things driven by system 2, and other things that reenact previous actions or roles, and some things in response to arbitrary stimuli etc. I don’t see behavior in the moment, let alone over time, as necessarily being unified or coherent.
This final stage allows me to be more flexible about describing character and behavior, because I see that those aren’t ontologically basic. Instead of “this person is tribal” or “this person is signaling group loyalty”, I may see someone as executing group-signaling routines in a certain social context, and doing that by taking cues from a specific person. If I meet someone new I may form an initial impression of them at the level of character or goals, but it’s much easier to add nuance to those or at least to moderate the strength of my predictions about what they may do.