SlateStarCodex, EA, and LW helped me get out of the psychological, spiritual, political nonsense in which I was mired for a decade or more.
I started out feeling a lot smarter. I think it was community validation + the promise of mystical knowledge.
Now I’ve started to feel dumber. Probably because the lessons have sunk in enough that I catch my own bad ideas and notice just how many of them there are. Worst of all, it’s given me ambition to do original research. That’s a demanding task, one where you have to accept feeling stupid all the time.
But I still look down that old road and I’m glad I’m not walking down it anymore.
Too smart for your own good. You were supposed to believe it was about rationality. Now we have to ban you and erase your comment before other people can see it. :D
Now I’ve started to feel dumber. Probably because the lessons have sunk in enough that I catch my own bad ideas and notice just how many of them there are. [...] you have to accept feeling stupid all the time. But I still look down that old road and I’m glad I’m not walking down it anymore.
Yeah, same here.
Things I come to LessWrong for:
An outlet and audience for my own writing
Acquiring tools of good judgment and efficient learning
Practice at charitable, informal intellectual argument
A somewhat less mind-killed politics
Cons: I’m frustrated that I so often play Devil’s advocate, or else make up justifications for arguments under the principle of charity. Conversations feel profit-oriented and conflict-avoidant. Overthinking to the point of boredom and exhaustion. My default state toward books and people is bored skepticism and political suspicion. I’m less playful than I used to be.
Pros: My own ability to navigate life has grown. My imagination feels almost telepathic, in that I have ideas nobody I know has ever considered, and discover that there is cutting edge engineering work going on in that field that I can be a part of, or real demand for the project I’m developing. I am more decisive and confident than I used to be. Others see me as a leader.
Some people optimize for drama. It is better to put your life in order, which often means getting the boring things done. And then, when you need some drama, you can watch a good movie.
Well, it is not completely a dichotomy. There is also some fun to be found e.g. in serious books. Not the same intensity as when you optimize for drama, but still. It’s like when you stop eating refined sugar, and suddenly you notice that the fruit tastes sweet.
Math is training for the mind, but not like you think
Just a hypothesis:
People have long thought that math is training for clear thinking. Just one version of this meme that I scooped out of the water:
“Mathematics is food for the brain,” says math professor Dr. Arthur Benjamin. “It helps you think precisely, decisively, and creatively and helps you look at the world from multiple perspectives . . . . [It’s] a new way to experience beauty—in the form of a surprising pattern or an elegant logical argument.”
But math doesn’t obviously seem to be the only way to practice precision, decision, creativity, beauty, or broad perspective-taking. What about logic, programming, rhetoric, poetry, anthropology? This sounds like marketing.
Having studied calculus coming from a humanities background, I’d argue it differently.
Mathematics shares with a small fraction of other related disciplines and games the quality of unambiguous objectivity. It also has the ~unique quality that you cannot bullshit your way through it. Miss any link in the chain and the whole thing falls apart.
It can therefore serve as a more reliable signal, to self and others, of one’s own learning capacity.
Experiencing a subject like that can be training for the mind, because becoming successful at it requires cultivating good habits of study and expectations for coherence.
Math is interesting in this regard because it is very precise, yet there’s no clear-cut way of checking your solution except running it by another person (or becoming so good at math that you can tell whether your proof is bullshit).
Programming, OTOH, gives you clear feedback loops.
In programming, that’s true at first. But as projects increase in scope, there’s a risk of using an architecture that works when you’re testing, or for your initial feature set, but will become problematic in the long run.
For example, I just read an interesting article on how a project used a document store database (MongoDB), which worked great until their client wanted the software to start building relationships between data that had formerly been “leaves on the tree.” They ultimately had to convert to a traditional relational database.
Of course there are parallels in math, as when you try a technique for integrating or parameterizing that seems reasonable but won’t actually work.
Yep. Having worked both as a mathematician and a programmer, I’ve found that the idea of objectivity and clear feedback loops starts to disappear as the complexity amps up and you move away from the learning environment. It’s not unusual to discover incorrect proofs out on the fringes of mathematical research that have not yet become part of the canon, nor is it uncommon (in fact, it’s very common) to find running production systems where the code works by accident due to some strange unexpected confluence of events.
Feedback, yes. Clarity… well, sometimes it’s “yes, it works” today, and “actually, it doesn’t if the parameter is zero and you called the procedure on the last day of the month” when you put it in production.
Proof verification is meant to minimize this gap between proving and programming.
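A minimal sketch of what that looks like, assuming Lean 4 as the proof assistant (the particular lemma name is just an illustration):

```lean
-- A machine-checked toy claim: integer addition commutes.
-- The kernel either accepts the proof term or rejects the file; there is no
-- "compelling but slightly wrong" middle ground, which is the gap being closed.
example (a b : Int) : a + b = b + a := Int.add_comm a b
```

A checked proof gets the same kind of mechanical feedback a failing test gives a programmer.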
The thing I like about math is that it gives the feeling that the answers are in the territory. (Kinda ironic, when you think about what the “territory” of math is.) Like, either you are right or you are wrong, it doesn’t matter how many people disagree with you and what status they have. But it also doesn’t reward the wrong kind of contrarianism.
Math allows you to make abstractions without losing precision. “A sum of two integers is always an integer.” Always; literally. Now with abstractions like this, you can build long chains out of them, and it still works. You don’t create bullshit accidentally, by constructing a theory from approximations that are mostly harmless individually, but don’t resemble anything in the real world when chained together.
Whether these are good things, I suppose different people would have different opinions, but it definitely appeals to my aspie aesthetics. More seriously, I think that even if in the real world most abstractions are just approximations, having experience with precise abstractions might make you notice the imperfection of the imprecise ones, so when you formulate a general rule, you also make a note “except for cases such as this or this”.
(On the other hand, for the people who only become familiar with math as a literary genre, it might have an opposite effect: they may learn that pronouncing abstractions with absolute certainty is considered high-status.)
Mathematics shares with a small fraction of other related disciplines and games the quality of unambiguous objectivity. It also has the ~unique quality that you cannot bullshit your way through it. Miss any link in the chain and the whole thing falls apart.
Isn’t programming even more like this?
I could get squidgy about whether a proof is “compelling”, but when I write a program, it either runs and does what I expect, or it doesn’t, with 0 wiggle room.
Sometimes programming is like that, but then I get all anxious that I just haven’t checked everything thoroughly!
My guess is this has more to do with whether or not you’re doing something basic or advanced, in any discipline. It’s just that you run into ambiguity a lot sooner in the humanities.
“It helps you look at the world from multiple perspectives”: it gets you into a position to make a claim like that based solely on anecdotal evidence and wishful thinking.
What gives LessWrong staying power?
On the surface, it looks like this community should dissolve. Why are we attracting bread bakers, programmers, stock market investors, epidemiologists, historians, activists, and parents?
Each of these interests has a community associated with it, so why are people choosing to write about their interests in this forum? And why do we read other people’s posts on this forum when we don’t have a prior interest in the topic?
Rationality should be the art of general intelligence. It’s what makes you better at everything. If practice is the wood and nails, then rationality is the blueprint.
To determine whether or not we’re actually studying rationality, we need to check whether or not it applies to everything. So when I read posts applying the same technique to a wide variety of superficially unrelated subjects, it confirms that the technique is general, and helps me see how to apply it productively.
This points at a hypothesis, which is that general intelligence is a set of defined, generally applicable techniques. They apply across disciplines. And they apply across problems within disciplines. So why aren’t they generally known and appreciated? Shouldn’t they be the common language that unites all disciplines?
Perhaps it’s because they’re harder to communicate and appreciate. If I’m an expert baker, I can make another delicious loaf of bread. Or I can reflect on what allows me to make such tasty bread, and speculate on how the same techniques might apply to architecture, painting, or mathematics. Most likely, I’m going to choose to bake bread.
This is fine, until we start working on complex, interdisciplinary projects. Then general intelligence becomes the bottleneck for having enough skill to get the project done. Sounds like the 21st century. We’re hitting the limits of what’s achievable through sheer persistence in a single specialty, and we’re learning to automate those specialties away.
What’s left is creativity, which arises from structured decision-making. I’ve noticed that the longer I practice rationality, the more creative I become. I believe that’s because it gives me the resources to turn an intuition into a specified problem, envision a solution, create a sort of Fermi approximation to give it definition, and work out how to develop the practical skills and relationships that will let me bring it into being.
If I’m right, human application of these techniques will require deliberate practice—both synthesizing them and practicing them individually, until they become natural.
The challenge is that most specific skills lend themselves to that naturally. If I want to become a pianist, I practice music until I’m good. If I want to be a baker, I bake bread. To become an architect, design buildings.
What exactly do you do to practice the general techniques of rationality? I can imagine a few methods:
Participate in superforecasting tournaments, where Bayesian and gears/policy level thinking are the known foundational techniques.
Learn a new skill, and as you go, notice the problems you encounter along the way. Try to imagine what a general solution to that problem might look like. Then go out and build it.
Pick a specific rationality technique, and try to apply it to every problem you face in your life.
For me, it’s the relatively high epistemic standards combined with relative variety of topics. I can imagine a narrowly specialized website with no bullshit, but I haven’t yet seen a website that is not narrowly specialized and does not contain lots of bullshit. Even most smart people usually become quite stupid outside the lab. Less Wrong is a place outside the lab that doesn’t feel painfully stupid. (For example, the average intelligence at Hacker News seems quite high, but I still regularly find upvoted comments that make me cry.)
Yeah, Less Wrong seems to be a combination of project and aesthetic. Insofar as it’s a project, we’re looking for techniques of general intelligence, partly by stress-testing them on a variety of topics. As an aesthetic, it’s a unique combination of tone, length, and variety + familiarity of topics that scratches a particular literary itch.
Markets are the worst form of economy except for all those other forms that have been tried from time to time.
I used this line when having a conversation at a party with a bunch of people who turned out to be communists, and the room went totally silent except for one dude who was laughing.
It was the silence of sullen agreement.
Are rationalist ideas always going to be offensive to just about everybody who doesn’t self-select in?
One loved one was quite receptive to Chesterton’s Fence the other day. Like, it stopped their rant in its tracks and got them on board with a different way of looking at things immediately.
On the other hand, I routinely feel this weird tension. Like, to explain why I think as I do, I’d need to go through some basic rational concepts. But I expect most people I know would hate it.
I wish we could figure out ways of getting this stuff across that was fun, made it seem agreeable and sensible and non-threatening.
Less negativity—we do sooo much critique. I was originally attracted to LW partly as a place where I didn’t feel obligated to participate in the culture war. Now, I do, just on a set of topics that I didn’t associate with the CW before LessWrong.
My guess? This is totally possible. But it needs a champion. Somebody willing to dedicate themselves to it. Somebody friendly, funny, empathic, a good performer, neat and practiced. And it needs a space for the educative process—a YouTube channel, a book, etc. And it needs the courage of its convictions. The sign of that? Not taking itself too seriously, being known by the fruits of its labors.
Traditionally, things like this are socially achieved by using some form of “good cop, bad cop” strategy. You have someone who explains the concepts clearly and bluntly, regardless of whom it may offend (e.g. Eliezer Yudkowsky), and you have someone who presents the concepts nicely and inoffensively, reaching a wider audience (e.g. Scott Alexander), but ultimately they both use the same framework.
The inoffensiveness of Scott is of course relative, but I would say that people who get offended by him are really not the target audience for rationalist thought. Because, ultimately, saying “2+2=4” means offending people who believe that 2+2=5 and are really sensitive about it; so the only way to be non-offensive is to never say anything specific.
If a movement only has the “bad cops” and no “good cops”, it will be perceived as a group of assholes. Which is not necessarily bad if the members are powerful; people want to join the winning side. But without actual power, it will not gain wide acceptance. Most people don’t want to go into unnecessary conflicts.
On the other hand, a movement with “good cops” without “bad cops” will get its message diluted. First, the diplomatic believers will dilute their message in order not to offend anyone. Their fans will further dilute the message, because even the once-diluted version is too strong for normies’ taste. At the end, the message may gain popular support… kind of… because the version that gains the popular support will actually contain maybe 1% of the original message, but mostly 99% of what the normies already believed, peppered by the new keywords.
The more people present rationality using different methods, the better, because each of them will reach a different audience. So I completely approve of the approach you suggest… in addition to the existing ones.
I need to try a lot harder to remember that this is just a community full of individuals airing their strongly held personal opinions on a variety of topics.
Those opinions often have something in common—respect for the scientific method, effort to improve one’s rationality, concern about artificial intelligence—and I like to believe it is not just a random idiosyncratic mix (a bunch of random things Eliezer likes), but different manifestations of the same underlying principle (use your intelligence to win, not to defeat yourself). However, not everyone is interested in all of this.
And I would definitely like to see “somebody friendly, funny, empathic, a good performer, neat and practiced” promoting these values in a YouTube channel or in books. But that requires a talent I don’t have, so I can only wait until someone else with the necessary skills does it.
This reminded me of the YouTube channel of Julia Galef, but the latest videos there are 3 years old.
You’re both assuming that you have a set of correct ideas coupled with bad PR… but how well are Bayes, Aumann, and MWI (e.g.) actually doing?
Like, to explain why I think as I do, I’d need to go through some basic rational concepts.
I believe that if the rational concepts are pulling their weight, it should be possible to explain the way the concept is showing up concretely in your thinking, rather than justifying it in the general case first.
As an example, perhaps your friend is protesting your use of anecdotes as data, but you wish to defend it as Bayesian, if not scientific, evidence. Rather than explaining the difference in general, I think you can say “I think that it’s more likely that we hear this many people complaining about an axe murderer downtown if that’s in fact what’s going on, and that it’s appropriate for us to avoid that area today. I agree it’s not the only explanation and you should be able to get a more reliable sort of data for building a scientific theory, but I do think the existence of an axe murderer is a likely enough explanation for these stories that we should act on it”
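In odds form, the underlying arithmetic looks like this (the factor of 10 is a made-up number for illustration, not anything from the conversation):

```latex
\underbrace{\frac{P(\text{murderer}\mid\text{reports})}{P(\text{no murderer}\mid\text{reports})}}_{\text{posterior odds}}
=
\underbrace{\frac{P(\text{reports}\mid\text{murderer})}{P(\text{reports}\mid\text{no murderer})}}_{\text{likelihood ratio}\ \approx\ 10}
\times
\underbrace{\frac{P(\text{murderer})}{P(\text{no murderer})}}_{\text{prior odds}}
```

Even a modest prior gets multiplied by the likelihood ratio, which is all the “Bayesian, if not scientific, evidence” framing needs: enough of a shift to justify avoiding the area today, not enough to build a theory on.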
If I’m right that this is generally possible, then I think this is a route around the feeling of being trapped on the other side of an inferential gap (which is how I interpreted the ‘weird tension’)
I think you’re right, when the issue at hand is agreed on by both parties to be purely a “matter of fact.”
As soon as social or political implications crop in, that’s no longer a guarantee.
But we often pretend like our social/political values are matters of fact. The offense arises when we use rational concepts in a way that gives the lie to that pretense. Finding an indirect and inoffensive way to present the materials and let them deconstruct their pretenses is what I’m wishing for here. LW has a strong culture surrounding how these general-purpose tools get applied, so I’d like to see a presentation of the “pure theory” that’s done in an engaging way not obviously entangled with this blog.
The alternative is to use rationality to try and become savvier social operators. This can be “instrumental rationality” or it can be “dark arts,” depending on how we carry it out. I’m all for instrumental rationality, but I suspect that spreading rational thought further will require that other cultural groups appropriate the tools to refine their own viewpoints rather than us going out and doing the convincing ourselves.
I’m annoyed that I think so hard about small daily decisions.
Is there a simple and ideally general pattern to not spend 10 minutes doing arithmetic on the cost of making burritos at home vs. buying the equivalent at a restaurant? Or am I actually being smart somehow by spending the time to cost out that sort of thing?
“Spend no more than 1 minute per $25 spent and 2% of the price to find a better product.”
This heuristic cashes out to:
Over a year of weekly $35 restaurant meals, spend about $35 and an hour and a quarter finding better restaurants or meals.
For $250 of monthly consumer spending, spend a total of $5 and 10 minutes per month finding a better product.
For bigger buys of around $500 (about 2x/year), spend $10 and 20 minutes on each purchase.
Buying a used car ($15,000), I’d spend $300 and 10 hours. I could use the $300 to hire somebody at $25/hour to test-drive an additional 5-10 cars, a mechanic to inspect it on the lot, or a good negotiator to help me secure a lower price.
For work over the next year ($30,000), spend $600 and 20 hours.
Getting a Master’s degree ($100,000 including opportunity costs), spend 66 hours and $2,000 finding the best school.
Choosing from among STEM career options ($100,000 per year), spend about 66 hours and $2,000 per year exploring career decisions.
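A minimal calculator sketch of that rule of thumb, assuming Python (the constants are just the 1-minute-per-$25 and 2% figures above):

```python
def optimization_budget(price_usd: float) -> tuple[float, float]:
    """Return (minutes, dollars) worth spending to find a better option,
    per the rule of thumb: 1 minute per $25 spent, plus 2% of the price."""
    minutes = price_usd / 25
    dollars = 0.02 * price_usd
    return minutes, dollars

# Re-deriving a couple of the examples above:
for label, price in [("yearly restaurant meals", 52 * 35), ("used car", 15_000)]:
    minutes, dollars = optimization_budget(price)
    print(f"{label}: ~{minutes / 60:.1f} hours and ${dollars:.0f}")
```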
Comparing that with my own patterns, that simplifies to:
Spend much less time thinking about daily spending. You’re correctly calibrated for ~$500 buys. Spend much more time considering your biggest buys and decisions.
For some (including younger-me), the opposite advice was helpful—I’d agonize over “big” decisions, without realizing that the oft-repeated small decisions actually had a much larger impact on my life.
To account for that, I might recommend you notice cache-ability and repetition, and budget on longer timeframes. For monthly spending, there’s some portion that’s really $120X decade spending (you can optimize once, then continue to buy monthly for the next 10 years), a bunch that’s probably $12Y of annual spending, and some that’s really $Z that you have to re-consider every month.
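As a sketch of that adjustment (assuming Python; the example purchase and numbers are hypothetical), multiply the per-purchase price by how many times the cached decision gets reused before budgeting optimization time against it:

```python
def effective_price(per_purchase_usd: float, repeats: int) -> float:
    """Price to budget optimization effort against when one decision is made once and reused."""
    return per_purchase_usd * repeats

# A $10/month purchase you expect to keep re-buying for ~10 years is really a $1,200 decision.
print(effective_price(10, 12 * 10))  # -> 1200
```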
Also, avoid the mistake of inflexible permissions. Notice when you’re spending much more (or less!) time optimizing a decision than your average; plenty of decisions actually benefit from the extra time, while for plenty of others additional time or money barely changes the marginal outcome, so you should spend less time on them.
I wonder if your problem as a youth was in agonizing over big decisions, rather than learning a productive way to methodically think them through. I have lots of evidence that I underthink big decisions and overthink small ones. I also tend to be slow yet ultimately impulsive in making big changes, and fast yet hyper-analytical in making small changes.
Daily choices have low switching and sunk costs. Everybody’s always comparing, so one brand at a given price point tends to be about as good as another.
But big decisions aren’t just big spends. They’re typically choices that you’re likely stuck with for a long time to come. They serve as “anchors” to your life. There are often major switching and sunk costs involved. So it’s really worthwhile anchoring in the right place. Everything else will be influenced or determined by where you’re anchored.
The 1 minute/$25 + 2% of purchase price rule takes only a moment’s thought. It’s a simple but useful rule, and that’s why I like it.
There are a few items or services that are relatively inexpensive, but have high switching costs and are used enough or consequential enough to need extra thought. Examples include pets, tutors, toys for children, wedding rings, mattresses, acoustic pianos, couches, safety gear, and textbooks. A heuristic and acronym for these exceptions might be CHEAPS: “Is it a Curriculum? Is it Heavy? Is it Ergonomic? Is it Alive? Is it Precious? Is it Safety-related?”
I’ve been thinking about honesty over the last 10 years. It can play into at least three dynamics.
One is authority and resistance. The revelation or extraction of information, and the norms, rules, laws, and incentives surrounding this, including moral concepts, are for the primary purpose of shaping the power dynamic.
The second is practical communication. Honesty is the idea that specific people have a “right to know” certain pieces of information from you, and that you meet this obligation. There is wide latitude for “white lies,” exaggeration, storytelling, “noble lies,” self-protective omissions, image management, and so on in this conception. It’s up to the individual’s sense of integrity to figure out what the “right to know” entails in any given context.
The third is honesty as a rigid rule. Honesty is about revealing every thought that crosses your mind, regardless of the effect it has on other people. Dishonesty is considered a person’s natural and undesirable state, and the ability to reveal thoughts regardless of external considerations is considered a form of personal strength.
Better rationality should lead you to think less, not more. It should make you better able to
Set a question aside
Fuss less over your decisions
Accept accepted wisdom
while still having good outcomes. What’s your rationality doing to you?
I like this line of reasoning, but I’m not sure it’s actually true. “better” rationality should lead your thinking to be more effective—better able to take actions that lead to outcomes you prefer. This could express as less thinking, or it could express as MORE thinking, for cases where return-to-thinking is much higher due to your increase in thinking power.
Whether you’re thinking less for “still having good outcomes”, or thinking the same amount for “having better outcomes” is a topic for introspection and rationality as well.
That’s true, of course. My post is really a counter to a few straw-Vulcan tendencies: intelligence signalling, overthinking everything, and being super argumentative all the time. Just wanted to practice what I’m preaching!
How should we weight and relate the training of our mind, body, emotions, and skills?
I think we are like other mammals. Imitation and instinct lead us to cooperate, compete, produce, and take a nap. It’s a stochastic process that seems to work OK, both individually and as a species.
We made most of our initial progress in chemistry and biology through very close observation of small-scale patterns. Maybe a similar obsessiveness toward one semi-arbitrarily chosen aspect of our own individual behavior would lead to breakthroughs in self-understanding?
I’m experimenting with a format for applying LW tools to personal social-life problems. The goal is to boil down situations so that similar ones will be easy to diagnose and deal with in the future.
To do that, I want to arrive at an acronym that’s memorable, defines an action plan and implies when you’d want to use it. Examples:
OSSEE Activity—“One Short Simple Easy-to-Exit Activity.” A way to plan dates and hangouts that aren’t exhausting or recipes for confusion.
DAHLIA—“Discuss, Assess, Help/Ask, Leave, Intervene, Accept.” An action plan for how to deal with annoying behavior by other people. Discuss with the people you’re with, assess the situation, offer to help or ask the annoying person to stop, leave if possible, intervene if not, and accept the situation if the intervention doesn’t work out.
I came up with these by doing a brief post-mortem analysis on social problems in my life. I did it like this:
Describe the situation as fairly as possible, both what happened and how it felt to me and others.
Use LW concepts to generalize the situation and form an action plan. For example, OSSEE Activity arose from applying the concept of “diminishing marginal returns” to my outings.
Format the action plan into a mnemonic, such as an acronym.
Experiment with applying the action plan mnemonic in life and see if it leads you to behave differently and proves useful.
Idea for online dating platform:
Each person chooses a charity and an amount of money that others must donate to swipe right on them. This leads to higher-fidelity match information while also giving you a meaningful topic to kick the conversation off.
If a gears-level understanding becomes the metric of expertise, what will people do?
Go out and learn until they have a gears-level understanding?
Pretend they have a gears-level understanding by exaggerating their superficial knowledge?
Feel humiliated because they can’t explain their intuition?
Attack the concept of gears-level understanding on a political or philosophical level?
Use the concept of gears-level understanding to debug your own knowledge. Learn for your own sake, and allow your learning to naturally attract the credibility it deserves.
Evaluating expertise in others is a different matter. Probably you want to use a cocktail of heuristics:
Can they articulate a gears-level understanding?
Do they have the credentials and experience you’d expect someone with deep learning in the subject to have?
Can they improvise successfully when a new problem is thrown at them?
Do other people in the field seem to respect them?
I’m sure there are more.
Explanation for why displeasure would be associated with meaningfulness, even though in fact meaning comes from pleasure:
Meaningful experiences involve great pleasure. They also may come with small pains. Part of how you quantify your great pleasure is the size of the small pain that it superseded.
Pain does not cause meaning. It is a test for the magnitude of the pleasure. But only pleasure is a causal factor for meaning.
I looked through that post but didn’t see any support for the claim that meaning comes from pleasure.
My own theory is that meaning comes from values, and both pain and pleasure are a way to connect to the things we value, so both are associated with meaning.
I’m a classically trained pianist. Music practice involves at least four kinds of pain: loneliness, frustration, monotony, and performance anxiety (physical pain can be a fifth).
I perceive none of these to add meaning to music practice. In fact, it was loneliness, frustration, and monotony that caused my music practice to be slowly drained of its meaning and led me ultimately to stop playing, even though I highly valued my achievements as a classical pianist and music teacher. If there’d been an issue with physical pain, that would have been even worse.
I think what pain can do is add flavor to a story. And we use stories as a way to convey meaning. But in that context, the pain is usually illustrating the pleasures of the experience or of the positive achievement. In the context of my piano career, I was never able to use these forms of pain as a contrast to the pleasures of practice and performance. My performance anxiety was too intense, and so it also was not a source of pleasure.
By contrast, I herded sheep on the Navajo reservation for a month in the middle of winter. That experience generated many stories. Most of them revolve around a source of pain, or a mistake. But that pain or mistake serves to highlight an achievement.
That achievement could be the simple fact of making it through that month while providing a useful service to my host. Or moments of success within it: getting the sheep to drink from the hole I cut in the icy lake, busting a tunnel through the drifts with my body so they could get home, finding a mother sheep that had gotten lost when she was giving birth, not getting cannibalized by a Skinwalker.
Those make for good stories, but there is pleasure in telling those stories. I also have many stories from my life that are painful to tell. Telling them makes me feel drained of meaning.
So I believe that storytelling has the ability to create pleasure out of painful or difficult memories. That is why it feels meaningful: it is pleasurable to tell stories. And being a good storyteller can come with many rewards. The net effect of a painful experience can be positive in the long run if it lends itself to a lot of good storytelling.
Where do values enter the picture?
I think it’s because “values” is a term for the types of stories that give us pleasure. My community gets pleasure out of the stories about my time on the Navajo reservation. They also feel pleasure in my story about getting chased by a bear. I know which of my friends will feel pleasure in my stories from Burning Man, and who will find them uncomfortable.
So once again, “values” is a gloss for the pleasure we take in certain types of stories. Meaning comes from pleasure; it appears to come from values because values also come from pleasure. Meaning can come from pain only indirectly. Pain can generate stories, which generate pleasure in the telling.
“values” is a term for the types of stories that give us pleasure.
It really depends on what you mean by “pleasure”. If pleasure is just “things you want”, then almost tautologically meaning comes from pleasure, since you want meaning.
If instead, pleasure is a particular phenomenological feeling similar to feeling happy or content, I think that many of us actually WANT the meaning that comes from living our values, and it also happens to give us pleasure. I think that there are also people that just WANT the pleasure, and if they could get it while ignoring their values, they would.
I call this the “Heaven/Enlightenment” dichotomy, and I think it’s a frequent misunderstanding.
I’ve seen some people say “all we care about is feeling good, and people who think they care about the outside world are confused.” I’ve also seen people say “All we care about is meeting our values, and people who think it’s about feeling good are confused.”
Personally, I think that people are more towards one side of the spectrum or the other along different dimensions, and I’m inclined to believe both sides about their own experience.
I think we can consider pleasure, along with altruism, consistency, rationality, fitting the categorical imperative, and so forth as moral goods.
People have different preferences for how they trade off one against the other when they’re in conflict. But they of course prefer them not to be in conflict.
What I’m interested is not what weights people assign to these values—I agree with you that they are diverse—but on what causes people to adopt any set of preferences at all.
My hypothesis is that it’s pleasure. Or more specifically, whatever moral argument most effectively hijacks an individual person’s psychological reward system.
So if you wanted to understand why another person considers some strange action or belief to be moral, you’d need to understand why the belief system that they hold gives them pleasure.
Some predictions from that hypothesis:
People who find a complex moral argument unpleasant to think about won’t adopt it.
People who find a moral community pleasant to be in will adopt its values.
A moral argument might be very pleasant to understand, rehearse, and think about, and unpleasant to abandon. It might also be unpleasant in the actions it motivates its subscriber to undertake. It will continue to exist in their mind if the balance of pleasure in belief to displeasure in action is favorable.
Deprogramming somebody from a belief system you find abhorrent is best done by giving them alternative sources of “moral pleasure.” Examples of this include the ways people have deprogrammed members of cults and the KKK: by including them in their social gatherings, such as Jewish religious dinners, and making them feel welcome. Eventually, the pleasure of adopting the moral system of that shared community displaces whatever pleasure they were deriving from their former belief system.
Paying somebody in money and status to uphold a given belief system is a great way to keep them doing it, no matter how silly it is.
If you want people to do more of a painful but necessary action X, helping them feel compensating forms of moral pleasure is a good way to go about it. Effective Altruism is a great example. By helping people understand how effective donations or direct work can save lives, it gives people a feeling of heroism. Its failure mode is making people feel like the demands are impossible, and the displeasure of that disappointment is a primary issue in that community.
Another good way to encourage more of a painful but necessary action X is to teach people how to shape it into a good story that they and others will appreciate in the telling. Hence the story-fication of charity.
Many people don’t give to charity because their community disparages it as “do-gooderism,” as futile, as bragging, or as a tasteless display of wealth and privilege. If you want people to give more to charity, you have to give people a way of being able to enjoy talking about their charitable contributions. One solution is to form a community in which that’s openly accepted and appreciated. Like EA.
Likewise for the rationality community. If you want people to do more good epistemology outside of academia, give them an outlet where that’ll be appreciated and an axis from where it can be spread.
Sci-hub has moved to https://sci-hub.st/
Do you treat “the dark arts” as a set of generally forbidden behaviors, or as problematic only in specific contexts?
As a war of good and evil or as the result of trade-offs between epistemic rationality and other values?
Do you shun deception and manipulation, seek to identify contexts where they’re ok or wrong, or embrace them as a key to succeeding in life?
Do you find the dark arts dull, interesting, or key to understanding the world, regardless of whether or not you employ them?
Asymmetric weapons may be the only source of edge for the truth itself. But should the side of the truth therefore eschew symmetric weapons?
What is the value of the label/metaphor “dark arts/dark side?” Why the normative stance right from the outset? Isn’t the use of this phrase, with all its implications of evil intent or moral turpitude, itself an example of the dark arts? An attempt to halt the workings of other minds, or of our own?
There are things like “lying for a good cause”, which is a textbook example of what will go horribly wrong because you almost certainly underestimate the second-order effects. Like the “do not wear face masks, they are useless” expert advice for COVID-19, which was a “clever” dark-arts move aimed to prevent people from buying up necessary medical supplies. A few months later, hundreds of thousands have died (also) thanks to this advice.
(It would probably be useful to compile a list of lying for a good cause gone wrong, just to drive home this point.)
For a historical example of people promoting the use of dark arts within the rationalist community, consider Intentional Insights. As it turned out, the organization was also using the dark arts against the rationalist community itself. (There is a more general lesson here: whenever a fan of dark arts tries to make you see the wisdom of their ways, you should assume that at this very moment they are probably already using the same techniques on you. Why wouldn’t they, given their expressed belief that this is the right thing to do?)
The general problem with lying is that people are bad at keeping multiple independent models of the world in their brains. The easiest, instinctive way to convince others about something is to start believing it yourself. Today you decide that X is a strategic lie necessary for achieving goal Y, and tomorrow you realize that actually X is more correct than you originally assumed (this is how self-deception feels from inside). This is in conflict with our goal to understand the world better. Also, how would you strategically lie as a group? Post it openly online: “Hey, we are going to spread the lie X for instrumental reasons, don’t tell anyone!” :)
Then there are things like “using techniques-orthogonal-to-truth to promote true things”. Here I am quite guilty myself, because I have long ago advocated turning the Sequences into a book, reasoning, among other things, that for many people, a book is inherently higher-status than a website. Obviously, converting a website to a book doesn’t increase its truth value. This comes with smaller risks, such as getting high on your own supply (convincing ourselves that articles in the book are inherently more valuable than those that didn’t make it for whatever reason, e.g. being written after the book was published), or wasting too many resources on things that are not our goal.
But at least, in this category, one can openly and correctly describe their beliefs and goals.
Metaphorically, reason is traditionally associated with vision/light (e.g. “enlightenment”), ignorance and deception with blindness/darkness. The “dark side” also references Star Wars, which this nerdy audience is familiar with. So, if the use of the term itself is an example of dark arts (which I suppose it is), at least it is the type where I can openly explain how it works and why we do it, without ruining its effect.
But does it make us update too far against the use of deception? Uhm, I don’t know what the optimal amount of deception is. Unlike Kant, I don’t believe it’s literally zero. I also believe that people err on the side of lying more than is optimal, so a nudge in the opposite direction is on average an improvement, but I don’t have a proof of this.
We already had words for lies, exaggerations, incoherence, and advertising. Along with a rich discourse of nuanced critiques and defenses of each one.
The term “dark arts” seems to lump all these together, then uses cherry picked examples of the worst ones to write them all off. It lacks the virtue of precision. We explicitly discourage this way of thinking in other areas. Why do we allow it here?
How to reach simplicity?
You can start with complexity, then simplify. But that’s style.
What would it mean to think simple?
I don’t know. But maybe...
Accept accepted wisdom.
Limit your words.
Rehearse your core truths, think new thoughts less.
Start with inner knowledge. Intuition. Genius. Vision. Only then, check yourself.
Argue if you need to, but don’t ever debate. Other people can think through any problem you can. Don’t let them stand in your way just because they haven’t yet.
If you know, let others find their own proofs. Move on with the plan.
Be slow. Rest. Deliberate. Daydream. But when you find the right project, unleash everything you have. Learn what you need to learn and get the job done right.
Question re: “Why Most Published Research Findings are False”:
Let R be the ratio of the number of “true relationships” to “no relationships” among those tested in the field… The pre-study probability of a relationship being true is R/(R + 1).
What is the difference between “the ratio of the number of ‘true relationships’ to ‘no relationships’ among those tested in the field” and “the pre-study probability of a relationship being true”?
You could think of it this way: If R is the ratio of (combinations that total N on two dice) to (combinations that don’t total N on two dice), then the chance of (rolling N on two dice) is R/(R+1). For example, there are 2 ways to roll a 3 (1 and 2, and 2 and 1) and 34 ways to not roll a 3. The probability of rolling a 3 is thus (2/34)/(1+2/34)=2/36.
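Spelled out, the two are the same quantity in different parameterizations: R is the prior odds, and R/(R+1) converts odds to a probability. With T true and F null relationships among those tested:

```latex
R = \frac{T}{F}, \qquad
P(\text{true}) = \frac{T}{T+F} = \frac{T/F}{T/F+1} = \frac{R}{R+1}.
```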
You can justify all sorts of spiritual ideas by a few arguments:
They’re instrumentally useful in producing good feelings between people.
They help you escape the typical mind fallacy.
They’re memetically refined, which means they’ll fit better with your intuition than, say, trying to guess where the people you know fit on the OCEAN scale.
They’re provocative and generative of conversation in a way that scientific studies aren’t. Partly that’s because the language they’re wrapped in is more intriguing, and partly it’s because everybody’s on a level playing field.
They’re a way to escape the trap of intelligence-signalling, and they lower the barrier for verbalizing creative ideas. If you’re able to talk about astrology, it lets people feel like they have permission to babble.
They’re aesthetically pleasing if you don’t take them too seriously.
I would be interested in arguments about why we should eschew them that don’t resort to activist ideas of making the world a “better place” by purging the world of irrationality and getting everybody on board with a more scientific framework for understanding social reality or psychology.
I’m more interested in why individual people should anticipate that exploring these spiritual frameworks will make their lives worse, either hedonistically or by some reasonable moral framework. Is there a deontological or utilitarian argument against them?
A checklist for the strength of ideas:
Is it worth discussing?
Is it worth studying?
Is it worth using as a heuristic?
Is it worth advertising?
Is it worth regulating or policing?
Worthwhile research should help the idea move either forward or backward through this sequence.
Why isn’t California investing heavily in desalination? Has anybody thought through the economics? Is this a live idea?
There’s plenty of research going on, but AFAIK, no particular large-scale push for implementation. I haven’t studied the topic, but my impression is that this is mostly something they can get by with current sources and conservation for a few decades yet. Desalinization is expensive, not just in terms of money, but in terms of energy—scaling it up before absolutely needed is a net environmental harm.
This article seems to cover the case. The economics seem unclear. The politics seem bad because it would mean taking on the environmentalists.
My modified Pomodoro has been working for me. I set a timer for 5 minutes and start working. Every 5 minutes, I just reset the timer and continue.
For some reason it gets my brain into “racking up points” mode. How many 5-minute sessions can I do without stopping or getting distracted? Aware as I am of my distractibility, this has been an unquestionably powerful technique for me to expand my attention span.
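A minimal sketch of that loop, assuming Python (the 5-minute interval and the session counter are just the scheme described above):

```python
import time

INTERVAL_MINUTES = 5

def micro_pomodoro() -> None:
    """Count uninterrupted 5-minute blocks; stop with Ctrl-C when distracted or done."""
    sessions = 0
    try:
        while True:
            time.sleep(INTERVAL_MINUTES * 60)  # work until the timer "rings"
            sessions += 1
            print(f"{sessions} block(s) done; timer reset, keep going")
    except KeyboardInterrupt:
        print(f"\nScore for this run: {sessions} block(s) of {INTERVAL_MINUTES} minutes")

if __name__ == "__main__":
    micro_pomodoro()
```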
All actions have an exogenous component and an endogenous component. The weights we perceive differ from action to action, context to context.
The endogenous component has causes and consequences that come down to the laws of physics.
The exogenous component has causes and consequences from its social implications. The consequences, interpretation, and even the boundaries of where the action begins and ends are up for grabs.
Failure modes in important relationships
Being quick and curt when they want to connect and share positive emotions
Meeting negative emotions with blithe positive emotions (i.e., pretending they’re not angry, anxious, etc.)
Mirroring negative emotions: meeting anxiety with anxiety, anger with anger
Being uncompromising, overly “logical”/assertive to get your way in the moment
Not trying to express what you want, even to yourself
Compromising/giving in, hoping next time will be “your turn”
What helps instead:
Focusing to identify your own elusive feelings
Empathy to identify and express the other person’s needs, feelings, information. Look for a “that’s right.” You’re not rushing to win, nor rushing to receive empathy. The more they reveal, the better it is for you (and for them, because now you can help find a high-value trade rather than a poor compromise).
Good reading habit #1: Turn absolute numbers into proportions and proportions into absolute numbers.
For example, in reading “With almost 1,000 genes discovered to be differentially expressed between low and high passage cells [in mouse insulinoma cells],” look up the number of mouse genes (25,000) and turn it into a percentage so that you can see that 1,000 genes is 4% of the mouse genome.
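A trivial sketch of the habit in Python, in both directions (the 25,000 figure is the rough mouse gene count used above):

```python
MOUSE_GENES = 25_000  # rough total number of mouse genes, as cited above

differentially_expressed = 1_000
print(f"{differentially_expressed / MOUSE_GENES:.0%} of the genome")  # absolute -> proportion: 4%
print(f"{0.04 * MOUSE_GENES:.0f} genes")                              # proportion -> absolute: 1000
```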
What is the difference between playing devil’s advocate and steelmanning an argument? I’m interested in any and all attempts to draw a useful distinction, even if they’re only partial.
Devil’s advocate comes across as being deliberately disagreeable, while steelmanning comes across as being inclusive.
Devil’s advocate involves advancing a clearly-defined argument. Steelmanning is about clarifying an idea that gets a negative reaction due to factors like word choice or some other superficial factor.
Devil’s advocate is a political act and is only relevant in a conversation between two or more people. Steelmanning can be social, but it can also be done entirely in conversation with yourself.
Devil’s advocate is about winning an argument, and can be done even if you know exactly how the argument goes and know in advance that you’ll still disagree with it when you’re done making it. Steelmanning is about exploring an idea without preconceptions about where you’ll end up.
Devil’s advocate doesn’t necessarily mean advancing the strongest argument, only the one that’s most salient, hardest for your conversation partner to argue against, or most complex or interesting. Steelmanning is about searching for an argument that you genuinely find compelling, even if it’s as simple as admitting your own lack of expertise and the complexity of the issue.
Devil’s advocate can be a diversionary or stalling tactic, meant to delay or avoid an unwanted conclusion of a larger argument by focusing in on one of its minor components. Steelmanning is done for its own sake.
Devil’s advocate comes with a feeling of tension, attention-hogging, and opposition. Steelmanning comes with a feeling of calm, curiosity, and connection.
Empathy is inexpensive and brings surprising benefits. It takes a little bit of practice and intent. Mainly, it involves stating the obvious assumption about the other person’s experience and desires. Offer things you think they’d want and that you’d be willing to give. Let them agree or correct you. This creates a good context in which high-value trades can occur, without needing a conscious, overriding, selfish goal to guide you from the start.
FWIW, I like to be careful about my terms here.
Empathy is feeling what the other person is feeling.
Understanding is understanding what the other person is feeling.
Active Listening is stating your understanding and letting the other person correct you.
Empathic listening is expressing how you feel what the other person is feeling.
In this case, you stated Empathy, but you’re really talking about Active Listening. I agree it’s inexpensive and brings surprising benefits.
I think whether it’s inexpensive isn’t that obvious. I think it’s a skill/habit, and it depends a lot on whether you’ve cultivated the habit, and on your mental architecture.
Active listening at a low level is fairly mechanical, and can still accrue quite a few benefits. It’s not as dependent on mental architecture as something like empathic listening. It does require some mindfulness to create the habit, but for most people I’d put it at only a slightly higher level of difficulty to acquire than, e.g., brushing your teeth.
Fair, but I think gaining a new habit like brushing your teeth is actually pretty expensive.
Empathy isn’t like brushing your teeth. It’s more like berry picking. Evolution built you to do it, you get better with practice, and it gives immediate positive feedback. Nevertheless, due to a variety of factors, it is a sorely neglected practice, even when the bushes are growing in the alley behind your house.
I don’t think what I’m calling empathy, either in common parlance or in actual practice, decomposes neatly. For me, these terms comprise a model of intuition that obscures with too much artificial light.
In that case, I don’t agree that the thing you’re claiming has low costs. As Raemon says in another comment this type of intuition only comes easily to certain people. If you’re trying to lump together the many skills I just pointed to, some are easy for others and some harder.
If however, the thing you’re talking about is the skill of checking in to see if you understand another person, then I would refer to that as active listening.
Of course, you’re right. This is more a reminder to myself and others who experience empathy as inexpensive.
Though empathy is cheap, there is a small barrier, a trivial inconvenience, a non-zero cost to activating it. I too often neglect it out of sheer laziness or forgetfulness. It’s so cheap and makes things so much better that I’d prefer to remember and use it in all conversations, if possible.
Chris Voss thinks empathy is key to successful negotiation.
Is there a line between negotiating and not, or only varying degrees of explicitness?
Should we be openly negotiating more often?
How do you define success, when at least one of his own examples of a “successful negotiation” is entirely giving over to the other side?
I think the point is that the relationship comes first, greed second. Negotiation for Voss is exchange of empathy, seeking information, being aware of your leverage. Those factors are operating all the time—that’s the relationship.
The difference between that and normal life? Negotiation is making it explicit.
Are there easy ways to extend more empathy in more situations? Casual texts? First meetings? Chatting with strangers?