How should we weight and relate the training of our mind, body, emotions, and skills?
I think we are like other mammals. Imitation and instinct lead us to cooperate, compete, produce, and take a nap. It’s a stochastic process that seems to work OK, both individually and as a species.
We made most of our initial progress in chemistry and biology through very close observation of small-scale patterns. Maybe a similar obsessiveness toward one semi-arbitrarily chosen aspect of our own individual behavior would lead to breakthroughs in self-understanding?
In programming, that’s true at first. But as projects increase in scope, there’s a risk of using an architecture that works when you’re testing, or for your initial feature set, but will become problematic in the long run.
For example, I just read an interesting article on how a project used a document store database (MongoDB), which worked great until their client wanted the software to start building relationships between data that had formerly been “leaves on the tree.” They ultimately had to convert to a traditional relational database.
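I don't know that project's actual schema, but here's a minimal, hypothetical sketch of the shape of the problem (all names and data invented): comments begin as nested "leaves" inside each post document, and lifting them into their own relational table is what makes cross-document relationships cheap to query.

```python
# Hypothetical document-store shape: comments are "leaves" nested in a post.
post_doc = {
    "_id": 1,
    "title": "Why we switched databases",
    "comments": [
        {"author": "alice", "text": "Great post!"},
        {"author": "bob", "text": "Disagree."},
    ],
}

# Finding all comments by one author across all posts means scanning every
# document. The relational version lifts comments into their own table:
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE comments (
        id INTEGER PRIMARY KEY,
        post_id INTEGER REFERENCES posts(id),
        author TEXT,
        text TEXT
    );
""")
conn.execute("INSERT INTO posts VALUES (1, ?)", (post_doc["title"],))
conn.executemany(
    "INSERT INTO comments (post_id, author, text) VALUES (?, ?, ?)",
    [(1, c["author"], c["text"]) for c in post_doc["comments"]],
)

# The cross-cutting relationship is now a one-line query:
print(conn.execute("SELECT text FROM comments WHERE author = 'alice'").fetchall())
# -> [('Great post!',)]
```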
Of course there are parallels in math, as when you try a technique for integrating or parameterizing that seems reasonable but won’t actually work.
Math is training for the mind, but not like you think
Just a hypothesis:
People have long thought that math is training for clear thinking. Just one version of this meme that I scooped out of the water:
“Mathematics is food for the brain,” says math professor Dr. Arthur Benjamin. “It helps you think precisely, decisively, and creatively and helps you look at the world from multiple perspectives . . . . [It’s] a new way to experience beauty—in the form of a surprising pattern or an elegant logical argument.”
But math doesn’t obviously seem to be the only way to practice precision, decisiveness, creativity, beauty, or broad perspective-taking. What about logic, programming, rhetoric, poetry, anthropology? This sounds like marketing.
Having studied calculus coming from a humanities background, I’d argue it differently.
Mathematics shares with a small set of related disciplines and games the quality of unambiguous objectivity. It also has the ~unique quality that you cannot bullshit your way through it: miss any link in the chain and the whole thing falls apart.
It can therefore serve as a more reliable signal, to self and others, of one’s own learning capacity.
Experiencing a subject like that can be training for the mind, because becoming successful at it requires cultivating good habits of study and expectations for coherence.
It was the silence of sullen agreement.
Markets are the worst form of economy except for all those other forms that have been tried from time to time.
What gives LessWrong staying power?
On the surface, it looks like this community should dissolve. Why are we attracting bread bakers, programmers, stock market investors, epidemiologists, historians, activists, and parents?
Each of these interests has a community associated with it, so why are people choosing to write about their interests in this forum? And why do we read other people’s posts on this forum when we don’t have a prior interest in the topic?
Rationality should be the art of general intelligence. It’s what makes you better at everything. If practice is the wood and nails, then rationality is the blueprint.
To determine whether we’re actually studying rationality, we need to check whether it applies to everything. So when I read posts applying the same technique to a wide variety of superficially unrelated subjects, it confirms that the technique is general, and helps me see how to apply it productively.
This points at a hypothesis, which is that general intelligence is a set of defined, generally applicable techniques. They apply across disciplines. And they apply across problems within disciplines. So why aren’t they generally known and appreciated? Shouldn’t they be the common language that unites all disciplines?
Perhaps it’s because they’re harder to communicate and appreciate. If I’m an expert baker, I can make another delicious loaf of bread. Or I can reflect on what allows me to make such tasty bread, and speculate on how the same techniques might apply to architecture, painting, or mathematics. Most likely, I’m going to choose to bake bread.
This is fine, until we start working on complex, interdisciplinary projects. Then general intelligence becomes the bottleneck for having enough skill to get the project done. Sounds like the 21st century. We’re hitting the limits of what’s achievable through sheer persistence in a single specialty, and we’re learning to automate that kind of specialized work away.
What’s left is creativity, which arises from structured decision-making. I’ve noticed that the longer I practice rationality, the more creative I become. I believe that’s because it gives me the resources to turn an intuition into a specified problem, envision a solution, create a sort of Fermi approximation to give it definition, and work out how to develop the practical skills and relationships that will let me bring it into being.
If I’m right, applying these techniques will require deliberate practice: both practicing them individually and synthesizing them, until they become natural.
The challenge is that most specific skills lend themselves to that naturally. If I want to become a pianist, I practice music until I’m good. If I want to be a baker, I bake bread. To become an architect, design buildings.
What exactly do you do to practice the general techniques of rationality? I can imagine a few methods:
Participate in superforecasting tournaments, where Bayesian and gears/policy level thinking are the known foundational techniques.
Learn a new skill, and as you go, notice the problems you encounter along the way. Try to imagine what a general solution to that problem might look like. Then go out and build it.
Pick a specific rationality technique, and try to apply it to every problem you face in your life.
The Lindy Effect gives no insight about which of the two books will be more “relevant”. For example, you could be comparing two political biographies, one on Donald Trump and the other on Jimmy Carter. They might both look equally interesting, but the Trump biography will make you look better informed about current affairs.
Choosing the timely rather than the timeless book is a valid rule. There’ll always be time for the timeless literature later, but the timely literature gives you the most bang for your buck if you read it now.
The Lindy Effect only tells you which of the two books is more likely to remain in print for another 40 years. It doesn’t even give you insight into how many total copies of each book will be sold. Maybe one will sell a million copies this year, 1,000 the next, and be out of print in two years. The other will sell a steady 10,000 copies per year for 40 years. The first will still outsell the second over that period.
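To make that arithmetic explicit, a quick sketch using only the hypothetical figures above:

```python
timely_total = 1_000_000 + 1_000   # huge first year, out of print in two
steady_total = 10_000 * 40         # 10,000 copies/year for 40 years
print(timely_total, steady_total)  # 1001000 400000
```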
What I find frustrating about the Lindy Effect, and other low-info priors like Chesterton’s Fence, is the way they get spun into heuristics for conservatism by conflating the precise claim they make with other claims that feel related but really aren’t.
Treat my post as utterly worthless for the purposes of actually dissecting the COVID response, and useful only as a quick sketch illustrating how a different choice of language could help us do that dissecting. I find it hard to express the CDC’s behavior using the “social vs. physical reality” frame. It’s more natural to me to say something like:
“The CDC didn’t trust the American public to conserve the limited early supply of masks for healthcare workers if they knew that masks were helpful. So they transferred the false information that masks were unhelpful, because it was easier to triangulate a single message that produced the desired behavior to the entire American public than it was to triangulate two separate messages—one for the untrustworthy public who’d buy up all the masks if they knew it would help; and one for the trustworthy public who would refrain from buying up the masks even knowing that the masks would be helpful to them. This decision reduced the transaction cost of buying compliance from the public in the short run by eliminating the triangulation problem, but also undermined trust in the CDC to an unknown extent. This may increase the transaction cost of buying compliance from the American public in the future due to the increased price of purchasing their future trust.”
Shoehorning this analysis into the trust/transfer/triangulation framework for transaction costs is both easy for me to do, and feels like it makes my thoughts clearer. I know precisely what I mean by those words. By contrast, I don’t think that “social reality” or “physical reality” carve nature at the joints, and I don’t think I’ll ever have a definition of them that I trust will be interpreted correctly by my audience. That’s why I choose not to use them.
Obviously it’s very clunky compared to just saying it in natural speech. But sometimes a linguistic straitjacket is helpful.
Oh I completely agree. My aim here wasn’t to give a complete account of pandemic weirdness, just to show how alternative language can help us dissect the situation in a more detailed way than seems possible with “social reality vs. physical reality.”
The economist Mike Munger identifies three types of transaction costs: trust, transfer, and triangulation. Triangulation is the cost of making a connection between buyer and seller; transfer is the cost of getting the good to change hands; and trust is the cost of ensuring that buyer and seller uphold their promises to each other.
Every meaningful choice starts with a transaction. I can’t pound a nail unless I buy a hammer. I can’t go to college unless I pay my tuition. And so these transaction costs, which are social in nature, will be a part of every choice we make.
As the world gets more materially abundant, we make more choices over the course of a lifetime. So it makes sense that social signalling occupies an increasing fraction of our lives.
Because of that, it doesn’t seem to me that “social reality” is something separate from physical reality, nor that it’s wasteful or perverted. I feel very comfortable saying that we have serious trust, transfer, and triangulation problems in our society.
If I had to give an account of the weird pandemic response in terms of transaction costs, it might go something like this:
Some people have high trust in science, and enough knowledge and engagement with the news media that it’s easy to transfer information about COVID-19 to them. Other people have low trust in science, media, and government, and their lack of scientific knowledge means that the cost of transferring scientific information about COVID-19 to them is high.
Compounding this problem, they don’t trust most other people to know better than they do, or to explain things to them in a way that’s accessible and respectful. There are probably people out there who could explain things to them in a way they’d be receptive to. But triangulating that interaction would be very difficult.
Given those difficulties, they “don’t buy” that COVID-19 is a serious issue, and turn to their favorite politician or pundit, who reinforces their distrust and misguided notions of how the world works, because their constituents have an interest in feeling wise and respected by powerful people, and it’s cheap to triangulate and transfer that feeling in exchange for a vote. By contrast, building trust in a smart pandemic response, transferring information about it, and triangulating that information with the voters is much more difficult.
I think that’s a fair account of COVID-19 weirdness, and I think it gets a little closer to the causal dynamics of this particular situation. “Social reality vs physical reality” and “simulacra” just feel a little bit hand-wavey for my taste.
Forgive me for asking a really basic question. But do you find it reasonable that we’re living in a world where ground truth is on its way out? And how can Baudrillard?
If we “live in a postmodern world… where meaning is composed wholly of simulacra, which does not actually reference the real world which our bodies live in...” what does this mean in practice? Right now I’m starving and I’m about to go pick up a burrito from the restaurant down the road. My back hurts from a long session in the lab. This situation seems pretty real to me, even though I can find elements in my world that are more disembodied. But honestly, most of the stuff I have around me has a physical, practical, tangible purpose.
It seems like simulacra aren’t so much about our direct, moment-to-moment experience of life as about how we think and talk about the systems we inhabit, especially on a broad scale. But that seems to me more a consequence of how difficult those systems are to put into words. It’s easy to find examples of deceptive or misleading behavior and speech. But it seems like part of the argument here is not just that there are 4 diagnostic categories of simulacra, but that we’re in the midst of a crisis, or even already inescapably lost in a totally consuming fantasy world. And I just don’t see that. It doesn’t seem even remotely true, regardless of whether or not that would be good.
Can somebody enlighten me as to how the “we live in the Matrix, and it’s inescapable” perspective might be reasonable, just on an everyday lived-experience level?
The central and original case of a good use of Chesterton’s Fence is a powerful political figure who chooses to hold off on imposing a radical change on society through military force, because he wants his economists to investigate the current practice for a few years/decades and understand its ramifications first. When we’re talking about a small group of individuals experimenting on a local level with a new way of doing things in their private lives, that’s a real stretch.
Since most of us don’t have the ear of our local dictator, it’s these non-central cases that we’re usually discussing. In such cases, I think the onus is on the reactionary to explain why a given experiment might need extra caution, as much as on the reformer to understand the norm’s purpose and explain why it’s nevertheless OK to try something new. Investigating norms takes time, and isn’t always a good use of it.
I need to try a lot harder to remember that this is just a community full of individuals airing their strongly held personal opinions on a variety of topics.
I think you’re right, when the issue at hand is agreed on by both parties to be purely a “matter of fact.”
As soon as social or political implications crop up, that’s no longer a guarantee.
But we often pretend like our social/political values are matters of fact. The offense arises when we use rational concepts in a way that gives the lie to that pretense. Finding an indirect and inoffensive way to present the materials and let them deconstruct their pretenses is what I’m wishing for here. LW has a strong culture surrounding how these general-purpose tools get applied, so I’d like to see a presentation of the “pure theory” that’s done in an engaging way not obviously entangled with this blog.
The alternative is to use rationality to try and become savvier social operators. This can be “instrumental rationality” or it can be “dark arts,” depending on how we carry it out. I’m all for instrumental rationality, but I suspect that spreading rational thought further will require that other cultural groups appropriate the tools to refine their own viewpoints rather than us going out and doing the convincing ourselves.
I’m experimenting with a format for applying LW tools to personal social-life problems. The goal is to boil down situations so that similar ones will be easy to diagnose and deal with in the future.
To do that, I want to arrive at an acronym that’s memorable, defines an action plan and implies when you’d want to use it. Examples:
OSSEE Activity—“One Short Simple Easy-to-Exit Activity.” A way to plan dates and hangouts that aren’t exhausting or recipes for confusion.
DAHLIA—“Discuss, Assess, Help/Ask, Leave, Intervene, Accept.” An action plan for how to deal with annoying behavior by other people. Discuss with the people you’re with, assess the situation, offer to help or ask the annoying person to stop, leave if possible, intervene if not, and accept the situation if the intervention doesn’t work out.
I came up with these by doing a brief post-mortem analysis on social problems in my life. I did it like this:
Describe the situation as fairly as possible, both what happened and how it felt to me and others.
Use LW concepts to generalize the situation and form an action plan. For example, OSSEE Activity arose from applying the concept of “diminishing marginal returns” to my outings.
Format the action plan into a mnemonic, such as an acronym.
Experiment with applying the action plan mnemonic in life and see if it leads you to behave differently and proves useful.
Are rationalist ideas always going to be offensive to just about everybody who doesn’t self-select in?
One loved one was quite receptive to Chesterton’s Fence the other day. Like, it stopped their rant in its tracks and got them on board with a different way of looking at things immediately.
On the other hand, I routinely feel this weird tension. Like, to explain why I think as I do, I’d need to go through some basic rational concepts. But I expect most people I know would hate it.
I wish we could figure out ways of getting this stuff across that were fun and made it seem agreeable, sensible, and non-threatening.
Less negativity—we do sooo much critique. I was originally attracted to LW partly as a place where I didn’t feel obligated to participate in the culture war. Now, I do, just on a set of topics that I didn’t associate with the CW before LessWrong.
My guess? This is totally possible. But it needs a champion. Somebody willing to dedicate themselves to it. Somebody friendly, funny, empathic, a good performer, neat and practiced. And it needs a space for the educative process—a YouTube channel, a book, etc. And it needs the courage of its convictions. The sign of that? Not taking itself too seriously, being known by the fruits of its labors.
I wonder if your problem as a youth was in agonizing over big decisions, rather than learning a productive way to methodically think them through. I have lots of evidence that I underthink big decisions and overthink small ones. I also tend to be slow yet ultimately impulsive in making big changes, and fast yet hyper-analytical in making small changes.
Daily choices have low switching and sunk costs. Everybody’s always comparing, so one brand at a given price point tends to be about as good as another.
But big decisions aren’t just big spends. They’re typically choices that you’re likely stuck with for a long time to come. They serve as “anchors” to your life. There are often major switching and sunk costs involved. So it’s really worthwhile anchoring in the right place. Everything else will be influenced or determined by where you’re anchored.
The 1 minute/$25 + 2% of purchase price rule takes only a moment’s thought. It’s a simple but useful rule, and that’s why I like it.
There are a few items or services that are relatively inexpensive, but have high switching costs and are used enough or consequential enough to need extra thought. Examples include pets, tutors, toys for children, wedding rings, mattresses, acoustic pianos, couches, safety gear, and textbooks. A heuristic and acronym for these exceptions might be CHEAPS: “Is it a Curriculum? Is it Heavy? Is it Ergonomic? Is it Alive? Is it Precious? Is it Safety-related?”
I’m annoyed that I think so hard about small daily decisions.
Is there a simple and ideally general pattern to not spend 10 minutes doing arithmetic on the cost of making burritos at home vs. buying the equivalent at a restaurant? Or am I actually being smart somehow by spending the time to cost out that sort of thing?
“Spend no more than 1 minute per $25 spent and 2% of the price to find a better product.”
This heuristic cashes out to:
Over a year of weekly $35 restaurant meals (about $1,800 total), spend about $36 and an hour and a quarter finding better restaurants or meals.
For $250 of monthly consumer spending, spend a total of $5 and 10 minutes per month finding a better product.
For bigger buys of around $500 (about 2x/year), spend $10 and 20 minutes on each purchase.
Buying a used car ($15,000), I’d spend $300 and 10 hours. I could use the $300 to hire somebody at $25/hour to test-drive an additional 5-10 cars, have a mechanic inspect the car on the lot, or have a good negotiator help me secure a lower price.
For work over the next year ($30,000), spend $600 and 20 hours.
Getting a Master’s degree ($100,000 including opportunity costs), spend 66 hours and $2,000 finding the best school.
Choosing from among STEM career options ($100,000 per year), spend about 66 hours and $2,000 per year exploring career decisions.
Comparing that with my own patterns, the advice simplifies to:
Spend much less time thinking about daily spending. You’re correctly calibrated for ~$500 buys. Spend much more time considering your biggest buys and decisions.
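Here’s the rule as a quick function; this is my own formalization of the heuristic above, and the name `search_budget` is invented for illustration:

```python
def search_budget(price_usd: float) -> tuple[float, float]:
    """Cap search effort at 1 minute per $25 of price,
    and 2% of the price in money."""
    minutes = price_usd / 25
    dollars = 0.02 * price_usd
    return minutes, dollars

for price in (35 * 52, 15_000, 100_000):
    minutes, dollars = search_budget(price)
    print(f"${price:>7,}: {minutes / 60:4.1f} hours, ${dollars:,.0f}")
# $  1,820:  1.2 hours, $36
# $ 15,000: 10.0 hours, $300
# $100,000: 66.7 hours, $2,000
```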
A checklist for the strength of ideas:
Is it worth discussing?
Is it worth studying?
Is it worth using as a heuristic?
Is it worth advertising?
Is it worth regulating or policing?
Worthwhile research should help the idea move either forward or backward through this sequence.