pearls before swine >:[
I think I would be a much better-trained rationalist if I did my basic rationality practices as regularly as I do physical exercise. The practices are:
I keep a list of things I have changed my mind about. Everything from geopolitics to my personal life.
I hardly ever go looking outside my echo chamber in search of things that could challenge/correct my beliefs, because I find it very effortful & unpleasant. (You know what else is effortful? Pushups).
I sometimes write letters to my future or past selves. I tried giving comprehensive life advice to my 16-year-old self, and ended up learning a lot about advice and spurious counterfactuals...
I sometimes do the Line of Retreat negative visualization. For immediate things, I tend to do it out loud while on a walk. For political beliefs, I slowly add to a private document over time and occasionally review it.
I maintain a list of my disagreements with various public thinkers. Helps me separate tribal thinking from truth-seeking.
I made an Anki deck for memorizing my defensive epistemology heuristics: “is this explainable by selection effects?”, Proving Too Much, “is this claim consistent with their previous claim?”, Reversal Test, etc.
I notice I’m at a point where I can make surprisingly good Fermi estimates if I spend a couple minutes thinking really hard, but usually not otherwise. Feels like there’s room for improvement.
Hard to practice with regularity, but these days I try to restrain myself from joining an in-progress debate when I overhear one, and instead sit on the sideline and patiently wait for openings to point out (double) cruxes.
Prompt myself to answer, “what would a slightly improved version of me do in this situation? What would I think if I were more rested and better hydrated?” It’s embarrassing how much mileage I have gotten out of role-playing as myself.
Privately journaling about my internal conflicts or difficult feelings. Simple but underpracticed (much like sit-ups).
I wrote down a page of predictions about the state of crypto tech in 2031, aiming for maximum specificity & minimal future embarrassment. I did something similar for Twitter in this post. I guess I might call this “conscientious futurism” or just “sticking my neck out”.
Pro/Con lists. They’re effortful & time-intensive. But so is half-focused vacillating, which is what I do by default.
So yeah, those are my rationality exercises, and I really wish I practiced them more regularly. It’s not exactly high-level SEAL-inspired training, and it’s pretty hard to verify, but...it feels like it makes me more rational.
I think a world of widespread economic literacy might be even better than it is depicted here. Speculative sci-fi has traditionally suffered from issues like predicting flying cars instead of smartphones. In Optimism About Social Technology, I wrote that my pet heuristic is:
Imagine how much worse the world would be if there were a worldwide ban on e.g. standard insurance contracts—no health insurance, no auto insurance, no fire insurance.
Now imagine how much better the world would be if we had not only those things but also widespread liability insurance...or dominant assurance contracts, or prediction markets, or something that hasn’t even been invented yet!
I think EY is off to a great start with Dath Ilan, but speculative fiction is hard, so I want there to be a whole genre of Dath Ilan-style world-building.
Conditional payments for paywalled content (after you pay for a piece of downloadable content and view it, you can decide after the fact if payments should go to the author or to proportionately refund previous readers)
-- Vitalik Buterin, On Radical Markets
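The quoted mechanism is compact enough to model directly. Here is a toy sketch in Python of how such a conditional paywall might settle payments; the class and method names are my own invention for illustration, not anything from Buterin's actual proposal. The one simplifying assumption is that every buyer pays the same fixed price, so a "proportionate" refund reduces to an even split among earlier buyers:

```python
class ConditionalPaywall:
    """Toy model of conditional payments for paywalled content.

    Each buyer escrows the full price up front. After viewing, they
    decide whether their payment goes to the author or is refunded,
    split evenly, to everyone who bought before them. (Hypothetical
    names/structure; since all buyers pay the same price, an even
    split is the same as a proportionate one.)
    """

    def __init__(self, price):
        self.price = price
        self.buyers = []          # purchase order
        self.author_revenue = 0.0
        self.refunds = {}         # buyer -> total refunds received

    def purchase(self, buyer):
        # Buyer escrows the price and joins the pool.
        self.buyers.append(buyer)
        self.refunds.setdefault(buyer, 0.0)

    def settle(self, buyer, pay_author):
        # After viewing, the buyer routes their escrowed payment.
        prior = self.buyers[: self.buyers.index(buyer)]
        if pay_author or not prior:
            # First buyer has no one to refund, so the author is paid.
            self.author_revenue += self.price
        else:
            share = self.price / len(prior)
            for p in prior:
                self.refunds[p] += share


# Usage: the first buyer rewards the author; the second buyer
# instead refunds the first.
pw = ConditionalPaywall(10.0)
pw.purchase("alice")
pw.settle("alice", pay_author=True)   # author receives 10.0
pw.purchase("bob")
pw.settle("bob", pay_author=False)    # alice receives 10.0 back
```

One interesting design wrinkle this makes visible: a buyer who chooses "refund" is paying earlier readers, not getting their own money back, which is what gives the scheme its incentive structure.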
Good post. I myself have gotten into the habit of referring to an outside view instead of the outside view.
I wonder where the Spiral of Silence fits in here. I guess opposite the Respectability cascade?
society can respond to new information relatively quickly, but does so smoothly. This seems like a good thing.
This makes me think of the Concave Disposition.
I guess it shouldn’t come as a surprise that these concepts are already well-known.
Well I think independent discovery is underrated anyway.
I think this remains an outstanding, top-tier problem in group rationality. I feel like I encounter it constantly. I’m surprised this post doesn’t have more engagement.
I suspect that the long days break down some of your usual defences. It makes their techniques more effective, but you may not want to provide them with this power over you.
I personally feel less concerned by the long hours than by the notion of “psychological hacks” that lead to testimonials like, “What is, is; and what isn’t isn’t”. That stuff makes me imagine some kind of “leap of faith” maneuver, which I usually see as unreliable and prone to misfiring.
The Western focus on individuality and autonomy can be limiting, as often a push is exactly what we need. This may explain part of why they were able to achieve what seemed like remarkable results—psychologists are limited by ethics in a way that Landmark is not.
Yeah, this is plausible. It’s easy to imagine scenarios where a push from a trusted friend is exactly what I want. However, I’m still wary of hiring an organization of strangers to overpower my narratives & worldview using psychological hacks.
Contrast with certain types of meditation, whereby you can directly observe evidence that challenges your narrative, without ever doing anything epistemically questionable.
Purely for completeness, I’ll go ahead and represent the opposite preference: I am noticeably energized by overcast days, and I enjoy rain. Long, unbroken sequences of sunny days feel oppressive to me. I think my ideal week would be overcast 4 days, medium-light rain 2 of those days, and sunny on the remaining 3 days for evaporation & variety.
Of course, I realize that pluviophiles are a small minority, so any community/subcultural hub in a chronically cloudy place will suffer an excess SAD burden.
The seismic shift that’s occurred in the last 10 years is the ability of social media platforms to freebase user generated content and create serious behavioral addictions with very salient real world consequences. We’re making a category error if we continue to discuss Twitter like it’s the same platform it was 10 years ago.
Important point, and well-put.
The Jaron quote was also powerful; I hadn’t heard that sort of thing about Trump before, but it’s not surprising. I personally think the highest reasonable hope would be for Trump to return to how he was in 2012: the birth-certificate stuff was much less bad than the QAnon stuff and the Capitol insurrection, but it was still bad, and that might undermine a sanguine narrative of “Trump in Recovery”...though if it somehow didn’t, then yeah, I’d be happy to see that narrative get some airtime.
Regardless of how the stories of Trump end up being told, I do hope that people start to see Twitter as the psychotoxic game that it is. I have expressed some optimism about this in Predictions for future dispositions toward Twitter. It’s possible that tech companies will eventually try to sell cleaner digital ecosystems to conscientious end-users—I imagine a high-income, tech-savvy buyer paying extra for a well-integrated device-app ecosystem that tends to respect & enhance one’s mental/emotional life rather than harming it. This could come to represent a high-status, post-Twitter lifestyle. Again this is optimistic, but perhaps worth hoping for.
(I use the term “full reversal” to mean going from high confidence in a belief to high confidence in the opposite belief. A “hard reversal” is when a full reversal happens quickly.)
When have you noticed and remembered peers or colleagues changing their minds?
I think the question might need some modifiers to exclude the vast number of boring examples. Obviously your question does not evoke answers as boring as “Oh, the store is closed? Okay, then we can’t get milk tonight”, but what about a corporate executive pivoting his strategy when he hears business-relevant news? By now I am bored of Losing-the-Faith stories, but I don’t deny their relevance to human rationality.
Anyway, I think full reversals tend to happen much less frequently than moderate reductions in confidence. Much more common are things of the form “I used to be totally anti-X, but now I see that the reality is a lot more complex and I’ve become much less certain” or “I used to be completely convinced that Y was true and the deniers were just being silly, but I read a couple decent challenges and now I’m just pretty confused overall”. One way in which this happens is when someone accepts that their strong belief actually depends on some fact that they don’t know much about.
But to try to directly answer your question, I might list:
Megan Phelps-Roper left the Westboro Baptist Church, in part due to having respectful debates on Twitter
Bostrom’s Hypothetical Apostasy never really caught on, despite sounding pretty cool on paper. Too bad.
Rationalists have gotten some recognition for anticipating the pandemic early—you might be able to find some good examples of mind-changing there.
Rationalist-adjacent blogger Tim Urban had a fairly sharp reversal on cryonics.
There’s that classic (boring?) example of a person quitting grad school after spending a few minutes answering reasonable questions about their motivations.
If you want a more politically-charged example: Scott Alexander loosely identifies as libertarian, having formerly been vocally anti-libertarian. Seems like this happened via deliberate argumentation, including some email exchanges with David Friedman (son of Milton Friedman).
I’ve seen some of my friends and acquaintances change their minds about psychoactive drugs.
Thanks for putting out more fiction.
the rocket began to tilt slightly east.
I interpret this as subtle world-building. A future with Jewish space lasers AND peace in the Middle East.
When I first read the post, about 50% of my reaction was, “this platform could never get traction with a major political party”. But is that true? (...also, is it too meta?)
Scott writes in the piece,
There’s a theory that the US party system realigns every 50-or-so years. Last time, in 1965, it switched from the Democrats being the party of the South and the Republicans being the party for blacks, to vice versa. If the theory’s right, we’re in the middle of an equally big switch. Wouldn’t it be great if the Republicans became the racially diverse party of the working class? You can make it happen!
So I guess that’s my biggest question about all this. Is the realignment theory correct? And more importantly, would a 1960s-magnitude realignment be enough to cause a major US political party to adopt a prominently anti-credentialist, pro-betting, anti-gatekeeping platform?
Thanks, this is really helpful! I’ll ask more questions if I think of them.