Hiding in a shrubbery
hamnox
I went to some martial arts class, jiu jitsu, and before they taught me anything else they taught me how to break falls safely. Same with parkour class. You’re going to fall, they said. You need a way to catch yourself without fucking up your arms or back. It’s not just from mistakes when you’re learning a new move, either, though it will certainly happen more often then. You’re throwing yourself all over the place, tripping each other; you’re going to hit the ground with momentum. You need to know how to handle yourself when that happens, how to roll with it and get up right after, safe and sound. Every class, the first thing we do is drill break falls.
I don’t think The Art of Rationality has that.
Yes we notice the skulls. It seems like I see a new treatise pointing out the valley of bad rationality every few months. And yet...
When you share what you know, do you share safety skills and warnings with it?
Do you have a sense of how likely you are to injure yourself in your practice?
What specific actions do you take when you notice you’re taking epistemic damage?
How strong are your skills in harm-minimization? Do you have it down to ingrained reaction or habit?
Do you practice locating your individual abilities and limits, using the distribution of expected human traits as a guide, or are you fitting your strategies to a population-level statistic?
I have some ideas.
I wanna hear yours.
Took the survey.
I think I failed it.
The most commonsense example of making assumptions irrelevant I’ve heard of is from weapons safety: always act as if the gun is loaded.
See also
Cut away from yourself
Eating Mealsquares instead of Soylent
And yet it’s a true observation, and entirely relevant if you’re going to concern yourself with convincing other people to resist against being human.
The tails come apart. If you aim for extremes, you wind up selecting against other forms of goodness. It’s not robust. This is bad both for your goals, if you are incorrect about your goals at all, and socially bad, because it moves you off of (?)cooperative ground(?) where seeking your values correlates with seeking theirs.
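The tails-come-apart effect is easy to see in a toy simulation (mine, not the original comment's; the correlation, sample size, and cutoff are illustrative). Even when two measures of goodness correlate strongly, the very top of one is mostly not the very top of the other:

```python
import random

random.seed(0)
rho = 0.8        # strong correlation between two measures of "goodness"
n = 100_000

# Sample correlated standard normals: y = rho*x + sqrt(1 - rho^2)*noise
pairs = []
for _ in range(n):
    x = random.gauss(0, 1)
    y = rho * x + (1 - rho**2) ** 0.5 * random.gauss(0, 1)
    pairs.append((x, y))

top_k = 100  # the top 0.1% "extremes" on each measure
top_x = set(sorted(range(n), key=lambda i: pairs[i][0])[-top_k:])
top_y = set(sorted(range(n), key=lambda i: pairs[i][1])[-top_k:])

overlap = len(top_x & top_y) / top_k
print(f"Top-{top_k} overlap: {overlap:.0%}")  # well below 100%
```

Selecting hard on one measure selects mostly-not-top specimens of the other, which is the "selecting against other forms of goodness" above.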
I like having these distinctions laid out to think about. While it’s on my mind I’d like to share an extension of Brienne’s quadrants I’d made in my own notes.
To “Easy vs. Difficult” and “Fast vs. Slow”, I added a third dimension of “Hype vs. Signal”. A grand epiphany can turn out to be insight porn. A long gruel to attain wizardry could be an investment scam. Bug patches can be surface-level fads. Tortoise skill practice might be lotus-eating distraction.
(I may have been a bit disillusioned with rationality lore at the time I named these. Because yes, it *was* demoralizing to get 2-3% returns when I expected bursts of 300%.)
A useful core can have many subtly-off instantiations. The expected signal-to-noise ratio matters, when you’re figuring out where it makes sense to focus your efforts.
My first thought was a slightly more sophisticated version of “OMG, WANT!”. This seems like a brilliant idea, and I’d absolutely love to see it come to fruition. I can taste the sweet hints of future rationality dojos, already envision the unfolding of a greater future where more is possible. Ten weeks dedicated strictly to the Art, with other people who will actually CARE DEEPLY about being sane. How could I NOT want to be there? I’m a little iffy on whether or not all of these ideas are really the best, but hey—it’s a work in progress.
I open up an application and start typing. But I’m finding myself intimidated by vastly open-ended form questions, and the mention that they’re looking for “people who’ve demonstrated high productivity” and “who already seem like good epistemic rationalists”. I have no such qualifications; I’m inexperienced, lazy, and honestly, I’ve internalized frustratingly little of what I’ve ‘learned’ on LessWrong. So I close the window.
But the only way I can possibly be sure that I won’t get in is if I don’t apply. And I do want to go, I really want this experience. So I open it and start again.
Then close it once more a few seconds later. Open. Close. Open. Close.
I think I may have a problem.
I think I also understand why rejection therapy is part of the curriculum. Unwillingness to put yourself out there is a severe handicap to winning.
Suggestions:
wear gloves. seriously, major life hack for sensory dysphoria.
individually pack things yourself from bulk purchases right away. It can be a lovely experience, making forty little gifts to your future self.
It’s a worse tradeoff to have to engage with bad packaging every time you want a thing. you get efficiency benefits from dealing with that all at once. you’re very correct to point that out.
Small reusable containers exist! get beautiful ones, get neatly stackable ones, get convenient ones.
part of *my* experience with disposables is wincing at the environmental externalities
Deli paper, metal foil, plastic sheets and bags.
Make plates stackable in your fridge. Repurpose something used to store pot lids, probably
I’m starting 30 days of rejection therapy. Right off the bat, I notice I have low inhibitions against asking for ridiculous things that are sure to be rejected. I cultivated an identity of being an oddball who makes bizarre and safely ignorable interjections back in high school, so such things are right inside my comfort zone. What I am not comfortable with is making suggestions reasonable enough that there is uncertainty about whether or not someone will accept them, or such that asking might be interpreted to suggest specific negative traits (e.g. greedy or dangerous) instead of a general peculiarity.
I decided to make a move West with my friend. It’s sudden and it’s a change, so my brain keeps hitting the panic button every time I think about it. When I reframe it as happening a year or two from now, I know it’s somewhere that I’ll want to be close to eventually, that having in-state tuition right now doesn’t make it much more likely that I’ll get somewhere in college, and loss aversion (plus persistent alief in own unworthiness) is making me cling a lot harder to my local safety nets than I actually believe they’re worth. Now I just need to pull my head out of the ground long enough to set specific subgoals and murphy-proof my landing plan.
edit: After murphy-proofing, it’s apparent the cost of hitting undo on the sudden move is higher than I realized. It would be highly preferable to negotiate spending a couple of weeks with said friend to get more information, and I can probably optimize a short visit to claim a good portion of the social and motivational benefits I was looking for anyways.
Post is very informal. It reads like, well, a personal blog post. A little in the direction of raw freewriting. It’s fluid. Easy to read and relate to.
That matters, when you’re trying to convey nuanced information about how minds work. Relatable means the reader is making connections with their personal experiences; one of the most powerful ways to check comprehension and increase retention. This post shows a subtle error as it appears from the inside. It doesn’t surprise me that this post sparked some rich discussion in the comments.
To be frank, I’d be very wary of trying to suggest edits. I don’t want this post to lose that feeling of unfiltered thought-to-page, when it’s a crucial element of its magic. Maybe I’d add some doodley illustrations to vividly supplement the textual imagery. I imagine it *could* get clearer benefit from light restructuring and expansions. The most authentic-*feeling* writing does not perfectly align with the most *authentic* writing, after all.
(Maybe edit the bit at the end of “Relevant context” so the irony ‘stands out’ better… It was perfectly clear from context that this was ironic, but it could have been clearer from ?structure?wording?. idk “yeah nopes” felt kind of weak as the turning point.)
What I would like to see: It’s a year later now. Write a postscript with updated thoughts since then. How has your model, and your use of it, evolved since you wrote this post? Does the basic practice of “sit with the fact that I’m feeling something, and hug the child that brought that emotion instead of slapping them” produce the same results for you it did at the start?
Also expand on the ‘related’. See if you can find specific posts and quotes to support the sentiment of safety as the biggest barrier to rational thinking/discourse. If you can collect and quote some small anecdotes of other people’s experiences with needing or finding emotional safety to improve their thinking, I believe that would make it feel more… ?connected?. Increase safety not just by providing a skill but also by generating a sense that ‘i am not alone in this’.
I have one niggling question: Is it actually true that most people have all the machinery to move their ears? I thought there was a piece missing or something in the median person....
Looking up the facts, it looks as though whether conscious control can be taught is under contention, but the function is all there.
sigh. This post digs into why I can’t watch the news without feeling frustrated.
Because even when I agree with the newscaster’s overall assessment of a situation, there’s just… never quite enough acknowledgement that some evidence might point a different direction than is politically convenient. That small or selective samples can even appear to point against the truth. That alternate perspectives on the facts don’t come into existence solely to try to knock yours down.
uncomfortable squirm
In my culture, one is to be super wary of lionizing martyrs.
I want to be excited about cool new holiday ideas. I think trying a fast in a coordinated group is a splendid idea. I want to celebrate the amazing capacity of humans to care about others and to do hard things for good reasons.
but pain is not the unit of effort.
dying for the cause is not a success.
not every cost is avoidable, but i never, ever want to become the kind of person
who mistakes the price sacrificed for a value bought
In my culture there’s a meta-tradition around ritual hardships or labors: you are to set aside at least 5 minutes, by the very clock, for considering how you might cheat. If you find you could get results without the hardship, you are expected to cheat for the results and then go find some other way to challenge yourself.
Rationality 010 Meetup (Jester’s Court) Principles:
The zeroth skill is being able to notice evidence at all
The point of learning is not to come to the same conclusion as the teacher: the bottom line is not yet written.
Make room for private reasoning, practice non-confrontational forms of dissent, and preserve freedom to self-direct.
Iff it passes muster, pass it on
Prompts/Exercises:
Name a trivial promise you could make to someone here, yourself even. Can you make it even simpler?
How long can the group maintain a conversation made of only nods, head-shakes, finger-pointing, and raised eyebrows?
Pick a small 30s task. Imagine it vividly, start to finish, with lots of sensory detail. Then do it. Was it like you imagined? Repeat the task. Was it the same?
Exercise (If there’s enough time and focus for it, try the whole Core Transformation sequence)
Pick an aspect / thought / behavior you don’t like.
Recall an instance of when it came up, and where in your body it seemed to reside.
Assume it has a positive intent for you, and thank it as you would a well-meaning but mistaken friend.
Ask what outcome it’s trying to achieve.
Thank it for what answer it can give.
If you can honestly promise to give that outcome-goal serious consideration next time this aspect comes up, then do so. If you can’t, then DON’T.
Pick a partner and lead them (or together, teach a rubber duck) through one of the previous exercises you thought was good. Get feedback on one thing you did well, and one thing you can do differently to improve. Try it again with that in mind.
Bonus points if you record it so you can see your own presentation style
Alternative: notice how they do the exercise differently than you, try to improve your model of the person, the exercise, their engagement.
Practice: One person says some things that are factually incorrect, predicated on bad reasoning, or harm-promoting. Everyone takes a turn expressing their lack of endorsement and/or intent to sit out of the activity.
(Sharks are smooth every which way! Contingency plans are important because anything could happen in a quantum universe! We should all go visit so-and-so-with-the-flu’s house to cheer them up!)
EDIT: continued from where I left off
If we all agree to chip in $5 to anyone who could make effective use of it, we could have a pool of up to $5*n to spend on achieving our shared goals. How would you propose using it?
If you can manage anonymous approval voting, tally up how much money each proposal’d actually wind up with.
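If a meetup wanted to make that tally mechanical, one simple payout rule (my assumption — the exercise doesn't specify how pledges map to proposals) is to split each person's $5 evenly across the proposals they approve. A sketch with made-up names and ballots:

```python
from collections import defaultdict

PLEDGE = 5  # dollars per participant

# Hypothetical approval ballots: each person approves any number of proposals.
ballots = {
    "alice": ["meetup snacks", "book library"],
    "bob":   ["book library"],
    "carol": ["meetup snacks", "book library", "survey prizes"],
}

# Assumed rule: each person's pledge is split evenly across their approvals.
totals = defaultdict(float)
for person, approved in ballots.items():
    if approved:
        share = PLEDGE / len(approved)
        for proposal in approved:
            totals[proposal] += share

for proposal, amount in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{proposal}: ${amount:.2f}")
```

Other payout rules (winner-take-all, proportional to approval counts) are equally defensible; the point is just that every dollar pledged lands somewhere, so the totals sum to $5*n.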
Brainstorm some small things you don’t know how to do, or don’t know how to do as well as you’d like. Which of them could you actually commit to pay for resources/lessons/tutoring in if the opportunity presented itself?
Given you are made of physics and chemistry, what properties of your chemical machine might you want to know?
Given that you are made of natural selection and memetics and reinforcement learning, what properties of your algorithm might you want to know?
Given that you are made of fluid plumbing and electrical networks, what properties of your logistics systems might you want to know?
Have you ever had one good tip put you at a surprisingly major advantage?
When have you felt good about giving a gift? When have you not?
If there miraculously existed one fast and easy action that could solve your recent problem, what would it look like?
Think about the last time someone you knew (yourself, even) seemed in need of support or help. Brainstorm ideas for specific, concrete actions you could take to try to contribute to their wellbeing. Vividly imagine being in that kind of situation again, having since become the kind of person for whom implementing one of them is straightforward and easy, and just doing it. Do it a few times, with your memory as comparison.
common—grieving, sick, stressed, anxious, melancholy, depressed, bored
I’m interested in whether people can guess what the objectives behind these are, especially if they guess before reading other comments.
Folk values—the qualities of the “I love science” crowd as contrasted to the qualities of actual, exceptional scientists—matter too. The common folk outnumber the epic heroes.
This holds true even if you believe that everyone can become an epic hero! People need to know, rather than guess and hope, that walking the path to becoming an epic hero might look and feel rather different than doing active epic heroing. In theory one ought to be able to derive the appropriate instrumental goals from the terminal goal, but in practice people very frequently mess this up.
The general crowd has a different job than the inner circle, and treating this difference as orthogonal propagates fewer errors than treating it as a matter of degree.
Folk rationality needs to strongly protect against infohazards until one gets a chance to develop less vulnerable internal habits. Folk rationality needs to celebrate successfully satisficing goals and identifying picas rather than going for hard optimization, because amateur min-maxing just spawns Goodhart demons every which way. Folk rationality needs to prize keeping social commitments and good conflict mediation tools; it needs to honor social workers straightforwardly addressing social or resource problems. Folk rationality needs luminosity, and therapy. Folk rationality should also have a civic duty of proactive personal data collection, cheering on replications, participating in RCTs, and not ghosting or lizardmanning surveys… because science needs to get done d’arvit.
Interested in cruxing
whperson’s comment touches on why examples are rarely publicized.
I watched Constantin’s Double-Crux, and noticed that, no matter how much I identified with one participant or another, they were not representing me. They explored reciprocally and got to address concerns as they came up, while the audience gained information about them unilaterally. They could have changed each other’s minds without ever coming near points I considered relevant. Double-crux mostly accrues benefits to individuals in subtle shifts, rather than to the public in discrete actionable updates.
A good double-crux can get intensely personal. Double-crux has an empirical advantage over scientific debate because it focuses on integrating real, existing perspectives instead of attempting to simultaneously construct and deconstruct a solid position. On the flip side, you have to deal with real perspectives, not coherent platforms. Double-crux only integrates those two perspectives, cracked and flawed as they are. It’s not debate 2.0 and won’t solve the same problems that arguments do.
I didn’t think the marriage contract was as big a deal as the implication that, if the story were taken as something other than a complete fabrication, someone had messed with Gringotts’ proceedings to cover it up. I can’t imagine that Rita Skeeter could keep her job after courting that kind of scandal with the goblin nation.
Biorisk—well wouldn’t it be nice if we’d all been familiar with the main principles of biorisk before 2020? I certainly regretted sticking my head in the sand.
> If concerned, intelligent people cannot articulate their reasons for censorship, cannot coordinate around principles of information management, then that itself is a cause for concern. Discussions may simply move to unregulated forums, and dangerous ideas will propagate through well intentioned ignorance.
Well. It certainly sounds prescient in hindsight, doesn’t it?
Infohazards in particular cross my mind: so many people operate on extremely bad information right now. Conspiracy theories abound, and I imagine the legitimate coordination for secrecy surrounding the topic does not help in the least. What would help? Exactly this essay. A clear model of *what* we should expect well-intentioned secrecy to cover, so we can reason sanely over when it’s obviously not.
Y’all done good. This taxonomy clarifies risk profiles better than Gregory Lewis’ article, though I think his includes a few vivid-er examples.
I opened a document to experiment with tweaking away a little of the dryness from the academic tone. I hope you don’t take offense. Your writing represents massive improvements in readability in its examples and taxonomy, and you make solid, straightforward choices in phrasing. No hopelessly convoluted sentence trees. I don’t want to discount that. Seriously! Good job.
As I read, I had a few ideas spark on things that could likely get done at a layman level, in line with spiracular’s comment. That comment could use some expansion, especially in the direction of “Prefer to discuss this over that, or discuss in *this way* over *that way*” for bad topics. Very relevantly,
We seek to review basic facts under the good discussion topics, since they represent information it’s better to disseminate (EDIT, see comments).
Summarize or link to standard lab safety materials.
Summarize the various levels of PPE and sanitation practices. It doesn’t have to get into the higher end to prove useful for people:
how do you keep dishes sanitary?
the fridge?
a wound?
How can you neutralize sewage,
purify water
responsibly use antibiotics?
The state of talent… I imagine there’s low-hanging fruit here but idk what it is. Could list typical open positions and what the general degree track looks like.
Give a quick overview of the major biosecurity funds
Do a give-well-esque summary of which organizations have room for more funding, and which promising subcause-areas have relatively few/poor organizations pursuing them. (open phil’s)
I noticed that I almost upvoted your post because it was an in-group thing to say and not because of its actual merit in this conversation. Having the word ‘conspiracy’ anywhere near the name of this organization would be a downright awful idea in practice. I’d as soon suggest changing the official LessWrong slogan to “We’re Not A Cult!”.
We can, but as this case study points out, social/unfocused discussions usually have poor attendance because hanging out is harder to justify than having a specific purpose. It would be fine for a first meeting, probably, but I’d expect most would find more important things to do the second or third time around if we’re not doing anything obviously useful.
I finished a Machine Learning Course by Andrew Ng!
WOOOH ME! AND COURSERA!!
Edit: I changed my mind. My awesomest achievement of the month is endless conversations and a kiss from a poly connection who thinks deeply and plays DDR and mini-golf and reads fascinating books and makes art. I’m high on hedons right now.