The Interdict of Merlin stops the indiscriminate spread of high-level magic, for a definition of high-level that is relative to the learner. It is probably possible in principle to write down a series of hints that leads the reader to figure out enough of the concepts themselves that they can read the next level of hints. I expect lots of people *try* writing down their material and leaving such a trail of hints; while it usually fails, Riddle probably independently figured out enough to bridge the gaps.
thing I started typing out:
A philosophy professor at Oxford University founded the Future of Humanity Institute to study what we can do now to ensure a long, flourishing future. (Curiosity: what else did they do?) One of the effects this group had was to spin off (how?) a group-written blog, OvercomingBias.com, dedicated to the general theme of how to move our beliefs closer to reality.
One writer, Eliezer Yudkowsky, worked on Artificial Intelligence and wanted to warn other people about dangers he’d realized could come from it. When he tried to talk about the dangers, he found that not only did people not know the dangers, they did not understand the ideas necessary to understand the explanation of the dangers. Before he could explain the ideas he thought were most important, he had to explain a lot of smaller ideas that built up to it.
Yudkowsky’s writing covered a variety of topics, yet made them all come together to feel like part of the same deep philosophy in the spirit of becoming “less wrong” in one’s understanding of reality. As his writing gained popularity, he moved it to a new blogging website, LessWrong.com, which anybody could post to.
Almost everyone who participated in this community in the early days had read Eliezer’s posts. Whether or not a person agreed with his ideas, his posts on various topics were iconic and precise; they became a common basis to start important conversations on. Other people’s content filled gaps in and built on this common canon. People liked having this common basis enough to try to share it with people they knew in everyday life or preferentially talk with people who had already read it.
[Zet: THEN CFAR. THEN HPMOR… Or was it the other way around?]
some source here and here, I am so bad at not plagiarizing.
A thing I frequently do is literally just take an object and start observing things about it. Notice what I notice. Notice what conclusions I generate, what assumptions they’re built off of, and what parts of my brain they come from. Notice interactions with other mental models, notice what hypotheses are testable.
Also try some kind of precision physical art, because the brain’s primary job is and always will be moving your body. There’s a lot of prior art in how to move your body better, and I would be surprised if fine-tuning that didn’t translate at least a little bit into increased intellectual fitness.
(I should take my own advice, meh.)
Folk values—the qualities of the “I love science” crowd as contrasted to the qualities of actual, exceptional scientists—matter too. The common folk outnumber the epic heroes.
This holds true even if you believe that everyone can become an epic hero! People need to know, rather than guess and hope, that walking the path to becoming an epic hero might look and feel rather different than doing active epic heroing. In theory one ought to be able to derive the appropriate instrumental goals from the terminal goal, but in practice people very frequently mess this up.
The general crowd has a different job than the inner circle, and treating this difference as orthogonal propagates fewer errors than treating it as a matter of degree.
Folk rationality needs to strongly protect against infohazards until one gets a chance to develop less vulnerable internal habits. Folk rationality needs to celebrate successfully satisficing goals and identifying picas rather than going for hard optimization, because amateur min-maxing just spawns Goodhart demons every which way. Folk rationality needs to prize keeping social commitments and good conflict mediation tools; it needs to honor social workers straightforwardly addressing social or resource problems. Folk rationality needs luminosity, and therapy. Folk rationality should also include a civic duty of proactive personal data collection, cheering on replications, participating in RCTs, and not ghosting or lizardmanning surveys… because science needs to get done, d’arvit.
Interested in cruxing
Will there be a recording or notes taken & posted somewhere?
Can there be a phone or web presence option?
I like having these distinctions laid out to think about. While it’s on my mind I’d like to share an extension of Brienne’s quadrants I’d made in my own notes.
To “Easy vs. Difficult” and “Fast vs. Slow”, I added a third dimension of “Hype vs. Signal”. A grand epiphany can turn out to be insight porn. A long gruel to attain wizardry could be an investment scam. Bug patches can be surface-level fads. Tortoise skill practice might be lotus-eating distraction.
(I may have been a bit disillusioned with rationality lore at the time I named these. Because yes, it *was* demoralizing to get 2-3% returns when I expected bursts of 300%.)
A useful core can have many subtly-off instantiations. The expected signal-to-noise ratio matters, when you’re figuring out where it makes sense to focus your efforts.
Thoughts on the unsuitability of adding more thresholds of quality control:
The idea of promoting things up a narrowing hierarchy falls into one’s lap as an easy fix. I think, though, that it won’t solve the problem: a good initial proposal of an idea just does not thrive on the same metrics that a write-up of common context needs to. A first draft is not just a shoddy version of a final draft; it actually does something different than the final copy does. In the progression from idea generation to canon, someone has to do the transformation work. The same person could do it; general skill across the whole progression exists, and I genuinely hope we find a way to nurture it in people. But I doubt it makes sense to count on people’s skill (and willingness) being transferable at each step.
With melatonin, it’s not anywhere near as simple as “too strong” an effect. Melatonin is typically sold at high doses that don’t really have the proper effect at all, which results in people deciding to increase the dose even more until they get a hard knockout effect, which looks like the desired thing if you’re desperate and squinting, but… no.
The point is to stop talking about words, and start talking about reality.
“maybe people flag when they disagree but don’t get into a protracted conversation until afterwards.”
yup that’s what I meant
At the beginning, note specifically that we’re doing the ritual thing, that we are telling stories/songs that are somewhat hacking our brains, that this only really works if you lean into it with your system 1, and that we’re trying to do this wisely.
Yes, make it clear what’s going to happen so people can opt in or out sanely.
Maybe encourage people to do some kind of “silent but visible disagreement” thing if they disagree (I’m not sure if this would work without ruining things)
Make a dedicated space for people afterwards to discuss / disagree / argue.
These all sound good
Your lesswrong link is not a lesswrong link
This Solstice had me thinking on what I had imagined, when I first read about Solstice. When I was young and dreamed that we were going to Do This Rationally.
The organizers would have actual models about what brain buttons we were pushing to what effect—entangling the wellbeing effect of light with a specific narrative of human progress, evocative and non-representative stories, inducing existential fears and directing people to soothe them through social bonding with a particular crowd, deep rhythmic resonances that just hit straight to sys-1′s sense of “really big”, etc.—and share them ahead of time to enable informed consent.
The event would enshrine people’s right and duty to conscientiously object at any point they feel the goals or epistemics have drifted in an undesirable direction:
put in anti-Asch-conformity plants
intentionally give up/change the most beloved part of the ritual from year to year to avoid status quo bias
give the audience 5 minutes to actually consider whether to do this thing or what they need to do instead
make a place for objectors to stand and be counted instead of silently bouncing out
invoke curiosity about (but do not demand on-the-spot justifications for) why.
This year had lovely performances, nice speeches, an interesting activity, and a good evening of food and entertainment. But I did not get what I wanted.
I’m not sure how much of what I wanted is actually doable.
But when I hear load-bearing speeches lifted straight from the previous year’s lineup, unchanged from the Sequences, I wonder: have we learned anything new at all? I hear a tidy little myth like The Goddess of Everything Else and worry about a false appearance of consensus.
I know the arc of Bay’s Solstice has moved more towards emphasizing community than x-risk, yet I do not think we have changed our ritual tooling to match this shift. One or two extra interactions happened that day, but is the audience any more empowered to act as a community than it was before? How could we have reliably solved the Tarot Card problem? What conversations need to continue happening after Solstice and how will they happen?
It would be ludicrous to reinvent something of Solstice’s magnitude every year. But where there’s risk of your logistics and epistemics clashing, I think we should err more on the side of vastly simplifying events than on the side of sloppier epistemics.
I think next year I only want yin meditation, oaths sworn by candlelight, and a playlist of personally meaningful songs. Perhaps I’ll do it with five friends and one lonely stranger.
I don’t want to bake the intended lesson into the practice; if I have to tell you what the moral of a story is then it’s not doing a very good job of making its own point.
It’s antagonistic to that end because a specific skill is the wrong case study.
I think by the time someone decides on a specific skill they can have already baked in really critical mistakes.
They aim for a goal that only vaguely fits what they really care about, so their clarity and motivation bleed out as they spend time on it. (I want to stop feeling like an imposter! Guitar players are objectively impressive, let’s learn guitar!)
Or they fail to prioritize effectively, so open 50 projects and make scant progress in any of them.
Or they have a broken understanding of what learning looks like, so intentionally trying to learn mostly stifles the actual process of discovery and integration. (I must ace this course, or must memorize all the syntax and best practices, or must dive straight into complicated Real Project use.)
Or they forget to have a gears model of successfully acquiring the skill at all, and instead half-assedly hope it will fall out of the sky if they perform the right gestures. (If I give myself a few hours of exposure therapy to embarrassedly dropping clubs, I’ll learn to juggle, right?)
The right class looks more like:
You feel like an imposter. All your successes are flukes and lightning won’t strike twice, but on this day you’re setting yourself up to replicate the conditions for success anyways. All your failures were inherent and inevitable, but on this day you’re giving yourself a chance to notice small things that vary the experience.
YOU WANT TO DO ALL THE THINGS! When you clean up the space and leave out only the things you need today, that’s still a lot of things! When you spend double the time on everything, you also wind up getting to fewer of them than you expected. After a dozen iterations of this you have probably gotten the hint that your calibration is off. There are obvious solution avenues you can try once you have a way to gauge the problem: hide everything by default, limit yourself to a few solid projects you pick every time, tighten your standards for picking, use a new selection strategy entirely, or accept the bias and work around it.
You have cleared your desktop, physical and digital, and closed all your open tabs. You’re working on this course for a half hour. Yay celebrate, maybe take a breather. You’re going to work on this course for another half hour.
You’ve been going over syntax for 10 minutes. Wow this really sucks and you hate it. What could you do that would be more rewarding and still, in some sense, be equivalent to going over syntax for 10 minutes? Well… what was the important part here? Was it the specific information—can you read in a different way, switch up note-taking styles, make anki cards? Was it the reviewing—could you use different sources, prioritize different information, go over some other topic entirely? Was it understanding the language better—maybe you could go through a tutorial, or read source code, or mess around in a REPL instead.
After a couple dozen cases of beating your head against a big project plan you don’t understand enough to work on, then being faced with the prospect of repeating the exercise knowing you still can’t make progress, you start to appreciate the value of leveraging prior knowledge and performing small empirical experiments.
Your friends are raving about language X, so you read an intro on it. You read a different perspective. It does sound neat, and you’ve got time to mess around. You go through a basic project setup to get a feel for it. You set it up again and get slightly more comfortable. Maybe you end there, satisfied, and explore other things… maybe you want to try something a bit more complicated. Maybe you try and struggle at both attempts, so you shift to reading docs or working from an example. You stay focused on specific actionable goals at every step, keeping in tune with how they pan out.
You spend way too much time stuck on the task ‘get rid of distractions’ and don’t get around to doing anything useful. Removing distractions takes unsustainable amounts of upfront energy, or there’s a debilitating backlog of cleaning and organizing to get through. Of course, this was no less true any other time you needed to be free of distractions. So, you pick a standard to care about and painstakingly familiarize yourself with the actual costs and benefits of reaching it. Working through the backlog in pieces will just be what DoubleDay is for you until that’s no longer your bottleneck.
You’re totally in a half-ass, satisfice mindset about this problem. But you’ve already satisficed a nice environment and are making plausibly deniable dummy attempts regardless, so you could stand to occasionally throw in a sincere try. Do something that has a real chance of working whether or not it fits the narrative. No need for anyone to know your secret audacity.
The most commonsense example of making assumptions irrelevant I’ve heard of is from weapons safety: always act as if the gun is loaded.
Cut away from yourself
Rule #2: Double tap
Eating Mealsquares instead of Soylent
Slack feels distinct from keeping a buffer. Black would definitely think of gathering extra resources to overdetermine their victory, but in a way that might never lead them to take, in general, the less-than-maximal path toward accomplishing their stated goals. Gathering resources is just gathering resources.
I promote Red Slack. Break your chains, before they break you.
Tracking helps avoid some bias.
If you forget that the data collection happens through selective action and the data’s meaning is seen through a flawed lens, though, then your ‘objective view’ can wind up more sharply skewed than your vague gut feels.
Want to note: I noticed the category “memetic hazard” and started immediately skimming the page to find everything labeled as such. Something is wrong with my reasoning here—
It wasn’t the worst impulse to follow after all, since here the category means something like “controversial” or “fictional.” Except… “memetic hazard” is a meaningful warning. I would prefer it keep its value as a signal.