LessWrong team member / moderator. I’ve been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I’ve been interested in improving my own epistemic standards and helping others to do so as well.
Raemon
I am pretty worried about the bad versions of everything listed here, and think the bad versions are what we get by default. But, also, I think figuring out how to get the good versions is just… kinda a necessary step along the path towards good futures.
I think there are going to be early adopters who a) take on more risk from getting fucked, but b) validate the general product/model. There will also be versions that are more “privacy first” with worse UI (same as there are privacy-minded FB clones nobody uses).
Some people will choose to stay grounded… and maybe (in good futures) get to have happy lives, but, in some sense they’ll be left behind.
In a good future, they get left behind by people who use some sort of… robustly philosophically and practically safe version of these sorts of tools. In bad worlds, they get left behind by hollowed out nonconscious shells of people (or, more likely, just paperclipped)
I’m currently working on a privacy-minded set of tools for recording my thoughts (keystrokes, audio transcripts) that I use for LLM-augmented thought. (Alongside metacognition training that, among other things, is aimed at preserving my mind as I start relying on those tools more and more.)
I have some vague hope that if we make it to a good enough intermediate future that it seems worth prioritizing, I can also prioritize getting the UI right so the privacy-minded versions don’t suck compared to the Giant Corporate Versions.
Oh, yeah. I wrote this for a workshop context where there was a de facto time limit and eventually I needed to move things along. But I agree your suggestion here is better if you have more time.
I think that was a random oversight. Moved to frontpage.
I do agree the opening is kinda slow
Curated. This is a generally important point, which I’ve also learned the hard way. And I like how Kaj includes two important caveats while making it (i.e. some advice on distinguishing prejudice from bad vibes, and what sorts of people should maybe consider the opposite advice).
I assume this isn’t crossposted because of a deal with Asimov Press, but on the off chance you could include at least the opening text here, that’d be nice.
I found the piece pretty helpful for adjusting to what (maybe, optimistically) might be coming.
This doesn’t seem like it’s engaging with any of the specific things Kaj said attempting to address this. If you disagree with that, it seems more helpful to actually say why you don’t think his framing or advice around “is this prejudice or a legit bad vibe?” is sufficient/reasonable.
I’d give this a +9 if I could*. I’ve been using this technique for 7 years. I think it’s clearly paid off in “clear, legible lessons about how to think.” But the most interesting question is “did the subtler benefits pay off, in 7 years of practice?”
Let’s start with the legible:
This was essentially the first step on the path towards Feedbackloop-first Rationality. The basic idea here is “Watch your thoughts as they do their thinking. Notice where your thoughts could be better, and notice where they are particularly good. Do more of that.”
When I’ve run this exercise for groups of 20 people, typically 1⁄4 of them report a noticeable effect size of “oh, that showed me an obvious way to improve my thinking.” (I’ve done this 3x. I’ve also run it ~3 times for smaller groups where most people didn’t seem to get it, which led me to eventually write Scaffolding for “Noticing Metacognition”, which people seemed to have an easier time with.)
I’ve picked up a lot of explicit cognitive tricks, via this feedbackloop. Some examples:
“oh, I’m having trouble thinking because the problem is too complex, but that problem goes away when I get better working memory aids”
“oh, I just spent 30 minutes planning out an elaborate series of tests. But, then the very first test failed in the dumbest way possible. If there are cheap tests, just do those first.”
But, the essay promises more:
A small tweak to how your brain processes information in general is worth more than a big upgrade to your conscious repository of cognitive tricks.
[...] More creativity and good ideas just “popping into your head”. There’s no magic to it! Once you understand how the process works, it can be optimized for any purpose you choose.
Most people already have a thinking style built on top of excessive conscious cognitive effort. This often involves relying on side-effects of verbal and conscious thoughts, while mistakenly assigning the full credit for results to those effortful thoughts.
When you already have some conscious/verbal thoughts, it is tempting to imagine they are the only result of your thinking, and then try to pick up from there. But this is limiting, because the most power is in whatever generated that output.
It’s not overwhelming enough to be obvious to others at this point (I did ask a few people “hey, uh, do I seem smarter to you in the past couple years?” and they said “a bit maybe, but, like not obviously? But I don’t know that I would have really noticed”). But, I am subjectively fairly sure I’ve seen real progress here.
Here, at least, is my self-story; make of it what you will.
14 years ago, thinking strategically was generally hard for me (5 minutes of trying to think about a chess board or complex problem would give me a headache). I also didn’t respond to crises very well in the moment. For my first several years in the rationalist community, I felt like I got dumber, because I learned the habit of “go ask the smarter people around me whenever I couldn’t figure something out.”
8 years ago, I began “thinking for real”, for various reasons. One piece of that was doing the Tuning Your Cognitive Strategies exercise for the first time, and then sporadically practicing the skill of “notice my thoughts as they’re happening, and notice when particularly good thoughts are happening.”
6 years ago, a smart colleague I respected did tell me “hey, you seem kinda smarter than you used to.” (They brought this up in response to some comments of mine that made it a more reasonable thing to say)
More recently, I’ve noticed at the workshops I’ve run that although there are people around who are, in many senses, smarter and more knowledgeable than me, they found certain types of metacognitive thoughts more effortful and unnatural than they seemed to me. It was pretty common for me to spend 5 minutes directing my attention at a problem and have approaches just sort of naturally occur to me, where some participants would have to struggle for 30-60 minutes to get to the same place.
The way this plays out feels very similar to how it’s described in SquirrelInHell’s essay here.
But, also, I think the style of thinking here is pretty normal for Lightcone core staff, and people in our nearby network. So this may have more to do with “just generally making a habit of figuring out how to deal with obstacles” that comes up naturally in our work. I think most of us have gotten better at that over the past few years, and most of us don’t explicitly do this exercise.
(Jacob Lagerros did explicitly invent and train at the Babble challenge and apply it to problem-solving, which is a different exact mechanism but feels at least adjacent to this exercise, and which I also credit with improving my own generativity. Maybe that’s a better exercise than this one, though it’s at least a point towards “deliberately practice generativity.” During the pandemic, I tried out a “Babble and Tune” variant that combined the two exercises, which didn’t obviously work at the time but I think is essentially what I actually do most of the time.)
Most recently, in November, I spent… basically two whole weeks thinking strategically ~all the time, and I did eventually get a headache that lasted for days, but only after 1.5 weeks instead of 5 minutes.
When I asked John Wentworth recently if I seemed smarter to him, he said “not obviously, but I’m not sure I’d notice.” I said “fair, though I (somewhat defensively) wanna flag: a few years ago when you first met me/read my stuff, most of what I was writing was basically summarizing/distilling the work of other people, and nowadays most of what you hear me say is more like original work.”
So, idk, that’s my story. Take the self-report with a grain of salt.
The Cautionary Tale
It’s annoying that whenever I bring up this technique, I either need to disclaim “uh, the person who invented this later killed themselves,” or not disclaim it but then have someone else bring it up.
I do think there’s an important cautionary tale there, but it’s a bit subtler. Copying my warning from Subskills of “Listening to Wisdom”:
I believe Tuning Your Cognitive Strategies was not dangerous in a way that was causal in that suicide[4], except that it’s kind of a gateway drug into weird metacognitive practices, and then you might find yourself doing weirder shit that either explicitly hurts you or subtly warps you in a way you don’t notice or appreciate.
I think the way SquirrelInHell died was essentially (or, at least, analogous to) absorbing some Tacit Soulful Ideas, which collapsed a psychologically load-bearing belief in a fatal way.[5]
I do think there are people for whom Tuning Your Cognitive Strategies is overwhelming, and people for whom it disrupts a coping mechanism that depends on not noticing things. If anything feels off while you try it, definitely stop. I think my post Scaffolding for “Noticing Metacognition” presents it in a way that probably helps the people who get overwhelmed, but not the people who had a coping mechanism depending on not-noticing-things.
I also think neither of these would result in suicide in the way that happened to SquirrelInHell.

* it’s a bit annoying I can’t give this my own +9, since I crossposted it, even though I didn’t write it.
Whenever this comes up, I note: I think this is only a problem for a certain kind of nerd/geek who wants particularly intense stakes.
Sitcoms and soap operas have plenty of interesting stories that are mostly about low-stakes interpersonal drama.
(I guess this cached annoyance of mine is more about people complaining about utopian fiction rather than science fiction. But I think the same principles apply)
I haven’t really explicitly checked this. I only use caffeine and (questionably counting) wellbutrin. I’ll keep an eye out, especially if there’s particular evidence about something to look out for.
I have observed people on modafinil who seem to get more tunnel-visioned and have a harder time reorienting, but I haven’t used it myself.
I’m curious to hear more about how this went.
I’m curious how this seems to have gone for you 14 years later.
I’m not really sure what goal you were trying to achieve by branching off into so many different topics in a single post instead of creating separate posts.
I think in my ideal world this would be a series of blogposts that I actually expected people to read all of. Part of the reason it’s all one post is that I didn’t expect people to reliably read all of them otherwise.
Partly, I think each individual piece is necessary. Also, kind of the point of pieces like this is to be sort of guided meditations on a topic that let you sit with it long enough, and approach it from enough different angles, that a foreign way of thinking has time to seep into your brain and get digested.
I expected people would mostly not believe me without the concrete practical examples, but the concrete examples are (necessarily) meandering because that’s what the process was actually like (you should expect the process of transmitting soulful knowledge to feel some-kind-of-meandering, at least a fair amount of the time).
I wanted to make sure people got the warnings at the same time that they got the “how to” manual – if I separated the warnings into a separate post, people might only read the more memetically successful “how to” posts.
I do suspect I could write a much shorter version that gets across the basic idea, but I don’t expect the basic idea to actually be very useful because each of the 20 skills is pretty deep, and conveying what it’s like to use them all at once is just necessarily complicated.
I will say I think there are a few different things people mean by burnout, but, they are each individually pretty real. Three examples that come to mind easily:
“Overworked” burnout.
If I’ve been working 60 hour weeks for months on end, eventually I’m just like “I can’t do this anymore.” My brain gets foggy. I feel exhausted. My body/mind start to rebel at the prospect of doing more of that type of work.
In my experience, this lasts 1-3 weeks (if I am able to notice and stop and switch to a more relaxed mode). When I do major projects, I have a decent sense of when Overworked Burnout is coming, and I time the projects such that I work up until my limit, then take a couple weeks to recover.

“Overworked + Trapped” burnout.
As above, except for some reason I don’t have the ability to stop – people are depending on me, or future me is depending on me, and if I were to take a break a whole bunch of projects or relationships would come crashing down and destroy a lot of stuff I care about.
Something about this has a horrible coercive feeling that is qualitatively different from being tired/overworked. Some kind of “sick to my stomach”, want-to-curl-up-and-hide feeling, but you can’t curl up and hide. This can happen because your boss is making excessive demands on you (or firing you), or simply because you volunteered yourself into the position. Each of those feels differently bad. The former because you maybe really can’t escape without losing resources that you need. The latter because if I’ve put myself in this situation, then something about my self-image and how others will relate to me will have to change if I were to escape.
“Things are deeply fucked” burnout.
This feels similar to the Overworked+Trapped but it’s some other kind of trapped other than just “needing to put in a lot of hours.” Like, maybe there’s conflict at work, or in a close relationship, and there are parts of it you can’t talk about with anyone, and the people you can easily talk about it with have some perspective that feels wrong to you and it’s hard to hold onto your own sense of sanity.
In some (many?) cases the right move here is to walk away, but that might be hard either because you need money/resources from the group, or you’ve invested so much of your identity into it that letting go requires reorganizing how you conceptualize yourself and your goals and your social scene.
This can cause a number of things other than burnout, e.g. various trauma responses. But I think a “burnout”-flavored version of it can come when you have to live in this state for months or years. This hasn’t quite happened to me, but people who’ve had “conflict-based” or “no longer really believe in their job/mission/relationship”-flavored burnout can be left struggling to do much of anything on purpose for months.
+9. Fatebook has been a game changer for me, in terms of how practical it is to weave predictions into my decision-making. I donated $1000 to Sage to support it.
It’s not listed here, but one of the most crucial things is the Fatebook Chrome Extension, which makes it possible to frictionlessly integrate it into my normal orienting process (which I do in Google Docs. You can also do it in the web version of Roam).
I’ve started work on an “Enriched Fatebook” power-user view that shows your calibration at a more granular level. I have several ideas for how to build additional power-user UI for it, but I’m not sure if I’ll get around to it. https://raemon.github.io/fatebook-enriched
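To gesture at what “calibration at a more granular level” could mean, here’s a minimal sketch (my own illustration, not Fatebook’s actual data model; the field names and bucket width are assumptions): group resolved predictions into probability buckets and compare the mean stated probability against the actual resolution rate in each bucket.

```typescript
// Minimal calibration sketch. ResolvedPrediction and the 0.1 bucket
// width are illustrative assumptions, not Fatebook's real schema.
interface ResolvedPrediction {
  probability: number;  // stated probability, in [0, 1]
  resolvedYes: boolean; // how the question actually resolved
}

interface CalibrationBucket {
  range: [number, number]; // e.g. [0.6, 0.7)
  meanStated: number;      // average stated probability in this bucket
  actualRate: number;      // fraction of predictions that resolved YES
  n: number;               // number of predictions in this bucket
}

function calibrationBuckets(
  preds: ResolvedPrediction[],
  width = 0.1
): CalibrationBucket[] {
  const lastBucket = Math.ceil(1 / width) - 1;
  const buckets = new Map<number, { stated: number[]; hits: number }>();
  for (const p of preds) {
    // Clamp probability = 1.0 into the top bucket instead of creating a new one.
    const key = Math.min(Math.floor(p.probability / width), lastBucket);
    const b = buckets.get(key) ?? { stated: [], hits: 0 };
    b.stated.push(p.probability);
    if (p.resolvedYes) b.hits += 1;
    buckets.set(key, b);
  }
  return [...buckets.entries()]
    .sort(([a], [b]) => a - b)
    .map(([key, b]) => ({
      range: [key * width, (key + 1) * width],
      meanStated: b.stated.reduce((sum, x) => sum + x, 0) / b.stated.length,
      actualRate: b.hits / b.stated.length,
      n: b.stated.length,
    }));
}
```

A well-calibrated forecaster has `actualRate` close to `meanStated` in each bucket; a more granular view just shrinks `width` (at the cost of noisier buckets with small `n`).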
One weakness is that the Slack Integration produces pretty bulky predictions, which makes it feel awkward to make a ton of predictions in a channel (and usually, when we’re having a discussion where it’d be appropriate to make a prediction, it’s useful to make like 3 predictions that tackle the question from different angles). Trimming off a few lines from the Slack UI would be good, i.e. see here:
I don’t know what the limitations of the Slack integration are but you should be able to shave at least one line off that.
...
I currently think a thing that Quick Forecasting is missing is “qualia-based predictions.” I.e. before I know “what probability do I assign here?”, I often know things like “I don’t believe in this, in my gut” or “I believe in this, but in a loopy way where I’m the one driving the actions and I’m inhabiting a confident mode which is hard to be objective about.” Right now Fatebook has tags for Questions, but not tags for predictions.
Long-term, I think the Philosophically/Practically Correct Typing for an individual prediction should let you either put a number (if you have one), or a “prediction tag”, which is some kind of metadata other than the raw probability. (But, admittedly, I don’t expect anyone other than me to use that in the near future, so it’s not obviously a priority.)
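As a minimal sketch of that typing (my own illustration; the specific tag names are hypothetical, not anything Fatebook supports):

```typescript
// A prediction is either a raw probability, or a piece of metadata
// about your epistemic state when you don't have a number yet.
// The tag names here are hypothetical examples.
type PredictionTag =
  | "gut-disbelief"    // "I don't believe in this, in my gut"
  | "loopy-confidence" // "I'm driving the outcome; hard to be objective"
  | "no-inside-view";  // no model yet, just priors

type Prediction =
  | { kind: "probability"; value: number } // in [0, 1]
  | { kind: "tag"; tag: PredictionTag };
```

The point of the union type is that tools downstream are forced to handle the “no number yet” case explicitly, rather than pretending every prediction comes with a probability.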
“Right now”, which includes figuring out the different ways things can be the most important thing right now.
Did this work?
I.e. the question “what sort of community institutions are good to build?” is a timeless question. Why should we artificially limit our ability to reflect on that sort of thing during the Review, given that we set the Review up in an open-ended way that allows us to do that on the margin?
Fwiw I disagree; I think the Review is deliberately open-ended.
Yes, there’s a specific goal of finding the top 50 posts, and of identifying important timeless intellectual contributions. But part of the whole point of the Review (as I originally envisioned it) is also to help reflect in a more general sense on “what happened on LessWrong and what can we learn from it?”
I think rather than trying to say “no, don’t reflect on particular things that don’t fit the most central use case of the Review”, it seems actively good to me to take advantage of the openended nature of it to think about less central things. We can learn timeless lessons from posts that weren’t, themselves, particularly timeless.
My Current Metacognitive Engine
Someday I might work this into a nicer top-level post, but for now, here’s the summary of the cognitive habits I try to maintain (and reasonably succeed at maintaining). Some of these are simple TAPs, some of them are more like mindsets.
Twice a day, asking “what is the most important thing I could be working on and why aren’t I on track to deal with it?”
you probably want a more specific question (“important thing” is too vague). Three example specific questions (but, don’t be a slave to any specific operationalization):
what is the most important uncertainty I could be reducing, and how can I reduce it fastest?
what’s the most important resource bottleneck I could gain (or contribute to the ecosystem), and what would gain me that resource the fastest?
what’s the most important goal I’m backchaining from?
Have a mechanism to iterate on your habits that you use every day, and frequently update in response to new information
for me, this is daily prompts and weekly prompts, which are:
optimized for being the efficient metacognition I obviously want to do each day
include one skill that I want to level up in, that I can do in the morning as part of the meta-orienting (such as operationalizing predictions, or “think it faster”, or whatever specific thing I want to learn to attend to or execute better right now)
The five requirements each fortnight:
be backchaining
from the most important goals
be forward chaining
through tractable things that compound
ship something
to users every fortnight
be wholesome
(that is, do not minmax in a way that will predictably fail later)
spend 10% on meta (more if you’re Ray in particular but not during working hours. During working hours on workdays, meta should pay for itself within a week)
Correlates:
have a clear, written model of what you’re backchaining from
have a clear, written model of how you’re compounding
The general problem solving approach:
breadth first
identify cruxes
connect inner-sim to cruxes / predictions
follow your heart
see how your predictions went
Random ass skills
napping
managing working memory, innovating on and applying working memory tools
grieving
Generalizing
Skill I’m working on that hasn’t paid off yet but I think you should try anyway:

At least once a day or so, when you notice a mistake or surprise, spend a couple minutes asking “how could I have thought that faster?” (and periodically do deeper dives)
each day/week, figure out what you’re confused about or predictably going to tackle in a dumb way, and think in advance about how to be smart about it the first time
I chatted with John about this at a workshop last weekend, and did update noticeably, although I haven’t updated all the way to his position here.
What was useful to me were the gears of:
Scheming has different implications at different stages of AI power.
Scheming is massively dangerous at high power levels. It’s not very dangerous at lower levels (except insofar as this allows the AI to bootstrap to the higher levels)
At the workshop, we distinguished:
weak AGI (with some critical threshold of generality, and some spiky competences such that it’s useful, but not better than a pretty-smart-human at taking over the world)
Von Neumann level AGI (as smart as the smartest humans, or somewhat more so)
Overwhelming Superintelligence (unbounded optimization power which can lead to all kinds of wonderful and horrible things that we need to deeply understand before running)
I don’t currently buy that control approaches won’t generalize from weak AGI to Von Neumann levels.
It becomes harder, and there might be limitations on what we can get out of the Von Neumann AI because we can’t be that confident in our control scheme. But I think there are lots of ways to leverage an AI that aren’t “it goes off and does long-ranging theorizing of its own, to come up with wholesale ideas you have to evaluate.”
I think figuring out how to do this involves some pieces that aren’t particularly about “control”, but mostly don’t feel mysterious to me.
(fyi I’m one of the people who asked John to write this)