I stumbled upon a Twitter thread where Eliezer describes what seems to be a cognitive algorithm of his, equivalent to Tune Your Cognitive Strategies, and have decided to archive / repost it here.
Sarah Constantin: I really liked this example of an introspective process, in this case about the “life problem” of scheduling dates and later canceling them: malcolmocean.com/2021/08/int…
Eliezer Yudkowsky: See, if I’d noticed myself doing anything remotely like that, I’d go back, figure out which steps of thought were actually performing intrinsically necessary cognitive work, and then retrain myself to perform only those steps over the course of 30 seconds.
SC: if you have done anything REMOTELY like training yourself to do it in 30 seconds, then you are radically smarter/more able/etc than me and all the other people who do slower introspective practices.
SC: I don’t know whether to be impressed or to roll to disbelieve.
EY: I mean I suspect that this actually requires something like a fast perceptual view of minds as engines and thoughts as doing work and like actually draws on my mind design knowledge, but, even so, I ask: Do you constantly look back and ask “How could I have thought that faster?”
SC: No, I’ve never asked that.
EY: Okay, well, every time I’m surprised by reality I look back and think “What about my model and my way of thinking could I change that would have predicted that better, without predicting a bunch of other things worse?”
EY: When somebody at a MIRI workshop comes up with a math proof, I look over it and ask if there’s a way to simplify it. Usually, somebody else does beat me to inventing a proof first; but if my intuition says it was too complicated, I often am first to successfully simplify it.
EY: And every time I complete a chain of thought that took what my intuition says was a lot of time, I look back and review and ask myself “How could I have arrived at the same destination by a shorter route?”
EY: It’s not impossible that you have to be Eliezer Yudkowsky for this to actually work—I am never sure about that sort of thing, and have become even less so as time goes on—but if AI timelines were longer I’d tell somebody, like, try that for 30 years and see what happens.
EY: Man, now I’m remembering when I first started doing this consciously as a kid. I called it Shortening the Way, because a rogue rabbi had recently told me that “Kwisatz Haderach” was actually a reference to a Kabbalistic concept about teleportation, so that term was on my mind.
I read this and think “ah, yes, this is valuable and important and I should be trying to do that more”. I thought as much when I first read it, too, but I don’t think it stayed on my mind. It’s too compressed, and not a ready-to-use cognitive strategy.
But taking a few moments to extrapolate it into something better, starting with why I’m not doing it to begin with:
A reason I don’t do more of this is that I can’t do it on the order of 30 seconds. My guess is that just constructing a picture of which mental operations I performed, and which ones I could have performed instead, is the work of many minutes.
The kinds of reasoning I really wish I’d done faster happened over long stretches of time, and it really would take a bunch of mental excavation to reconstruct them.
I don’t have a well-specified ontology for mental operations, so it’s hard to specify changes to them. (In contrast, I have a very clear ontology for driving a car; there, noticing an error and rehearsing doing it differently within 30 seconds feels doable.) This means that much of the work of figuring out how to do better is trying to carve out descriptions of what went wrong in the first place.
The things that went wrong run deep, or something, into weird emotional territory that is hard to analyze.
Solving problems and reaching true conclusions is hard enough that I’m caught up on that level, from one problem to the next, such that I feel too busy for reflection.
Yet I don’t fully buy all the above.
I do think that doing more of this, making it a habit, will require intentional practice: scheduled 30-minute blocks. That seems worth it; I should add a reminder to ye old exobrain. I’m forming an intention to try it.
The other piece is the noticing. I don’t think I have a part of my brain that registers a “reached some milestone” event such that other actions could be triggered by it. Something, something Logan’s Noticing sequence. I’ll try that.
Ok, so where does that leave me regarding this crosspost?
I want to give this a 4 because it’s Rationality stuff from Eliezer. But I don’t think I can: however great it seems, I don’t see that people will be able to do much with it without a bunch of unpacking (as I’m attempting). Then again, if I do the post-inspired work for a while and get great gains, I might want to say “it was short, but it had such a large effect on me that it was definitely worth a 4, or even a 9!”
I’ve spent a lot of time figuring out how to implement this exercise, which I wrote up in The “Think It Faster” Exercise (and slightly more streamlined “Think it Faster” worksheet).
I’ve reviewed it more thoroughly over there.
I would not normally vote on this post, as the technique of “How could I have thought that faster?” seems extremely obvious to me, but it is also very important if you are not in fact trying to improve your thinking after being surprised (or after any other shortcoming). Since this post has 241 upvotes and multiple comments from people disagreeing with the framing (example: Said Achmiz, who is not an idiot!), I have review-upvoted this post.
I think the framing of “think it faster” is specifically something you should track, beyond just “What did I learn here really?” (which I see as important subskills that help you figure out how to think it faster) or “How could I have thought that with less information?” (which I see as fully subordinate to thinking it faster, because you get later info later). By focusing on thinking it faster, you focus on cognitive strategies—on how you could’ve approached the issue differently with what you knew at the time, or maybe you should’ve put more/less stock in a certain kind of evidence.
The main problem with this post is that it gives no guide for how to go about learning how to think faster. Maybe you can’t come up with a good guide, but for this sort of thing a list of examples is itself useful.
Here’s a list of examples (that are too abstracted—next time I encounter something that I see how I could’ve thought it faster, I’ll write it down, and when I’ve gotten a bunch I’ll either post about it or add to this comment):
Say I am trying to prove a theorem. I will pursue a couple approaches, and then finally get something that works. When I look back on what I did, I will often find that I should’ve known better. Common problems:
Spending too much time up front trying a direct approach instead of just looking at small examples.
Not previously picking up on a general strategy for problems similar to the one at issue.
Spending a bunch of time trying to prove the theorem true (or false) when I could’ve quickly figured it out had I switched to trying to prove the opposite earlier. Performing this kind of “error analysis” feels like a significant contributor to my progress in mathematics (and physics).
Likewise, for more mundane life stuff:
Caching a thought on poor justification back when I was younger (and dumber), especially if I was a kid when I cached it! Examples: using shaving cream (it in fact actually works to make shaving not hurt), becoming a vegetarian (younger me had some shaky justification for not doing so). The implication is that I should more readily question what I take for granted, and mentally “decay” the trust I put in cached conclusions as time elapses since I last considered them.
Rationalizing to myself at multiple earlier points, or ignoring intuitive feelings of wrongness or confusion.
Ignoring plenty of early warning signals that I should’ve acted on.
Consistently making an error in the same direction instead of veering hard the other way (assuming it’s not risky to do so). For example, when learning to park a car, I noticed that I was consistently turning in too early, so I decided to turn way later (against my intuitions). This instantly improved my parking a lot, and it suggests I should apply the strategy everywhere. For tasks where failing in one direction isn’t much worse than failing in the other, you should aim to be just as likely to miss in either direction: if you expect to turn too early, turn later until you are just as worried about turning too late (assuming you’re in an empty parking lot where an overshoot can’t hit anyone).
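The “err symmetrically” heuristic above can be sketched as a tiny simulation (the function, parameter values, and Gaussian noise model are my own illustrative assumptions, not from the post): start with a systematic bias, then nudge a correction term after every miss until early and late misses balance out.

```python
import random

def calibrate_correction(bias=-2.0, noise=1.0, step=0.05, trials=500):
    """Symmetric-error calibration sketch: attempts at hitting a target
    carry a systematic bias (always 'turning in too early'). After each
    miss, nudge a correction term in the opposite direction, so that
    over time early and late misses become equally likely."""
    correction = 0.0
    early = late = 0
    for _ in range(trials):
        error = bias + correction + random.gauss(0, noise)  # signed miss
        if error < 0:
            early += 1
            correction += step   # missed early -> aim later next time
        else:
            late += 1
            correction -= step   # missed late -> aim earlier next time
    return correction, early, late
```

With a bias of −2, the correction drifts toward roughly +2, at which point misses land early and late about equally often, which is the stated goal of being just as worried about either direction.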
Or when I’m surprised by e.g. the news or a factoid, I might’ve:
Under- (or over-) estimated how common a certain phenomenon is (“Weird outcome X actually occurs in 70% of cases”).
Underestimated the state of the art in some field (like gwern’s list of side-channel attacks, e.g. determining what someone was saying by watching a bag of potato chips, or identifying you by your heartbeat as detected by an invisible laser from 200 m away).
That there were a bunch of pieces of evidence I failed to notice or consider that could’ve predicted it. (Usually I discover these by making surprised sounds at my friends. Your mileage may vary if your friends aren’t as good at examining their reasons for not being surprised, or if they wouldn’t feel surprised by it either way.)
The best updates are more general, but unfortunately those are harder to discover.
So I think this post is pointing at something very important for my personal rationality practice, but it gives me almost none of what I need to actually do it successfully.