Research productivity tip: “Solve The Whole Problem Day”
(This is about a research productivity strategy that’s been working very well for me personally. But YMMV, consider reversing any advice, etc. etc.)
As a researcher, there’s kind of a stack of “what I’m trying to do”, from the biggest picture down to the most microscopic task. Here’s a typical “stack trace” of what I might be doing on a random morning:
- LEVEL 5: I’m trying to ensure a good future for life
- LEVEL 4: …by trying to solve the AGI control problem
- LEVEL 3: …in the hypothetical scenario where the future AGI’s algorithms will resemble a human brain’s algorithms
- LEVEL 2: …by trying to understand the functions of the various modulatory signals going from the brainstem to the telencephalon
- LEVEL 1: …by reading a bunch of articles about the nucleus incertus
So as researchers, we face a practical question: How do we allocate our time between the different levels of the stack? If we’re 100% at the bottom level, we run a distinct risk of “losing the plot”, and working on things that won’t actually help advance the higher levels. If we’re 100% at the top level, with our head way up in the clouds, never drilling down into details, then we’re probably not learning anything or making any progress.
Obviously, you want a balance.
And I’ve found that striking that balance properly isn’t something that takes care of itself by default. Instead, my default is to spend too much time at the bottom of the stack and not enough time higher up.
So to counteract that tendency, I have for many months now had a practice of “Solve The Whole Problem Day”. That’s one day a week (typically Friday) where I force myself to take a break from whatever detailed things I would otherwise be working on, and instead I fly up towards the top of the stack, and try to see what I’m missing, question my assumptions, find new avenues to explore, etc.
In my case, “The Whole Problem” = “The Whole Safe & Beneficial AGI Problem”. For you, it might be The Whole Climate Change Problem, or The Whole Animal Suffering Problem, or The Whole Becoming A Billionaire Problem, or whatever. (If it’s not obvious how to fill in the blank, well then you especially need a Solve The Whole Problem Day! And maybe start here & here & here.)
Implementation details
The most concrete and obvious way that my Solve The Whole Problem Days are different from my other workdays is that I have a rule that I impose on myself: No neuroscience. (“Awww c’mon, not even a little? Pretty please?” “No!!!!!”). So that automatically forces me up to like Levels 3 & 4 on the bulleted list above, instead of my usual perch at Levels 1 & 2. Of course, there’s more to it than that one rule—the point is Solving The Whole Problem, not following self-imposed rules. But still, that rule is especially helpful.
For example, when I’m answering emails and commenting on other people’s blog posts, that’s often not about neuroscience, nor about Solving The Whole Problem. So I wouldn’t count those towards fulfilling the spirit of Solve The Whole Problem Day.
The point is not to stay at a high level of the stack all day. The point is to visit a high level of the stack, and it’s fine to then drill back down into lower-level details, as long as I’m drilling down along a new and different branch of the tree.
I also have a weekly cleanup and reorganization of my to-do list, but I think of it as a totally different thing from Solve The Whole Problem Day, and indeed I do it on a different day. In fact, my Trello to-do list has a separate sub-list of tasks that I want to try tackling on an upcoming Solve The Whole Problem Day.
I have no qualms about Solving The Whole Problem on other days of the week too—I’m trying to correct a particular bias in my own workflow, and am not at risk of overcorrecting.
Why do I need to force myself to do this, psychologically?
It’s crazy: practically every Solve The Whole Problem Day, I start the morning with a feeling of dread and annoyance and strong temptation to skip it this week. And I end the day feeling really delighted about all the great things I got done. Why the annoyance and dread? Introspectively, I think there are a few things going on in my mind:
First, I’m very often immersed in some interesting problem, and reluctant to pause. “Aww,” I say to myself, “I really wanted to know what the nucleus incertus does! What on earth could it be? And now I have to wait all the way until Monday to figure it out? C’mon!!” Not just that, but all my normal heuristics for to-do-list prioritization would say that I should figure out the nucleus incertus right now: I need to do it eventually one way or the other, and I’m motivated to do it now, and I’m in an especially good position to do it right now (given that all the relevant context is fresh in my mind), and finally, the “Solve The Whole Problem” activities are not time-sensitive.
Second, I prefer working on problems that definitely have solutions, even if nobody knows them. The nucleus incertus does something. Its secrets are just waiting to be revealed, if only we knew where to look! Other low-level tasks are of the form “Try doing X with method Y”, which might or might not succeed, but at least I can figure out whether it succeeds or fails, cross it off my to-do list, and move on. By contrast, higher-level things are sometimes in that awful place where there’s neither a solution, nor a proof that no solution exists. (Think of things like “solve the whole AGI control problem”, or “find an interpretability technique that scales to AGI”.) If I’m stumped, well maybe it’s not just me, maybe there’s just no progress to be made. I find that somewhat demotivating and aversive. Not terribly so, but just enough to push me away, if I’m not being self-aware about it.
Third, I have certain ways of thinking about the bigger-picture context of what I’m working on, and I’m used to thinking that way, and it’s comfortable and I like it. But a frequent task of Solve The Whole Problem Day is to read someone coming from a very different perspective, sharing none of my assumptions, proximate goals, or terminology, and to try to understand that perspective and get something out of it. Sometimes this is fun and awesome, but sometimes it’s just a really long, hard, dense slog with no reward at the end. So it feels aversive, and comparatively unproductive.
But again that’s just me. YMMV.
(Related: Richard Hamming’s “Great Thoughts Time” on each Friday afternoon.)
Pretty sure I need to reverse the advice on this one. Thanks for including the reminder to do so!
I really like this. I think it should indeed apply equally to a startup-like context such as LessWrong. We already periodically have strategy retreats, but this “drilling down along a new and different branch of the tree” framing, that’s not an approach I had before. Thanks for writing this up.
I use an alternative technique that works well for me—making sure to walk up the stack on every significant new development at lower levels.
E.g., if on level 5 I am trying to solve X with technique Y, and I realize that it does not quite work, but that I could probably achieve X’ (which is just as good) with Y’, then before jumping into Y’ I take time to consider: well, X’ is as good as X for level 4, but does it perhaps mutate level 4 away from the higher-level goals? Maybe the fact that Y does not actually work for X indicates that the approach at one of the higher levels is off?
And it’s actually similar when Y does succeed for X—once it does, I’ve learned something new, and need to check my stack again. Or maybe I realize that Y is taking me much longer than expected—again, I need to walk the stack and figure out whether X and Y are even worth it. This way, when I am in the zone on Y, there is no distraction, but I also do not ignore the stack for too long, since being in the zone on Y for too long is itself an indication that something went wrong and the plan needs to be reexamined.
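Here’s a minimal sketch of that loop in Python (the Task structure, the trigger conditions, and the goal names are all hypothetical illustrations, not a real system):

```python
# Hypothetical sketch of "walk up the stack on every significant
# development". The Task class, triggers, and goal names are made up.

class Task:
    def __init__(self, goal, parent=None, budget_hours=8.0):
        self.goal = goal          # e.g. "solve X with technique Y"
        self.parent = parent      # the higher-level task this one serves
        self.budget_hours = budget_hours

def walk_up_the_stack(task):
    """Re-examine every ancestor: does the current plan still serve it?"""
    level = task.parent
    while level is not None:
        print(f"Re-check: does '{task.goal}' still serve '{level.goal}'?")
        level = level.parent

def on_development(task, outcome, hours_spent):
    # The three triggers from above: Y succeeded, Y needs a pivot to Y',
    # or Y is taking much longer than expected.
    if outcome in ("succeeded", "needs_pivot") or hours_spent > task.budget_hours:
        walk_up_the_stack(task)

# Using the stack from the post:
top = Task("ensure a good future for life")
mid = Task("solve the AGI control problem", parent=top)
low = Task("understand brainstem modulatory signals", parent=mid)
on_development(low, "needs_pivot", 3.0)  # walks up through both ancestors
```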
Having hard deadlines, even artificially imposed ones, helps. Making the goals at each of the higher levels explicit (and writing them down, so that I can remind myself how I ended up in whatever rabbit hole I am in) also helps.
YMMV, of course.
My mileage varies. I have a bias for the 5th level, and if I’m currently deeply immersed in a rabbit hole that I reflectively think is useful, then going up the ladder again risks reminding me how distal the rabbit-hole objectives are. I remind myself just how much I care about saving the world, but the caring mostly leaks out when I’m trying to reach more distal instrumental objectives.
The “drilling down along a new and different branch of the tree” concept makes me think of tree-search algorithms, naively depth-first or breadth-first search. It’s overly simplified, but it might uncover related theory.
The goal is to search from whichever node you estimate to be closest to the goal. Calculating that estimate is difficult, so we tend to only look at a small nearby neighbourhood, which is usually low-level. Backtracking forces you to make estimates for earlier nodes.
If I were making this algorithm faster, I’d try to find a way to make the heuristic (the estimate of nearness to the goal) cheaper to compute. I’ve no idea how to do that, but maybe looking at how past discoveries were made could help.
Then again, given that research takes a long time, maybe it’s not worth making any sacrifices to the heuristic accuracy.
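For concreteness, here is a toy greedy best-first search in Python. Everything in it (the subgoal tree, the heuristic scores, the goal test) is invented purely for illustration:

```python
# Toy greedy best-first search over a tree of subgoals: always expand
# the frontier node whose estimated distance to the goal is smallest.
# The tree, scores, and goal test below are all made up.

import heapq
import itertools

def best_first_search(root, children, heuristic, is_goal):
    counter = itertools.count()  # tie-breaker so heapq never compares nodes
    frontier = [(heuristic(root), next(counter), root)]
    seen = {root}
    while frontier:
        _, _, node = heapq.heappop(frontier)  # most promising node so far
        if is_goal(node):
            return node
        for child in children(node):
            if child not in seen:
                seen.add(child)
                heapq.heappush(frontier, (heuristic(child), next(counter), child))
    return None  # exhausted the tree without reaching the goal

tree = {
    "whole problem": ["approach A", "approach B"],
    "approach A": ["dead end"],
    "approach B": ["promising detail"],
}
scores = {"whole problem": 3, "approach A": 2, "approach B": 2,
          "dead end": 9, "promising detail": 0}

result = best_first_search(
    "whole problem",
    children=lambda n: tree.get(n, []),
    heuristic=lambda n: scores[n],
    is_goal=lambda n: scores[n] == 0,
)
print(result)  # -> promising detail
```

Note that the frontier keeps older, shallower nodes alive alongside deep ones, which is the “backtracking forces you to make estimates for earlier nodes” part: a stale high-level node can win the next pop if every deep node starts scoring badly.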