The most vivid passage I’ve read recently on trying hard, which reminded me of Eliezer’s challenging the difficult sequence, is the opener in John Psmith’s review of Reentry by Eric Berger:
My favorite ever piece of business advice comes from a review by Charles Haywood of a book by Daymond John, the founder of FUBU. Loosely paraphrased, the advice is: “Each day, you need to do all of the things that are necessary for you to succeed.” Yes, this is tautological. That’s part of its beauty. Yes, actually figuring out what it is you need to do is left as an exercise for the reader. How could it be otherwise? But the point of this advice, the stinger if you will, is that most people don’t even attempt to follow it.
Most people will make a to-do list, do as many of the items as they can until they get tired, and then go home and go to bed. These people will never build successful companies. If you want to succeed, you need to do all of the items on your list. Some days, the list is short. Some days, the list is long. It doesn’t matter, in either case you just need to do it all, however long that takes. Then on the next day, you need to make a new list of all the things you need to do, and you need to complete every item on that list too. Repeat this process every single day of your life, or until you find a successor who is also capable of doing every item on their list, every day. If you slip up, your company will probably die. Good luck.
A concept related to doing every item on your to-do list is “not giving up.” I want you to imagine that it is a Friday afternoon, and a supplier informs you that they are not going to be able to deliver a key part that your factory needs on Monday. Most people, in most jobs, will shrug and figure they’ll sort it out after the weekend, accepting the resulting small productivity hit. But now I want you to imagine that for some reason, if the part is not received on Monday, your family will die.
Are you suddenly discovering new reserves of determination and creativity? You could call up the supplier and browbeat/scream/cajole/threaten them. You could LinkedIn stalk them, find out who their boss is, discover that their boss is acquaintances with an old college friend, and beg said friend for the boss’s contact info so you can apply leverage (I recently did this). You could spend all night calling alternative suppliers in China and seeing if any of them can send the part by airmail. You could spend all weekend redesigning your processes so the part is unnecessary. And I haven’t even gotten to all the illegal things you could do! See? If you really, really cared about your job, you could be a lot more effective at it.
Most people care an in-between amount about their job. They want to do right by their employer and they have pride in their work, but they will not do dangerous or illegal or personally risky things to be 5% better at it, and they will not stay up all night finishing their to-do list every single day. They will instead, very reasonably, take the remaining items on their to-do list and start working on them the next day. Part of what makes “founder mode” so effective is that startup founders have both a compensation structure and social permission that lets them treat every single issue that comes up at work as if their family is about to die.
The rest of the review is about Elon and SpaceX, who are well beyond “founder mode” in trying hard; the anecdotes are both fascinating and a bit horrifying in the aggregate, but also useful in recalibrating my internal threshold for what actually trying hard looks like and whether that’s desirable (short answer: no, but a part of me finds it strangely compelling). It also makes me somewhat confused as to why I get the sense that some folks with both high p(doom)s and a bias towards action aren’t trying as hard, in a missing mood sort of way. (It’s possible I’m simply wrong; I’m not working on anything alignment-related and am simply going off vibes across LW/AF/TPOT/EAGs/Slack/Discord etc.)
This reminded me of another passage by Some Guy armchair psychologizing Elon (so take this with a truckload of salt):
Imagine you’re in the cockpit of an airplane. There’s a war going on outside and the plane has taken damage. The airport where you were going to land has been destroyed. There’s another one, farther away, but all the dials and gauges are spitting out one ugly fact. You don’t have the fuel to get there.
The worst part of your situation is that it’s not hopeless. If you are willing to do the unthinkable you might survive.
You go through the plane with a wrench and you start stripping out everything you possibly can. Out the door it goes. The luggage first. The seats. The overhead storage bins. Some of this stuff you can afford to lose, but it’s not enough to get where you’re going. All the easy, trivial decisions are made early.
Out goes the floor paneling and back-up systems. Wires and conduits and casing. Gauges for everything you don’t need, like all the gauges blaring at you about all the things you threw out the door. You have to stand up in the cockpit because your pilot chair is gone. Even most of the life support systems are out the door because if you can’t get to the other airport you’re going to die anyway. The windows were critical to keep the plane aerodynamic but as long as you can shiver you don’t think you’ll freeze to death so your coat went out the window as well. Same with all the systems keeping the air comfortable in the cabin, so now you’re gasping just to stay standing.
Everything you’re doing is life or death. Every decision.
This is the relationship that Elon has with his own psyche. Oh, it’s not a perfect analogy but this seems close enough to me. There’s some chicken and the egg questions here for me, but consider the missions he’s chosen. All of them involve the long-term survival of humanity. Every last one. … If he didn’t choose those missions because he has a life-or-death way of looking at the world, he certainly seems to have acquired that outlook after the decades leading those companies.
This makes sense when you consider the extreme lengths he’s willing to push himself to in order to succeed. In his own mind, he’s the only thing that stands between mankind and oblivion. He’s repurposed every part of his mind that doesn’t serve the missions he’s selected. Except, of course, no human mind could bear that kind of weight. You can try, and Elon has tried, but you will inevitably fail. …
Put yourself back in the cockpit of the plane.
You tell yourself that none of it matters even if part of you knows that some of your behavior is despicable, because you have to land the plane. All of humanity is on the plane and they’re counting on you to make it to the next airport. You can justify it all away because humanity needs you, and just you, to save it.
Maybe you’ve gone crazy, but everyone else is worse off.
People come into the cockpit to tell you how much better they would do at flying the plane than you. Except none of them take the wheel. None of them even dream of taking the wheel.
You try to reason with them, explain your actions, tell them about the dangers, but all they do is say it doesn’t seem so bad. The plane has always flown. They don’t even look at the gauges. The plane has always flown! Just leave the cockpit and come back into the cabin. It’s nice back there. You won’t have to look at all those troubling gauges!
Eliezer gives me this “I’m the only person willing to try piloting this doomed plane” vibe too.
It’s good to know when you need to “go hard”, to be able to do so if necessary, and to assess accurately whether it’s necessary. But it often isn’t necessary, and when it isn’t, going hard all the time is really bad, for lots of reasons, including not having time to mull over the big picture and notice new things. Like how Elon Musk built SpaceX to mitigate x-risk without it ever crossing his mind that interplanetary colonization wouldn’t actually help with x-risk from AI (and then pretty much everything Elon has done about AI x-risk since then has made the problem worse, not better). See e.g. What should you change in response to an “emergency”? And AI risk, Please don’t throw your mind away, Changing the world through slack & hobbies, etc. Oh also, pain is not the unit of effort.
Furthermore, going hard imposes opportunity costs and literal costs on future you, even if you have all your priorities perfectly lined up and know exactly what should be worked on at any time. If you destabilise yourself enough trying to “go for the goal”, your net impact might ultimately be negative (not naming any names here...).
This is very close to some ideas I’ve been trying and failing to write up. In “On Green”, Joe Carlsmith writes, “Green is what told the rationalists to be more OK with death, and the EAs to be more OK with wild animal suffering.” But wait, hang on: actually being OK with death is the only way to stay sane. And, while it’s not quite the same thing, the immediate must-reduce-suffering-footprint drive that EAs have may have ended up giving some college students serious dietary deficiencies.
some ideas I’ve been trying and failing to write up … actually being OK with death is the only way to stay sane
By “being OK with death” you mean something like, accepting that efforts to stop AI might fail, and it really might kill us all? But without entirely giving up?
Yeah, basically. I think “OK-ness” in the human psyche is a bit of a binary, and it’s uncorrelated with one’s actions a lot of the time.
So you can imagine four quadrants: “Ok with dying” vs. “Not Ok with dying”, and, separately, “Tries to avoid dying” vs. “Doesn’t try to avoid dying”. Most normies are in the “Ok with dying” + “Doesn’t try to avoid dying” quadrant (and quite a few are in the “Not Ok with dying” + “Doesn’t try to avoid dying” quadrant), while lots of rats are in the “Not Ok with dying” + “Tries to avoid dying” quadrant.
I think that, right now, most of the sane work being done is in the “Ok with dying”+”Tries to avoid dying” quadrant. I think Yudkowsky’s early efforts wanted to move people from “Doesn’t try...” to “Tries...” but did this by pulling on the “Ok...” to “Not Ok...” axis, and I think this had some pretty negative consequences.
the opener in John Psmith’s review of Reentry by Eric Berger: “My favorite ever piece of business advice comes from a review by Charles Haywood of a book by Daymond John...”
I found this nesting very funny. Bravo if it was intentional.
Necessary law of equal and opposite advice mention here: “You can only do as much in a day as you can do.”