As a datapoint, none of this chilling out or sprinting hard discussion resonates with me. Internally I feel that I’ve been going about as hard as I know how to since around 2015, when I seriously got started on my own projects. I think I would be working about similarly hard if my timelines shortened by 5 years or lengthened by 15. I am doing what I want to do, I’m doing the best I can, and I’m mostly focusing on investing my life into building truth-seeking and world-saving infrastructure. I’m fixing all my psychological and social problems insofar as they’re causing friction to my wants and intentions, and as a result I’m able to go much harder today than I was in 2015. I don’t think effort is really a substantially varying factor in how good my output is or my impact on the world. My mood/attitude is not especially dour, and I’m not pouring blind hope into things I secretly know are dead ends. Sometimes I’ve been more depressed or had more burnout, but it’s not been much to do with timelines and more about the local environment I’ve been working in or internal psychological mistakes. To be clear, I try to take as little vacation time at work as I psychologically can (like 2-4 weeks per year), but that’s because there’s so much great stuff for me to build over the next decade(s), and that’d be true if I had 30-year timelines.
I am sure other people are doing differently-well, but I would like to hear from such people about their experience of things (or for people here to link to others’ writing). (I might also be more interested in the next Val post being an interview with someone, rather than broad advice.)
Added: I mean, I do sometimes work 70-hour weeks, and I sometimes work 50-hour weeks, but this isn’t a simple internal setting I can adjust; it’s much more a fact about what the work demands of me. I could work harder, but primarily by picking projects that require it and where the external world sets deadlines for me, not by “deciding” to work harder. (I’ve never really been able to make that decision; as far as I can quickly recall, it’s always failed whenever I’ve tried.)
I would strongly, strongly argue that essentially “take all your vacation” is a strategy that would lead to more impact for you on your goals, almost regardless of what they are.
Humans need rest, and humans like the folks on LW tend not to take enough.
Naively, working more will lead to more output, and if someone thinks they feel good while working a lot, I think the default guess should be that working more is improving their output. I would be interested in the evidence you have for the claim that people operating similarly to how Ben described should take more vacation.
I think there is some minimum amount of breaks and vacation that people should strongly default to taking and it also seems good to take some non-trivial amount of time to at least reflect on their situation and goals in different environments (you can think of this as a break, or as a retreat).
But, 2-4 weeks per year of vacation combined with working more like 70 hours a week seems like a non-crazy default if it feels good. This is only working around 2⁄3 of waking hours (supposing 9 hours for sleep and getting ready for sleep) and working ~95% of weeks. (And Ben said he works 50-70 hours, not always 70.)
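Those fractions are easy to sanity-check; here is a quick sketch using the same assumptions as above (9 hours/day for sleep and getting ready, a 70-hour work week, 2-4 weeks of vacation per year):

```python
# Fraction of waking hours spent working, assuming 9 h/day for sleep
# and wind-down (so 15 waking hours/day) and a 70-hour work week.
waking_hours_per_week = (24 - 9) * 7          # 105 waking hours/week
work_share = 70 / waking_hours_per_week
print(round(work_share, 2))                   # 0.67, i.e. about 2/3

# Fraction of weeks worked, taking the midpoint of 2-4 weeks vacation.
weeks_worked = 52 - 3
print(round(weeks_worked / 52, 2))            # 0.94, i.e. roughly 95%
```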
It’s worth noting that “humans perform better with more rest” isn’t a sufficient argument for thinking more rest is impactful: you need to argue this effect overwhelms the upsides of additional work. (Including things like returns to being particularly fast and possible returns to scale on working hours.)
I mean, two points:
1. We all work too many hours; working 70 hours a week persistently is definitely too many to maximize output. You get dumb fast after hour 40 and dive into negative productivity. There’s a robust organizational psych literature on this, I’m given to understand, that we all choose to ignore, because for the first ~12 weeks or so you can push beyond and get more done, but then it backfires.
2. You’re literally saying statements that I used to say before burning out, and that the average consultant or banker says as part of their path to burnout. And we cannot afford to lose either of you to burnout, especially not right now.
If you’re taking a full 4 weeks, great. 2 weeks a year is definitely not enough at a 70 hours a week pace, based on the observed long term health patterns of everyone I’ve known who works that pace for a long time. I’m willing to assert that you working 48/50ths of the hours a year you’d work otherwise is worth it, assuming fairly trivial speedups in productivity of literally just over 4% from being more refreshed, getting new perspectives from downing tools, etc.
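That break-even number is straightforward to verify: if total output is hours times per-hour productivity, then cutting hours to 48/50ths is offset by a per-hour gain of 50/48 − 1.

```python
# Productivity gain needed to offset cutting hours to 48/50ths,
# assuming output = hours * per-hour productivity.
hours_ratio = 48 / 50
required_gain = 1 / hours_ratio - 1
print(round(required_gain * 100, 2))   # 4.17 (percent), i.e. "just over 4%"
```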
Burnout is not a result of working a lot; it’s a result of work not feeling like it pays out in ape-enjoyableness[citation needed]. So they very well could be having a grand ol’ time working a lot if their attitude towards intended amount of success matches up comfortably with actual success and they find this to pay out in a felt currency which is directly satisfying. I get burned out when effort ⇒ results ⇒ natural rewards gets broken, e.g. because of being unable to succeed at something hard, or forgetting to use money to buy things my body would like to be paid with.
If someone did a detailed literature review or had relatively serious evidence, I’d be interested. By default, I’m quite skeptical of your level of confidence in these claims, given that they directly contradict my experience and the experience of people I know. (E.g., I’ve done similar things for way longer than 12 weeks.)
To be clear, I think I currently work more like 60 hours a week depending on how you do the accounting, I was just defending 70 hours as reasonable and I think it makes sense to work up to this.
I think the evidence is roughly at “this should be a weakly held prior easily overturned by personal experience”: https://www.lesswrong.com/posts/c8EeJtqnsKyXdLtc5/how-long-can-people-usefully-work
That said, I do think there’s enough evidence that I would bet (not at extreme odds) that it is bad for productivity to have organizational cultures that emphasize working very long hours (say > 60 hours / week), unless you are putting in special care to hire people compatible with that culture. Partly this is because I expect organizations to often be unable to overcome weak priors even when faced with blatant evidence.
but most of my work is very meaningful and what i want to be doing
i don’t want to see paris or play the new zelda game more than i want to make lessonline happen
i think there’s a lot of variance. i personally can only work in unpredictable short intense bursts, during which i get my best work done; then i have to go and chill for a while. if i were 1 year away from the singularity i’d try to push myself past my normal limits and push chilling to a minimum, but doing so now seems like a bad idea. i’m currently trying to fix this more durably in the long run but this is highly nontrivial
Oh, that makes sense, thanks. That seems more like a thing for people whose work comes from internal inspiration / is more artistic, and also for people who have personal or psychological frictions that cause them to burn out a lot when they do this sort of burst-y work.
I think a lot of my work is heavily pulled out of me by the rest of the world setting deadlines (e.g. users making demands, people arriving for an event, etc.), and I can cause those sorts of projects to pull lots of work out of me more regularly. I also think I don’t take that much damage from doing it.
it still seems bad to advocate for exactly the wrong policy, especially one that doesn’t make sense even if you turn out to be correct (as habryka points out in the original comment, 2028 is not really when most people expect agi to have happened). it seems very predictable that people will just (correctly) not listen to the advice, and in 2028 both sides on this issue will believe that their view has been vindicated—you will think of course rationalists will never change their minds and emotions on agi doom, and most rationalists will think obviously it was right not to follow the advice because they never expected agi to definitely happen before 2028.
i think you would have much more luck advocating for chilling today and citing past evidence to make your case.
it still seems bad to advocate for exactly the wrong policy, especially one that doesn’t make sense even if you turn out to be correct (as habryka points out in the original comment, 2028 is not really when most people expect agi to have happened).
I’m super sensitive to framing effects. I notice one here. I could be wrong, and I’m guessing that even if I’m right you didn’t intend it. But I want to push back against it here anyway. Framing effects don’t have to be intentional!
It’s not that I started with what I thought was a wrong or bad policy and tried to advocate for it. It’s that given all the constraints, I thought that preregistering a possibility as a “pause and reconsider” moment might be the most effective and respectful. It’s not what I’d have preferred if things were different. But things aren’t different from how they are, so I made a guess about the best compromise.
I then learned that I’d made some assumptions that weren’t right, and that determining such a pause point that would have collective weight is much more tricky. Alas.
But it was Oliver’s comment that brought this problem to my awareness. At no point did I advocate for what I thought at the time was the wrong policy. I had hope because I thought folk were laying down some timeline predictions that could be falsified soon. Turns out, approximately nope.
i think you would have much more luck advocating for chilling today and citing past evidence to make your case.
Empirically I disagree. That demonstrably has not been within the reach of my skill to do effectively. But it’s a sensible thing to consider trying again sometime.
to be clear, I am not intending to claim that you wrote this post believing that it was wrong. I believe that you are trying your best to improve the epistemics and I commend the effort.
I had interpreted your third sentence as still defending the policy of the post even despite now agreeing with Oliver, but I understand now that this is not what you meant, and that you are no longer in favor of the policy advocated in the post. my apologies for the misunderstanding.
I don’t think you should just declare that people’s beliefs are unfalsifiable. certainly some people’s views will be. but finding a crux is always difficult, and imo it should be done through high-bandwidth conversation, talking to many people directly to understand their views first (in every group of people, especially one that encourages free thinking among its members, there will be a great diversity of views!). it is not effective to put people on blast publicly and then backtrack when people push back saying you misunderstood their position.
I realize this would be a lot of work to ask of you. unfortunately, coordination is hard. it’s one of the hardest things in the world. I don’t think you have any moral obligation to do this beyond any obligation you feel to making AI go well / improving this community. I’m mostly saying this to lay out my view of why I think this post did not accomplish its goals, and what I think would be the most effective course of action to find a set of cruxes that truly captures the disagreement. I think this would be very valuable if accomplished and it would be great if someone did it.
I agree. To be honest I didn’t think chilling out now was a real option. I hoped to encourage it in a few years with the aid of preregistration.