Improvements in the models’ thinking quality don’t address one of the main causes of downlift: the breaking up of deep work by regular (and sometimes surprising) 1-10 minute stretches where you can’t do productive work because the LLM is executing a task, so you lose cognitive context and tend toward shallower decision-making. This continues to plague me, often costing me a lot of time (both in the individual chunks and when summing my decision-making over a day).
Not convinced this isn’t a temporary artefact of the current time horizons. In the future, I think it’s plausible that the tasks you’d delegate would fall into two categories: (a) shallow tasks that future models can complete near-instantly, or (b) deep tasks that would take future models hours to complete.
Fair enough; maybe this counts. But is there really a rich suite of skills like that, and would they really take that long to learn by the time learning them becomes immediately net-positive?
I think it’s fairly likely I need to re-orient my entire workflow around constantly (but somewhat unpredictably) hitting heavy-tailed stretches of time where I can’t do productive work on my main task. This is not a small deal. I suspect that many people will deal with it very differently.
Here are some possible responses:
Build a practice of keeping multiple parallel LLM projects you can switch between (I have not found this cognitively trivial)
Build up a backlog of simple, low-context tasks you can do during the gaps, and figure out how to turn your lower-importance work into tasks of that kind
Learn how to identify tasks that aren’t worth delegating because of the downlift, even though you know an AI could do them (a rough model of this trade-off is sketched below).
The first two really sound quite complex, and the third sounds genuinely hard. I suspect other people will find other solutions...
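To make the third option concrete, here is a minimal back-of-envelope sketch of the trade-off. All of the parameter names and numbers are my own hypothetical assumptions, not anything measured; the point is just that once you account for unusable wait time, refocusing, and review, a delegable task can easily come out net-negative.

```python
# Back-of-envelope model of when delegating a task to an LLM is net-positive.
# Every parameter below is a hypothetical assumption, not a measured value.

def delegation_net_minutes(
    manual_minutes: float,    # time to just do the task yourself
    wait_minutes: float,      # time the LLM spends executing
    usable_fraction: float,   # fraction of the wait you can actually use (often ~0)
    refocus_minutes: float,   # cost of rebuilding cognitive context afterwards
    review_minutes: float,    # time to check and fix the LLM's output
) -> float:
    """Positive => delegating saves time; negative => downlift."""
    cost = wait_minutes * (1 - usable_fraction) + refocus_minutes + review_minutes
    return manual_minutes - cost

# A 6-minute task with a 4-minute LLM run, no usable wait time,
# 5 minutes to refocus and 2 minutes to review is net-negative:
print(delegation_net_minutes(6, 4, 0.0, 5, 2))  # -5.0
```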