When is the “efficient outcome-achieving hypothesis” false? More narrowly, under what conditions are people more likely to achieve a goal (or harder, better, faster, stronger) with fewer resources?
The timing of this quick take is of course motivated by recent discussion about deepseek-r1, but I’ve had similar thoughts in the past when observing arguments against e.g. hardware restrictions: that they’d motivate labs to switch to algorithmic work, which would speed up timelines (rather than just reducing the naive expected rate of slowdown). Such arguments propose that labs are following predictably inefficient research directions. I don’t want to categorically rule out such arguments. From the perspective of a person with good research taste, everyone else with worse research taste is “following predictably inefficient research directions”. But the people I saw making those arguments were generally not people who might conceivably have an informed inside view on novel capabilities advancements.
I’m interested in stronger forms of those arguments, not limited to AI capabilities. Are there heuristics about when agents (or collections of agents) might benefit from having fewer resources? One example is the resource curse, though the state of the literature there is questionable, and if the effect exists at all, it’s either weak or depends on other factors to materialize with a meaningful effect size.
I think you also have to factor in selection bias. Like, suppose there are 3 organizations with 100 resource units, 10 with 20 units, and 30 with 5 units. Maybe resources are helpful, but not helpful enough that all the advancements will concentrate in the top 3; with ten times as many small organizations as large ones, some advances will come from the small ones on base rates alone, so observing a low-resource breakthrough isn’t by itself evidence that fewer resources help.
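Here’s a minimal Monte Carlo sketch of that base-rate effect, in Python. The org counts and resource levels come from the example above; the assumption that an org’s chance of producing the next advance scales like resources^α, with α < 1 standing in for diminishing returns, is mine, purely for illustration.

```python
import random

# Toy model from the example above: 3 orgs with 100 units,
# 10 with 20 units, 30 with 5 units.
# Assumption (mine, illustrative): each org's chance of producing
# the next advance is proportional to resources ** ALPHA, with
# ALPHA < 1 capturing diminishing returns to resources.
ALPHA = 0.5
TRIALS = 100_000

orgs = [("big", 100)] * 3 + [("mid", 20)] * 10 + [("small", 5)] * 30
weights = [r ** ALPHA for _, r in orgs]

counts = {"big": 0, "mid": 0, "small": 0}
for _ in range(TRIALS):
    # Sample which org makes the next advance, weighted by resources ** ALPHA.
    tier, _ = random.choices(orgs, weights=weights)[0]
    counts[tier] += 1

for tier, n in counts.items():
    print(f"{tier}: {n / TRIALS:.1%} of advances")
```

With α = 0.5, roughly half the simulated advances come from the 30 small orgs, even though each individual large org is still the single most likely source. So “a low-resource org made the breakthrough” is weak evidence that resources don’t matter; the informative comparison is per-org rates across tiers, not which tier produced any particular headline result.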