If you want to be twice as profitable as your competitors, you don’t have to be twice as good as them. You just have to be slightly better.
I think AI development is mainly compute-constrained (relevant for intelligence explosion dynamics).
There are some arguments against, based on the high spending of firms on researcher and engineer talent. The claim is that this supports one or both of a) large marginal returns to having more (good) researchers or b) steep power laws in researcher talent (implying large production multipliers from the best researchers).
Given that lab workforces remain fairly small, I think the spending naively supports (b) better.
But in fact I think there is another, even better explanation:
- Researchers’ taste (an AI production multiplier) varies more smoothly
  - (research culture/collective intelligence of a team or firm may be more important)
- Marginal parallel researchers have sharply diminishing AI production returns (sometimes negative, when the added researchers have worse taste)
  - (also, determining a researcher’s taste ex ante is hard)
- BUT firms’ utility is sharply convex in AI production
  - capturing more accolades and market share is basically the entire game
  - spending as much time as possible with a non-commoditised offering allows profiting off fast-evaporating margin
- so firms are competing over getting cool stuff out first
  - time-to-delivery of non-commoditised (!) frontier models
- and over getting loyal/sticky customer bases
  - ease-of-adoption of product wrapping
  - sometimes differentiation of offerings
- this turns small differences in human capital/production multiplier/research taste into big differences in firm utility
- so demand for the small pool of researchers with (legibly) great taste is very hot
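A toy numeric sketch of the convexity point (the functional form and numbers are my own illustration, not from the post): if firm payoff is convex in production, a small edge in research taste becomes a large edge in payoff.

```python
# Toy model (illustrative assumption): payoff scales as production**k for
# some convexity k > 1, e.g. via winner-take-most market share dynamics.

def relative_payoff(edge, k):
    """Payoff ratio of a firm whose production is (1 + edge) times a
    rival's, when payoff scales as production**k."""
    return (1 + edge) ** k

# A 10% production edge (slightly better taste)...
edge = 0.10
for k in (1, 5, 10):
    print(f"k={k:2d}: payoff ratio = {relative_payoff(edge, k):.2f}")
# ...is a 10% payoff edge under linear utility (k=1), but roughly 1.6x at
# k=5 and 2.6x at k=10: "twice as profitable" from "slightly better".
```

The exponent k is a stand-in for whatever mechanism makes utility convex (first-mover margins, sticky customers); the point is only that convexity amplifies small production differences.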
This also explains why it’s been somewhat ‘easy’ (but capital intensive) for a few new competitors to pop into existence each year, and why firms’ revealed-preference savings rate into compute capital is enormous (much greater than 100% of revenue, i.e. funded by continual capital raises!).
We see token prices dropping incredibly sharply, which supports the claim about margins on non-commoditised offerings (though this is also consistent with a Wright’s Law effect from (runtime) algorithmic efficiency gains, which should certainly also be expected).
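For concreteness, the Wright’s Law alternative can be sketched as follows (the learning rate and scale numbers are hypothetical, chosen only to illustrate the shape of the effect):

```python
import math

# Wright's Law sketch: unit cost falls by a fixed fraction each time
# cumulative production doubles, so cost per token c(n) = c0 * n**(-b),
# with b = -log2(progress ratio). Numbers below are illustrative.

def wright_cost(c0, n, learning_rate):
    """Cost per unit after cumulative production n, assuming cost falls by
    `learning_rate` per doubling (e.g. 0.30 = 30% cheaper per doubling)."""
    b = -math.log2(1 - learning_rate)
    return c0 * n ** (-b)

# With a 30% cost drop per doubling, a 1000x growth in cumulative tokens
# (~10 doublings) cuts unit cost to roughly 0.7**10, i.e. ~3% of original:
print(wright_cost(1.0, 1000, 0.30))
```

So steep price declines alone can’t distinguish evaporating margins from genuine efficiency gains; both curves look similar from the outside.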
A lot of engineering effort is being put into product wrappers and polish, which supports the customer base claim.
The implications include: headroom above top human expert teams’ AI research taste could be on the small side (I think this is right for many R&D domains, because a major input is experimental throughput). So both quantity and quality of (perhaps automated) researchers should have steeply diminishing returns on the AI production rate. But might they nevertheless unlock a practical monopoly (or at least an increasingly expensive barrier to entry) on AI-derived profit, by keeping the (more monetisable) frontier out of reach of competitors?
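The diminishing-returns claim can be made concrete with a toy production function (the Cobb–Douglas form and exponents are assumptions for illustration, not from the post): if compute carries most of the weight, doubling researchers moves the production rate much less than doubling compute.

```python
# Toy production function (illustrative assumption): AI production rate
# as compute**a * researchers**b with a + b = 1 and b small, reflecting
# the claim that experimental throughput (compute) is the major input.

def production_rate(compute, researchers, a=0.8, b=0.2):
    return compute ** a * researchers ** b

base = production_rate(100, 100)
print(production_rate(100, 200) / base)  # double researchers: ~1.15x
print(production_rate(200, 100) / base)  # double compute:     ~1.74x
```

Under these (assumed) exponents, even a large researcher buildout yields modest gains unless compute scales with it, consistent with the compute-constrained picture.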