I feel like this comes down a lot to intuition. All I can say is gesture at the narrowing gap between marginal cost and prices, wave my hand in the direction of discount rates and the valuation of OpenAI, and ask… are you sure?
The demand curve here looks textbook inelastic at current margins. Slashing the price of milk by 10x would have us cleaning our driveways with it; slashing the price of eggs would have us using crushed eggshells as low-grade building material. A 10x decrease in the price per token of AI is barely even noticed; in fact, in some markets outside of programming, consumer interest is down over that same window. This is an example of a low-margin good with little variation in quality descending into a price war. Maybe LLMs have a long way left to grow and can scale to AGI (maybe, maybe not), but if we're looking just at the market, this doesn't look like something Jevons paradox applies to at all. People are just saying words, and if you switched out Jevons for Piglet they'd make as much sense, imo.
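To make the hand-waving a little more concrete, here's the textbook condition with made-up numbers (the 10x price cut is the scenario above; the 10% usage bump is a figure I'm inventing to stand in for "barely even noticed"):

$$S = P \cdot Q, \qquad |\varepsilon| = \left|\frac{\%\Delta Q}{\%\Delta P}\right|, \qquad \text{a price cut raises total spend } S \text{ only if } |\varepsilon| > 1.$$

With the price per token falling 10x ($\%\Delta P \approx -90\%$) and usage up only 10% ($\%\Delta Q \approx +10\%$):

$$|\varepsilon| \approx \frac{10}{90} \approx 0.11, \qquad \frac{S_{\text{after}}}{S_{\text{before}}} = \frac{(P/10)(1.1\,Q)}{P\,Q} = 0.11.$$

Under those assumed numbers, total spend falls roughly 9x. The Jevons outcome needs the quantity response to more than offset the price cut, which is exactly what I'm not seeing.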
The proposal just seems ridiculous to me, right? Who right now is standing on the sidelines with a killer AI app that could rip up the market if only tokens were a bit cheaper? Nobody. The bottleneck is, and always has been, quality: the ability of LLMs to be less-wrong-so-dang-always. Jevons paradox seems to be filling the role of a magic word in these conversations; it gets invoked despite being out of place.
Sorry if this comes across as invective; you're mostly explaining a point of view, so I'm not frustrated in your direction, but people are making little sense to me right now.
OpenAI’s Deep Research is looking like something that could be big, and they were standing on the sidelines in part because the tokens weren't cheap.
This is actually a good use case, one that fits what GPT does well and where very cheap tokens help!
Pending some time for people to pick at it and test its limits, this might be really good. My instinct is that legal research, case law, etc. will be the test of how good it is; if it does well there, this might be its foothold into real commercial use that actually generates profit.
My prediction is that we will be glad this exists. It will not be “PhD level”, a phrase which defaces all who utter it, but it will save some people a lot of time and effort.
Where I think we disagree: this will likely not produce a Jevons-paradox scenario where we collectively spend much more money on LLM tokens despite their decreased cost. A killer app this is not.
My prediction is that low-level users will use this infrequently because Google (or vanilla ChatGPT) is sufficient; what they are looking for is not a report but a webpage, and one likely at the top of their search already. Even if it would save them time, they will never use it so often that their first instinct is Deep Research rather than Google; they will not recognize where Deep Research would be better, and they won't change their habits even if they do. On the far end, some grad students will use this to get started, but it will not do the work of actually doing the research. Between paywalls disrupting things and the limits around important physical media, there is a high likelihood that this won't replace any of the actual research grad students (or lawyers, paralegals, etc.) have to do. The number of hours they spend won't be much affected; the range of users who find much value will be small, and they probably won't use it every day.
I expect that, by token usage, Deep Research will not be a big part of what people use ChatGPT for. If I'm wrong, I predict it's because the legal profession found a use for it.
I will see everyone in a year (if we're alive) to find out whether this pans out!
The single prime causative factor driving the explosive growth in AI demand/revenue is, and always has been, the exponential reduction in $/FLOP via Moore's law, which is simply Jevons paradox manifested. With more compute, everything is increasingly easy and obvious; even idiots can create AGI with enough compute.
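(For concreteness, the exponential claim in its usual stylized form, with the doubling period $T$ left as a free parameter since the comment doesn't specify one:

$$c(t) = c_0 \cdot 2^{-t/T},$$

where $c(t)$ is the cost per FLOP at time $t$ and $c_0$ the cost today. The Jevons reading is then that the quantity of compute demanded grows faster than $2^{t/T}$, so total spend rises even as the unit cost falls.)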
I think there’s some miscommunication here, on top of a fundamental disagreement on whether more compute takes us to AGI.
On the miscommunication: we're not talking about the falling cost per FLOP; we're talking about a world where OpenAI either does or does not have a price war eating its margins.
On the fundamental disagreement: I assume you don't take very seriously the idea that AI labs are seeing a breakdown of scaling laws? No problem if so; reality should resolve that disagreement relatively soon!