That is, suppose it’s the case that GPT-3 is the first successfully commercialized language model. (I think in order to make this literally true you have to throw on additional qualifiers that I’m not going to look up; pretend I did that.) So on a graph of “language model of type X revenue over time”, total revenue is static at 0 for a long time and then shortly after GPT-3’s creation departs from 0.
I think it’s the nature of every product that comes on the market that it will experience a discontinuity from having zero revenue to having some revenue at some point. It’s an interesting question of when that will happen, and maybe your point is simply that it’s hard to predict when that will happen when you just look at the Penn Treebank trend.
However, I suspect that the revenue curve will look pretty continuous, now that it’s gone from zero to one. Do you disagree?
In a world with continuous, gradual progress across a ton of metrics, you’re going to get discontinuities from zero to one. I don’t think anyone from the Paul camp disagrees with that (in fact, Katja Grace talked about this in her article). From the continuous takeoff perspective, these discontinuities don’t seem very relevant unless going from zero to one is very important in a qualitative sense. But I would contend that going from “no revenue” to “some revenue” is not actually that meaningful in the sense of distinguishing AI from the large class of other economic products that have gradual development curves.
your point is simply that it’s hard to predict when that will happen when you just look at the Penn Treebank trend.
This is a big part of my point; a smaller elaboration is that it can be easy to trick yourself into thinking that, because you understand what will happen with PTB, you’ll understand what will happen with economics/security/etc., when in fact you don’t have much understanding of the connection between those, and there might be significant discontinuities. [To be clear, I don’t have much understanding of this either; I wish I did!]
For example, I imagine that, by thirty years from now, we’ll have language/code models that can do significant security analysis of the code that was available in 2020, and that this would have been highly relevant/valuable to people in 2020 interested in computer security. But when in the 2020-2050 range will this happen? I’m pretty uncertain, and I expect this to look a lot like ‘flicking a switch’ in retrospect, even though the lead-up to flicking that switch will probably look like smoothly increasing capabilities on ‘toy’ problems.
[My current guess is that Paul / people in “Paul’s camp” would mostly agree with the previous paragraph, except for thinking that it’s sort of weird to focus on specifically AI computer security productivity, rather than the overall productivity of the computer security ecosystem, and this misplaced focus will generate the ‘flipping the switch’ impression. I think most of the disagreements are about ‘where to place the focus’, and this is one of the reasons it’s hard to find bets; it seems to me like Eliezer doesn’t care much about the lines Paul is drawing, and Paul doesn’t care much about the lines Eliezer is drawing.]
However, I suspect that the revenue curve will look pretty continuous, now that it’s gone from zero to one. Do you disagree?
I think I agree in a narrow sense and disagree in a broad sense. For this particular example, I expect OpenAI’s revenues from GPT-3 to look roughly continuous now that they’re selling/licensing it at all (until another major change happens; like, the introduction of a competitor would likely cause the revenue trend to change).
More generally, suppose we looked at something like “the total economic value of horses over the course of human history”. I think we would see mostly smooth trends plus some implied starting and stopping points for those trends. (Like, “first domestication of a horse” probably starts a positive trend, “invention of stirrups” probably starts another positive trend, “introduction of horses to America” starts another positive trend, “invention of the automobile” probably starts a negative trend that ends with “last horse that gets replaced by a tractor/car”.)
In my view, ‘understanding the world’ looks like having a causal model that you can imagine variations on (and have those imaginations be meaningfully grounded in reality), and I think the bits that are most useful for building that causal model are the starts and stops of the trends, rather than the smooth adoption curves or mostly steady equilibria in between. So it seems sort of backwards to me to say that for most of the time, most of the changes in the graph are smooth, because what I want out of the graph is to figure out the underlying generator, where the non-smooth bits are the most informative. The graph itself only seems useful as a means to that end, rather than an end in itself.
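To make the “starts and stops are the informative bits” point concrete, here is a minimal sketch (all numbers made up for illustration) in the spirit of the horse-value example: a piecewise-linear trend whose slope shifts at a few change points. Once you know where the slope changes, the smooth stretches in between are fully predictable; recovering the change points is recovering the generator.

```python
# Illustrative sketch: in a piecewise-linear series, the "starts and stops"
# of trends (change points) carry the information about the generator;
# the smooth stretches in between are predictable once the segments are known.
# The breakpoints and slopes below are invented purely for illustration.

def piecewise_value(t, breakpoints):
    """Value at time t of a trend whose slope changes at the given times.

    `breakpoints` maps a start time to the slope that applies from then on;
    before the first breakpoint the series is flat at zero.
    """
    value, slope, last = 0.0, 0.0, 0
    for start in sorted(breakpoints):
        if start > t:
            break
        value += slope * (start - last)  # accumulate the finished segment
        slope, last = breakpoints[start], start
    return value + slope * (t - last)

def change_points(series):
    """Return indices where the first difference of the series changes,
    i.e. where the slope of the underlying trend shifts."""
    diffs = [b - a for a, b in zip(series, series[1:])]
    return [i + 1 for i, (d0, d1) in enumerate(zip(diffs, diffs[1:])) if d1 != d0]

# A toy "economic value of horses" curve: flat, then two positive trends
# (domestication, stirrups), then a negative trend (the automobile).
breaks = {10: 1.0, 30: 3.0, 60: -2.0}
series = [piecewise_value(t, breaks) for t in range(100)]
print(change_points(series))  # recovers exactly the breakpoints: [10, 30, 60]
```

The detector here is deliberately trivial (exact slope comparison on noise-free data); with real, noisy series you’d need statistical change-point methods. But it captures the asymmetry in the argument: the long smooth runs contribute nothing to identifying `breaks`, while the three kinks pin it down completely.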