And to say it explicitly: I think this is part of why I have trouble betting with Paul. I have a lot of question marks on the questions that the Gwern voice is asking above, regarding them as potentially important breaks from trend that just get dumped into my generalized inbox one day. If a gradualist thinks that there ought to be a smooth graph of perplexity with respect to computing power spent, in the future, that’s something I don’t care very much about except insofar as it relates in any known way whatsoever to questions like those the Gwern voice is asking. What does it even mean to be a gradualist about any of the important questions like those of the Gwern-voice, when they don’t relate in known ways to the trend lines that are smooth? Isn’t this sort of a shell game, where our surface capabilities do weird jumpy things, we can point to some trend lines that were nonetheless smooth, and then the shells are swapped and we’re told to expect gradualist AGI surface stuff? This is part of what I’m referring to when I say that, even as the world ends, maybe there’ll be a bunch of smooth trendlines underneath it that somebody could look back on and point out. (Which you could in fact have used to predict all the key jumpy surface thresholds, if you’d watched it all happen on a few other planets and had any idea of where jumpy surface events were located on the smooth trendlines—but we haven’t watched it happen on other planets, so the trends don’t tell us much we want to know.)
This seems totally bogus to me.

It feels to me like you mostly don’t have views about the actual impact of AI as measured by the jobs it does or the dollars people pay for them, or about performance on any benchmarks that we are currently measuring, while I’m saying I’m totally happy to use gradualist metrics to predict any of those things. If you want to ask “what does it mean to be a gradualist,” I can just give you predictions on them.

To you this seems reasonable, because, e.g., dollars and benchmarks are not the right way to measure the kinds of impacts we care about. That’s fine; you can propose something other than dollars or measurable benchmarks. If you can’t propose anything, I’m skeptical.

My basic guess is that you probably can’t effectively predict dollars or benchmarks or anything else quantitative. If you actually agreed with me on all that stuff, then I might suspect that you were equivocating: using a gradualist-like view to make predictions about everything near-term, then switching to a more bizarre perspective when talking about the future. But fortunately I think the situation is more straightforward, because you are basically being honest when you say that you don’t understand how the gradualist perspective makes predictions.
I kind of want to see you fight this out with Gwern (not least for social reasons, so that people would perhaps see that it wasn’t just me, if it wasn’t just me).
But it seems to me that the very obvious GPT-5 continuation of Gwern would say, “Gradualists can predict meaningless benchmarks, but they can’t predict the jumpy surface phenomena we see in real life.” We want to know when humans land on the moon, not whether their brain sizes continued on a smooth trend extrapolated over the last million years.
I think there’s a very real sense in which, yes, what we’re interested in are milestones, and often milestones that aren’t easy to define even after the fact. GPT-2 was shocking, and then GPT-3 carried that shock further in the same direction, but how do you talk about that with somebody who thinks in terms of perplexity loss being smooth? I can handwave statements like “GPT-3 started to be useful without retraining, via just prompt engineering,” but qualitative statements like those aren’t good for betting, and it’s much, much harder to come up with the right milestone like that in advance, instead of looking back in your rearview mirror afterwards.
But you say—I think?—that you were less shocked by this sort of thing than I was. So, I mean, can you prophesy to us about milestones and headlines in the next five years? I think I kept thinking this during our dialogue, but never saying it, because it seemed like such an unfair demand to make! But it’s also part of the whole point that AGI and superintelligence and the world ending are all qualitative milestones like that. Whereas such trend points as Moravec was readily able to forecast correctly—like 10 teraops of plausibly-human-equivalent computation being available in a $10 million supercomputer around 2010—are really entirely unanchored from AGI, at least relative to our current knowledge about AGI. (They would be anchored if we’d seen other planets go through this, but we haven’t.)
But it seems to me that the very obvious GPT-5 continuation of Gwern would say, “Gradualists can predict meaningless benchmarks, but they can’t predict the jumpy surface phenomena we see in real life.”
Don’t you think you’re making a falsifiable prediction here?
Name something that you consider part of the “jumpy surface phenomena” that will show up substantially before the world ends (and that you think Paul doesn’t expect). Predict a discontinuity. Operationalize everything, and then propose the bet.

(I’m currently slightly hopeful about the theorem-proving thread, elsewhere and upthread.)
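The “operationalize everything” step can itself be sketched concretely. Below is a toy, assumption-laden operationalization of “discontinuity”: call a new data point discontinuous if it overshoots a linear extrapolation of the trailing trend by some multiple of the typical step size. The function name, window, and tolerance are all illustrative choices of mine, not anyone’s actual proposed bet terms.

```python
# Toy operationalization of a "discontinuity" in a metric time series:
# the newest point counts as discontinuous if it exceeds the trailing
# linear extrapolation by more than `tolerance` typical step sizes.
# Window and tolerance values are arbitrary illustrative assumptions.

def is_discontinuity(series, window=4, tolerance=3.0):
    """Return True if the last step in `series` jumps far above the
    trailing trend fit to the previous `window` points."""
    if len(series) < window + 1:
        return False  # not enough history to establish a trend
    recent = series[-(window + 1):-1]
    # Average step size over the trailing window (a crude linear fit).
    steps = [b - a for a, b in zip(recent, recent[1:])]
    avg_step = sum(steps) / len(steps)
    predicted = recent[-1] + avg_step
    typical = max(abs(avg_step), 1e-9)  # avoid division-by-zero degeneracy
    return (series[-1] - predicted) > tolerance * typical

smooth = [10, 12, 14, 16, 18, 20]  # steady trend: continues as extrapolated
jumpy = [10, 12, 14, 16, 18, 40]   # sudden jump far above the trend
print(is_discontinuity(smooth))  # False
print(is_discontinuity(jumpy))   # True
```

A real bet would of course have to fix the metric, the measurement cadence, and the tolerance in advance; the point is only that “predict a discontinuity” can be made mechanical once those choices are written down.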
What does it even mean to be a gradualist about any of the important questions like those of the Gwern-voice, when they don’t relate in known ways to the trend lines that are smooth?
Perplexity is one general “intrinsic” measure of language models, but there are many task-specific measures too. Studying the relationship between perplexity and task-specific measures is an important part of the research process. We shouldn’t speak as if people do not actively try to uncover these relationships.
I would generally be surprised if there were many highly non-linear relationships between perplexity and something like Winograd accuracy, human evaluation, or whatever other concrete measure you can come up with, such that the underlying behavior of the surface phenomenon is best described as a discontinuity with the past even when the latent perplexity changed smoothly. I admit that some measures exhibit these qualities (such as, potentially, the ability to do arithmetic), but I expect them to be quite a bit harder to find than the reverse.
Furthermore, if this is the crux — i.e., that surface-level qualitative phenomena will experience discontinuities even while latent variables do not — then I do not understand why it’s hard to come up with bet conditions.

Can’t you just pick a surface-level phenomenon that’s easy to measure and strongly interpretable in a qualitative sense — like Sensibleness and Specificity Average from the paper on Google’s chatbot — and then predict discontinuities in that metric?
(I should note that the paper shows a highly linear relationship between perplexity and Sensibleness and Specificity Average. Just look at the first plot in the PDF.)
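One way to see why this crux is hard to settle in the abstract: whether a surface metric looks jumpy depends entirely on the link function between the latent variable and the metric. The toy model below (all numbers invented for illustration) runs the same smoothly improving “perplexity” through a gentle logistic link and a near-threshold link; the first surface metric moves gradually, the second looks like a jump.

```python
import math

# Toy model: a latent "perplexity" improves smoothly and linearly, and a
# surface "task success" metric is some function of it. Whether the surface
# metric looks gradual or discontinuous depends only on that link function.
# All constants here are invented purely for illustration.

perplexity = [20 - 0.5 * t for t in range(30)]  # smooth linear improvement

def gentle_link(p):
    # Shallow logistic: surface metric tracks the latent variable smoothly.
    return 1.0 / (1.0 + math.exp(0.5 * (p - 12.0)))

def sharp_link(p):
    # Steep logistic: almost nothing until the latent variable crosses a
    # critical value, then almost everything — a near-threshold effect.
    return 1.0 / (1.0 + math.exp(8.0 * (p - 12.0)))

def max_step(xs):
    # Largest single-step change: a rough "jumpiness" measure.
    return max(abs(b - a) for a, b in zip(xs, xs[1:]))

gentle = [gentle_link(p) for p in perplexity]
sharp = [sharp_link(p) for p in perplexity]
print(max_step(gentle))  # small per-step change: looks gradual
print(max_step(sharp))   # large per-step change: looks like a jump
```

This is the whole disagreement in miniature: both curves come from the same smooth trendline, so observing the latent trend alone does not tell you which kind of surface behavior to expect — that takes empirical work on the link function, of the sort the perplexity-vs-Sensibleness plot represents.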