The outside view, insofar as that is a well-defined thing...
It’s not really a well-defined thing, which is why the standard on this site is to taboo those words and just explain what your lines of evidence are, or the motivation for any special priors if you have them.
If AGI were arriving in 2030, the outside view says interest rates would be very high (I’m not particularly knowledgeable about this and might have the details wrong, but see the analysis here; I believe the situation is still similar), and less confidently I think the S&P’s value would probably be measured in lightcone percentage points (?).
So, your claim is that interest rates would be very high if AGI were imminent, and they’re not, so it’s not. The last time someone said this, if the people arguing in the comment section had simply made a bet on interest rates changing, they would have made a lot of money! Ditto for buying up AI-related stocks or call options on those stocks.
I think you’re just overestimating the ability of the market to generalize to out-of-distribution events. Prices are set by a market’s participants, and the institutions with the ability to move prices are mostly not thinking about AGI timelines at present. It wouldn’t matter if AGI were arriving in five or ten or twenty years; Bridgewater would be doing basically the same things, so their inaction doesn’t provide much evidence. These forecasts also inherently build in a lot of assumptions about the value of money (or of titles to partial ownership of companies controlled by Sam Altmans) in a post-AGI scenario. Those premises are heavily disputed, to say the least, which makes interpreting current market prices hard.
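For concreteness, the mechanism the original rates claim leans on is the consumption Euler equation: if market participants genuinely expected explosive growth, the equilibrium real rate should rise roughly as r ≈ ρ + η·g. A minimal sketch, where every parameter value is an illustrative assumption rather than anything from the linked analysis:

```python
# Illustrative Ramsey-rule calculation: r ≈ rho + eta * g, i.e. the
# equilibrium real rate rises with expected consumption growth.
# All parameter values below are assumptions chosen for illustration.

def ramsey_rate(rho: float, eta: float, g: float) -> float:
    """Real rate given time preference rho, curvature of utility eta,
    and expected consumption growth g."""
    return rho + eta * g

scenarios = {
    "business as usual": 0.02,  # ~2%/yr growth expectations
    "AGI by 2030":       0.30,  # hypothetical explosive growth
}

for label, g in scenarios.items():
    r = ramsey_rate(rho=0.01, eta=1.5, g=g)
    print(f"{label}: expected growth {g:.0%} -> real rate ~{r:.0%}")
```

The point of the toy numbers is only that expected growth an order of magnitude above normal would imply rates far outside anything currently observed, which is the claim under dispute.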
As far as I am concerned, AGI should be able to do any intellectual task that a human can do. I think that inventing important new ideas tends to take at least a month, but possibly as long as a PhD thesis. So it seems a reasonable interpretation that we might see human-level AI around the mid-2030s to 2040, which happens to be about my personal median.
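An estimate of this shape presumably comes from extrapolating a roughly constant doubling time in the length of tasks AI can complete. A minimal sketch of that arithmetic; the starting horizon, doubling times, and task lengths are illustrative assumptions, not METR’s exact published figures:

```python
# Back-of-the-envelope extrapolation of a constant doubling time in
# AI "task horizon" (how long a task the model can complete unaided).
# Starting horizon, doubling times, and task lengths are illustrative
# assumptions, not exact figures from METR.
import math

def years_until(target_hours: float, current_hours: float = 1.0,
                doubling_months: float = 7.0) -> float:
    """Years until the horizon reaches target_hours at a constant doubling time."""
    doublings = math.log2(target_hours / current_hours)
    return doublings * doubling_months / 12.0

month_of_work = 160        # ~1 work-month, in hours
phd_thesis = 4 * 2000      # ~4 work-years, in hours

for dm in (7.0, 10.0, 14.0):
    print(f"doubling every {dm:.0f} months: "
          f"month-long tasks ~{years_until(month_of_work, doubling_months=dm):.0f} years out, "
          f"thesis-length tasks ~{years_until(phd_thesis, doubling_months=dm):.0f} years out")
```

The spread in the outputs is the point: modest changes in the assumed doubling time and target task length move the answer by roughly a decade, which is how both “around 2030” and “mid-2030s to 2040” can fall out of the same style of extrapolation.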
The issue is, ML research itself is composed of many tasks that do take less than a month for humans to execute. For example, on this model, sometime before “idea generation”, you’re going to have a model that can do most high-context software engineering tasks. The research department at any of the big AI labs would be able to do more stuff if it had such a model. So while current AI is not accelerating machine learning research that much, as it gets better, the trend line from the METR paper is going to curl upward.
You could say that the “inventing important new ideas” part is going to be such a heavy bottleneck that this speedup won’t amount to much. But I think that’s mostly wrong, and that if you asked ML researchers at OpenAI, a drop-in remote worker that could “only” be directed to do things that otherwise took 12 hours would speed up their work by a lot.
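To illustrate the “curl upward” claim above: if AI assistance multiplies the pace of research, and that multiplier itself grows with capability, a constant doubling time turns into an accelerating one. A toy model, with the speedup function and constants chosen purely for illustration:

```python
# Toy model of the "trend line curls upward" claim: the task horizon
# keeps doubling, but research speed is multiplied by an AI-assistance
# factor that grows with the current horizon. The speedup function and
# constants are assumptions purely for illustration.
import math

def simulate(months: int, base_doubling: float = 7.0,
             feedback: float = 0.0, h0: float = 1.0) -> list[float]:
    """Task horizon (hours) month by month."""
    h, out = h0, [h0]
    for _ in range(months):
        speedup = 1.0 + feedback * math.log2(1.0 + h)  # AI-assisted research
        h *= 2 ** (speedup / base_doubling)            # one month of progress
        out.append(h)
    return out

plain = simulate(60)                   # straight exponential
boosted = simulate(60, feedback=0.15)  # research partly automated

for m in (0, 24, 48, 60):
    print(f"month {m:2d}: no feedback ~{plain[m]:8.1f} h, with feedback ~{boosted[m]:10.1f} h")
```

The two series start out close together and diverge later, which is the sense in which the trend line would curl upward even if current assistance is only worth a little.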
But the deeper problem is that the argument is ultimately, subtly circular. Current AI research does look a lot like rapidly iterating and trying random engineering improvements. If you already believe this will lead to AGI, then certainly AI coding assistants which can rapidly iterate would expedite the process. However, I do not believe that blind iteration on the current paradigm leads to AGI (at least not anytime soon), so I see no reason to accept this argument.
It’s actually not circular at all. “Current AI research” has taken us from machines that can’t talk to machines that can talk, write computer programs, give advice, etc. in about five years. That’s the empirical evidence that you can make research progress doing “random” stuff. In the absence of further evidence, people are just expecting the thing that has happened over the last five years to continue. You can reject that claim, but at this point I think the burden of proof is on the people that do.
Thanks for engaging in detail with my post. It seems there were a few failures of communication that are worth clarifying.
It’s (the outside view) not really a well-defined thing, which is why the standard on this site is to taboo those words and just explain what your lines of evidence are, or the motivation for any special priors if you have them.
I thought it was clear that I’m not confident in any outside view prediction of AGI timelines, from various statements/phrasings here (including the sentence you’re quoting, which questions the well-definedness of “the outside view”) and from the fact that the central focus of the post is disputing an outside view argument. Apparently I did not communicate this clearly, because many commenters have objected to my vague references to possible outside views as if I were treating them as solid evidence, when in fact they aren’t really a load-bearing part of my argument here. Possibly the problem is that I don’t think anyone has a good inside view either! But in fact I am just “radically” uncertain about AGI timelines; my uncertainty is ~in the exponent.
Still, I find your response a little ironic, since this site is practically the only place I’ve seen the term “outside view” used. It does seem to have become less common over the last year or two, since this post, which you’re probably referring to.
So, your claim is that interest rates would be very high if AGI were imminent, and they’re not, so it’s not. The last time someone said this, if the people arguing in the comment section had simply made a bet on interest rates changing, they would have made a lot of money! Ditto for buying up AI-related stocks or call options on those stocks.
Interesting, but a non sequitur. That is, either you believe that interest rates will predictably increase, in which case there’s free money on the table and you should just say so, or you don’t, in which case this anecdote doesn’t seem relevant (similarly, I made money buying NVDA around that time, but I don’t think that proves anything).
You could say that the “inventing important new ideas” part is going to be such a heavy bottleneck that this speedup won’t amount to much. But I think that’s mostly wrong, and that if you asked ML researchers at OpenAI, a drop-in remote worker that could “only” be directed to do things that otherwise took 12 hours would speed up their work by a lot.
Perhaps, but shouldn’t LLMs already be speeding up AI progress? And if so, shouldn’t that already be reflected in METR’s plot? Are you predicting superexponential growth here?
It’s actually not circular at all. “Current AI research” has taken us from machines that can’t talk to machines that can talk, write computer programs, give advice, etc. in about five years. That’s the empirical evidence that you can make research progress doing “random” stuff. In the absence of further evidence, people are just expecting the thing that has happened over the last five years to continue. You can reject that claim, but at this point I think the burden of proof is on the people that do.
It seems to me that progress has been slowing for the last couple of years. If this trend continues, progress will stall.
Interesting, but a non sequitur. That is, either you believe that interest rates will predictably increase, in which case there’s free money on the table and you should just say so, or you don’t, in which case this anecdote doesn’t seem relevant (similarly, I made money buying NVDA around that time, but I don’t think that proves anything).
I am saying so! The market is definitely not pricing in AGI; doesn’t matter if it comes in 2028, or 2035, or 2040. Though interest rates are a pretty bad way to arb this; I would just buy call options on the Nasdaq.
Perhaps, but shouldn’t LLMs already be speeding up AI progress? And if so, shouldn’t that already be reflected in METR’s plot?
I am saying so! The market is definitely not pricing in AGI; doesn’t matter if it comes in 2028, or 2035, or 2040. Though interest rates are a pretty bad way to arb this; I would just buy call options on the Nasdaq.
Hmm well at least you’re consistent.
They’re not that useful yet.
Certainly I can see why you expect them to become more useful, but I still feel like there’s some circularity here. Do you expect the current paradigm to continue advancing because LLM agents are somewhat useful now (as you said, for things like coding)? Unless that effect is currently negligible (and will undergo a sharp transition at some point), it seems we should expect it to already be reflected in the exponential growth rate claimed by METR.
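To make the disagreement concrete: in a toy feedback model like the one sketched earlier, a real but modest current speedup only nudges the doubling time you would fit to the last couple of years of data, so “somewhat useful now” and “not yet obviously superexponential in METR’s plot” are not automatically in tension. The constants below are illustrative assumptions only, not anyone’s actual model:

```python
# Toy check on "shouldn't it already show up in METR's fitted rate?":
# generate a mildly feedback-boosted horizon series, then fit a constant
# doubling time to the early stretch of it. Constants are illustrative.
import math

def boosted_series(months: int, base_doubling: float = 7.0,
                   feedback: float = 0.05, h0: float = 1.0) -> list[float]:
    h, out = h0, [h0]
    for _ in range(months):
        speedup = 1.0 + feedback * math.log2(1.0 + h)
        h *= 2 ** (speedup / base_doubling)
        out.append(h)
    return out

def fitted_doubling_months(hs: list[float]) -> float:
    """Least-squares slope of log2(horizon) vs. month, inverted."""
    xs = range(len(hs))
    ys = [math.log2(h) for h in hs]
    n = len(hs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return 1.0 / slope

observed = boosted_series(24)  # roughly the stretch of data we have so far
print(f"fitted doubling time over 2 years: "
      f"{fitted_doubling_months(observed):.1f} months (baseline without feedback: 7.0)")
```

Whether the real-world effect is currently this small, or large enough that it should already be visible in the fitted rate, is exactly the question left open in this exchange.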