Thanks for engaging in detail with my post. It seems there were a few failures of communication that are worth clarifying.
It (the outside view) isn’t really a well-defined thing, which is why the standard on this site is to taboo those words and just explain what your lines of evidence are, or the motivation for any special priors if you have them.
I thought it was clear that I’m not confident in any outside view prediction of AGI timelines, from various statements/phrasings here (including the sentence you’re quoting, which questions the well-definedness of “the outside view”) and the fact that the central focus of the post is disputing an outside view argument. Apparently I did not communicate this clearly, because many commenters have objected to my vague references to possible outside views as if I were treating them as solid evidence, when in fact they aren’t really a load-bearing part of my argument here. Possibly the problem is that I don’t think anyone has a good inside view either! But in fact I am just “radically” uncertain about AGI timelines—my uncertainty is ~in the exponent (i.e., over the order of magnitude, not the particular year).
Still, I find your response a little ironic, since this site is practically the only place I’ve seen the term “outside view” used. It does seem to have become less common over the last year or two, since this post, which is probably the one you’re referring to.
So, your claim is that interest rates would be very high if AGI were imminent, and they’re not, so it’s not. The last time someone said this, if the people arguing in the comment section had simply made a bet on interest rates changing, they would have made a lot of money! Ditto for buying up AI-related stocks or call options on those stocks.
Interesting, but a non sequitur. That is, either you believe that interest rates will predictably increase, in which case there’s free money on the table and you should just say so, or you don’t, in which case this anecdote doesn’t seem relevant (similarly, I made money buying NVDA around that time, but I don’t think that proves anything).
You could say that the “inventing important new ideas” part is going to be such a heavy bottleneck that this speedup won’t amount to much. But I think that’s mostly wrong, and that if you asked ML researchers at OpenAI, they’d say a drop-in remote worker that could “only” be directed to do things that otherwise took 12 hours would speed up their work by a lot.
Perhaps, but shouldn’t LLMs already be speeding up AI progress? And if so, shouldn’t that already be reflected in METR’s plot? Are you predicting superexponential growth here?
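To make that last question concrete, here’s a toy sketch of the distinction I have in mind (invented numbers and an invented `shrink_per_month` knob, not METR’s data or fitting procedure): a roughly constant LLM speedup just gets absorbed into the measured exponential rate, whereas a speedup that keeps growing shrinks the doubling time over time, which is what superexponential growth would look like on their plot.

```python
# Toy illustration only: made-up numbers, not METR's data or methodology.
# Compare a task-horizon trend with a fixed doubling time against one where an
# assumed, steadily growing LLM speedup keeps shrinking the doubling time.

def horizon_fixed_doubling(months, doubling_time=7.0, start_hours=1.0):
    """Horizon (hours) when the doubling time stays constant: plain exponential."""
    return start_hours * 2 ** (months / doubling_time)

def horizon_shrinking_doubling(months, base_doubling=7.0, start_hours=1.0,
                               shrink_per_month=0.05):
    """Horizon when the effective doubling time shrinks a little each month
    (a stand-in for LLM assistance that keeps getting more useful):
    superexponential growth."""
    horizon = start_hours
    doubling = base_doubling
    for _ in range(months):
        horizon *= 2 ** (1.0 / doubling)
        doubling = max(doubling - shrink_per_month, 1.0)
    return horizon

if __name__ == "__main__":
    for months in (12, 24, 36, 48):
        print(f"{months:>2} mo: fixed {horizon_fixed_doubling(months):7.1f} h, "
              f"shrinking {horizon_shrinking_doubling(months):7.1f} h")
```

If whatever speedup LLMs provide has been growing smoothly all along, it’s already folded into the slope METR measured, and extrapolating that slope already accounts for it; only the second case adds acceleration on top of the existing trend.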
It’s actually not circular at all. “Current AI research” has taken us from machines that can’t talk to machines that can talk, write computer programs, give advice, etc. in about five years. That’s the empirical evidence that you can make research progress doing “random” stuff. In the absence of further evidence, people are just expecting the thing that has happened over the last five years to continue. You can reject that claim, but at this point I think the burden of proof is on the people that do.
It seems to me that progress has been slowing for the last couple of years. If this trend continues, progress will stall.
Interesting, but a non sequitur. That is, either you believe that interest rates will predictably increase, in which case there’s free money on the table and you should just say so, or you don’t, in which case this anecdote doesn’t seem relevant (similarly, I made money buying NVDA around that time, but I don’t think that proves anything).
I am saying so! The market is definitely not pricing in AGI; doesn’t matter if it comes in 2028, or 2035, or 2040. Though interest rates are a pretty bad way to arb this; I would just buy call options on the Nasdaq.
Perhaps, but shouldn’t LLMs already be speeding up AI progress? And if so, shouldn’t that already be reflected in METR’s plot?
I am saying so! The market is definitely not pricing in AGI; doesn’t matter if it comes in 2028, or 2035, or 2040. Though interest rates are a pretty bad way to arb this; I would just buy call options on the Nasdaq.
Hmm well at least you’re consistent.
They’re not that useful yet.
Certainly I can see why you expect them to become more useful, but I still feel like there’s some circularity here. Do you expect the current paradigm to continue advancing because LLM agents are somewhat useful now (as you said, for things like coding)? Unless that effect is currently negligible (and will undergo a sharp transition at some point), it seems we should expect it to already be reflected in the exponential growth rate claimed by METR.
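To put that last point in symbols (my notation and framing, not anything METR claims): write the trend as

$$\frac{d}{dt}\ln H(t) = g \cdot s(t)$$

where H(t) is the task horizon, g is the underlying rate of progress without LLM assistance, and s(t) is the multiplier from whatever LLM help researchers currently get. If s(t) has been rising smoothly over the window METR fit, that rise is already part of the measured exponential rate, so extrapolating the trend already “prices in” LLMs getting more useful at their historical pace. Acceleration beyond the trend requires s(t) to start growing faster than it has so far, which is the sharp transition I mentioned.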