They might be a few years away from becoming one-hour AGIs.
Might be a few years away? Or might not. This seems like rather a key point. Personally, I expect one-hour AGI before 2027, and I think it’s pretty likely to arrive in 2025. I think that’s around the level of capability which will enable recursive self-improvement, given sufficient scaffolding and compute. So will we see meaningful recursive self-improvement progress before or after that point? After it, the slope of the line of improvement might undergo a rather sharp transition.
This seems like a key point to address. If you have arguments for why this can’t or won’t happen in this timeframe, it would be good to include them.
The ML researchers saying things like “AGI is 15 years away” have either not carefully thought it through, or are lying to themselves or to the survey.
But hey, put your money where your mouth is, eh? I made a prediction market here.
Ah yes, the good ol’ “If someone disagrees with me, they must be stupid or lying.”
Rude of me to jump to that oh-so-self-flattering conclusion, yes. And certainly my saying so should not be taken as any sort of evidence in support of my view.
Instead you should judge my view by:
My willingness to make an explicit, concrete prediction and put money on it. Admittedly a trivial amount of money in this case, but I’ve made much larger bets on the topic in the past.
The fact that my views are self-consistent and have remained fairly stable in response to the evidence about AI progress gathered over the past two years. Stable views aren’t necessarily a good thing; they could mean I’m failing to update! In this case, though, the evidence of the past two years confirms the predictions I publicly stated before then, so the stability of my prediction counts in its favor. Contrast this with the dramatic change in the predictions I was criticizing, which came about because recent evidence strongly contradicted their previous views.
Note that my prediction of “AGI < 10 years” is consistent with my prediction that we should expect lots of far-reaching changes, and novel dangers which will need careful measurement and regulation. Contrast this with the views of many of the ML experts who say “AGI > 15 years away” while also saying things like “the changes will be relatively small, on the same order as the printing press and the Internet” and “the risks aren’t very high; everything will probably be fine, and even if things go wrong, we can easily iteratively fix the problems with only minor negative consequences.”

I would argue that even if one held the view that AGI is more than 15 years away (but less than 50), it would still not make sense to be so unworried about the potential consequences. I claim that that set of views is insufficiently thought through, and that anyone forced to specify all the detailed pieces of those predictions in a lengthy written debate would find them self-contradictory. I believe my set of predictions would prove relatively much more self-consistent.