So in other words, everything has to go “right” for AGI by 2027?
Maybe it will work. I’m only arguing against high confidence in short timelines. Anything could happen.
I’m responding to the point that LLM agents have been a thing for years, and that therefore some level of maturity should be expected from them. I think this isn’t quite right, as the current method is new, the older methods didn’t work out, and it’s too early to tell that the new method won’t work out.
So I’m discussing when it’ll be time to tell that it won’t work out either (unless it does), at which point it’ll be possible to have some sense as to why. Which is not yet, probably in 2026, and certainly by 2027. I’m not really arguing about the probability that it does work out.
You are consistent about this kind of reasoning, but a lot of others seem to expect everything to happen really fast (before 2030) while also dismissing anything that doesn’t work as not having been tried because there haven’t been enough years for research.
Numbers? What does “high confidence” mean here? IIRC from our non-text discussions, Tsvi considers anything above 1% by end-of-year 2030 to be “high confidence in short timelines” of the sort he would have something to say about. (But not the level of strong disagreement he’s expressing in our written dialogue until something like 5-10% iirc.) What numbers would you “only argue against”?
Say what now?? Did I write that somewhere? That would be a typo or possibly a thinko. My own repeatedly stated probabilities would be around 1% or .5%! E.g. in https://www.lesswrong.com/posts/sTDfraZab47KiRMmT/views-on-when-agi-comes-and-on-strategy-to-reduce
I recall it as part of our (unrecorded) conversation, but I could be misremembering. Given your reaction I think I was probably misremembering. Sorry for the error!
So, to be clear, what is the probability someone else could state such that you would have “something to say about it” (i.e., some kind of argument against it)? Your own probability being 0.5%–1% isn’t inconsistent with what I said (if you’d have something to say about any probability above your own), but where would you actually put that cutoff? 5%? 10%?
If someone says 10% by 2030, we disagree, but it would be hard to find something to talk about purely on that basis. (Of course, they could have other, more specific beliefs that I could argue with.) If they say, IDK, 25% or something (not that there’s a sharp cutoff by any means; why would there be?), then I start feeling like we ought to be able to find a disagreement just by investigating what makes us give such different probabilities. I also start feeling like they have strategically bad probabilities, in the sense that the beliefs of theirs I consider incorrect would push them toward actions I consider mistaken. (On second thought, probably even 10% has strategically bad implications, assuming it implies something like 20% by 2035.)
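To spell out the “assuming it implies something like 20% by 2035” step under one simplifying assumption (mine, not anything stated above): if the probability were spread at a roughly constant per-year rate $h$, with about five years to 2030 and ten to 2035, then

$$1-(1-h)^{5}=10\%\ \Rightarrow\ h\approx 2.1\%,\qquad 1-(1-h)^{10}\approx 19\%,$$

so 10% by 2030 does come out to roughly 20% by 2035 on that reading.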
High confidence means over 75%, at minimum.
Short timelines means, say, less than 10 years, though at this point I think the very short timeline picture means “around 2030”.
I don’t know how anyone could reasonably refer to 1% confidence as high.
Well, overconfident/underconfident is always only meaningful relative to some baseline, so if you strongly think (say) 0.001% is the right level of confidence, then 1% is high relative to that.
The various numbers I’ve stated during this debate are 60%, 50%, and 30%, so none of them are high by your meaning. Does that really mean you aren’t arguing against my positions? (This was not my previous impression.)
I think 60% by 2030 is too high, and I am arguing against numbers like that. There’s some ambiguity about drawing the lines because high numbers on very short timelines are of course strictly less plausible than high numbers on merely short timelines, so there isn’t necessarily one best number to compare.
On reflection, I don’t like the phrase “high confidence” for anything <50%, and preferably not even for anything <75%. Something like “high credence” seems more appropriate—though one can certainly have higher or lower confidence, it is not clear communication to say you are highly confident of something you believe at little better than even odds. Even if you were buying a lottery ticket with the special knowledge that you had picked one of three possible winning numbers, you still wouldn’t say you were highly confident that ticket would win—even though you would no longer be confident of losing!
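Spelling out the arithmetic in that lottery example (assuming the three numbers are equally likely, which the example seems to intend): the special knowledge gives

$$P(\text{win})=\tfrac{1}{3}\approx 33\%,\qquad P(\text{lose})=\tfrac{2}{3}\approx 67\%,$$

so by the “over 75%” standard above you are neither highly confident of winning nor, any longer, confident of losing.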
Anyway, I haven’t necessarily been consistent / explicit about this throughout the conversation.