I’m not sure what the exact process was; tbh my guess is that they were estimated mostly independently but likely sanity-checked with the survey to some extent in mind. They seem to line up about right, given the 2022 vs. 2023 difference, the intuition that people underadjust for labor->progress, and the fact that we give weight to our own views as well rather than just the survey, since we’ve thought more about this than survey takers (while of course they have the advantage of currently doing frontier AI research).
I’d make less of an adjustment if we asked people to give their reasoning (including the adjustment from labor speedup to overall progress speedup) and only included people whose answers demonstrated a good understanding of this consideration and a not-obviously-unreasonable adjustment level.
Alright, my first-pass guess would have been that algorithmic progress is the kind of thing that eats a much smaller penalty than most forms of org-level progress: not none, but not a 75% reduction, and likely not more than a 50% reduction. But you guys have the track record.
Cool, added a nudge to the last question.
I think it’s not worth getting into this much more, as I don’t feel strongly about the exact 1.05x, but I feel compelled to note a few quick things:
I’m not sure exactly what you mean by “eating a smaller penalty,” but I think the labor->progress penalty is quite large
The right way to think about 1.05x vs. 1.2x is not as a 75% reduction, but as asking: for what exponent n does 1.05^n = 1.2? (quick sketch after this list)
Remember the 2022 vs. 2023 difference, though my guess is that the responses wouldn’t have been that sensitive to this
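To make the exponent framing concrete, here is a minimal sketch (plain Python; the only inputs are the 1.05x and 1.2x multipliers already under discussion):

```python
import math

# How many 1.05x speedups compound to a 1.2x speedup?
# Solve 1.05**n == 1.2 for n.
n = math.log(1.2) / math.log(1.05)

print(f"n = {n:.2f}")               # n = 3.74
print(f"1.05**n = {1.05 ** n:.3f}") # sanity check: 1.200
```

On this framing, 1.2x is roughly 3.7 compoundings of 1.05x, and that exponent is the relevant distance between the two estimates.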
Also, one more thing I’d like to pre-register: survey takers who aren’t frontier AI researchers will generally report higher speedups, because their work is generally less compute-loaded and sometimes more greenfield or less expertise-demanding. But we should give by far the most weight to the frontier AI researchers.
(feel free not to go any deeper; I appreciate you having engaged as much as you have!)
Yup, was just saying my first-pass guess would have been a smaller labor->progress penalty. I do defer here fairly thoroughly.
> The right way to think about 1.05x vs. 1.2x is not as a 75% reduction, but as asking: for what exponent n does 1.05^n = 1.2?
Hmm, that seems true if you expect people not to have applied a correction already, but less true if they’re already making a correction and you’re estimating how wrong their correction is? (toy sketch at the end)
And yup, agree with that preregistration on all counts.
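To illustrate the correction-already-applied question above, a toy sketch: the 1.5x raw labor speedup is a made-up number for illustration only, not anything from the survey.

```python
import math

# Toy numbers, purely illustrative; not from the survey.
raw_labor_speedup = 1.5  # hypothetical uncorrected labor speedup
their_answer = 1.2       # respondent's reported progress speedup
our_estimate = 1.05      # the adjusted progress speedup

# If progress = labor**c, each answer implies a correction exponent c.
their_c = math.log(their_answer) / math.log(raw_labor_speedup)
our_c = math.log(our_estimate) / math.log(raw_labor_speedup)

print(f"their implied exponent: {their_c:.2f}")  # ~0.45
print(f"our implied exponent:   {our_c:.2f}")    # ~0.12
```

If respondents have already corrected, discounting their 1.2x to 1.05x amounts to a claim about how far off their implied exponent is, rather than a fresh correction applied to a raw number.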