Alright, my first-pass guess would have been that algorithmic progress seems like the kind of thing that eats a much smaller penalty than most forms of org-level progress: not none, but not a 75% reduction, and likely not more than a 50% reduction. But you guys have the track record.
I think it’s not worth getting into this much more, as I don’t feel strongly about the exact 1.05x, but I feel compelled to note a few quick things:
I’m not sure exactly what you mean by eating a smaller penalty, but I think the labor->progress penalty is quite large.
The right way to think about 1.05x vs. 1.2x is not as a 75% reduction, but instead as the exponent n for which 1.05^n = 1.2.
Remember the 2022 vs. 2023 difference, though my guess is that the responses wouldn’t have been that sensitive to this.
Also, one more thing I’d like to pre-register: people who fill out the survey who aren’t frontier AI researchers will generally report higher speedups, because their work is typically less compute-loaded, sometimes more greenfield, or requires less expertise; but we should give by far the most weight to frontier AI researchers.
(Feel free not to go any deeper; I appreciate you engaging as much as you have!)
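To make the exponent framing concrete, here is a quick sketch (assuming the intended comparison is finding the n for which 1.05^n = 1.2, i.e. how many 1.05x speedups compound to a 1.2x speedup):

```python
import math

# Find n such that 1.05 ** n = 1.2, i.e. how many compounded 1.05x
# speedups it takes to reach a 1.2x speedup.
n = math.log(1.2) / math.log(1.05)
print(f"1.05^n = 1.2 for n = {n:.2f}")  # n is roughly 3.7
```

So in compounding terms the two estimates are nearly a factor of four apart, which is a larger disagreement than the "75% reduction" framing suggests.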
Yup, I was just saying my first-pass guess would have been a smaller labor->progress penalty. I do defer here fairly thoroughly.
The right way to think about 1.05x vs. 1.2x is not as a 75% reduction, but instead as the exponent n for which 1.05^n = 1.2.
Hmm, that seems true if you’re expecting people not to have applied a correction already, but less true if they are already making a correction and you’re estimating how wrong their correction is?
And yup, agree with that preregistration on all counts.
Cool, added a nudge to the last question.