Doesn’t seem that wild to me? When we scale up compute we’re also scaling up the size of frontier training runs; maybe past a certain point running smaller experiments just isn’t useful (e.g. you can’t learn anything from experiments using one-billionth of the compute of a frontier training run); and maybe past a certain point you just can’t design better experiments. (Though I agree with you that this is all unlikely to bite before a 10x speedup.)
Yes, but also: if the computers are getting serially faster, then you have to be able to respond to the results and implement the next experiment faster as you add more compute. E.g., imagine a (physically implausible) computer which can run any experiment using less than 1e100 FLOP in less than a nanosecond. To maximally utilize this, you’d want to be able to respond to results and implement the next experiment in less than a nanosecond as well. This is of course an unhinged hypothetical, and in this world you’d also be able to immediately create superintelligence by e.g. simulating a huge evolutionary process.
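To spell out the serial-speed point with a rough sketch (my own illustration, not from the comment above, and the numbers are made up): if each turn of the research loop is experiment time plus a fixed latency to interpret results and set up the next run, then as experiment time shrinks the iteration rate saturates at one over that response latency, no matter how much compute you throw at the experiment itself.

```python
# Sketch: iteration throughput when the research loop is
# "run experiment, then respond and implement the next one".
# All numbers below are illustrative assumptions, not measurements.

def iterations_per_second(experiment_seconds: float, response_seconds: float) -> float:
    """One loop = experiment time + fixed time to interpret results and
    implement the next experiment; throughput is the reciprocal."""
    return 1.0 / (experiment_seconds + response_seconds)

RESPONSE = 3600.0  # assume one hour to digest results and launch the next run

for experiment in (86400.0, 3600.0, 1.0, 1e-9):  # a day, an hour, a second, a nanosecond
    rate = iterations_per_second(experiment, RESPONSE)
    print(f"experiment {experiment:>10.3g} s -> {rate * 86400:8.2f} iterations/day")

# As experiment time goes to zero (the 1e100-FLOP-in-a-nanosecond computer),
# throughput caps out at 1 / RESPONSE (about 24 iterations/day here), so the
# extra serial speed is wasted unless responses also get faster.
```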