If I understand correctly, the paper suggests this limit—the maximum improvement in learning efficiency a recursively self-improving superintelligence could gain, beyond the efficiency of human brains—is “4-10 OOMs,” which it describes as equivalent to 4-10 “years of AI progress, at the rate of progress seen in recent years.”
Perhaps I’m missing something, and again I’m sorry if so, but after reading the paper carefully twice I don’t see any arguments that justify this choice of range. Why do you expect the limit of learning efficiency for a recursively self-improving superintelligence to be 4-10 recent-progress-years above humans?
Oh, there are lots of arguments feeding into that range. Look at this part of the paper. There’s a long list of bullet points describing different ways that superintelligences could be more efficient than humans. Each of the estimates has a range of X-Y OOMs. Then:
Overall, the additional learning efficiency gains from these sources suggest that effective limits are 4 − 12 OOMs above the human brain. The high end seems extremely high, and we think there’s some risk of double counting some of the gains here in the different buckets, so we will bring down our high end to 10 OOMs.
Here: 4 is supposed to be the sum of all the lower numbers guessed at above (“X” in “X-Y”), and 12 the sum of all the upper numbers (“Y” in “X-Y”). Since OOMs are exponents, the combined multiplicative gain corresponds to adding the per-bucket OOM counts.
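The combination step can be sketched in a few lines. Note the per-bucket ranges below are purely illustrative placeholders, not the paper’s actual bucket estimates; the point is only that the low ends and high ends are totaled separately, and that OOM counts add when the underlying gains multiply:

```python
# Hypothetical per-bucket efficiency-gain estimates, each a (low, high)
# range in orders of magnitude (OOMs). Illustrative values only --
# chosen so the totals land on the paper's 4 and 12.
buckets = [(1, 3), (1, 3), (1, 3), (1, 3)]

# OOMs are exponents: multiplying the underlying gains means
# adding the OOM counts across buckets.
low_total = sum(low for low, _ in buckets)
high_total = sum(high for _, high in buckets)

print(low_total, high_total)  # 4 12

# The combined multiplicative gain therefore spans 10**4 to 10**12,
# before the paper's downward adjustment of the high end to 10 OOMs.
```

The paper then trims the 12 down to 10, citing possible double counting across buckets.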