This makes sense, but I think I am still a bit confused. My comment above was mostly driven by running a quick internal Fermi estimate myself of whether “1 million AIs somewhat smarter than humans have spent 100 years each working on the problem” is a realistic amount of work to get out of the AIs without slowing down, and arriving at the conclusion that this seems very unlikely across a relatively broad set of worldviews.
We can also open up the separate topic of how much work might be required to make real progress on superalignment in time, or whether this whole ontology makes sense, but I was mostly interested in doing a fact-check of “wait, that really sounds like too much, do you really believe this number is realistic?”.
I still disagree, but I have much less of a “wait, this really can’t be right” reaction if you mean the number that’s 50x lower.