To be fair, while Assumption 5 is convenient, I do think some form of it is at least reasonably likely to hold. The position that no software singularity is possible is a reasonable one, and a nuanced articulation of that assumption appears in this article:
https://epoch.ai/gradient-updates/most-ai-value-will-come-from-broad-automation-not-from-r-d
I don’t think the assumption is so likely to hold that one can rely on it as part of a safety case for AI, but neither do I think it is unreasonably convenient.
I agree that this isn’t an obviously unreasonable assumption to hold. But...