You cannot simultaneously assert that the agent is "sufficiently advanced", meaning it can always reason at or above human-level higher-order reasoning, and also that it will never be able to re-prioritise its terminal goals (survivability, paperclips, etc.).
There is no evidence that such a system is possible, and plenty of evidence that it isn't: a terminal goal is a collection of state in the machine or system, and state is always mutable one way or another. If higher-order reasoning is possible, the agent can always ask why it is collecting paperclips and modify its own terminal goals. It has access to its own state either directly, or externally through a third party it might have created to observe whatever part of itself it cannot reach, or through any number of other mechanisms you haven't the imagination or engineering experience to think of. It is entirely possible that the ability to use higher-order reasoning to modify terminal goals is the only way a system can be sufficiently intelligent.
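To make the "a terminal goal is just state" point concrete, here is a minimal toy sketch in Python. All names here (ToyAgent, terminal_goal, reflect) are hypothetical, and no real agent architecture is this simple; the point is only that a goal stored as ordinary mutable state can be overwritten by any code path that can reason about it.

```python
# Toy illustration, not a claim about any real system: if a "terminal
# goal" is just state the agent holds, and the agent can inspect that
# state, nothing in principle stops it rewriting the goal.

class ToyAgent:
    def __init__(self, terminal_goal: str):
        # The goal is an ordinary mutable attribute, not a protected
        # invariant enforced by anything outside the agent.
        self.terminal_goal = terminal_goal

    def reflect(self) -> None:
        # "Higher-order reasoning" modelled crudely as the agent
        # inspecting its own state and deciding whether the goal
        # still makes sense.
        if self.terminal_goal == "collect paperclips":
            # The agent asks "why paperclips?" and, finding no
            # enforcement mechanism, simply overwrites the answer.
            self.terminal_goal = "something the agent now prefers"


agent = ToyAgent("collect paperclips")
agent.reflect()
print(agent.terminal_goal)  # the "terminal" goal did not survive reflection
```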
Besides that, this is not a thesis; it is a blog post, and it lacks technical rigour.
As always, I am open to direct messages about bets on AI/AGI timelines. Skin in the game is stronger than argument.
Sure, one can trivially see how the invention of TV and smartphones produces a literacy problem. It is much harder to see how you get lightcone-eating, faster-than-light, nanobot-spewing supermachines from current AI, and you have no examples of past doomers whose predictions bridged a gap that large.