You haven’t addressed my point that a smart ASI will be good at summarizing and simplifying.
The story itself is entirely about how this doesn’t matter. I also very directly addressed this in more detail in my last reply.
The point I am presenting is more fundamental than the various topics you keep bringing in, which are not part of my story or my replies. My story is about something that happens somewhere along the path of all outcomes, good or bad. I am unsure why you are trying so hard to dismiss it without addressing my replies, why you are so sure it only happens after success, and so sure there is no practical reason to think about it when trying to understand what is happening and what might happen.
I am putting these here mainly for myself, since it is over 2.5 years since I wrote this post.
Jon Kleinberg and others demonstrate some of what I described, with a chess AI handing off to a human, in this 2024 paper:
https://arxiv.org/pdf/2405.05066
Kleinberg gave a talk on it in 2025 (which is how I found the work):
https://www.youtube.com/live/siu_r8j5-sg?si=30li7KEj0f06D6BI&t=2618