There is no specific page for something that I’ve seen elsewhere. It seems to be deliberate AI-enabled lock-in of the existing AI leadership positions, as a substitute for de jure political authority:
https://www.notesfromthecircus.com/p/garbage-in-garbage-out
It is being presented as a meta-mistake of alignment: assuming that human preferences are static and discoverable, and providing no mechanism for groups of relatively ordinary humans to seek revision of the preferences that the AI is implementing.
The nasty answer would be ‘all of it, back to the original training run, including all of the other descendants. Now start over.’
The answer which actually relates to future consequences would require understanding (for example) multiplexing, the ability to reduce two AIs to some canonical form, and the ability to compare two canonical forms. No, we’re not there yet.
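To make the "canonical form" idea concrete, here is a deliberately tiny toy sketch, not anything close to real model comparison: a one-hidden-layer network computes the same function if its hidden units are permuted, so one trivial canonicalization is to sort the units into a fixed order before comparing. The `canonicalize` helper and the example nets are invented for illustration; real models have vastly more symmetries and approximate equivalences than this handles.

```python
# Toy sketch only: canonicalize a one-hidden-layer net by sorting its
# hidden units, so two nets that differ only by a unit permutation
# compare equal. This does NOT generalize to real AI systems.

def canonicalize(w_in, w_out):
    """Sort hidden units into a fixed order.

    w_in:  list of per-unit incoming weight vectors (rows).
    w_out: list of per-unit outgoing weights.
    Sorting the (incoming, outgoing) pairs removes permutation symmetry.
    """
    units = sorted(zip(w_in, w_out))
    return [u[0] for u in units], [u[1] for u in units]

def same_canonical_form(net_a, net_b):
    """Compare two nets after reducing both to canonical form."""
    return canonicalize(*net_a) == canonicalize(*net_b)

# net_b is net_a with its two hidden units swapped -- same function.
net_a = ([[1.0, 2.0], [3.0, 4.0]], [0.5, -0.5])
net_b = ([[3.0, 4.0], [1.0, 2.0]], [-0.5, 0.5])
print(same_canonical_form(net_a, net_b))  # True
```

Even this toy breaks down under scaling symmetries or near-duplicate units, which is part of why the comparison problem is hard.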
I wonder where “it’s the manufacturer’s responsibility to prove that it’s not substantially the same” would fit into our existing case law of responsibility.