It sounds like you are not claiming that superintelligence will have human-like scope insensitivity baked into its preferences?
I think it’s plausible ASI will have preferences which aren’t totally linear-returns-y and/or don’t just care about the final arrangement of matter. These preferences might be very inhuman. Perhaps you think it’s highly overdetermined that actually-advanced minds would only care about the ultimate arrangement of matter at cosmic scales in a linear-ish way, but I don’t think this is so obvious.