It sounds like you are now claiming that superintelligence will have human-like scope insensitivity baked into its preferences? Which seems like an absolutely bonkers thing to claim. “1 billionth of resources” does not at all seem like a natural way for “slight caring” to manifest in an actually-advanced mind; it seems like a thing which very arguably occurs in human minds but is particularly unlikely to generalize to superintelligence, precisely because the generalized version would kneecap many general capabilities quite badly.
It sounds like you are now claiming that superintelligence will have human-like scope insensitivity baked into its preferences?
I think it’s plausible ASI will have preferences which aren’t totally linear-returns-y and/or don’t just care about the final arrangement of matter. These preferences might be very inhuman. Perhaps you think it’s highly overdetermined that actually-advanced minds would only care about the ultimate arrangement of matter at cosmic scales in a linear-ish way, but I don’t think this is so obvious.
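To make the “linear-returns-y” distinction concrete, here is a minimal toy allocation sketch (my own illustration, not a model either comment commits to). Suppose the ASI divides a total resource stock $X$ between humans ($x_h$) and its other goals ($x_p$), with a small caring weight $\epsilon \approx 10^{-9}$, and maximizes

$$\max_{x_h + x_p = X} \;\; \epsilon\, u(x_h) + u(x_p).$$

With linear returns, $u(x) = x$, the objective equals $X - (1-\epsilon)\,x_h$, which is decreasing in $x_h$ for $\epsilon < 1$, so the optimum is $x_h^{*} = 0$: slight caring buys humans nothing. With concave returns, e.g. $u(x) = \log x$, the first-order condition $\epsilon / x_h = 1 / x_p$ gives

$$x_h^{*} = \frac{\epsilon}{1+\epsilon}\,X \approx \epsilon X,$$

i.e. roughly a billionth of resources. On this toy picture, the “1 billionth of resources” outcome only falls out once the preferences have diminishing returns somewhere, which seems close to what is being disputed above.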