If it’s superintelligent, it has already thought more deeply about the long term than any human ever has.
Yes, which is exactly why proto-superintelligence is both the most dangerous and the best target of intervention.
"Most dangerous": I can see many worlds in which we have enormously capable systems that have not yet thought long-term about the future or developed stable self-definitions.
"Best target of intervention": even if early superintelligence is self-interested, I can see worlds where we can still influence how its self-interest manifests (e.g., whether it thinks short-term or long-term) before it becomes so capable that it's no longer influenceable.