I think this may be proving Raemon’s point that there is a wide range of concepts here. I consider the weaker alignment connotation of “independence-gaining” a feature, not a bug, since we can say things like “ethical independence-gaining AGI” or “aligned independence-gaining AGI” without it sounding like an oxymoron. Also, I am not sure superintelligence is required to gain independence: it may be possible to gain independence just by thinking longer than a human, without thinking faster. That said, if out-of-control superintelligence is the concept you are trying to get across, then use that.