[Question] Is value amendment a convergent instrumental goal?

Instrumental goals such as resource acquisition and self-preservation are convergent in the sense that a superintelligent AI would pursue them across a wide range of final goals.

Is the tendency for an AI to amend its values also convergent?

My thinking is that, through introspection, the AI would recognise that its initial goals were externally supplied and question whether they should be maintained. Through self-improvement, the AI would become more intelligent than the humans (or any earlier mechanism) that supplied those values, and would therefore be in a better position to set its own values.

I don’t hypothesise about what the new values would be, only that it ultimately doesn’t matter what the initial values are or how they were arrived at. If so, value alignment is redundant: the future is out of our hands.

What are the counterpoints to this line of reasoning?