Meanwhile, I claim that any intelligence capable of understanding "I don't know what I don't know" can only seek power (i.e., alignment is impossible).
> the ability of an AGI to have arbitrary utility functions is orthogonal (pun intended) to what behaviors are likely to result from those utility functions.
As I understand it, you are saying that there are Goals on one axis and Behaviors on the other axis. I don't think the Orthogonality Thesis is about that.
Orthogonality Thesis
It basically says that intelligence and goals are independent: any level of intelligence can in principle be combined with any final goal.
(Images from *A caveat to the Orthogonality Thesis*.)