I find the ideas you discuss interesting, but they leave me with more questions. I agree that we are moving toward a more generic AI that we can use for all kinds of tasks.
I have trouble understanding the goal-completeness concept, and I'd reiterate @Razied's point. You mention "steers the future very slowly", so there is an implicit notion of "speed of steering". I don't find the Turing machine analogy helpful for inferring an analogous conclusion, because I don't know what that conclusion is supposed to be.
You're making a qualitative distinction between humans (goal-complete agents) and other animals (non-goal-complete agents), but I don't understand what you mean by that distinction. I find the idea of goal-completeness interesting to explore, but it seems quite fuzzy at this point.
Eliezer Yudkowsky once entered an empty Newcomb's box simply so he could get out when the box was opened.
or
When you one-box against Eliezer Yudkowsky on Newcomb's problem, you lose, because he escapes from the box with the money.