I don’t think section 4.1 defeats your wording of your Convergence Thesis.
Convergence: all human-designed superintelligences would have one of a small set of goals.
As worded, I read it as trivially true. The set of human-designed superintelligences is necessarily a tiny subset of the space of all superintelligences, and thus the set of goals held by human-designed superintelligences is a tiny subset of the space of all goals.
Much depends on your usage of 'small'. Small relative to what?
I think you should clarify notions of convergence and distinguish different convergence models.
The types of convergence models that are relevant to future predictions involve statements about likely future AIs, not the set of all possible AIs. There appears to be some degree of convergence in human ethics/morality/goals over history, which is probably better described as attractors in goal space. It seems highly likely that the goal landscape of future AGI systems will also have natural attractors: firstly because of intentionally anthropocentric AGI designs, and secondly because of market/evolutionary selection forces.