Hence my second paragraph: Goals are inherently dangerous things to give AIs, especially open-ended goals that would require ever-greater intelligence to resolve.
AIs that can't be described by attributing goals to them don't really seem very powerful (after all, intelligence is about steering the world in some direction; this is the only property that tells an AGI apart from a rock).
Evolution and capitalism are both non-goal-oriented yet extremely powerful optimization processes. Goals are only one form of motivator.