I don’t get what point you’re trying to make about the takeaway of my analogy by bringing up the halting problem. There might not even be something analogous to the halting problem in my analogy of goal-completeness, but so what?
I also don’t get why you’re bringing up the detail that “single correct output” is not 100% the same thing as “single goal-specification with variable degrees of success measured on a utility function”. It’s in the nature of analogies that details are different yet we’re still able to infer an analogous conclusion on some dimension.
Humans are goal-complete, or equivalently “humans are general intelligences”, in the sense that many of us in the smartest quartile can output plans with the expectation of a much-better-than-random score across a very broad range of utility functions over arbitrary domains.
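To put that semi-formally (my own gloss on the claim, not a standard definition): for a broad distribution $\mathcal{D}$ of utility functions $U$ over outcomes in some domain, a goal-complete agent $A$ satisfies

$$\mathbb{E}_{U \sim \mathcal{D}}\big[\,U(\text{outcome of } A \text{ when handed } U)\,\big] \;\gg\; \mathbb{E}_{U \sim \mathcal{D}}\big[\,U(\text{outcome of a random policy})\,\big],$$

i.e. handing the agent an arbitrary goal-specification reliably beats chance across the whole range of $U$, not just on the few goals it happened to be built or evolved for.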
I find the ideas you discuss interesting, but they leave me with more questions. I agree that we are moving toward a more generic AI that we can use for all kinds of tasks.
I have trouble understanding the goal-completeness concept, and I’d reiterate @Razied’s point: you mention “steers the future very slowly”, so there is an implicit notion of “speed of steering”. I also don’t find the Turing machine analogy helpful for inferring an analogous conclusion, because I don’t know what that conclusion is supposed to be.
You’re making a qualitative distinction between humans (goal-complete agents) and other animals (non-goal-complete agents). I don’t understand what you mean by that distinction. I find the idea of goal-completeness interesting to explore, but quite fuzzy at this point.
Unlike other animals, humans can represent essentially any goal over a large domain like the physical universe, and in a large fraction of cases they can then come up with useful actions that steer the universe toward that goal to an appreciable degree.
Some goals are more difficult than others, or require giving the human control over more resources than others, and measurements of optimization power are hard to define; still, this definition takes a step toward formalizing the claim that humans are more of a “general intelligence” than animals. Presumably you agree with this claim?
It seems the crux of our disagreement factors down to a disagreement about whether this Optimization Power post by Eliezer is pointing at a sufficiently coherent concept.
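For concreteness, here is a minimal sketch (in Python, with hypothetical names like `bits_of_optimization`) of the kind of measure I read that post as pointing at: rank all possible outcomes under the utility function, ask how small a fraction of them score at least as well as the outcome the agent actually achieved, and take the negative log of that fraction as the optimization power exerted, in bits.

```python
import math

def bits_of_optimization(achieved_utility, outcome_utilities):
    """Optimization power in bits: -log2 of the fraction of possible
    outcomes that score at least as well as the achieved outcome."""
    at_least_as_good = sum(1 for u in outcome_utilities if u >= achieved_utility)
    return -math.log2(at_least_as_good / len(outcome_utilities))

# Toy example: 1024 equally likely outcomes with utilities 0..1023.
outcomes = list(range(1024))
print(bits_of_optimization(1023, outcomes))  # 10.0 bits -- hit the single best outcome
print(bits_of_optimization(511, outcomes))   # ~1.0 bit  -- landed in roughly the top half
```

The distinction I’m drawing is then that humans can exert a non-trivial number of bits like this across a very wide range of utility functions, whereas other animals only do so for the narrow goals they evolved for.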