[Question] When trying to define general intelligence, is the ability to achieve goals the best metric?

This is a rather loosely thought-out position, but one I’ve held for a long time. For me, a system counts as an artificial intelligence when it starts asking why-type questions and then sets out to get the answers (how-type questions) on its own.

Put a bit differently, that is problem identification over problem solving. It seems to me that most AI researchers and AI alignment people use definitions that focus only on problem solving/goal attainment, without actually requiring the “intelligence” to have its own goals.

Is that take on the working definition incorrect?

If not, should that line of inquiry fit into someone’s investigations? If so, where in the priority ordering? If not, is that because people think it would be largely intractable, or perhaps just low-return?

I know at least one person on the forum has raised the question of why an AI would ever “want” anything. That position fits with the view that we don’t need to worry about my conception of what intelligence is. But I think John’s recent post about prediction markets and helping to identify good questions suggests we should consider it. The questions asked do seem to be very important to success, growth/innovation, and general problem-solving results. To me that points towards the ability to come up with insightful questions being a key part of intelligence (though I suspect a bit of bias on my part in that “conclusion”).
