I had a pretty long comment about Jürgen Habermas, but instead I’ll just say:
I’m not really sure the term means anything outside the assumptions and framework of Critical Theory, unless you’re talking about a totally different thing. And given those assumptions and that framework, you can’t possibly say instrumental rationality is the same thing as intelligence, since the whole coinage exists to distinguish it from communicative rationality. But the framework this community is operating under is so far removed from Critical Theory that I don’t even know how to talk about it here.
My guess is that not many people here recognize any other kind of rationality, and so your question just becomes: are rationality and intelligence the same thing?
“Rationality” seems to be most frequently used here to mean “epistemic rationality”, not “instrumental rationality”. It seems to be one of this community’s oddities. …and yes, the “critical theory” term.
Check here, for example, though:
http://www.vetta.org/definitions-of-intelligence/
The “AI researcher” definitions in particular seem to be much the same as the definition of instrumental rationality.
Semi-OT: The problem with the AI researchers’ definitions of intelligence is that they are written as if there can be some kind of perfect intelligence, yet they end up in contradictions like, “I’ve developed the maximally intelligent being, but it’s completely useless.”
(Mr. Vetta (Shane Legg) and Marcus Hutter’s AIXI, I’m looking in your general direction here.)
The idea of universal intelligence is not a bug, it is a feature. It is mainly due to Legg/Hutter that we have that concept in the first place—and it is a fine one.
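For reference, the formal version of that concept, as I recall it from Legg and Hutter’s “Universal Intelligence: A Definition of Machine Intelligence” (notation from memory, so treat it as a sketch rather than the canonical statement):

    \Upsilon(\pi) := \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

where E is the class of computable environments, K(\mu) is the Kolmogorov complexity of the environment \mu, and V_\mu^\pi is the expected total reward agent \pi achieves in \mu. Intelligence is reward-gathering ability averaged over all computable environments, weighted toward the simple ones.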
Not really. If you claim that a) intelligence is useful, and b) a maximally intelligent being that you have invented is useless … you made a mistake somewhere.
And their work is just the formalization of Solomonoff induction; the difficulty is in the derivation. People knew in advance that you can find the shortest theory that fits the data by fixing a language and then iterating up from the shortest expressible program until you find one that matches the data. It’s just that this isn’t computable, which, for now, means useless, and the exponential approximation isn’t much better.
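To make that concrete, here is a toy sketch of the brute-force search being described. The two-instruction “language” and the step cap are my own illustrative inventions; the step cap stands in for the halting problem, which is what makes the real thing uncomputable:

    from itertools import product

    def run(program: str, max_steps: int = 1000) -> str:
        # Toy interpreter: '0' and '1' append a bit to the output,
        # 'D' doubles the output so far. This particular language
        # always halts; a real universal machine would not, which is
        # why a step cap is needed at all.
        out = ""
        for step, op in enumerate(program):
            if step >= max_steps:
                break
            if op in "01":
                out += op
            elif op == "D":
                out += out
        return out

    def shortest_program(data: str, alphabet: str = "01D",
                         max_len: int = 10) -> str | None:
        # Enumerate programs in order of length, shortest first, and
        # return the first one whose output matches the data. The
        # search space grows as len(alphabet) ** length, hence the
        # exponential cost even in this toy setting.
        for length in range(1, max_len + 1):
            for prog in product(alphabet, repeat=length):
                candidate = "".join(prog)
                if run(candidate) == data:
                    return candidate
        return None  # nothing short enough was found

    print(shortest_program("01010101"))  # '01DD': 4 symbols for 8 bits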
Can you identify any working, useful system based on AIXI?
I don’t think you have a reference for b).
Solomonoff induction is concerned with sequence prediction—not decision theory. It is not a trivial extra step.
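For concreteness, the standard formulation of that prediction setup (modulo technicalities about monotone machines and semimeasures) is:

    M(x) := \sum_{p : U(p) = x*} 2^{-\ell(p)}
    M(b \mid x) := M(xb) / M(x)

where U(p) = x* means the universal machine, run on program p, outputs something beginning with x, and \ell(p) is the length of p in bits. This predicts the next symbol b of a sequence; nothing in it selects actions, which is the extra step in question.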
Okay, I don’t have a reference for them admitting that AIXI is useless. But they do acknowledge it’s uncomputable, and they don’t have working code that applies it to an actual problem any better than existing “not intelligent” methods do.
AIXI is also primarily concerned with sequence prediction and not decision theory.
“AIXI is a universal theory of sequential decision making akin to Solomonoff’s celebrated universal theory of induction. Solomonoff derived an optimal way of predicting future data, given previous observations, provided the data is sampled from a computable probability distribution. AIXI extends this approach to an optimal decision making agent embedded in an unknown environment.”
http://www.hutter1.net/ai/uaibook.htm
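For the record, the decision-making extension looks like this (the expectimax form of AIXI, reproduced from memory of the book’s notation, so treat it as a sketch):

    a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
           (r_k + \cdots + r_m) \sum_{q : U(q, a_1..a_m) = o_1 r_1..o_m r_m} 2^{-\ell(q)}

That is, the agent picks the action that maximizes expected future reward, with the expectation taken under the same 2^{-\ell(q)} program-length prior as Solomonoff induction, so it genuinely is a decision theory rather than just sequence prediction.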
Okay, you’re right, my apologies. The point about uncomputability and uselessness of the decision theory still stands.
Right—but they know that. AIXI is a self-confessed abstract model.
IMO, AIXI does have some marketing issues. For instance:
“The book also presents a preliminary computable AI theory. We construct an algorithm AIXItl, which is superior to any other time t and space l bounded agent.”
That seems to be an inaccurate description, to me. If I recall the book correctly, AIXItl’s own computation time is of the order t·2^l per cycle, so it is not itself a time t and space l bounded agent; the claim of superiority only holds with that enormous overhead tucked into the constants.