IME, GPT doesn’t come close to passing the Turing Test. Whenever I ask it about an inconsistency in what it said, it immediately stops making any sense.
Do you think there is a place for a Turing-like test that determines how close to human intelligence it is, even if it has not reached that level?
Probably, but I think figuring out exactly what you are measuring, or trying to determine, is a big part of the problem. GPT doesn’t think like humans do, so it’s unclear what it would even mean for it to be "close" to human intelligence. In some absolute sense, the "intelligence" space has as many axes as there are problems on which you can measure performance.
Correct. That is why the original Turing Test is a sufficient-but-not-necessary test: it is meant to identify an AI that is definitively at or above human level, not to measure how far below that level a system falls.