If the Turing test is restricted somehow, so that you're only supposed to have a normal conversation, it can be faked. But if you're allowed to ask anything at all, such as offering the AI its own source code and asking how it could be improved, then passing means you have a strong AI: one that dominates humans in every field. I don't know whether it's necessarily conscious, but it's definitely intelligent.
The best way to avoid fakery is to create more varied analogues of the Turing test—and to keep them secret.
Secrecy isn't necessary. You'd be better off with the opposite approach: keep the capabilities of machines public, and tailor your questions to them. If you know computers are good at Scrabble but bad at Diplomacy, you play a game of Diplomacy, not Scrabble.