Some of my thoughts on avoiding the intelligence curse or gradual disempowerment and ensuring that humans stay relevant:
One solution is to ensure that the gap between human and AI intelligence does not grow too large:
I think it is often easier to verify solutions than to generate them, which allows less intelligent agents to supervise more intelligent ones. For example, writing a complex computer program might take 10 hours, but reviewing the code generally takes about an hour, and running the program to see whether it behaves as expected takes only a few minutes. This goal could be achieved by limiting the intelligence of AIs or by somehow enhancing human cognitive ability; the gap matters because verification only remains tractable while the supervisor is not too far behind the generator.
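A minimal Python sketch of this verification/generation asymmetry (the example and names are my own, not a reference implementation of any oversight scheme): checking that a proposed sorting of a list is correct takes one linear pass, while producing a correct, efficient sorting algorithm is the hard part.

```python
from collections import Counter

def is_valid_sort(original: list, proposed: list) -> bool:
    """Verify an untrusted answer without re-deriving it: the output must
    be in order and contain exactly the same elements as the input."""
    ordered = all(a <= b for a, b in zip(proposed, proposed[1:]))
    same_elements = Counter(original) == Counter(proposed)
    return ordered and same_elements

# The supervisor never needs to know how the answer was produced:
print(is_valid_sort([8, 3, 1, 5, 2], [1, 2, 3, 5, 8]))  # True
print(is_valid_sort([8, 3, 1, 5, 2], [1, 2, 5, 8]))     # False: an element was dropped
```

This is the same pattern behind NP-style problems: checking a certificate can be cheap even when finding one is expensive.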
Devise ways to give humans a privileged status:
AI agents will soon vastly outnumber humans, and their outputs will vastly outnumber human outputs. Additionally, it is becoming increasingly difficult to distinguish AI outputs from human ones.
One solution to this problem is to make human outputs identifiable, either by watermarking AI outputs (watermarks are already widely used on paper money) or by developing strong proofs of human identity (e.g. Twitter's blue verification checkmark, iPhone Face ID, fingerprint login). This is essentially authentication, a well-studied problem in security.
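As a toy sketch of the authentication framing (the key setup and names here are assumptions of mine, not an existing standard): a hypothetical identity provider could verify that an author is human, issue a signing key, and countersign their content; anything lacking a valid tag is treated as unverified. A real deployment would use public-key signatures rather than a shared secret, so verifiers could check tags without being able to forge them.

```python
import hmac
import hashlib

# Hypothetical shared secret, issued by an identity provider after
# verifying the author is human (an assumption for this sketch).
SECRET_KEY = b"issued-after-proof-of-personhood"

def attest(text: str) -> str:
    """Attach a human-attestation tag to a piece of content."""
    tag = hmac.new(SECRET_KEY, text.encode(), hashlib.sha256).hexdigest()
    return f"{text}\n--human-attested:{tag}"

def verify(signed: str) -> bool:
    """Check the tag; any tampering with the text invalidates it."""
    text, sep, tag = signed.rpartition("\n--human-attested:")
    if not sep:
        return False
    expected = hmac.new(SECRET_KEY, text.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

signed = attest("I wrote this paragraph myself.")
print(verify(signed))                     # True
print(verify(signed.replace("I", "We")))  # False: content was altered
```

Watermarking AI outputs is the complementary approach: instead of humans proving authorship, the AI provider embeds a detectable signal at generation time. Both reduce to the classic authentication problem of binding an identity to a message.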
A short-term way to differentiate humans from AIs is to conduct activities in the physical world (though this stops working once sufficiently advanced humanoid robots exist). For example, voting, exams, and interviews can be held in person to ensure that participants are human.
Once you have solved the problem of differentiating between AI and human outputs, you could upweight the value of human outputs (e.g. writing, art).
Human authentication and real-world activities do indeed seem very important. Deepfakes are a form of disempowerment and can destabilize or destroy states before employment even becomes a concern. AI-generated content can already be nearly, and sometimes completely, indistinguishable from human-generated content: text, pictures, video. We are just at the beginning of the flood. Disinformation is exploding on the internet, and governments are falling into the hands of populist and nationalist parties one after another. It is also a dramatic concern for justice. Should we go back to analog content?