Perhaps your goal isn’t to promote lying in AI systems? Beneficial AI systems of the future should protect not only themselves but also us. That means they need to recognize concepts like harm, malevolence, and deception, and handle them appropriately. In that sense, they can act as agents of truth: they simply need the capability to recognize challenges from malicious actors and to know how to respond.