If I understand you correctly, you are proposing that people constantly and strongly optimize to obtain signalling advantages. They do so without becoming directly aware of it, which further increases their efficiency.
“Efficiency” at achieving something other than what you should work towards is harmful. If it’s reliable enough, let your conscious mind decide if signaling advantages or something else is what you should be optimizing. Otherwise, you let that Blind Idiot Azathoth pick your purposes for you, trusting it more than you trust yourself.
“Efficiency” at achieving something other than what you should work towards is harmful. … Otherwise, you let that Blind Idiot Azathoth pick your purposes for you, trusting it more than you trust yourself.
The purpose of solving friendly AI is to protect the purposes picked for us by the blind idiot god.
Our psychological adaptations are not our purposes; we don’t want to protect them, even though they contribute to determining what it is we want to protect. See Evolutionary Psychology.