Sure. But I think at least some conflicts of interest are very hard to conceal. At the very least, if someone finds this argument compelling, the other party can't prompt them to denounce this check on principle.

Most strategies that could help one avoid malicious advice stemming from hard-to-detect conflicts of interest seem to have a (to me) unacceptably high false positive rate. Not so much in the context of a scenario where you are dealing with a boxed AI, but more when one is interacting with very intelligent people in a business environment or in one's personal life. It seems to me that such strategies would carry high opportunity costs.