Lie to me?

I used to think that, given two equally capable individuals, the person with more true information can always do at least as well as the other, and hence that one can only gain from having true information. There is one implicit assumption that makes this line of reasoning not true in all cases: we are not perfectly rational agents. Our mind isn't stored in a vacuum, but in a Homo sapiens brain. Certain false beliefs benefit you by exploiting that primitive mental hardware, e.g., self-fulfilling prophecies.

Despite the benefits, adopting false beliefs is an irrational practice. If people never acquire the maps that correspond best to the territory, they won't have the most accurate cost-benefit analysis for adopting false beliefs. Maybe, in some cases, a false belief does make you better off. The problem is that your cost-benefit analysis will be wrong or sub-optimal unless you first adopt reason.

Also, it doesn't make sense to say that the rational decision could be to "have a false belief," because in order to make that decision you would have to compare that outcome against "having a true belief." But for a false belief to work, you must genuinely believe it; you cannot deceive yourself into believing the falsehood after learning the truth. It's like figuring out that taking a placebo would lead to the best outcome, yet knowing it's a placebo means it no longer does.

Clearly, it is not in your best interest to choose to believe a falsity. But what if someone else did the choosing? Can't someone whose advice you rationally trust decide whether to give you false information (e.g., a doctor deciding whether you receive a placebo)? They could perform the cost-benefit analysis without diluting the effects of the false belief. We say we only want to know the truth, yet in some cases we would prefer to be unknowingly lied to.

Which brings me to my question: do we program an AI to tell us only the truth, or to lie when it believes (with high certainty) that the lie will lead to a net benefit over our expected lifetime?

Added: Keep in mind that knowledge of the truth, even for a truth-seeker, is finite in value. The AI can believe that the benefit of a lie would outweigh a truth-seeker's cost of being lied to. So unless someone values the truth above everything else (which I highly doubt), would a truth-seeker ever choose to be told only the truth by the AI?
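
To make the trade-off concrete, here is a minimal sketch of the decision rule the question implies, written in Python with entirely hypothetical names and numbers: the AI lies only when it is highly confident the lie yields a better expected lifetime outcome than the truth, after charging the lie for whatever finite intrinsic value the listener places on knowing the truth.

```python
# Hypothetical sketch of the "lie only under high-certainty net benefit" policy.
# All names, numbers, and thresholds are illustrative assumptions, not a real API.
from dataclasses import dataclass

@dataclass
class Statement:
    benefit_if_believed: float     # expected lifetime benefit to the listener if believed
    probability_of_benefit: float  # the AI's credence that the benefit actually materializes

def choose_statement(truth: Statement, lie: Statement,
                     intrinsic_value_of_truth: float = 1.0,
                     certainty_threshold: float = 0.95) -> Statement:
    """Return whichever statement the proposed policy would have the AI tell."""
    expected_truth = (truth.probability_of_benefit * truth.benefit_if_believed
                      + intrinsic_value_of_truth)  # truth gets the listener's truth-value bonus
    expected_lie = lie.probability_of_benefit * lie.benefit_if_believed
    # Lie only with high certainty AND a strictly better expected outcome.
    if lie.probability_of_benefit >= certainty_threshold and expected_lie > expected_truth:
        return lie
    return truth

# Example: a placebo-style case where the comforting falsehood helps a lot.
truth = Statement(benefit_if_believed=0.0, probability_of_benefit=1.0)
lie = Statement(benefit_if_believed=5.0, probability_of_benefit=0.97)
print(choose_statement(truth, lie) is lie)  # True: the lie clears both bars
```

On this sketch, a truth-seeker's preference shows up only as the size of `intrinsic_value_of_truth`; as long as that value is finite, some sufficiently beneficial lie will clear the bar, which is exactly the point of the question above.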