I guess the thing to do would be to correct yourself right away: apologize to the person you lied to and tell them the truth. Of course that will be embarrassing, but if you let S1 get away with lying, it will get rewarded for doing it and it won’t stop.
Yes, I agree this is a good policy to try to pre-commit to. In the past ~five years I can think of three times when S1 lied. Once I “rolled it back” fairly quickly, but I think even then it was several minutes’ worth of sitting with my discomfort before I was able to make that heavy lift. This policy feels to me equivalent to saying “try harder”: it’s not actually clear to me what the mental moves involved in following it are, nor how I can provision the mental resources needed ahead of time in order to carry those moves out when they’re needed, unexpectedly.
More specifically, my model is something like:
S2 realizes that S1 just put us in a hole.
Minor fight/flight reaction kicks in. This leads to a dramatic reduction of global workspace and executive control.
Some part of S2 formulates the plan “admit to this lie and back us out of this hole at the earliest opportunity” and submits it to the global workspace.
Some other system (S2?? S1??? Something else???) submits the plan: “continue the conversation as normal, don’t worry about this, it’s unlikely it will ever come to light.”
Brain systems enter the thunderdome.