One thing that occurs to me reading this: in a context where high trust might be new, ask people not about their object-level boundaries but about their relationship to boundaries. E.g. 'Are you comfortable asserting stop or slow-down signals? Will you be able to ask for more or fewer check-in points if that seems good? Would you like extra reinforcement for asserting a boundary? Do you have any requests for what to do if you seem to be running out of capacity?'
This framing also helps clarify a common confusion: 'being good at boundaries' is not about having a consistent and legible set of them. A fixed, legible set would just enable adversarial optimization against it.