I was trying to find some references for this, but it's common-sense enough anyway.
From an active inference (or, more generally, Bayesian) perspective, you can view this process as finding a shared generative model to work from. So, to reiterate what you said: "yes and" is good for improv, as you say, but if there is a core disagreement between your models and you run inference from that starting point, you're going to end up confused. (Meta: pointing at sameness)
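To make that concrete, here's a toy sketch (my own illustration, with made-up numbers, not anything from the thread): two agents update on the same observation, but if their priors already encode a core disagreement, Bayes leaves them far apart, whereas a shared prior gives them a shared posterior to build on.

```python
# Toy illustration: Bayesian updating on a binary hypothesis H.
# The observation likelihoods and prior values below are assumptions
# chosen only to make the point visible.

def posterior(prior_h, p_obs_given_h, p_obs_given_not_h):
    """Bayes' rule for a binary hypothesis H given one shared observation."""
    num = prior_h * p_obs_given_h
    return num / (num + (1 - prior_h) * p_obs_given_not_h)

# Likelihood of the shared observation under H and under not-H (assumed).
p_obs_given_h, p_obs_given_not_h = 0.8, 0.3

shared_prior = 0.5                # a core of "sameness": both agents start here
divergent_priors = (0.95, 0.05)   # a core disagreement baked into the priors

print("shared prior 0.50 ->", round(posterior(shared_prior, p_obs_given_h, p_obs_given_not_h), 3))
for p in divergent_priors:
    print(f"prior {p:.2f}       ->", round(posterior(p, p_obs_given_h, p_obs_given_not_h), 3))
# With the shared prior both agents land on ~0.73; with the divergent
# priors the same observation leaves them at ~0.98 vs ~0.12 - still "confused"
# in the sense that the shared evidence doesn't resolve the disagreement.
```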
I really like the idea of generating a core of "sameness" as a consequence. By first finding the common ground between your models, you can then start to deal with the things you don't share; according to conflict resolution theory, this usually leads to better results than tackling the disagreement head-on. So the "no, because" only makes sense after a degree of sameness has been established (which you can also have beforehand). (Meta: difference introduction from the sameness frame)