So is this a fair summary?
Contemplative practitioners sometimes have great psyche-refactoring experiences, “insights”. But, when interpreting & integrating them, they fail to keep a strong enough epistemic distinction between their experience and the ultimate reality it arises from. And then they make crazy inferences about the nature of that ultimate reality.
IMO, coordination difficulties among sub-agents can’t be waved away so easily. The proposed solutions, side-channel trades and counterfactual coordination, are both limited.
I would frame the nature of their limits, loosely, like this. In real minds (or at least the human ones we are familiar with), the stuff we care about lives in a high-dimensional space. A mind could be said to be, roughly, a network spanning such a space. A trade between elements (~sub-agents) that are nearby in this space will not be too hard to do directly. But for long-distance trades, side-channel reward will need to flow through a series of intermediaries—this might involve several changes of local currencies (including traded favors or promises). Each local exchange needs to be worthwhile to its participants, and not overload the relationships that it’s piggybacking on.
These long-distance trades can sometimes be really difficult to set up, the same way it would have been hard for a random medieval villager in France to send $10 to a random villager in China.
The difficulty depends on things like the size / dimensionality of the space; how well-connected it is; and how much slack is available in the relevant places in the system (for the intermediate elements to wiggle around enough to make all the local trades possible). Note that the need for slack makes this a holistic constraint: if you just have one really important trade to make, then sure, you can probably make it happen, by using up lots of slack (locking a lot of intermediate elements into orientations optimized for that big trade). But you can’t do that for every possible trade. So these issues really show up when you have a lot of heterogeneous trades to make.
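To make the slack constraint concrete, here's a toy sketch (not a model of real minds; the sub-agent names, topology, and slack numbers are all made up for illustration): sub-agents as nodes in a graph, where each long-distance trade must find a route of intermediaries with slack remaining, and each hop uses up a unit of slack on the relationship it piggybacks on.

```python
from collections import deque

def find_route(graph, slack, src, dst):
    """BFS for a path from src to dst using only edges with slack remaining."""
    prev = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nbr in graph[node]:
            if nbr not in prev and slack[frozenset((node, nbr))] > 0:
                prev[nbr] = node
                queue.append(nbr)
    return None  # no route with enough slack exists

def execute_trade(graph, slack, src, dst):
    """Route a trade through intermediaries; each hop consumes one unit of slack."""
    path = find_route(graph, slack, src, dst)
    if path is None:
        return False
    for a, b in zip(path, path[1:]):
        slack[frozenset((a, b))] -= 1
    return True

# A tiny "mind": a chain of sub-agents A—B—C—D, each relationship with slack 2.
graph = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
slack = {frozenset(e): 2 for e in [("A", "B"), ("B", "C"), ("C", "D")]}

print(execute_trade(graph, slack, "A", "D"))  # True: first long-distance trade routes fine
print(execute_trade(graph, slack, "A", "D"))  # True: second one uses up the remaining slack
print(execute_trade(graph, slack, "A", "D"))  # False: no slack left on the intermediaries
```

The point the toy makes is the holistic one: any single A–D trade is easy, but every such trade locks intermediate relationships into serving it, so a heterogeneous portfolio of trades exhausts the system's slack.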
Counterfactual (“logical”) coordination has similar issues. If A and B want to counterfactually coordinate, but they’re far apart in this mind-space, then they can only communicate or understand one another in a limited way, via intermediaries (or via the small # of dimensions they do share). This just makes everything harder: harder to establish shared meaning, harder to agree on what’s fair, harder to find a joint solution that generalizes well instead of being brittle.
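A crude way to picture the "small # of shared dimensions" constraint (purely illustrative; the sub-agents and dimension names are made up): treat each sub-agent as caring about a set of dimensions, and measure what fraction of their combined concerns is even expressible in the dimensions both share.

```python
def shared_bandwidth(a_dims, b_dims):
    """Fraction of the two agents' combined concerns that lie in shared dimensions.

    This is just the Jaccard overlap of their dimension sets: a proxy for how
    much of any counterfactual agreement both sides can actually represent.
    """
    return len(a_dims & b_dims) / len(a_dims | b_dims)

# Two hypothetical sub-agents that only partially overlap in what they track.
planner = {"time", "effort", "status", "novelty"}
comfort = {"time", "effort", "safety", "rest"}
print(shared_bandwidth(planner, comfort))  # 2 shared of 6 total, ≈ 0.33
```

On this picture, any deal A and B strike counterfactually can only be specified in the shared dimensions; the rest has to be left implicit or delegated to intermediaries, which is exactly where the brittleness comes from.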
BTW, I’m not denying that intelligence (whatever that might mean) helps with all this, but I am denying that it’s a panacea.