I like this framing, but I think the examples are missing the bit that makes me most skeptical about the kind of acausal trades that people on this website like to discuss; namely, that they're acausal in "both directions." In the apples-for-charity example, I think some of the intuition rides on the assumption that, even if I can't initiate or verify the trade, the descendant of the apple civilization will in fact check that I have placed an apple. That's what lets me do normal, everyday counterfactual reasoning of the form "if I place an apple, then he will donate, and if I don't, he won't" (with some probability). So there's still some causality in there somewhere, in that the presence of the apple directly causes the donation. In the case of superintelligences in different universes or whatever, we don't even have this, so the metaphor is more like "I think that there's a descendant of a civilization who donates when I put an apple someplace, and I think that he thinks that I am likely to exist and to put the apple, so he'll donate."
The obvious objection is that, given only what I just wrote, I still get the donation even if I don't put the apple, so why should I bother? To get around this, it needs to be the case that what the apple guy thinks I'll do somehow depends on what I actually do, which opens the various other (controversial, unintuitive) cans of worms that people like to talk about here. So agreed, "one-directional" acausal trade is not so scary, but I'm still not sure about the bidirectional kind.
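To make the objection concrete, here's a toy expected-value calculation (all numbers invented for illustration). If his credence that I place the apple is fixed regardless of what I actually do, skipping the apple strictly dominates; the trade only goes through if my choice somehow moves his prediction:

```python
# Toy expected-utility comparison for the apple/donation trade.
# All numbers are invented for illustration.

DONATION_VALUE = 100.0  # how much I value the donation happening
APPLE_COST = 1.0        # what placing the apple costs me

def expected_utility(place_apple, p_donate_if_place, p_donate_if_not):
    p_donate = p_donate_if_place if place_apple else p_donate_if_not
    return p_donate * DONATION_VALUE - (APPLE_COST if place_apple else 0.0)

# Case 1: his credence that I place the apple is fixed at 0.7 no matter
# what I actually do. Not placing strictly dominates, as in the objection.
print(expected_utility(True, 0.7, 0.7))   # 69.0
print(expected_utility(False, 0.7, 0.7))  # 70.0

# Case 2: his prediction tracks my actual choice (the controversial step,
# e.g. he reasons accurately about agents like me). Now placing wins.
print(expected_utility(True, 0.9, 0.1))   # 89.0
print(expected_utility(False, 0.9, 0.1))  # 10.0
```

And "his prediction tracks my actual choice," with no causal channel between us, is exactly the step I find hard to swallow.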
Much has been said about superintelligences cooperating with each other by reasoning about, or proving statements about, each other's source code. But it seems likely that this "source code" will be a neural network rather than something amenable to formal methods, in which case it's not at all clear that the problem is computationally tractable. If this has been discussed before, can someone point me to the discussion?
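For concreteness, the legible end of the spectrum looks something like the toy below (in the spirit of the "Robust Cooperation in the Prisoner's Dilemma" / FairBot results, except that proof search is swapped out for the one check that's trivially decidable: exact source equality, i.e. a "CliqueBot"). Everything here is illustrative, not anyone's actual proposal:

```python
# A toy of cooperation-by-inspecting-source, with "proof" replaced by the
# trivially decidable check of exact source equality (a "CliqueBot").
# Purely illustrative; no real proposal works this crudely.

CLIQUEBOT_SRC = '''
def decide(opponent_src):
    # Cooperate iff the opponent is running exactly this program.
    return "C" if opponent_src == CLIQUEBOT_SRC else "D"
'''

def make_agent(src):
    # An agent is (decision function, its own source text).
    env = {"CLIQUEBOT_SRC": CLIQUEBOT_SRC}
    exec(src, env)
    return env["decide"], src

def play(agent_a, agent_b):
    decide_a, src_a = agent_a
    decide_b, src_b = agent_b
    return decide_a(src_b), decide_b(src_a)

cliquebot = make_agent(CLIQUEBOT_SRC)
defectbot = make_agent('def decide(opponent_src):\n    return "D"\n')

print(play(cliquebot, cliquebot))  # ('C', 'C'): mutual cooperation
print(play(cliquebot, defectbot))  # ('D', 'D'): cannot be exploited
```

The check here is cheap precisely because the "source" is a short, legible program. If `opponent_src` were instead gigabytes of floating-point weights, nothing analogous is obviously available, which is the tractability worry.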