I think a subtle point is that this is saying we merely have to assume predictive agreement of the distributions marginalized over the latent variables Λ_A/Λ_B; but once we assume that & the naturality conditions, then even as each agent receives more information about X and updates their distribution & latent variable Λ_i, the deterministic constraints between the latents will continue to hold.
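To spell that out (my own formalization, so the exact form may differ from the post's): writing $X = (X_1, \dots, X_n)$ and $X_{\bar i}$ for $X$ with $X_i$ dropped, the assumptions are

$$P_A(X) = P_B(X) \quad \text{(predictive agreement, marginalizing out } \Lambda_A, \Lambda_B)$$
$$P_A(X \mid \Lambda_A) = \textstyle\prod_i P_A(X_i \mid \Lambda_A) \quad \text{(mediation for } \Lambda_A)$$
$$H_{P_B}(\Lambda_B \mid X_{\bar i}) \approx 0 \;\;\text{for each } i \quad \text{(redundancy for } \Lambda_B)$$

and the conclusion, taking the joint over the latents to be the one induced by the shared X (i.e. $P(\Lambda_A, \Lambda_B, X) = P(X)\,P_A(\Lambda_A \mid X)\,P_B(\Lambda_B \mid X)$), is

$$H(\Lambda_B \mid \Lambda_A) \approx 0,$$

i.e. Λ_B is (approximately) a deterministic function of Λ_A, and conditioning everything on further information about X doesn't break this.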
Or if a human and an AI start out with predictive agreement over some future observables, & the AI's latent satisfies mediation while the human's latent satisfies redundancy, then we could send the AI out to update on information about those future observables, and humans can (in principle) estimate the redundant latent variable they care about from the AI's latent without observing the observables themselves. The remaining challenge is that humans often care about things that are not approximately deterministic w.r.t. observables from typical sensors.
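As a sanity check on the "constraints survive updating" claim, here's a minimal toy sketch (a made-up example, not from the post: the two observables are exact copies of each other, the AI's latent Λ_A is the shared value, the human's latent Λ_B is its parity):

```python
from collections import defaultdict
from math import log2

# Toy world (invented for illustration): X1 = X2 always, uniform over 4 values.
# Lambda_A = the shared value   -> mediates (X1, X2 are independent given it).
# Lambda_B = parity of the value -> redundant (computable from X1 alone or X2 alone).
P_X = {(s, s): 0.25 for s in range(4)}

def lam_A(x): return x[0]
def lam_B(x): return x[0] % 2

def H_B_given_A(p_x):
    """H(Lambda_B | Lambda_A), with the joint over latents induced by the shared X."""
    joint, marg_a = defaultdict(float), defaultdict(float)
    for x, p in p_x.items():
        joint[(lam_A(x), lam_B(x))] += p
        marg_a[lam_A(x)] += p
    return -sum(p * log2(p / marg_a[a]) for (a, b), p in joint.items() if p > 0)

print(H_B_given_A(P_X))  # 0.0: Lambda_B is a deterministic function of Lambda_A

# Both agents update on evidence about X (here: learning that X1 <= 1)...
z = sum(p for x, p in P_X.items() if x[0] <= 1)
posterior = {x: p / z for x, p in P_X.items() if x[0] <= 1}

# ...and the deterministic constraint between the latents still holds:
print(H_B_given_A(posterior))  # 0.0
```

In this toy the naturality conditions hold exactly by construction, so the determinism is preserved under any conditioning; the interesting question is how gracefully this degrades when agreement & naturality only hold approximately.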
Yes, though I’ll flag that we don’t have robustness to approximation on the agreement condition (though we do have other ways around that to some extent, e.g. using the Solomonoff version of natural latents), and those sorts of updates are exactly the kind of thing I’d expect to run into that robustness problem.