I second Ray’s claim that we spend loads of time on belief communication. Something like Aumann-style convergence to common models might be “theoretically” doable, but I think it’d require more than 100% of our time to get there. This is indeed a bit sad and worrying for human-human communication.
Is it newly sad and worrying, though?
By contrast, I find it reassuring when someone explicitly notes the goal, and the gap between here and that goal, because it means we have rediscovered the motivation for the community. 10 years deep, and still on track.
Suck it, value drift!