Suppose that, instead of a single timeline with probabilistic events, the coalition experiences the full tree of all possible futures, with everything translated so as to preserve its behavior. Then beliefs encode how much each member cares about each timeline, and bets trade influence (governance tokens) between timelines.
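A rough numerical sketch of the translation (a minimal illustration, assuming each member is a log-utility Kelly bettor staking all of their wealth across the complete partition of branches; the names, numbers, and branch labels are hypothetical, not from the original):

```python
# Two coalition members, three mutually exclusive branches of the future.
weights = {"alice": 0.6, "bob": 0.4}                # current wealth / influence shares
beliefs = {
    "alice": {"b1": 0.5, "b2": 0.3, "b3": 0.2},     # probabilistic-timeline view
    "bob":   {"b1": 0.1, "b2": 0.6, "b3": 0.3},
}
branches = ["b1", "b2", "b3"]

# Tree view: member i holds weights[i] * beliefs[i][b] governance tokens on branch b,
# i.e. their "beliefs" just say how their influence is spread across timelines.
tokens = {i: {b: weights[i] * beliefs[i][b] for b in branches} for i in weights}

# Timeline view: each log-utility member Kelly-bets, staking fraction p_i(b) of their
# wealth on branch b (proportional betting over a full partition is log-optimal).
# The coalition's aggregate stake on b equals the tokens sitting on that branch.
for b in branches:
    aggregate_stake = sum(weights[i] * beliefs[i][b] for i in weights)
    total_tokens = sum(tokens[i][b] for i in tokens)
    assert abs(aggregate_stake - total_tokens) < 1e-12
    print(f"{b}: coalition stakes {aggregate_stake:.2f} of its total wealth")

# Settling the bet once branch b1 is realized = renormalizing tokens within b1.
# The new influence shares are the Bayesian update of the wealth-weighted mixture,
# so the two descriptions prescribe identical behavior.
realized = "b1"
total = sum(tokens[i][realized] for i in tokens)
new_shares = {i: tokens[i][realized] / total for i in tokens}
print("post-b1 influence:", new_shares)  # alice gains influence; she credited b1 more
```

The check is only that the two bookkeeping schemes assign the same numbers: "stake this fraction of your wealth on each branch" and "these tokens already live on that branch" describe the same allocation, and settling a bet is just renormalizing within the realized branch.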
Can you justify Kelly “directly” in terms of Pareto-improvement trades, rather than “indirectly” through Pareto-optimality? I feel this gets at the distinction between the selfish and the altruistic view.