But even with universality, an (algorithmic) Jeffrey-Bolker preference remains dependent on the machines that define its two algorithmic priors: it assigns expected utility to an event as the ratio of the measures that two different priors on the same sample space assign to that event.
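Concretely, as a sketch of the ratio construction just described (the notation here is mine, not fixed by the setup): if $\nu$ is the prior supplying probabilities and $\mu$ is the second prior, interpretable as probability-weighted utility, with both defined as measures on the same sample space $\Omega$, then the expected utility assigned to an event $E \subseteq \Omega$ is

$$\mathbb{E}[U \mid E] \;=\; \frac{\mu(E)}{\nu(E)},$$

which varies with the machines defining $\mu$ and $\nu$ even when both machines are universal.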
This suggests that the choice of machines in algorithmic priors is meaningful data for the purposes of agent foundations. It also gives a rough explanation for how agents with different preferences can arrive at the same probabilities of events (after updating on a lot of data), and so come to agree on questions of fact, while still retaining different preferences and endorsing different decisions, and so continue to disagree on questions of normativity.