I think bringing in logical and indexical dignity may be burying the lede here.
I think the core idea here is something like:
If your moral theory assigns a utility that’s concave (strictly concave) in the number of existing worlds, you’d weakly prefer (strictly prefer) to take risks that are decorrelated across worlds.
(Most moral theories assign utilities that are concave in the number of actual worlds, and many strictly so.) The way in which risks may be decorrelated across worlds doesn’t have to be that some are logical and some are indexical.
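A toy numerical sketch of the concavity point, under the (illustrative, not from the original) assumption that utility depends only on the number of surviving worlds, with u(k) = √k as the strictly concave case: a fully correlated risk destroys all n worlds together with probability p, while a decorrelated risk destroys each world independently with probability p. Both have the same expected number of survivors, so a concave utility prefers the decorrelated version by Jensen’s inequality, and a linear utility is indifferent.

```python
import math

def expected_utility_correlated(n, p, u):
    # One shared risk: with probability p all n worlds are destroyed,
    # otherwise all n survive.
    return p * u(0) + (1 - p) * u(n)

def expected_utility_decorrelated(n, p, u):
    # Each world is independently destroyed with probability p,
    # so the number of survivors k ~ Binomial(n, 1 - p).
    return sum(
        math.comb(n, k) * (1 - p) ** k * p ** (n - k) * u(k)
        for k in range(n + 1)
    )

n, p = 10, 0.5
concave = math.sqrt        # strictly concave in the number of surviving worlds
linear = lambda k: k       # linear utility: only the expected count matters

# Strictly concave utility strictly prefers decorrelated risks:
print(expected_utility_decorrelated(n, p, concave)
      > expected_utility_correlated(n, p, concave))

# Linear utility is indifferent (both give n * (1 - p) expected survivors):
print(abs(expected_utility_decorrelated(n, p, linear)
          - expected_utility_correlated(n, p, linear)) < 1e-9)
```

The same comparison goes through for any concave u; the correlated lottery is a mean-preserving spread of the decorrelated one, which is exactly what concavity penalizes.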