I think the example is confounded by the fact that it’s measuring different assumptions about how agency is valued against each other, while also trying to measure utility.
Someone might consider impositions of safety differently than they’d consider impositions of risk.
Also, are all the actors involved making their decisions based on a world where there’s a good chance someone will swoop in and save them? Is there 1 guardian angel for every 2 people in this world? People make decisions in part based on expectations of intervening forces. And intervening forces, in turn, factor those very expectations into their decisions to intervene.
Ideally these sorts of thought experiments are supposed to refine some sort of consistent approach, but any such consistency changes the expectations of the actors involved, which changes their behaviour, which means the approach may need adjustment.
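The feedback loop described above can be sketched as a toy fixed-point iteration. Everything here is an illustrative assumption of mine, not part of the original thought experiment: I assume actors take more risk the more rescue they expect, and that rescuers grow less willing to intervene the more the risk looks deliberately taken in reliance on them.

```python
# Toy sketch of the expectation/intervention feedback loop.
# All functional forms and constants are assumptions for illustration only.

def actor_risk(p_rescue):
    # Assumed: risk taken rises linearly with the expected chance of rescue.
    return 0.1 + 0.6 * p_rescue

def rescue_policy(risk):
    # Assumed: rescuers intervene less when risk looks chosen in reliance on them.
    return max(0.0, 0.9 - 0.8 * risk)

p = 0.5  # initial expectation of being rescued
for _ in range(100):
    # Actors respond to expectations; the rescue policy responds to actors;
    # the resulting rescue rate becomes the next round's expectation.
    p = rescue_policy(actor_risk(p))

print(round(p, 3))  # → 0.554, the self-consistent expectation
```

With these made-up numbers the loop settles at a fixed point (because the composed map is a contraction), but the point of the comment stands: any "consistent approach" a rescuer commits to becomes an input to the actors' decisions, so the equilibrium, not the policy in isolation, is what the thought experiment would actually be probing.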