I understand that, which is why I concede that they may choose the million in one case and not in the other. But I think their decision may be based on other factors; namely, that they don’t actually believe they’d get the million with 99% probability. They’re imagining someone telling them, “I’ll give you a million if this RNG from 1 to 100 comes up anything but 100” (or something similar), and are not factoring out that distrust. My example with reversing the flow of money was also intended to correct for this.
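To make the nominal odds of the hypothetical offer concrete, here is a minimal sketch (assuming a uniform RNG over 1–100, as in the example) showing that the stated win rate really is 99% before any distrust is factored in:

```python
import random

def simulated_offer(trials=100_000, payout=1_000_000):
    """Simulate the offer: pay out unless the 1-100 RNG lands on exactly 100."""
    wins = sum(1 for _ in range(trials) if random.randint(1, 100) != 100)
    win_rate = wins / trials          # should hover around 0.99
    expected_value = win_rate * payout
    return win_rate, expected_value

win_rate, expected_value = simulated_offer()
```

Any gap between this nominal 99% and what people behave as if they believe is the distrust being discussed.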
Perhaps the heuristics you refer to are based on this? Has this notion of “trust” been tested for correlation with the losing-money vs. gaining-money distinction?