Is the potential astronomical waste in our universe too small to care about?

In the not too distant past, people thought that our universe might be capable of supporting an unlimited amount of computation. Today our best guess at the cosmology of our universe is that it stops being able to support any kind of life or deliberate computation after a finite amount of time, during which only a finite amount of computation can be done (on the order of something like 10^120 operations).

Consider two hypothetical people, Tom, a total utilitarian with a near zero discount rate, and Eve, an egoist with a relatively high discount rate, a few years ago when they thought there was .5 probability the universe could support doing at least 3^^^3 ops and .5 probability the universe could only support 10^120 ops. (These numbers are obviously made up for convenience and illustration.) It would have been mutually beneficial for these two people to make a deal: if it turns out that the universe can only support 10^120 ops, then Tom will give everything he owns to Eve, which happens to be $1 million, but if it turns out the universe can support 3^^^3 ops, then Eve will give $100,000 to Tom. (This may seem like a lopsided deal, but Tom is happy to take it since the potential utility of a universe that can do 3^^^3 ops is so great for him that he really wants any additional resources he can get in order to help increase the probability of a positive Singularity in that universe.)
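The arithmetic behind the deal can be made explicit. This is only an illustrative sketch: the utility numbers (in particular the 1e50 stand-in for the value Tom places on a 3^^^3-ops universe, which overflows any numeric type) are made up, just as the dollar figures in the post are.

```python
# Illustrative sketch of why the Tom/Eve bet is mutually beneficial.
# All utility assignments here are assumptions for illustration.

P_SMALL = 0.5  # universe supports only 10**120 ops
P_HUGE = 0.5   # universe supports 3^^^3 ops

# Stand-in for how much more a marginal dollar is worth to Tom in the
# huge universe (3^^^3 itself cannot be represented numerically).
HUGE_MULTIPLIER = 1e50

def tom_gain():
    # Tom pays $1M in the small universe, receives $100k in the huge one,
    # where each dollar is worth vastly more to him.
    return P_SMALL * (-1_000_000) + P_HUGE * (100_000 * HUGE_MULTIPLIER)

def eve_gain():
    # Eve (egoist, high discount rate) values a dollar the same in either
    # universe: she receives $1M in the small one, pays $100k in the huge one.
    return P_SMALL * 1_000_000 + P_HUGE * (-100_000)

# Both parties expect to come out ahead, so the deal gets made.
assert tom_gain() > 0 and eve_gain() > 0
```

Eve's expected dollar gain is $450,000, while Tom's expected loss in dollars is swamped by the enormous value he assigns to extra resources in the huge universe.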

You and I are not total utilitarians or egoists, but instead are people with moral uncertainty. Nick Bostrom and Toby Ord proposed the Parliamentary Model for dealing with moral uncertainty, which works as follows:

Suppose that you have a set of mutually exclusive moral theories, and that you assign each of these some probability. Now imagine that each of these theories gets to send some number of delegates to The Parliament. The number of delegates each theory gets to send is proportional to the probability of the theory. Then the delegates bargain with one another for support on various issues; and the Parliament reaches a decision by the delegates voting. What you should do is act according to the decisions of this imaginary Parliament.
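The mechanics of the model can be sketched as a short procedure: allocate seats proportional to credence, then decide by majority vote. This is a minimal sketch under my own assumptions (a 100-seat parliament, largest-remainder rounding, simple majority); Bostrom and Ord's proposal does not specify these details.

```python
def allocate_delegates(credences, total=100):
    """Seats proportional to credence; leftover seats go to the
    theories with the largest fractional remainders (an assumed
    rounding rule, not part of the original proposal)."""
    raw = {t: p * total for t, p in credences.items()}
    seats = {t: int(r) for t, r in raw.items()}
    leftover = total - sum(seats.values())
    for t in sorted(raw, key=lambda t: raw[t] - seats[t], reverse=True)[:leftover]:
        seats[t] += 1
    return seats

def parliament_vote(seats, votes):
    """votes maps each theory to its preferred option; the option
    backed by the most delegates wins."""
    tally = {}
    for theory, option in votes.items():
        tally[option] = tally.get(option, 0) + seats[theory]
    return max(tally, key=tally.get)

# Example: 60% credence in total utilitarianism, 40% in egoism.
seats = allocate_delegates({"total_util": 0.6, "egoism": 0.4})
decision = parliament_vote(seats, {"total_util": "donate", "egoism": "keep"})
```

Note that the interesting behavior in the post comes not from this bare voting step but from the bargaining among delegates that precedes it.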

It occurred to me recently that in such a Parliament, the delegates would make deals similar to the one between Tom and Eve above, where they would trade their votes/support in one kind of universe for votes/support in another kind of universe. If I had a Moral Parliament active back when I thought there was a good chance the universe could support unlimited computation, all the delegates that really care about astronomical waste would have traded away their votes in the kind of universe where we actually seem to live for votes in universes with a lot more potential astronomical waste. So today my Moral Parliament would be effectively controlled by delegates that care little about astronomical waste.
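Such a conditional vote trade has the same structure as the Tom/Eve bet. A sketch, with made-up delegate utilities (the 1e50 figure and the 0.5 discount on the huge universe are illustrative assumptions, not anything from the post):

```python
# Two delegates, each holding one vote in each possible universe.
# A cares overwhelmingly about astronomical waste; B discounts the
# far future and so values votes in the huge universe less.

P = {"small": 0.5, "huge": 0.5}
value = {
    "A": {"small": 1.0, "huge": 1e50},  # astronomical-waste delegate
    "B": {"small": 1.0, "huge": 0.5},   # high-discount delegate
}

def expected_value(delegate, votes):
    # Expected utility of a vote allocation across the two universes.
    return sum(P[u] * value[delegate][u] * votes[u] for u in P)

before = {"A": {"small": 1, "huge": 1}, "B": {"small": 1, "huge": 1}}
# Trade: A hands B its vote in the small universe in exchange for
# B's vote in the huge universe.
after = {"A": {"small": 0, "huge": 2}, "B": {"small": 2, "huge": 0}}

for d in ("A", "B"):
    assert expected_value(d, after[d]) > expected_value(d, before[d])
```

Both delegates expect to gain from the trade, which is why, conditional on the universe we actually observe, the astronomical-waste delegates end up with no votes.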

I actually still seem to care about astronomical waste (even if I pretend that I was certain that the universe could only do at most 10^120 operations). (Either my Moral Parliament wasn't active back then, or my delegates weren't smart enough to make the appropriate deals.) Should I nevertheless follow UDT-like reasoning and conclude that I should act as if they had made such deals, and therefore that I should stop caring about the relatively small amount of astronomical waste that could occur in our universe? If the answer to this question is "no", what about the future going forward, given that there is still uncertainty about cosmology and the nature of physical computation? Should the delegates to my Moral Parliament be making these kinds of deals from now on?