Why no uniform weightings for ensemble universes?

Every now and then I see a claim that if there were a uniform weighting of mathematical structures in a Tegmark-like ’verse—whatever that would mean, even if we ignore the decision-theoretic aspects, which really can’t be ignored, but whatever—that would imply we should expect to find ourselves as Boltzmann mind-computations, or in other words thingies with just enough consciousness to be conscious of nonsensical chaos for a brief instant before dissolving back into nothingness. We don’t seem to be experiencing nonsensical chaos; the argument therefore concludes that a uniform weighting is inadequate and an Occamian weighting over structures is necessary, leading to something like UDASSA, or eventually to giving up and sweeping the remaining confusion into a decision-theoretic framework like UDT. (Bringing the dreaded “anthropics” into it is probably a red herring as always; we can just talk directly about patterns and groups of structures, or correlated structures, given some weighting, and presume human minds are structures or groups of structures much like any other structures or groups of structures given that weighting.)

I’ve seen people who seem very certain of the Boltzmann-inducing properties of uniform weightings, for various reasons that I am skeptical of, and others who seem uncertain of this for reasons that sound at least superficially reasonable. Has anyone thought about this enough to give slightly more than an intuitive appeal? I wouldn’t be surprised if everyone has left such ‘probabilistic’ cosmological reasoning for the richer soils of decision-theoretically inspired speculation, and if everyone else never ventured into the realms of such madness in the first place.

(Bringing in something, anything, from the foundations of set theory, e.g. the set-theoretic multiverse, might be one way to start, but e.g. “most natural numbers look pretty random and we can use something like Goedel numbering for arbitrary mathematical structures” doesn’t seem to say much to me by itself, considering that all of those numbers have rich local context that is, in their region, very predictable and non-random, if you get my metaphor. Or to stretch the metaphor even further: even if 62534772 doesn’t “causally” follow 31256, they might still be correlated in the style of Dust Theory, and what meta-level tools are we going to use to talk about the randomness or “size” of those correlations, especially given that 294682462125 could refer to a mathematical structure of some underspecified “size” (e.g. a mathematically “simple” entire multiverse rather than a “complex” human brain computation)? In general it seems such metaphors can always be twisted into meaninglessness or into assumptions I don’t follow, and I’ve never seen clear arguments that don’t rely on either such metaphors or flat-out intuition.)
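To make the Goedel-numbering metaphor at least inspectable: the textbook move is the prime-power encoding of finite sequences of naturals. This is a toy—encoding “arbitrary mathematical structures,” never mind an entire multiverse, would need something far richer, which is part of the point—but it shows how a single number can carry recoverable structure:

```python
# Toy Goedel numbering: encode (a_1, ..., a_k) as
# 2^(a_1+1) * 3^(a_2+1) * 5^(a_3+1) * ... and recover it by factoring.

def primes():
    """Yield primes by trial division (fine for small examples)."""
    n, found = 2, []
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def godel_encode(seq):
    code = 1
    for p, a in zip(primes(), seq):
        code *= p ** (a + 1)  # +1 so zeros are distinguishable
    return code

def godel_decode(code):
    seq = []
    for p in primes():
        if code == 1:
            break
        exponent = 0
        while code % p == 0:
            code //= p
            exponent += 1
        seq.append(exponent - 1)
    return seq
```

Note that the codes of even very simple sequences grow fast and look “random” as raw numbers, which is one way of restating the worry above: the apparent randomness of a number tells you little about the simplicity of what it encodes.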