# interstice

Karma: 1,946
• Can you in practice use set theory to discover something new in other branches of math, or does it merely provide a different (and less convenient) way to express things that were already discovered otherwise?

The value of set theory as a foundation comes more from being a widely agreed-upon language that is also powerful enough to express pretty much everything mathematicians can think up, rather than from being a tool for making new discoveries. I think it’s worth learning at least at a shallow level for this reason, if you want to learn advanced math.

• Did you notice that I linked the very same article that you replied with? :P I’m aware of the issues with UDASSA, I just think it provides a clear example of an imaginable atheistic multiverse containing a great many possible people.

• I think the cardinality should be Beth(0) or Beth(1), since finite beings should have finite descriptions. Additionally, finite beings can have at most Beth(1) (if we allow immortality) distinct sequences of thoughts, actions, and observations, given that they can only think, observe, and act in a finite number of ways in finite time; so if you quotient by identical experiences and behaviors you get Beth(0) or Beth(1). (You might think we can, e.g., observe a continuum amount of stuff in our visual field, but this is an illusion: the resolution is bounded.) The Bekenstein bound also implies that physically limited beings in our universe have a finite description length.
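The counting argument here can be made explicit. A sketch, under the stated idealization that a lifetime is countably many time steps with finitely many options at each step:

```latex
% Beth numbers: $\beth_0 = \aleph_0$ and $\beth_{n+1} = 2^{\beth_n}$.
% If a being chooses among at most $k$ options ($2 \le k < \aleph_0$)
% at each of countably many time steps, its possible histories inject
% into $k^{\mathbb{N}}$, so there are at most
\[
  \bigl| k^{\mathbb{N}} \bigr| \;=\; k^{\aleph_0} \;=\; 2^{\aleph_0} \;=\; \beth_1
\]
% distinct experience-sequences. A mortal (finite-length) history is a
% finite string over a finite alphabet, giving only $\beth_0 = \aleph_0$
% possibilities.
```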

> There could be a God-less universe with Beth 2 people, but I don’t know how that would work

I don’t think it’s hard to imagine such a universe, e.g. consider all possible physical theories in some formal language and all possible initial conditions of such theories. This might be less simple to state than “imagine an infinitely perfect being” but it’s also much less ambiguous, so it’s hard to judge which is actually less simple.

> SIA gives reason to think you should assign a uniform prior across possible people

My perspective on these matters is influenced a lot by UDASSA, which recovers a lot of the nice behaviors of SIA at the cost of non-uniform priors. I don’t actually think UDASSA is likely a correct description of reality, but it gives a coherent picture of what an atheistic multiverse containing a great many possible people could look like.

• I don’t think the anthropic argument works. I have some technical objections to the discussion of the set of possible people (I think Beth(0) or Beth(1) at most are more plausible cardinalities, and I don’t think we have to assume a uniform prior over possible people, which means we don’t need to assign a 0% probability to any particular being’s existence even if not every possible being exists). But more basically, I just don’t see why God makes much of a difference to the plausibility of any particular ontological arrangement. If you think God might create a universe with Beth(2) people, why couldn’t there be a God-less universe with the same cardinality of people? If you think God might create a proper class of people, why couldn’t there be a God-less Proper Universe with the same people? Conversely, if modal realism undermines induction, doesn’t a God-created set of all people undermine it in the same way? These universes might sound pretty “wild” and so appear implausible without intelligent design, but on a description-length perspective, “having Beth(2) people and no God” or whatever can be specified pretty compactly. You might appeal to the infallibility, etc., of God to explain away paradoxes, but I think this is essentially invoking a “get out of paradoxes free” card without doing any explanatory work.

# Halifax Rationality Meetup

13 Feb 2024 4:17 UTC
6 points
• I think Twitter is still the closest thing to a global town square. This post by Tyler Cowen is good on the topic.

• Subsist? Sustain? Self-actualize? Start?

• Why are you guys talking about waves necessarily dissipating? Wouldn’t there be an equal probability of waves forming and dissipating, given that we are sampling a random initial configuration, and hence be in equilibrium w.r.t. the formation/dispersion of waves?

• Also, how are you getting 0.8%? This website says that the mortality rate of 65-to-75-year-olds is 2%, so over 4 years that should be about 8%, which I think makes it much more plausible that the quantum death probability is 0.1% (although clearly 8% isn’t the real probability; any presidential candidate is probably far less likely to die than the background population).

• Yyyyeah, I’m not totally sure you get 0.1% just from people dying, hence the ~. But I think it’s at least within a factor of 10, which makes me think the total quantum-randomness factor is at least 0.1%. And to defend the “people dying” factor: (a) many of the candidates are in fact pretty old these days; (b) presidents have a relatively high rate of being assassinated, 4 of 45 (!), although I assume the actual probability is lower now than the historical average; (c) randomness within the 4-year window could affect how quickly a pre-existing health problem progresses, although this might result in them dropping out early (or not running) rather than actually dying.

• > I don’t think most people die for quantum-randomness reasons

You don’t think so? I think this is clearly the case over someone’s entire life. Conditioning on a 4-year timescale, I think accidental deaths, assassinations, and viral infections are certainly quantum-randomness-affected at greater than 0.1% probability. Maybe also things like cancer and its progression (dependent on mutations which may or may not happen on a short timescale?), but I don’t really know much about it.

• I think “one of the potential candidates might quantum-randomly die in the timeframe” is a pretty strong argument that there’s at least ~0.1% quantum uncertainty.

ETA: For some stats on this, see this table from the government of Canada. The annual death rate ranges from 0.1% for 35-year-olds, up to 0.5% for 55-year-olds, and 3% for 75-year-olds. Multiply those by 4 to get a rough death rate in the relevant window. Obviously only a small fraction of those deaths will be quantum-randomness-influenced. Also note the relatively high rate of presidential assassinations -- 4 of 45 (!) presidents were assassinated in office (although I assume the “true” probability is lower now).
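As a quick sanity check of the arithmetic (using the annual rates quoted above; note the “multiply by 4” shortcut slightly overstates the compounded 4-year probability):

```python
# Rough 4-year death probabilities from the annual all-cause mortality
# rates cited above (0.1% at 35, 0.5% at 55, 3% at 75).
annual_rates = {35: 0.001, 55: 0.005, 75: 0.03}

for age, p in annual_rates.items():
    naive = 4 * p                  # simple "multiply by 4" estimate
    compounded = 1 - (1 - p) ** 4  # survival-based estimate
    print(f"age {age}: naive {naive:.1%}, compounded {compounded:.1%}")

# Historical in-office assassination base rate cited above:
print(f"assassination base rate: {4 / 45:.1%}")
```

For rates this small the two estimates barely differ (e.g. 12% vs. ~11.5% at the 3% annual rate), so the shortcut is fine for order-of-magnitude purposes.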

• I think you’re probably right. It does seem plausible that there is some subtle structure which is preserved after 20 seconds, such that the resulting distribution over states is feasibly distinguishable from a random configuration, but I don’t think we have any reason to think that this structure would be strongly correlated with which side of the box contains the majority of particles.

• > but their control over their “portion of the universe” would actually increase

Yes, in the medium term. But given a very long future, it’s likely that any control so gained could eventually also be gained while on a more conservative trajectory, while leaving you/your values with a bigger slice of the pie in the end. So I don’t think that gaining more control in the short run is very important, except insofar as that extra control helps you stabilize your values. I suppose it does actually seem plausible that, on current margins, human population growth improves value stabilization faster than it erodes your share, although I don’t think I would extend that to creating an AI population larger than the human one.

• I see, I think I would classify this under “values can be satisfied with a small portion of the universe” since it’s about what makes your life as an individual better in the medium term.

• Another point, I don’t think that Joe was endorsing the “yet deeper atheism”, just exploring it as a possible way of orienting. So I think that he could take the same fork in the argument, denying that humans have ultimately dissimilar values in the same way that future AI systems might.

• In that case I’m actually kinda confused as to why you don’t think that population growth is bad. Is it that you think that your values can be fully satisfied with a relatively small portion of the universe, and you or people sharing your values will be able to bargain for enough of a share to do this?

• I think people sharing Yudkowsky’s position think that different humans ultimately (on reflection?) have very similar values, so making more people doesn’t decrease the influence of your values that much.

ETA: apparently Eliezer thinks that maybe even ancient Athenians wouldn’t share our values on reflection?! That does sound like he should be nervous about population growth and cultural drift, then. Well, “the vast majority of humans would have similar values on reflection” is at least a coherent position, even if EY doesn’t hold it.