# gbear605

Karma: 1,250
• I’m claiming that we should only ever reason about infinity with induction-style proofs. Due to the structure of the thought experiment, the only thing that can be counted in this way is galaxies, so (I claim) counting galaxies is the only tool you’re allowed to use for moral reasoning here. Since all of the galaxies in each universe are moral equivalents (either all happy but one or all miserable but one), how you rearrange galaxies doesn’t affect the outcome.

(To be clear, I agree that if you rearrange people under the concepts of infinity that mathematicians like to use, you can turn HEAVEN into HELL, but I’m claiming that we’re simply not allowed to use that type of infinity logic for ethics.)

Obviously this is taking a stance about the ways in which infinity can be used in ethics, but I think this is a reasonable way to do so without giving up the concept of infinity entirely.

• I have an argument for a way in which infinity can be used but which doesn’t imply any of the negative conclusions. I’m not convinced of its reasonableness or correctness though.

I propose that infinity ethics should only be reasoned about by use of proof through induction. When done this way, the only way to reason about HEAVEN and HELL is by matching up galaxies in each universe, and doing induction across all of the elements:

Theorem: The universe HEAVEN that contains n galaxies is a better universe than HELL which contains n galaxies. We will formalize this as HEAVEN(n) > HELL(n). We will prove this by induction.

• Base case, HEAVEN(1) > HELL(1):

• The first galaxy in HEAVEN (which contains billions of happy people and one miserable person) is better than the first galaxy in HELL (which contains billions of miserable people and one happy person), by our understanding of morality.

• Induction step HEAVEN(n) > HELL(n) ⇒ HEAVEN(n+1) > HELL(n+1):

• HEAVEN(n) > HELL(n) (given)
HEAVEN(n) + billions of happy people + 1 happy person > HELL(n) + billions of miserable people + 1 miserable person (by understanding of morality)
HEAVEN(n) + billions of happy people + 1 miserable person > HELL(n) + billions of miserable people + 1 happy person (moving people around, when it changes nothing else, does not affect the comparison)
HEAVEN(n + 1) > HELL(n + 1) □
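The finite-n claim the induction establishes can also be checked numerically. A minimal sketch, where every figure (the galaxy population, the ±1 utility per person) is an illustrative assumption rather than anything from the thought experiment:

```python
# Score each galaxy by total utility: +1 per happy person, -1 per
# miserable person (illustrative assumption), and check that
# HEAVEN(n) > HELL(n) at every finite n we try.

PEOPLE_PER_GALAXY = 10**9  # stand-in for "billions of people"

# One HEAVEN galaxy: everyone happy but one person.
HEAVEN_GALAXY = (PEOPLE_PER_GALAXY - 1) - 1
# One HELL galaxy: everyone miserable but one person.
HELL_GALAXY = -(PEOPLE_PER_GALAXY - 1) + 1

def heaven(n: int) -> int:
    """Total utility of HEAVEN with n galaxies."""
    return n * HEAVEN_GALAXY

def hell(n: int) -> int:
    """Total utility of HELL with n galaxies."""
    return n * HELL_GALAXY

# The invariant holds at each finite n checked here; the induction
# argument is what extends it to all n.
assert all(heaven(n) > hell(n) for n in range(1, 10_000))
```

This only exercises the finite cases; the whole dispute, of course, is about what happens when n is no longer finite.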

A downside of this approach is that you lose the ability to reason about uncountable infinities. However, that’s a bullet I’m willing to bite: only being able to reason about a countably infinite number of moral entities.

• One downside to using video games to measure “intelligence” is that they often rely on skills that aren’t generally included in “intelligence”, like how quickly and precisely you can move your fingers. Someone with poor hand-eye coordination will perform worse on many video games than someone with good hand-eye coordination.

A related problem is that video games in general have a large element of a “shared language”: someone who plays lots of video games can transfer skills from those when playing a new one. I know people who are certainly more intelligent than I am but who are worse at picking up a new video game, because their parents wouldn’t let them play video games growing up (or they’re older and didn’t grow up with video games at all).

I like the idea of using a different tool to measure “intelligence”, if you must measure “intelligence”, but I’m not sure that video games are the right one.

• 4 Aug 2023 22:30 UTC · 10 points · 3

There’s no direct rationality commentary in the post, but plenty of other posts on LW aren’t direct rationality commentary either (for example, the large majority of posts here about COVID-19). I think this post is a good fit because it provides tools for understanding this conflict and others like it, tools I didn’t possess before and now somewhat do.

It’s not directly relevant to my life, but that’s fine. I imagine that for some here it might actually be relevant, because of connections through things like effective altruism (maybe it helps grant makers decide where to send funds to aid the Sudanese people?).

• Interesting post, thanks!

A couple of formatting notes:

> This post gives a context to the deep dives that should be minimally accessible to a general audience. For an explanation of why the war began, see this other post.

It seems like there should be a link here, but there isn’t one.

Also, none of the footnotes link to each other properly, so currently one has to scroll down to the footnotes manually and then scroll back up. LessWrong has a footnote feature you could use that makes the reading experience nicer.

• It used to be called Find My Friends on iOS, but Apple rebranded it, presumably because family was a better market fit.

There are others like it too, such as Life360, and they’re quite popular. They solve the problem of parents wanting to know where their kids are. It’s perhaps overly zealous on the parents’ part, but it’s a real desire that the apps are meeting.

• Metaculus isn’t very precise near zero, so it doesn’t make sense to multiply it out.
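To illustrate the precision point, here’s a toy sketch. Every number in it is a made-up assumption, including the idea that the displayed forecast is rounded to the nearest percent: the point is only that a reading near zero leaves a wide range of underlying probabilities, and any product computed from it inherits that whole range.

```python
# Suppose a displayed "1%" is consistent with any true probability
# from 0.5% to 1.5% (hypothetical rounding granularity), and we
# multiply it by some stake to get an expected value.

stake = 1_000_000  # hypothetical quantity the probability multiplies

displayed_as_one_percent = [0.005, 0.010, 0.015]
products = [p * stake for p in displayed_as_one_percent]
print(products)  # the same displayed forecast spans a ~3x range of products
```

So even if the forecast itself is well calibrated, multiplying it out near zero produces a point estimate the display can’t actually support.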

Also, there’s currently a mild outbreak, while most of the time there’s no outbreak (or less of one), so the risk for the next half year is elevated compared to normal.

• I’m not familiar with how Stockfish is trained, but does its training intentionally include playing with queen odds? If not, it might start trouncing you if it were trained for that, instead of having to “figure out” new strategies on its own.

• Are there other types of energy storage besides lithium batteries that are plausibly cheap enough (with near-term technological development) to cover the multiple days of storage case?

(Legitimately curious, I’m not very familiar with the topic.)

• If you’re on the open-air viewing platform, it might be feasible to use something like a sextant or shadow lengths to figure out the height from the platform to the top, and then use a different tool to figure out the height of the platform.
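The angle-plus-distance step can be sketched as follows; the function name and all the measurements are hypothetical:

```python
import math

def height_above_platform(elevation_deg: float, horizontal_dist_m: float) -> float:
    """Height of the top above eye level, given the elevation angle to the
    top and the horizontal distance to the structure (basic trigonometry:
    height = distance * tan(angle))."""
    return horizontal_dist_m * math.tan(math.radians(elevation_deg))

# Hypothetical example: the top is seen at 45 degrees from 30 m away,
# and the platform's own height was measured separately as 100 m.
total = height_above_platform(45.0, 30.0) + 100.0
print(round(total, 1))  # 45 degrees gives tan ~ 1, so ~30 m above the platform
```

Shadow lengths work the same way, with the sun’s elevation angle standing in for the sighting angle.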

• I often realize that I’ve had a headache for a while and had not noticed it. It has real effects—I’m feeling grumpy, I’m not being productive—but it’s been filtered out before my conscious brain noticed it. I think it’s unreasonable to say that I didn’t have a headache, just because my conscious brain didn’t notice it, when the unconscious parts of my brain very much did notice it.

After split-brain surgery, patients can experience a sensation on one side of their body without noticing it with the portion of the brain that controls speech, that is, the portion that seems conscious. The other portion of the brain still experiences the sensation and reacts to it, in a way that can seem inexplicable to the conscious portion (though the conscious brain will try to make up some sort of explanation for it).

The brain is not unitary, and it is so un-unitary that it seems like a mistake to even act as if subjective experience is a single reality.

• The problem is that prior to ~1990 there were lots of supposed photographs of Bigfoot, and now there are ~none. So Bigfoots would have to have previously been common close to humans but now be uncommon, or all the photos were fake but the other evidence was real. Moreover, that other evidence has also died out, now that the excuse that no one could have taken a photo is less plausible. So it’s still possible that Bigfoot exists, but you have to start by throwing out all of the evidence people have that Bigfoot exists, and then why believe in Bigfoot?

• I really enjoyed the parts of the post that weren’t related to consciousness, and it helped me think more about the assumptions I have about how the universe works. The Feynman quote was new for me, so thank you for sharing that!

However, when you brought consciousness into the post, it brought along additional assumptions that the rest of the post wasn’t relying on, weakening the post as a whole. Additionally, LessWrong has a long history of debating whether consciousness is “emergent” or not. Most readers here already hold fixed positions on the debate and would need substantial evidence to be convinced to change their position. Simply stating that “that idea feels wrong” doesn’t suffice, especially when many people often feel otherwise (notably, people who have spent time meditating and feel that they have become “one with the universe”).

• Any position that could be considered safe enough to back a market is only going to appreciate in proportion to inflation, which would just make the market zero-sum after adjusting for inflation. Something like ETH or gold wouldn’t be a good solution because it’s going to be massively distorted on questions that are correlated with the performance of that asset, plus there’s always the possibility that they just go down, which would be the opposite of what you want.

• I haven’t read Fossil Future, but it sounds like he’s ignoring the option of combining solar and wind with batteries (and other types of electrical storage, like pumped water). The technology is available today and can be more easily deployed than fossil fuels at this point.

> Parts of this are easily falsifiable through the fact that organ transplant recipients sometimes get donor’s memories and preferences

The citation is to a disreputable journal. Some of its sources might have some basis (though many of them also seem disreputable), but I wouldn’t take this at face value.

• There can also be meaning that the author simply didn’t intend. In biblical interpretation, for instance, there have been many different (and conflicting!) interpretations given to texts that were written with a completely different intent. One reader reads the story of Adam and Eve as a text that supports feminism, another reader sees the opposite, and the original writer didn’t intend to give either meaning. But both readers still get those meanings from the text.

• 9 Mar 2023 0:35 UTC · 2 points · 0