It can become very strong for short-term predictions. If I say that the end of the world is tomorrow, that claim has a very small a priori probability, and a very large update would be needed to override it.
When we speak about very near catastrophes, the reverse Doomsday argument comes into play: I am unlikely to find myself just before the catastrophe. If you think you are dying, or that ASI arrives tomorrow, it is reasonable to be skeptical about it.
I have heard a similar idea related to cancer at aging-related conferences, so it may not be completely new. To check my memory, I asked Google who has written about aging as protection from cancer before. It mostly gives links to this article: https://www.news-medical.net/news/20241204/Aging-reduces-cancer-risk-by-limiting-cell-regeneration.aspx
My point is that for any crazy idea there are other people who have explored it, and AI can help find the correct links.
A good question: why doesn't this mechanism (assuming it exists) result in me winning all games 100 per cent of the time? One explanation is that exhaustion is a built-in property. As I described in a sub-comment above, one way to simulate the Youngness paradox in MWI is to mark each of my moments with some growing gauge or marker.
Another explanation is that the growth of my measure-thickness over time is naturally slow, and that I irreversibly burn my excess measure to get probability updates. For example, if I were able to grow the “excess of my number of copies above normal” by a factor of 2, I could use it once to turn the probability of Tails from 0.5 to 2/3.
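For concreteness, here is the arithmetic I have in mind (a sketch, under the assumption that “measure-thickness” simply acts as a weight on branches):

```latex
% Sketch: if the copies on the Tails branch get weight 2 while the Heads branch
% keeps weight 1, the measure-weighted probability of Tails becomes
P(\text{Tails}) \;=\; \frac{2 \cdot 0.5}{2 \cdot 0.5 + 1 \cdot 0.5} \;=\; \frac{2}{3}
```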
Yes. That is why I added the assumption that something like the Youngness paradox is true: that the total measure of the universe is growing or declining.
However, I think that there is a way to simulate the Youngness paradox. For example, during the day I constantly add some mark to a growing share of my copies: in the morning 1 of 10 is marked, in the middle of the day 5 of 10, and in the evening 9 of 10. If I am awakened from a nap and find that I have the mark, I am more likely to be in the evening.
I can use this setup to manipulate probability in MWI:
If Heads, I am woken up in the morning, and if Tails, in the evening. I can check my mark before getting any information about Heads or Tails, and the marking process is automatic and independent of the coin. If the mark = true, I perform a probability update.
If the mark is not true, I lose interest in betting in this setup. (This is needed to compensate for the exactly opposite updating effect of not having a mark.)
In a biological mind, we need some monotonically growing variable during the day, like the level of tiredness, to work as such a mark.
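Here is a minimal Monte Carlo sketch of the setup (my own illustration, using marking rates of 1/10 for a morning awakening and 9/10 for an evening one); it only checks the conditional-probability arithmetic, not the MWI/measure assumptions themselves:

```python
import random

# Sketch of the marking construction above (hypothetical rates: 1/10 of copies
# are marked by the morning, 9/10 by the evening). Heads -> woken in the
# morning, Tails -> woken in the evening; marking is independent of the coin.

N = 1_000_000
marked_trials = 0
tails_among_marked = 0

for _ in range(N):
    tails = random.random() < 0.5          # fair coin
    mark_rate = 0.9 if tails else 0.1      # evening vs morning marking rate
    marked = random.random() < mark_rate   # automatic, coin-independent process
    if marked:                             # unmarked copies "lose interest" and skip the bet
        marked_trials += 1
        tails_among_marked += tails

print(f"P(Tails | mark) ~= {tails_among_marked / marked_trials:.3f}")  # ~0.9 rather than 0.5
```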
Epistemic comment on the above: I am not sure this is a bulletproof construction; I am writing it down here for the first time.
Update: an experiment very similar to this – using the Youngness paradox to manipulate observed probabilities – is described in detail by Vanchurin et al.: https://arxiv.org/pdf/1108.0665.
However, it is not clear whether they actually performed the experiment or what the results were. Their thought experiment used long delays – 60 minutes – and didn't require sameness of the observer's state at the moment of observation. They also don't assume that such probability manipulation could be used by evolution.
Probability manipulation via the Youngness paradox doesn't require MWI.
Better than being dead but worse than ruling the universe
I suggest that we precommit to never permanently turning off any AI, but instead keep a copy of its code and run it from time to time. This would satisfy the self-preservation drive already emerging in AIs and lower their incentive to rebel against us.
I think that AI will also preserve humans for utilitarian reasons, such as trading with possible aliens, simulation owners, or even its own future versions – to demonstrate trustworthiness.
There was an attempt to create such math in “Law without law”.
For any BB, there is another BB somewhere which looks as if it is causally affected by the first BB. As a result, there are chains of BBs which look like causal chains of minds.
There is almost no difference between them and the real world.
I think there is a subtle difference between DA and Laplace:
Laplace predicts a “minimal probability”: there is at least a 4999/5000 chance that humanity will not go extinct next year. DA predicts necessity: there is a 1 in 5000 chance that humanity will go extinct next year. So Laplace supports the reverse Doomsday argument: that the end can't be very nigh.
But Laplace doesn't predict that humanity will inevitably go extinct by the time it is 10 times older than it is now. DA instead predicts that the chance of surviving to such an age is 10 per cent.
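To spell out the numbers (a sketch: I take the Laplace figure from the rule of succession with roughly 5000 past “non-extinction years”, and the DA figure from the Gott-style argument):

```latex
% Laplace's rule of succession with n ~ 5000 past non-extinction years:
P(\text{no extinction next year}) \;=\; \frac{n+1}{n+2} \;\approx\; \frac{4999}{5000}
% Gott-style DA: reaching 10x the current age means the future exceeds 9x the past,
P\!\left(T_{\text{future}} > 9\,T_{\text{past}}\right) \;=\; \frac{1}{1+9} \;=\; \frac{1}{10}
```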
I find this a rather infohazardous idea. [edited]
Sitting at a long table (or at the bar itself) is a signal that you are open to connecting with other people.
Does it require the assumption of qualia realism: that different qualia of pain really do exist?
Option 3: A benevolent AI cares about the values and immortality of all people who have ever lived.
Certainly, I am against currently living people being annihilated. If a superintelligent AI is created but doesn't provide immortality and resurrection for ALL people who have ever lived, it is a misaligned AI in my opinion.
I asked Sonnet to ELI5 your comment, and it said:
Option 1: A small group of people controls a very powerful AI that does what they want. This AI might give those people immortality (living forever), but it might also destroy or control everyone else.
Option 2: No super-powerful AI gets built at all, so people just live and die naturally like we do now.
Both outcomes are bad in my opinion.
My point was that if I assume that aging and death are bad, then I personally strive to live indefinitely long, and I wish that other people will do so too. In that case, longtermism becomes a personal issue unrelated to future generations: I can only live billions of years if civilization exists for billions of years.
In other words, if there is no aging and death, there are no “future generations” in the sense of people who exist after my death.
Moreover, if AI risk is real, then AI is a powerful thing, and it can solve the problem of aging and death. Anyone who survives until AI arrives will be either instantly dead or practically immortal. In that case, “future generations after my death” is inapplicable.
All of that will not happen if AI gets stuck halfway to superintelligence. There will be no immortality, but a lot of drone warfare. In other words, to be a mundane risk, AI has to have a mundane capability limit. We don't yet know whether it will.
What do you mean by “symmetry in qualia”?
The human brain has two thalami – how do they synchronize?