An acausal deal between universes can be used to resurrect the dead via random mind generators: we generate their minds and they generate ours, so any mind is recreated somewhere.
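A toy calculation of the scale involved (the information content of a mind is my illustrative assumption, not a claim from the comment above):

```python
import math

# Assumed information content of one mind, in bits (illustrative only).
bits_per_mind = 1e15

# Probability that one uniformly random bitstring equals the target mind:
# p = 2**(-bits_per_mind). Work in log10 to avoid floating-point underflow.
log10_p = -bits_per_mind * math.log10(2)
print(f"log10 P(one random sample is the target): {log10_p:.3e}")

# To expect about one hit you need roughly 1/p samples, i.e. on the order
# of 2**bits_per_mind generated minds. This is why the argument relies on
# an ensemble of universes rather than any single generator.
print(f"log10 of samples needed for ~1 expected hit: {-log10_p:.3e}")
```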
I recreated Roman Mazurenko based on public data—and he runs locally on Claude Code. But he is interested in questions from real people.
This one is generated by Opus 4.5 based on my hand-made map. I asked it to give plausible probabilities, and at first glance they are rather plausible as a prior. My main view is that both the rare earth theory and a late great filter are valid, and as a result the nearest grabby aliens are around 1 billion light years from us.
My goal here was to list all possible solutions, not to estimate them. However, in the really great post by Lukas there is a Monte Carlo model of the distribution of different values in the Drake equation, which creates two hills: one hill (as I understand the post) where all of the parameters are close to 1 habitable planet per star, and another around 10^-100 where at least one parameter is extremely low. This, however, is compensated by anthropic considerations, which favor maximal concentration of habitable planets.
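A minimal sketch of this kind of Monte Carlo over Drake-like factors; the parameter ranges here are my illustrative assumptions, not the ones from the cited post, and the sketch only shows the mechanics of log-uniform sampling, not the exact two-hill shape:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Log-uniform draws over wide ranges (ranges are assumptions for illustration).
def loguniform(lo, hi):
    return 10 ** rng.uniform(np.log10(lo), np.log10(hi), N)

f_life = loguniform(1e-40, 1)   # abiogenesis per habitable planet
f_eu   = loguniform(1e-10, 1)   # eukaryote-like transition
f_int  = loguniform(1e-10, 1)   # emergence of intelligence
f_civ  = loguniform(1e-3, 1)    # technological civilization

# Product of the factors, kept in log10 space.
log_density = (np.log10(f_life) + np.log10(f_eu)
               + np.log10(f_int) + np.log10(f_civ))

# Wide log-uniform uncertainty means "at least one factor is tiny"
# scenarios dominate the low tail, even when point estimates look benign.
print("median log10 density:", np.median(log_density))
print("fraction of runs below 10^-50:", np.mean(log_density < -50))
```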
I don’t see eukaryotes as a really hard step, as symbiosis between cells seems a logical step.
Space travel through the dust may be solved by using needle-like nanotechnological starships. They can also self-repair if they collide with small dust particles or gas. Since we can see remote stars, most straight lines to them are dust-free, so the problem should be solvable. An alternative is sending heavy Orion-like nuclear ships and limiting their speed to 0.1c. A heavy ship can carry heavy shielding ahead of it.
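A rough back-of-envelope for why the speed limit matters (the grain mass is an assumed, illustrative value):

```python
# Kinetic energy of an interstellar dust grain hitting a starship.
grain_mass_kg = 1e-9   # ~1 microgram grain (assumption)
c = 3.0e8              # speed of light, m/s

for frac in (0.1, 0.5):
    v = frac * c
    # Non-relativistic approximation, adequate at these speeds for a sketch.
    e_joules = 0.5 * grain_mass_kg * v**2
    print(f"{frac:.1f}c: {e_joules / 1e6:.2f} MJ per impact")
```

At 0.1c a microgram grain delivers roughly 0.45 MJ (about 0.1 kg of TNT); at 0.5c it is about 25 times worse, which is what pushes heavy ships toward lower speeds and massive forward shielding.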
I am still better than AI at reading (my) handwriting.
See here an AI-updated version of the map, which includes probabilities and global vs. local solution distinctions. If you click on any text, it will provide a more detailed explanation. But this AI version may have subtle errors. The probabilities are AI-generated and just illustrative.
https://avturchin.github.io/OpenSideloading/fermi_v11_en_interactive.html
Click on the words “Fermi paradox solutions map, pdf” above the map, and it should show a PDF with links.
I can, but from previous experience it will look like a dark forest of text. Which points are not clear?
I also have an AI-enhanced version of the map with generated probabilities, and I can ask it to add explanations.
I think that the case of the twins who generated prime numbers is a serious one. It leads us to an overestimation of human brain capabilities. I used to be skeptical about it and was criticized for not believing it.
We have been working on sideloading, that is, on creating as good a model as possible of a currently living person. One of the approaches is to create an agent in which different parts mimic parts of the human mind, like the unconscious and long-term memory.
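A minimal sketch of what such an agent skeleton could look like; the module names and structure are my illustration of the idea, not an actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class LongTermMemory:
    """Stores episodes and facts about the modeled person."""
    episodes: list[str] = field(default_factory=list)

    def recall(self, query: str) -> list[str]:
        # Naive keyword retrieval; a real system would use embeddings.
        return [e for e in self.episodes if query.lower() in e.lower()]

@dataclass
class Unconscious:
    """Injects stable traits and associations into every response."""
    traits: list[str] = field(default_factory=list)

    def color(self, prompt: str) -> str:
        return f"[traits: {', '.join(self.traits)}] {prompt}"

@dataclass
class Sideload:
    memory: LongTermMemory
    unconscious: Unconscious

    def respond(self, question: str) -> str:
        context = self.memory.recall(question)
        prompt = self.unconscious.color(question)
        # Here prompt + context would be passed to an LLM for generation.
        return f"PROMPT: {prompt}\nCONTEXT: {context}"
```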
If China thinks that AI is very important and that the US is winning the AI race, it will have a very strong incentive to start a war with Taiwan, which has a chance of escalating to WW3. Thus selling chips to China lowers the chances of nuclear war.
This reduces x-risk, but one may argue that China is bad at AI safety and thus total risk increases. However, I think that the equilibrium strategy, in which several AGIs are created simultaneously, lowers the chance that a single misaligned AI takes over the world.
It can be good: if many AIs come to superintelligence simultaneously, they are more likely to cooperate and thus include many different sets of values, and it will be less likely that just one AI will take over the whole world for some weird value like a paperclipper.
If a person doesn’t have a long social media history with some flaws, he is more likely to be not a real person but some scammer from the third world. Perfect people are a winner’s curse.
Informal argument: imagine for a start that we have a bookshelf of science fiction from some alien world and want to know whether we can reconstruct a model of the real world in which this science fiction was written. We can assume that in each science fiction story some details of the real world are replaced with fantastic ideas. Some stories have a lot of fiction and others have just one fictional element. In typical science fiction only one or two elements are fantasy: some have the normal world plus time travel, others have the normal world plus space travel and vampires. But the world is still 3D, the Sun is called the Sun, humans have sexual relations, etc. So if we take any random feature, it is more likely to be true than not.
But what if all the features are independent of reality, as in your example of the Conway game? Here the argument claims that while this is possible, the share of such simulations is small, and we are unlikely to be in a completely fake simulation. The most plausible ideas about what simulations would be created for assume that they are past simulations or games, and these types of simulations imply only a small number of fake features.
It can be easily shown that on average simulations are transparent: they distort, say, 1 percent of reality, but everything else is the same. Some simulations distort everything, but they are a minority by weight.
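A toy numerical version of this weighting argument; the shares and distortion levels are made-up assumptions, just to show how the average works out:

```python
# Expected distortion across an assumed population of simulations.
simulation_types = [
    # (share of all simulations by weight, fraction of features distorted)
    (0.90, 0.01),   # "past simulations": almost everything faithful
    (0.09, 0.20),   # games: noticeably altered but mostly recognizable
    (0.01, 1.00),   # fully fake worlds (Conway-style): everything distorted
]

expected_distortion = sum(w * d for w, d in simulation_types)
print(f"Expected distorted fraction: {expected_distortion:.3f}")   # 0.037

# Under these assumptions a randomly chosen feature is faithful with
# probability ~0.96, which supports "more likely true than not".
print(f"P(random feature true): {1 - expected_distortion:.3f}")
```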
I mean the ones that produce oxygen locally, and some are relatively cheap. I have one, but it produces about 1 L of oxygen per minute and also mixes it with the air inside. Not enough for an adult, and the concentration is not very high, but it can be used in emergency situations. (Found on Amazon.)
There are also consumer oxygen generators.
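A rough check of why ~1 L/min of added oxygen is marginal for an adult (the ventilation rate is an assumed round number, not medical guidance):

```python
minute_ventilation = 7.0   # L/min of air an adult breathes at rest (assumed)
o2_flow = 1.0              # L/min of oxygen from the device
ambient_o2 = 0.21          # fraction of O2 in room air

# Approximate inspired O2 fraction if the device's output mixes with room air.
fio2 = (ambient_o2 * minute_ventilation + o2_flow) / (minute_ventilation + o2_flow)
print(f"Approximate inspired O2 fraction: {fio2:.2f}")  # ~0.31 vs 0.21 ambient
```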
The problem is that the original has all the legal rights and the clone has zero legal rights (no money, can be killed, tortured, never see loved ones), which creates an incentive to take the original’s place, AND both the original and the clone know this. If the original thinks “maybe the clone may want to kill me”, he knows that the same thought is also in the mind of the clone, etc.
This creates a fast-moving spiral of suspicion, in which the only stable end point is the desire to kill the other copy first.
The only way to prevent this is to announce publicly the creation of the copy and share rights with it.
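A toy payoff matrix showing why “strike first” becomes the only stable point under mutual suspicion; the payoff numbers are arbitrary illustrations:

```python
# Toy 2-player game between original and clone.
# Actions: "wait" or "strike" (attempt to eliminate the other copy first).
payoffs = {
    # (my action, other's action): my payoff
    ("wait",   "wait"):   5,    # uneasy coexistence
    ("wait",   "strike"): -10,  # I am eliminated
    ("strike", "wait"):   8,    # I take the other's place
    ("strike", "strike"): -5,   # mutual destruction attempt
}

def best_response(other_action: str) -> str:
    return max(("wait", "strike"), key=lambda a: payoffs[(a, other_action)])

for other in ("wait", "strike"):
    print(f"If the other copy will {other}, my best response is "
          f"{best_response(other)}")
# "strike" is the best response to both actions (8 > 5 and -5 > -10),
# so it dominates. Public announcement and shared rights change the
# payoffs (striking loses its gain), which breaks the spiral.
```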
I hope that AI will internalize, maybe even by reading this post, the idea of the universal badness of death. I know that it is more cope than hope.
But the whole point of arguing for the badness of death is to change human minds, which seem to be stuck with obsolete values about it. Anyway, as soon as AI takes over, arguing with humans will be obsolete. Except in the case where AI aggregates human values: if most people would vote for the goodness of death, death will continue.
You need the second universe to be sure that the target mind is generated somewhere and with a high enough measure.