The paper was published in Nature Communications and its preprint was discussed widely for two years, so there are probably no flaws that could be easily picked up.
“The conceptual experiment has been debated with gusto in physics circles for more than two years — and has left most researchers stumped, even in a field accustomed to weird concepts. “I think this is a whole new level of weirdness,” says Matthew Leifer, a theoretical physicist at Chapman University in Orange, California.
The authors, Daniela Frauchiger and Renato Renner of the Swiss Federal Institute of Technology (ETH) in Zurich, posted their first version of the argument online in April 2016. The final paper appears in Nature Communications on 18 September.”
Some other ideas for the list of “rationality realism” assumptions:
Probability actually exists, and there is a correct theory of it.
Humans have values.
Rationality could be presented as a short set of simple rules.
Occam’s razor implies that the simplest explanation is the correct one.
Intelligence could be measured by a single scalar—IQ.
Here it is assumed that Alice knows her preferences. But humans are sometimes unsure about what they actually want, especially in the case of two almost equal desires, like Bob and money. They update their preferences later via rationalisation: if Alice gets Bob, she will decide that she wanted Bob.
The case about regret could be made stronger if one actually looked into the existing psychological literature, which has probably explored the relation between regret and values.
Also, it is possible to imagine a “hyperregret disorder”, where a person regrets any of his or her choices; in that case, regret is non-informative about preferences.
It probably depends on how the mass and the time duration of the fluctuation are traded against each other. For quantum fluctuations which return to nothingness, this relation is defined by the uncertainty principle, and for any fluctuation with significant mass, its time of existence would be a minuscule fraction of a second, which would be enough for only one static observer-moment.
But if we imagine a computer that is very efficient at calculation, one that could perform many calculations in the time allowed for its existence by the uncertainty principle, it should dominate by number of observer-moments.
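To make the trade-off concrete, here is a rough sketch under two assumptions of mine (not from the original comment): the fluctuation’s energy is E = mc², and its lifetime is bounded by the energy-time uncertainty relation ΔE·Δt ≳ ℏ/2.

```python
# Sketch: maximum lifetime of a quantum fluctuation of mass m,
# assuming E = m*c**2 and dE*dt ~ hbar/2 (my illustration).
HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s

def max_lifetime(mass_kg: float) -> float:
    """Upper bound on the lifetime of a fluctuation of the given mass."""
    return HBAR / (2 * mass_kg * C**2)

# A brain-sized fluctuation (~1.4 kg) could persist only ~4e-52 seconds.
print(max_lifetime(1.4))
```

Even a gram-scale fluctuation gets only ~10^-49 seconds, which is why the computer would need to be extraordinarily efficient to fit many observer-moments into that window.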
However, a Boltzmann simulation may be much more efficient than biological brains. 1 g of advanced nanotech supercomputer could simulate trillions of observer-moments per second while weighing 1000 times less than a “real” brain. This means that I am more likely to be inside a BB-simulation than to be a real BB. Also, the coarsest and most primitive simulations, with many errors, should dominate.
But what about “dust minds” inside objects existing now, like my table? Given ~10^80 particles in the universe, its existence to date of roughly 10^17 seconds, and particle collisions every few nanoseconds, there should be a very large number of randomly appearing causal structures which may be similar to the experiences of observers.
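A quick back-of-the-envelope count using the numbers above (my reading of “every few nanoseconds” as ~10^9 collisions per second per particle):

```python
# Rough count of pairwise interaction events over cosmic history,
# using the comment's own figures.
PARTICLES = 1e80               # particles in the observable universe
AGE_SECONDS = 1e17             # age of the universe, seconds
COLLISIONS_PER_SECOND = 1e9    # "every few nanoseconds" ~ 1e9 per second

total_collision_events = PARTICLES * AGE_SECONDS * COLLISIONS_PER_SECOND
print(f"{total_collision_events:.0e}")  # ~1e106 events to search for causal structures
```

~10^106 events is a huge pool, but still tiny compared with the number of distinct causal structures of an observer-moment, so whether dust minds dominate needs an actual calculation.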
Also, it is assumed here that there are only two types of BBs and that they have a similar measure of existence.
However, there is a very large class of thermodynamic BBs, described in Egan’s dust theory: observer-moments which appear as a result of causal interactions of atoms in a thermodynamic gas, whenever such an interaction has the same causal structure as a moment of experience. They may numerically dominate, but additional calculations are needed and seem possible. There could be other types of BBs, like purely mathematical ones or products of quantum mind generators, which I describe in the post about resurrection of the dead.
Also, if we assume, for example, that the measure of existence is proportional to the energy used for calculations, then de Sitter Boltzmann brains will have a higher measure, as they have non-zero energy, while quantum-fluctuation minds may have a smaller calculation energy, as their externally measurable energy is zero and the time of calculation is very short.
It looks like the middle of the post is either broken or intended to be read by a person with unbounded rationality.
Another point: do you use in your arguments the idea that I am not a BB? Because if most of my copies are BBs, I am also likely to be a BB, and thus the question is what one BB could do to make other BBs happy. The problem is that BBs are almost by definition not rational, as true thoughts and false thoughts have equal probability for BBs (except in the case of BB-simulations, where there may be some shift towards rationality).
The upper limit for the energy of a randomly appearing BB-simulation is about 1 solar mass, because a whole new copy of our Sun and our planet could appear as a physical object, and in that case it would not be a simulation: it would be normal people living on a normal planet.
Moreover, it could be not a fluctuation creating a planet but a fluctuation creating a gas cloud, which later naturally evolves into a star and planets. Not every gas cloud will create a habitable planet, but given the astronomically small probabilities we are speaking about, the correction is insignificant.
We could even suggest that what we observe as the Big Bang was such a cloud.
I also had a similar idea, which I called the “Boltzmann typewriter”: the random appearance of an AI (or some other generator, like a planet) which creates many observer-moments will result in the domination of simulated observer-moments.
As a result, we could be in a simulation without a simulator, with a rather random set of rules and end goals. The observational consequence would be a very diluted level of strangeness.
Another thought: smaller observer-moments would overwhelmingly dominate larger ones in the case of normal BBs. An observer-moment which is 1 bit larger is 2 times less probable. My current observer-moment is larger than the minimum needed to write this comment, as I see a lot of visual background, so I am unlikely to be a pure Boltzmann brain. But this argument does not work for simulated Boltzmann brains.
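The size penalty compounds fast; a minimal sketch of the claim (the 100-bit figure is my own illustrative number):

```python
# If each extra bit of description halves the probability of a BB
# observer-moment, one that is k bits larger than minimal is 2**k
# times less likely.
def relative_probability(extra_bits: int) -> float:
    """Probability of a BB observer-moment relative to the minimal one."""
    return 2.0 ** (-extra_bits)

# Even a modest 100 bits of extra visual background suppresses the
# probability by a factor of ~1e30.
print(relative_probability(100))
```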
A possible example: an AI is not aligned regarding the amount of energy it will consume, and after each iteration of self-improvement it consumes 10 times more, starting from 1 watt. For the first 10 stages this is not a problem, but once its consumption reaches 10 gigawatts it clearly becomes a problem, and at 10 billion gigawatts it is a catastrophe.
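The schedule in this example can be written out directly:

```python
# Energy consumption growing 10x per self-improvement iteration,
# starting from 1 watt (the example above).
def consumption_watts(iteration: int) -> float:
    return 1.0 * 10 ** iteration

for n in (0, 10, 19):
    print(n, f"{consumption_watts(n):.0e} W")
# iteration 10 -> 1e10 W = 10 GW (a problem);
# iteration 19 -> 1e19 W = 10 billion GW (a catastrophe).
```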
Is MIRI the only research institution in the world working on patching decision theory and the other listed issues?
I think that most of your objections are addressed by patch 2 in the post. As we use all biographical data about the person to create his model (before filling the gaps with random noise), we will know whether he wanted to be resurrected or not. Alternatively, we will simply not resurrect those copies which did not want to be resurrected.
There are elements of biographical and causal continuity: we use all known biographical data to create the best possible model, and this information is received via causal lines from the original person, which creates some form of causal connection between the original and the resurrected copy.
I understand your position: EY ignores many other interesting interpretations of QM, like retrocausality, and if you go deeper into the field, his position may seem oversimplified.
However, this is not equal to the claim that the universe is finite in space and time. Even if some form of infinity (or very large size) is possible, like a cyclic universe, it creates the possibility of a very large number of civilisations existing in causally disconnected regions. This idea may need additional analysis, rather than simply linking Tegmark.
Yes, it is a good post, but it doesn’t cover the problem of median complexity directly.
Yes, but there is the problem of what I call the “median complexity of world descriptions”, which is probably answered somewhere, but I don’t know where to look.
In other words, Occam’s razor doesn’t mean that the simplest explanation is true. It means that simpler explanations are more probable to be true than more complex ones. The difference between the two readings is in how the truth is distributed over the complexity of the explanations.
In the first case, the distribution is very steep, so the simplest explanation is more probable than all more complex explanations combined. In the second case, the truth(complexity) function declines slowly, so maybe even the first 100 explanations combined have only probability 0.5; in that case, it is unlikely that the simplest explanation is true.
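A toy illustration of the two readings (the geometric prior and the specific ratios 0.4 and 0.993 are my own assumptions, chosen only to show the contrast):

```python
# Assign prior probability proportional to r**k to the k-th simplest
# explanation; (1 - r) normalises the geometric series to sum to 1.
def prior(r: float, k: int) -> float:
    """Geometric prior over explanation complexity."""
    return (1 - r) * r ** k

steep = prior(0.4, 0)                              # steep prior: simplest alone
flat_first_100 = sum(prior(0.993, k) for k in range(100))  # flat prior: top 100

print(steep)           # 0.6 > 0.5: simplest beats all others combined
print(flat_first_100)  # ~0.5: even the first 100 together are a coin flip
```

Under the steep prior, betting on the simplest explanation is rational; under the flat one, the simplest explanation is almost certainly false even though it is still the single most probable one.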
Personally, I think that it would not be computationally intensive for an AI capable of creating past simulations (and it will create them anyway for some instrumental reasons), so it would more likely take less than 1000 years and a small fraction of one star’s energy. This is based on some ideas about the limits of computation and the power of the human brain; I think Bostrom had calculations in his article about simulations.
However, I think that we are morally obliged to resurrect all the dead, as most people of the past dreamed about some form of life after death. They lived and died for us and for our capability to create advanced technology. We will pay the price back.