It sounds possible. However, before even the first people get it, there should be some progress with animals, and right now there is nothing. So I would bet it is not going to happen in, say, the next 5 years. (Well, unless we suddenly get radical progress in creating a super-AI that will do it for us, but that is a huge separate question on its own.) I would say I wanted first to think about the very near future, without a huge technological breakthrough. Of course, immortality and super-AI are far more important than anything I mentioned in the original post. However, I think there is a non-negligible likelihood that something from the original post will happen very soon (maybe even this year), while the likelihood of immortality before the end of this year seems quite negligible.
I would suggest a per-minute subscription. It would be approximately $1/minute, which is actually close to my akrasia fine for spending time on job-unrelated websites.
It is exactly the point that there should be no proof of the simulation unless the simulators want there to be one. Namely, there should be no observable (to us) difference between a universe governed simply by the laws of Nature and one with intervention from simulators. We can't look at any effect and say: this happens, therefore we are in a simulation. The point was the opposite. Assume we are in a simulation with benevolent simulators (which, according to what I wrote in the theoretical part of the post, is highly likely). What can they do so that we are still unable to classify their intervention as something outside the laws of nature, but our well-being is nevertheless improved? What are the practical consequences of this for us?

By the way, we do not even have to require the ability to change probability. Just the placebo effect is good enough. Consider a person who was suffering from depression, addiction, or akrasia, and is now much better. Can a strong placebo (like a very strong religious experience) do that? Well, yes, there have been multiple cases. Does it improve well-being? Certainly yes. So the practical point is that if such intervention, masquerading as placebo, can help, it is certainly worth trying. Of course, one can say that I am just tricking myself into believing it and then the placebo simply works, but the point is that I have reasons to believe it (see the theoretical part), and this is what makes the placebo work. Thank you for directing my attention to the post; I will certainly read it.
Of course, the placebo effect is useful from an evolutionary point of view, and it is the subject of quite a lot of research. (The main idea: it is energetically costly to keep your immune system on high alert at all times, so you boost it at particular moments correlated with pleasure, usually from eating/drinking/sex, which is when germs usually get into the body. If you are interested, I will find the link to the research paper where this is discussed.)

I am afraid I am still failing to explain what I mean. I am not trying to deduce from observation that we are in a simulation; I don't think that is possible (unless the simulators decide to allow it). I am trying to see how the belief that we are in a simulation with benevolent simulators can change my subjective experience. Notice that I can't just trick myself into believing it merely because it is healthy to believe. This is why I needed all the theory above: to show that benevolent simulators are indeed highly likely. Then, and only then, can I hope for the placebo effect (or for a real intervention masquerading as the placebo effect), because now I believe it may work. If I could just make myself believe whatever I needed, of course I would not need all these shenanigans; but after being a faithful LW reader for a while, that is really hard, if possible at all.
Hmmm, but I am not saying that the benevolent simulators hypothesis is false and that I just choose to believe it because it brings a positive effect. Rather the opposite: I think that benevolent simulators are highly likely (more than a 50% chance). So it is not a method "to believe in things which are known to be false." It is rather an argument for why they are likely to be true (of course, I may be wrong somewhere in this argument, so if you find an error, I will appreciate it). In general, I don't think people here want to believe false things.
How should we deal with cases where epistemic rationality contradicts instrumental rationality? For example, we may want to use the placebo effect because one of our values is that healthy is better than sick, and less pain is better than more pain. But the placebo effect depends on our believing a pill to be a working medicine when it is not. Is there any way to satisfy both epistemic and instrumental rationality?
Thank you, wonderful series!
Sorry, I didn't get what you mean by "non-dominant political controllership"; can you rephrase it?
When I say use, I mean actually detonating: not necessarily destroying a big city, but initially maybe just something small. Within their own territory is possible, though I think outside it is more realistic (I think the army will eventually be too weak to fight external enemies with modern technology, but it will always be able to fight unarmed citizens).
Well, "democratic transition" will not necessarily solve that (just as it did not completely resolve the problem at the end of the Cold War), you are right; so actually the probability must be higher than I estimated, which is even worse news. Are there any other options for decreasing the risk?

From a Russian perspective: well, I didn't discuss it with officials in the government, only with friends who support the current government. So I can only say what they think and feel, and of course it is just anecdotal evidence. When I explicitly discussed the possibility of nuclear war with one of them, he stated that this possibility is small, and that as long as escalation is beneficial for Russia, he will support it.
I don't want to get into politics here and discuss what type of government would be better for Russia. I was more interested in estimating the probability of nuclear war (or the other catastrophes mentioned in the main post).
Yes, absolutely, it is the underlying thesis.
The problem is that retaliation is not immediate (missiles take a few hours to reach their targets). For example, Plutonia can demonstratively destroy one object and declare that any attempt at retaliation will be retaliated against twofold. As soon as another country launches N missiles, Plutonia launches 2N.
That is exactly the problem. Suppose the Plutonia government sincerely believes that as soon as other countries are protected, they will help the people of Plutonia overthrow the government. And they kind of have reasons for such a belief. Then (in their model of the world) a world protected from them is a deadly threat, basically capital punishment. Nuclear war, however horrible, still leaves them bomb shelters where they can survive, with enough food just for themselves to live out a natural lifespan.