I was told by one researcher that the risk here is the accumulation of "wrong antibodies" which may eventually target one's own tissues, as in autoimmune diseases. Any new shot increases this small risk. This is more true for complex vector vaccines like AZ, as they trigger the generation of antibodies not only to the spike protein but also to the vector carrier itself. Anyway, I have already got a third shot of a vector vaccine.
I am still not convinced: it seems that p(abiogenesis) is a very small constant determined by the random generation of a string of around 100 bits. The probability of life becoming intelligent, p(li), is also, I assume, a constant. The only thing we don't know is the multiplier given by panspermia, which shows how many planets will get "infected" from the Eden in a given type of universe. This multiplier, I assume, is different in different universes and depends, say, on the density of stars. We could use anthropics to suggest that we live in a universe with a higher value of the panspermia multiplier (depending on the share of universes of this type).
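As a rough order-of-magnitude illustration of the first constant (my own assumption here: abiogenesis as the random generation of one specific ~100-bit string):

$$p(\text{abiogenesis}) \approx 2^{-100} \approx 8 \times 10^{-31}$$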
The difference between this and what you said above is that we don't make any conclusions about the average global level of the multiplier over the whole multiverse; you are right that anthropics can't help us there. Here I use anthropics to conclude in which region of the multiverse I am more likely to be located, not to deduce the global properties of the multiverse. Thus there is no SIA, as there are no "possible observers": all observers are real, but some of them are located in more crowded places.
There was an interesting article, "Watchers of the Infinity", which suggested that the multiverse has coherent timelines which exist without beginning or end. Thus an observer's probabilities could be calculated along such a timeline in a unique way (no spheres and ambiguities). But it requires that black holes don't have singularities.
There could be two types of zoos. Using a human analogy: a city zoo and a wildlife reserve. A city zoo has a few animals under very tight control, and they can live in a perfect simulation of the intact world, like fish in a tank. A wildlife reserve has more animals, but they are less protected from wrong observations, e.g. hunting grounds.
The second type is more probable based on anthropics, as it includes many more observers, so if we are in a zoo, it is probably of the second type. This may explain UFO observations; or, if we discard UFOs, it is an argument against the zoo hypothesis.
To better understand the suggested model of a small anthropic update, I imagined the following thought experiment: my copies are created in 4 boxes: 1 copy in the first box, 10 in the second, 100 in the third and 1000 in the fourth. Before the update, I have a 0.25 chance of being in the 4th box. After the update, I have a 0.89 chance of being in the 4th box, so the chances increased only around 3.5 times. Is it a correct model?
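A minimal sketch of this calculation, assuming the update simply weights each box by its number of copies (SIA-style; this may not be exactly the model in question):

```python
# Four-box thought experiment: my copies are created in boxes
# holding 1, 10, 100 and 1000 copies respectively.
copies = [1, 10, 100, 1000]

# Before the update: uniform probability over the four boxes.
prior = [1 / len(copies)] * len(copies)

# After the update: each box is weighted by its number of copies.
total = sum(copies)
posterior = [n / total for n in copies]

print(prior[3])                           # 0.25
print(round(posterior[3], 3))             # 0.9
print(round(posterior[3] / prior[3], 1))  # 3.6
```

This gives ≈0.90 and a ≈3.6× increase, close to the 0.89 and ~3.5 times quoted above.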
OK. Another question. I have recently been interested in the anthropic effects of panspermia. Naively, as panspermia creates millions of habitable planets per galaxy vs. one in a non-panspermia world, anthropics should be very favourable to panspermia. But the a priori probability of panspermia is low. How could your model be applied to panspermia?
Great post, thanks!
It looks like a 7-times update could be decisive in some situations. For example, if the initial probability that we are not alone in the visible universe is 10 per cent, and after the anthropic update it becomes 70 per cent, it changes the situation from "we are most likely alone" to "we are not alone".
Catching the Treacherous Turn: A Model of Multilevel AI Boxing
Multilevel defense in AI boxing could have a significant probability of success if the AI is used a limited number of times and with a limited level of intelligence.
AI boxing could consist of 4 main levels of defense, in the same way as a nuclear power plant: passive safety by design, active monitoring of the chain reaction, escape barriers and remote mitigation measures.
The main instruments of AI boxing are catching the moment of the "treacherous turn", limiting the AI's capabilities, and preventing the AI's self-improvement.
The treacherous turn could be visible for a brief period of time as a plain non-encrypted “thought”.
Not all ways of self-improvement are available to the boxed AI if it is not yet superintelligent and wants to hide its self-improvement from outside observers.
In which types of cells does most of the transposon damage happen? In stem cells? Other types of cells are recycled quickly. The same question arises about ROS.
Also, how does your theory explain the difference in life expectancy between different species?
A person who works on another vaccine told me that Sputnik (and other similar vector-based vaccines) generates something like 2000 random antibodies, and there is a chance that some of them will turn autoimmune and cause, say, encephalitis. Other types of vaccines generate antibodies not to the whole vector but only to the spike protein, something like 30 different ones, so there is less chance of an autoimmune reaction.
But most people do not know these considerations. However, they have observed how the government manipulated data during elections and the Olympic games, and they are sure that it will lie again; or they believe in the "Bill Gates chip".
BTW, my personal choice is Uber Black: I don't have a car, and I delegate driving to a specially trained person. Every time I take Comfort, I regret it, as I have near-miss accidents. It is relatively cheap in my area.
I knew two or three people who died in accidents: all of them were "reckless pedestrians". This supports your point about the ability of pedestrians to manage risks.
Can't find a link to statistics on accidents by car type.
Around half of the deaths from car accidents are pedestrians (maybe less in the US). By choosing not to drive, you increase your time spent walking and your chances of being hit by another person's car.
Other means of transport like cycling or buses are also risky.
Sitting at home is even more dangerous, as there are risks of depression and becoming overweight.
Finally, some cars are something like two to three orders of magnitude safer than others, if we look at the number of reported deaths per billion km driven. I once saw that the Toyota Prius had 1 death per 1 billion km, but the Kia Rio had 1 per 10 million km. Also, there are special racing cars which are reinforced from the inside and can roll safely.
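As an arithmetic check on the Prius/Rio figures above (taking the half-remembered numbers at face value):

$$\frac{10^{-7}\ \text{deaths/km (Rio)}}{10^{-9}\ \text{deaths/km (Prius)}} = 100$$

so that particular pair differs by two orders of magnitude.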
Wearing a helmet inside a car is also useful.
Waiting a few years for a self-driving Tesla Cybertruck may be an option.
OK. But what if there are other, more effective methods of coming to believe things which are known to be false? For example, hypnosis is effective for some people.
Placebo could work because it has some evolutionary fitness value, like the ability to stop pain when activity is needed.
Benevolent simulators could create an upper limit on subjectively perceived pain, like turning off the qualia while the screaming continues. This would be scientifically unobservable.
The ability to change the probability of future events in favor of your wishes is not proof of a simulation, because there is a non-simulation alternative where it is also possible.
Imagine that natural selection in the quantum multiverse worked in a direction that helped the survival of beings capable of influencing probabilities in a favorable way. Even the slightest ability to affect probabilities would give an enormous increase of measure, so anthropics favors you being in such a world, and such an anthropic shift may be even stronger than the shift in the direction of simulation.
In that case, it is perfectly reasonable to expect that your wishes (in your subjective timeline) will have a higher probability of being fulfilled. A technological example of such a probability shift was discussed in The Anthropic Trilemma by EY.
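A rough sketch of why even a tiny shift compounds (a toy assumption: $n$ independent binary survival events, with the shift raising each survival probability from $1/2$ to $1/2+\varepsilon$):

$$\frac{(1/2+\varepsilon)^{n}}{(1/2)^{n}} = (1+2\varepsilon)^{n}$$

which grows exponentially in $n$: even $\varepsilon = 0.01$ over $n = 1000$ events gives a measure ratio of about $1.02^{1000} \approx 4 \times 10^{8}$.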
The Doomsday argument and quantum immortality are both true, and this means that I will be the only survivor of a global catastrophe. Moreover, it will happen in a simulation.
Both the DA and QI could be tested in other fields. The DA was used by Gott to predict other things besides the end of the world. QI is the anthropic principle applied to the future. Aranyosi claimed that the DA and the Simulation argument cancel each other out, but actually they support each other: I live (or will live, because of QI) in a simulation which simulates a doomsday event with one survivor.
So we could create another intelligent species on Earth by combining selection and designed culture. Any risks?
We could test the Doomsday argument on other things, as Gott tested it on Broadway shows. For example, I can predict with high confidence that your birthday is not the 1st of January. The same is true for my own birthday, which is randomly selected from all the dates of the year. So despite my "who I am" axiom, my external properties are distributed randomly.
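The confidence level in the birthday example (ignoring leap years and assuming uniformly distributed birthdays):

$$P(\text{birthday} \neq \text{Jan 1}) = 1 - \frac{1}{365} \approx 99.7\%$$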
All that we know about x-risks tells us that the Doomsday argument should be true, or at least very probable. So we can't use its apparent falsity as an argument that some form of anthropic reasoning is false.