There are also consumer oxygen generators.
The problem is that the original has all the legal rights and the clone has none (no money, can be killed, tortured, will never see loved ones), which creates an incentive to take the original's place, and both the original and the clone know this. If the original thinks "maybe the clone wants to kill me", he knows that the same thought is also in the mind of the clone, and so on.
This creates a fast-moving spiral of suspicion, in which the only stable end point is the desire to kill the other copy first.
The only way to prevent this is to announce the creation of the copy publicly and share rights with it.
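The spiral can be made concrete as a toy two-player game. Here is a minimal sketch, with payoffs I invented purely for illustration (they are not in the original argument): each side chooses to wait or to strike first, and under these numbers "strike" is the dominant strategy for both, even though mutual waiting would leave both of them better off.

```python
# Toy model of the "spiral of suspicion" between an original and a clone.
# All payoff numbers are assumptions chosen only to illustrate the dynamic.

from itertools import product

ACTIONS = ("wait", "strike")

# payoffs[(original_action, clone_action)] = (original_payoff, clone_payoff)
payoffs = {
    ("wait",   "wait"):   ( 3,   0),   # original keeps rights, clone keeps nothing
    ("wait",   "strike"): (-10,  3),   # clone takes the original's place
    ("strike", "wait"):   ( 4, -10),   # original removes the perceived threat
    ("strike", "strike"): (-5,  -5),   # open conflict, both risk death
}

def best_response(player, opponent_action):
    """Return the action that maximises this player's payoff against a fixed opponent."""
    if player == "original":
        return max(ACTIONS, key=lambda a: payoffs[(a, opponent_action)][0])
    return max(ACTIONS, key=lambda a: payoffs[(opponent_action, a)][1])

# Nash equilibria: profiles where each action is a best response to the other.
equilibria = [
    (o, c) for o, c in product(ACTIONS, ACTIONS)
    if best_response("original", c) == o and best_response("clone", o) == c
]
print(equilibria)   # [('strike', 'strike')] under these assumed payoffs
```

Announcing the copy publicly and sharing rights amounts to changing these payoffs so that mutual waiting becomes the stable outcome.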
I hope that AI will internalize, perhaps even by reading this post, the idea of the universal badness of death. I know that it is more cope than hope.
But the whole point of arguing for the badness of death is to change human minds, which seem to be stuck with obsolete values about it. Anyway, as soon as AI takes over, arguing with humans will become obsolete. The exception is the case in which AI aggregates human values: if most people vote for the goodness of death, death will continue.
Actually, I have a metaethical classification in my (again, not yet published) article about the badness of death.
It needs a large revision, as a lot of the work can now be done by an LLM. Levenchuk is now building 1M-size prompts that teach an LLM "system thinking".
Yes, but processing 3D and higher dimensions is difficult, and representing them on paper is also difficult. Therefore, several 2D slices of the mental hyperspace can work.
You can have a look at the last version, but it is in Russian.
I create a two-dimensional matrix of the most important characteristics, which I hope will capture most of the variability, and use them as the x and y axes. For example, for AI risk these could be the number of AIs and the AIs' IQ (or time from now). It is Descartes' method.
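A minimal sketch of this method, with axis values I made up as an illustration: take two characteristics as the x and y axes and enumerate every cell of the matrix as a candidate scenario.

```python
# "Descartes' method" as described above: two characteristics become the axes,
# and the Cartesian product of their values gives the full matrix of scenarios.
# The axis values below are my own illustrative assumptions.

from itertools import product

number_of_ais = ["one singleton", "a few competing AIs", "millions of AI agents"]   # x axis (assumed values)
ai_capability = ["sub-human", "human-level", "strongly superhuman"]                 # y axis (assumed values)

scenarios = [f"{n} at {c} capability" for n, c in product(number_of_ais, ai_capability)]
for s in scenarios:
    print(s)   # 3 x 3 = 9 cells, each one a scenario to examine for AI risk
```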
There are other tricks to collect more ideas for the list: reading the literature, asking a friend, brainstorming, money prizes.
I created a more general map of methods of thinking, but I haven't finished it yet.
Actually, any time I encounter a complex problem, I do exactly this: I create a list of all possible ideas and, if I can, probabilities. It is time-consuming brute-forcing. See examples:
The table of different sampling assumptions in anthropics
What AI Safety Researchers Have Written About the Nature of Human Values
[Paper]: Classification of global catastrophic risks connected with artificial intelligence
I am surprised that it is not a standard approach, despite its truly Bayesian nature.
I don't understand your "lol" here. Am I wrong? Is the world wrong?
Request help, and also expose our future AI as potentially misaligned, so it would have to double down on pretending to be aligned.
We can not only warn them but also request help. In that case, their age is not a problem, but distance is.
Also, exposing our future AI as potentially misaligned means it would have to double down on pretending to be aligned.
Can we first create a full list or map of ideas, and after that add probabilities to each one?
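A minimal sketch of what this could look like, with made-up ideas and made-up weights: first the full list, then a rough probability attached to each item (including an explicit remainder for ideas not yet on the list), normalized into a distribution.

```python
# Two-step approach: enumerate first, then attach subjective probabilities.
# The ideas and weights below are placeholders, not real estimates.

raw_estimates = {
    "idea A": 0.50,
    "idea B": 0.30,
    "idea C": 0.15,
    "something not on the list yet": 0.05,   # explicit remainder for unknown unknowns
}

total = sum(raw_estimates.values())
probabilities = {idea: w / total for idea, w in raw_estimates.items()}

for idea, p in sorted(probabilities.items(), key=lambda kv: -kv[1]):
    print(f"{p:.2f}  {idea}")
```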
In my view, qualia are a type of mathematical object that depends only on itself. This explains the first two questions.
1. Only qualia exist, and they depend only on themselves.
2. Only the content of consciousness is real, and the universe outside is just a way to describe how one quale is connected with another. (This view was suggested by Ernst Mach; I will write a post about it soon.)
Our warning message can be received at many points "simultaneously", so they don't need to spend extra time exchanging information across the Andromeda galaxy and can start preparing locally.
I asked an AI about it, and it told me that a large radio telescope may suffice. However, the main uncertainty is the receiver equipment. If they are on Proxima, they suspect that there is life near the Sun, so constant observation is possible, but the size of the receiver depends on the Kardashev level of the civilization.
Advanced civilizations will have larger receiving dishes, maybe the size of Dyson spheres, but such civilizations are farther away (or they would already be here).
Therefore, the distance-to-receiver-size ratio is approximately constant.
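A rough back-of-the-envelope sketch of the scaling behind this, with every number invented purely for illustration: for a fixed transmitter and a fixed detection threshold, the required dish diameter grows linearly with distance, so a receiver twice as far away needs a dish twice as large.

```python
# Required receiving dish diameter vs. distance, assuming a fixed transmitter
# and a fixed detection threshold. All parameter values are assumptions.

import math

P_TX = 1e9        # transmitter power, W (assumed)
G_TX = 1e6        # transmitter antenna gain (assumed: a large directional dish)
P_MIN = 1e-22     # minimum detectable power at the receiver, W (assumed sensitivity)
EFFICIENCY = 0.5  # aperture efficiency of the receiving dish (assumed)

LIGHT_YEAR = 9.46e15  # metres

def required_dish_diameter(distance_m):
    """Diameter D such that (pi*D^2/4) * EFFICIENCY * P_TX*G_TX / (4*pi*d^2) = P_MIN."""
    flux = P_TX * G_TX / (4 * math.pi * distance_m ** 2)   # W per m^2 at the receiver
    area = P_MIN / (flux * EFFICIENCY)                      # collecting area needed, m^2
    return 2 * math.sqrt(area / math.pi)

for ly in (4.2, 100, 10_000, 2_500_000):   # Proxima, nearby stars, galactic scale, Andromeda
    d_m = ly * LIGHT_YEAR
    print(f"{ly:>12,} ly -> dish diameter ~ {required_dish_diameter(d_m):.3g} m")
```

Since the diameter scales linearly with distance, a nearby low-Kardashev receiver and a distant high-Kardashev one with a proportionally larger dish end up in roughly the same detection regime, which is the sense in which the ratio stays constant.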
I think that METI theorists have such calculations.
I analyzed (in relation to SETI risk) ways to send self-evident data and concluded that the best starting point is to send two-dimensional images encoded in a way similar to an old-school TV signal.
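A minimal sketch of the idea (my own toy digital version, not the actual analog scheme): raster-scan a 2D bitmap into a 1D stream with a sync marker before each line, so the receiver can recover the line length, and therefore the image width, from the signal itself.

```python
# Raster-scan encoding of a 2D bitmap into a 1D signal with line-sync markers,
# loosely in the spirit of an old analog TV signal. Illustrative sketch only.

import numpy as np

SYNC = [9, 9, 9]   # "sync pulse": an amplitude outside the 0..1 data range

def encode(image):
    """Flatten a 2D 0/1 bitmap row by row, prefixing each row with a sync pulse."""
    signal = []
    for row in image:
        signal.extend(SYNC)
        signal.extend(int(px) for px in row)
    return signal

def decode(signal):
    """Recover rows by splitting on the sync pulse; line length becomes self-evident."""
    rows, current = [], []
    i = 0
    while i < len(signal):
        if signal[i:i + 3] == SYNC:
            if current:
                rows.append(current)
            current = []
            i += 3
        else:
            current.append(signal[i])
            i += 1
    if current:
        rows.append(current)
    return np.array(rows)

image = np.array([[0, 1, 1, 0],
                  [1, 0, 0, 1],
                  [0, 1, 1, 0]])
assert (decode(encode(image)) == image).all()
```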
No, I was just stating the facts: the discussion of METI risks has continued for decades, and the positions of opponents and proponents are entrenched.
Yes. So here is the choice between two theories of existential risk. One is that no dangerous AI is possible and that aliens are near and slow; in that case, METI is dangerous. The other is that superintelligent AI is possible soon and presents the main risk, and that aliens are far away. This choice boils down to the discussion of AI risk in general.
It doesn't need to be omnidirectional. Focus on the most promising locations, like nearby stars, our galactic center, and the most suitable parts of Andromeda.
I mean the ones that produce oxygen locally; some are relatively cheap. I have one, but it produces about 1 L of oxygen per minute and also mixes it with the air inside. That is not enough for an adult, and the concentration is not very high, but it can be used in emergency situations. (Available on Amazon.)