Yes, I counted visual inputs at their maximum rate, but my conscious decision-making based on them is slower and equal to yours. I chose visual inputs because they produce the most obvious type of qualia – visual qualia, like red.
avturchin
One argument against biological naturalism is that consciousness is quick. I can have around 40 experiences per second (observer-moments). Most biological processes, like blood flow and gene expression, are much slower. Only electric discharges of neurons and some processes in synapses are quick enough. This mostly means that an electric model of neuron connections will likely be conscious.
What about writing down his memories? Even if digital immortality fails, his children may be interested in them.
Eternal Journey in the Space of Possible Minds solves Copies Problem
How much does it cost?
How much damage would there be if the body temporarily returned to room temperature?
What do you think about other methods of chemical preservation which do not require cryogenics?
I wrote a 700-page book, Immortality, that mostly covers the same topic. But I am afraid it is too long, nobody will read it, and it is also in Russian. Therefore, I created a GPT agent-guide that explains the content of the book.
This is what the agent-guide said about eternal dictatorship in my book:
If you mean the common objection “if people stop dying, dictators will rule forever”, the book treats “eternal dictatorship” as a rhetorical scare story, not a real argument for keeping death.
Two key points it makes:
• It’s a false “only one solution” move. The objection assumes death is the only way to prevent bad political outcomes (dictatorship, stagnation, etc.). But even if dictatorship is a risk, there are many other levers besides killing everyone on a timer: institutions, term limits, checks and balances, decentralization, transparency tech, enforcement, even exile or incarceration. So “we need death” doesn’t follow.  
• “Eternal dictatorship” is often an exaggeration tactic. The text explicitly lists constructs like “eternal dictatorship / infinite boredom / eternal hell” as cases where critics inflate X to infinity while keeping death constant, to make death look “necessary.” There’s also a very practical aside: even in a world without aging death, dictators don’t automatically get infinity; they still face coups, replacement, accidents, and competition. The book even remarks that “if there were no death from aging, dictators would rule only ~4 years longer on average, because they’re often overthrown.”
If you tell me whether you meant this as a philosophical objection (“immortality → tyranny”) or a policy worry (“how do we prevent immortal rulers?”), I can point you to the most relevant chapter/section in the text and summarize that part.
- that it can be felt in one unique way defined exactly by that causal structure and nothing else,
Causal structure can’t define qualia: causal structure is colorless, while qualia have qualitative differences. This assumes the existence of some table of correspondence between causal structures and qualia. Such a table would be outside the qualia structure but would affect experience. Also, causal structure is an abstraction, as the whole set of particles in my past light cone affects me.
- that any such structure can be felt
Here the problem is panpsychism and the enormous number of Boltzmann brains it creates in any stone.
- that it can not be felt outside,
If we define something as how it feels from inside, this implies that the something could also be felt from outside. But everything we feel is what we feel, so it is felt from inside. Another point here is that I can’t feel another person’s or object’s closed causal structure. But here is the problem: if I read a book, my causal structure expands beyond my brain to the author’s brain. Why don’t I feel his feelings if we are just one causal structure?
- that causal structures separate from the light cone exist at all,
- that the “inside” of a causal structure is timeless.
The problem here is that if we wanted to say that consciousness is how the light cone feels itself, we would say this; but that is a much stronger claim, and typically we assume some closed causal structures inside the brain, like chains of neuron firings.
Any observer-moment has all its qualia presented simultaneously, so it is timeless. However, causal processes are serial in time. So either I can feel past moments (absurd), or the experience is distributed timelessly inside some empty space in the causal process (also absurd).
Microrant: Defining consciousness as “how causal structures feel from inside” is tautological and also has some hidden assumptions:
- that it can be felt in one unique way defined exactly by that causal structure and nothing else,
- that any such structure can be felt,
- that it can not be felt outside,
- that causal structures separate from the light cone exist at all,
- that the “inside” of a causal structure is timeless.
It still fails to explain why my red is exactly this type of red.
Finally, it is tautological because consciousness is, by definition, something which I feel from inside.
A strong emotional reaction in this case was expected and is not in itself bad.
Parents are the most difficult part of human recreation, based on my short experience in the field. In most cases, they oppose such experiments, and they also hold legal rights to personal data as well as access to the needed private documents and memories. Igor’s mother is the first person who agreed. I can’t tell more about her.
I agree that current versions of mind models do not have qualia in the same form as humans, and their self-reporting of emotion is not evidence of real emotional experience. The main reason for this is that the phenomenological structure is different: a human gets ~20 audio-video-body experiences per second, while the sideload writes a text about the last few minutes and doesn’t get that data structure (though I am working on this).
However, from what I have read, writers can have very strong empathy for their characters:
“While writing the scene of Emma Bovary’s suicide, Gustave Flaubert reportedly experienced psychosomatic symptoms of arsenic poisoning. He famously claimed to have the distinct taste of poison in his mouth and suffered actual bouts of vomiting while working. This intense physical reaction highlights his total immersion in the character, famously summarized by his quote, ‘Madame Bovary, c’est moi.’” (I used AI to write this paragraph.)
According to my understanding and observations, a sideload is a pair of a real human and a chatbot, and the human “donates” her qualia-generating ability to this pair via empathy. This is not AI psychosis (except in extreme cases); in some functional sense it is close to Freudian transference, in which a patient puts his feelings onto the psychoanalyst, and which is a necessary step in therapy.
Recreation of EA-Pioneer Igor Kiriluk
If there is a topic on which a person has decided never to speak publicly, for example because of reputation risks, is that strategic?
As I said in another comment: if we apply the same argument to euthanasia, we would need QI. In the case of euthanasia, the chance of misfiring is extremely small, like 0.00001%, and normal utility calculation doesn’t work. But QI updates any arbitrarily small probability of misfiring to 1.
But if we apply the same argument to euthanasia, we need QI. In the case of euthanasia, the chance of misfiring is extremely small, like 0.00001%, and normal utility calculation doesn’t work. But QI updates any arbitrarily small probability of misfiring to 1.
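The update described here can be made explicit as a one-line conditional probability: if a euthanasia attempt either succeeds (no further experience) or misfires (survival), then conditioning on having any further experience leaves only the misfire branch. A minimal sketch, assuming the 0.00001% figure from the comment; the variable names are illustrative:

```python
# Two outcomes of the attempt: success (no further experience)
# or misfire (survival with further experience).
p_misfire = 1e-7  # 0.00001%, as in the comment above

# Ordinary expected-utility view: the misfire branch is negligible.
p_experience_misfire_normal = p_misfire

# Quantum-immortality view: the only branches you can go on to
# experience are survival branches, which here are exactly the
# misfire branches. Conditioning on survival:
p_survive = p_misfire  # success branches contain no experiencer
p_experience_misfire_qi = p_misfire / p_survive

print(p_experience_misfire_normal)  # 1e-07
print(p_experience_misfire_qi)      # 1.0
```

The last line is the point: the conditional probability equals 1 no matter how small `p_misfire` is, which is what "QI updates any arbitrarily small probability of misfiring to 1" means.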
There is no need for infinite survival in this argument against suicide. A large share of suicide attempts end in serious injuries which also prevent the person from making further attempts. My guess is around 10 percent (I don’t want to use AI for this).
For example, I read about a boy who shot himself in the head but missed, destroying both his eyes. This means he will suffer for the rest of his life but will be unable to attempt suicide again.
The vibe shift is coming, and it’s going to be very, very sudden. https://x.com/NPCollapse/status/2024553259407384748
You forget flutrackers and Peak Oil.
I had a similar idea, which I formulated as: most observers live in non-perfectly fine-tuned universes, which implies a smaller density of observers per unit of space. In the same way, the biggest part of the Sun’s mass is located not in the region of highest density at its center but in less dense regions.
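The Sun analogy can be checked with a toy calculation: even when density is highest at the center, the volume of a spherical shell grows as r², so most mass can sit in less dense regions. A minimal sketch with an assumed, purely illustrative exponential density profile (not a real solar model; all names and numbers are assumptions):

```python
import math

# Toy model: density falls off exponentially from the center,
# rho(r) = exp(-r / r_scale), with radius normalized to [0, 1].
def shell_mass(r, dr, r_scale=0.25):
    """Mass of a thin spherical shell: rho(r) * 4*pi*r^2 * dr."""
    return math.exp(-r / r_scale) * 4 * math.pi * r**2 * dr

n = 10_000
dr = 1.0 / n
masses = [shell_mass((i + 0.5) * dr, dr) for i in range(n)]
total = sum(masses)

# Fraction of total mass inside the densest inner 20% of the radius
inner_frac = sum(masses[: n // 5]) / total

# Radius of the shell contributing the most mass
peak_r = (masses.index(max(masses)) + 0.5) * dr

print(f"mass inside inner 20% of radius: {inner_frac:.1%}")
print(f"peak shell-mass radius: {peak_r:.2f}")
```

With this profile, only a few percent of the mass lies in the densest inner fifth of the radius, and the shell contributing the most mass sits around half the radius, which is the pattern the analogy relies on.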
My answer to Q1: if you don’t want to live forever, you could be divided into two parts, one which wants to and another which doesn’t, and the one which doesn’t can be terminated. The real problem, however, is that because of quantum immortality death is impossible, the future seems to be dominated by bad immortality, and terminating your life will increase your chances of ending up in the bad-immortality timeline.
Q2 - the real problem is mind aging, which is not the same as brain aging: it is the accumulation of knowledge, bad memes, and general disillusionment. The problem has never been formally posed, but I hope it can be solved with the help of advanced AI.
Q4 - Life remains interesting if you continue to grow and evolve. Therefore, immortality without becoming a god is a complete waste of time. The road to becoming a god is almost infinitely long (in subjective time, at least).
I thought it was 100K?