Can the laws of physics/​nature prevent hell?

Once again, a warning of a “heavy” topic.

Since coming across the concepts of AI safety and s-risks about two months ago, my life has taken a complete turn. Before this, I was just focused on my survival. I always knew that we live in a world filled with suffering, but it’s not hell. There seem to be some safeguards that this world’s creator put in place.

a) The most obvious one is death: suffering-causing trauma eventually leads to it, so that suffering isn’t prolonged into eternity. There’s also suicide as a way out in extreme cases.

b) Many victims of violent deaths will even enter shock, drastically decreasing the amount of suffering they go through. Adrenaline also reduces suffering in these cases.

c) Shock is only present in mammals, which, along with other factors, makes me think that the more complex the brain, the greater the suffering. Suffering in insects is still debatable, and indeed it’s hard to fathom how it would compare, in order of magnitude, to that of humans, since the same applies to their consciousness. Also, some of the most vulnerable and abundant beings, like bacteria and plants, can’t suffer.

d) Certain emotions (or their absence) act as a deterrent. Non-human animals can’t be evil. Most humans will feel the suffering of others emotionally, and develop compassion.

e) The existence of medicines to be found in Nature, and more generally, the existence of solutions to most of our problems (until now, at least).

However, I see a very high probability of this balance (which is far from making this world an acceptable one, but again, it’s not hell) being broken by the coming technologies of this century, most importantly advanced AI.

Since there seems to be at least some order in this world that prevents it from falling into hell/​chaos, can we expect some laws of physics/​biology to effectively stop the advent of hell/​chaos? At least partially?

For instance, I hope that emulating consciousness in a computer is impossible (how would we sleep? how would we die? Consciousness seems to necessitate these things, among others). Or at least that the emulation would not carry the original identity, i.e. that death is certainly final.

I hope that immortality of any conscious being is impossible (here the wishful thinking of transhumanists comes into play… You WOULDN’T like to be immortal, believe me. Even 80-year-olds are tired of life in many cases. Death is as essential as sleep. And don’t get me wrong, I don’t feel good about this fact (I don’t wanna die), but it seems so nonetheless).

Maybe we can only be emulated in a computer made of atoms (the universe), and not in one contained within it, with a different structure and far less complexity. As Michio Kaku says (regarding the Simulation Hypothesis), “the most efficient computer that can simulate the Universe is the Universe.”

I hope that advanced nanotechnology is physically impossible (though here my hopes aren’t that high).

Another thing that gives me some hope is the Fermi Paradox. I mean, we already have people like Erik Lentz arguing that warp drives are physically possible, they just need a ton of energy… If so, then an advanced civilization wouldn’t take tens of millions of years to colonize the galaxy, but far less. But, as Fermi asked, where is everyone? Where are the warp drives, or at least the Dyson Spheres, the cosmic AIs, etc.?

Or maybe we’re just the first ones to get to technology and everything is indeed possible and chaos/​hell is indeed possible. Ouch...

There’s also the notion in this community that hell is rare among possible worlds. Could someone elaborate?

This is all pure speculation, obviously. My opinion is still that very bad things are possible, that not even 1% of us are aware of them (even in AI safety, most people just mention x-risk), and that we should start freaking out about this and taking action—because death/​extinction is “acceptable” (they don’t hurt (for long) and they’re inevitable); hell on Earth (and beyond) is not.

AGI alignment will be impossible within the currently expected timelines. The only hope, in my opinion, is convincing the world to stop all AI development (as well as nanotech development) until we’ve made these technologies provably safe. That is a very difficult task, but it’s tractable, while the former is not.