Unethical Human Behavior Incentivised by the Existence of AGI and Mind Uploading

I need help getting out of a logical trap I’ve found myself in after reading The Age of Em.

Some statements needed to set the trap:

If mind uploading is possible, then a mind can theoretically exist for an arbitrary length of time.

If a mind is contained in software, it can be copied, and therefore can be stolen.

An uploaded mind can retain human attributes indefinitely.

Some subset of humans are sadistic jerks, and many of these humans hold temporal power.

All humans, under certain circumstances, can behave like sadistic jerks.

Human power relationships will not simply disappear with the advent of mind uploading.

Some minor negative implications:

Torture becomes embarrassingly parallel.

US states with the death penalty may adopt ‘death plus simulation’ (execution followed by continued punishment of an uploaded copy) as a penalty for some offenses.

The trap:

Over a long enough timeline, the probability of a copy of any given uploaded mind falling into the power of a sadistic jerk approaches unity. Once an uploaded mind has fallen under the power of a sadistic jerk, there is no guarantee that it will ever be ‘free’, and the quantity of experienced suffering could be arbitrarily large, due in part to the embarrassingly parallel nature of torture: a captor can multiply the suffering simply by running more copies of the captive mind.
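
To see why the probability approaches unity, here is a minimal sketch. Assume (this is my toy model, not an argument from The Age of Em) that each surviving copy faces some small, independent probability p > 0 of capture per year. Then

P(captured within t years) = 1 − (1 − p)^t,

which tends to 1 as t grows without bound, for any positive p. And because torture parallelizes, a captor running N simultaneous copies for a duration T inflicts on the order of N × T copy-years of suffering, with no bound on N beyond available hardware.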

Therefore! If you believe that mind uploading will become possible in a given individual’s lifetime, the most ethical thing you can do, from the utilitarian standpoint of minimizing aggregate suffering, is to ensure that the person’s mind is securely deleted before it can be uploaded.

Imagine the heroism of a soldier who, faced with capture by an enemy capable of uploading minds and willing to parallelize torture, spends his time ensuring that his buddies’ brains are unrecoverable, at the cost of his own capture.

I believe that mind uploading will become possible in my lifetime, so please convince me that running through the streets with a blender, screaming for brains, is not an example of effective altruism.

On a more serious note, can anyone else think of examples of really terrible human decisions that would be incentivised by the development of AGI or mind uploading? This problem appears related to AI safety.