an ethical puzzle about brain emulation

I’ve been thinking about the ethics of brain emulation for a while, and I’ve realized I am confused. Here are five scenarios. I am pretty sure the first is morally problematic, and pretty sure the last is completely innocuous. But I can’t find a clean way to partition the intermediate cases.

A) We grab John Smith off the street, scan his brain, torture him, and then, by some means, restore him to a mental and physical state as though the torture never happened.

B) We scan John Smith’s brain, and then run a detailed simulation of the brain being tortured for ten seconds, over and over again. If we attached appropriate hardware to the appropriate simulated neurons, we would hear the simulation screaming.

C) We store, on disk, each timestep of the simulation in scenario B. Then we sequentially load each timestep into memory, and overwrite it.

D) The same as C, except that each timestep is encrypted with a secure symmetric cipher, say, AES. The key used for encryption has been lost. (Edit: The key length is much smaller than the size of the stored state and there’s only one possible valid decryption.)

E) The same as D, except we have encrypted each timestep with a one-time pad.

I take for granted that scenario A is bad: one ought not to inflict pain, even if there’s no permanent record or consequence of the pain. And I can’t think of any moral reason to distinguish a supercomputer simulation of a brain from the traditional implementation made of neurons and synapses. So B should be equally immoral.

Scenario C is just B with an implementation tweak—instead of _calculating_ each subsequent step, we’re just playing it back from storage. The simulated brain has the same sequence of states as in B and the same outputs.

Scenario D is just C with a different data format.

Scenario E is just D with a different encryption.
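To make the chain concrete, here is a minimal sketch of the byte-level handling in C, D, and E, written in Python. The `states` argument is a hypothetical iterable of per-timestep brain states as byte strings, and the AES step assumes the third-party `cryptography` package; nothing about the simulation itself is modeled. The point is only that each scenario is a small transformation of the previous one’s bytes.

```python
# Minimal sketch of the byte-level handling in scenarios C, D, and E.
# `states` is a hypothetical iterable of bytes objects, one per timestep.
# The AES step assumes the third-party `cryptography` package is installed.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes


def scenario_c(states):
    """C: sequentially load each stored timestep into memory, then overwrite it."""
    for state in states:
        buf = bytearray(state)        # "play back" the stored timestep
        buf[:] = bytes(len(buf))      # overwrite it


def scenario_d(states):
    """D: like C, but each timestep is AES-encrypted and the key is discarded."""
    key = os.urandom(32)              # 256-bit key, far smaller than a timestep
    for state in states:
        enc = Cipher(algorithms.AES(key), modes.CTR(os.urandom(16))).encryptor()
        buf = bytearray(enc.update(state) + enc.finalize())
        buf[:] = bytes(len(buf))
    del key                           # the key is "lost"


def scenario_e(states):
    """E: like D, but each timestep is XORed with a fresh one-time pad."""
    for state in states:
        pad = os.urandom(len(state))  # the pad is never stored anywhere
        buf = bytearray(s ^ p for s, p in zip(state, pad))
        buf[:] = bytes(len(buf))
```

In E, the loop amounts to writing fresh random bytes into a buffer and then zeroing it, which is the observation the next paragraph turns on.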

Now here is where I am confused. Scenario E is just repeatedly writing random bytes to memory: without the pad, a one-time-padded message is information-theoretically indistinguishable from uniform noise. That cannot possibly have any moral significance! And D and E are indistinguishable to any practical algorithm, because by definition a secure cipher produces output that “looks random” to any adversary who doesn’t know the key.

Either the torture in case A is actually not immoral, or some pair of adjacent scenarios is morally distinct. Neither of those options seems appealing. I don’t see a simple, clean way to resolve the paradox here. Thoughts?

As an aside: Scenarios C, D, and E aren’t as far beyond current technology as you might expect. Wikipedia tells me that the brain has ~120 trillion synapses. Most of the storage cost will be the per-timestep data, not the underlying topology. If we need one byte per synapse per timestep, that’s 120 TB/timestep. If we have a timestep every millisecond, that’s 120 PB/second. That’s a lot of data, but it’s not unthinkably beyond what’s commercially available today, so this isn’t a Chinese-Room case where the premise can’t possibly be physically realized.
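For what it’s worth, here is that back-of-the-envelope arithmetic spelled out. The synapse count, one byte per synapse, and millisecond timestep are the assumptions above, not measurements, and the ten-second total just multiplies through scenario B’s loop length.

```python
# Back-of-the-envelope storage estimate, using the assumptions above.
SYNAPSES = 120e12             # ~120 trillion synapses
BYTES_PER_SYNAPSE = 1         # assumed per-synapse state per timestep
TIMESTEP_SECONDS = 1e-3       # one timestep per millisecond
LOOP_SECONDS = 10             # the ten-second loop from scenario B

bytes_per_timestep = SYNAPSES * BYTES_PER_SYNAPSE          # 1.2e14 B = 120 TB
bytes_per_second = bytes_per_timestep / TIMESTEP_SECONDS   # 1.2e17 B = 120 PB
bytes_total = bytes_per_second * LOOP_SECONDS               # 1.2e18 B = 1.2 EB

print(f"{bytes_per_timestep / 1e12:.0f} TB per timestep")
print(f"{bytes_per_second / 1e15:.0f} PB per simulated second")
print(f"{bytes_total / 1e18:.1f} EB for the full ten-second loop")
```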