Given human brains as they are now, I agree that highly positive outcomes are more complex, that the utility of a maximally good life is smaller in magnitude than the disutility of a maximally bad life, and that there is no life good enough that I’d take a 50% chance of torture.
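To make that last claim precise (a formalization I'm adding, not from the original, with the status quo normalized to zero utility): refusing the gamble for every possible life is equivalent to

$$\tfrac{1}{2}\,U(\text{best life}) + \tfrac{1}{2}\,U(\text{torture}) < 0 \;\Longleftrightarrow\; U(\text{best life}) < -U(\text{torture}) = \lvert U(\text{torture}) \rvert,$$

i.e. even the best life's utility never exceeds the torture's disutility in magnitude, which is just the asymmetry stated above.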
But would this apply to minds in general (say, a random mind, or one not too different from a human’s)?