The arguments you give seem to be more about disvalue being more salient than (positive) value, but there are many different forms of disvalue and value, and valence is only one of them. For example, risk aversion (applying a strategy that is more conservative/cautious than naïve expected-value calculations would justify) is also a case of disvalue being more salient than value, but it need not cash out in terms of pleasure/pain.
(Possibly a tangent:) And then there’s the question of whether, even if you accept the general NU assumptions, the balance being skewed toward the disvalue side is some deep fact about minds, or just a “hardware issue”: we could configure our (and/or other animals’) biology so that things are more balanced, or even completely skewed toward the positive side, to the extent that we can do this without sacrificing functionality, as David Pearce argues. (To be clear: I’m not advocating for this, at least not that radically/fanatically, and not as a sole objective/imperative.)