Thanks for all these clarifications; sorry if I came off as too harsh.
“Yes, so would I! Again, when it is a personal informed choice, the situation is entirely different.” -- It seems to me like in the case of the child (who, having not been born yet, cannot decide either way), the best we can do is guess what their personal informed choice would be. To me it seems likely that the child might choose to trade off a bit of happiness in order to boost other stats (relative to my level of happiness and other stats, and depending of course on how much that lost happiness is buying). After all, that’s what I’d choose, and the child will share half my genes! To me, the fact that it’s not a personal choice is unfortunate, and I take your point -- forcing /some random other person/ to donate to EA charities would seem unacceptably coercive. (Although I do support the idea of a government funded by taxes.) But since the child isn’t yet born, the situation is intermediate between “informed personal choice” and “coercing a random person.” In this intermediate situation, I think choosing based on my best guess of the unborn child’s future preferences is the best option, especially since it’s unclear what the “default” choice should be: selecting for IQ, selecting against IQ, and leaving IQ alone (going with whatever levels of IQ and happiness are implied by the genes of me and my partner) all seem to have an equal claim to being the default. The exception would be if I thought my current genes were already shaped by evolution to sit at the optimal tradeoff point, which (considering how much natural variation there is among people, and the fact that evolution’s values are not my values) seems unlikely to me.
Agreed that it is possible that higher IQ --> less happiness for most people / on average, though that strikes me as unlikely. It would be great to see more research that examines this more closely and from various angles.
And totally agreed that this would be a tough tradeoff to make either way, and that selecting for emotional stability and happiness alongside IQ would be a high priority if I were doing this myself.
I agree with all these considerations, and that the choice is not straightforward. It gets even more complicated when one goes deeper into the weeds of J.S. Mill’s version of utilitarianism. I guess my original point, expressed less radically, is that the assumption that higher IQ is automatically better is far from obvious.