“It is unethical to donate to effective-altruist charities, since giving away money will mean that your life becomes less happy.”
Oh, come on. This is an informed personal choice, not something your parents decided for you. Why would you even put the two together?
Your logic would seem to go beyond “don’t use embryo selection to boost IQ, have kids the regular way instead”.
I said or implied nothing of the sort! Maybe you can select for both intelligence and emotional stability, I don’t know. Just don’t focus on one trait and assume it is an indisputable good.
It also seems like the child’s preferences would matter to this situation. For instance, personally, I am a reasonably happy guy; I wouldn’t mind sacrificing some of my personal life happiness in order to become more intelligent.
Yes, so would I! Again, when it is a personal informed choice, the situation is entirely different.
To be clear, I also dispute the idea that more intelligence --> less happiness.
That may well be. I linked a study to that effect; it might be wrong, or might not replicate. But you don’t get to discard evidence just because you do not like it.
Thanks for all these clarifications; sorry if I came off as too harsh.
“Yes, so would I! Again, when it is a personal informed choice, the situation is entirely different.” -- It seems to me like in the case of the child (who, having not been born yet, cannot decide either way), the best we can do is guess what their personal informed choice would be. To me it seems likely that the child might choose to trade off a bit of happiness in order to boost other stats (relative to my level of happiness and other stats, and depending, of course, on how much that lost happiness is buying). After all, that’s what I’d choose, and the child will share half my genes!

To me, the fact that it’s not a personal choice is unfortunate, and I take your point—forcing /some random other person/ to donate to EA charities would seem unacceptably coercive. (Although I do support the idea of a government funded by taxes.) But since the child isn’t yet born, the situation is intermediate between an informed personal choice and coercing a random guy. In this intermediate situation, I think choosing based on my best guess of the unborn child’s future preferences is the best option.

Especially since it’s unclear what the “default” choice should be—selecting for IQ, selecting against IQ, and leaving IQ alone (going with whatever levels of IQ and happiness are implied by the genes of me and my partner) all seem to have an equal claim to being the default. Unless I thought that my current genes were shaped by evolution to sit at the optimal tradeoff point already, which (considering how much natural variation there is among people, and the fact that evolution’s values are not my values) seems unlikely to me.
Agreed that it is possible that higher IQ --> less happiness for most people, or on average, even though that strikes me as unlikely. It would be great to see more research that looks at this more closely and in various ways.
And totally agreed that this would be a tough tradeoff to make either way; selecting for emotional stability and happiness alongside IQ would be a high priority if I were doing this myself.
I agree with all these considerations, and that the choice is not straightforward. It gets even more complicated once one goes deeper into the weeds of J.S. Mill’s version of utilitarianism. I guess my original point, expressed less radically, is that assuming higher IQ is automatically better is far from obvious.