The classic heuristics and biases literature is about things like the planning fallacy; it has very little to say about intuitions about human value, which is more in the domain of experimental moral philosophy.
Fair point, though I do think it provides at least weak evidence in this domain as well. That said, there are other, perhaps more salient, examples where intuitions about human value can be very wrong in the moment: addictions and buyer’s remorse come to mind.
I’m willing to entertain this as a hypothesis, although I’d be extremely sad to live in this world. I appreciate your willingness to stick up for this belief; I think this is exactly the kind of getting-past-blindspots thing we need on the meta level even if I currently disagree on the object level.
Thanks!
So as I mentioned in another comment, I think basically all of the weird positions described in the SSC post on EAG 2017 are wrong. People worrying about insect suffering or particle suffering seem to me to be making philosophical mistakes, and to the extent that those people are setting agendas, I think they’re wasting everyone’s time and attention.
I agree that these positions are mistakes. That said, I have three replies:
1. I don’t think the people making these sorts of mistakes are setting agendas or important policies. A few small organizations are concerned with these matters, but as far as I can tell they are not taken particularly seriously outside a small contingent of hardcore supporters.
2. I worry that similar arguments can be applied just as easily to any weird area, including ones that may be valid. I personally think AI alignment considerations are quite significant, but I’ve often seen people say things I would parse as “being worried about AI alignment is a philosophical mistake,” for instance.
3. It’s not clear to me that the “embodied” perspective you describe offers especially useful clarification on these issues. Perhaps it does, and I’m simply too unskilled with the approach to see it? Like you, I think insect suffering and particle suffering are mistaken concepts that shouldn’t be taken seriously, but I don’t feel I need an embodied perspective to realize that.