Hmm, I guess I think “something basically like hedonic utilitarianism, at least for downside” is pretty plausible.
Maybe a big difference is that I feel like I’ve generally updated away from putting much weight on moral intuitions / heuristics except with respect to forbidding some actions because they violate norms, are otherwise uncooperative, seem like the sort of thing which would be a bad societal policy, are bad for decision theory reasons, etc. So, relatively weak cases can swing me far because I started off being quite unopinionated without putting that much weight on moral intuitions (which feel like they often come from a source mostly unrelated to what I ultimately terminally care about).
I do agree that just directly using “Rethink Priorities says 15%” without flagging relevant caveats is bad.
A shitty summary of the case I would give would be something like:
It seems plausible we should be worried about suffering in a way which doesn’t scale (that much) with the size/complexity of brains in practice. Maybe the thing which is bad about suffering is pretty simple. E.g., as far as I can tell, the complexity of my thought doesn’t have huge effects on my suffering.
I think there is a case for some asymmetry between downside and upside with respect to complexity, at least in the regime of the biological brains we see in front of us.
If so, then maybe bees have the core suffering circuitry which causes the badness, and this circuitry is pretty similar to what humans have.
Then, we have to aggregate this with other arguments for humans being much more important. The aggregation is super non-obvious (and naive averaging isn’t valid due to two-envelope problems; see the toy numbers sketched below), but I feel like an intuition for being conservative about suffering points in favor of worrying about bee suffering if there is a chance it matters comparably to human suffering.
Overall, this doesn’t get me to 15%, more like 1% (with a bunch of the discount occurring in aggregation over different views), but 1% is still a lot. (This is all within the frame of the argument.)
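To make the two-envelope point concrete, here is a toy calculation (the credences and weights are made up for illustration, not numbers from the comment above): naively taking expectations over views gives contradictory answers depending on whether you hold human-units or bee-units fixed.

```python
# Toy illustration of the two-envelope problem for moral weights
# (made-up credences and weights, purely illustrative).

# Two views held with equal credence:
#   view A: a bee matters 1% as much as a human
#   view B: a bee matters as much as a human
credences = [0.5, 0.5]
bee_in_human_units = [0.01, 1.0]

# Naive expectation with humans as the fixed unit: how many "humans" is a bee worth?
avg_bee = sum(c * w for c, w in zip(credences, bee_in_human_units))

# Naive expectation with bees as the fixed unit: how many "bees" is a human worth?
avg_human = sum(c * (1.0 / w) for c, w in zip(credences, bee_in_human_units))

print(avg_bee)               # 0.505 humans per bee
print(avg_human)             # 50.5  bees per human
print(avg_bee * avg_human)   # ~25.5, not 1: the two averages contradict each other,
                             # so the answer depends on an arbitrary choice of unit.
```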
I can imagine different moral intuitions (e.g., intuitions more like Tomasik’s) that get you to more like 15% by weighting things somewhat differently. These seem a bit strong to me, but not totally insane.
In practice, the part of my moral views which is compelled by this sort of thing ends up focused on longtermism rather than insect welfare.
(I’m not currently planning on engaging further and I’m extremely sympathetic to you doing the same.)
I’ve generally updated away from putting much weight on moral intuitions / heuristics expect with respect to forbidding some actions because they violate norms, are otherwise uncooperative, seem like the sort of thing which would be a bad societal policy, are bad for decision theory reasons, etc.
I am repeatedly failing to parse this sentence, specifically from where it becomes italicized, and I think there’s probably a missing word. Are you avoiding putting weight on what moral intuitions expect? Did you mean except? (I hope someone who read this successfully can clarify.)
oops, I meant except. My terrible spelling strikes again.