I think you mean “determinism being false...”, the rest of your comment makes sense in that context.
In which case I think you’re saying that if determinism is false, libertarian free will would be possible. And since that’s true, when I suggest that we should define free will in relation to our (lack of) knowledge about the world, I’m dismissing the possibly better definition given by a libertarian free will perspective.
Is that right?
If so, I think that’s right. I do think there are arguments against libertarian free will that hold even if determinism is false, but I don’t make any such arguments in the post; it doesn’t address the validity of libertarian free will at all, and to the extent that I want to make a positive claim with the piece, that is probably a flaw. I’ll consider making a minor edit to the substack version of the article that at least mentions this, though I probably won’t try to make the full argument against libertarian free will, as the piece is already long enough as it is.
Thanks for pointing this out, I did legitimately miss that.
(And if I misunderstood your point and you were saying something else please let me know!)
One reason to think that bee suffering and human suffering are comparably important (within one or two orders of magnitude) is just that suffering is suffering. When you feel pain you don’t really feel much besides pain; when it’s intense enough you can’t experience much other than the pain, you can’t think clearly, you can’t do all of the cognitive things that seem to separate us from bees, you just experience suffering in some raw form, and that seems very bad. If we imagine that a bee’s suffering is something like this, it seems bad in a similar way to human suffering.
But one (not the only) issue here is that this way of viewing human suffering treats the human mind as a discrete entity. There is one individual who is suffering, there is one bee which is suffering, and these seem like comparable things.
I don’t think that’s a reasonable model of the mind. Instead, there are many separate but interconnected parts of the mind, all of them suffering when we are in pain. The bee, by nature of being a simpler creature, has a mind made up of many fewer such parts, and thus there are just fewer beings who are suffering in this way when a bee suffers than when a human does.
Of course, these separate parts of the mind integrate into a larger whole, but that doesn’t make them not present. And I think noticing that the mind is made up of many distinct parts gives a better intuitive picture of what a person is than thinking of us as discrete entities does. If we take this picture seriously, it justifies a moral distinction (not of kind but of quantity) between more complex and less complex beings: roughly, a human mind is made up of more ‘people’ than a bee’s mind is. This in turn justifies ideas like treating neuron count as morally important.
Again, the separate agents within the mind interact and merge to create a larger emergent entity, yet there remain distinctions between them which should make us think that treating a human as a single agent and a bee as a single agent on par with them is misguided.