I think the fact that it feels like you need to make it longer in order to make it clearer is a sign that the concept you’re trying to express isn’t yet in a natural form. Maybe it’s simply hard to express in English; such things do occur, but it seems like a bad sign to me. If you want to improve clarity, I’d suggest focusing on not making your explanation any longer than this one, while making it more grounded and specific.
I think many people feel social pressures strong enough to make it challenging for them to think rationally about ethics. A site full of very intelligent, rational, reasonable, positively altruistic people seems to suddenly start manifesting mental blocks whenever the subject comes up: misunderstanding clearly written sentences, accusing me of meaning the exact opposite of what I wrote because they don’t like my conclusion, even mass down-voting all over the place. It’s an emotion-laden topic with social expectations attached. I’m still trying to figure out how to get people out of that mindset and into treating it like a regular scientific or engineering problem. All we’re doing is writing the software for a society, with the fate of our species depending on not screwing it up. I’d rather like us to get this right. So, I strongly assume, would everyone else here.
Unfortunately a number of widely accepted viewpoints, like “whoever proposes the larger moral circle is automatically more virtuous and thus wins the argument”, popular in academia and on the Internet, give answers that will, simply and obviously if you actually think about it for even a few minutes, kill us all (where, to be clear, because on this topic I have to be, by “us” I mean “the human species”). [And don’t even get me started on “anyone who even seriously considers comparing the likely societal outcomes of different ethical systems (for any purpose other than straw-man rhetorical disapproval of one of them) is clearly unprincipled”…]

Seriously: find a way to give ants equal individual moral weight to that of humans that does not automatically kill us all if implemented by an ASI. Think about it as a math problem rather than a moral problem, for even a few minutes: write it down in symbols and then pretend the symbols mean something other than ants and humans, that they’re just A and H. Ants are super-beneficiaries; they require vastly fewer resources per individual than we do, so the ants get everything, we humans all starve and are eaten by ants, end of analysis. It’s two lines of basic algebra that a ten-year-old could do: all you need is division, ratios, and comparisons (a worked version with actual numbers is sketched below, just before this rant closes).

But when I point out that this means we need a better solution than the human-instinctive default assumption of equal moral weight, blindly scaled up from the tribe to every last sentient (not just sapient) being or virtual persona of one, suddenly a bunch of people who don’t want to hear a word against animal welfare become positively un-Rationalist. I care about animals too. I own cats, or they own me. They’re fluffy and cute.

I spent several years actually thinking about how to give rights to an entire ecosystem, and also a human population, on the same planet run by an ASI, without either simply outweighing the other. I eventually managed it. It’s not actually impossible, but it is really, genuinely, amazingly complicated, and people sticking their fingers in their ears and assuming I must hate animals, or must hate AIs, because I’m not echoing the standard party line on ethics… makes me want to write very lengthy posts explaining my views very clearly, with lots of reasoning and worked examples. I wrote an entire sequence of them a few years ago. (For example, I understand exactly where the human moral intuition of fairness came from, and why we instinctively believe that within a community moral weights should be equal: it’s a rather simple deduction within the framework of Evolutionary Moral Psychology for humans, one which doesn’t apply between humans and ants, and I wrote a whole post about it.) Which, as you correctly point out, doesn’t help, because then no one reads them.

But I haven’t figured out how to write compactly for people whose brains appear to have frozen up because the subject of ethics came up: people worried that someone will think they’re a bad person and be mean to them unless they immediately toe the party line, parrot what everyone else always says on this subject, and performatively assume that anyone who even questions it is a bad person. Yes, I recognize that behavior pattern, and the evolutionary reason for it. However, we’re trying to save humanity here; please reengage your brain…
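To make the “two lines of algebra” concrete, here is a minimal toy sketch in Python. The population figures are rough published estimates, and the allocation rule (resources split in proportion to total moral weight) is the simplest one possible; both are assumptions for illustration only, and the conclusion is insensitive to them by many orders of magnitude.

```python
# Toy model: an optimizer allocates a fixed resource pool in proportion to
# total moral weight, with EQUAL per-individual weight for ants and humans.
# Population figures are rough published estimates, purely illustrative.

ants = 2e16    # ~20 quadrillion ants (a commonly cited estimate)
humans = 8e9   # ~8 billion humans

# Line 1: with equal individual weights, each species' claim is its head count.
human_share = humans / (ants + humans)

# Line 2: compare.
print(f"human share of all resources: {human_share:.2e}")  # ~4.00e-07

# And this is the optimistic version: ants also need far less per individual,
# so any utility-per-unit-resource optimizer pushes the human share lower still.
```

Swap A and H for any two populations whose sizes differ by six or more orders of magnitude and the result is the same: under equal individual weights, the smaller population’s share rounds to zero.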
</rant>