“Ethical Injunctions” makes a Kantian argument that certain patterns of behavior are inherently self-contradictory and thus impossible to follow consistently, not a rule-utilitarian argument that certain patterns of behavior would cause bad outcomes if everyone adopted them.
So, basically, the argument against things like “let’s kill a few thousand people so that we can make this planet a paradise for millions” is not “killing people is absolutely forbidden”. After all, we make similar tradeoffs, albeit at much smaller ratios, all the time. For example, using cars (as opposed to banning them worldwide and, e.g., only using trains) condemns thousands of people to painful deaths in car accidents, and it doesn’t even bring a paradise to the rest, only a little more convenience. And the mainstream consensus is that this is acceptable.
The real objection is instead that plans like “let’s kill a few thousand people so that we can make this planet a paradise for millions” predictably fail all the time, and if you actually thought about it a little, you could easily see this. Thousands of things would need to magically go right for such a plan to succeed, and many of them are extremely unlikely, so the overall probability is practically zero. (For example, any plan that involves killing thousands will attract people who enjoy killing, and who enjoy organizing killing. After they succeed, do you expect them to simply give up all the power and stop killing? As opposed to, e.g., putting a bullet through your brain and declaring themselves the kings of the new order? Is that a likely scenario?) And the reason you don’t immediately see this is that your brain has a blind spot here: any plan that matches “I need to get more power, and then good things will happen” sounds instinctively very plausible to you, because your ancestors who believed such things, and who succeeded in convincing others to give them the power, were usually very successful… that is, reproductively; not necessarily at making the good things actually happen.
That’s sort of it, but it was specifically talking about certain types of self-deceptive behavior that appear to be instrumentally rational. The problem is that once you’ve deceived yourself, you can no longer tell whether it was a good idea or not.