> do you think that you’ve identified a set of characteristics that, throughout history, will never lead to the wrong conclusion?
Not at all. There is no such set of characteristics. Wrong conclusions are inevitable and commonplace. Gödel’s Theorems apply to all formalisms.
In the current world, the harm of unsanctioned killing being commonly accepted (and cheered) is generally a LOT higher than the harm of statistically-evil people continuing to live. So, yes, a heuristic argument: this is a loss of civilization and order, even if it might have been justifiable on some dimensions.
> Wrong conclusions are inevitable and commonplace. Gödel’s Theorems apply to all formalisms.
A tangent, but Gödel’s incompleteness theorems simply show that for sufficiently powerful formal systems:

1. There are statements which are true but unprovable.
2. If the system is consistent, “this system is consistent” is such a statement.

Neither of these shows that formal systems are unsound: if a statement is provable in a formal system, the corresponding property still holds in every model of that system. So this point is not correct because of Gödel (though it could be practically correct for other reasons, such as the world being complicated).
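For concreteness, here is one standard way to state both theorems alongside soundness; the notation is mine, not from the thread, and assumes an effectively axiomatized theory $T$ that interprets enough arithmetic (e.g. Peano Arithmetic):

$$
\begin{aligned}
&\textbf{(G1)} && \text{if } T \text{ is consistent, there is a sentence } G_T \text{ with } T \nvdash G_T \text{ and } T \nvdash \lnot G_T;\\
&\textbf{(G2)} && \text{if } T \text{ is consistent, then } T \nvdash \operatorname{Con}(T);\\
&\textbf{(Soundness)} && T \vdash \varphi \;\Longrightarrow\; \mathcal{M} \vDash \varphi \text{ for every model } \mathcal{M} \text{ of } T.
\end{aligned}
$$

G1 and G2 only bound what $T$ can prove; soundness, which they leave untouched, guarantees that whatever $T$ does prove holds in all of its models.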
> Not at all. There is no such set of characteristics. Wrong conclusions are inevitable and commonplace. Gödel’s Theorems apply to all formalisms.
They do not apply to all formalisms, morality is not a formal system, and even if it were, this is not what either of Gödel’s theorems would say about it. I don’t know why this particular bit of math misunderstanding is so popular online; I suspect it’s because it enables moves like the one you’re making here (i.e. of the form “it’s impossible to justify any statement, so I can’t be expected to justify my statements”).
> In the current world, the harm of unsanctioned killing being commonly accepted (and cheered) is generally a LOT higher than the harm of statistically-evil people continuing to live. So, yes, a heuristic argument: this is a loss of civilization and order, even if it might have been justifiable on some dimensions.
Ah, I wouldn’t call this a heuristic argument. By “heuristic argument” I mean something like “I can’t come up with any utilitarian calculation that says the bad outweighs the good here, but I know that human brains are prone to underestimating this sort of bad, so I assume there is a calculation saying it was overall bad even if I don’t know what it is.” (Incidentally, this is how I understand this situation.) If you have an argument to this effect, I’d love to see it! But to satisfy me it will need to be the sort of argument that permits killing Stalin or Hitler or Idi Amin with a pretty wide margin for error, and if it’s not, I’ll make the same critique I did before.
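One minimal way to write down the shape of that heuristic (my notation, not anything from the thread): let $\hat{B}$ be the naive estimate of the bad, $G$ the estimated good, and $k > 1$ an unknown correction factor for how much brains underestimate this kind of bad. The claim is that the corrected comparison flips even when the naive one doesn’t:

$$
\hat{B} < G \quad\text{while}\quad k\,\hat{B} > G.
$$

The force of such an argument then depends on how large $k$ plausibly is; the same $k$ would still have to leave room to permit killing a Stalin or a Hitler, which is the margin-for-error test above.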