Nietzsche seems to always see the project of self-improvement in opposition to the project of building a functional society out of multiple people who don’t kill each other, and the second one always seemed more important to me.
It’s hard for me to understand what he’s saying because he doesn’t engage (much? at all?) with Actually True Morality, that is, the utilitarian/“group is just a sum of individuals” paradigm. The question of whether it’s OK for the strong to bully the weak almost doesn’t seem to interest him.
One man is not a whole lot better than one ape, but a group of men is infinitely superior to a group of apes.
ETA: I often like to think of FAI as not the ultimate transhuman, but the ultimate institution/legal system/moral code.
You might say that Nietzsche takes opposition to the Repugnant Conclusion to an extreme: his philosophy values humanity by the $L^\infty$ norm rather than the $L^1$ norm.
(Assuming that individual value is nonnegative.)
That’s an emendation, not the original; in most of his mid-to-late works, he really does mean that the absolute magnitude of a character, without reference to its direction, is of value.
But certainly the people who believe in the $L^1$ norm don’t take the absolute value...
What? The L^1 norm is the integral of the absolute value of the function.
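For reference, the standard definitions being invoked here, for a function $f$ on a measure space $(X, \mu)$:

$$\|f\|_{L^1} = \int_X |f| \, d\mu, \qquad \|f\|_{L^\infty} = \operatorname{ess\,sup}_{x \in X} |f(x)|,$$

whereas the plain utilitarian sum is just $\int_X f \, d\mu$, with no absolute value taken.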
In this thread: people using mathematics where it doesn’t belong.
I should say:
No one believes in the $L^1$ norm. There is only Nietzsche, who believes in $L^\infty$, and utilitarians, who believe in the integral.
I suppose. It’s a more efficient and fun form of communication than writing it out in English, but it loses big on the number of people who can understand it.
Yes, that’s what I should have written.
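To make the three positions in the exchange above concrete, here is a small sketch. The finite list of individual values is an assumption of this illustration only (the thread never commits to a particular model of “humanity”):

```python
# Three ways to "value humanity" from the thread above, modeling
# humanity as a finite list of individual values (possibly negative).

def utilitarian_value(values):
    # Plain integral/sum: signed values add up, negatives subtract.
    return sum(values)

def l1_norm(values):
    # L^1 norm: the sum (integral) of the absolute values.
    return sum(abs(v) for v in values)

def linf_norm(values):
    # L^infinity norm: the magnitude of the single largest individual.
    return max(abs(v) for v in values)

population = [3.0, -1.0, 0.5, 2.0]

print(utilitarian_value(population))  # 4.5
print(l1_norm(population))            # 6.5
print(linf_norm(population))          # 3.0
```

The three valuations diverge exactly when some individual values are negative, which is why the “assuming that individual value is nonnegative” caveat matters: on a nonnegative population, the utilitarian sum and the $L^1$ norm coincide.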
I know how it looked when you jumped in (presumably from the Recent Comments page), but both of us did know the proper math- it’s the analogy that we were ironing out.
I read from the start of the L^p talk to now, and I can’t think why both of you bothered to speak in that language. The major point of contention occurs in a lacuna in the L^p semantic space, so continuing in that vein is… hmmm.
It’s like arguing whether the moon is pale-green or pale-blue, and deciding that since plain English just doesn’t cut it, why not discuss the issue in Japanese?
Why not, if you know Japanese, and it has more suitable means of expressing the topic? (I see your point, but don’t think the analogy stands as stated.)
If we extend the analogy to the above conversation, it’s an argument between non-Japanese otaku.
No offense to Fred, but he’s a bitter loner. Idealistic nerd wants to make the world awesome, runs out and tells everyone, everyone laughs at him, idealistic nerd gives up in disgust and walks away muttering “I’ll show them! I’ll show them all!”.
Also, he thinks this project is really really important, worth declaring war against the rest of the world and killing whoever stands in the way of becoming cooler. (As you say, whether he thinks we can also kill people who don’t actively oppose it is unclear.) This is a dangerous idea (see the zillion glorious revolutions that executed critics and plunged happily into dictatorship), though it is less dangerous when your movement is made of complete individualists. As it happens, becoming superhumans will not require offing any Luddites (though it does require offending them and coercing them by legal means), but I can’t confidently say it wouldn’t be worth it if it were the only way, even after correcting for historical failures.
By the same token, group rationality is in fact the way to go, but individual rationality does require telling society to take a hike every now and then.
It certainly shouldn’t be a transhuman. Eliezer’s preferred metaphor is more like “the ultimate laws of physics”, which says quite a bit about how individualistic you and he are.