So, how did you feel when you read that bit of sf hand-waving?
Interestingly, my reaction wasn’t negative. Instead, my curiosity was stimulated, and I immediately set myself the task of figuring out what was wrong with it. (Turned out to be easy, of course.)
On a larger scale, I’ve found the exercise of going through a certain 427 pages of wrongness, and coming to precise understandings of the mistakes, to be strangely satisfying (and informative). Perhaps it could be compared to the feeling of satisfaction a repairman might get from fixing a broken machine.
On the other hand, when (the very same) bad arguments are presented in this manner, I get so enraged I can barely stand to look. (Dark techniques really grate on me when they’re used in an attempt to persuade people of something I know to be incorrect; and if you’re wrong, you darn well better not be sanctimonious about your wrong answer.)
I also suspect that if I had encountered the above sci-fi argument in context, where the incorrect deduction would either have been used to support a further, important, wrong conclusion, or would just have indicated carelessness on the part of the author or character, I would have been annoyed.
Perhaps oddly, I find myself far more often infuriated by invalid arguments used to persuade people of something I believe to be correct than by those used to persuade them of something I believe to be incorrect.
The feeling I get from that tends to be one of cringing discomfort rather than agitated anger.
Huh. That’s interesting. Introspecting on that now, I conclude that the same is true of me, but that I then experience anger in response to that discomfort. Of course, this sort of introspection isn’t terribly reliable, but I’ll try to pay closer attention the next time it comes up.
Sturgeon (the book is The Cosmic Rape) wanted a galactic scale group mind which could think quickly. I don’t know if the book would have been better without the argument. IIRC, it was written in omniscient third person, and that argument was merely stated rather than given to a character.
OWWWWWWWWWWWWWWWWW
Yes. STOP BEING STUPID AAAAAAAAAAAAAAAA
I haven’t read the book, but there’s nothing wrong with the FTL communication and FTL travel tropes in science fiction, IMO. Yes, they make no physical sense, but then, neither do fairies, and they can still be fun to read about.
I don’t have anything against ftl in sf, either, but that seemed like an astonishingly bad argument for making it plausible.
Now that I think about it, the book may be of interest to LWers because it’s about telepathy making utilitarianism easier. And it’s a reasonably good sf novel.
Is there any decent moral theory that wouldn’t be easier to implement with reliable telepathy?
It’s hard to say; which moral theories did you have in mind, and what do you mean by “decent”? For example, a strictly rule-based deontological system, such as the one outlined in certain holy books, may not benefit from telepathy, since its rules focus solely on prescribing certain specific actions.
Since this is LessWrong and there’s a strong leaning towards a certain view of normative ethics, I had better ask this before I go any further. Would you consider any form of deontology or virtue ethics to be a “decent moral theory”? It feels like I should check this before commenting any further. I know, for example, that at least one person here (not naming names) has openly said that all non-consequentialist approaches to ethics are “insane”.
I am not one of those who thinks non-consequentialist ethics are inherently nonsense. Reflecting on my position slightly, I was saying:
1) A “decent” moral system will very likely have the property that misleading others about one’s preferences will be advantageous to the individual, but bad for the group.
2) Telepathy makes misleading others about one’s preferences more difficult. That assumes telepathy is essentially involuntary mind-reading. If it is more like reliable cell phone service, then I’m not sure telepathy would make any moral system easier to implement.
Telepathy that’s more like reliable cellphone service would make a lot of general societal things, including any widely-agreed-upon moral system, easier to implement because transaction cost reductions benefit everyone involved.
I expect that if telepathy of this sort were common, self-deception would be even more common than it already is.
Tentative: telepathy would be useful for consequentialism, but gaining the advantages of telepathy would take more time and thought under consequentialism generally than under (preference?) utilitarianism.
See, this sort of thing is entirely justifiable as characterisation. (The reader may be forgiven for hoping for a whacking dose of morality play where said character wins a Darwin award in the next chapter.) The hard part would be coming up with a convincingly awful string of logic. Bonus points if real-life bad thinkers defend the character’s logic.