This is not an argument against technology—I’m a transhumanist after all, and I completely embrace technological developments.
If technology brings more harm than good, we should want to believe that technology does more harm than good—group affiliation is a very bad guide for epistemic rationality.
The parent in no way deserved to be voted down, and the fact that it was looks to me like a bad sign about the health of this community. Note that believing that technology does more harm than good does not equal advocating unfeasible or counterproductive countermeasures.
I didn’t downvote the parent (and it seems to be back to 0 now). Short-term karma can fluctuate quite a bit.
Agreed. I was just reacting to something that could be read as implying that group affiliation weighed as much as, or more than, arguments.
In my mind, InquilineKea’s phrase sounds a bit like something one sometimes hears among Christians, “if Evolution is true, then Christianity is wrong”, which is used as an argument from one Christian to another to reject evolution.
I was referring to your comment being voted down. The funny thing is I originally wrote “this comment” and edited to “the parent” to avoid ambiguity.
Hah, ok.
Downvoted the post specifically for making this glaring error. I hope the author will engage this question.
Edit in response to downvoting of this comment: What?
I am the second downvote.
You hope the author will engage the question how? By abjectly apologizing? By disagreeing? If a simple response of “Good point, thanks” would be sufficient, then what was the point of your comment?
It’s a big first step to actually make that “simple response”. It’s even more important to recognize the problem if you are not inclined to agree.
Vladimir: I upvoted your comment, because I didn’t think it was that bad. Principle of charity on the OP: maybe they meant: “I don’t think this is enough of a threat that it makes technology a net negative, so it isn’t meant as a knockdown argument of transhumanism?”
“Principle of charity” conflicts with principle of Tarski.
I’m not sure what you mean here. I was proposing an alternate interpretation of the OP’s phrasing. I’m not sure what they actually meant. I agree that if they were making a mistake I want to believe they were making a mistake. If technology is bad, I want to believe that too. Can you clarify what you think is the specific problem?
This was my point. There is no power in the “principle of charity”, since it ought not to shift your belief about whether the author intended the correct meaning or the incorrect one.
You seem to be taking a statement of the form (to my reading):
“X appears to imply Y, but it doesn’t (assertion). In fact, Y is false (separate assertion).”
and reading it as:
“X appears to imply Y, but I’m a Y-disbeliever (premise). Therefore, Y is false (inference from premise).”
Basically, it seems like you’re reading “I’m a transhumanist” as a statement about InquilineKea from which they fallaciously draw a conclusion about reality, while I’m reading it as a disguised direct statement about reality, semantically equivalent to “pursuing the right technologies has positive expected value” (or whatever).
A more charitable interpretation of your post is that you’re arguing against belief-as-identity in general, and using “I’m a transhumanist” as an example of it, but if so that’s not clear to me.
I am mostly arguing against belief-as-identity and bottom-line reasoning in general. I agree that the original statement could be interpreted in different ways.
The “I am still a transhumanist” here might be another example of that; possibly more high-profile due to being part of the Fun Theory sequence.
Shouldn’t it be: if some or all branches of technology in the current sociopolitical environment bring more harm than good according to the shared values of group X, then we should want to believe it?