This post is riddled with unstated, unsubstantiated assumptions, and its presentation is borderline insane (even if the discussed issue could be rescued by a full rewrite). Let’s not go there. Voted down.
A slightly more charitable version of Vladimir’s comment: I’m not sure you’ve made enough effort to overcome the inferential distance between yourself and much of your readership here.
That’s a harsh accusation to make without supporting it in any way.
After all the posts and comments I’ve made on LW, you should realize that the odds are much greater that you failed to understand my post, than that I am insane. I’m disappointed in you.
I doubt that you’re confused by assumptions, since this post contains far fewer assumptions than anything else you’re likely to read today. What is confusing is that it removes many of the assumptions you rely on in everyday conversation—such as that society is made of humans, who are sexually diploid, and face certain ethical problems and have a certain range of possible actions available to them—and doesn’t explicitly say where it stops removing assumptions.
I linked the word “insane” to the Raising the Sanity Waterline article, thereby qualifying it: belief in God, for example, counts as insanity in the intended sense.
Judging by the rating of your post, my impression that something is wrong with it is shared by other readers. My comment was an attempt to express what in particular I found to be wrong: the presentation is extremely confused.
By “unstated, unsubstantiated assumptions” I mean things like:
“your task is to choose the proper weight to give collective versus individual goals” (what weight? what kind of framework are you working from?),
starting to talk about “the transhuman” (what’s that exactly? how did it get in the article?),
“organisms with less genetic diversity” (genetic diversity? what does it have to do with transhumans?),
ethics being determined by “sexual diploidy” (where’s that come from in the article? explanation please),
“when people are software”, “a more insightful AI” (you are assuming a specific futuristic model now),
“exploration” and “exploitation” (you are selecting a specific algorithmic problem; why?).
He said “its presentation is borderline insane”, not “its author is insane”. Argumentative hygiene, please.
(Is there a case for valuing some kinds of insanity because the best contributors to a rationalist group are not always the best individual rationalists for division-of-labor reasons? Should we ever think in terms of “psychiatric diversity”?)