I linked the word “insane” to the Raising the Sanity Waterline article, thereby qualifying it: belief in God, for example, counts as insanity in the intended sense.
Judging by the rating of your post, my impression that something is wrong with it is shared by other readers. My comment was an attempt to express what in particular I found wrong: the presentation is extremely confused.
By “unstated unsubstantiated assumptions” I mean the things like:
“your task is to choose the proper weight to give collective versus individual goals” (what weight? what kind of framework are you working from?),
starting to talk about “the transhuman” (what’s that exactly? how did it get in the article?),
“organisms with less genetic diversity” (genetic diversity? what does it have to do with transhumans?),
ethics being determined by “sexual diploidy” (where’s that come from in the article? explanation please),
“when people are software”, “a more insightful AI” (you are assuming a specific futuristic model now),
“exploration” and “exploitation” (you are selecting a specific algorithmic problem; why?).