By the middle of the second paragraph I was thinking “Whoa, is everyone an Amanda Baggs fan around here?”. Hole in one! I win so many Bayes-points, go me.
I and a bunch of LWers I’ve talked to about it basically already agree with you on ableism, and a large fraction seems to apply the usual liberal instincts to the issue (so, no forced cures for people who can point to “No thanks” on a picture board). There are extremely interesting and pretty fireworks that go off when you look at the social model of disability from a transhumanist perspective, and I want to round up Alicorn and Anne Corwin and you and a bunch of other people to look at them closely. It doesn’t look like curing everyone (you don’t want a perfectly optimized life, you want a world with variety, you want change over time), and it doesn’t look like current (dis)abilities (what does “blind” mean if most people can see radio waves?), and it doesn’t look like current models of disability (if everyone is super different, and the world is set up for that, and everything is cheap, there’s no such thing as accommodations), and it doesn’t look like the current structures around disability (if society and personal identity and memory look nothing like they started with, “culture” doesn’t mean the same thing, and that applies to Deaf culture), and it’s complicated and pretty and probably already in some Egan novel.
But, to address your central point directly: You are completely and utterly mistaken about what Eliezer Yudkowsky wants to do. He’s certainly not going to tell a superintelligence anything as direct and complicated as “Make this person smarter”, or even “Give me a banana”. Seriously, nursing homes?
If tech had happened to be easier, we might have gotten a superintelligence in the 16th century in Europe. Surely we wouldn’t have told it to care about the welfare of black people. We need to build something that would have done the right thing even if we had built it in the 16th century. The very rough outline for that is to tell it “Here are some people. Figure out what they would want if they knew better, and do that.” So in the 16th century, it would have been presented with abled white men; it would have figured out that if these men were better informed and smarter and less biased and so on, they would want black women to count as equals; and so it would have included black women in its next round of figuring out what people want. Something as robust as this needs to be can’t miss an issue that’s currently known to exist and to be worthy of debate!
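If it helps to make that loop concrete, here’s a toy sketch in Python. The names (`extrapolate`, `moral_patients`, the whole fixed-point framing) are my own inventions for illustration, not anything from an actual CEV spec:

```python
def extrapolated_volition(initial_people, extrapolate, moral_patients, max_rounds=100):
    """Toy fixed-point loop: extrapolate what the current group would want
    if they knew better, ask those extrapolated values who else should count,
    and repeat until the group being consulted stops growing."""
    consulted = set(initial_people)
    for _ in range(max_rounds):
        values = extrapolate(consulted)        # what they'd want if they knew better
        should_count = moral_patients(values)  # set of people those values say to include
        if should_count <= consulted:          # nobody new: we've hit a fixed point
            return values
        consulted |= should_count              # widen the circle and go again
    raise RuntimeError("extrapolation did not converge")
```

So the 16th-century run starts with `initial_people` = abled white men, and the loop itself is what pulls everyone else in; nobody has to hand the thing a complete list of who matters up front.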
And for the celibacy thing: that’s a bit beside the point, but if you want to avoid sex for reasons other than low libido, increasing your libido obviously won’t fix the mismatch.
How do you identify what knowing better would mean, when you don’t know better yet?
The same way we do, but faster? Like, if you start out thinking that scandalous-and-gross-sex-practice is bad, you can consider arguments like “disgust is easily culturally trained, so it’s a poor measure of morality”, and talk to people so you form an idea of what it’s like to want and do it as a subjective experience (what positive emotions are involved, for example), and do research so you can answer queries like “If we had a brain scanner that could detect brainwashing or manipulation, what would it say about people who want that?”.
So the superintelligence builds a model of you and feeds it lots of arguments, memory tape from others, and other kinds of information. And then we run into trouble, because maybe you end up wanting different things depending on the order it feeds them to you, or it tells you too many facts about Deep Ones and breaks your brain.
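The order-dependence worry is easy to state as a toy check (again, purely illustrative; `update_view` and friends are made up, and brute-forcing every permutation only makes sense for a handful of arguments):

```python
from itertools import permutations

def final_view(initial_view, arguments, update_view):
    """Feed the model of the person every argument, in one fixed order."""
    view = initial_view
    for argument in arguments:
        view = update_view(view, argument)
    return view

def is_order_dependent(initial_view, arguments, update_view):
    """True if different presentation orders leave the model wanting
    different things, i.e. there's no single well-defined answer."""
    outcomes = {final_view(initial_view, order, update_view)
                for order in permutations(arguments)}
    return len(outcomes) > 1
```

If `is_order_dependent` comes back True, “what you would want if you knew better” isn’t a single thing, and the extrapolation has to make a judgment call somewhere; that’s the trouble I mean.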