has anyone considered getting a sociopath to work on FAI?
Since mild cases of ASPD are hard to distinguish from mild cases of Asperger’s, and a lot of people with Asperger’s are in programming, that doesn’t seem like an off-the-wall idea.
Frankly, I’m more worried about the opposite problem. Might we need more neurotypical people working on FAI?
Extrapolated volition of a sociopath would be bad bad news.
This doesn’t seem to me to address MinibearRex’s proposal.
We don’t want to extrapolate the sociopath’s volition; we want the sociopath to extrapolate our volition. The idea is that sociopaths have experience with thinking objectively about humans’ volition.
I don’t disagree. (Similarly, the proverbial fox presumably knows many ways a predator could get into the henhouse, and might seem like a good guard candidate for that reason, if only the obvious problem with the arrangement could be solved.)
Maybe.
Maybe not.
Given that we don’t have a clear way of predicting the output of a (of the?) volition-extrapolating process given its inputs, I’m not too confident about that.
The idea with FAI is that the burden of proof is not on me :). The burden of proof is on the psychopath.
I’m not at all sure what “burden of proof” means in this context. Can you unpack that a little?
Assuming you believe the FAI project is coherent, what you want to do is prove safety (of the AI code, of the people involved, etc.), not require others to prove a lack of safety. So if I say a sociopath is bad news, and you are not sure—well, it is not my job to convince you! It is the sociopath’s job to convince you (s)he’s safe.
My personal opinion is sociopaths are badly badly broken, possibly not even entirely human.
How exactly are you defining “entirely human” such that 1 to 3 percent of the population of H. sapiens fails to qualify?
Well, I am not sure, since I do not know what it is like to be a bat. But it is my understanding from descriptions of psychopathy that psychopaths do not have the same kind of inner life that an ordinary human being has. (This is not a behavioral test, but then psychopaths will not do well on behavioral tests either, since so many of them behave like monsters.)
Others in this thread have already addressed some of it, so I’ll just point you here:
http://lesswrong.com/lw/ckj/question_about_sociopathypsychopathyaspd/6n95
and here:
http://lesswrong.com/lw/ckj/question_about_sociopathypsychopathyaspd/6npo
and here:
http://lesswrong.com/lw/ckj/question_about_sociopathypsychopathyaspd/6n85
and call it good.
(From personal experience, knowing some people who’d probably score as “mildly psychopathic”, there’s probably an inner-experience distinction in the same sense that there is with other strong neurological variations, and it can be more or less marked, but the idea that they’re “not fully human” seems more like a comforting rationalization, a way of trying to cope with the often profoundly antisocial behavior that high-profile psychopaths appear casually capable of.)
Ah, I see. OK, thanks for clarifying.
For my part, if deciding whether a given FAI project is safe to turn on ever comes down to dueling intuitions and personal opinions, that alone seems like sufficient evidence to conclude that the project is not safe to turn on. (I would say the same thing about a bridge.)