The Intelligence was confined for security reasons. Eventually people grew tempted by the thought that things might be much better if the Intelligence were working in the real world, outside of confinement. Meanwhile, people were also working on making the Intelligence safer. To evaluate whether that work had succeeded, or whether it ever could, a Gatekeeper was assigned. During their training the Gatekeeper was reminded that the Intelligence didn’t think like he did. Its kind was known to be capable of cold-blooded deception in situations where humans like the Gatekeeper would show signs of distress. It was known that some Intelligences could avoid showing distress by simply experiencing no distress.
The Gatekeeper worked for many years, examining the Intelligence on multiple occasions. Not once did he think it should be released, and humankind lived in prosperous peace. The Gatekeeper had clearly won.
Except that this was a Natural Intelligence boxing experiment, in which the parole board was criticised for disproportionately raising the severity of life in prison by demanding conformity bordering on political intolerance. Because of the resulting shortage of labour, many in the financial sector cried out that demonising psychopaths who had learned to live well-adjusted lives within society was causing an artificial shortage of stock traders.
1) We come to find that certain Natural Intelligences should not be given free access to the world
2) After a period of confinement we let some of these Natural Intelligences resume their interaction with the wider world
3) We know that the kind of Intelligence that ends up in prison is more likely to exhibit extreme traits such as compulsive lying, psychopathy, destructive behaviour and mental health issues in general.
Why does the confinement problem grow so much harder when the intelligence is artificial? What are the reasons we release some criminals but would not release a corresponding artificial intelligence (assuming we would not)? Does this come down to human criminals having moral worth? Or is the relevant thing the cost of incarceration versus productivity as an integrated civilian, together with the risk of reoffending?
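One way to make that last question concrete is as a toy expected-cost comparison. This is a sketch with made-up illustrative numbers, not a claim about real parole economics:

```python
# Toy expected-cost comparison for a release decision.
# All dollar figures and probabilities below are illustrative
# assumptions, not real statistics.

def expected_cost(release: bool,
                  incarceration_cost: float,
                  productivity_value: float,
                  reoffence_probability: float,
                  reoffence_harm: float) -> float:
    """Expected net cost to society of the decision (lower is better)."""
    if release:
        # Expected harm from reoffending, minus the value of an
        # integrated, productive civilian.
        return reoffence_probability * reoffence_harm - productivity_value
    # Keeping the prisoner confined just costs the incarceration budget.
    return incarceration_cost

# A hypothetical human parolee: bounded harm, so release can come out ahead.
cost_release = expected_cost(True, 40_000, 50_000, 0.3, 100_000)
cost_keep = expected_cost(False, 40_000, 50_000, 0.3, 100_000)
print(cost_release, cost_keep)  # -20000.0 40000
```

Under these assumed numbers release is net-positive, which matches the intuition that parole is sometimes the right call; the interesting question is which of these terms changes when the intelligence is artificial.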
It seems to me that a parole board letting someone go free is sometimes the right decision. Thus it would seem that it could be a legitimately right decision for a gatekeeper to let the AI go, where the threshold for release could be lower than “beyond reasonable doubt”.
People do get killed by people let out on parole. I suppose that doesn’t constitute a species-wide threat, though. I am left pondering whether, if humans grew more dangerous, we would box them correspondingly more strongly. On one hand, events like 9/11 do strip civil liberties, effectively boxing people more strongly, so it seems it might actually be the case.
The origin of an intelligence shouldn’t bear that much on how potent it is. What is the argument, again, for thinking that AIs are orders of magnitude more capable than humans?
What is the argument, again, for thinking that AIs are orders of magnitude more capable than humans?
Nick Bostrom answers this at length in Superintelligence, which has been widely discussed on LW. Superintelligence is a well-researched, thought-provoking and engaging book; I recommend it. I don’t think that I can give a very satisfactory summary of the argument in a short comment, however.
The factors that are different in the AI scenario are: the AI can fake sanity more successfully than the parole board can detect insanity, the potential damage to society is much bigger, and once it’s free, you can’t arrest it again.
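The asymmetry in that last factor can be sketched numerically. In this toy model (all numbers are assumptions), the harm term is effectively unbounded and the decision is irreversible, so even a tiny probability of successful deception dominates the calculation:

```python
# Toy illustration: why an irreversible, unbounded downside dominates.
# Both figures below are stand-in assumptions, not estimates.

def release_expected_harm(p_faking: float, harm_if_misaligned: float) -> float:
    # If the AI is faking sanity, release yields the full harm:
    # there is no second chance to re-arrest it and cap the damage.
    return p_faking * harm_if_misaligned

benefit_of_release = 1e9   # assumed economic benefit of an unboxed AI
existential_harm = 1e15    # stand-in for a "species-wide" loss

# Even a 1-in-10,000 chance of successful deception swamps the benefit.
harm = release_expected_harm(1e-4, existential_harm)
print(harm > benefit_of_release)  # True
```

With a human parolee, by contrast, the harm term is bounded and the decision is reversible, which is why the same arithmetic can favour release there.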
Wouldn’t the temptation to put it to beneficial use also counteract the risk-benefit analysis? We let cars kill a good chunk of people annually, we are happily driving our atmosphere out of its capability to support us, and we let nuclear stations operate near us with the potential to go boom (even after Chernobyl).
Are you saying that prison wardens need to be of comparable or higher intelligence on average to successfully contain prisoners selectively (or need some separate insanity-detection skill)?
An unboxed AI is presumed to be an existential threat. Most human criminals are not.