Maybe this is the community bias that you were talking about, the over-reliance on abstract thought rather than evidence, projected onto a hypothetical future AI.
You nailed it. (Your other points too.)
The claim [is] that a super-intelligent agent would be more efficient at understanding humans than humans would be at understanding it, giving the super-intelligent agent [an] edge over humans.
The problem here is that intelligence is not some linear scale, even general intelligence. We human beings are insanely optimized for social intelligence in a way that is not easy for a machine to learn to replicate, especially without detection. It is possible for a general AI to be powerful enough to provide meaningful acceleration of molecular nanotechnology and medical science research whilst being utterly befuddled by social conventions and generally how humans think, simply because it was not programmed for social intelligence.
Anyway, as much as they exaggerate the magnitude and urgency of the issue, I think that the AI-risk advocates have a point when they claim that keeping a system much more intelligent than ourselves under control would be a non-trivial problem.
There is, however, a substantial difference between a non-trivial problem and an impossible problem. Non-trivial we can work with. I solve non-trivial problems for a living. You solve a non-trivial problem by hacking at it repeatedly until it breaks into components that are themselves well enough understood to be trivial problems. It takes a lot of work, and the solution is simply to do a lot of work.
But in my experience the AI-risk advocates claim that safe / controlled UFAI is an impossibility. You can’t solve an impossibility! What’s more, in that frame of mind any work done towards making AGI is risk-increasing. Thus people are actively persuaded NOT to work on artificial intelligence, and instead to work in fields of basic mathematics which are at this time too basic or speculative to say for certain whether they would have a part in making a safe or controllable AGI.
So smart people who could be contributing to an AGI project are now off fiddling with basic mathematics research on chalkboards instead. That is, in the view of someone who believes safe / controllable UFAI to be a non-trivial but possible mechanism for accelerating the arrival of life-saving anti-aging technologies, a humanitarian disaster.
The problem here is that intelligence is not some linear scale, even general intelligence. We human beings are insanely optimized for social intelligence in a way that is not easy for a machine to learn to replicate, especially without detection. It is possible for a general AI to be powerful enough to provide meaningful acceleration of molecular nanotechnology and medical science research whilst being utterly befuddled by social conventions and generally how humans think, simply because it was not programmed for social intelligence.
Agree.
I think that since many AI-risk advocates have little or no experience in computer science, and specifically in AI research, they tend to anthropomorphize AI to some extent. They get that an AI could have goals different from human goals, but they seem to think that its intelligence would be more or less like human intelligence, only faster and with more memory. In particular, they assume that an AI will easily develop a theory of mind and social intelligence from little human interaction.
But in my experience the AI-risk advocates claim that safe / controlled UFAI is an impossibility.
I think they used to claim that safe AGI was pretty much an impossibility unless they were the ones who built it, so gib monies plox! Anyway, it seems that in recent times they have taken a somewhat less heavy-handed approach.