Discussion of this topic suffers from asymmetrical motivation. If you disagree with a mainstream position, arguing against it feels worthwhile. If you agree with a fringe position, arguing in favour of it feels worthwhile. But if you disagree with a fringe position, why bother?
The mainstream of AI research thinks that we are safe from an unfriendly artificial general intelligence, and that we have three layers of protection.
We have tried programming AGIs and failed. Humans suck at programming computers. AGI might be possible in theory, but not on this planet.
Researchers and funders have both learned that lesson. Even if humans could program an AGI, we are safe because no one is working on it.
We failed really hard. Even if people returned to AGI research and overturned precedent with dramatic breakthroughs, we are still safe because of the scale of the challenge. If dangerous success is a 10, and we had given up in the past because our efforts only ever ranked 7 or 8, then an AGI research revival that hoped to get to 9 might succeed better than expected and get to 10. Whoops! But really, AGI fell into disrepute because it was over-hyped crap. We only ever scored 2 or 3 on the fully general stuff. So even major unexpected breakthroughs that score 5, when we were hoping for 4, still leave us decades to rethink whether there is anything to worry about.
I started this comment with the phrase "asymmetrical motivation" and, having briefly sketched why the mainstream isn't interested in discussing the issue, I can give an example of how this hurts the discussion. Is it really true that "we are safe because no one is working on it"? That is not actually a reassuring argument. If you could get a member of the mainstream to engage with the issue, they would quickly patch it: AGI is way too hard for a lone genius in a basement; it needs a research community bigger than a fairly substantial critical mass. The point could be elaborated and, fully worked out, might be convincing. But if one just doesn't believe that AGI poses a risk, why bother?