The military might intervene ahead of time.
Current international emphasis on AI growth cannot simply be extrapolated from today's narrow AI to future AGI. Today, smarter AI is needed for mission-critical military technology like nuclear cruise missiles, which must be able to fly through environments where most or all communication is jammed while still recognizing terrain and outmaneuvering ("juking") enemy anti-air missiles. That is only one example of how AI is critical for national security, and not necessarily the biggest one.
If general intelligence begins to appear possible, there might be enough visible evidence to justify a mission-critical fear of software glitches. Policymakers are doubtful, paranoid, and ignorant by nature, and spend their everyday lives surrounded by real experts who are constantly trying to manipulate them with made-up threats. What they can see is that narrow AI is genuinely good for their priorities, while AGI does not seem to exist anywhere, and is therefore probably just another speculative marketing tactic. That is a very sensible Bayesian calculation in the context of their everyday life, and of all the slippery snakes who gather around it.
If our overlords change their minds (and you can never confidently state whether they will or won't), you will be surprised at how much of the world reorients with them. Military-grade hackers are not to be trifled with; they could buy plenty of time, if they had reason to. International coordination has always been easier with a common threat, and harder with vague forecasts like an eldritch abomination someday spawning out of our computers and ruining everything forever.
Anthropics: we can't prove that intelligence isn't spectacularly difficult to make.
There's a possibility that correctly ordered neurons for human intelligence had an astronomically low chance of ever evolving randomly, anywhere, anyhow. But it would still look like a probable evolutionary outcome to us, because that is the course evolution must have taken in order for us to be born and observe evolution with generally intelligent brains.
All the "failed intelligence" offshoots like mammals and insects would still be generated either way; the question is just how improbably difficult the remaining milestones between them and us are to replicate. So if we make a chimpanzee in a neural network, it might take an octillion iterations to get from there to something like human general intelligence. Or chimpanzee intelligence might itself be the bottleneck, or a third of it. We don't know until we get there; we only know that with recent AI, more and more potential one-in-an-octillion bottlenecks are being ruled out (e.g. insect intelligence).
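The selection effect behind this argument can be sketched with a toy Monte Carlo simulation. The per-world probability here is a made-up placeholder, not a real estimate; the point is only that the number an observer measures is independent of it:

```python
import random

# Toy illustration of the anthropic selection effect: even if general
# intelligence is astronomically unlikely to evolve in any given world,
# every observer necessarily finds themselves in a world where it did.

random.seed(0)

P_INTELLIGENCE = 1e-4   # hypothetical per-world chance (placeholder, not an estimate)
N_WORLDS = 1_000_000

# True where intelligence happened to evolve in that world.
worlds = [random.random() < P_INTELLIGENCE for _ in range(N_WORLDS)]

# The outside ("God's-eye") view: intelligence is rare across all worlds.
overall_rate = sum(worlds) / N_WORLDS

# The observer's view: observers exist only in worlds with intelligence,
# so conditioning on being an observer, the rate is 1 by construction.
observed_worlds = [w for w in worlds if w]
observer_rate = sum(observed_worlds) / len(observed_worlds)

print(f"unconditional rate: {overall_rate:.6f}")   # close to P_INTELLIGENCE
print(f"rate seen by observers: {observer_rate}")  # always 1.0
```

The observer's rate is tautologically 1.0 no matter how small `P_INTELLIGENCE` is set, which is exactly why our own existence tells us so little about how hard intelligence was to evolve.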
Notably, the less-brainy lifeforms (e.g. insects and plants) appear to be much more successful in evolution. Also, recent neural networks were made by plagiarizing the neuron, which is the most visible and easily copied part of the human brain: you can literally see a neuron with a microscope.
Yudkowsky et al. are wrong.
There are probably 100 people max who are really qualified to disprove a proposed solution (Yudkowsky et al. are a large portion of them). But the bottleneck isn't crushing bad solutions underfoot; it's proposing the right solution in the first place. More people working on the problem means more creativity pointed at brute-forcing solutions, more solutions proposed per day, and more complexity per solution. And disproving bad solutions probably scales even better.
One way or another, the human brain is finite, and a functional solution probably requires orders of magnitude more creativity-hours than we have had over the last 20 years. Yudkowsky et al. have difficulty imagining that because their everyday lives have revolved around inadequate numbers of qualified solution-proposers.