“People are building AI because they want it to radically impact the world,” but maybe it’s already good enough to do that. By the 80⁄20 rule, we can get 80% of the results we want with 20% of what’s required for perfection. Are we so sure that isn’t enough? Eric Schmidt has a TED Talk (“AI is underhyped”) in which he says he uses AI for deep research; if it’s good enough for that, it’s good enough to help us solve the crises in sustainable power, water, and food. It’s good enough to tutor any kid at their own pace, in their own language, and gamify the process; it’s good enough to teach the middle class how to manage their money for retirement. We should be singing the praises of “good enough” AI instead of explaining why we can’t have ASI, which is more dangerous than it’s worth to us. And even if we don’t charge ahead, maybe a weaker system can give us hints about how to safely approach a slightly stronger one.
One can talk about competitive pressures and the qualitatively new prospect of global takeover, but the most straightforward answer to why humanity is charging full speed ahead is that the leaders of the top AI labs are ideologically committed to building ASI. They are utopians and power-seekers. They don’t want only 80% of a utopia any more than a venture capitalist wants only 80% of a billion dollars.