Could you explain why exactly AGI is “a necessity”? What can we do with AGI that we can’t do with highly specialized tool AI and one or more skilled human researchers?
Not the person you’re responding to, but my guess is that without general AI, we wouldn’t know the right questions to ask or which specialized AIs to create.
Thanks for your comment! If we talk about AGI and define it as “generally as intelligent as a human, but not significantly more intelligent”, then by definition it wouldn’t be significantly better than us at figuring out the right questions. AGI might still help by expanding our capacity to search for the right questions, but that wouldn’t be a fundamental difference, especially once we weigh the risk of losing control over AI against it. If we talk about superintelligent AI, the picture changes, but the risks are even higher (though it’s not easy to draw a clear line between AGI and ASI).
All in all, I agree that we would lose some capacity to shape our future by not developing AGI, but I believe that is by far the better option until we understand how to keep AGI under control or safely and securely align it with our goals and values.
Fair point. I basically agree with that: AGI would give us broader capabilities than narrow AI, but it would certainly also carry greater risk.