I’m sorry for your loss. I would just like to point out that proceeding cautiously with AGI development does not mean that we’ll reach longevity escape velocity much later. Actually, I think that if we don’t develop AGI at all, the chances of anyone celebrating their 200th birthday are much greater.
To make the necessary breakthroughs in medicine, we don’t need a general agent who can also write books or book a flight. Instead, we need highly specialized tool AI like AlphaFold, which in my view is the most valuable AI ever developed, and there’s zero chance that it will seek power and become uncontrollable. Of course, tools like AlphaFold can be misused, but the probability of destroying humanity is much lower than with the current race towards AGI that no one knows how to control or align.
It is my opinion as an aging researcher, for what it is worth, that the chances of anyone currently alive living 200 years round to 0% if we do not develop AGI. We may get away with not developing strong superintelligence, but I consider the development of AGI a necessity. Knowing this, you may proceed accordingly and do your EV calculations. Maybe it is worth the risk, or maybe it is not.
Could you explain why exactly AGI is “a necessity”? What can we do with AGI that we can’t do with highly specialized tool AI and one or more skilled human researchers?
Not the person you’re responding to, but my guess is that without general AI, we wouldn’t know the right questions to ask or which specialized AIs to create.
Thanks for your comment! If we talk about AGI and define it as “generally as intelligent as a human, but not significantly more intelligent”, then by definition it wouldn’t be significantly better than us at figuring out the right questions. Maybe AGI could help by enhancing our capacity to search for the right questions, but that shouldn’t make a fundamental difference, especially once we weigh the risk of losing control over AI against it. Superintelligent AI is a different matter, but there the risks are even higher (and it’s not easy to draw a clear line between AGI and ASI).
All in all, I would agree that we lose some capabilities to shape our future if we don’t develop AGI, but I believe that this is the far better option until we understand how to keep AGI under control or safely and securely align it to our goals and values.
Fair point. I basically agree with that: AGI would give us broader capabilities than narrow AI, but it would certainly also carry greater risk.
What about the enhancement of human intelligence that was discussed here? (For example, “How to make superkids” on LessWrong.)
They probably have more than a 1% chance of success and could accelerate anti-aging research, even if you consider the current research situation critically stalled.