Personally, I’m not very sure. But it seems to me that the risk of an AI-caused extinction is high enough to be worth a serious discussion at the presidential level.
My reasoning:
GPT-4 is an AGI
A personal observation: I’ve been using it almost daily for months, for all kinds of applied tasks, and I can confirm that it does demonstrate general intelligence, in the same sense that a talented jack-of-all-trades human secretary demonstrates general intelligence.
A much smarter AGI can be realistically developed
It seems that these days, the factor that limits AI smarts is the willingness to invest more money into it. It’s not about finding the right algorithms anymore.
The surest way to predict the next token is to deeply understand the universe
There are strong financial, scientific, and political incentives to develop smarter and smarter AIs.
Therefore, unless there is some kind of dramatic change in the situation, humanity will create an AGI much smarter than GPT-4, much smarter than the average human, and much smarter than the smartest humans.
We have no idea how to co-exist with such an entity.
Judging by the scaling laws (sketched below) and the pace of development in the field, it’s a matter of years, not decades. So the question is urgent.
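To put two of the points above in symbols: “predicting the next token” just means minimizing the usual cross-entropy loss, and the empirical scaling laws (e.g. Kaplan et al. 2020) report that this loss falls roughly as a power law in model size. This is only an illustrative sketch, not a forecast:

$$\mathcal{L}(\theta) = -\sum_{t} \log p_\theta(x_t \mid x_{<t}), \qquad L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N},$$

where $N$ is the parameter count and $N_c$, $\alpha_N$ are empirically fitted constants. So far the loss keeps dropping as long as more parameters, data, and compute are poured in, which is consistent with the claim above that the limiting factor is investment rather than algorithms.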
GPT-4 can’t even do date arithmetic correctly. It’s superhuman in many ways, and dumb in many others. It is dumb at strategy, philosophy, game theory, self-awareness, mathematics, arithmetic, and reasoning from first principles. It’s not clear that current scaling laws will be able to make GPTs human-level in these skills. Even if it becomes human-level, a lot of problems are NP-hard, which allows effective utilization of an unaligned weak super-intelligence: computational hardness limits what it can actually do. Its path to strong super-intelligence and free replication seems far away. It took years to get from GPT-3 to GPT-4. GPT-4 is not that much better. And these were all low-hanging fruit. My prediction is that GPT-5 will bring fewer improvements and will take similarly long to develop. Its improvements will be mostly in areas it is already good at, not in its inherent shortcomings. Most improvements will come from augmenting LLMs with tools (a minimal sketch below). This will be significant, but it will importantly not enable strategic thinking or mathematical reasoning. Without these skills, it’s not an x-risk.
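To make the tool-augmentation point concrete, here is a minimal sketch, assuming a hypothetical days_between helper and tool-call format (not any real system’s API): the model is only asked to emit a structured call, and the exact date arithmetic is delegated to ordinary code.

```python
# A minimal sketch of tool augmentation: the hypothetical days_between helper
# and the tool-call format are illustrative, not any real product's API.
from datetime import date

def days_between(start_iso: str, end_iso: str) -> int:
    """Exact date arithmetic, handled by the tool rather than by the LLM."""
    start = date.fromisoformat(start_iso)
    end = date.fromisoformat(end_iso)
    return (end - start).days

# A call the model might emit instead of guessing the number in free text:
# {"tool": "days_between", "args": ["2023-03-14", "2024-07-01"]}
print(days_between("2023-03-14", "2024-07-01"))  # 475
```

The division of labor is the point: the model handles the messy natural-language part, and the tool handles the part LLMs are unreliable at.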
I think I touched on these points (that some things are easy and others are hard for LLMs) in my other post: https://www.lesswrong.com/posts/S2opNN9WgwpGPbyBi/do-llms-dream-of-emergent-sheep
I am not as pessimistic about future capabilities, and definitely not as sure as you are (hence this post), but I see what you describe as a possibility. There is definitely a lot of overhang in terms of augmentation: https://www.oneusefulthing.org/p/it-is-starting-to-get-strange