Perhaps my large collection of quotes about the impact of AI on the future of humanity here will be helpful.
Then should the majority of experts at the FHI be considered extreme optimists, the same 20%? I really tried to find all the publicly available expert forecasts, and very few of those experts were confident that AI would lead to the extinction of humanity. But I have no reason not to believe you, or Luke Muehlhauser, who described AI safety experts as even more confident pessimists: "Many of them are, roughly speaking, 65%-85% confident that machine superintelligence will lead to human extinction." The reason may be that not everyone agrees on whose opinion is worth considering.
What about this and this? Here, some researchers at the FHI give different probabilities.
I meant the results of polls like this one: https://www.thatsmags.com/china/post/15129/happy-planet-index-china-is-72nd-happiest-country-in-the-world. Well, it doesn’t matter. I think that I could sleep better if everyone recognized the reduction of existential risks in a less free world.
I’m not sure that I can trust news sources that are interested in portraying China negatively. In any case, this does not seem to stop the Chinese people from feeling happier than the US people. I cited this date just to contrast with your forecast; my intuition points more to AI in the 2050s-2060s. And yes, I expect that by 2050 it will be possible to monitor the behavior of every person in these countries 24/7. I can’t say that this makes me happy, but I think the vast majority will put up with it. I don’t believe in a liberal democratic utopia, but the end of the world seems unlikely to me.
Just wondering: why are some people so often convinced that a Chinese victory in the AGI race would lead to the end of humanity? The Chinese strategy seems to me much more focused on the long term. The most prominent experts give a 50% chance of AI by 2099 (https://spectrum.ieee.org/automaton/robotics/artificial-intelligence/book-review-architects-of-intelligence), and I expect the world in 80 years to be significantly different from the present. Well, you can call this a totalitarian hell, but I think the probability of an existential disaster in that world will be lower.
How about paying attention to discontinuous progress on tasks related to DL? It is very easy to track with https://paperswithcode.com/sota, and https://sotabench.com/ is showing diminishing returns.
(I apologize in advance for my English.) Well, only the fifth column shows an expert’s assessment of the impact of AI on humanity, so any other percentages can be quickly skipped. It took me a few seconds to examine 1/10 of the table with Ctrl+F, so it would not take long to study the full table this way. Unfortunately, I can’t think of anything better.
It may be useful.
"Actually, the people Tim is talking about here are often more pessimistic about societal outcomes than Tim is suggesting. Many of them are, roughly speaking, 65%-85% confident that machine superintelligence will lead to human extinction, and that it’s only in a small minority of possible worlds that humanity rises to the challenge and gets a machine superintelligence robustly aligned with humane values." — Luke Muehlhauser, https://lukemuehlhauser.com/a-reply-to-wait-but-why-on-machine-superintelligence/
"In terms of falsifiability, if you have an AGI that passes the real no-holds-barred Turing Test over all human capabilities that can be tested in a one-hour conversation, and life as we know it is still continuing 2 years later, I’m pretty shocked. In fact, I’m pretty shocked if you get up to that point at all before the end of the world." — Eliezer Yudkowsky, https://www.econlib.org/archives/2016/03/so_far_my_respo.html
I have collected a huge number of quotes from various experts about AGI: about AGI timelines, about the possibility of a fast AGI takeoff, and about its impact on humanity. Perhaps this will be useful to you.
Then AI would have to become really smarter than the very large groups of people who will be trying to control the world. And by that time, people will surely be better prepared than they are now. I am sure the laws of physics allow the quick destruction of humanity, but it seems to me that without a swarm of self-replicating nanorobots, the probability of our survival after the creation of the first AGI exceeds 50%.
It seems this option gives humanity a better chance of victory than the gray goo scenario. And even if we screw up the first time, it can be fixed. Of course, this does not eliminate the need for AI alignment efforts.
Is AI foom possible if even a godlike superintelligence cannot create gray goo? Some doubt that such quickly self-replicating nanobots are possible. Without them, AI’s ability to quickly take over the world in the coming years would be significantly reduced.
Indeed, quite a lot of experts are more optimistic than it seems. See this or this. Well, I collected a lot of quotes from various experts about the possibility of human extinction due to AI here. Maybe someone is interested.
It seems Russell does not agree with what is considered the LW consensus. From "Architects of Intelligence: The Truth About AI from the People Building It":
When [the first AGI is created], it’s not going to be a single finishing line that we cross. It’s going to be along several dimensions.
I do think that I’m an optimist. I think there’s a long way to go. We are just scratching the surface of this control problem, but the first scratching seems to be productive, and so I’m reasonably optimistic that there is a path of AI development that leads us to what we might describe as “provably beneficial AI systems.”