I think it’s an amazing post, but it seems to suggest that AGI is inevitable, which it isn’t. Narrow AI can help humanity flourish in remarkable ways, and many are waking up to the concerns of EY and agreeing that AGI is a foolish goal.
This article promotes a steadfast pursuit of, or acceptance of, AGI, and suggests that it will likely be for the better.
Perhaps, though, you could join the growing number of people who are calling for a halt on new AGI systems well beyond ChatGPT?
That is a perfectly fine response, and one that would eliminate your fears if you succeed in building the kind of coalition and regulations that could halt what may be a very dangerous technology.
This would be nothing new: Stanford and MIT aren’t allowed to work on bioweapons or radically larger nukes (and if they were, they could easily build humanity-threatening weapons in short order).
The difference is that the public and regulators are much less tuned in to the high-risk dangers of AGI, but it’s logical to think that if they knew half of what we know, AGI would be seen in the same light as bioweapons.
Your intuitions are usually right. It’s an odd time to be working in science and tech, but you still have to do what is right.
Thanks for writing this. I had in mind to express a similar view but wouldn’t have expressed it nearly as well.
In the past two months I’ve gone from over-the-moon excited about AI to deeply concerned.
This is largely because I misunderstood the sentiment around superintelligent AGI.
I thought we were on the same page about utilizing narrow LLMs to help us solve problems that plague society (e.g., protein folding). But what I saw cluttering my timeline and clogging the podcast airwaves was utter delight at how much closer we are to having an AGI some 6–10x human intelligence.
Wait, what? What did I miss? I thought that kind of rhetoric was isolated to, at worst, the ungrounded-in-reality lowest-common-denominator user and, at best, the radical Kurzweil types. I mean, listen to us: are we really needing to argue about what percentage the risk is that human life gets exterminated by AGI?
Let me step off my soapbox and address a concern that was illuminated in this piece, one that the biggest AGI proponents should at least ponder.
The concern has to do with the risk of hurting innocent bystanders who won’t get to make the choice about integrating AGI into the equation. Make no mistake: AGI, both aligned and unaligned, will likely cause immense disruption for billions of people. At the low end, displaced jobs; at the high end, being killed by an unaligned AGI. We all know about the consequences of the Industrial Revolution and job displacement, but we look back at historical technological advances with appreciation that they led us to where we are. But are you so sure that AGI is just the next step in that long ascension? To me it looks not to be. In fact, AGI isn’t at all what people want. What we are learning about happiness is that work is incredibly important.
You know who isn’t happy? The retired and the elderly who find themselves with no role in society and an ever-narrowing circle of friends and acquaintances.
“They will be better with AGI doing everything, trust me, technological progression always enhances”
Are you sure about that? There are many philosophical directions I could take to dispute this (happiness comes from less choice, not more), but I will get to the point, which is:
You don’t get to decide. Not this time anyway.
It might be worth mentioning that the crypto decentralization movement is the exact opposite of AGI. If you are a decentralization enthusiast who wants to take power away from a centralized few, then you should be ashamed to support the AGI premise of a handful of people modifying billions of lives without their consent.
I will end with this. Your hand has been played. The AGI enthusiasts have revealed their intentions, and it won’t sit well with basically…everyone. Unless AGI can be attained in the next 1–2 years, it’s likely to face one of the biggest pushbacks our world has ever witnessed. Information spreads fast, and you’re already seeing the mainstream pick up on the absurdity of pursuing AGI. When this technology starts disrupting people’s lives, get ready for more than just regulation.
Let’s take a deep breath. Remember, AI is meant to solve problems and life’s tragedies, not create them.