Sam Altman on GPT-4, ChatGPT, and the Future of AI | Lex Fridman Podcast #367

Link post

Lex Fridman just released a podcast episode with Sam Altman, CEO of OpenAI. In my opinion, there wasn’t much here that hasn’t already been said in other recent interviews. However, here are some scattered notes on the parts I found interesting through an AI safety lens:

AI risk https://youtu.be/L_Guz73e6fw?t=3266

  • Lex asks Sama to steelman Eliezer Yudkowsky’s views

  • Sama said there’s some chance of no hope, but the only way he knows how to fix things is to keep iterating and eliminating the “1-shot-to-get-it-right” cases. He does like one of Eliezer’s posts laying out his reasons for thinking alignment is hard [I believe this is in reference to AGI Ruin: A List of Lethalities].

  • Lex confirms he will do an interview with Eliezer.

  • Sama: Now is the time to ramp up technical alignment work.

  • Lex: What about fast takeoffs?

  • Sama: I’m not that surprised by GPT-4, was a little surprised by ChatGPT [I think this means the takeoff feels slow to him]. I’m in the long-takeoff, short-timelines quadrant. I’m scared of the short-takeoff scenarios.

  • Sama has heard of but not seen Ex Machina.

On power https://www.youtube.com/watch?v=L_Guz73e6fw&t=4614s

  • Sama says it’s weird that something on the order of thousands of people will be in control of the first AGI.

  • Acknowledges that AI safety people think OAI deploying things fast is bad.

  • Sama asks how Lex thinks they’re doing.

  • Lex likes the transparency and openly sharing the issues.

  • Sama: Should we open source GPT-4?

  • Lex: Knowing people at OAI, no (because he trusts them).

  • Sama: I think people at OAI know the stakes of what we’re building. But we’re always looking for feedback from smart people.

  • Lex: How do you take feedback?

  • Sama: Twitter is unreadable. Mostly from convos like this.

On responsibility https://youtu.be/L_Guz73e6fw?t=6813

  • Sama: We will have very significant but new and different challenges [with governing/deciding how to steer AI].

  • Lex: Is it up to GPT or the humans to decrease the amount of hate in the world?

  • Sama: I think we as OAI have responsibility for the tools we put out in the world, but I think the tools themselves can’t have responsibility.

  • Lex: So there could be harm caused by these tools—

  • Sama: There will be harm caused by these tools. There will be tremendous benefits. But tools do wonderful good and real bad. And we will minimize the bad and maximize the good.

Jailbreaking https://youtu.be/L_Guz73e6fw?t=6939

  • Lex: How do you prevent jailbreaking?

  • Sama: It kinda sucks being on the side of the company being jailbroken. We want the users to have a lot of control and have the models behave how they want within broad bounds. The existence of jailbreaking shows we haven’t solved that problem yet, and the more we solve it, the less need there will be for jailbreaking. People don’t really jailbreak iPhones anymore.

Shipping products https://youtu.be/L_Guz73e6fw?t=7034

  • Lex: Shows this tweet summarizing all the OAI products released in the last year.

  • Sama: There’s a question of whether we should be very proud of that or other companies should be very embarrassed. We have a high bar on our team, we work hard, we give a huge amount of trust, autonomy, and authority to individual people, and we try to hold each other to very high standards. Those things together enable us to ship at such high velocity.

  • Lex: How do you go about hiring?

  • Sama: I spend about 1/3 of my time hiring, and I approve every OAI hire. There are no shortcuts to good hiring.