Carl Shulman on The Lunar Society (7-hour, two-part podcast)

Link post

Dwarkesh’s summary for part 1:

In terms of the depth and range of topics, this episode is the best I’ve done.

No part of my worldview is the same after talking with Carl Shulman. He’s the most interesting intellectual you’ve never heard of.

We ended up talking for 8 hours, so I’m splitting this episode into 2 parts.

This part is about Carl’s model of an intelligence explosion, which integrates everything from:

  • how fast algorithmic progress & hardware improvements in AI are happening,

  • what primate evolution suggests about the scaling hypothesis,

  • how soon AIs could do large parts of AI research themselves, and whether that would lead to faster and faster doublings of the number of AI researchers,

  • how quickly robots produced from existing factories could take over the economy.

We also discuss the odds of a takeover based on whether the AI is aligned before the intelligence explosion happens, and Carl explains why he’s more optimistic than Eliezer.

The next part, which I’ll release next week, is about all the specific mechanisms of an AI takeover, plus a whole bunch of other galaxy brain stuff.

Maybe 3 people in the world have thought as rigorously as Carl about so many interesting topics. So this was a huge pleasure.

Dwarkesh’s summary for part 2:

The second half of my 7 hour conversation with Carl Shulman is out!

My favorite part! And the one that had the biggest impact on my worldview.

Here, Carl lays out how an AI takeover might happen:

  • AI can threaten mutually assured destruction from bioweapons,

  • use cyber attacks to take over physical infrastructure,

  • build mechanical armies,

  • spread seed AIs we can never exterminate,

  • offer tech and other advantages to collaborating countries, etc.

Plus we talk about a whole bunch of weird and interesting topics which Carl has thought about:

  • what is the far future best case scenario for humanity

  • what it would look like for AI to make thousands of years of intellectual progress in a month

  • how do we detect deception in superhuman models

  • does space warfare favor defense or offense

  • is a Malthusian state inevitable in the long run

  • why markets haven’t priced in explosive economic growth

  • & much more

Carl also explains how he developed such a rigorous, thoughtful, and interdisciplinary model of the biggest problems in the world.

(Also discussed on the EA Forum here.)