2021 MIRI Conversations

This sequence is a (chronological) series of chatroom conversation logs about artificial general intelligence. The conversations cover a wide range of topics, beginning with discussions of alignment difficulty.

Short summaries of each post, and links to audio versions, are available here. Two related posts were also released shortly before this sequence.

Rob Bensinger edited and posted this sequence, and Matthew Graves helped with much of the formatting.

Part One (Primarily Richard Ngo and Eliezer Yudkowsky)

Ngo and Yudkowsky on alignment difficulty

Ngo and Yudkowsky on AI capability gains

Yudkowsky and Christiano discuss “Takeoff Speeds”

Soares, Tallinn, and Yudkowsky discuss AGI cognition

Part Two (Primarily Paul Christiano and Eliezer Yudkowsky)

Christiano, Cotra, and Yudkowsky on AI progress

Biology-Inspired AGI Timelines: The Trick That Never Works

Reply to Eliezer on Biological Anchors

Shulman and Yudkowsky on AI progress

More Christiano, Cotra, and Yudkowsky on AI progress

Conversation on technology forecasting and gradualism

Part Three (Varied Participants)

Ngo’s view on alignment difficulty

Ngo and Yudkowsky on scientific reasoning and pivotal acts

Christiano and Yudkowsky on AI predictions and human intelligence

Shah and Yudkowsky on alignment failures

Late 2021 MIRI Conversations: AMA / Discussion