This sequence is a chronological series of chatroom conversation logs about artificial general intelligence. A wide range of topics is covered, beginning with conversations about alignment difficulty.
Short summaries of each post, and links to audio versions, are available here. There are also two related posts released shortly before this sequence:
Comments [by Nate Soares] on Joe Carlsmith’s “Is power-seeking AI an existential risk?”
Rob Bensinger edited and posted this sequence, and Matthew Graves helped with much of the formatting.