Interviews

Related Pages: Interview Series On Risks From AI, Dialogue (format)

Robin Hanson on the futurist focus on AI

abergal · 13 Nov 2019 21:50 UTC
31 points
24 comments · 1 min read · LW link
(aiimpacts.org)

Geoffrey Miller on Effective Altruism and Rationality

Jacob Falkovich · 15 Jun 2018 17:05 UTC
18 points
0 comments · 1 min read · LW link
(putanumonit.com)

AI Alignment Podcast: An Overview of Technical AI Alignment in 2018 and 2019 with Buck Shlegeris and Rohin Shah

Palus Astra · 16 Apr 2020 0:50 UTC
46 points
27 comments · 89 min read · LW link

Interview on IQ, genes, and genetic engineering with expert (Hsu)

James_Miller · 28 May 2017 22:19 UTC
7 points
8 comments · 1 min read · LW link
(www.youtube.com)

Evan Hubinger on Inner Alignment, Outer Alignment, and Proposals for Building Safe Advanced AI

Palus Astra · 1 Jul 2020 17:30 UTC
34 points
4 comments · 67 min read · LW link

deluks917 on Online Weirdos

Jacob Falkovich · 24 Nov 2018 17:03 UTC
24 points
3 comments · 10 min read · LW link

A Key Power of the President is to Coordinate the Execution of Existing Concrete Plans

Ben Pace · 16 Jul 2019 5:06 UTC
117 points
13 comments · 10 min read · LW link

AXRP Episode 1 - Adversarial Policies with Adam Gleave

DanielFilan · 29 Dec 2020 20:41 UTC
10 points
5 comments · 33 min read · LW link

AXRP Episode 5 - Infra-Bayesianism with Vanessa Kosoy

DanielFilan · 10 Mar 2021 4:30 UTC
26 points
11 comments · 35 min read · LW link

AXRP Episode 3 - Negotiable Reinforcement Learning with Andrew Critch

DanielFilan · 29 Dec 2020 20:45 UTC
26 points
0 comments · 27 min read · LW link

AXRP Episode 2 - Learning Human Biases with Rohin Shah

DanielFilan · 29 Dec 2020 20:43 UTC
11 points
0 comments · 35 min read · LW link

Conversation with Paul Christiano

abergal · 11 Sep 2019 23:20 UTC
44 points
6 comments · 30 min read · LW link
(aiimpacts.org)

AI Alignment Podcast: On Lethal Autonomous Weapons with Paul Scharre

Palus Astra · 16 Mar 2020 23:00 UTC
11 points
0 comments · 48 min read · LW link

FLI Podcast: The Precipice: Existential Risk and the Future of Humanity with Toby Ord

Palus Astra · 1 Apr 2020 1:02 UTC
7 points
1 comment · 46 min read · LW link

FLI Podcast: On Superforecasting with Robert de Neufville

Palus Astra · 30 Apr 2020 23:08 UTC
6 points
0 comments · 52 min read · LW link

Transcription of Eliezer’s January 2010 video Q&A

curiousepic · 14 Nov 2011 17:02 UTC
109 points
9 comments · 56 min read · LW link

[Transcript] Richard Feynman on Why Questions

Grognor · 8 Jan 2012 19:01 UTC
108 points
45 comments · 5 min read · LW link

Rohin Shah on reasons for AI optimism

abergal · 31 Oct 2019 12:10 UTC
40 points
58 comments · 1 min read · LW link
(aiimpacts.org)

Situating LessWrong in contemporary philosophy: An interview with Jon Livengood

Suspended Reason · 1 Jul 2020 0:37 UTC
109 points
21 comments · 19 min read · LW link

Q&A with Jürgen Schmidhuber on risks from AI

XiXiDu · 15 Jun 2011 15:51 UTC
54 points
45 comments · 4 min read · LW link

Bloggingheads: Yudkowsky and Horgan

Eliezer Yudkowsky · 7 Jun 2008 22:09 UTC
6 points
37 comments · 1 min read · LW link

Q&A with experts on risks from AI #1

XiXiDu · 8 Jan 2012 11:46 UTC
45 points
67 comments · 9 min read · LW link

Q&A with Stan Franklin on risks from AI

XiXiDu · 11 Jun 2011 15:22 UTC
36 points
10 comments · 2 min read · LW link

Aella on Rationality and the Void

Jacob Falkovich · 31 Oct 2019 21:40 UTC
27 points
8 comments · 15 min read · LW link

GiveWell interview with major SIAI donor Jaan Tallinn

jsalvatier · 19 Jul 2011 15:10 UTC
25 points
8 comments · 1 min read · LW link

My hour-long interview with Yudkowsky on “Becoming a Rationalist”

lukeprog · 6 Feb 2011 3:19 UTC
33 points
22 comments · 1 min read · LW link

Muehlhauser-Wang Dialogue

lukeprog · 22 Apr 2012 22:40 UTC
34 points
288 comments · 12 min read · LW link

Q&A with Abram Demski on risks from AI

XiXiDu · 17 Jan 2012 9:43 UTC
33 points
71 comments · 9 min read · LW link

Q&A with experts on risks from AI #2

XiXiDu · 9 Jan 2012 19:40 UTC
22 points
29 comments · 7 min read · LW link

BHTV: Jaron Lanier and Yudkowsky

Eliezer Yudkowsky · 1 Nov 2008 17:04 UTC
7 points
66 comments · 1 min read · LW link

BHTV: de Grey and Yudkowsky

Eliezer Yudkowsky · 13 Dec 2008 15:28 UTC
10 points
13 comments · 1 min read · LW link

Interview with Putanumonit

Jacob Falkovich · 24 Apr 2019 14:53 UTC
15 points
1 comment · 1 min read · LW link

[Link] My Interview with Dilbert creator Scott Adams

James_Miller · 13 Sep 2016 5:22 UTC
17 points
27 comments · 1 min read · LW link

BHTV: Yudkowsky / Wilkinson

Eliezer Yudkowsky · 26 Jan 2009 1:10 UTC
4 points
19 comments · 1 min read · LW link

BHTV: Yudkowsky / Robert Greene

Eliezer Yudkowsky · 16 Nov 2009 20:26 UTC
16 points
24 comments · 1 min read · LW link

Link: Interview with Vladimir Vapnik

Daniel_Burfoot · 25 Jul 2009 13:36 UTC
22 points
6 comments · 2 min read · LW link

AXRP Episode 4 - Risks from Learned Optimization with Evan Hubinger

DanielFilan · 18 Feb 2021 0:03 UTC
41 points
10 comments · 86 min read · LW link

Quotes from the WWMoR Podcast Episode with Eliezer

MondSemmel · 13 Mar 2021 21:43 UTC
88 points
3 comments · 4 min read · LW link