Interviews
Last edit: 26 Nov 2021 14:29 UTC by Multicore
Related Pages: Interview Series On Risks From AI, Dialogue (format)
Robin Hanson on the futurist focus on AI · abergal · 13 Nov 2019 21:50 UTC · 31 points · 24 comments · 1 min read · LW link (aiimpacts.org)
Geoffrey Miller on Effective Altruism and Rationality · Jacob Falkovich · 15 Jun 2018 17:05 UTC · 18 points · 0 comments · 1 min read · LW link (putanumonit.com)
AI Alignment Podcast: An Overview of Technical AI Alignment in 2018 and 2019 with Buck Shlegeris and Rohin Shah · Palus Astra · 16 Apr 2020 0:50 UTC · 58 points · 27 comments · 89 min read · LW link
Interview on IQ, genes, and genetic engineering with expert (Hsu) · James_Miller · 28 May 2017 22:19 UTC · 7 points · 8 comments · 1 min read · LW link (www.youtube.com)
Evan Hubinger on Inner Alignment, Outer Alignment, and Proposals for Building Safe Advanced AI · Palus Astra · 1 Jul 2020 17:30 UTC · 35 points · 4 comments · 67 min read · LW link
deluks917 on Online Weirdos · Jacob Falkovich · 24 Nov 2018 17:03 UTC · 24 points · 3 comments · 10 min read · LW link
A Key Power of the President is to Coordinate the Execution of Existing Concrete Plans · Ben Pace · 16 Jul 2019 5:06 UTC · 124 points · 13 comments · 10 min read · LW link
AXRP Episode 1 - Adversarial Policies with Adam Gleave · DanielFilan · 29 Dec 2020 20:41 UTC · 12 points · 5 comments · 33 min read · LW link
AXRP Episode 5 - Infra-Bayesianism with Vanessa Kosoy · DanielFilan · 10 Mar 2021 4:30 UTC · 28 points · 12 comments · 35 min read · LW link
AXRP Episode 3 - Negotiable Reinforcement Learning with Andrew Critch · DanielFilan · 29 Dec 2020 20:45 UTC · 26 points · 0 comments · 27 min read · LW link
AXRP Episode 2 - Learning Human Biases with Rohin Shah · DanielFilan · 29 Dec 2020 20:43 UTC · 13 points · 0 comments · 35 min read · LW link
AXRP Episode 8 - Assistance Games with Dylan Hadfield-Menell · DanielFilan · 8 Jun 2021 23:20 UTC · 22 points · 1 comment · 71 min read · LW link
AXRP Episode 7 - Side Effects with Victoria Krakovna · DanielFilan · 14 May 2021 3:50 UTC · 34 points · 6 comments · 43 min read · LW link
AXRP Episode 7.5 - Forecasting Transformative AI from Biological Anchors with Ajeya Cotra · DanielFilan · 28 May 2021 0:20 UTC · 24 points · 1 comment · 67 min read · LW link
AXRP Episode 6 - Debate and Imitative Generalization with Beth Barnes · DanielFilan · 8 Apr 2021 21:20 UTC · 23 points · 3 comments · 59 min read · LW link
AXRP Episode 9 - Finite Factored Sets with Scott Garrabrant · DanielFilan · 24 Jun 2021 22:10 UTC · 56 points · 2 comments · 58 min read · LW link
I wanted to interview Eliezer Yudkowsky but he’s busy so I simulated him instead · lsusr · 16 Sep 2021 7:34 UTC · 109 points · 33 comments · 5 min read · LW link
AXRP Episode 10 - AI’s Future and Impacts with Katja Grace · DanielFilan · 23 Jul 2021 22:10 UTC · 34 points · 2 comments · 76 min read · LW link
AXRP Episode 11 - Attainable Utility and Power with Alex Turner · DanielFilan · 25 Sep 2021 21:10 UTC · 19 points · 5 comments · 52 min read · LW link
AXRP Episode 12 - AI Existential Risk with Paul Christiano · DanielFilan · 2 Dec 2021 2:20 UTC · 36 points · 0 comments · 125 min read · LW link
AXRP Episode 13 - First Principles of AGI Safety with Richard Ngo · DanielFilan · 31 Mar 2022 5:20 UTC · 24 points · 1 comment · 48 min read · LW link
AXRP Episode 14 - Infra-Bayesian Physicalism with Vanessa Kosoy · DanielFilan · 5 Apr 2022 23:10 UTC · 23 points · 9 comments · 52 min read · LW link
Duncan Sabien On Writing · lynettebye · 7 Apr 2022 16:09 UTC · 33 points · 3 comments · 16 min read · LW link
AXRP Episode 15 - Natural Abstractions with John Wentworth · DanielFilan · 23 May 2022 5:40 UTC · 31 points · 1 comment · 57 min read · LW link
AXRP Episode 16 - Preparing for Debate AI with Geoffrey Irving · DanielFilan · 1 Jul 2022 22:20 UTC · 11 points · 0 comments · 37 min read · LW link
Conversation with Paul Christiano · abergal · 11 Sep 2019 23:20 UTC · 44 points · 6 comments · 30 min read · LW link (aiimpacts.org)
AI Alignment Podcast: On Lethal Autonomous Weapons with Paul Scharre · Palus Astra · 16 Mar 2020 23:00 UTC · 11 points · 0 comments · 48 min read · LW link
FLI Podcast: The Precipice: Existential Risk and the Future of Humanity with Toby Ord · Palus Astra · 1 Apr 2020 1:02 UTC · 7 points · 1 comment · 46 min read · LW link
FLI Podcast: On Superforecasting with Robert de Neufville · Palus Astra · 30 Apr 2020 23:08 UTC · 6 points · 0 comments · 52 min read · LW link
Transcription of Eliezer’s January 2010 video Q&A · curiousepic · 14 Nov 2011 17:02 UTC · 110 points · 9 comments · 56 min read · LW link
[Transcript] Richard Feynman on Why Questions · Grognor · 8 Jan 2012 19:01 UTC · 116 points · 45 comments · 5 min read · LW link
Rohin Shah on reasons for AI optimism · abergal · 31 Oct 2019 12:10 UTC · 40 points · 58 comments · 1 min read · LW link (aiimpacts.org)
Situating LessWrong in contemporary philosophy: An interview with Jon Livengood · Suspended Reason · 1 Jul 2020 0:37 UTC · 115 points · 21 comments · 19 min read · LW link
Q&A with Jürgen Schmidhuber on risks from AI · XiXiDu · 15 Jun 2011 15:51 UTC · 59 points · 45 comments · 4 min read · LW link
Bloggingheads: Yudkowsky and Horgan · Eliezer Yudkowsky · 7 Jun 2008 22:09 UTC · 7 points · 37 comments · 1 min read · LW link
Q&A with experts on risks from AI #1 · XiXiDu · 8 Jan 2012 11:46 UTC · 45 points · 67 comments · 9 min read · LW link
Q&A with Stan Franklin on risks from AI · XiXiDu · 11 Jun 2011 15:22 UTC · 36 points · 10 comments · 2 min read · LW link
Aella on Rationality and the Void · Jacob Falkovich · 31 Oct 2019 21:40 UTC · 27 points · 8 comments · 15 min read · LW link
GiveWell interview with major SIAI donor Jaan Tallinn · jsalvatier · 19 Jul 2011 15:10 UTC · 25 points · 8 comments · 1 min read · LW link
My hour-long interview with Yudkowsky on “Becoming a Rationalist” · lukeprog · 6 Feb 2011 3:19 UTC · 33 points · 22 comments · 1 min read · LW link
Muehlhauser-Wang Dialogue · lukeprog · 22 Apr 2012 22:40 UTC · 34 points · 288 comments · 12 min read · LW link
Q&A with Abram Demski on risks from AI · XiXiDu · 17 Jan 2012 9:43 UTC · 33 points · 71 comments · 9 min read · LW link
Q&A with experts on risks from AI #2 · XiXiDu · 9 Jan 2012 19:40 UTC · 22 points · 29 comments · 7 min read · LW link
BHTV: Jaron Lanier and Yudkowsky · Eliezer Yudkowsky · 1 Nov 2008 17:04 UTC · 8 points · 66 comments · 1 min read · LW link
BHTV: de Grey and Yudkowsky · Eliezer Yudkowsky · 13 Dec 2008 15:28 UTC · 10 points · 13 comments · 1 min read · LW link
Interview with Putanumonit · Jacob Falkovich · 24 Apr 2019 14:53 UTC · 15 points · 1 comment · 1 min read · LW link
[Link] My Interview with Dilbert creator Scott Adams · James_Miller · 13 Sep 2016 5:22 UTC · 17 points · 27 comments · 1 min read · LW link
BHTV: Yudkowsky / Wilkinson · Eliezer Yudkowsky · 26 Jan 2009 1:10 UTC · 4 points · 19 comments · 1 min read · LW link
BHTV: Yudkowsky / Robert Greene · Eliezer Yudkowsky · 16 Nov 2009 20:26 UTC · 16 points · 24 comments · 1 min read · LW link
Link: Interview with Vladimir Vapnik · Daniel_Burfoot · 25 Jul 2009 13:36 UTC · 22 points · 6 comments · 2 min read · LW link
AXRP Episode 4 - Risks from Learned Optimization with Evan Hubinger · DanielFilan · 18 Feb 2021 0:03 UTC · 41 points · 10 comments · 86 min read · LW link
Quotes from the WWMoR Podcast Episode with Eliezer · MondSemmel · 13 Mar 2021 21:43 UTC · 94 points · 3 comments · 4 min read · LW link
Interview with Christine M. Korsgaard: Animal Ethics, Kantianism, Utilitarianism · Erich_Grunewald · 8 May 2021 11:44 UTC · 11 points · 2 comments · 1 min read · LW link (www.erichgrunewald.com)
Interview with Olle Häggström: Reason, COVID-19 and Academic Freedom in Sweden · Erich_Grunewald · 21 Aug 2021 15:08 UTC · 8 points · 0 comments · 2 min read · LW link (www.erichgrunewald.com)
See Eliezer talk with PZ Myers and David Brin (and me) about immortality this Sunday · Eneasz · 17 Jul 2013 15:56 UTC · 26 points · 5 comments · 1 min read · LW link
Notes from a conversation with Ing. Agr. Adriana Balzarini · Pablo Repetto · 8 May 2022 15:56 UTC · 5 points · 0 comments · 2 min read · LW link (pabloernesto.github.io)