LessWrong Archive: December 2021
A non-magical explanation of Jeffrey Epstein · lc · Dec 28, 2021, 9:15 PM · 329 points · 59 comments · 15 min read · 1 review
The Plan · johnswentworth · Dec 10, 2021, 11:41 PM · 260 points · 78 comments · 14 min read · 1 review
Omicron: My Current Model · Zvi · Dec 28, 2021, 5:10 PM · 253 points · 72 comments · 10 min read · (thezvi.wordpress.com)
Morality is Scary · Wei Dai · Dec 2, 2021, 6:35 AM · 230 points · 116 comments · 4 min read · 1 review
ARC’s first technical report: Eliciting Latent Knowledge · paulfchristiano, Mark Xu and Ajeya Cotra · Dec 14, 2021, 8:09 PM · 228 points · 90 comments · 1 min read · 3 reviews · (docs.google.com)
Zvi’s Thoughts on the Survival and Flourishing Fund (SFF) · Zvi · Dec 14, 2021, 2:30 PM · 193 points · 65 comments · 64 min read · 1 review · (thezvi.wordpress.com)
Book Launch: The Engines of Cognition · Ben Pace · Dec 21, 2021, 7:24 AM · 174 points · 56 comments · 5 min read
2021 AI Alignment Literature Review and Charity Comparison · Larks · Dec 23, 2021, 2:06 PM · 168 points · 28 comments · 73 min read
Worst-case thinking in AI alignment · Buck · Dec 23, 2021, 1:29 AM · 167 points · 18 comments · 6 min read · 2 reviews
Biology-Inspired AGI Timelines: The Trick That Never Works · Eliezer Yudkowsky · Dec 1, 2021, 10:35 PM · 158 points · 142 comments · 65 min read · 1 review
Dear Self; We Need To Talk About Social Media · Elizabeth · Dec 7, 2021, 12:40 AM · 157 points · 19 comments · 10 min read · 1 review · (acesounderglass.com)
Omicron Post #7 · Zvi · Dec 16, 2021, 5:30 PM · 155 points · 41 comments · 12 min read · (thezvi.wordpress.com)
Omicron Post #4 · Zvi · Dec 6, 2021, 5:00 PM · 153 points · 66 comments · 15 min read · (thezvi.wordpress.com)
Reply to Eliezer on Biological Anchors · HoldenKarnofsky · Dec 23, 2021, 4:15 PM · 149 points · 46 comments · 15 min read
Transformer Circuits · evhub · Dec 22, 2021, 9:09 PM · 144 points · 4 comments · 3 min read · (transformer-circuits.pub)
Moore’s Law, AI, and the pace of progress · Veedrac · Dec 11, 2021, 3:02 AM · 128 points · 38 comments · 24 min read
My Overview of the AI Alignment Landscape: A Bird’s Eye View · Neel Nanda · Dec 15, 2021, 11:44 PM · 127 points · 9 comments · 15 min read
COVID Skepticism Isn’t About Science · jaspax · Dec 29, 2021, 5:53 PM · 127 points · 76 comments · 7 min read
Law of No Evidence · Zvi · Dec 20, 2021, 1:50 PM · 122 points · 20 comments · 4 min read · 1 review · (thezvi.wordpress.com)
Perpetual Dickensian Poverty? · jefftk · Dec 21, 2021, 1:30 PM · 120 points · 18 comments · 1 min read · (www.jefftk.com)
Experiences raising children in shared housing · juliawise · Dec 21, 2021, 5:09 PM · 117 points · 5 comments · 6 min read
In Defense of Attempting Hard Things, and my story of the Leverage ecosystem · Cathleen · Dec 17, 2021, 11:08 PM · 115 points · 43 comments · 2 reviews · (cathleensdiscoveries.com)
The 2020 Review · Raemon · Dec 2, 2021, 12:39 AM · 112 points · 39 comments · 6 min read
Internet Literacy Atrophy · Elizabeth · Dec 26, 2021, 12:30 PM · 111 points · 49 comments · 3 min read · (acesounderglass.com)
Conversation on technology forecasting and gradualism · Richard_Ngo, Eliezer Yudkowsky, Rohin Shah and Rob Bensinger · Dec 9, 2021, 9:23 PM · 108 points · 30 comments · 31 min read
Merry Christmas · lsusr · Dec 26, 2021, 7:03 AM · 107 points · 16 comments · 1 min read
Perishable Knowledge · lsusr · Dec 18, 2021, 5:53 AM · 104 points · 6 comments · 3 min read
Omicron Post #5 · Zvi · Dec 9, 2021, 9:10 PM · 102 points · 18 comments · 14 min read · (thezvi.wordpress.com)
Two (very different) kinds of donors · Duncan Sabien (Inactive) · Dec 22, 2021, 1:43 AM · 101 points · 19 comments · 3 min read
Interpreting Yudkowsky on Deep vs Shallow Knowledge · adamShimi · Dec 5, 2021, 5:32 PM · 100 points · 32 comments · 24 min read
Omicron Post #8 · Zvi · Dec 20, 2021, 11:10 PM · 96 points · 33 comments · 16 min read · (thezvi.wordpress.com)
Ten Minutes with Sam Altman · lsusr · Dec 28, 2021, 7:32 AM · 91 points · 11 comments · 3 min read
More Christiano, Cotra, and Yudkowsky on AI progress · Eliezer Yudkowsky and Ajeya Cotra · Dec 6, 2021, 8:33 PM · 91 points · 28 comments · 40 min read
Shulman and Yudkowsky on AI progress · Eliezer Yudkowsky and CarlShulman · Dec 3, 2021, 8:05 PM · 90 points · 16 comments · 20 min read
Omicron Post #9 · Zvi · Dec 23, 2021, 9:50 PM · 89 points · 11 comments · 19 min read · (thezvi.wordpress.com)
Omicron Post #6 · Zvi · Dec 13, 2021, 6:00 PM · 89 points · 30 comments · 8 min read · (thezvi.wordpress.com)
There is essentially one best-validated theory of cognition. · abramdemski · Dec 10, 2021, 3:51 PM · 89 points · 33 comments · 3 min read
Deepmind’s Gopher—more powerful than GPT-3 · hath · Dec 8, 2021, 5:06 PM · 86 points · 26 comments · (deepmind.com)
A Summary Of Anthropic’s First Paper · Sam Ringer · Dec 30, 2021, 12:48 AM · 85 points · 1 comment · 8 min read
ML Alignment Theory Program under Evan Hubinger · ozhang, evhub and Victor W · Dec 6, 2021, 12:03 AM · 82 points · 3 comments · 2 min read
Privacy and Manipulation · Raemon · Dec 5, 2021, 12:39 AM · 80 points · 41 comments · 8 min read
Reviews of “Is power-seeking AI an existential risk?” · Joe Carlsmith · Dec 16, 2021, 8:48 PM · 80 points · 20 comments · 1 min read
Behavior Cloning is Miscalibrated · leogao · Dec 5, 2021, 1:36 AM · 78 points · 3 comments · 3 min read
LessWrong discussed in New Ideas in Psychology article · rogersbacon · Dec 9, 2021, 9:01 PM · 76 points · 11 comments · 4 min read
Risks from AI persuasion · Beth Barnes · Dec 24, 2021, 1:48 AM · 76 points · 15 comments · 31 min read
Language Model Alignment Research Internships · Ethan Perez · Dec 13, 2021, 7:53 PM · 74 points · 1 comment · 1 min read
Teaser: Hard-coding Transformer Models · MadHatter · Dec 12, 2021, 10:04 PM · 74 points · 19 comments · 1 min read
[Question] Where can one learn deep intuitions about information theory? · Valentine · Dec 16, 2021, 3:47 PM · 72 points · 27 comments · 2 min read
COVID and the holidays · Connor_Flexman · Dec 8, 2021, 11:13 PM · 71 points · 31 comments · 9 min read
Some abstract, non-technical reasons to be non-maximally-pessimistic about AI alignment · Rob Bensinger · Dec 12, 2021, 2:08 AM · 70 points · 35 comments · 7 min read