A Tentative Timeline of The Near Future (2022-2025) for Self-Accountability, by Yitz (Dec 5, 2022, 5:33 AM) · 26 points · 0 comments · 4 min read · LW link
Nook Nature, by Duncan Sabien (Inactive) (Dec 5, 2022, 4:10 AM) · 54 points · 18 comments · 10 min read · LW link
Probably good projects for the AI safety ecosystem, by Ryan Kidd (Dec 5, 2022, 2:26 AM) · 78 points · 40 comments · 2 min read · LW link
Historical Notes on Charitable Funds, by jefftk (Dec 4, 2022, 11:30 PM) · 28 points · 0 comments · 3 min read · LW link (www.jefftk.com)
AGI as a Black Swan Event, by Stephen McAleese (Dec 4, 2022, 11:00 PM) · 8 points · 8 comments · 7 min read · LW link
South Bay ACX/LW Pre-Holiday Get-Together, by IS (Dec 4, 2022, 10:57 PM) · 10 points · 0 comments · 1 min read · LW link
ChatGPT is settling the Chinese Room argument, by averros (Dec 4, 2022, 8:25 PM) · −7 points · 7 comments · 1 min read · LW link
Race to the Top: Benchmarks for AI Safety, by Isabella Duan (Dec 4, 2022, 6:48 PM) · 29 points · 6 comments · 1 min read · LW link
Open & Welcome Thread—December 2022, by niplav (Dec 4, 2022, 3:06 PM) · 8 points · 22 comments · 1 min read · LW link
AI can exploit safety plans posted on the Internet, by Peter S. Park (Dec 4, 2022, 12:17 PM) · −15 points · 4 comments · LW link
ChatGPT seems overconfident to me, by qbolec (Dec 4, 2022, 8:03 AM) · 19 points · 3 comments · 16 min read · LW link
Could an AI be Religious?, by mk54 (Dec 4, 2022, 5:00 AM) · −12 points · 14 comments · 1 min read · LW link
Can GPT-3 Write Contra Dances?, by jefftk (Dec 4, 2022, 3:00 AM) · 6 points · 4 comments · 10 min read · LW link (www.jefftk.com)
Take 3: No indescribable heavenworlds., by Charlie Steiner (Dec 4, 2022, 2:48 AM) · 23 points · 12 comments · 2 min read · LW link
Summary of a new study on out-group hate (and how to fix it), by DirectedEvolution (Dec 4, 2022, 1:53 AM) · 60 points · 30 comments · 3 min read · LW link (www.pnas.org)
[Question] Will the first AGI agent have been designed as an agent (in addition to an AGI)?, by nahoj (Dec 3, 2022, 8:32 PM) · 1 point · 8 comments · 1 min read · LW link
Logical induction for software engineers, by Alex Flint (Dec 3, 2022, 7:55 PM) · 163 points · 8 comments · 27 min read · LW link · 1 review
Utilitarianism is the only option, by aelwood (Dec 3, 2022, 5:14 PM) · −13 points · 7 comments · LW link
Our 2022 Giving, by jefftk (Dec 3, 2022, 3:40 PM) · 33 points · 0 comments · 1 min read · LW link (www.jefftk.com)
[Question] Is school good or bad?, by tailcalled (Dec 3, 2022, 1:14 PM) · 10 points · 76 comments · 1 min read · LW link
MrBeast’s Squid Game Tricked Me, by lsusr (Dec 3, 2022, 5:50 AM) · 75 points · 1 comment · 2 min read · LW link
Great Cryonics Survey of 2022, by Mati_Roy (Dec 3, 2022, 5:10 AM) · 16 points · 0 comments · 1 min read · LW link
Causal scrubbing: results on induction heads, by LawrenceC, Adrià Garriga-alonso, Nicholas Goldowsky-Dill, ryan_greenblatt, Tao Lin, jenny, Ansh Radhakrishnan, Buck and Nate Thomas (Dec 3, 2022, 12:59 AM) · 34 points · 1 comment · 17 min read · LW link
Causal scrubbing: results on a paren balance checker, by LawrenceC, Adrià Garriga-alonso, Nicholas Goldowsky-Dill, ryan_greenblatt, Tao Lin, jenny, Ansh Radhakrishnan, Buck and Nate Thomas (Dec 3, 2022, 12:59 AM) · 34 points · 2 comments · 30 min read · LW link
Causal scrubbing: Appendix, by LawrenceC, Adrià Garriga-alonso, Nicholas Goldowsky-Dill, ryan_greenblatt, jenny, Ansh Radhakrishnan, Buck and Nate Thomas (Dec 3, 2022, 12:58 AM) · 18 points · 4 comments · 20 min read · LW link
Causal Scrubbing: a method for rigorously testing interpretability hypotheses [Redwood Research], by LawrenceC, Adrià Garriga-alonso, Nicholas Goldowsky-Dill, ryan_greenblatt, jenny, Ansh Radhakrishnan, Buck and Nate Thomas (Dec 3, 2022, 12:58 AM) · 206 points · 35 comments · 20 min read · LW link · 1 review
Take 2: Building tools to help build FAI is a legitimate strategy, but it’s dual-use., by Charlie Steiner (Dec 3, 2022, 12:54 AM) · 17 points · 1 comment · 2 min read · LW link
D&D.Sci December 2022: The Boojumologist, by abstractapplic (Dec 2, 2022, 11:39 PM) · 32 points · 9 comments · 2 min read · LW link
Subsets and quotients in interpretability, by Erik Jenner (Dec 2, 2022, 11:13 PM) · 26 points · 1 comment · 7 min read · LW link
Research Principles for 6 Months of AI Alignment Studies, by Shoshannah Tekofsky (Dec 2, 2022, 10:55 PM) · 23 points · 3 comments · 6 min read · LW link
Three Fables of Magical Girls and Longtermism, by Ulisse Mini (Dec 2, 2022, 10:01 PM) · 33 points · 11 comments · 2 min read · LW link
Brun’s theorem and sieve theory, by Ege Erdil (Dec 2, 2022, 8:57 PM) · 31 points · 1 comment · 73 min read · LW link
Apply for the ML Upskilling Winter Camp in Cambridge, UK [2-10 Jan], by hannah wing-yee (Dec 2, 2022, 8:45 PM) · 3 points · 0 comments · 2 min read · LW link
Takeoff speeds, the chimps analogy, and the Cultural Intelligence Hypothesis, by NickGabs (Dec 2, 2022, 7:14 PM) · 16 points · 2 comments · 4 min read · LW link
[ASoT] Finetuning, RL, and GPT’s world prior, by Jozdien (Dec 2, 2022, 4:33 PM) · 45 points · 8 comments · 5 min read · LW link
NeurIPS Safety & ChatGPT. MLAISU W48, by Esben Kran and Steinthal (Dec 2, 2022, 3:50 PM) · 3 points · 0 comments · 4 min read · LW link (newsletter.apartresearch.com)
[Question] Is ChatGPT rigth when advising to brush the tongue when brushing teeth?, by ChristianKl (Dec 2, 2022, 2:53 PM) · 13 points · 14 comments · 2 min read · LW link
Jailbreaking ChatGPT on Release Day, by Zvi (Dec 2, 2022, 1:10 PM) · 242 points · 77 comments · 6 min read · LW link (thezvi.wordpress.com) · 1 review
Deconfusing Direct vs Amortised Optimization, by beren (Dec 2, 2022, 11:30 AM) · 136 points · 19 comments · 10 min read · LW link
Inner and outer alignment decompose one hard problem into two extremely hard problems, by TurnTrout (Dec 2, 2022, 2:43 AM) · 149 points · 22 comments · 47 min read · LW link · 3 reviews
New Feature: Collaborative editing now supports logged-out users, by RobertM (Dec 2, 2022, 2:41 AM) · 10 points · 0 comments · 1 min read · LW link
Mastering Stratego (Deepmind), by svemirski (Dec 2, 2022, 2:21 AM) · 6 points · 0 comments · 1 min read · LW link (www.deepmind.com)
Update on Harvard AI Safety Team and MIT AI Alignment, by Xander Davies, Sam Marks, kaivu, tlevin, leni, maxnadeau and Naomi Bashkansky (Dec 2, 2022, 12:56 AM) · 60 points · 4 comments · 8 min read · LW link
Quick look: cognitive damage from well-administered anesthesia, by Elizabeth (Dec 2, 2022, 12:40 AM) · 28 points · 0 comments · 4 min read · LW link (acesounderglass.com)
Against meta-ethical hedonism, by Joe Carlsmith (Dec 2, 2022, 12:23 AM) · 24 points · 5 comments · 35 min read · LW link
Lumenators for very lazy British people, by shakeelh (Dec 2, 2022, 12:18 AM) · 16 points · 3 comments · 1 min read · LW link
Understanding goals in complex systems, by Johannes C. Mayer (Dec 1, 2022, 11:49 PM) · 9 points · 0 comments · 1 min read · LW link (www.youtube.com)
A challenge for AGI organizations, and a challenge for readers, by Rob Bensinger and Eliezer Yudkowsky (Dec 1, 2022, 11:11 PM) · 302 points · 33 comments · 2 min read · LW link
Playing with Aerial Photos, by jefftk (Dec 1, 2022, 10:50 PM) · 9 points · 0 comments · 1 min read · LW link (www.jefftk.com)
Take 1: We’re not going to reverse-engineer the AI., by Charlie Steiner (Dec 1, 2022, 10:41 PM) · 38 points · 4 comments · 4 min read · LW link