Page 2
Rambling thoughts on having multiple selves · cranberry_bear · Apr 11, 2022, 10:43 PM · 15 points · 1 comment · 3 min read · LW link
An AI-in-a-box success model · azsantosk · Apr 11, 2022, 10:28 PM · 16 points · 1 comment · 10 min read · LW link
The Regulatory Option: A response to near 0% survival odds · Matthew Lowenstein · Apr 11, 2022, 10:00 PM · 46 points · 21 comments · 6 min read · LW link
The Efficient LessWrong Hypothesis—Stock Investing Competition · MrThink · Apr 11, 2022, 8:43 PM · 30 points · 35 comments · 2 min read · LW link
Review: Structure and Interpretation of Computer Programs · L Rudolf L · Apr 11, 2022, 8:27 PM · 17 points · 9 comments · 10 min read · LW link (www.strataoftheworld.com)
[Question] Underappreciated content on LessWrong · Ege Erdil · Apr 11, 2022, 5:40 PM · 22 points · 5 comments · 1 min read · LW link
Editing Advice for LessWrong Users · JustisMills · Apr 11, 2022, 4:32 PM · 236 points · 14 comments · 6 min read · LW link · 1 review
Post-history is written by the martyrs · Veedrac · Apr 11, 2022, 3:45 PM · 50 points · 2 comments · 19 min read · LW link (www.royalroad.com)
What Chords Do You Need? · jefftk · Apr 11, 2022, 3:00 PM · 11 points · 0 comments · 3 min read · LW link (www.jefftk.com)
What can people not smart/technical/”competent” enough for AI research/AI risk work do to reduce AI-risk/maximize AI safety? (which is most people?) · Alex K. Chen (parrot) · Apr 11, 2022, 2:05 PM · 7 points · 3 comments · 3 min read · LW link
Goodhart’s Law Causal Diagrams · JustinShovelain and Jeremy Gillen · Apr 11, 2022, 1:52 PM · 35 points · 6 comments · 6 min read · LW link
China Covid Update #1 · Zvi · Apr 11, 2022, 1:40 PM · 88 points · 22 comments · 3 min read · LW link (thezvi.wordpress.com)
ACX Meetup Copenhagen, Denmark · Søren Elverlin · Apr 11, 2022, 11:53 AM · 4 points · 0 comments · 1 min read · LW link
Is it time to start thinking about what AI Friendliness means? · Victor Novikov · Apr 11, 2022, 9:32 AM · 18 points · 6 comments · 3 min read · LW link
[Question] Is there an equivalent of the CDF for grading predictions? · Optimization Process · Apr 11, 2022, 5:30 AM · 6 points · 5 comments · 1 min read · LW link
[Question] Impactful data science projects · Valentin2026 · Apr 11, 2022, 4:27 AM · 5 points · 2 comments · 1 min read · LW link
[Question] Could we set a resolution/stopper for the upper bound of the utility function of an AI? · FinalFormal2 · Apr 11, 2022, 3:10 AM · −5 points · 2 comments · 1 min read · LW link
Epistemic Slipperiness · Raemon · Apr 11, 2022, 1:48 AM · 59 points · 18 comments · 7 min read · LW link
[Question] What is the most efficient way to create more worlds in the many worlds interpretation of quantum mechanics? · seank · Apr 11, 2022, 12:26 AM · 4 points · 11 comments · 1 min read · LW link
[Question] Convince me that humanity is as doomed by AGI as Yudkowsky et al., seems to believe · Yitz · Apr 10, 2022, 9:02 PM · 92 points · 141 comments · 2 min read · LW link
Emotionally Confronting a Probably-Doomed World: Against Motivation Via Dignity Points · TurnTrout · Apr 10, 2022, 6:45 PM · 155 points · 7 comments · 9 min read · LW link
[Question] Does non-access to outputs prevent recursive self-improvement? · Gunnar_Zarncke · Apr 10, 2022, 6:37 PM · 15 points · 0 comments · 1 min read · LW link
A Brief Excursion Into Molecular Neuroscience · Jan · Apr 10, 2022, 5:55 PM · 48 points · 8 comments · 19 min read · LW link (universalprior.substack.com)
Finally Entering Alignment · Ulisse Mini · Apr 10, 2022, 5:01 PM · 81 points · 8 comments · 2 min read · LW link
Schelling Meetup Toronto · Sean Aubin · Apr 10, 2022, 1:58 PM · 3 points · 0 comments · 1 min read · LW link
Is Fisherian Runaway Gradient Hacking? · Ryan Kidd · Apr 10, 2022, 1:47 PM · 15 points · 6 comments · 4 min read · LW link
Worse than an unaligned AGI · Shmi · Apr 10, 2022, 3:35 AM · −1 points · 11 comments · 1 min read · LW link
Time-Time Tradeoffs · Orpheus16 · Apr 10, 2022, 2:33 AM · 18 points · 1 comment · 3 min read · LW link (forum.effectivealtruism.org)
Boston Contra: Fully Gender-Free · jefftk · Apr 10, 2022, 12:40 AM · 3 points · 12 comments · 1 min read · LW link (www.jefftk.com)
[Question] Hidden comments settings not working? · TLW · Apr 9, 2022, 11:15 PM · 4 points · 2 comments · 1 min read · LW link
Godshatter Versus Legibility: A Fundamentally Different Approach To AI Alignment · LukeOnline · Apr 9, 2022, 9:43 PM · 15 points · 14 comments · 7 min read · LW link
A concrete bet offer to those with short AGI timelines · Matthew Barnett and Tamay · Apr 9, 2022, 9:41 PM · 199 points · 120 comments · 5 min read · LW link
New: use The Nonlinear Library to listen to the top LessWrong posts of all time · KatWoods · Apr 9, 2022, 8:50 PM · 39 points · 9 comments · 8 min read · LW link
140 Cognitive Biases You Should Know · André Ferretti · Apr 9, 2022, 5:15 PM · 8 points · 7 comments · 1 min read · LW link
Strategies for keeping AIs narrow in the short term · Rossin · Apr 9, 2022, 4:42 PM · 9 points · 3 comments · 3 min read · LW link
Hyperbolic takeoff · Ege Erdil · Apr 9, 2022, 3:57 PM · 18 points · 7 comments · 10 min read · LW link (www.metaculus.com)
Elicit: Language Models as Research Assistants · stuhlmueller and jungofthewon · Apr 9, 2022, 2:56 PM · 71 points · 6 comments · 13 min read · LW link
Emergent Ventures/Schmidt (new grantor for individual researchers) · gwern · Apr 9, 2022, 2:41 PM · 21 points · 6 comments · 1 min read · LW link (marginalrevolution.com)
AI safety: the ultimate trolley problem · chaosmage · Apr 9, 2022, 12:05 PM · −21 points · 6 comments · 1 min read · LW link
AMA Conjecture, A New Alignment Startup · adamShimi · Apr 9, 2022, 9:43 AM · 47 points · 42 comments · 1 min read · LW link
[Question] What advice do you have for someone struggling to detach their grim-o-meter? · Zorger74 · Apr 9, 2022, 7:35 AM · 6 points · 3 comments · 1 min read · LW link
[Question] Can AI systems have extremely impressive outputs and also not need to be aligned because they aren’t general enough or something? · WilliamKiely · Apr 9, 2022, 6:03 AM · 6 points · 3 comments · 1 min read · LW link
Buy-in Before Randomization · jefftk · Apr 9, 2022, 1:30 AM · 26 points · 9 comments · 1 min read · LW link (www.jefftk.com)
Why Instrumental Goals are not a big AI Safety Problem · Jonathan Paulson · Apr 9, 2022, 12:10 AM · 0 points · 7 comments · 3 min read · LW link
A method of writing content easily with little anxiety · jessicata · Apr 8, 2022, 10:11 PM · 64 points · 19 comments · 3 min read · LW link (unstableontology.com)
Good Heart Donation Lottery Winner · Gordon Seidoh Worley · Apr 8, 2022, 8:34 PM · 21 points · 0 comments · 1 min read · LW link
Roam Research Mobile is Out! · Logan Riggs · Apr 8, 2022, 7:05 PM · 12 points · 0 comments · 1 min read · LW link
Progress Report 4: logit lens redux · Nathan Helm-Burger · Apr 8, 2022, 6:35 PM · 4 points · 0 comments · 2 min read · LW link
[Question] What would the creation of aligned AGI look like for us? · Perhaps · Apr 8, 2022, 6:05 PM · 3 points · 4 comments · 1 min read · LW link
Convincing All Capability Researchers · Logan Riggs · Apr 8, 2022, 5:40 PM · 121 points · 70 comments · 3 min read · LW link