Archive
“Acquisition of Chess Knowledge in AlphaZero”: probing AZ over time · jsd · Nov 18, 2021, 11:24 PM · 11 points · 9 comments · LW link (arxiv.org)
Ngo and Yudkowsky on AI capability gains · Eliezer Yudkowsky and Richard_Ngo · Nov 18, 2021, 10:19 PM · 131 points · 61 comments · 39 min read · LW link · 1 review
Covid 11/18: Paxlovid Remains Illegal · Zvi · Nov 18, 2021, 3:50 PM · 55 points · 36 comments · 14 min read · LW link (thezvi.wordpress.com)
Satisficers Tend To Seek Power: Instrumental Convergence Via Retargetability · TurnTrout · Nov 18, 2021, 1:54 AM · 85 points · 8 comments · 17 min read · LW link (www.overleaf.com)
Forecasting: Zeroth and First Order · jsteinhardt · Nov 18, 2021, 1:30 AM · 33 points · 6 comments · 5 min read · LW link (bounded-regret.ghost.io)
Experience on Methotrexate · jefftk · Nov 17, 2021, 10:40 PM · 13 points · 0 comments · 2 min read · LW link (www.jefftk.com)
Applications for AI Safety Camp 2022 Now Open! · adamShimi · Nov 17, 2021, 9:42 PM · 47 points · 3 comments · 1 min read · LW link
[Question] Did EcoHealth create SARS-CoV-2? · jamal · Nov 17, 2021, 8:42 PM · 3 points · 7 comments · 1 min read · LW link
On Raising Awareness · Tomás B. · Nov 17, 2021, 5:12 PM · 21 points · 10 comments · 3 min read · LW link
Sasha Chapin on bad social norms in rationality/EA · Kaj_Sotala · Nov 17, 2021, 9:43 AM · 51 points · 22 comments · 5 min read · LW link (sashachapin.substack.com)
[Question] What are the mutual benefits of AGI-human collaboration that would otherwise be unobtainable? · M. Y. Zuo · Nov 17, 2021, 3:09 AM · 1 point · 4 comments · 1 min read · LW link
Quadratic Voting and Collusion · leogao · Nov 17, 2021, 12:19 AM · 41 points · 24 comments · 2 min read · LW link
Taking a simplified model · dominicq · Nov 16, 2021, 10:21 PM · 9 points · 8 comments · 1 min read · LW link
The Greedy Doctor Problem · Jan · Nov 16, 2021, 10:06 PM · 6 points · 10 comments · 12 min read · LW link (universalprior.substack.com)
Equity premium puzzles · Ege Erdil and Metaculus · Nov 16, 2021, 8:50 PM · 20 points · 4 comments · 12 min read · LW link (www.metaculus.com)
Why I am no longer driven · dominicq · Nov 16, 2021, 8:43 PM · 71 points · 16 comments · 4 min read · LW link
Super intelligent AIs that don’t require alignment · Yair Halberstadt · Nov 16, 2021, 7:55 PM · 10 points · 2 comments · 6 min read · LW link
Why Save The Drowning Child: Ethics Vs Theory · Raymond Douglas · Nov 16, 2021, 7:07 PM · 17 points · 12 comments · 4 min read · LW link
Two Stupid AI Alignment Ideas · aphyer · Nov 16, 2021, 4:13 PM · 27 points · 3 comments · 4 min read · LW link
[linkpost] Project Blueprint: ‘Measuring and then maximally reversing the quantified biological age of my organs’ · matteodimaio · Nov 16, 2021, 2:48 AM · 2 points · 0 comments · 1 min read · LW link
A positive case for how we might succeed at prosaic AI alignment · evhub · Nov 16, 2021, 1:49 AM · 81 points · 46 comments · 6 min read · LW link
Quantilizer ≡ Optimizer with a Bounded Amount of Output · itaibn0 · Nov 16, 2021, 1:03 AM · 11 points · 4 comments · 2 min read · LW link
D&D.Sci Dungeoncrawling: The Crown of Command Evaluation & Ruleset · aphyer · Nov 16, 2021, 12:29 AM · 29 points · 12 comments · 9 min read · LW link
Streaming Science on Twitch · A Ray · Nov 15, 2021, 10:24 PM · 21 points · 1 comment · 3 min read · LW link
Ngo and Yudkowsky on alignment difficulty · Eliezer Yudkowsky and Richard_Ngo · Nov 15, 2021, 8:31 PM · 259 points · 151 comments · 99 min read · LW link · 1 review
Dan Luu on Persistent Bad Decision Making (but maybe it’s noble?) · Elizabeth · Nov 15, 2021, 8:05 PM · 17 points · 3 comments · 1 min read · LW link (danluu.com)
The poetry of progress · jasoncrawford · Nov 15, 2021, 7:24 PM · 51 points · 6 comments · 4 min read · LW link (rootsofprogress.org)
[Question] Worst Commonsense Concepts? · abramdemski · Nov 15, 2021, 6:22 PM · 75 points · 34 comments · 3 min read · LW link
My understanding of the alignment problem · danieldewey · Nov 15, 2021, 6:13 PM · 43 points · 3 comments · 3 min read · LW link
“Summarizing Books with Human Feedback” (recursive GPT-3) · gwern · Nov 15, 2021, 5:41 PM · 24 points · 4 comments · LW link (openai.com)
How Humanity Lost Control and Humans Lost Liberty: From Our Brave New World to Analogia (Sequence Introduction) · Justin Bullock · Nov 15, 2021, 2:22 PM · 8 points · 4 comments · 3 min read · LW link
Re: Attempted Gears Analysis of AGI Intervention Discussion With Eliezer · lsusr · Nov 15, 2021, 10:02 AM · 20 points · 8 comments · 15 min read · LW link
What the future will look like · avantika.mehra · Nov 15, 2021, 5:14 AM · 7 points · 1 comment · 3 min read · LW link
Attempted Gears Analysis of AGI Intervention Discussion With Eliezer · Zvi · Nov 15, 2021, 3:50 AM · 197 points · 49 comments · 16 min read · LW link (thezvi.wordpress.com)
An Emergency Fund for Effective Altruists (second version) · bice · Nov 14, 2021, 6:28 PM · 12 points · 4 comments · 2 min read · LW link
Televised sports exist to gamble with testosterone levels using prediction skill · Lucent · Nov 14, 2021, 6:24 PM · 22 points · 3 comments · 1 min read · LW link
Improving on the Karma System · Raelifin · Nov 14, 2021, 6:01 PM · 106 points · 36 comments · 19 min read · LW link
[Linkpost] Paul Graham 101 · Gunnar_Zarncke · Nov 14, 2021, 4:52 PM · 12 points · 4 comments · 1 min read · LW link
My current uncertainties regarding AI, alignment, and the end of the world · dominicq · Nov 14, 2021, 2:08 PM · 2 points · 3 comments · 2 min read · LW link
Education on My Homeworld · lsusr · Nov 14, 2021, 10:16 AM · 37 points · 19 comments · 5 min read · LW link
What would we do if alignment were futile? · Grant Demaree · Nov 14, 2021, 8:09 AM · 75 points · 39 comments · 3 min read · LW link
A pharmaceutical stock pricing mystery · DirectedEvolution · Nov 14, 2021, 1:19 AM · 14 points · 2 comments · 3 min read · LW link
You are probably underestimating how good self-love can be · Charlie Rogers-Smith · Nov 14, 2021, 12:41 AM · 168 points · 19 comments · 12 min read · LW link · 1 review
Coordination Skills I Wish I Had For the Pandemic · Raemon · Nov 13, 2021, 11:32 PM · 96 points · 9 comments · 6 min read · LW link · 1 review
Sci-Hub sued in India · Connor_Flexman · Nov 13, 2021, 11:12 PM · 131 points · 19 comments · 7 min read · LW link
[Question] What’s the likelihood of only sub exponential growth for AGI? · M. Y. Zuo · Nov 13, 2021, 10:46 PM · 5 points · 22 comments · 1 min read · LW link
Comments on Carlsmith’s “Is power-seeking AI an existential risk?” · So8res · Nov 13, 2021, 4:29 AM · 139 points · 15 comments · 40 min read · LW link · 1 review
A FLI postdoctoral grant application: AI alignment via causal analysis and design of agents · PabloAMC · Nov 13, 2021, 1:44 AM · 4 points · 0 comments · 7 min read · LW link
[Question] Is Functional Decision Theory still an active area of research? · Grant Demaree · Nov 13, 2021, 12:30 AM · 8 points · 3 comments · 1 min read · LW link
Average probabilities, not log odds · AlexMennen · Nov 12, 2021, 9:39 PM · 27 points · 20 comments · 5 min read · LW link