Sad! · nws · Feb 7, 2019, 7:42 PM · −1 points · 6 comments · 1 min read
Open Thread February 2019 · ryan_b · Feb 7, 2019, 6:00 PM · 19 points · 19 comments · 1 min read
EA grants available (to individuals) · Jameson Quinn · Feb 7, 2019, 3:17 PM · 34 points · 8 comments · 3 min read
X-risks are a tragedies of the commons · David Scott Krueger (formerly: capybaralet) · Feb 7, 2019, 2:48 AM · 9 points · 19 comments · 1 min read
Do Science and Technology Lead to a Fall in Human Values? · jayshi19 · Feb 7, 2019, 1:53 AM · 1 point · 1 comment · 1 min read · (techandhumanity.com)
Test Cases for Impact Regularisation Methods · DanielFilan · Feb 6, 2019, 9:50 PM · 72 points · 5 comments · 13 min read · (danielfilan.com)
A tentative solution to a certain mythological beast of a problem · Edward Knox · Feb 6, 2019, 8:42 PM · −11 points · 9 comments · 1 min read
AI Alignment is Alchemy. · Jeevan · Feb 6, 2019, 8:32 PM · −9 points · 20 comments · 1 min read
My use of the phrase “Super-Human Feedback” · David Scott Krueger (formerly: capybaralet) · Feb 6, 2019, 7:11 PM · 13 points · 0 comments · 1 min read
Thoughts on Ben Garfinkel’s “How sure are we about this AI stuff?” · David Scott Krueger (formerly: capybaralet) · Feb 6, 2019, 7:09 PM · 25 points · 17 comments · 1 min read
Show LW: (video) how to remember everything you learn · ArthurLidia · Feb 6, 2019, 7:02 PM · 3 points · 0 comments · 1 min read
Does the EA community do “basic science” grants? How do I get one? · Jameson Quinn · Feb 6, 2019, 6:10 PM · 7 points · 6 comments · 1 min read
Is the World Getting Better? A brief summary of recent debate · ErickBall · Feb 6, 2019, 5:38 PM · 35 points · 8 comments · 2 min read · (capx.co)
Security amplification · paulfchristiano · Feb 6, 2019, 5:28 PM · 21 points · 2 comments · 13 min read
Alignment Newsletter #44 · Rohin Shah · Feb 6, 2019, 8:30 AM · 18 points · 0 comments · 9 min read · (mailchi.mp)
South Bay Meetup March 2nd · David Friedman · Feb 6, 2019, 6:48 AM · 1 point · 0 comments
[Question] If Rationality can be likened to a ‘Martial Art’, what would be the Forms? · Bae's Theorem · Feb 6, 2019, 5:48 AM · 21 points · 10 comments · 1 min read
Complexity Penalties in Statistical Learning · michael_h · Feb 6, 2019, 4:13 AM · 31 points · 3 comments · 6 min read
Automated Nomic Game 2 · jefftk · Feb 5, 2019, 10:11 PM · 19 points · 2 comments · 2 min read
Should we bait criminals using clones ? · Aël Chappuit · Feb 5, 2019, 9:13 PM · −23 points · 3 comments · 1 min read
Describing things: parsimony, fruitfulness, and adaptability · Mary Chernyshenko · Feb 5, 2019, 8:59 PM · 1 point · 0 comments · 1 min read
Philosophy as low-energy approximation · Charlie Steiner · Feb 5, 2019, 7:34 PM · 41 points · 20 comments · 3 min read
When to use quantilization · RyanCarey · Feb 5, 2019, 5:17 PM · 65 points · 5 comments · 4 min read
(notes on) Policy Desiderata for Superintelligent AI: A Vector Field Approach · Ben Pace · Feb 4, 2019, 10:08 PM · 43 points · 5 comments · 7 min read
SSC Paris Meetup, 09/02/18 · fbreton · Feb 4, 2019, 7:54 PM · 1 point · 0 comments · 1 min read
January 2019 gwern.net newsletter · gwern · Feb 4, 2019, 3:53 PM · 15 points · 0 comments · 1 min read · (www.gwern.net)
(Why) Does the Basilisk Argument fail? · Lookingforyourlogic · Feb 3, 2019, 11:50 PM · 0 points · 11 comments · 2 min read
Constructing Goodhart · johnswentworth · Feb 3, 2019, 9:59 PM · 29 points · 10 comments · 3 min read
Conclusion to the sequence on value learning · Rohin Shah · Feb 3, 2019, 9:05 PM · 51 points · 20 comments · 5 min read
AI Safety Prerequisites Course: Revamp and New Lessons · philip_b · Feb 3, 2019, 9:04 PM · 24 points · 5 comments · 1 min read
[Question] What are some of bizarre theories based on anthropic reasoning? · Dr. Jamchie · Feb 3, 2019, 6:48 PM · 21 points · 13 comments · 1 min read
Rationality: What’s the point? · Hazard · Feb 3, 2019, 4:34 PM · 12 points · 11 comments · 1 min read
Quantifying Human Suffering and “Everyday Suffering” · willfranks · Feb 3, 2019, 1:07 PM · 7 points · 3 comments · 1 min read
[Question] How to stay concentrated for a long period of time? · infinickel · Feb 3, 2019, 5:24 AM · 6 points · 15 comments · 1 min read
How to notice being mind-hacked · Shmi · Feb 2, 2019, 11:13 PM · 18 points · 22 comments · 2 min read
Depression philosophizing · aaq · Feb 2, 2019, 10:54 PM · 6 points · 2 comments · 1 min read
LessWrong DC: Metameetup · rusalkii · Feb 2, 2019, 6:50 PM · 1 point · 0 comments · 1 min read
SSC Atlanta Meetup · Steve French · Feb 2, 2019, 3:11 AM · 2 points · 0 comments · 1 min read
[Question] How does Gradient Descent Interact with Goodhart? · Scott Garrabrant · Feb 2, 2019, 12:14 AM · 68 points · 19 comments · 4 min read
Philadelphia SSC Meetup · Majuscule · Feb 1, 2019, 11:51 PM · 1 point · 0 comments · 1 min read
STRUCTURE: Reality and rational best practice · Hazard · Feb 1, 2019, 11:51 PM · 5 points · 2 comments · 1 min read
An Attempt To Explain No-Self In Simple Terms · Justin Vriend · Feb 1, 2019, 11:50 PM · 1 point · 0 comments · 3 min read
STRUCTURE: How the Social Affects your rationality · Hazard · Feb 1, 2019, 11:35 PM · 0 points · 0 comments · 1 min read
STRUCTURE: A Crash Course in Your Brain · Hazard · Feb 1, 2019, 11:17 PM · 6 points · 4 comments · 1 min read
February Nashville SSC Meetup · Dude McDude · Feb 1, 2019, 10:36 PM · 1 point · 0 comments · 1 min read
[Question] What kind of information would serve as the best evidence for resolving the debate of whether a centrist or leftist Democratic nominee is likelier to take the White House in 2020? · Evan_Gaensbauer · Feb 1, 2019, 6:40 PM UTC · 10 points · 10 comments · 3 min read
Urgent & important: How (not) to do your to-do list · bfinn · Feb 1, 2019, 5:44 PM UTC · 51 points · 20 comments · 13 min read
Who wants to be a Millionaire? · Bucky · Feb 1, 2019, 2:02 PM UTC · 29 points · 1 comment · 11 min read
What is Wrong? · Inyuki · Feb 1, 2019, 12:02 PM UTC · 1 point · 2 comments · 2 min read
Drexler on AI Risk · PeterMcCluskey · Feb 1, 2019, 5:11 AM UTC · 35 points · 10 comments · 9 min read · (www.bayesianinvestor.com)