How to choose what to work on · jasoncrawford · Sep 18, 2024, 8:39 PM · 22 points · 6 comments · 4 min read · LW link (blog.rootsofprogress.org)
Intention-to-Treat (Re: How harmful is music, really?) · kqr · Sep 18, 2024, 6:44 PM · 11 points · 0 comments · 5 min read · LW link (entropicthoughts.com)
The case for a negative alignment tax · Cameron Berg, Judd Rosenblatt, Diogo de Lucena and AE Studio · Sep 18, 2024, 6:33 PM · 75 points · 20 comments · 7 min read · LW link
Endogenous Growth and Human Intelligence · Nicholas D. · Sep 18, 2024, 2:05 PM · 3 points · 0 comments · 2 min read · LW link
Inquisitive vs. adversarial rationality · gb · Sep 18, 2024, 1:50 PM · 6 points · 9 comments · 2 min read · LW link
Pronouns are Annoying · ymeskhout · Sep 18, 2024, 1:30 PM · 15 points · 23 comments · 4 min read · LW link (www.ymeskhout.com)
Is “superhuman” AI forecasting BS? Some experiments on the “539” bot from the Centre for AI Safety · titotal · Sep 18, 2024, 1:07 PM · 79 points · 3 comments · LW link (open.substack.com)
Knowledge’s practicability · Ted Nguyễn · Sep 18, 2024, 2:31 AM · −5 points · 0 comments · 7 min read · LW link (tednguyen.substack.com)
Skills from a year of Purposeful Rationality Practice · Raemon · Sep 18, 2024, 2:05 AM · 190 points · 18 comments · 7 min read · LW link
[Question] Where to find reliable reviews of AI products? · Elizabeth · Sep 17, 2024, 11:48 PM · 29 points · 6 comments · 1 min read · LW link
Superposition through Active Learning Lens · akankshanc · Sep 17, 2024, 5:32 PM · 1 point · 0 comments · 10 min read · LW link
Survey—Psychological Impact of Long-Term AI Engagement · Manuela García · Sep 17, 2024, 5:31 PM · 2 points · 0 comments · 1 min read · LW link
Survey—Psychological Impact of Long-Term AI Engagement · Manuela García · Sep 17, 2024, 5:31 PM · 1 point · 1 comment · 1 min read · LW link
[Question] What does it mean for an event or observation to have probability 0 or 1 in Bayesian terms? · Noosphere89 · Sep 17, 2024, 5:28 PM · 1 point · 22 comments · 1 min read · LW link
How harmful is music, really? · dkl9 · Sep 17, 2024, 2:53 PM · 10 points · 6 comments · 3 min read · LW link (dkl9.net)
Monthly Roundup #22: September 2024 · Zvi · Sep 17, 2024, 12:20 PM · 35 points · 10 comments · 45 min read · LW link (thezvi.wordpress.com)
I finally got ChatGPT to sound like me · lsusr · Sep 17, 2024, 9:39 AM · 47 points · 18 comments · 6 min read · LW link
Food, Prison & Exotic Animals: Sparse Autoencoders Detect 6.5x Performing Youtube Thumbnails · Louka Ewington-Pitsos · Sep 17, 2024, 3:52 AM · 6 points · 2 comments · 7 min read · LW link
Head in the Cloud: Why an Upload of Your Mind is Not You · xhq · Sep 17, 2024, 12:25 AM · −11 points · 3 comments · 14 min read · LW link
[Question] How does someone prove that their general intelligence is above average? · M. Y. Zuo · Sep 16, 2024, 9:01 PM · −3 points · 12 comments · 1 min read · LW link
[Question] Does life actually locally *increase* entropy? · tailcalled · Sep 16, 2024, 8:30 PM · 10 points · 27 comments · 1 min read · LW link
Book review: Xenosystems · jessicata · Sep 16, 2024, 8:17 PM · 50 points · 18 comments · 37 min read · LW link (unstableontology.com)
MIRI’s September 2024 newsletter · Harlan · Sep 16, 2024, 6:15 PM · 46 points · 0 comments · 1 min read · LW link (intelligence.org)
Generative ML in chemistry is bottlenecked by synthesis · Abhishaike Mahajan · Sep 16, 2024, 4:31 PM · 38 points · 2 comments · 14 min read · LW link (www.owlposting.com)
Secret Collusion: Will We Know When to Unplug AI? · schroederdewitt, srm, MikhailB, Lewis Hammond, chansmi and sofmonk · Sep 16, 2024, 4:07 PM · 61 points · 8 comments · 31 min read · LW link
GPT-o1 · Zvi · Sep 16, 2024, 1:40 PM · 86 points · 34 comments · 46 min read · LW link (thezvi.wordpress.com)
[Question] Can subjunctive dependence emerge from a simplicity prior? · Daniel C · Sep 16, 2024, 12:39 PM · 11 points · 0 comments · 1 min read · LW link
Longevity and the Mind · George3d6 · Sep 16, 2024, 9:43 AM · 5 points · 2 comments · 10 min read · LW link
[Question] What’s the Deal with Logical Uncertainty? · Ape in the coat · Sep 16, 2024, 8:11 AM · 32 points · 29 comments · 2 min read · LW link
Reinforcement Learning from Information Bazaar Feedback, and other uses of information markets · Abhimanyu Pallavi Sudhir · Sep 16, 2024, 1:04 AM · 5 points · 1 comment · 5 min read · LW link
Hyperpolation · Gunnar_Zarncke · Sep 15, 2024, 9:37 PM · 22 points · 6 comments · 1 min read · LW link (arxiv.org)
[Question] If I wanted to spend WAY more on AI, what would I spend it on? · Logan Zoellner · Sep 15, 2024, 9:24 PM · 53 points · 16 comments · 1 min read · LW link
Superintelligence Can’t Solve the Problem of Deciding What You’ll Do · Vladimir_Nesov · Sep 15, 2024, 9:03 PM · 27 points · 11 comments · 1 min read · LW link
For Limited Superintelligences, Epistemic Exclusion is Harder than Robustness to Logical Exploitation · Lorec · Sep 15, 2024, 8:49 PM · 3 points · 9 comments · 3 min read · LW link
Why I funded PIBBSS · Ryan Kidd · Sep 15, 2024, 7:56 PM · 115 points · 21 comments · 3 min read · LW link
My disagreements with “AGI ruin: A List of Lethalities” · Noosphere89 · Sep 15, 2024, 5:22 PM · 36 points · 46 comments · 18 min read · LW link
Thirty random thoughts about AI alignment · Lysandre Terrisse · Sep 15, 2024, 4:24 PM · 6 points · 1 comment · 29 min read · LW link
Proveably Safe Self Driving Cars [Modulo Assumptions] · Davidmanheim · Sep 15, 2024, 1:58 PM · 27 points · 29 comments · 8 min read · LW link
SCP Foundation—Anti memetic Division Hub · landscape_kiwi · Sep 15, 2024, 1:40 PM · 6 points · 1 comment · 1 min read · LW link (scp-wiki.wikidot.com)
Did Christopher Hitchens change his mind about waterboarding? · Isaac King · Sep 15, 2024, 8:28 AM · 171 points · 22 comments · 7 min read · LW link
Not every accommodation is a Curb Cut Effect: The Handicapped Parking Effect, the Clapper Effect, and more · Michael Cohn · Sep 15, 2024, 5:27 AM · 82 points · 39 comments · 10 min read · LW link (perplexedguide.net)
AlignedCut: Visual Concepts Discovery on Brain-Guided Universal Feature Space · Bogdan Ionut Cirstea · Sep 14, 2024, 11:23 PM · 17 points · 1 comment · 1 min read · LW link (arxiv.org)
How you can help pass important AI legislation with 10 minutes of effort · ThomasW · Sep 14, 2024, 10:10 PM · 59 points · 2 comments · 2 min read · LW link
[Question] Calibration training for ‘percentile rankings’? · david reinstein · Sep 14, 2024, 9:51 PM · 3 points · 0 comments · 2 min read · LW link
OpenAI o1, Llama 4, and AlphaZero of LLMs · Vladimir_Nesov · Sep 14, 2024, 9:27 PM · 83 points · 25 comments · 1 min read · LW link
Forever Leaders · Justice Howard · Sep 14, 2024, 8:55 PM · 6 points · 9 comments · 1 min read · LW link
Emergent Authorship: Creativity à la Communing · gswonk · Sep 14, 2024, 7:02 PM · 1 point · 0 comments · 3 min read · LW link
Compression Moves for Prediction · adamShimi · Sep 14, 2024, 5:51 PM · 20 points · 0 comments · 7 min read · LW link (epistemologicalfascinations.substack.com)
Pay-on-results personal growth: first success · Chris Lakin · Sep 14, 2024, 3:39 AM · 63 points · 8 comments · 4 min read · LW link (chrislakin.blog)
Avoiding the Bog of Moral Hazard for AI · Nathan Helm-Burger · Sep 13, 2024, 9:24 PM · 19 points · 13 comments · 2 min read · LW link