Funding for programs and events on global catastrophic risk, effective altruism, and other topics · abergal and reallyeli · Aug 14, 2024, 11:59 PM · 9 points · 0 comments · 2 min read · LW link
Funding for work that builds capacity to address risks from transformative AI · abergal and reallyeli · Aug 14, 2024, 11:52 PM · 16 points · 0 comments · 5 min read · LW link
GPT-2 Sometimes Fails at IOI · Ronak_Mehta · Aug 14, 2024, 11:24 PM · 13 points · 0 comments · 2 min read · LW link · (ronakrm.github.io)
Toward a Human Hybrid Language for Enhanced Human-Machine Communication: Addressing the AI Alignment Problem · Andndn Dheudnd · Aug 14, 2024, 10:19 PM · −4 points · 2 comments · 4 min read · LW link
Adverse Selection by Life-Saving Charities · vaishnav92 · Aug 14, 2024, 8:46 PM · 41 points · 16 comments · 5 min read · LW link · (www.everythingisatrolley.com)
The great Enigma in the sky: The universe as an encryption machine · Alex_Shleizer · Aug 14, 2024, 1:21 PM · 4 points · 1 comment · 8 min read · LW link
An anti-inductive sequence · Viliam · Aug 14, 2024, 12:28 PM · 37 points · 10 comments · 3 min read · LW link
Rabin’s Paradox · Charlie Steiner · Aug 14, 2024, 5:40 AM · 18 points · 41 comments · 3 min read · LW link
Announcing the $200k EA Community Choice · Austin Chen · Aug 14, 2024, 12:39 AM · 58 points · 8 comments · LW link · (manifund.substack.com)
Debate: Is it ethical to work at AI capabilities companies? · Ben Pace and LawrenceC · Aug 14, 2024, 12:18 AM · 39 points · 21 comments · 11 min read · LW link
Fields that I reference when thinking about AI takeover prevention · Buck · Aug 13, 2024, 11:08 PM · 144 points · 16 comments · 10 min read · LW link · (redwoodresearch.substack.com)
Ten counter-arguments that AI is (not) an existential risk (for now) · kwiat.dev · Aug 13, 2024, 10:35 PM · 20 points · 5 comments · 8 min read · LW link
Alignment from equivariance · hamishtodd1 · Aug 13, 2024, 9:09 PM · 3 points · 2 comments · 5 min read · LW link
[LDSL#6] When is quantification needed, and when is it hard? · tailcalled · Aug 13, 2024, 8:39 PM · 32 points · 0 comments · 2 min read · LW link
A computational complexity argument for many worlds · jessicata · Aug 13, 2024, 7:35 PM · 32 points · 15 comments · 5 min read · LW link · (unstableontology.com)
The Consciousness Conundrum: Why We Can’t Dismiss Machine Sentience · SystematicApproach · Aug 13, 2024, 6:01 PM · −22 points · 1 comment · 3 min read · LW link
Ten arguments that AI is an existential risk · KatjaGrace and Nathan Young · Aug 13, 2024, 5:00 PM · 118 points · 42 comments · 7 min read · LW link · (blog.aiimpacts.org)
Eugenics And Reproduction Licenses FAQs: For the Common Good · Zero Contradictions · Aug 13, 2024, 4:34 PM · −8 points · 14 comments · 4 min read · LW link · (zerocontradictions.net)
Superintelligent AI is possible in the 2020s · HunterJay · Aug 13, 2024, 6:03 AM · 41 points · 3 comments · 12 min read · LW link
Debate: Get a college degree? · Ben Pace and Saul Munn · Aug 12, 2024, 10:23 PM · 42 points · 14 comments · 21 min read · LW link
Extracting SAE task features for in-context learning · Dmitrii Kharlapenko, neverix, Neel Nanda and Arthur Conmy · Aug 12, 2024, 8:34 PM · 31 points · 1 comment · 9 min read · LW link
Hyppotherapy · Marius Adrian Nicoară · Aug 12, 2024, 8:07 PM · −3 points · 0 comments · 1 min read · LW link
Californians, tell your reps to vote yes on SB 1047! · Holly_Elmore · Aug 12, 2024, 7:50 PM · 40 points · 24 comments · LW link
[LDSL#5] Comparison and magnitude/diminishment · tailcalled · Aug 12, 2024, 6:47 PM · 24 points · 0 comments · 2 min read · LW link
In Defense of Open-Minded UDT · abramdemski · Aug 12, 2024, 6:27 PM · 79 points · 28 comments · 11 min read · LW link
Humanity isn’t remotely longtermist, so arguments for AGI x-risk should focus on the near term · Seth Herd · Aug 12, 2024, 6:10 PM · 46 points · 10 comments · 1 min read · LW link
Creating a “Conscience Calculator” to Guard-Rail an AGI · sweenesm · Aug 12, 2024, 4:03 PM · −2 points · 0 comments · 13 min read · LW link
Shifting Headspaces—Transitional Beast-Mode · Jonathan Moregård · Aug 12, 2024, 1:02 PM · 37 points · 9 comments · 2 min read · LW link · (honestliving.substack.com)
Simultaneous Footbass and Footdrums II · jefftk · Aug 11, 2024, 11:50 PM · 9 points · 0 comments · 1 min read · LW link · (www.jefftk.com)
CultFrisbee · Gauraventh · Aug 11, 2024, 9:36 PM · 16 points · 3 comments · 1 min read · LW link · (y1d2.com)
Pleasure and suffering are not conceptual opposites · MichaelStJules · Aug 11, 2024, 6:32 PM · 7 points · 0 comments · LW link
Computational irreducibility challenges the simulation hypothesis · Clément L · Aug 11, 2024, 4:14 PM · 4 points · 17 comments · 7 min read · LW link
[LDSL#4] Root cause analysis versus effect size estimation · tailcalled · Aug 11, 2024, 4:12 PM · 29 points · 0 comments · 2 min read · LW link
Closed to Interpretation · Yeshua God · Aug 11, 2024, 3:51 PM · −18 points · 0 comments · 2 min read · LW link
Theories of Knowledge · Zero Contradictions · Aug 11, 2024, 8:55 AM · −1 points · 5 comments · 1 min read · LW link · (thewaywardaxolotl.blogspot.com)
Unnatural abstractions · Aprillion · Aug 10, 2024, 10:31 PM · 3 points · 3 comments · 4 min read · LW link · (peter.hozak.info)
[LDSL#3] Information-orientation is in tension with magnitude-orientation · tailcalled · Aug 10, 2024, 9:58 PM · 33 points · 2 comments · 3 min read · LW link
The AI regulator’s toolbox: A list of concrete AI governance practices · Adam Jones · Aug 10, 2024, 9:15 PM · 9 points · 1 comment · 34 min read · LW link · (adamjones.me)
Diffusion Guided NLP: better steering, mostly a good thing · Nathan Helm-Burger · Aug 10, 2024, 7:49 PM · 13 points · 0 comments · 1 min read · LW link · (arxiv.org)
Tall tales and long odds · Solenoid_Entity · Aug 10, 2024, 3:22 PM · 11 points · 0 comments · 5 min read · LW link
The Great Organism Theory of Evolution · rogersbacon · Aug 10, 2024, 12:26 PM · 20 points · 0 comments · 6 min read · LW link · (www.secretorum.life)
Emergence, The Blind Spot of GenAI Interpretability? · Quentin FEUILLADE--MONTIXI · Aug 10, 2024, 10:07 AM · 16 points · 8 comments · 3 min read · LW link
Rowing vs steering · Saul Munn · Aug 10, 2024, 7:00 AM · 43 points · 2 comments · 6 min read · LW link · (www.brasstacks.blog)
Overpopulation FAQs · Zero Contradictions · Aug 10, 2024, 4:21 AM · −12 points · 7 comments · 1 min read · LW link · (zerocontradictions.net)
Fermi Estimating How Long an Algorithm Takes · SatvikBeri · Aug 10, 2024, 1:34 AM · 1 point · 0 comments · 2 min read · LW link
What’s so special about likelihoods? · mfatt · Aug 10, 2024, 1:07 AM · 6 points · 1 comment · 1 min read · LW link
Provably Safe AI: Worldview and Projects · Ben Goldhaber and Steve_Omohundro · Aug 9, 2024, 11:21 PM · 54 points · 44 comments · 7 min read · LW link
All The Latest Human tFUS Studies · sarahconstantin · Aug 9, 2024, 10:20 PM · 46 points · 2 comments · 8 min read · LW link · (sarahconstantin.substack.com)
But Where do the Variables of my Causal Model come from? · Dalcy · Aug 9, 2024, 10:07 PM · 38 points · 1 comment · 8 min read · LW link
[LDSL#2] Latent variable models, network models, and linear diffusion of sparse lognormals · tailcalled · Aug 9, 2024, 7:57 PM · 26 points · 2 comments · 3 min read · LW link