- Notes from the Bank of England Talk by Giovanni Dosi on Agent-based Modeling for Macroeconomics · PixelatedPenguin · Jun 13, 2023, 10:25 PM · 3 points · 0 comments · 1 min read
- Introducing The Long Game Project: Improving Decision-Making Through Tabletop Exercises and Simulated Experience · Dan Stuart · Jun 13, 2023, 9:45 PM · 4 points · 0 comments · 4 min read
- Intelligence allocation from a Mean Field Game Theory perspective · Marv K · Jun 13, 2023, 7:52 PM · 13 points · 2 comments · 2 min read
- Multiple stages of fallacy—justifications and non-justifications for the multiple stage fallacy · AronT · Jun 13, 2023, 5:37 PM · 33 points · 2 comments · 5 min read · (coordinationishard.substack.com)
- TryContra Events · jefftk · Jun 13, 2023, 5:30 PM · 2 points · 0 comments · 1 min read · (www.jefftk.com)
- MetaAI: less is less for alignment. · Cleo Nardo · Jun 13, 2023, 2:08 PM · 71 points · 17 comments · 5 min read
- The Dial of Progress · Zvi · Jun 13, 2023, 1:40 PM · 161 points · 119 comments · 11 min read · (thezvi.wordpress.com)
- Virtual AI Safety Unconference (VAISU) · Linda Linsefors and ntran · Jun 13, 2023, 9:56 AM · 15 points · 0 comments · 1 min read
- Seattle ACX Meetup—Summer 2023 · Optimization Process · Jun 13, 2023, 5:14 AM · 5 points · 0 comments · 1 min read
- TASRA: A Taxonomy and Analysis of Societal-Scale Risks from AI · Andrew_Critch · Jun 13, 2023, 5:04 AM · 64 points · 1 comment · 1 min read
- <$750k grants for General Purpose AI Assurance/Safety Research · Phosphorous · Jun 13, 2023, 4:45 AM · 37 points · 1 comment · 1 min read · (cset.georgetown.edu)
- UFO Betting: Put Up or Shut Up · RatsWrongAboutUAP · Jun 13, 2023, 4:05 AM · 260 points · 216 comments · 2 min read · 1 review
- A bunch of videos in comments · the gears to ascension · Jun 12, 2023, 10:31 PM · 10 points · 62 comments · 1 min read
- [Linkpost] The neuroconnectionist research programme · Bogdan Ionut Cirstea · Jun 12, 2023, 9:58 PM · 6 points · 1 comment · 1 min read
- Contingency: A Conceptual Tool from Evolutionary Biology for Alignment · clem_acs · Jun 12, 2023, 8:54 PM · 57 points · 2 comments · 14 min read · (acsresearch.org)
- Book Review: Autoheterosexuality · tailcalled · Jun 12, 2023, 8:11 PM · 27 points · 9 comments · 24 min read
- Aura as a proprioceptive glitch · pchvykov · Jun 12, 2023, 7:30 PM · 37 points · 4 comments · 4 min read
- Aligning Mathematical Notions of Infinity with Human Intuition · London L. · Jun 12, 2023, 7:19 PM · 1 point · 10 comments · 9 min read · (medium.com)
- ARC is hiring theoretical researchers · paulfchristiano, Jacob_Hilton and Mark Xu · Jun 12, 2023, 6:50 PM · 126 points · 12 comments · 4 min read · (www.alignment.org)
- Introduction to Towards Causal Foundations of Safe AGI · tom4everitt, Lewis Hammond, Francis Rhys Ward, RyanCarey, James Fox, mattmacdermott and sbenthall · Jun 12, 2023, 5:55 PM · 67 points · 6 comments · 4 min read
- Manifold Predicted the AI Extinction Statement and CAIS Wanted it Deleted · David Chee · Jun 12, 2023, 3:54 PM · 71 points · 15 comments · 12 min read
- Explicitness · TsviBT · Jun 12, 2023, 3:05 PM · 29 points · 0 comments · 15 min read
- If you are too stressed, walk away from the front lines · Neil · Jun 12, 2023, 2:26 PM · 44 points · 14 comments · 5 min read
- UK PM: $125M for AI safety · Hauke Hillebrandt · Jun 12, 2023, 12:33 PM · 31 points · 11 comments · 1 min read · (twitter.com)
- [Question] Could induced and stabilized hypomania be a desirable mental state? · MvB · Jun 12, 2023, 12:13 PM · 8 points · 22 comments · 2 min read
- Non-loss of control AGI-related catastrophes are out of control too · Yi-Yang, Mo Putera and zeshen · Jun 12, 2023, 12:01 PM · 2 points · 3 comments · 24 min read
- Critiques of prominent AI safety labs: Conjecture · Omega. · Jun 12, 2023, 1:32 AM · 12 points · 32 comments · 33 min read
- why I’m anti-YIMBY · bhauth · Jun 12, 2023, 12:19 AM · 20 points · 45 comments · 2 min read
- ACX Brno meetup #2 · adekcz · Jun 11, 2023, 1:53 PM · 2 points · 0 comments · 1 min read
- [Linkpost] Large Language Models Converge on Brain-Like Word Representations · Bogdan Ionut Cirstea · Jun 11, 2023, 11:20 AM · 36 points · 12 comments · 1 min read
- Inference-Time Intervention: Eliciting Truthful Answers from a Language Model · likenneth · Jun 11, 2023, 5:38 AM · 195 points · 4 comments · 1 min read · (arxiv.org)
- You Are a Computer, and No, That’s Not a Metaphor · jakej · Jun 11, 2023, 5:38 AM · 12 points · 1 comment · 22 min read · (sigil.substack.com)
- Snake Eyes Paradox · Martin Randall · Jun 11, 2023, 4:10 AM · 22 points · 25 comments · 6 min read
- [Question] [Mostly solved] I get distracted while reading, but can easily comprehend audio text for 8+ hours per day. What are the best AI text-to-speech readers? Alternatively, do you have other ideas for what I could do? · kuira · Jun 11, 2023, 3:49 AM · 18 points · 7 comments · 1 min read
- The Dictatorship Problem · alyssavance · Jun 11, 2023, 2:45 AM · 35 points · 145 comments · 11 min read
- Higher Dimension Cartesian Objects and Aligning ‘Tiling Simulators’ · lukemarks · Jun 11, 2023, 12:13 AM · 22 points · 0 comments · 5 min read
- Using Consensus Mechanisms as an approach to Alignment · Prometheus · Jun 10, 2023, 11:38 PM · 11 points · 2 comments · 6 min read
- Humanities first math problem, The shallow gene pool. · archeon · Jun 10, 2023, 11:09 PM · −2 points · 0 comments · 1 min read
- I can see how I am Dumb · Johannes C. Mayer · Jun 10, 2023, 7:18 PM · 46 points · 11 comments · 5 min read
- Ethodynamics of Omelas · dr_s · Jun 10, 2023, 4:24 PM · 83 points · 18 comments · 9 min read · 1 review
- Dealing with UFO claims · ChristianKl · Jun 10, 2023, 3:45 PM · 3 points · 32 comments · 1 min read
- A Theory of Unsupervised Translation Motivated by Understanding Animal Communication · jsd · Jun 10, 2023, 3:44 PM · 19 points · 0 comments · 1 min read · (arxiv.org)
- [Question] What are brains? · Valentine · Jun 10, 2023, 2:46 PM · 10 points · 22 comments · 2 min read
- EY in the New York Times · Blueberry · Jun 10, 2023, 12:21 PM · 6 points · 14 comments · 1 min read · (www.nytimes.com)
- Goal-misgeneralization is ELK-hard · rokosbasilisk · Jun 10, 2023, 9:32 AM · 2 points · 0 comments · 1 min read
- [Question] What do beneficial TDT trades for humanity concretely look like? · Stephen Fowler · Jun 10, 2023, 6:50 AM · 4 points · 0 comments · 1 min read
- cloud seeding doesn’t work · bhauth · Jun 10, 2023, 5:14 AM · 7 points · 2 comments · 1 min read
- [FICTION] Unboxing Elysium: An AI’S Escape · Super AGI · Jun 10, 2023, 4:41 AM UTC · −16 points · 4 comments · 14 min read
- [FICTION] Prometheus Rising: The Emergence of an AI Consciousness · Super AGI · Jun 10, 2023, 4:41 AM UTC · −14 points · 0 comments · 9 min read
- formalizing the QACI alignment formal-goal · Tamsin Leake and JuliaHP · Jun 10, 2023, 3:28 AM UTC · 54 points · 6 comments · 13 min read · (carado.moe)