So8res (Karma: 19,326)
Page 1
Why Corrigibility is Hard and Important (i.e. “Whence the high MIRI confidence in alignment difficulty?”)
Raemon, Eliezer Yudkowsky and So8res · 30 Sep 2025 0:12 UTC · 78 points · 50 comments · 17 min read
The Problem
Rob Bensinger, tanagrabeast, yams, So8res, Eliezer Yudkowsky and Gretta Duleba · 5 Aug 2025 21:40 UTC · 313 points · 218 comments · 26 min read
A case for courage, when speaking of AI danger
So8res · 27 Jun 2025 2:15 UTC · 519 points · 128 comments · 6 min read
Eliezer and I wrote a book: If Anyone Builds It, Everyone Dies
So8res · 14 May 2025 19:00 UTC · 648 points · 114 comments · 2 min read
LessWrong: After Dark, a new side of LessWrong
So8res · 1 Apr 2024 22:44 UTC · 36 points · 6 comments · 1 min read
Ronny and Nate discuss what sorts of minds humanity is likely to find by Machine Learning
So8res and Ronny Fernandez · 19 Dec 2023 23:39 UTC · 42 points · 30 comments · 25 min read
Quick takes on “AI is easy to control”
So8res · 2 Dec 2023 22:31 UTC · 26 points · 49 comments · 4 min read
Apocalypse insurance, and the hardline libertarian take on AI risk
So8res · 28 Nov 2023 2:09 UTC · 135 points · 40 comments · 7 min read · 1 review
Ability to solve long-horizon tasks correlates with wanting things in the behaviorist sense
So8res · 24 Nov 2023 17:37 UTC · 204 points · 84 comments · 5 min read · 1 review
How much to update on recent AI governance moves?
habryka and So8res · 16 Nov 2023 23:46 UTC · 112 points · 5 comments · 29 min read
Thoughts on the AI Safety Summit company policy requests and responses
So8res · 31 Oct 2023 23:54 UTC · 169 points · 14 comments · 10 min read
AI as a science, and three obstacles to alignment strategies
So8res · 25 Oct 2023 21:00 UTC · 194 points · 80 comments · 11 min read
A mind needn’t be curious to reap the benefits of curiosity
So8res · 2 Jun 2023 18:00 UTC · 78 points · 14 comments · 1 min read
Cosmopolitan values don’t come free
So8res · 31 May 2023 15:58 UTC · 138 points · 87 comments · 1 min read
Sentience matters
So8res · 29 May 2023 21:25 UTC · 144 points · 96 comments · 2 min read
Request: stop advancing AI capabilities
So8res · 26 May 2023 17:42 UTC · 154 points · 24 comments · 1 min read
Would we even want AI to solve all our problems?
So8res · 21 Apr 2023 18:04 UTC · 98 points · 15 comments · 2 min read
How could you possibly choose what an AI wants?
So8res · 19 Apr 2023 17:08 UTC · 109 points · 19 comments · 1 min read
But why would the AI kill us?
So8res · 17 Apr 2023 18:42 UTC · 140 points · 96 comments · 2 min read
Misgeneralization as a misnomer
So8res · 6 Apr 2023 20:43 UTC · 128 points · 22 comments · 4 min read