So8res · Karma: 17,322
Posts
Eliezer and I wrote a book: If Anyone Builds It, Everyone Dies · So8res · May 14, 2025 · 601 points · 103 comments · 2 min read · LW link
LessWrong: After Dark, a new side of LessWrong · So8res · Apr 1, 2024 · 36 points · 6 comments · 1 min read · LW link
Ronny and Nate discuss what sorts of minds humanity is likely to find by Machine Learning · So8res and Ronny Fernandez · Dec 19, 2023 · 42 points · 30 comments · 25 min read · LW link
Quick takes on “AI is easy to control” · So8res · Dec 2, 2023 · 26 points · 49 comments · 4 min read · LW link
Apocalypse insurance, and the hardline libertarian take on AI risk · So8res · Nov 28, 2023 · 134 points · 40 comments · 7 min read · LW link · 1 review
Ability to solve long-horizon tasks correlates with wanting things in the behaviorist sense · So8res · Nov 24, 2023 · 197 points · 84 comments · 5 min read · LW link · 1 review
How much to update on recent AI governance moves? · habryka and So8res · Nov 16, 2023 · 112 points · 5 comments · 29 min read · LW link
Thoughts on the AI Safety Summit company policy requests and responses · So8res · Oct 31, 2023 · 169 points · 14 comments · 10 min read · LW link
AI as a science, and three obstacles to alignment strategies · So8res · Oct 25, 2023 · 193 points · 80 comments · 11 min read · LW link
A mind needn’t be curious to reap the benefits of curiosity · So8res · Jun 2, 2023 · 78 points · 14 comments · 1 min read · LW link
Cosmopolitan values don’t come free · So8res · May 31, 2023 · 137 points · 85 comments · 1 min read · LW link
Sentience matters · So8res · May 29, 2023 · 143 points · 96 comments · 2 min read · LW link
Request: stop advancing AI capabilities · So8res · May 26, 2023 · 154 points · 24 comments · 1 min read · LW link
Would we even want AI to solve all our problems? · So8res · Apr 21, 2023 · 97 points · 15 comments · 2 min read · LW link
How could you possibly choose what an AI wants? · So8res · Apr 19, 2023 · 108 points · 19 comments · 1 min read · LW link
But why would the AI kill us? · So8res · Apr 17, 2023 · 139 points · 96 comments · 2 min read · LW link
Misgeneralization as a misnomer · So8res · Apr 6, 2023 · 129 points · 22 comments · 4 min read · LW link
If interpretability research goes well, it may get dangerous · So8res · Apr 3, 2023 · 201 points · 11 comments · 2 min read · LW link
Hooray for stepping out of the limelight · So8res · Apr 1, 2023 · 284 points · 26 comments · 1 min read · LW link
A rough and incomplete review of some of John Wentworth’s research · So8res · Mar 28, 2023 · 175 points · 18 comments · 18 min read · LW link