So8res (Nate Soares) · Karma: 15,477
Ronny and Nate discuss what sorts of minds humanity is likely to find by Machine Learning · So8res and Ronny Fernandez · 19 Dec 2023 23:39 UTC · 35 points · 31 comments · 25 min read · LW link
Quick takes on “AI is easy to control” · So8res · 2 Dec 2023 22:31 UTC · 25 points · 49 comments · 4 min read · LW link
Apocalypse insurance, and the hardline libertarian take on AI risk · So8res · 28 Nov 2023 2:09 UTC · 122 points · 36 comments · 7 min read · LW link
Ability to solve long-horizon tasks correlates with wanting things in the behaviorist sense · So8res · 24 Nov 2023 17:37 UTC · 202 points · 82 comments · 5 min read · LW link
How much to update on recent AI governance moves? · habryka and So8res · 16 Nov 2023 23:46 UTC · 109 points · 4 comments · 29 min read · LW link
Thoughts on the AI Safety Summit company policy requests and responses · So8res · 31 Oct 2023 23:54 UTC · 169 points · 14 comments · 10 min read · LW link
AI as a science, and three obstacles to alignment strategies · So8res · 25 Oct 2023 21:00 UTC · 174 points · 78 comments · 11 min read · LW link
A mind needn’t be curious to reap the benefits of curiosity · So8res · 2 Jun 2023 18:00 UTC · 78 points · 14 comments · 1 min read · LW link
Cosmopolitan values don’t come free · So8res · 31 May 2023 15:58 UTC · 123 points · 82 comments · 1 min read · LW link
Sentience matters · So8res · 29 May 2023 21:25 UTC · 139 points · 93 comments · 2 min read · LW link
Request: stop advancing AI capabilities · So8res · 26 May 2023 17:42 UTC · 155 points · 23 comments · 1 min read · LW link
Would we even want AI to solve all our problems? · So8res · 21 Apr 2023 18:04 UTC · 97 points · 15 comments · 2 min read · LW link
How could you possibly choose what an AI wants? · So8res · 19 Apr 2023 17:08 UTC · 105 points · 19 comments · 1 min read · LW link
But why would the AI kill us? · So8res · 17 Apr 2023 18:42 UTC · 117 points · 86 comments · 2 min read · LW link
Misgeneralization as a misnomer · So8res · 6 Apr 2023 20:43 UTC · 129 points · 22 comments · 4 min read · LW link
If interpretability research goes well, it may get dangerous · So8res · 3 Apr 2023 21:48 UTC · 197 points · 10 comments · 2 min read · LW link
Hooray for stepping out of the limelight · So8res · 1 Apr 2023 2:45 UTC · 281 points · 24 comments · 1 min read · LW link
A rough and incomplete review of some of John Wentworth’s research · So8res · 28 Mar 2023 18:52 UTC · 173 points · 17 comments · 18 min read · LW link
A stylized dialogue on John Wentworth’s claims about markets and optimization · So8res · 25 Mar 2023 22:32 UTC · 159 points · 21 comments · 8 min read · LW link
Truth and Advantage: Response to a draft of “AI safety seems hard to measure” · So8res · 22 Mar 2023 3:36 UTC · 98 points · 9 comments · 5 min read · LW link