RomanS · Karma: 819
Posts (Page 1)
A sufficiently paranoid non-Friendly AGI might self-modify itself to become Friendly
  RomanS · 22 Sep 2021 6:29 UTC · 5 points · 2 comments · 1 min read · LW link
Steelman arguments against the idea that AGI is inevitable and will arrive soon
  RomanS · 9 Oct 2021 6:22 UTC · 20 points · 12 comments · 5 min read · LW link
Resurrecting all humans ever lived as a technical problem
  RomanS · 31 Oct 2021 18:08 UTC · 48 points · 36 comments · 7 min read · LW link
Exterminating humans might be on the to-do list of a Friendly AI
  RomanS · 7 Dec 2021 14:15 UTC · 5 points · 8 comments · 2 min read · LW link
[Linkpost] Chinese government’s guidelines on AI
  RomanS · 10 Dec 2021 21:10 UTC · 61 points · 14 comments · 1 min read · LW link
A fate worse than death?
  RomanS · 13 Dec 2021 11:05 UTC · −25 points · 26 comments · 2 min read · LW link
Consume fiction wisely
  RomanS · 21 Jan 2022 20:23 UTC · −9 points · 56 comments · 5 min read · LW link
Predicting a global catastrophe: the Ukrainian model
  RomanS · 7 Apr 2022 12:06 UTC · 5 points · 11 comments · 2 min read · LW link
[Linkpost] A Chinese AI optimized for killing
  RomanS · 3 Jun 2022 9:17 UTC · −2 points · 4 comments · 1 min read · LW link
[Linkpost] The final AI benchmark: BIG-bench
  RomanS · 10 Jun 2022 8:53 UTC · 25 points · 21 comments · 1 min read · LW link
[Question] What if LaMDA is indeed sentient / self-aware / worth having rights?
  RomanS · 16 Jun 2022 9:10 UTC · 22 points · 13 comments · 1 min read · LW link
A sufficiently paranoid paperclip maximizer
  RomanS · 8 Aug 2022 11:17 UTC · 17 points · 10 comments · 2 min read · LW link
[Question] What are some good arguments against building new nuclear power plants?
  RomanS · 12 Aug 2022 7:32 UTC · 16 points · 15 comments · 2 min read · LW link
Another problem with AI confinement: ordinary CPUs can work as radio transmitters
  RomanS · 14 Oct 2022 8:28 UTC · 35 points · 1 comment · 1 min read · LW link (news.softpedia.com)
[Question] Is it a coincidence that GPT-3 requires roughly the same amount of compute as is necessary to emulate the human brain?
  RomanS · 10 Feb 2023 16:26 UTC · 12 points · 10 comments · 1 min read · LW link
How to survive in an AGI cataclysm
  RomanS · 23 Feb 2023 14:34 UTC · −4 points · 3 comments · 4 min read · LW link
[Question] Are we too confident about unaligned AGI killing off humanity?
  RomanS · 6 Mar 2023 16:19 UTC · 21 points · 63 comments · 1 min read · LW link
Project “MIRI as a Service”
  RomanS · 8 Mar 2023 19:22 UTC · 42 points · 4 comments · 1 min read · LW link
The humanity’s biggest mistake
  RomanS · 10 Mar 2023 16:30 UTC · 0 points · 1 comment · 2 min read · LW link
The dreams of GPT-4
  RomanS · 20 Mar 2023 17:00 UTC · 14 points · 7 comments · 9 min read · LW link