RomanS

Karma: 819

A sufficiently paranoid non-Friendly AGI might self-modify itself to become Friendly

RomanS · 22 Sep 2021 6:29 UTC
5 points
2 comments · 1 min read · LW link

Steelman arguments against the idea that AGI is inevitable and will arrive soon

RomanS · 9 Oct 2021 6:22 UTC
20 points
12 comments · 5 min read · LW link

Resurrecting all humans ever lived as a technical problem

RomanS · 31 Oct 2021 18:08 UTC
48 points
36 comments · 7 min read · LW link

Exterminating humans might be on the to-do list of a Friendly AI

RomanS · 7 Dec 2021 14:15 UTC
5 points
8 comments · 2 min read · LW link

[Linkpost] Chinese government's guidelines on AI

RomanS · 10 Dec 2021 21:10 UTC
61 points
14 comments · 1 min read · LW link

A fate worse than death?

RomanS · 13 Dec 2021 11:05 UTC
−25 points
26 comments · 2 min read · LW link

Consume fiction wisely

RomanS · 21 Jan 2022 20:23 UTC
−9 points
56 comments · 5 min read · LW link

Predicting a global catastrophe: the Ukrainian model

RomanS · 7 Apr 2022 12:06 UTC
5 points
11 comments · 2 min read · LW link

[Linkpost] A Chinese AI optimized for killing

RomanS · 3 Jun 2022 9:17 UTC
−2 points
4 comments · 1 min read · LW link

[Linkpost] The final AI benchmark: BIG-bench

RomanS · 10 Jun 2022 8:53 UTC
25 points
21 comments · 1 min read · LW link

[Question] What if LaMDA is indeed sentient / self-aware / worth having rights?

RomanS · 16 Jun 2022 9:10 UTC
22 points
13 comments · 1 min read · LW link

A sufficiently paranoid paperclip maximizer

RomanS · 8 Aug 2022 11:17 UTC
17 points
10 comments · 2 min read · LW link

[Question] What are some good arguments against building new nuclear power plants?

RomanS · 12 Aug 2022 7:32 UTC
16 points
15 comments · 2 min read · LW link

Another problem with AI confinement: ordinary CPUs can work as radio transmitters

RomanS · 14 Oct 2022 8:28 UTC
35 points
1 comment · 1 min read · LW link
(news.softpedia.com)

[Question] Is it a coincidence that GPT-3 requires roughly the same amount of compute as is necessary to emulate the human brain?

RomanS · 10 Feb 2023 16:26 UTC
12 points
10 comments · 1 min read · LW link

How to survive in an AGI cataclysm

RomanS · 23 Feb 2023 14:34 UTC
−4 points
3 comments · 4 min read · LW link

[Question] Are we too confident about unaligned AGI killing off humanity?

RomanS · 6 Mar 2023 16:19 UTC
21 points
63 comments · 1 min read · LW link

Project “MIRI as a Service”

RomanS · 8 Mar 2023 19:22 UTC
42 points
4 comments · 1 min read · LW link

The humanity's biggest mistake

RomanS · 10 Mar 2023 16:30 UTC
0 points
1 comment · 2 min read · LW link

The dreams of GPT-4

RomanS · 20 Mar 2023 17:00 UTC
14 points
7 comments · 9 min read · LW link