
Prometheus

Karma: 395

[Question] Why do so many think deception in AI is important?

Prometheus · 13 Jan 2024 8:14 UTC
23 points
12 comments · 1 min read · LW link

Back to the Past to the Future

Prometheus · 18 Oct 2023 16:51 UTC
5 points
0 comments · 1 min read · LW link

[Question] Why aren’t more people in AIS familiar with PDP?

Prometheus · 1 Sep 2023 15:27 UTC
4 points
9 comments · 1 min read · LW link

Why Is No One Trying To Align Profit Incentives With Alignment Research?

Prometheus · 23 Aug 2023 13:16 UTC
51 points
11 comments · 4 min read · LW link

Slaying the Hydra: toward a new game board for AI

Prometheus · 23 Jun 2023 17:04 UTC
0 points
5 comments · 6 min read · LW link

Lightning Post: Things people in AI Safety should stop talking about

Prometheus · 20 Jun 2023 15:00 UTC
23 points
6 comments · 2 min read · LW link

Aligned Objectives Prize Competition

Prometheus · 15 Jun 2023 12:42 UTC
8 points
0 comments · 2 min read · LW link
(app.impactmarkets.io)

Prometheus’s Shortform

Prometheus · 13 Jun 2023 23:21 UTC
3 points
20 comments · 1 min read · LW link

Using Consensus Mechanisms as an approach to Alignment

Prometheus · 10 Jun 2023 23:38 UTC
9 points
2 comments · 6 min read · LW link

Humans are not prepared to operate outside their moral training distribution

Prometheus · 10 Apr 2023 21:44 UTC
36 points
1 comment · 3 min read · LW link

Widening Overton Window—Open Thread

Prometheus · 31 Mar 2023 10:03 UTC
23 points
8 comments · 1 min read · LW link

4 Key Assumptions in AI Safety

Prometheus · 7 Nov 2022 10:50 UTC
20 points
5 comments · 7 min read · LW link

Five Areas I Wish EAs Gave More Focus

Prometheus · 27 Oct 2022 6:13 UTC
15 points
18 comments · 1 min read · LW link

The Twins

Prometheus · 28 Dec 2020 1:26 UTC
3 points
3 comments · 6 min read · LW link