otto.barten

Karma: 530

Hardening against AI takeover is difficult, but we should try

otto.barten · 5 Nov 2025 16:25 UTC
11 points
0 comments · 5 min read · LW link
(www.existentialriskobservatory.org)

Space colonization and scientific discovery could be mandatory for successful defensive AI

otto.barten · 18 Oct 2025 4:57 UTC
16 points
0 comments · 1 min read · LW link

These are my reasons to worry less about loss of control over LLM-based agents

otto.barten · 18 Sep 2025 11:45 UTC
7 points
4 comments · 4 min read · LW link

We should think about the pivotal act again. Here’s a better version of it.

otto.barten · 28 Aug 2025 9:29 UTC
11 points
2 comments · 3 min read · LW link

AI Offense Defense Balance in a Multipolar World

17 Jul 2025 9:34 UTC
15 points
5 comments · 18 min read · LW link
(www.existentialriskobservatory.org)

Yes RAND, AI Could Really Cause Human Extinction [crosspost]

otto.barten · 20 Jun 2025 11:42 UTC
17 points
4 comments · 4 min read · LW link
(www.existentialriskobservatory.org)

US-China trade talks should pave way for AI safety treaty [SCMP crosspost]

otto.barten · 16 May 2025 16:55 UTC
10 points
0 comments · 3 min read · LW link

New AI safety treaty paper out!

otto.barten · 26 Mar 2025 9:29 UTC
15 points
2 comments · 4 min read · LW link

Proposing the Conditional AI Safety Treaty (linkpost TIME)

otto.barten · 15 Nov 2024 13:59 UTC
11 points
9 comments · 3 min read · LW link
(time.com)

Announcing the AI Safety Summit Talks with Yoshua Bengio

otto.barten · 14 May 2024 12:52 UTC
9 points
1 comment · 1 min read · LW link

What Failure Looks Like is not an existential risk (and alignment is not the solution)

otto.barten · 2 Feb 2024 18:59 UTC
14 points
12 comments · 9 min read · LW link

Announcing #AISummitTalks featuring Professor Stuart Russell and many others

otto.barten · 24 Oct 2023 10:11 UTC
17 points
1 comment · 1 min read · LW link

AI Regulation May Be More Important Than AI Alignment For Existential Safety

otto.barten · 24 Aug 2023 11:41 UTC
65 points
39 comments · 5 min read · LW link

[Crosspost] An AI Pause Is Humanity’s Best Bet For Preventing Extinction (TIME)

otto.barten · 24 Jul 2023 10:07 UTC
12 points
0 comments · 7 min read · LW link
(time.com)

[Crosspost] Unveiling the American Public Opinion on AI Moratorium and Government Intervention: The Impact of Media Exposure

otto.barten · 8 May 2023 14:09 UTC
7 points
0 comments · 6 min read · LW link
(forum.effectivealtruism.org)

[Crosspost] AI X-risk in the News: How Effective are Recent Media Items and How is Awareness Changing? Our New Survey Results.

otto.barten · 4 May 2023 14:09 UTC
5 points
0 comments · 9 min read · LW link
(forum.effectivealtruism.org)

[Crosspost] Organizing a debate with experts and MPs to raise AI xrisk awareness: a possible blueprint

otto.barten · 19 Apr 2023 11:45 UTC
8 points
0 comments · 4 min read · LW link
(forum.effectivealtruism.org)

Paper Summary: The Effectiveness of AI Existential Risk Communication to the American and Dutch Public

otto.barten · 9 Mar 2023 10:47 UTC
14 points
6 comments · 4 min read · LW link

Why Uncontrollable AI Looks More Likely Than Ever

8 Mar 2023 15:41 UTC
18 points
0 comments · 4 min read · LW link
(time.com)

Please help us communicate AI xrisk. It could save the world.

otto.barten · 4 Jul 2022 21:47 UTC
4 points
7 comments · 2 min read · LW link