
Akash

Karma: 4,344

Akash’s Shortform

Akash · 18 Apr 2024 15:44 UTC
7 points
3 comments · 1 min read

Cooperating with aliens and AGIs: An ECL explainer

24 Feb 2024 22:58 UTC
53 points
8 comments · 1 min read

OpenAI’s Preparedness Framework: Praise & Recommendations

Akash · 2 Jan 2024 16:20 UTC
66 points
1 comment · 7 min read

Speaking to Congressional staffers about AI risk

4 Dec 2023 23:08 UTC
288 points
23 comments · 16 min read

Navigating emotions in an uncertain & confusing world

Akash · 20 Nov 2023 18:16 UTC
42 points
1 comment · 4 min read

International treaty for global compute caps

9 Nov 2023 18:17 UTC
22 points
2 comments · 8 min read

Chinese scientists acknowledge xrisk & call for international regulatory body [Linkpost]

Akash · 1 Nov 2023 13:28 UTC
44 points
4 comments · 1 min read
(www.ft.com)

Winners of AI Alignment Awards Research Contest

13 Jul 2023 16:14 UTC
114 points
3 comments · 12 min read
(alignmentawards.com)

AI Safety Newsletter #8: Rogue AIs, how to screen for AI risks, and grants for research on democratic governance of AI

30 May 2023 11:52 UTC
20 points
0 comments · 6 min read
(newsletter.safe.ai)

AI Safety Newsletter #7: Disinformation, Governance Recommendations for AI labs, and Senate Hearings on AI

23 May 2023 21:47 UTC
25 points
0 comments · 6 min read
(newsletter.safe.ai)

Eisenhower’s Atoms for Peace Speech

Akash · 17 May 2023 16:10 UTC
18 points
3 comments · 11 min read
(www.iaea.org)

AI Safety Newsletter #6: Examples of AI safety progress, Yoshua Bengio proposes a ban on AI agents, and lessons from nuclear arms control

16 May 2023 15:14 UTC
31 points
0 comments · 6 min read
(newsletter.safe.ai)

AI Safety Newsletter #5: Geoffrey Hinton speaks out on AI risk, the White House meets with AI labs, and Trojan attacks on language models

9 May 2023 15:26 UTC
28 points
1 comment · 4 min read
(newsletter.safe.ai)

AI Safety Newsletter #4: AI and Cybersecurity, Persuasive AIs, Weaponization, and Geoffrey Hinton talks AI risks

2 May 2023 18:41 UTC
32 points
0 comments · 5 min read
(newsletter.safe.ai)

Discussion about AI Safety funding (FB transcript)

Akash · 30 Apr 2023 19:05 UTC
75 points
8 comments · 1 min read

Reframing the burden of proof: Companies should prove that models are safe (rather than expecting auditors to prove that models are dangerous)

Akash · 25 Apr 2023 18:49 UTC
27 points
11 comments · 3 min read
(childrenoficarus.substack.com)

DeepMind and Google Brain are merging [Linkpost]

Akash · 20 Apr 2023 18:47 UTC
55 points
5 comments · 1 min read
(www.deepmind.com)

AI Safety Newsletter #2: ChaosGPT, Natural Selection, and AI Safety in the Media

18 Apr 2023 18:44 UTC
30 points
0 comments · 4 min read
(newsletter.safe.ai)

Request to AGI organizations: Share your views on pausing AI progress

11 Apr 2023 17:30 UTC
141 points
11 comments · 1 min read

AI Safety Newsletter #1 [CAIS Linkpost]

10 Apr 2023 20:18 UTC
45 points
0 comments · 4 min read
(newsletter.safe.ai)