
yanni

Karma: 207

Director & Movement Builder - AI Safety ANZ

Advisory Board Member (Growth) - Giving What We Can

The catchphrase I walk around with in my head regarding the optimal strategy for AI Safety is something like: creating Superintelligent Artificial Agents* (SAA) without a worldwide referendum is ethically unjustifiable. Until a consensus is reached on whether to bring such technology into existence, a global moratorium is required. (*We already have AGI.)

I thought it might be useful to spell that out.

[Question] Who is testing AI Safety public outreach messaging?

yanni · 16 Apr 2023 6:57 UTC
13 points
2 comments · 1 min read · LW link

An Update On The Campaign For AI Safety Dot Org

yanni · 5 May 2023 0:21 UTC
−11 points
2 comments · 1 min read · LW link

[Question] A Question For People Who Believe In God

yanni · 24 Nov 2023 5:22 UTC
3 points
38 comments · 1 min read · LW link

[Question] How has internalising a post-AGI world affected your current choices?

yanni · 5 Feb 2024 5:43 UTC
10 points
8 comments · 1 min read · LW link

Some questions for the people at 80,000 Hours

yanni · 14 Feb 2024 23:15 UTC
1 point
0 comments · 1 min read · LW link
(forum.effectivealtruism.org)

[Question] Does increasing the power of a multimodal LLM get you an agentic AI?

yanni · 23 Feb 2024 4:14 UTC
3 points
3 comments · 1 min read · LW link

Apply to be a Safety Engineer at Lockheed Martin!

yanni · 31 Mar 2024 21:02 UTC
86 points
3 comments · 1 min read · LW link