Social Proof of Existential Risks from AGI

Letters:

Center for AI Safety Statement on AI Risk: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Prominent AI Researchers:

Yoshua Bengio: How Rogue AIs May Arise (CAIS signatory)
Geoffrey Hinton: Why neural net pioneer Geoffrey Hinton is sounding the alarm on AI (CAIS signatory)
Ilya Sutskever (CAIS signatory)
Max Tegmark (CAIS signatory)
Ray Kurzweil (CAIS signatory)
Stuart Russell (CAIS signatory)

Heads of Labs:

Demis Hassabis (CAIS signatory)
Sam Altman (CAIS signatory)
Dario Amodei: Lex Fridman (CAIS signatory)
Connor Leahy: Lex Fridman (CAIS signatory)

Prominent Scientists:

Stephen Hawking: BBC
Daniel Dennett (CAIS signatory)
Martin Rees (CAIS signatory)
Scott Aaronson (CAIS signatory)

Tech Leaders:

Bill Gates (CAIS signatory)
Peter Norvig (CAIS signatory)
Vitalik Buterin (CAIS signatory)
Jaan Tallinn (CAIS signatory)
Adam D’Angelo (CAIS signatory)
Dustin Moskovitz (CAIS signatory)

Politicians:

Ursula von der Leyen
António Guterres
Rishi Sunak
Prince Albert II
Naftali Bennett
Ted Lieu (CAIS signatory)
Audrey Tang (CAIS signatory)

Philosophers:

David Chalmers (CAIS signatory)
Toby Ord (CAIS signatory)
Will MacAskill (CAIS signatory)

Others:

Chris Anderson, Head of TED (CAIS signatory)
Lex Fridman (CAIS signatory)
