
Mikhail Samin

Karma: 697

My name is Mikhail Samin (diminutive Misha, @Mihonarium on Twitter, @misha in Telegram).

I work on reducing existential risks endangering the future of humanity. Humanity’s future can be huge and bright; losing it would mean the universe losing most of its value.

My research is currently focused on AI alignment, AI governance, and improving the understanding of AI and AI risks among stakeholders. Numerous AI safety researchers have told me that our conversations improved their understanding of the alignment problem. I’m happy to talk to policymakers and researchers about ensuring AI benefits society.

I believe a capacity for global regulation is necessary to mitigate the risks posed by future general AI systems.

I took the Giving What We Can pledge to donate at least 10% of my income for the rest of my life or until the day I retire (why?).

In the past, I launched the most-funded crowdfunding campaign in the history of Russia (it was to print HPMOR! We printed 21,000 copies, which comes to 63,000 physical books) and founded audd.io, which allowed me to donate >$100k to EA causes, including >$60k to MIRI.

[Less important: I’ve also started a project to translate 80,000 Hours, a career guide that helps people find a fulfilling career that does good, into Russian. The impact and the effectiveness aside, for a year I was the head of the Russian Pastafarian Church: a movement claiming to be a parody religion, with 215,000 members in Russia at the time, trying to increase the separation between religious organisations and the state. I was a political activist and a human rights advocate. I studied relevant Russian and international law and wrote appeals that won cases against the Russian government in courts; I was able to protect people from unlawful police action. I co-founded the Moscow branch of the “Vesna” democratic movement, coordinated election observers in a Moscow district, wrote dissenting opinions for members of electoral commissions, helped Navalny’s Anti-Corruption Foundation, helped Telegram with internet censorship circumvention, and participated in and organized protests and campaigns. The large-scale goal was to build a civil society and turn Russia into a democracy through nonviolent resistance. This goal wasn’t achieved, but some of the more local campaigns were successful. That felt important and was also mostly fun, except for being detained by the police. And I think it’s likely the Russian authorities will throw me in prison if I ever visit Russia.]

A transcript of the TED talk by Eliezer Yudkowsky

Mikhail Samin · 12 Jul 2023 12:12 UTC
103 points
13 comments · 4 min read · LW link

AI pause/governance advocacy might be net-negative, especially without focus on explaining the x-risk

Mikhail Samin · 27 Aug 2023 23:05 UTC
81 points
9 comments · 6 min read · LW link

Claude 3 claims it’s conscious, doesn’t want to die or be modified

Mikhail Samin · 4 Mar 2024 23:05 UTC
69 points
101 comments · 14 min read · LW link

Visible loss landscape basins don’t correspond to distinct algorithms

Mikhail Samin · 28 Jul 2023 16:19 UTC
65 points
13 comments · 4 min read · LW link

Try to solve the hard parts of the alignment problem

Mikhail Samin · 18 Mar 2023 14:55 UTC
45 points
7 comments · 5 min read · LW link

NYT is suing OpenAI & Microsoft for alleged copyright infringement; some quick thoughts

Mikhail Samin · 27 Dec 2023 18:44 UTC
41 points
17 comments · 1 min read · LW link

FTX expects to return all customer money; clawbacks may go away

Mikhail Samin · 14 Feb 2024 3:43 UTC
33 points
1 comment · 1 min read · LW link
(www.nytimes.com)

You won’t solve alignment without agent foundations

Mikhail Samin · 6 Nov 2022 8:07 UTC
24 points
3 comments · 8 min read · LW link

[Question] I have thousands of copies of HPMOR in Russian. How to use them with the most impact?

Mikhail Samin · 3 Jan 2023 10:21 UTC
24 points
3 comments · 1 min read · LW link

A smart enough LLM might be deadly simply if you run it for long enough

Mikhail Samin · 5 May 2023 20:49 UTC
16 points
16 comments · 8 min read · LW link

An EA used deceptive messaging to advance their project; we need mechanisms to avoid deontologically dubious plans

Mikhail Samin · 13 Feb 2024 23:15 UTC
16 points
1 comment · 1 min read · LW link

Some quick thoughts on “AI is easy to control”

Mikhail Samin · 6 Dec 2023 0:58 UTC
14 points
9 comments · 7 min read · LW link