Mikhail Samin

Karma: 89

My name is Mikhail Samin (diminutive Misha, @Mihonarium on Twitter, @misha in Telegram).
I’m an effective altruist, I worry about the future of humanity and want the universe not to lose most of its value.

I took the Giving What We Can pledge to donate at least 10% of my income for the rest of my life or until the day I retire (why?).
I seem to have good intuitions about the AI alignment problem; some full-time alignment researchers have told me that talking to me improved their understanding of the problem.

I’m currently doing EA & AI alignment outreach (e.g., I’m organising a translation of 80,000 Hours’ Key Ideas series and partnering with Vert Dider on a translation and dubbing of Rob Miles’ videos) and considering switching to direct alignment research.

In the past, I launched the most-funded crowdfunding campaign in the history of Russia (it was to print HPMOR! We printed 21,000 copies, which is 63k physical books) and founded, which allowed me to donate >$50k to MIRI.

Wonder about the hard parts of the alignment problem

Mikhail Samin · 18 Mar 2023 14:55 UTC
37 points · 6 comments · 5 min read · LW link

Mikhail Samin’s Shortform

Mikhail Samin · 7 Feb 2023 15:30 UTC
2 points · 1 comment · 1 min read · LW link

[Question] I have thousands of copies of HPMOR in Russian. How to use them with the most impact?

Mikhail Samin · 3 Jan 2023 10:21 UTC
17 points · 3 comments · 1 min read · LW link

You won’t solve alignment without agent foundations

Mikhail Samin · 6 Nov 2022 8:07 UTC
21 points · 3 comments · 8 min read · LW link