
Martín Soto

Karma: 806

Mathematical Logic grad student, doing AI Safety research for ethical reasons.

Working on conceptual alignment, decision theory, cooperative AI and cause prioritization.

My webpage.

Leave me anonymous feedback.

[Question] Which one of these two academic routes should I take to end up in AI Safety?

Martín Soto · 3 Jul 2022 1:05 UTC
5 points
2 comments · 1 min read · LW link

Alignment being impossible might be better than it being really difficult

Martín Soto · 25 Jul 2022 23:57 UTC
13 points
2 comments · 2 min read · LW link

General advice for transitioning into Theoretical AI Safety

Martín Soto · 15 Sep 2022 5:23 UTC
11 points
0 comments · 10 min read · LW link

An issue with MacAskill’s Evidentialist’s Wager

Martín Soto · 21 Sep 2022 22:02 UTC
1 point
9 comments · 4 min read · LW link

[Question] Enriching Youtube content recommendations

Martín Soto · 27 Sep 2022 16:54 UTC
8 points
4 comments · 1 min read · LW link

Further considerations on the Evidentialist’s Wager

Martín Soto · 3 Nov 2022 20:06 UTC
3 points
9 comments · 8 min read · LW link

Vanessa Kosoy’s PreDCA, distilled

Martín Soto · 12 Nov 2022 11:38 UTC
17 points
19 comments · 5 min read · LW link