
Martín Soto

Karma: 797

Mathematical Logic grad student, doing AI Safety research for ethical reasons.

Working on conceptual alignment, decision theory, cooperative AI and cause prioritization.

My webpage.

Leave me anonymous feedback.

Conflict in Posthuman Literature

Martín Soto · 6 Apr 2024 22:26 UTC
38 points
1 comment · 2 min read · LW link
(twitter.com)

Comparing Alignment to other AGI interventions: Extensions and analysis

Martín Soto · 21 Mar 2024 17:30 UTC
7 points
0 comments · 4 min read · LW link

Comparing Alignment to other AGI interventions: Basic model

Martín Soto · 20 Mar 2024 18:17 UTC
12 points
4 comments · 7 min read · LW link

How disagreements about Evidential Correlations could be settled

Martín Soto · 11 Mar 2024 18:28 UTC
11 points
3 comments · 4 min read · LW link