Towards_Keeperhood

Simon Skade

I did (mostly non-prosaic) alignment research between Feb 2022 and Aug 2025. (Won $10k in the ELK contest, participated in MLAB and SERI MATS 3.0 & 3.1, then did independent research. I mostly worked on an ambitious attempt to better understand minds in order to figure out how to create more understandable and pointable AIs. I started with agent foundations but then developed a more sciency agenda where I also studied concrete observations from language/linguistics, psychology, neuroscience (though I didn't study much here yet), and from tracking my thoughts on problems I solved (i.e. a good kind of introspection).)

I'm now exploring advocacy aimed at making it more likely that we get something like the MIRI treaty (ideally with a good exit plan like human intelligence augmentation, or possibly an alignment project with actually competent leadership).

Currently based in Germany.