Towards_Keeperhood

Karma: 1,015

Simon Skade

I did (mostly non-prosaic) alignment research between Feb 2022 and Aug 2025. (Won $10k in the ELK contest, participated in MLAB and SERI MATS 3.0 & 3.1, then did independent research. I worked a bit on ontology identification and then on an ambitious attempt to better understand minds, in order to figure out how to create more understandable and pointable AIs. I started with agent foundations but then developed a more science-oriented agenda, where I also studied concrete observations from language/linguistics, a little psychology and neuroscience, and from tracking my thoughts on problems I solved (i.e., a good kind of introspection).)

I’m now exploring advocacy aimed at making good international coordination more likely, so that we can navigate the AI transition more safely.

I’m also into rationality/​self-improvement.

Currently based in Germany.