Simon Skade
I did (mostly non-prosaic) alignment research between Feb 2022 and Aug 2025. (Won $10k in the ELK contest, participated in MLAB and SERI MATS 3.0 & 3.1, then did independent research. I mostly worked on an ambitious attempt to better understand minds in order to figure out how to create more understandable and pointable AIs. I started with agent foundations but then developed a more science-driven agenda, where I also studied concrete observations from language/linguistics, psychology, and neuroscience (though I haven't studied much there yet), and from tracking my thoughts on problems I solved (i.e., a good kind of introspection).)
I’m now exploring advocacy aimed at making it more likely that we get something like the MIRI treaty (ideally with a good exit plan, such as human intelligence augmentation, or possibly an alignment project with actually competent leadership).
Currently based in Germany.
I also recently listened to the planecrash chapter “the meeting of their minds”, and while it’s not a lecture, it does contain a lot of interesting insights. It may seem like weird anthropics brainfuck to some people, though. And it definitely contains spoilers.
PS: Also check out this lecture. (EDIT: This is mostly “how to relate to beliefs” + “what the truth can destroy”, plus a short section that’s not linked in the post here.)
PPS: Also check out these insights from dath ilan.