I’m interested in doing in-depth dialogues to find cruxes. Message me if you’d like to try this.
I do alignment research, mostly in the vicinity of agent foundations. Currently I’m doing independent alignment research on ontology identification. In 2023 I was on Vivek’s team at MIRI; before that I did MATS 2, and before that a CS and neuroscience undergrad (thesis on statistical learning theory).