Feel free to reach out, I like internet friends :)
I’m a Secular Humanist (but, like, for animals and possibly future digital minds too).
I like thinkers like Peter Singer, Nick Bostrom, Eliezer Yudkowsky, Scott Alexander, you know the drill. More recently also Scott Aaronson and Paul Christiano.
Outside of the Bayesian rat sphere, I am into thinkers like RMS, Cory Doctorow, Simone de Beauvoir, and David Graeber.
I like rationality and EA for the quirky, sincere, passion-driven movements that they are. I care a lot about values like truth, sanity, and the flourishing of all sentient beings.
As a counterexample to the idea that safety work isn't compute-constrained, here is a quote from an interpretability paper out of Anthropic, "Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet":
It seems likely that there are a lot of other compute-expensive experiments that would be helpful to run if safety compute were cheap for whatever reason.