My main focuses at the moment:
▪ S-risk macrostrategy (e.g., which AI safety proposals decrease rather than increase s-risks?)
▪ Improving the exchange of knowledge within the s-risk community, along with other s-risk field-building projects.
Previously, I worked at organizations including EA Cambridge and EA France (community director), Existential Risk Alliance (research fellow), and the Center on Long-Term Risk (events and community associate).
I’ve conducted research on various longtermist topics (some of it posted on the EA Forum and here) and recently finished a Master’s in moral philosophy.
You can give me anonymous feedback here. :)
Thanks! I guess myopia is a specific example of one form of scope-insensitivity (the form that has to do with long-term thinking, according to this at least), yes.
> This is plausibly a beneficial alignment property, but like every plausibly beneficial alignment property, we don’t yet know how to instill them in a system via ML training.
I hadn’t followed the discussions around myopia and didn’t have this context (e.g., I thought maybe people didn’t find myopia promising at all to begin with), so thanks a lot. That’s very helpful.