(Posting in a personal capacity unless stated otherwise.) I help allocate Open Phil’s resources to improve the governance of AI with a focus on avoiding catastrophic outcomes. Formerly co-founder of the Cambridge Boston Alignment Initiative, which supports AI alignment/safety research and outreach programs at Harvard, MIT, and beyond, co-president of Harvard EA, Director of Governance Programs at the Harvard AI Safety Team and MIT AI Alignment, and occasional AI governance researcher.
Not to be confused with the user formerly known as trevor1.
“We should be devoting almost all of global production...” and “we must help them increase” only hold if:
There is no other species whose product of [moral weight] * [population] is higher than that of bees (see the sketch below), and
Our actions have moral relevance only for beings that are currently alive.
(And, you know, total utilitarianism and such.)
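To make the first condition concrete, here is a minimal sketch of the comparison it requires. The species choices and all numbers are purely illustrative placeholders, not actual moral-weight or population estimates.

```python
# A sketch of the comparison in the first condition, using placeholder
# numbers only -- NOT real moral-weight or population estimates.

# Hypothetical (moral weight, population) pairs for a few candidate species.
species = {
    "bees":      {"moral_weight": 0.01,   "population": 1e12},  # placeholder values
    "ants":      {"moral_weight": 0.001,  "population": 1e16},  # placeholder values
    "nematodes": {"moral_weight": 0.0001, "population": 1e20},  # placeholder values
}

def aggregate_weight(entry):
    # The quantity the first condition compares across species.
    return entry["moral_weight"] * entry["population"]

# Rank species by moral_weight * population, highest first.
ranked = sorted(species.items(), key=lambda kv: aggregate_weight(kv[1]), reverse=True)
for name, entry in ranked:
    print(f"{name}: {aggregate_weight(entry):.3g}")

# The "devote almost all of global production to bees" conclusion only goes
# through if bees top this ranking; if any other species does, the same
# argument points there instead.
```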