I’m a research scientist at Anthropic doing empirical safety research on language models. In the past, I’ve worked on automated red teaming of language models [1], the inverse scaling prize [2], learning from human feedback [3][4], and empirically testing debate [5][6], iterated amplification [7], and other methods [8] for scalably supervising AI systems as they become more capable.
Website: https://ethanperez.net/
I’m curious why you believe that having products will be helpful. A few particular considerations I’d be interested to hear your take on:
There seems to be abundant EA donor funding available from sources like FTX, without the need for a product or for attracting non-EA investors
Products require significant resources to build and maintain
Profitable products are also especially prone to accelerating race dynamics