Personal view as an employee: Epoch has always been a mix of EAs/safety-focused people and people with other views. I don’t think our core mission was ever explicitly about safety, for several reasons: some of us were personally uncertain about AI risk, and an explicit commitment to safety might have undermined the perceived neutrality and objectivity of our work. The mission was to raise the standard of evidence for thinking about AI and to inform people so they could hopefully make better decisions.
My impression is that Matthew, Tamay and Ege were among the most skeptical about AI risk and had relatively long timelines more or less from the beginning. They have contributed enormously to Epoch, and I think we’d have done much less valuable work without them. I’m quite happy that they have been working with us until now; they could have moved into direct capabilities work or anything else at any point if they had wanted to, and I don’t think they lacked opportunities to do so.
Finally, Jaime is definitely not the only one who still takes risks seriously (at the very least, I also do), even if there have been shifts in relative concern about different types of risk (e.g., ASI takeover vs. gradual disempowerment).
Thank you, that is helpful information.