(ha ha but Epoch and Matthew/Tamay/Ege were never really safety-focused, and certainly not bright-eyed standard-view-holding EAs, I think)

Epoch has definitely described itself as safety-focused to me and others. And I don’t know man, this back-and-forth sure sounds to me like they were branding themselves as safety-conscious:
Ofer: Can you describe your meta process for deciding what analyses to work on and how to communicate them? Analyses about the future development of transformative AI can be extremely beneficial (including via publishing them and getting many people more informed). But getting many people more hyped about scaling up ML models, for example, can also be counterproductive. Notably, The Economist article that you linked to shows your work under the title “The blessings of scale”. (I’m not claiming here that that particular article is net-negative; just that the meta process above is very important.)
Jaime: OBJECT LEVEL REPLY:
Our current publication policy is:
Any Epoch staff member can object when we announce intention to publish a paper or blogpost.
We then have a discussion about it. If we conclude that there is a harm and that the harm outweighs the benefits, we refrain from publishing.
If no consensus is reached we discuss the issue with some of our trusted partners and seek advice.
Some of our work that is not published is instead disseminated privately on a case-by-case basis. We think this policy has a good mix of being flexible and giving space for Epoch staff to raise concerns.
Zach: Out of curiosity, when you “announce intention to publish a paper or blogpost,” how often has a staff member objected in the past, and how often has that led to major changes or not publishing?
Jaime: I recall three in-depth conversations about particular Epoch products. None of them led to a substantive change in publication and content.
OTOH I can think of at least three instances where we decided not to pursue projects, or edited some information out of an article, guided by considerations like “we may not want to call attention to this topic”.
In general I think we are good at preempting when something might be controversial or could be presented in a less conspicuous framing, and acting on it.
As well as:
Thinking about the ways publications can be harmful is something that I wish were practiced more widely in the world, especially in the field of AI.
That being said, I believe that in EA, and in particular in AI Safety, the pendulum has swung too far—we would benefit from discussing these issues more openly.
In particular, I think that talking about AI scaling is unlikely to goad major companies into investing much more in AI (there are already huge incentives). And I think EAs and people otherwise invested in AI Safety would benefit from having access to the current best guesses of the people who spend the most time thinking about the topic.
This does not exempt Epoch and other people working on AI Strategy from the responsibility to be mindful of how their work could result in harm, but I felt it was important to argue for more openness on the margin.
Jaime directly emphasizes that increasing AI investment would be a reasonable and valid complaint about Epoch’s work if it were true! Look, man, if I asked this set of questions, got this set of answers, while the real answer is “Yes, we think it’s pretty likely we will use the research we developed at Epoch to launch a long-time-horizon-focused RL capabilities company”, then I sure would feel pissed (and am pissed).
I had conversations with maybe two dozen people evaluating the work of Epoch over the past few months, as well as with Epoch staff, and they were definitely generally assumed to be safety-focused (if sometimes from a worldview more focused on gradual disempowerment). I heard concerns that the leadership didn’t really care about existential risk, but nobody I talked to felt confident in that (though maybe I missed that).
They have definitely described themselves as safety-focused to me and others.
The original comment referenced (in addition to Epoch) “Matthew/Tamay/Ege”, yet you quoted Jaime to back up this claim. I think it’s important to distinguish who has said what when talking about what “they” have said. I, for one, have been openly critical of LW arguments for AI doom for quite a while now.
[I edited this comment to be clearer]
“They” is referring to Epoch as an entity, which the comment referenced directly. My guess is you just missed that?
ha ha but Epoch [...] were never really safety-focused, and certainly not bright-eyed standard-view-holding EAs, I think
Of course the views of the director of Epoch at the time are highly relevant to assessing whether Epoch as an institution was presenting itself as safety focused.
I didn’t miss it. My point is that Epoch has a variety of different employees and internal views.

I don’t understand this sentence in that case:
The original comment referenced “Matthew/Tamay/Ege”, yet you quoted Jaime to back up this claim.
But my claim is straightforwardly about the part that says “Epoch”, not the part about “Matthew/Tamay/Ege”, and for that the word of the director seems the most relevant.
I agree that we could additionally look at the Matthew/Tamay/Ege clause. I agree that you have been openly critical in many ways, and I find your actions here less surprising.
I was pushing back against the ambiguous use of the word “they”. That’s all.
ETA: I edited the original comment to be more clear.

Ah, yeah, that makes sense. I’ll also edit my comment to make it clear I am talking about the “Epoch” clause, to reduce ambiguity there.

Good point. You’re right [edit: about Epoch].
I should have said: the vibe I’ve gotten from Epoch and Matthew/Tamay/Ege in private in the last year is not safety-focused. (Not that I really know all of them.)
This comment suggests it was maybe a shift over the last year or two (but also emphasises that at least Jaime thinks AI risk is still serious): https://www.lesswrong.com/posts/Fhwh67eJDLeaSfHzx/jonathan-claybrough-s-shortform?commentId=X3bLKX3ASvWbkNJkH
I personally take AI risks seriously, and I think they are worth investigating and preparing for.
I have drifted towards a more skeptical position on risk in the last two years. This is due to a combination of seeing the societal reaction to AI, my participation in several risk evaluation processes, and AI unfolding more gradually than I expected 10 years ago.
Currently I am more worried about concentration in AI development and how unimproved humans will retain wealth over the very long term than I am about a violent AI takeover.
Personal view as an employee: Epoch has always been a mix of EAs/safety-focused people and people with other views. I don’t think our core mission was ever explicitly about safety, for a bunch of reasons, including that some of us were personally uncertain about AI risk, and that an explicit commitment to safety might have undermined the perceived neutrality/objectivity of our work. The mission was raising the standard of evidence for thinking about AI and informing people to hopefully make better decisions.
My impression is that Matthew, Tamay and Ege were among the most skeptical about AI risk and had relatively long timelines more or less from the beginning. They have contributed enormously to Epoch, and I think we’d have done much less valuable work without them. I’m quite happy that they have been working with us until now; they could have moved to do direct capabilities work or anything else at any point if they wanted, and I don’t think they lacked opportunities to do so.
Finally, Jaime is definitely not the only one who still takes risks seriously (at the very least, I also do), even if there have been shifts in relative concern about different types of risk (e.g. ASI takeover vs. gradual disempowerment).
Thank you, that is helpful information.
Jaime directly emphasizes that increasing AI investment would be a reasonable and valid complaint about Epoch’s work if it were true!
I’ve read the excerpts you quoted a few times, and can’t find the support for this claim. I think you’re treating the bolded text as substantiating it? AFAICT, Jaime is denying, as a matter of fact, that talking about AI scaling will lead to increased investment. It doesn’t look to me like he’s “emphasizing”, or really even admitting, that this claim would be a big deal if true. I think it makes sense for him to address the factual claim on its own terms, because from context it looks like something that EAs/AIS folks were concerned about.
For clarity: at the time of writing, I felt that was a valid concern.
Currently this is no longer compelling to me personally, though I think at least some of our stakeholders would be concerned if we published work that significantly sped up AI capabilities and investment, which is a perspective we keep in mind when deciding what to work on.
I never thought that just because something speeds up capabilities, it is automatically something we shouldn’t work on. We are willing to make trade-offs here in service of our core mission of improving the public understanding of the trajectory of AI. And in general we make a strong presumption in favour of freedom of knowledge.
Huh, by Gricean implicature it IMO clearly implies that if there were a strong case that it would increase investment, then that would be a relevant and important consideration. Why bring it up otherwise?
I am really quite confident in my read here. I agree Jaime is not being maximally explicit, but I would gladly take bets that >80% of random readers who listened to a conversation like this, or read a comment thread like this, would walk away thinking that Jaime considers whether AI scaling would increase as a result of this kind of work to be relevant and important.