Epoch AI seem to be doing a lot of work that will accelerate AI capabilities research and development (e.g. informing investors and policy makers that yes, AI is a huge economic deal and here are the bottlenecks you should work around, and building capabilities benchmarks to optimize for). Under the common-around-LW assumption that no one could align AGI at this point, they are, by these means, increasing AI catastrophic and existential risk.
At a glance they also seem not to be making AI x-risk-reducing moves, like using their platform to mention that there are risks associated with AI, that these are not improbable, and that they require both technical work and governance to manage appropriately. This was salient to me in their latest podcast episode: speaking at length about AI replacing human workers in 5 to 10 years and the impact on the economy, without even hinting that there are risks associated with this, is burying the lede.
Given that Epoch AI is funded by Open Philanthropy and Jaan Tallinn, who on their face care about reducing AI x-risk, what am I missing? (non-rhetorical)
What is Epoch AI's theory of change for making the world better on net?
Overall, is Epoch AI increasing or reducing AI x-risk (on LW models, and on their own models)?
I wanted this to be just a short shortform post, but since I'm questioning a pretty big, possibly influential org under my true name, let me show good faith by proposing reasons that might contribute to what I'm seeing. For anyone unaware of their work, maybe check out their launch post, or their recent podcast episode.
OpenPhil is unsure about the magnitude of AI x-risk, so it invests in forecasting AI capabilities to know whether it should invest more in AI safety.
Epoch AI doesn't believe AI x-risk is likely, and believes that accelerating is overall better for humanity (seems true for some employees but not all).
Epoch AI believes that promoting the information that AGI is economically important and possibly coming soon is better, because governments will then govern it better than they would counterfactually.
Epoch AI is saying what they think is true, without selective omission, in order to avoid being deceptive (this doesn't mesh with the next reason).
Epoch AI believe that mentioning AI risks at this stage would hurt their platform and their influence, and are waiting for a riper opportunity or a better-argued paper.

Tagging a few Epoch AI folk who appeared in their podcast: @Jsevillamol @Tamay @Ege Erdil
I’m talking from a personal perspective here as Epoch director.
I personally take AI risks seriously, and I think they are worth investigating and preparing for.
I co-started Epoch AI to get evidence and clarity on AI and its risks and this is still a large motivation for me.
I have drifted towards a more skeptical position on risk in the last two years. This is due to a combination of seeing the societal reaction to AI, participating in several risk evaluation processes, and AI unfolding more gradually than I expected 10 years ago.
Currently I am more worried about concentration in AI development, and about whether unimproved humans will retain wealth over the very long term, than I am about a violent AI takeover.
I also selfishly care about AI development happening fast enough that my parents, my friends, and I could benefit from it, and I am willing to accept a certain but not unbounded amount of risk from speeding up development. I'd currently be in favour of slightly faster development, especially if it could happen in a less distributed way. I feel very nervous about this, however, as I see my beliefs as brittle.
I'm also going to risk sharing more internal stuff without coordinating on it, erring on the side of oversharing. There is a chance that other management at Epoch won't endorse these takes.
At the management level, we are choosing not to talk about risks or work on risk measurement publicly. If I try to verbalize why, it's due to a combination of: us having different beliefs on AI risk, which makes communicating from a consensus view difficult; believing that talking about risk would alienate us from stakeholders skeptical of AI risk; and the evidence on risk being below the bar we are comfortable writing about.
My sense is that OP is funding us primarily to gather evidence relevant to their own models. For example, two senior people at OP particularly praised our algorithmic progress paper because it directly informs their models. They also care about us producing legible evidence on key topics for policy, such as the software singularity or post-training enhancements. We have had complete editorial control, and I feel confident rejecting topic suggestions from OP staff when they don't match my vision of what we should be writing about (and have done so in the past).
In terms of overall beliefs, we have a mixture of people who are very worried about risk and people who are skeptical of it. I think the more charismatic and outspoken people at Epoch err towards being more skeptical of risk, but no one at Epoch is dismissive of it.
Some stakeholders I've talked to have expressed the view that they wish for Epoch AI to gain influence and then communicate publicly about AI risk. I don't feel comfortable with that strategy; one should expect Epoch AI to keep a similar level of communication about risk as we gain influence. We might be willing to talk more about risks if we gather more evidence of risk, or if we build more sophisticated tools to talk about it, but this isn't the niche we are filling or that you should expect us to fill.
The podcast is actually a good example here. Toward the end we talk about the share of the economy owned by biological humans becoming smaller over time, which is an abstraction we have studied and have moderate confidence in. This is compatible with an AI takeover scenario, but also with a peaceful transition to an AI-dominated society. This is the kind of communication about risks you can expect from Epoch: relying more on abstractions we have studied than on stories we don't have confidence in.
The overall theory of change of Epoch AI is that having reliable evidence on AI will help raise the standards of conversation and decision making elsewhere. To be maximally clear: in service of that mission, we are willing to make some tradeoffs, like publishing work (such as FrontierMath and our distributed training paper) that plausibly speeds up AI development.
This seems fine to me (you can see some reasons I like Epoch here). My understanding is that most Epoch staff are concerned about AI risk, though they tend towards longer timelines and maybe a lower p(doom) than many in the community, and they aren't exactly trying to keep this secret.
Your argument rests on an implicit premise that Epoch talking about "AI is risky" in their podcast is important, e.g. because it'd change the mind of some listeners. This seems fairly unlikely to me: it seems like a very inside-baseball podcast, mostly listened to by people already aware of AI risk arguments, and it seems likely that Epoch is somewhat part of the AI-risk-concerned community. And, generally, I don't think that all media produced by AI-risk-concerned people needs to mention that AI risk is a big deal; that just seems annoying and preachy. I see Epoch's impact story as informing people of where AI is likely to go and what's likely to happen, and this works fine even if they don't explicitly discuss AI risk.
I don't think that all media produced by AI-risk-concerned people needs to mention that AI risk is a big deal; that just seems annoying and preachy. I see Epoch's impact story as informing people of where AI is likely to go and what's likely to happen, and this works fine even if they don't explicitly discuss AI risk.
I don't think that every podcast episode should mention AI risk, but it would be pretty weird in my eyes to never mention it. Listeners would understandably infer that "these well-informed people apparently don't really worry much, maybe I shouldn't worry much either". I think rationalists easily underestimate how much other people's beliefs depend on what the people around them and their authority figures believe.
I think they have a strong platform to discuss risks occasionally. It also simply feels like part of "where AI is likely to go and what's likely to happen".
Listeners would understandably infer that “these well-informed people apparently don’t really worry much, maybe I shouldn’t worry much either”.
I think this is countered to a great extent by all the well-informed people who worry a lot about AI risk. I think the “well-informed people apparently disagree on this topic, I better look into it myself” environment promotes inquiry and is generally good for truth-seeking.
More generally, I agree with @Neel Nanda: it seems somewhat doubtful that people listening to a very niche Epoch podcast aren't aware of all the smart people worried about AI risk.
This post is now looking extremely prescient.