My sense is that Jaime’s view (and Epoch’s view more generally) is more like: “making people better informed about AI in a way that is useful to them seems heuristically good (given that AI is a big deal); it doesn’t seem that useful or important to have a very specific theory of change beyond this”. From this perspective, saying “concerns about existential risk from AI are not among the primary motivations” is somewhat confused, as the heuristic isn’t necessarily back-chained from any more specific justification. There is no specific terminal motivation.
Like, consider someone who donates to GiveDirectly because “idk, seems heuristically good to empower the worst-off people” versus someone who funds global health and well-being out of a specific concern for ongoing human welfare (putting aside AI for now). The first person’s heuristic is partially motivated via flow-through from caring about something like welfare, even though welfare doesn’t directly show up in it. These people seem like natural allies to me except in surprising circumstances (e.g., it turns out the worst-off people use marginal money/power in a way that is net negative for human welfare).
I agree that there is some ontological mismatch here, but I think your position is still in pretty clear conflict with what Neel said, which is what I was objecting to:
My understanding is that e.g. Jaime is sincerely motivated by reducing x-risk (though not 100% motivated by it), and just disagrees with me (and presumably you) about various empirical questions about how to go about it, what risks are most likely, what timelines are, etc.
“Not 100% motivated by it” sounds to me like it implies that being motivated by reducing x-risk makes up something like 30%–70% of the motivation. I don’t think that’s true, and I think various things Jaime has said make that relatively clear.
I think you’re conflating “does not think that slowing down AI obviously reduces x-risk” with “reducing x-risk is not a meaningful motivation for his work”. Jaime has clearly said that he believes x-risk is real and >=15% likely (though via different mechanisms than loss of control). I think that the public being well informed about AI generally reduces risk, that Epoch is doing good work on this front, and that increasing the probability that AI goes well is part of why Jaime works on this. It’s much less clear whether FrontierMath was good, but Jaime wasn’t very involved anyway, so that doesn’t seem super relevant.
I basically think the only thing he’s said that you could consider objectionable is that he’s reluctant to push for a substantial pause on AI, since x-risk is not the only thing he cares about. But he also (sincerely, IMO) expresses uncertainty about whether such a pause WOULD be good for x-risk.