FWIW, I think that the GDM safety people are at least as similar to the Constellation/Redwood/METR cluster as the Anthropic safety people are, probably more similar. (And Anthropic as a whole has very different beliefs from the Constellation cluster, e.g. not putting much credence on misalignment risk.)
Oh agreed, I didn't notice that the OP lumped Anthropic and Constellation together. I would consider GDM, Constellation, and Anthropic to all be separate clusters.
I feel like GDM safety and Constellation are similar enough to be in the same cluster: I bet within-cluster variance is bigger than between-cluster variance.
Yeah I think that’d be reasonable too. You could talk about these clusters at many different levels of granularity, and there are tons I haven’t named.
In the context of AI safety views that are less correlated/more independent, I would personally highlight the GDM work related to causality. GDM is the only major AI-related organization I can think of that has a critical mass of interest in this line of research. It's a bit different since it's not a full-on framework for addressing AGI, but it's a distinct (and in my view under-appreciated) line of work that takes a different perspective and draws on different concepts/fields than most other approaches.
I consider the DeepMind safety team to have its own scene that is distinct from any specific outside cluster, though closest to the Constellation one. Our AGI safety approach spells some of this out: https://deepmind.google/discover/blog/taking-a-responsible-path-to-agi/