I mean, why can’t the people in these LW conversations say something like “yeah, the lion’s share of the relevant power is held by people who don’t sincerely hold A or B / Type 1/2”?
Here are some groups who I think are currently relevant (not in any particular order, and without quite knowing where I’m going with this yet):
Nvidia (I hadn’t realized how huge they were till recently)
Sam Altman in particular
Demis Hassabis in particular
Dario Amodei in particular
OpenAI, Google, Google DeepMind, Anthropic, and maybe by now other leading labs (in slightly less particular than Sam/Demis/Dario)
The cluster of AI industry leaders of whom Sam/Demis/Dario are representative.
People at labs who are basically AI researchers (who might have at some point said the words “I do alignment research” because those are the mouthwords the culture at their company said, but weren’t meaningfully involved with safety efforts)
Anthropic safety engineers and similar
Eliezer in particular
The cluster including MIRI / Lightcone / relatively highly-bought-in friends
Oliver Habryka in particular
OpenPhil
Dustin Moskovitz in particular
Jaan Tallinn
“Constellation”
“AI Safety” egregore
“The EAgregore”
Future of Life Institute
Maybe Max Tegmark in particular, I’m not sure
Trump
MAGA
Elon Musk
Okay, writing that out turned out to take most of the time I felt like spending right now, but the next questions I have in mind are “who has power here, over what?” and “what is the ‘relevant’ power?”
But, roughly:
a) A ton of the above are “fake” in some sense.
b) On the world scale, the OpenPhil/Constellation/Anthropic cluster is relatively weak.
c) Within OpenPhil/Constellation/Anthropic, there are people more like Dario, Holden, Jack Clark, and Dustin, and people who are more rank-and-file-EA/AI-ish. I think the latter are fake in the way I think you think things are fake. I think the former are fake in a different way from how I think you think things are fake.
d) There are a ton of vague EA/AI-safety people that I think are fake in the way you think they are fake, but they don’t really matter except for The Median Researcher Problem.
Nice, thank you! In a much more gray area of [has some non-trivial identity as “AI safety person”], I assume there’d be lots of people, some with relevant power. This would include some heads of research at big companies. Maybe that’s what you meant by “People at labs who are basically AI researchers”, but I mean people who would be less coded as “AI safety” yet still would, e.g.:
Pay lip service to safety;
Maybe even bring it up sometimes internally;
Happily manufacture what boils down to lies for funders regarding technical safety;
Internally think of themselves as doing something good and safe for humanity;
Internally think of themselves as being reasonable and responsible regarding AI.
Further, this would include many AI researchers in academia. People around Silicon Valley / LW / etc. tend to discount academia, but I don’t.