Nice, thank you! In a much grayer area of [has some non-trivial identity as "AI safety person"], I assume there'd be lots of people, some with relevant power. This would include some heads of research at big companies. Maybe you meant to cover them with "People at labs who are basically AI researchers", but my point is that, while they'd be less coded as "AI safety", they would still e.g.
Pay lip service to safety;
Maybe even bring it up sometimes internally;
Happily manufacture what boils down to lies for funders regarding technical safety;
Internally think of themselves as doing something good and safe for humanity;
Internally think of themselves as being reasonable and responsible regarding AI.
Further, this would include many AI researchers in academia. People around Silicon Valley / LW / etc. tend to discount academia, but I don't.