There are some people in AI safety who I feel most aligned with: they're doing some of the most important work advocating for AI x-risk being a big deal, and doing the best job of resisting the siren call of AI capabilities work. I've noticed a pattern where these same people tend to have a hair trigger for judging who's behaving unethically, and will aggressively call people out for doing things that I don't see as that big a deal. In fact, some of these people have called each other out.
My best guess for what’s going on here is that the sort of person who advocates strongly for what they believe in will sometimes have false positives, and they will advocate strongly for those false positives, too.
There are also people who are very careful and analytical, and who don’t start fights, but they aren’t leading advocacy efforts either. I can think of a couple examples of people who do both, but they’re exceedingly rare.