Kaj—I think the key thing here is to try to avoid making AI safety a strongly partisan-coded issue (e.g. ‘it’s a Lefty thing’ or ‘it’s a Righty thing’), but instead to find persuasive arguments that appeal about equally strongly to people coming from different political and religious values.
So, for example, conservatives on average are more concerned about ‘the dignity of work’ and ‘the sanctity of marriage’, so I emphasized how ASI would undermine both (through mass unemployment and AI sexbots/waifus/deepfakes), because that’s how to get their attention and interest. Liberals, on average, may be more concerned about ‘economic inequality’, so when speaking with them it might be more effective to talk about how ASI could dramatically increase wealth differences between future AI trillionaires and ordinary unemployed people.
So it’s really about learning specific ways to appeal to different constituencies, given the values and concerns they already have—rather than making AI into a generally liberal or generally conservative cause. Hope that makes sense.
I’m confused how you can simultaneously suggest that this talk is about finding allies and building a coalition with conservatives, while also explicitly naming “rationalists” in your list of groups that are trying to destroy religion, with the approach instead being “demonizing anyone associated with building AI, including much of the AI safety community itself”.
I get the concern about “rationalists” being mentioned. It is true that many (but not all) rationalists tend to downplay the value of traditional religion, and that a minority of rationalists unfortunately have worked on AI development (including at DeepMind, OpenAI and Anthropic).
However, I don’t get the impression that this piece demonises the AI Safety community. It very much argues for concepts, like AI extinction risk, that came out of the AI Safety community, and in doing so it sets a base for AI Safety researchers (like Nate Soares) to talk with conservatives.
The piece is mostly focussed on demonising current attempts to develop ‘ASI’. I do think accelerating AI development is evil in the sense of ‘discontinuing life’. A culture that commits to not doing ‘evil’ also seems more robust at preventing a bad outcome than a culture that tries to prevent an estimated risk while weighing it against estimated benefits. Though I can see how a call to prevent ‘evil’ could result in a movement that causes other harms; that would need to be channeled with care.
Personally, I think it’s also important to build bridges to multiple communities, to show where all of us actually care about restricting the same reckless activities around the development and release of models. A lot of that does not require bringing up abstract notions like ‘ASI’, which are hard to act on and easy to conflate. Rather, it requires engaging with each community’s concerns about specific company activities (e.g. mass surveillance, or the construction of hyperscale data centers in rural towns), in a way that enables robust action to curb those activities. The ‘building multiple bridges’ aspect is missing from Geoffrey’s talk, but the talk also seems focused on first making the case for why traditional conservatives should even care about this issue.
If we actually care to reduce the risk, let’s focus the discussion on what this talk is advocating for, and whether that helps people in these communities orient toward reducing it.