But if you instead start by addressing things like job risks, deepfakes, concentration of power, and totalitarianism (tangible, real issues people can see now), they may begin to open that door and then be more receptive to discussing and acting on existential risk, because they have momentum behind them.
I spent approximately a year at PauseAI (Global) soul-searching over this, and I’ve come to the conclusion that this strategy is not a good idea. This is something I’ve changed my mind about.
My original view was something like:
“If we convince people to get on board with pausing AI for any reason, they’ll eventually come around to the extinction risk concerns, in the same way that people who become vegan for e.g. environmental reasons usually come around to the moral concerns. This is more efficient than trying to hit people with the extinction risk concerns from the start, since more people will be open to listening to non-extinction concerns.”
I think this is wrong. Recruiting people for non-extinction reasons was harder than I expected. At one point I found a Facebook group of 80k people called “Artists Against Generative AI” and got the organizers to share our material there, and we got literally zero uptake from it. We ran a few media campaigns on copyright grounds and didn’t get much attention on those either. I’m still not sure why, but we just didn’t make headway. We didn’t get any wins to leverage, and we didn’t build any momentum. And even if we had, we would have been pointed in the wrong direction.
I now think something like this:
“Everything we do should have the threat of extinction front and centre. We might protest about specific things, but this is ‘wave’ content in the MIRI sense (I don’t know if MIRI is still doing this; I haven’t actually seen them put out any ‘wave’ content, but I could easily have missed it) and needs to be fed back into extinction threat concerns. Everything we talk about that isn’t about extinction is in some sense a compromise to The Current Moment, and we should be careful of this.”
Example: we recently protested DeepMind breaking their commitments from the Seoul summit. Whether or not they keep their commitments is probably not an X-risk lynchpin, but it is something that’s happening now, and it is genuinely bad for them to be defecting in this way. Our signage, our speeches, and our comedic skit all featured extinction risk as a/the major reason to be concerned about whether DeepMind is following their own commitments. This is still a compromise to The Current Moment: a compromise between pointing out 100% clear issues (DeepMind definitely broke their commitments, that isn’t debatable, and there is no regulation besides voluntary commitments) and pointing out the actual reason we care whether DeepMind is following their commitments.
Hi there! I apologize for not responding sooner to this very insightful comment; I really appreciate your perspective on my admittedly scatter-brained parent comment. Your comment has definitely caused me to reflect a bit on my own, and has updated me slightly away from my original position.
I feel I may have been a bit ignorant of the actual state of PauseAI, as, like I said in my original comments and replies, it felt like an organization dangerously close to becoming orphaned from people’s thought processes. I’m glad to hear there are some ways around the issue I described. Maybe write a top-level post about how this shift in understanding is benefiting your messaging to the general public? It may inform others of novel ways to spread a positive movement.