Coordination Surveys: why we should survey to organize responsibilities, not just predictions

Summary: I think it’s important for surveys about the future of technology or society to check how people’s predictions of the future depend on their beliefs about what actions or responsibilities they and others will take on. Moreover, surveys should also help people calibrate their beliefs about those responsibilities by collecting feedback from participants about their individual plans. Successive surveys could then improve the group’s calibration as people update their plans upon hearing from each other. Further down, I’ll argue that not doing this (i.e., surveying only for predictions but not responsibilities) might even be actively harmful.

An example

Here’s an example of the type of survey question combination I’m advocating for, in the case of a survey of AI researchers about the future impact of AI.

Prediction about impact:

1) Do you think AI development will have a net positive or net negative impact on society over the next 30 years?

Prediction about responsibility/​action:

2) What fraction of AI researchers over the next 30 years will focus their full-time research attention on ensuring that AI is used for positive rather than negative societal impacts?

Feedback on responsibility/​action:

3) What is the chance that you, over the next 30 years, will transition to focusing your full-time research attention on ensuring that AI is used for positive rather than negative societal impacts?

I see a lot of surveys asking questions like (1), which is great, but not enough of (2) or (3). Asking (2) will help expose whether people think AI will be good because they expect other people to take responsibility for making it good. Asking (3) will help the survey respondents update, by letting them see whether their predictions in (2) match the responses of other survey respondents in (3).
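As a toy illustration of how a survey organizer might compare (2) and (3) — with entirely made-up numbers, not real survey data — one simple check is whether the fraction of researchers the group expects to work on safety (question 2) exceeds the fraction implied by respondents’ own stated plans (question 3):

```python
# Toy comparison of survey questions (2) and (3).
# All numbers below are invented for illustration.

# Each respondent's answer to (2): what fraction of AI researchers
# will focus full-time on safety? (as a proportion)
predicted_fraction = [0.20, 0.30, 0.25, 0.15, 0.30]

# Each respondent's answer to (3): what is the chance that *you*
# will focus full-time on safety? (as a probability)
own_probability = [0.05, 0.10, 0.02, 0.05, 0.08]

def mean(xs):
    return sum(xs) / len(xs)

# What the group expects of others, vs. what its own plans imply.
expected_by_group = mean(predicted_fraction)
implied_by_plans = mean(own_probability)

print(f"Group expects {expected_by_group:.0%} of researchers to focus on safety,")
print(f"but individual plans imply only {implied_by_plans:.0%} will.")

if implied_by_plans < expected_by_group:
    # This gap is exactly the feedback worth reporting back to
    # respondents: a possible bystander effect.
    print("Gap detected: expectations of others exceed stated plans.")
```

With these invented numbers, the group collectively expects 24% of researchers to take on the responsibility, while members’ own plans imply only 6% will, which is the kind of discrepancy a follow-up survey could surface for respondents to update on.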

How this helps

I’ve seen it happen that everyone thinks something is fine because someone else will deal with it. This sort of survey could help folks to notice when that’s not the case. In other words, it could help mitigate the bystander effect.

Similarly, I’ve also seen it happen that everyone gets worried about a thing because they think no one else is doing anything about it, and then they go around taking a bunch of more-drastic-than-necessary unilateral actions. This sort of survey can help to mitigate this sort of social malcoordination. That is, it could help mitigate the “unilateralist’s curse” (which I think is essentially just the opposite of the bystander effect).

Finally, surveying to coordinate feels like a more cooperative and agentic game than just collecting socially contingent predictions about what will happen in the future, as though the future is inevitable. It ascribes agency, rather than merely predictive power, to the group of survey respondents as a whole. And it suggests what parameters are available for the group to change the future, namely, the allocation of certain responsibilities.

The side-taking effect: why surveying for predictions alone can be actively bad

More is true. I claim that without adding this sort of coordination information to the group’s discourse, surveys about prediction can sometimes sow seeds of deep-rooted disagreement that actually make coordination harder to achieve. Here’s how it works:

Alex: “Why worry about AI safety? It would be silly to make AI unsafe, so surely someone will take responsibility for making it safe.”

Bailey: “You should definitely worry about AI safety, because many people are not taking responsibility for it.”

These views are strangely compatible and therefore hard to reconcile by evidence alone. Specifically, Alex is rightly predicting that people like Bailey will worry and take responsibility for safety, and Bailey is rightly predicting that people like Alex are not taking responsibility for it.

This causes Alex and Bailey to disagree with each other in a way that is fairly robust and difficult to settle without shifting the conversation to being about responsibilities instead of impact predictions. These persistent disagreements can lead to factionalization, where people end up divided over whether the responsibility in question (in the example, AI safety) is important or not. We end up with a lot of side-taking for or against the responsibility, without much discussion of how or when that responsibility will be distributed.

The organic version

It’s possible that surveys about predictions alone can still be net-good, because people naturally carry out discussions like (2) and (3) slowly and organically on their own. For instance, I’ve given, and seen others give, talks about the neglectedness of AI safety as an area of research, arguing from study results compiled by other researchers about the disparity between (a) the widespread opinion that AI safety is important, (b) the widespread opinion that AI safety will eventually be well taken care of as a research area, and (c) the widespread lack of funding for the topic, at least prior to 2015.

But this sort of organic responsibility-awareness development can take years or decades; at least, it seems to be taking that long in the case of “AI safety” as a responsibility. I’d like to see groups and communities develop a faster turnaround time for adopting and distributing responsibilities, and it seems to me that the sort of survey questions I’m proposing here can help with that.

My offer

If you’re a researcher who is already conducting a survey on the future of AI, even if you don’t see a way to incorporate the sort of methodology I’m suggesting for the particular questions you’re asking, I’d love a chance to see the content you have planned, in case I can come up with some suggestions myself. If you’re interested, you can email me about your upcoming survey at critch+upcoming-surveys@eecs.berkeley.edu.

(Please don’t use this email for other topics besides surveys that are already definitely going to happen soon; I don’t have a lot of availability to create new initiatives right now.)

If your survey isn’t about AI but about some other impactful technological or societal change, I think I’m less likely to be able to add much value to your thinking beyond the writing of this post, but I might be willing to try anyway, depending on my availability at the time.

Thanks for reading!