Hi everyone! My name is Ana, and I am a sociology student doing a research project at the University of Buenos Aires. In this post, I'm going to tell you a little about the approach I'm using to understand how discourses surrounding AI Safety are structured and circulated, and I'm going to ask you some questions about your experiences.
For some time now I have been reading much of what is discussed on LessWrong and in other spaces where AI Safety work is published. Although, from what I understand and from what I saw in the LessWrong community surveys, these spaces are mostly influenced by the exact sciences and computer science, I have been reflecting on the contributions my own discipline can make to these issues. That's why I'm now doing a research project on the network of spaces, platforms, and public and private institutions in which discourses about AI Safety are configured. I'm also interested in learning about the trajectories of the people who get involved in these issues.
I know there are already publications here that address these topics, from the point of view of AI Governance, and I think it's relevant, from that perspective, to analyze how the risks associated with the development of artificial intelligence are interpreted, addressed, and problematized, and how this information is disseminated and put on the agenda. After all, the way the AI Safety problem is discursively framed and spread largely determines the ability to mobilize resources and to develop mechanisms for regulation and coordination.
That’s why I’m posting here, to be able to talk to you about two things:
1. First, I would really like to know more about you. To understand, from a micro-level perspective, how these information and dissemination networks are built, I'm interested in your stories: how you first learned about AI Safety, why it compelled you, and how you started getting involved. I'd also like to know about you and your trajectories more generally. I would really appreciate it if you could respond by sharing your stories about this.
2. On the other hand, I would very much like feedback on the way I'm thinking about this issue. I'm starting from a theoretical approach that analyzes how public problems are constructed. To summarize this approach very briefly: these authors (Gusfield, 1981, 2014; Cefaï, 2014; Boltanski, 1990) suggest that actors develop narrative or discursive strategies to position a situation they identify as problematic as one of common interest. That is, there are strategies of justification through which actors mobilize shared values, with varying degrees of effectiveness, in order to appeal to a broader collective with the capacity to intervene.
I see that happening in this community. There is a very large network of organizations and actors, functioning as nodes around the world, currently researching and developing strategies to mobilize resources around this situation they identify as problematic. Since this issue has emerged with particular strength only relatively recently, and since the existence of these risks is far from being a consensus in business and political spaces, I believe it is important to carry out this kind of research to understand these mechanisms. In the fields I move in, the social sciences and public policy in Latin America, this topic is barely present. As I said, how influential these discourses become will determine the resources available, and whether coordination mechanisms can be created to establish common regulations around AI Safety. What are your insights on this, and how do you see the current state of this network?
I really appreciate any response to this post.
P.S. English is not my first language, and I read it better than I write it, so I apologize for any mistakes.