Hi everyone! My name is Ana, I am a sociology student and I am doing a research project at the University of Buenos Aires. In this post, I’m going to tell you a little about the approach I’m working on to understand how discourses surrounding AI Safety are structured and circulated, and I’m going to ask you some questions about your experiences.
For some time now I have been reading many of the things discussed on Less Wrong and in other spaces where AI Safety work is published. Although, from what I understand and from what I saw in the Less Wrong censuses, these spaces are mostly shaped by the exact sciences and computer science, I have been reflecting on what my discipline can contribute to these issues. That's why I'm now doing a research project on the network of spaces, platforms, and public and private institutions in which discourses about AI Safety take shape. I'm also interested in learning about the trajectories of the people who get involved in these issues.
I know there are already publications here that address these topics, from the point of view of AI Governance, and I think it's relevant, from that perspective, to analyze how the risks associated with the development of artificial intelligence are interpreted, addressed, and problematized, and how this information is disseminated and put on the agenda. After all, the way the AI Safety problem is discursively framed and spread largely determines the ability to mobilize resources and to develop mechanisms for regulation and coordination.
That’s why I’m posting here, to be able to talk to you about two things:
1. I would really like to know more about you. To understand, from a micro-level perspective, how these information and dissemination networks are built, I'm interested in your stories: how you first came to know about AI Safety, why these issues compelled you, and how you started getting involved. I'd also like to know about you and your trajectories in general. I would really appreciate it if you could respond by sharing your stories related to this.
2. On the other hand, I would very much like to get feedback on the way I'm thinking about this issue. I'm starting from a theoretical approach that analyzes how public problems are constructed. To summarize it very briefly: these authors (Gusfield, 1981, 2014; Cefaï, 2014; Boltanski, 1990) suggest that actors develop narrative or discursive strategies to position a situation they identify as problematic as a matter of common interest. That is, there are strategies of justification through which actors, more or less effectively, mobilize shared values and thus appeal to a broader collective with the capacity to intervene.
I see that happening in this community. I see that there is a very large network of organizations and actors, functioning as nodes around the world, currently researching and developing strategies to mobilize resources to address this situation they identify as problematic. Especially since this issue has emerged with particular strength only relatively recently, and given that the existence of these risks is far from being a consensus in business and political spaces, I believe it is important to carry out this kind of research to understand these mechanisms. In the fields I move in (social sciences and public policy in Latin America), this topic has very little presence. As I said, the capacity of these discourses to exert influence will determine the resources available, and whether coordination mechanisms can be created to establish common regulations around AI Safety. What are your insights on this, and how do you see the current situation of this network?
I really appreciate any response to this post.
P.S. English is not my first language and I read it better than I write it, so I apologize if there are any mistakes.
Hi Ana! This shortform didn’t get much engagement, so I’d suggest posting it instead as a “question”. (Click on your name in the upper right corner of the screen, click “New Post”, then switch at the top from “Post” to “Question”.)
But before doing that, maybe think about how to make it easier for a reader to follow. Oh, and although I am interested in your perspective, you probably shouldn't explain it right before asking me questions, because that may influence my answers.
> how discourses surrounding AI Safety are structured and circulated
By asking here, you will only reach the part of that network made up of people who regularly read this website. Sure, many people here worry about AI Safety, but you may miss the opinions of people who don't participate here, perhaps because they deeply disagree with the local consensus.
The first question I have is: who is the audience whose answers you want?
- people who actively do something about AI Safety, whether it's writing papers, making videos, or at least blogging about it?
- people who are kinda worried about AI Safety, but don't actually do anything about it? (e.g. I would be in this group)
- do you also want answers from people who visit this website but don't worry about AI Safety?
It would probably be easier to make the questions more structured. One possibility would be to use something like Google Forms, but I think posting a “question” on Less Wrong will also work. But even then it would help to highlight the specific questions. Something like this:
1. What is your involvement in AI Safety? Please name the specific activities you are doing (e.g. writing papers, posting blogs, working for an AI Safety non-profit...). If you are not doing anything special about AI Safety, just write “nothing”.
2. How worried are you about AI Safety, on a scale from 0 to 10?
3. How and when did you first learn about AI Safety? (When = a year, make a guess if you are not sure.)
4. How and when did you start getting involved in AI Safety? (Skip if the answer to #1 is “nothing”.)
> these authors (...) suggest that actors develop narrative or discursive strategies to position a situation they identify as problematic as a matter of common interest. That is, there are strategies of justification through which actors, more or less effectively, mobilize shared values and thus appeal to a broader collective with the capacity to intervene.
I am not a sociologist, and I am not sure how to decipher this text. (Which makes it difficult to answer your question.) What I see is “some people, when they see a problem, try to convince others that they also have the same problem”. Is that what you wanted to say? Is there anything implied by saying it? (For example, “people try to convince others of X” might imply that X is actually false. Are those authors suggesting that when someone says “people, we have a problem”, they are lying? Or is that just too paranoid a reading?) Otherwise, I am not sure what is so special about saying “guys, we have a problem”.
> I see that there is a very large network of organizations and actors, functioning as nodes around the world, currently researching and developing strategies to mobilize resources to address this situation they identify as problematic.
It might help to quantify what you mean by “a very large” network. Compared to what? Problems such as global warming probably get 1000× more attention and resources, so from that perspective I might call the AI Safety network surprisingly small. Basically a few nerds writing papers, and sometimes a rich entrepreneur says in the media “I worry that this thing we are building might kill us all”, and people go like “oh, you mean, it could kill the job market for creative artists? that is a serious concern indeed”, and then they change the topic.
(So maybe, as a part of your research, try to make a list of those groups, and maybe ask them how many people work for them? Maybe count separately employees and volunteers. If you get data for the 10 largest organizations, the final number will probably be somewhere between 1.5× and 5× that.)
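(If it helps, here is a minimal back-of-the-envelope sketch of that estimate in Python; the staff counts are made-up placeholders, not real figures for any organization.)

```python
# Rough estimate of the size of the AI Safety network, as described above:
# sum the staff of the ~10 largest organizations, then take 1.5x-5x of that
# sum as a plausible range for the whole network.
# NOTE: the numbers below are hypothetical placeholders, not real data.

top_10_staff = [40, 35, 30, 25, 20, 15, 15, 10, 10, 5]

core = sum(top_10_staff)          # people at the 10 largest orgs
low, high = 1.5 * core, 5 * core  # rough range once smaller groups are included

print(f"10 largest orgs: {core} people")
print(f"Whole network, roughly: {low:.0f} to {high:.0f} people")
```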
Where the money for AI Safety comes from is an interesting question, and you could try tracking down the sources. If the source is “university”, look further: university projects are paid for by grants, so where does the grant come from? Government sources? Rich individual sponsors? Fundraising?
Again, I translate this paragraph as “some people try to address the danger of AI by doing research, and other people try to get them funding”. Is there an important nuance that I missed?
To summarize, as I see your research, I would split it into two parts that require different strategies:
1. Data about people who post on Less Wrong and are concerned about AI Safety: how do they feel about the situation, when and how did they get involved, etc. This you can figure out by asking here.
2. Data about the AI Safety network: who are the major organizations, how many people work for them, where do they get money from, etc. This is the kind of research journalists do; you don't need to survey many people, you just need to get the answers from the right ones. You could e.g. ask here “what are the most important AI Safety organizations”, and then try to find more information on their web pages, or by asking them directly.
(When you have some data collected, it might make sense to post it here as “this is what I have figured out” and let people add some feedback. Make it a post, not a shortform.)