Hi Ana! This shortform didn’t get much engagement, so I’d suggest posting it instead as a “question”. (Click on your name in the upper right corner of the screen, click “New Post”, then switch at the top from “Post” to “Question”.)
But before doing that, maybe think about how to make it easier for the reader. Oh, and although I am interested in your perspective, you probably shouldn’t share it right before asking me your questions, because that may influence my answers.
“how discourses surrounding AI Safety are structured and circulated”
By asking here, you will only see the part of that discourse that involves people who regularly read this website. Sure, many people here worry about AI Safety, but you may miss the opinions of people who don’t participate here—maybe because they deeply disagree with the local consensus.
My first question is: whose answers do you want?
people who actively do something about AI Safety, whether it’s writing papers, making videos, or at least blogging about it?
people who are kinda worried about AI Safety, but don’t actually do anything about it (e.g. I would be in this group)?
do you also want answers from people who visit this website but don’t worry about AI Safety?
It would probably help to make the questions more structured. One possibility would be to use something like Google Forms, but I think posting a “question” on Less Wrong will also work. Even then, it would help to highlight the specific questions. Something like this (a rough sketch of how the answers could be tallied follows after the list):
What is your involvement in AI Safety? Please name the specific activities you are doing (e.g. write papers, post blogs, work for an AI Safety non-profit...). If you are not doing anything special about AI Safety, just write “nothing”.
How worried are you about AI Safety, on a scale from 0 to 10?
How and when did you first learn about AI Safety? (When = the year; make a guess if you are not sure.)
How and when did you start getting involved in AI Safety? (Skip if the answer to #1 is “nothing”.)
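If you do end up with a pile of free-form answers (from a Less Wrong thread or a Google Forms export), here is a minimal sketch of how they could be tallied; the field names and example responses are hypothetical, just to illustrate the structure:

```python
from collections import Counter

# Hypothetical responses, one dict per respondent, mirroring the four questions above.
responses = [
    {"involvement": "nothing", "worry": 7, "first_learned": 2016, "got_involved": None},
    {"involvement": "write papers", "worry": 9, "first_learned": 2014, "got_involved": 2015},
    {"involvement": "post blogs", "worry": 5, "first_learned": 2020, "got_involved": 2021},
]

# Q1: which kinds of involvement show up, and how often?
print(Counter(r["involvement"] for r in responses))

# Q2: average worry on the 0-10 scale.
print(sum(r["worry"] for r in responses) / len(responses))

# Q3/Q4: gap in years between first hearing about AI Safety and getting involved.
print([r["got_involved"] - r["first_learned"] for r in responses if r["got_involved"] is not None])
```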
“these authors (...) suggest that there are narrative or discursive strategies that actors develop to position a situation they identify as problematic as one of common interest. That is, there are strategies of justification in which actors have mechanisms—that can be more or less effective—to mobilize shared values and thus appeal to a broader collective with the capacity to intervene.”
I am not a sociologist, and I am not sure how to decipher this text. (Which makes it difficult to answer your question.) What I see is “some people, when they see a problem, try to convince others that they also have the same problem”. Is that what you wanted to say? Is there anything more implied by saying it? (For example, “people try to convince others of X” might imply that X is actually false. Are those authors suggesting that when someone says “people, we have a problem”, they are lying? Or is that too paranoid a reading?) Otherwise, I am not sure what is so special about saying “guys, we have a problem”.
“I see that there is a very large network of organizations and actors, functioning as nodes around the world, currently researching and developing strategies to mobilize resources in favor of this situation they identify as problematic.”
It might help to quantify what you mean by “a very large” network. Compared to what? Problems such as global warming probably get 1000× more attention and resources, so from that perspective I might call the AI Safety network surprisingly small. Basically a few nerds writing papers, and sometimes a rich entrepreneur says in media “I worry that this thing we are building might kill us all”, and people go like “oh, you mean, it could kill the job market for creative artists? that is a serious concern indeed”, and then they change the topic.
(So maybe, as a part of your research, try to make a list of those groups, and ask them how many people work for them? Maybe count employees and volunteers separately. If you get headcounts for the 10 largest organizations, the total across the whole network will probably be somewhere between 1.5× and 5× of that sum.)
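As a toy illustration of that back-of-the-envelope estimate (the headcounts below are made up, not real data):

```python
# Hypothetical headcounts for the 10 largest organizations (made-up numbers).
top_10_headcounts = [60, 45, 40, 30, 25, 20, 15, 12, 10, 8]
top_10_total = sum(top_10_headcounts)  # 265 in this toy example

# Rough bounds for the whole network, using the 1.5x-5x heuristic above.
low, high = 1.5 * top_10_total, 5 * top_10_total
print(f"top 10 total: {top_10_total}, network estimate: {low:.0f} to {high:.0f}")
```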
Where the money for AI Safety comes from is an interesting question, and you could try tracking down the sources. If the source is “a university”, look further: university projects are paid by grants, so where does the grant come from? Government sources? Rich individual sponsors? Fundraising?
Again, I translate this paragraph as “some people try to address the danger of AI by doing research, and other people try to get them funding”. Is there an important nuance that I missed?
To summarize, as I see it, your research splits into two parts that require different strategies:
Data about people who post on Less Wrong and are concerned about AI Safety—how do they feel about the situation, when and how did they get involved, etc. This you can figure out by asking here.
Data about the AI Safety network—who are the major organizations, how many people work for them, where do they get money from, etc. This is the kind of research journalists do; you don’t need to survey many people, you just need to get the answers from the right ones. You could e.g. ask here “what are the most important AI Safety organizations”, and then try to find more information on their web pages, or by asking them directly.
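To keep that second part organized, here is a minimal sketch of the kind of table you might build up while researching the organizations; the names and numbers are placeholders, not real data:

```python
import csv

# Placeholder records: organization name, paid employees, volunteers, main funding sources.
organizations = [
    {"name": "Example Org A", "employees": 30, "volunteers": 10, "funding": "individual donors"},
    {"name": "Example Org B", "employees": 12, "volunteers": 50, "funding": "university grants; government"},
]

# Save the collected answers to a CSV file for later analysis.
with open("ai_safety_orgs.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "employees", "volunteers", "funding"])
    writer.writeheader()
    writer.writerows(organizations)
```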
(When you have some data collected, it might make sense to post it here as “this is what I have figured out” and let people add some feedback. Make it a post, not a shortform.)