[Question] Your specific attitudes towards AI safety
Hi everyone! We would love your responses to this survey about AI risk attitudes given personal context: Google Forms link.
Edit: We updated the survey based on the feedback given in the comments. If you already answered it before, you don’t need to do it again.
Further information
The unique value proposition is to capture attitudes towards AI safety in the context of your initial exposure point, knowledge level, and demographics, whereas other surveys often focus on pure forecasting and field overviews. It sprang from this article and further discussion about how people initially came to know about AI safety. This survey focuses on people already engaged with AI safety, while the next survey will be reformulated for a non-EA, non-rationalist community to help in designing outreach strategies.
We have shared the survey in an array of groups and forums (EA Forum, AI Safety Discussion FB group and more) and expect ~100 responses. The specific contrasts we’re hoping to infer and analyze are:
Prior knowledge <> (% AGI developed <> % AGI dangerous)
(First exposure point <> Initial impressions) <> (Currently convincing arguments <> Current impressions)
(Occupation + Age + Country) <> [*]
We are expecting to publish the results on LessWrong and the EA Forum with in-depth exploratory analyses in light of the contrasts above.
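For readers wondering what analyzing these contrasts might look like in practice, here is a minimal, purely illustrative sketch in pandas. The file name and all column names (prior_knowledge, p_agi_developed, occupation, current_impression, and so on) are hypothetical placeholders, not the authors’ actual pipeline or field names.

```python
# Minimal sketch of an exploratory "contrast" analysis over a survey export.
# All column names and the file name are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("survey_responses.csv")  # hypothetical export

# Prior knowledge <> probability estimates: compare median probability
# answers across self-reported knowledge levels.
knowledge_contrast = df.groupby("prior_knowledge")[
    ["p_agi_developed", "p_agi_dangerous"]
].median()

# Demographics <> everything else: e.g. a cross-tabulation of occupation
# against current impression, normalized within each occupation.
demo_contrast = pd.crosstab(
    df["occupation"], df["current_impression"], normalize="index"
)

print(knowledge_contrast)
print(demo_contrast)
```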
Subjectivity is part of the survey and one of the reasons we made it. Be prepared for ambiguous questions.
This is a cool attempt to get some insight into what’s going on in AI safety! It makes sense we have had so few surveys directly about this, as they are seriously difficult to do correctly—especially in a field like AI safety where there is a lot of vagueness in how people think about positive and negative outcomes.
Most of the comments are and will be about how the survey could be better and more elaborate in what it’s asking, which would mean the survey would ideally be several pages long and take about 30 minutes to answer, at the least. And there’s a bit of a cut-off where, if a survey is too long, a large part of the community might not answer it anyway (the more time you have to spend on something, the more sure you want to be that it’s worth it), but with surveys it can be hard to know if they’re worth doing unless you specifically trust the person running them to do them well. Plus you’d need a good enough sample of the AI community answering the questions, which in itself will be difficult because you’ll get more responses from people who have time than from the people who actively work on AI safety and probably have less time for surveys (that being said, I think AI safety researchers would probably appreciate an AI survey as well?).
All this being said, depending on the response rates and how well we can trust their accuracy, it could be cool to see what we can get out of a survey like this, even if it’s just the knowledge of which kinds of questions are more useful to ask than others. We could technically have a survey that just asks people to write down their AGI timeline estimate and it would still be several pages long, or at least include a lot of description of different types of AGI and what kinds of probabilities people put on those timelines. Points for the effort in tackling such a difficult problem!
Full disclosure: I am in a relationship with the author of the post and thus have my own biases and also additional knowledge. This is mainly why I’m leaving a comment in the first place: if it were someone else doing something like this and most of the comments were on the critical side (note that it’s not bad to be on the critical side here, because often the criticism is necessary and important), I would be more likely to default to silence and hope that the criticism is received in a constructive way that ends up helping the project launch properly instead of the project dying out.
I think it’s important to recognise people for trying to do something difficult, and to support them so that, if what they are doing could be net positive, they end up doing it properly, unless we have good reason to think that doing a survey like this is actually bad, in which case the comments should say so more clearly.
This is a long survey. Can you provide some reason to think that filling out the survey is a good use of my time? For instance, you could tell me you plan to work to get 100s of people to fill it out and then publish the results. Or you could tell me more about what specific questions you hope this survey will resolve.
Updated, thank you. We were unsure whether it was best to keep it vague so as not to bias the responses, but it is true that some justification is needed, and providing the explanation is net positive.
Thanks. I’ve filled out the form, to reciprocate your efforts in response to my comment :)
I think the form could’ve been better; here are a few ways.
The question about occupation had a really strange set of answers, with separate choices for “software engineering” and “industry” and three choices for AI work; it felt pretty odd and I wasn’t sure which answer was right.
The question about the number of “books / articles / blogs / forum posts / documentaries / movies / podcasts” I have consumed was quite odd. Firstly, that question is basically something like “do you read LW + EA Forum a lot or not”, because forum posts are the only item on that list that can actually reach 200+; nobody has read that many books or documentaries on the topic because there aren’t that many. Secondly, I have no idea when I switched from having read “51-100” to having read “101-200”. I just put 201+ for all.
As a first pass I would probably have instead asked “how many hours have you spent engaging with content on this topic” (e.g. reading books / blog posts / having conversations about it etc). Or if you want more about the content then I’d have asked different questions about different content (“how much time have you spent reading LW + EAF” “how much time spent arguing about the topic with other people” etc). But that would have been more questions, and length is bad.
For the probabilities of different things question, your answers are “<1% 10% 25% 50% 75% 90% 99<% Pass”. Obviously most numbers are not there. I would’ve made each column a bucket, or else said “pick the nearest one”.
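A tiny sketch of the “pick the nearest one” behaviour being suggested, purely illustrative; treating “<1%” as 1% and the top option as 99% is an assumption made just for this sketch.

```python
# Map a respondent's actual probability estimate to the nearest offered option.
# Options mirror the ones quoted above (as fractions); the endpoint handling
# is an assumption for illustration only.
OPTIONS = [0.01, 0.10, 0.25, 0.50, 0.75, 0.90, 0.99]

def nearest_option(p: float) -> float:
    """Return the offered option closest to probability p (0..1)."""
    return min(OPTIONS, key=lambda option: abs(option - p))

assert nearest_option(0.52) == 0.50
assert nearest_option(0.002) == 0.01   # anything below "<1%" snaps to the lowest bucket
```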
It felt a bit “forced” to “rank” the AI issues from most to least important, given that the actual importance for me was something like 1000, 10, 10, 0.1, 0.1, 0.1, 0.1, for which 1, 2, 3, 4, 5, 6, 7 is a bit misleading.
Some of the other questions also landed a bit oddly to me, but this list is long enough.
I think I’d encourage future people who want their forms answered to make their forms better than this one. I’d like LW users to be able to expect forms they’re asked to fill out to be of a basic quality. User testing is key here: I always get 1-3 people in my target audience to fill out a form in its entirety before I publish it to a group. I suspect that wasn’t done here, or else I think a lot of users would have said the questions were a bit strange. But maybe it was.
I want to clearly say that the impulse here to get data and test your beliefs is pretty great, and I hope more people make surveys about such questions in the future.
Thank you for the valuable feedback!
We did indeed user test with 3-4 people within our target group and changed the survey quite a bit in response to their feedback. I do believe the “basic quality” bar to be met, but I appreciate your reciprocation either way.
Responses to your feedback, in order: The career options are multiple-choice, i.e. you may associate yourself with computer science and academia but not industry, which is valuable information in this case; it also means we need fewer combinations of occupational features. From our perspective, a count is easier to estimate, while time estimates (both of the future and of the past) are generally less accurate. In a similar vein, it’s fine that LessWrong blog posts each count as one, given their inherently more in-depth nature. I’ll add the “pick the nearest one”. The ranking was expected to feel like that, which is in accordance with our priors on the question. There are reasons for most of the other questions’ features as well.
And thank you, I believe the same. For how data-driven we believe ourselves to be, there is very little data from within the community, as far as I can see, and we should work harder to positively encourage more action in a similar vein.
Cool that you did user testing! I’ll leave this thread here.
After the fact, I realize that I might have replied too defensively to your feedback as a result of the tone, so sorry for that!
I sincerely do thank you for all the points, and we have updated the survey based on this feedback, i.e. 1) career = short text answer, 2) knowledge level = “which of these learning tasks have you completed”, and 4) the issues are now ranked independently.
EDIT: The survey appears to have changed out from under me, and I do not have a copy of the original. Some of the below may not be relevant any more. This adds an additional layer of concerns...
=====
Anecdotal selection-bias warning: I didn’t answer this survey because several of the mandatory questions are ambiguous or are questions I don’t have good answers for, so I got frustrated and closed it.
Some examples:
I have no clue when I first heard about AI safety.
This also has annoying questions as to what exactly is categorized as AI safety.
Does With Folded Hands count[1]?
In general, does science fiction count?
I don’t think there is a clean dividing line where on day X-1 I didn’t know about AI safety and on day X I did. Rather, it’s something that I picked up gradually over time.
(Ditto, I have very little idea as to what my initial thoughts were on the subject, for much the same reason.)
A 1-10 scale for the importance of an X-risk means very different things depending on how the scale is anchored.
A 1-10 scale for personal concern about an X-risk likewise means very different things depending on how the scale is anchored.
100 forum posts != 100 books.
...and what does e.g. ‘AI safety’ mean in this context? ‘Tangentially mentioning X’? ‘Exclusively about X’? Something else? Can a single entry be counted in multiple categories?
Are the probabilities contingent or independent?
If I thought, for instance[2], there was a 50% chance of AGI happening, and that, if AGI happened, there was a 50% probability it would be a net negative, should I select 25% or 50% for ‘AGI will lead to a net negative future’?
(I can see arguments for either interpretation; see the sketch after these examples.)
‘Rank the importance of the following AI issues’ has a bunch of arguably-overlapping categories.
If I thought that, for instance[2], there was a high chance of critical AI systems failure mostly due to AI-enabled cyber attacks, and a medium chance of AGI misalignment, should I rank this as:
critical AI systems failure > AI-enabled cyber attacks > AGI misalignment?
AI-enabled cyber attacks > critical AI systems failure > AGI misalignment?
AI-enabled cyber attacks > AGI misalignment > critical AI systems failure?
(I can see arguments for all of these rankings...)
This is more pertinent than it may appear. I read a lot of science fiction growing up, including With Folded Hands.
Numbers are made up to make simple examples. Don’t treat them as reflecting in any way, shape, or form my actual views on the subject.
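A quick sketch of the two readings in the conditional-vs-unconditional example above, using only the made-up numbers from that example (illustrative, not anyone’s actual views):

```python
# Two readings of "AGI will lead to a net negative future",
# using the made-up numbers from the example above.
p_agi = 0.5                   # P(AGI is developed)
p_negative_given_agi = 0.5    # P(net negative future | AGI is developed)

# Reading 1: unconditional probability of the scenario.
p_unconditional = p_agi * p_negative_given_agi   # = 0.25 -> select "25%"

# Reading 2: probability conditional on AGI being developed at all.
p_conditional = p_negative_given_agi             # = 0.50 -> select "50%"

print(p_unconditional, p_conditional)
```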
Thank you for your perspective on the content! The subjectivity is a design feature, since many of the questions concern the subjective “first exposure”, subjective concern, or subjective probabilities. This also relates to how hard it will be to communicate the same questions to public audiences without introducing a substantial amount of subjectivity. The importance of X-risks is not ranked on a 1-10 scale, so that should not be a problem. The “100 forum posts != 100 books” point is addressed above; it is meant to be a highly fuzzy measure, and by equating the different modalities we accept the fuzziness of the responses. The definitional subjectivity of “AI safety” is also fine in relation to the design thoughts above. The rankings are likewise up to the survey recipients themselves. I’ll add a note that the probabilities should be thought of as independent, each given the assumptions necessary for the scenario to happen.
Sorry to hear that you did not wish to answer, but thank you for presenting your arguments for why.
Hm. Did I give that (false) impression? If I didn’t wish to answer I wouldn’t have even opened the survey.
To be perfectly clear: I did wish to answer; the survey was constructed in such a way that I could not answer in a way that didn’t knowingly add likely-incorrect data[1].
=====
The subjectivity is a design feature
One person’s feature is another person’s selection bias[2].
(And I am the sort of person who will refuse rather than give likely-incorrect responses.)
I find myself surprisingly commonly[3] selection-biased against, and it is frustrating at best. This is just the latest example[4][5][6].
As in % of surveys that I open but end up dropping, give or take.
Privacy survey that required Skype.
Corporate survey where they described how they filtered out ‘bad’ data by asking synonymous questions and discarding inconsistent answers… only, when a survey asks multiple closely related questions, I have a tendency to notice and carefully examine the differences… and said questions weren’t actually quite synonymous (they never are).
Different corporate ‘anonymous’ HR-related survey that could only be taken on the corporate VPN.
As I also responded to Ben Pace, I believe I replied too defensively to both of your comments as a result of the tone, and I would like to rectify that mistake. So thank you sincerely for the feedback; I agree that we should neither exclude anyone unnecessarily nor have too much ambiguity in the answers we expect. We have updated the survey as a result, and again, excuse my response.
It is refreshing[1] to be on a forum where people change their mind.
Thank you!
A “few” comments on the revised form:
- Transformative AI (TAI) systems: AI systems that are able to qualitatively transform society in a way as large as the industrial revolution
This hinges very heavily on the definition of “AI systems”. The main issue is that the rise of computing (arguably) already has “qualitatively transform[ed] society in a way as large as the industrial revolution”.
It is very difficult to disentangle ‘TAI’ and ‘effective application of machine learning’ without explicit and careful definitions. I could play devil’s advocate and argue that Facebook/Google/Instagram/TikTok/etc’s use of machine learning already counts here.
Unfortunately, this means that the signal in the later odds questions gets drowned out. (For the sake of example: if I think something already existing counts under this definition with 40% probability, and I think there is a 20% probability of TAI within the next eighty years given that nothing existing meets said definition, my resulting probabilities, rounded to the options, would be 50% 50% 50%. Which looks like I thought TAI was either imminent or never coming.)
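To make the arithmetic in that parenthetical concrete, a small sketch with the same example numbers (illustrative only, not actual forecasts):

```python
# Why a plausible "TAI already exists" reading flattens the timeline answers.
# Numbers are the ones from the example above.
p_already_counts = 0.40           # P(something existing already meets the definition)
p_within_80y_otherwise = 0.20     # P(TAI within 80 years | nothing existing counts)

# The unconditional probability of "TAI by <horizon>" for any horizon up to
# 80 years lies between these two endpoints:
p_now = p_already_counts                                                      # 0.40
p_80y = p_already_counts + (1 - p_already_counts) * p_within_80y_otherwise    # 0.52

# Every horizon's probability sits in [0.40, 0.52], so each one rounds to the
# "50%" option, as if the respondent thought TAI was either imminent or never.
print(p_now, p_80y)
```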
I would suggest putting in an explicit row for ‘TAI already exists’. (Or maybe two: ‘TAI already publicly exists’ and ‘TAI already exists, but in secret’.)
AI safety is important, but my comparative advantage lies elsewhere
This answer implies both “AI safety is important” and “my comparative advantage lies elsewhere”. It is not clear what should happen if one agrees with one of these but not the other.
What do you think are the odds for the following scenarios? [...] TAI will be developed by [...] If AGI is developed today [...]
TAI is distinct from AGI. It is good that you mention the distinction; putting these in the same question can easily result in bias where people assume you mean TAI for all of the scenarios.
As an aside, my answers for these scenarios are very different for your definitions of TAI and AGI.
What do you think are the odds for the following scenarios? [...] If AGI is developed today, it would be net beneficial for humanity’s long-term future
P(insufficient safeguards | rushed development) > P(insufficient safeguards | slow development).
Ditto, P(insufficient safeguards | low tolerance for computational overhead) > P(insufficient safeguards | high tolerance for computational overhead).
Ditto, P(insufficient safeguards | AGI in final development in secret now) > P(insufficient safeguards | AGI in final development when I heard about it while it was under development).[2]
As a result, my answer to this question for the case where AGI is developed today, without my already knowing about it, is far more pessimistic than for the case where it is developed in, say, 80 years.
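A made-up-numbers sketch of how those conditional inequalities translate into a more pessimistic “developed today” answer (purely illustrative, not the commenter’s actual estimates):

```python
# Toy decomposition: P(net beneficial) =
#   P(net beneficial | insufficient safeguards) * P(insufficient | scenario)
# + P(net beneficial | sufficient safeguards)   * P(sufficient   | scenario).
# All numbers are made up.
p_beneficial_given_sufficient = 0.8
p_beneficial_given_insufficient = 0.1

# "Developed today" suggests rushed, likely-secret development, so insufficient
# safeguards are more probable than in a slower, public 80-year scenario.
p_insufficient_today = 0.9
p_insufficient_in_80y = 0.4

def p_beneficial(p_insufficient: float) -> float:
    return (p_beneficial_given_insufficient * p_insufficient
            + p_beneficial_given_sufficient * (1 - p_insufficient))

print(p_beneficial(p_insufficient_today))   # 0.17 -> far more pessimistic
print(p_beneficial(p_insufficient_in_80y))  # 0.52
```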
How concerned are you about each of these problems?
A problem that I am relatively concerned about that you don’t mention: adversarial attacks[3]. It’s related to, but distinct from, ‘Critical AI systems failure’ and ‘AI-enabled cyber attacks/misinformation’.
AI-enabled cyber attacks/misinformation
These are two separate things. It is unclear how to weight this item if you have different amounts of concern about the two.
Hasn’t changed. Still mandatory.
Not really the correct term, but I don’t know of a better one.
This is largely because I believe most of the groups that could be doing AI development in secret right now are likely to take fewer precautions than average. If you are a military developing AI, for instance, there are rather direct incentives not to add safeguards that prevent the AI from doing anything to harm any human.
This does somewhat conflate machine learning and AI, I am aware. That being said, most approaches towards AI I have seen are susceptible to adversarial attacks.
Haha, true, but the feature is luckily not a systematic blockade, more of an ontological one. And sorry for misinterpreting! On another note, I really do appreciate the feedback, and With Folded Hands definitely seems within scope for this sort of answer; great book.
Some “I don’t remember” options would be useful.
It’s unclear what is meant by a “net negative” and a “net positive” future. Compared to what? Compared to a vacuum? Or compared to the current world?
It is written as independent probabilities, so it’s just your expectation, given that governments develop AGI, that AGI is a net positive. So it is in addition to the “AGI will lead to a net positive future” question. You would expect the “large companies” and “nation state” answers to average out to the “AGI will lead...” answer.
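One way to make that “averaging” relationship concrete is via the law of total probability; a small sketch with made-up numbers (hypothetical weights and answers, not anyone’s actual estimates):

```python
# If the developer-specific answers are conditional probabilities, the overall
# "AGI will lead to a net positive future" answer is their weighted average,
# weighted by who you expect to develop AGI. All numbers are made up.
p_developer = {"large company": 0.6, "nation state": 0.3, "other": 0.1}
p_positive_given_developer = {"large company": 0.5, "nation state": 0.3, "other": 0.4}

p_positive_overall = sum(
    p_developer[who] * p_positive_given_developer[who] for who in p_developer
)
print(p_positive_overall)  # 0.43, which lies between the developer-specific answers
```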
I don’t think you’re answering Vanessa’s question. Before we even get to the government question, the earlier question is “AGI will lead to a net negative future”. What is a “net negative future”? For example, suppose the future is an empty universe without life or consciousness. Is that net negative? It’s not like anyone’s suffering, right? So maybe we should say it’s neutral?
Anyway, I’m interpreting “net positive future” and “net negative future” as: “…compared to a hypothetical future in which AGI is forever impossible to build, for some technical reason”.
True, sorry about that! And your interpretation is our interpretation as well; however, the survey has been updated to a different framing now.
If AGI is developed today, it would be net beneficial for humanity’s long-term future
This seems like a request to condition on an event of infinitesimal probability. I have no idea how to interpret this question. I feel like you’re not going for “if there is some secret government project to make an AGI, how good do you think they are at aligning it”?
Is the question “Which occupations do you identify with?” asking where I work, or about my identity? I might not identify with my occupation.
Basically, the original intention was “identify”, because you might work as a web developer but identify with, and do hobby work as, an academic. But it’s a bit arbitrary, so we updated it now.