I whole-heartedly agree with you, but I don’t have anything better than “tell everyone you know about it.” On that topic, what do you think is the best link to send to people? I use this, but it’s not ideal.
This is the exact topic I’ve been thinking a lot about — thanks for the link! I’ve written my own essay for a general audience, but it seems ineffective. I knew about the Wait But Why blog post, but there must be better approaches possible. What I find hard to understand is that there have been multiple best-selling books on the topic, yet no general alarm has been raised and the topic is not discussed in, e.g., politics. I would be interested in why this paradox exists, and also how to fix it.
Is there any more information on LessWrong about reaching out to a general audience? I haven’t been able to find any using the search function etc.
The reason I’m interested is twofold:
1) If we convince a general audience that we face an important and understudied issue, I expect them to fund research into it several orders of magnitude more generously, which should help enormously in reducing the x-risk (I’m not working in the field myself).
2) If we convince a general audience that we face an important and understudied issue, they may convince governing bodies to regulate, which I think would be wise.
I’ve heard the following counterarguments before, but didn’t find them convincing. If someone wants to convince me that convincing the public about AGI risk is not a good idea, these are the places to start:
1) General audiences might start pressing for regulation, which could delay AI research in general and/or AGI. That’s true and indeed a real problem, since the potential positive effects of AI/AGI (which may be enormous) would be postponed. However, in my opinion the argument is not sufficient because:
A) AGI existential risk is so high and important that reducing it is more important than AI/AGI delay, and
B) Increased public knowledge of AGI will also increase general interest in AI, and this effect could outweigh the delay that regulation might cause.
2) AGI worries from the general public could make AI researchers more secretive and less cooperative with AI safety research. My problem with this argument is the alternative: I think that currently, without e.g. politicians discussing this issue, the investments in AI safety are far too small to have a realistic shot at actually solving the issue in time. Finally, AI safety may well not be solvable at all, in which case regulation becomes even more important.
It would be super to read your views and get more information!