I don’t think it’s a good idea to frame this as “AI ethicists vs. AI notkilleveryoneists”, as if anyone who cares about issues related to the development of powerful AI has to choose between caring only about existential risk or only about other issues. This framing unnecessarily excludes AI ethicists from the alignment field, which is unfortunate and counterproductive, since they’re otherwise aligned with the broader idea that “AI is going to be a massive force for societal change and we should make sure it goes well”.
Suggestion: instead of addressing “AI ethicists” or “AI ethicists of the DAIR / Stochastic Parrots school of thought”, why not address “AI X-risk skeptics”?
I’ve seen plenty of AI x-risk skeptics present their object-level arguments, and I’m not interested in paying out a bounty for material I already have. I’m most interested in the arguments from this specific school of thought, and that’s why I’m offering the terms I offer.
I see. Maybe you could address it to “DAIR and related researchers”? I know that’s a clunkier name for the group you’re trying to describe, but I don’t think more succinct wording is worth fostering a tribal dynamic between researchers who care about X-risk and S-risk and those who care about less extreme risks.