“AI Safety” has been popularised by LLM companies to mean basically content restrictions
We’re working on a descriptive lit review of research labeled as “AI safety,” and it seems to us the issue isn’t that the very horrible X-risk stuff is ignored, but that everything is “lumped in” like you said.
To be clearer, I should've said it seems "lumped in" in the research; but for the public, who don't know much about the X-risk stuff, "safety" means "content restrictions and maybe data protection"
I’m not sure how true it is that