That’s not something the average person will think upon hearing the term, especially since “AGI” tends to connote something very intelligent. I don’t think it is a strong reason not to use it.
Actually, I think people often will think that when they hear the term. “Safety research” implies a focus on preventing a system from causing bad outcomes while achieving its goal, not on getting the system to achieve its goal in the first place. So “AGI Safety” sounds like research on how to prevent a not-necessarily-friendly AGI from becoming powerful enough to be dangerous, especially to someone who does not see an intelligence explosion as the automatic outcome of a sufficiently intelligent AI.