Amen to this. Indeed, I fear that an actual majority of “people out there” may have no idea that “semantics” means anything other than “pointless pedantry”.
Actually, though semantics is perhaps the hardest hit, this is a general phenomenon, afflicting many unfortunate disciplines. You might call it the Argument from Circumscription of Subject Matter, or the “…But That Would Get Us Into X” Fallacy. Essentially, it goes like this: “that line of inquiry can’t possibly be relevant, because it comes under the heading of a different academic discipline from the one our discussion falls under”. It is particularly common (and insidious) when the “other” discipline has some kind of “bad” reputation for some reason (as in the case of semantics, which is evidently regarded as “pointless pedantry”).
As a fictional (yet particularly illustrative) example of this fallacy, one could imagine EY and his colleagues at SIAI a decade ago saying “Well, we could worry about making sure future AI is Friendly, but… that would get us into philosophy [which is notoriously difficult, and not techno-programmer-sounding, so we won’t].”
To which the response, of course, is: “So it would. What’s your point?”