I think this is a replay of the contrast I mentioned here of "static" vs "dynamic" conceptions of AI. To the author of the original post, AI is an existing technology that has taken a particular shape, so it's important to ask what harms that shape might cause in society. To AI safety folk, the shape is an intermediate stage, rapidly changing into a world-ending superbeing, so asking about present harms (or, indeed, being overly worried about chatbot misalignment) is a distraction from the "core issue".
dunno, I see two confused opinions; maybe it would help if you explained what exactly the part is that made you interested.
the author is well respected and isn't just saying this for no reason, so working through the confusion could be useful. I'm sharing it because it seems to make mistakes. author is https://www2.eecs.berkeley.edu/Faculty/Homepages/brecht.html