so I saw this post about what AI safety is doing wrong (they basically claim mental health advice should be treated as similarly critical to CBRN). I disagree with some of the mudslinging, but it’s quite understandable given the stakes.
someone else I saw said this, so the sentiment isn’t universal.
idk, just thought someone should post it. react “typo” if you think i should include titles for the links; I currently lean towards anti-clickbait though. edit: done
I think this is a replay of the contrast I mentioned here between “static” and “dynamic” conceptions of AI. To the author of the original post, AI is an existing technology that has taken a particular shape, so it’s important to ask what harms that shape might cause in society. To AI safety folk, the shape is an intermediate stage, rapidly changing into a world-ending superbeing, so asking about present harms (or, indeed, being overly worried about chatbot misalignment) is a distraction from the “core issue”.
Typo react from me. I think you should call your links something informative. If you think the title of the post is clickbait, you could re-title it something better.
Now I have to click to find out what the link is even about, which is also clickbait-y.
dunno, I see two confused opinions; maybe explain what exactly the part is that made you interested.
the author is well respected and isn’t just saying this for no reason, so working through the confusion could be useful. I share it because it seems to make mistakes. the author is https://www2.eecs.berkeley.edu/Faculty/Homepages/brecht.html