In this particular case, I’m not sure the relevant context was directly present in the thread, as opposed to being part of the background knowledge that people talking about AI alignment are supposed to have. In particular, “AI behavior is discovered rather than programmed”. I don’t think that was stated directly anywhere in the thread; rather, it’s something everyone reading AI-alignment-researcher tweets would typically know, but which is less-known when the tweet is transported out of that bubble.
I don’t see the awfulness, although tbh I have not read the original reactions. If you are not desensitized to what this community would consider irresponsible AI development speed, responding with “You are building and releasing an AI that can do THAT?!” is rather understandable. It is unfortunate that it is the safety-testing people who get the flak (if that impression is accurate), though.
Twitter users are awful.
I’m wondering if there are UI improvements Twitter could make so that context from earlier in a thread is more automatically carried over.