But even just stylistically, it’s fairly obvious that journalists love this narrative. There’s nothing Western readers love more than a spooky story about technology gone awry or corrupting people; it reliably rakes in the clicks.
Also related is the way positive reports get very little attention by comparison. E.g., the claim that chatbots have apparently encouraged some people to commit suicide gets brought up relatively frequently, but nobody ever mentions the peer-reviewed study in which 3% of the interviewed chatbot users spontaneously reported that the bot had prevented them from attempting suicide.
Fair.
Note that they used GPT-3, which wasn’t trained with RLHF (right?)
That’s a good point. The study says that its data was collected “in late 2021”. Instruction-following GPT-3 became OpenAI’s default model in January 2022, though the same announcement also mentions that the models “have been in beta on the API for more than a year”. I don’t know whether Replika had used those beta models or not.
That said, even though the instruct-GPTs technically were trained with RLHF, the nature of that RLHF was quite different: they weren’t even chat models, so they weren’t trained for anything like continuing an ongoing conversation.
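To make the “not chat models” point concrete, here’s a minimal sketch of the interface difference, using the legacy OpenAI Python SDK (openai<1.0). The model names and prompt text are illustrative assumptions on my part, not what Replika actually ran:

```python
# Minimal sketch, assuming the legacy OpenAI Python SDK (openai<1.0)
# with openai.api_key already configured; model names are illustrative.
import openai

# Instruct-GPT era (e.g. text-davinci-002): one prompt string in, one
# completion out. There is no built-in notion of conversational turns;
# "continuing a chat" means manually packing the transcript into the prompt.
completion = openai.Completion.create(
    model="text-davinci-002",
    prompt="User: I've been feeling down lately.\nBot:",
    max_tokens=100,
)
print(completion.choices[0].text)

# Chat era (e.g. gpt-3.5-turbo): the API itself is structured around an
# ongoing conversation, passed as a list of role-tagged messages.
chat = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "I've been feeling down lately."}],
)
print(chat.choices[0].message.content)
```

So the RLHF that instruct-GPTs got was aimed at the first kind of usage, single prompt-to-completion turns, rather than at anything conversational.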