[Question] ChatGPT “Writing” News Stories for The Guardian?

Here’s the link: https://www.theguardian.com/commentisfree/2023/apr/06/ai-chatgpt-guardian-technology-risks-fake-article

The issue seems to be that ChatGPT fabricated a news story and attributed it to a journalist who works for The Guardian. I’m not quite sure how the researcher who was using ChatGPT was posing their questions, but I suspect the prompting might have something to do with the outcome.

It seems an odd result, and for those in the industry who have intuitions about it, or even some direct knowledge of the case, it would be interesting to hear thoughts on the situation.

That said, here is what I’m wondering: even if this type of result (inserting fake data into real data) cannot be prevented at the alignment level, would some type of cryptographic signature or blockchain for the publisher (or even for private posters on the internet) be a solution?
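To make the signature idea concrete, here is a minimal sketch of how tamper evidence could work. It uses an HMAC from Python’s standard library purely as a stand-in: a real publisher would use an asymmetric scheme (e.g. Ed25519), so that anyone can verify an article against the publisher’s public key without holding the secret. The key name and functions below are hypothetical illustrations, not any existing publisher system.

```python
import hashlib
import hmac

# Hypothetical publisher signing key. In a real deployment this would be
# the private half of an asymmetric keypair, never a shared secret.
PUBLISHER_KEY = b"guardian-demo-secret"

def sign_article(body: str) -> str:
    """Return a hex tag binding the article text to the publisher's key."""
    return hmac.new(PUBLISHER_KEY, body.encode("utf-8"), hashlib.sha256).hexdigest()

def verify_article(body: str, tag: str) -> bool:
    """True iff the tag matches the text, i.e. nothing was altered or fabricated."""
    return hmac.compare_digest(sign_article(body), tag)

# A genuine article verifies; a fabricated one attributed to the same
# publisher fails, because no valid tag exists for text never signed.
article = "Real story as actually published."
tag = sign_article(article)
print(verify_article(article, tag))            # genuine text: True
print(verify_article("Fabricated story.", tag))  # forged text: False
```

The point is that a chatbot could still invent an article and a byline, but it could not produce a valid signature for text the publisher never signed, so the fabrication becomes detectable rather than prevented.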

That leads me to a follow-on question. In the alignment world, has any work been done (none, some, or boatloads and it’s already covered) on identifying which alignment issues can be mitigated by some reaction function to the misalignment, and which cannot be mitigated even while the misalignment persists?