I wonder if there are people/groups who (implicitly) do the same with ChatGPT? If the chatbot says something, it is considered truth unless someone explicitly disproves it. (I think I have read stories hinting at this behavior online, and have also met people IRL who seemed a bit too eager to take the LLM output at face value.)
I think so! Actually, my thought to post about this was prompted by a recent tweet from Kelsey Piper about exactly that:
Never thought I’d become a ‘take your relationship problems to ChatGPT’ person but when the 8yo and I have an argument it actually works really well to mutually agree on an account of events for Claude and then ask for its opinion
Not quite the same thing, but related.