I believe you when you say that people output their true beliefs and share what they’re curious about w/ the chatbot. But I don’t think it writes as if it’s trying to understand what I’m saying, which implies a lack of curiosity on the chatbot’s part. Instead, it seems quite keen to explain/convince someone of a particular argument, which is one of the basins chatbots naturally fall into. (Though I do note that it is quite skilful in its attempts to explain/convince me when I talk to it. It certainly doesn’t just regurgitate the sources.) This is often useful, but it’s not always the right approach.
Yeah, I see why collecting personal info is important. It is legitimately useful. I’m just pointing out the personal aversion I felt at the trivial inconvenience of getting started w/ the chatbot, and my reluctance to share personal info.
(I think our bot has improved a lot at answering unusual questions. Even more so on the beta version: https://chat.stampy.ai/playground. Though I think the style of the answers isn’t optimal for the average person. Its output is too dense compared to your bot.)
I wonder if getting the chatbot to roleplay a survey taker, and having it categorize responses or collect them, would help?
With the goal of actually trying to use the chatbot to better understand what views are common and what people’s objections are. Don’t just try to make the chatbot curious; figure out what would motivate true curiosity.
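(A minimal sketch of what that survey-taker roleplay might look like, assuming an OpenAI-style chat API; the model name, category list, and function name are placeholders for illustration, not anything the bot actually uses:)

```python
# Sketch: have the model act as a survey taker and tag the user's main objection,
# so common views/objections can be counted across conversations.
from openai import OpenAI

client = OpenAI()

# Illustrative categories only; the real taxonomy would come from the team.
CATEGORIES = [
    "capabilities skepticism",
    "alignment-by-default optimism",
    "near-term concerns (jobs, bias, misuse)",
    "fatalism / nothing can be done",
    "other",
]

def categorize_objection(user_message: str) -> str:
    """Ask the model, in a survey-taker role, to pick one category for the message."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; swap in whatever the bot already uses
        messages=[
            {
                "role": "system",
                "content": (
                    "You are collecting survey responses about AI risk. "
                    "Classify the user's main objection into exactly one of: "
                    + ", ".join(CATEGORIES)
                    + ". Reply with the category name only."
                ),
            },
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content.strip()
```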
When someone is already reading the chatbot’s response, we’re slightly more interested in getting info on what works to convince them than on what they’re curious about.
Ideally, the journey that leads to the interaction makes them say what they’re curious about in the initial message.
Using chatbots to simulate audiences is a good idea. But I’m not sure what that’s got to do with motivating true curiosity?