I’m so torn about “for like 75% or maybe 99% of humans, the chatbot saying ‘are you sure you want to say that?’ is probably legit an improvement. But… it just feels so slippery-slope-orwellian to me.” (In particular, if you build that feature, you need to be confident not only that the current leadership of your company won’t abuse it, but that all future leadership won’t either, and that the AI company you’re renting models from won’t enshittify in a way you don’t notice.)
(I am saying this as, like, a forum-maintainer who is actually taking the idea seriously and trying to figure out how to get the good things from the idea, not just randomly dunking on it. Interested in more variants or takes)
to be clear I explicitly decided not to think too hard about this kind of issue when brainstorming. I think the long run solution is probably something like an elected governance scheme that lets the users control what model to use. maybe make it bicameral to split power between users and funders. but my main motivation for this brainstorming was to think of ideas I could implement in a weekend for shits and giggles to see how well they work irl
I lean towards not using models directly as “conversation participants”, which feels too likely to spiral out of control, but instead do things like have white-listed specific popups that they decide when to trigger.
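To make the “whitelisted popups” idea concrete, here’s a minimal sketch of how it might look. Everything in it (the `PopupId` labels, the copy, the function names) is hypothetical, not an actual implementation: the key property is that the model only emits a label, and anything outside the whitelist maps to showing nothing.

```python
# Sketch: the model never writes text that users see. It only picks a label
# from a fixed menu of popups (or stays silent). All names here are made up.
from enum import Enum
from typing import Optional


class PopupId(Enum):
    ARE_YOU_SURE = "are_you_sure"   # "Are you sure you want to say that?"
    COOL_DOWN = "cool_down"         # suggest stepping away before posting
    CITE_SOURCE = "cite_source"     # nudge toward linking a source

# Human-written copy, fixed at deploy time -- the model can't edit it.
POPUP_COPY = {
    PopupId.ARE_YOU_SURE: "Are you sure you want to say that?",
    PopupId.COOL_DOWN: "This thread is heated. Want to come back in an hour?",
    PopupId.CITE_SOURCE: "Readers may want a source for this claim.",
}


def pick_popup(model_output: str) -> Optional[str]:
    """Map a raw model label to whitelisted popup copy, or nothing.

    The model (not shown) would be prompted to answer with exactly one
    label. Anything outside the whitelist -- including injected or
    free-form text -- is dropped, so the worst-case failure mode is a
    silently suppressed popup, not an off-script model message.
    """
    try:
        return POPUP_COPY[PopupId(model_output.strip())]
    except ValueError:
        return None  # unrecognized label: show nothing
```

The design choice this encodes: an enshittified or manipulated model can at worst mis-trigger one of a few human-reviewed nudges, never speak in its own voice to users.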