random brainstorming ideas for things the ideal sane discourse encouraging social media platform would have:
have an LM look at the comment you’re writing and give real-time feedback on things like “are you sure you want to say that? people will interpret that as an attack and become more defensive, so your point will not be heard”. addendum: if it notices you’re really fuming and flame warring, literally gray out the text box for 2 minutes with a message like “take a deep breath. go for a walk. yelling never changes minds”
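a minimal sketch of how that gate might work, with a stand-in `classify` callable where the real LM call would go (all names and the lockout length are illustrative, not a spec):

```python
import time

def review_draft(draft, classify, cooldowns, user_id, lockout_secs=120):
    """Gate a draft comment through an LM tone check.

    `classify` is any callable mapping text to "ok" / "hostile" /
    "flaming" (an LM call in practice). `cooldowns` maps user_id ->
    unix time when that user's textbox unlocks again.
    """
    now = time.time()
    if cooldowns.get(user_id, 0) > now:
        return {"allowed": False,
                "message": "take a deep breath. go for a walk."}
    verdict = classify(draft)
    if verdict == "flaming":
        cooldowns[user_id] = now + lockout_secs  # gray out the text box
        return {"allowed": False,
                "message": "take a deep breath. go for a walk. "
                           "yelling never changes minds"}
    if verdict == "hostile":
        # soft warning: the comment still goes through if they insist
        return {"allowed": True,
                "message": "are you sure you want to say that? people "
                           "will read it as an attack and tune out."}
    return {"allowed": True, "message": None}
```

note the design choice: "hostile" only warns, "flaming" locks; the classifier never blocks content outright.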
have some threaded chat component bolted on (I have takes on the best threading system). the big problem is that posts are fundamentally too high effort to be a way to think; people want to talk over chat (see the success of discord). dialogues were ok but still too high effort, and nobody wants to read the transcript. one stupid idea is to have an LM look at the transcript, gently nudge people to write things up if the convo is interesting, and add UI affordances to make that low friction (e.g. a single button that instantly creates a new post, automatically invites everyone from the convo to edit, and auto-populates the headers)
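the one-button flow could start as simple as this sketch (in practice an LM would propose a real title and summary; the field names here are made up):

```python
def draft_post_from_chat(transcript):
    """Turn a chat transcript into a pre-populated collaborative draft.

    `transcript` is a list of {"author": ..., "text": ...} dicts.
    The headers are stubbed so the one-button flow is concrete;
    an LM pass would replace the title/body with a proper write-up seed.
    """
    authors = []
    for msg in transcript:
        if msg["author"] not in authors:
            authors.append(msg["author"])  # preserve speaking order
    return {
        "title": f"Write-up: conversation between {', '.join(authors)}",
        "editors": authors,  # everyone from the convo invited to edit
        "body": "\n\n".join(f'{m["author"]}: {m["text"]}' for m in transcript),
        "status": "draft",
    }
```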
inspired by the court system, the most autistically rule-following part of the US government: have explicit trusted judges who can be summoned to adjudicate claims, or meta-level “is this valid arguing” claims. top-level judges are selected for fixed terms by a weighted sortition scheme that uses some game-theoretic / Schelling-point stuff to discourage partisanship
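the sortition piece might look like this sketch, where the weights are some trust score and the seed comes from a public, auditable source (both assumptions; this is not a worked-out mechanism):

```python
import random

def select_judges(candidates, weights, n, seed):
    """Weighted sortition without replacement.

    Each candidate's chance of being drawn is proportional to their
    weight (e.g. a trust score), but the randomness keeps any faction
    from reliably capturing the bench. The seed would come from a
    public, unpredictable source (e.g. a commit-reveal lottery) so
    anyone can re-run and audit the draw.
    """
    rng = random.Random(seed)
    pool = dict(zip(candidates, weights))
    chosen = []
    for _ in range(min(n, len(pool))):
        names = list(pool)
        pick = rng.choices(names, weights=[pool[c] for c in names], k=1)[0]
        chosen.append(pick)
        del pool[pick]  # without replacement: no double seats
    return chosen
```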
a recommendation system where you can say what kind of stuff you want recommended in a text box in the settings. also, when people click the “good/bad rec” buttons on the home page, try to notice patterns, occasionally ask the user whether a specific noticed pattern is correct, and ask whether they want it appended to their rec preferences
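a crude sketch of the pattern-noticing step, assuming recs carry tags and clicks are logged as (tags, liked) pairs (the thresholds are illustrative knobs):

```python
from collections import defaultdict

def notice_patterns(feedback, min_votes=5, threshold=0.8):
    """Scan good/bad-rec clicks for tag-level patterns worth confirming.

    `feedback` is a list of (tags, liked) pairs, one per rated item.
    Returns candidate statements to surface to the user ("it looks
    like you dislike X -- append that to your rec preferences?").
    """
    good = defaultdict(int)
    total = defaultdict(int)
    for tags, liked in feedback:
        for tag in tags:
            total[tag] += 1
            if liked:
                good[tag] += 1
    candidates = []
    for tag, n in total.items():
        if n < min_votes:
            continue  # not enough evidence to bother the user
        ratio = good[tag] / n
        if ratio >= threshold:
            candidates.append(f"you seem to like '{tag}' posts")
        elif ratio <= 1 - threshold:
            candidates.append(f"you seem to dislike '{tag}' posts")
    return candidates
```

the key design point is that nothing gets appended to preferences silently; the pattern is only ever shown as a question.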
an opt-in anti-scrolling pop-up that asks you every few days what the highest-value interaction you had recently on the site was, or whether you’re just mindlessly scrolling. gently reminds you to take a break if you can’t come up with a good example of a good interaction.
argument mapping is really cool imo, but I think most attempts fail because they try to make arguments super structured and legible. I think a less structured version would be valuable: let people vote on how much various posts respond to other posts, how well they address the key points, and which posts overlap in arguments. you’d see clusters with (human-written and vote-selected) summaries of various clusters, and then links of various strengths between clusters. I think this would greatly help epistemics by avoiding infinite argument retreading
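the clustering could start as crude as this sketch: treat vote-averaged overlap scores as edge weights and merge anything above a threshold with union-find (the score format and threshold are assumptions; weaker scores would render as inter-cluster links of varying strength rather than merges):

```python
def cluster_posts(posts, overlap_votes, threshold=0.6):
    """Group posts into argument clusters from crowd-voted overlap.

    `overlap_votes` maps (post_a, post_b) -> mean vote in [0, 1] for
    "these two posts are making the same argument". Pairs above the
    threshold get merged via union-find.
    """
    parent = {p: p for p in posts}

    def find(p):
        while parent[p] != p:
            parent[p] = parent[parent[p]]  # path compression
            p = parent[p]
        return p

    for (a, b), score in overlap_votes.items():
        if score >= threshold:
            parent[find(a)] = find(b)

    clusters = {}
    for p in posts:
        clusters.setdefault(find(p), []).append(p)
    return sorted(clusters.values(), key=len, reverse=True)
```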
> things the ideal sane discourse encouraging social media platform would have: [...]
>
> an opt-in anti-scrolling pop-up that asks you every few days what the highest-value interaction you had recently on the site was, or whether you’re just mindlessly scrolling. gently reminds you to take a break if you can’t come up with a good example of a good interaction.
Cynical thought: these two points might be incompatible. Social media thrives on network effects, and one requirement for those is that the website be addicting or attention-grabbing. Anti-addictiveness designs are nice in principle, but then your prospective users just spend their time on something that’s more addicting instead (whether other websites or Netflix or whatever), and thus can’t benefit from the other ways in which your site is better.
I’m so torn about this one: for like 75% or maybe 99% of humans, the chatbot saying “are you sure you want to say that?” is probably legit an improvement. But… it just feels so slippery-slope-orwellian to me. (In particular, if you build that feature, you need to be confident not only that the current leadership of your company won’t abuse it, but that all future leadership won’t either, and that the AI company you’re renting models from won’t enshittify in a way you don’t notice)
(I am saying this as, like, a forum-maintainer who is actually taking the idea seriously and trying to figure out how to get the good things from the idea, not just randomly dunking on it. Interested in more variants or takes)
to be clear, I explicitly decided not to think too hard about this kind of issue while brainstorming. I think the long-run solution is probably something like an elected governance scheme that lets the users control which model is used. maybe make it bicameral to split power between users and funders. but my main motivation for this brainstorming was to come up with ideas I could implement in a weekend, for shits and giggles, to see how well they work irl
I lean towards not using models directly as “conversation participants”, which feels too likely to spiral out of control, and instead doing things like white-listing specific popups that the model decides when to trigger.
IMO, part of the solution to endless scrolling is to not implement the feature where you can endlessly scroll. Instead, show an explicit next-page button after some moderate amount of scrolling. (Having the pop-up is also good; you could even let people configure the pop-up to be more frequent, etc.)
there’s a broader category of things which are not literally scrolling but are still time-wasting / consuming info not to enrich oneself but to push the dopamine button, and I think even removing the scroll doesn’t fix this (my phone is intentionally quite high-friction to use and I still fail to stay off it)
I wonder if anyone has ballpark figures for how much the LLM, used for tone-warnings and light moderation, would cost? I am uncertain what grade of model would be necessary for acceptable results, though I’d hazard a guess that Gemini 2.5 Flash would be adequate.
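For a back-of-envelope answer, the arithmetic looks like this. The token counts and per-million-token prices below are placeholder assumptions, not quoted rates; check the current price sheet for whatever model you actually pick:

```python
def moderation_cost_per_month(comments_per_day,
                              tokens_in_per_comment=600,
                              tokens_out_per_comment=50,
                              usd_per_m_input=0.30,
                              usd_per_m_output=2.50):
    """Back-of-envelope monthly cost of running every comment past an LLM.

    All defaults are illustrative assumptions: a comment plus context
    of ~600 input tokens, a short ~50-token verdict, and made-up
    flash-tier prices per million tokens.
    """
    daily = (comments_per_day * tokens_in_per_comment / 1e6 * usd_per_m_input
             + comments_per_day * tokens_out_per_comment / 1e6 * usd_per_m_output)
    return round(daily * 30, 2)
```

under these assumptions, even a thousand comments a day lands in the single-digit-dollars-per-month range, which supports the “not too bad” intuition below.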
Disclosure: I’m an admin of themotte.org, and an unusually AI-philic one. I’d previously floated the idea of fine-tuning an LLM on records of previous moderator interactions and the associated parent comments (both good and bad; we mods go out of our way to recognize and reward high-quality posts, after user reports). Our core thesis is to be a place for polite and thoughtful discussion of contentious topics, and we necessarily have rather subjective moderation guidelines. (People can be very persistent and inventive about sticking to the rules as written while violating their spirit)
Even two years ago, when I floated the idea, I think it would have worked okay, and these days I think you could get away without fine-tuning at all. I suspect the biggest hurdle would be models throwing a fit over controversial topics/views, even when the manner and phrasing were within discussion norms. Sadly, now as then, the core user base is too polarized to support such an endeavor. I’d still like to see it put into use.
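A sketch of how those records might be shaped into fine-tuning data, assuming chat-style examples in JSONL; the field names ("comment", "mod_action") are illustrative and would need adapting to however the logs are actually stored:

```python
import json

def build_finetune_examples(records):
    """Convert moderation records into chat-style fine-tuning examples.

    `records` is a list of dicts with "comment" (the user's text) and
    "mod_action" (the moderator's written response). Quality-post
    recognitions train the model just as much as warnings do, so both
    kinds of record go in.
    """
    examples = []
    for r in records:
        examples.append({"messages": [
            {"role": "system",
             "content": "You are a moderator enforcing polite, thoughtful "
                        "discussion of contentious topics."},
            {"role": "user", "content": r["comment"]},
            {"role": "assistant", "content": r["mod_action"]},
        ]})
    return "\n".join(json.dumps(e) for e in examples)  # one JSON object per line
```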
>argument mapping is really cool imo but I think most attempts fail because they try to make arguments super structured and legible. I think a less structured version that lets you vote on how much you think various posts respond to other posts and how well you think it addresses the key points and which posts overlap in arguments would be valuable. like you’d see clusters with (human written and vote selected) summaries of various clusters, and then links of various strengths inter cluster. I think this would greatly help epistemics by avoiding infinite argument retreading
Another feature I might float is granular voting. Say there’s a comment where I agree with 90% of the content but vehemently disagree with the rest. Should I upvote, and thereby unavoidably endorse the bit I don’t want to? Should I make a comment stating that I agree with this specific portion but not that one?
What if users could just select snippets of a comment and upvote/downvote them? We could even do the HackerNews thing and change the opacity of the text to show how popular particular passages were.
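A sketch of the opacity mapping, with an assumed floor so heavily downvoted passages fade but never become unreadable:

```python
def passage_opacity(votes, floor=0.35):
    """Map per-passage vote tallies to text opacity, HackerNews-style.

    `votes` maps passage index -> (upvotes, downvotes). Passages at or
    above net-zero stay fully opaque; disliked ones fade linearly
    toward the `floor` opacity as their approval fraction drops.
    """
    opacities = {}
    for idx, (up, down) in votes.items():
        total = up + down
        if total == 0 or up >= down:
            opacities[idx] = 1.0
        else:
            approval = up / total  # fraction in [0, 0.5)
            opacities[idx] = round(floor + (1.0 - floor) * approval * 2, 2)
    return opacities
```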
the LLM cost should not be too bad. it would mostly be looking at vague vibes rather than doing lots of reasoning about the content. I trust e.g. AI summaries vastly less, because those can require actual intelligence.
I’m happy to fund this a moderate amount for the MVP. I think it would be cool if this existed.
I don’t really want to deal with all the problems that come with modifying something that already works for other people, at least not before we’re confident the ideas are good. this points towards building a new thing. fwiw, if building a new thing, I think the chat part would be the most interesting/valuable standalone (and I think it’s good to have platforms grow out of a simple core rather than do everything at once)
One consideration re: the tone-warning LLMs: be aware that this means you’re pseudo-publishing someone’s comment before they meant to. Not publishing in a discoverable sense, but logging it to a database somewhere (one probably controlled by the LLM provider), and depending on the type of writing, this might affect people’s willingness to actually write stuff
This is fixable by a) hosting your own model and double-checking that the code does not log incoming content in any way, or b) potentially, running the model client-side (over time, it may shrink to a manageable size).
> I have takes on best threading system

I wish to hear these takes.
I’d be down to try something along those lines.