I have this sort of approach as one of my top-3 strategies I’m considering, but one thing I wanna flag is that “AI for [epistemics/societal uplift]” seems to be prematurely focusing on a particular tool for the job.
The broader picture here is “tech for thinking/coordination”, or “good civic infrastructure”. See Sarah Constantin’s Neutrality and Tech for Thinking for some food for thought.
Note that X Community Notes are probably the most successful recent thing in this category, and while they are indeed “AI” they aren’t what I assume most people are thinking of when they hear “AI for epistemics.” Dumb algorithms doing the obvious things can be part of the puzzle.
Yeah strongly agree with the flag. In my mind one of the big things missing here is a true name for the direction, which will indeed likely involve a lot of non-LM stuff, even if LMs are yielding a lot of the unexpected affordances.
One of the places I most differ from the ‘tech for thinking’ picture is that I think the best version of this might need to involve giving people some kinds of direct influence and power, rather than mere(!) reasoning and coordination aids. But I’m pretty confused about how true/central that is, or how to fold it in.
To redteam, and in brief: what’s the tale of why this won’t have led to a few very coordinated, very internally peaceful, mostly epistemically clean factions, each of which is something of an echo chamber, almost all of which are wrong about something crucial (or at least importantly disagree on frames), and which are at each other’s throats?