Randomly (or quite deliberately if I’m being honest) I’ve been working on creating a more general algorithmic feed for a while, so I’ve got some context that might be interesting.
Most of it is in a GitHub repo of mine, where I’ve been exploring different ways of looking at recommendation algorithms and thinking about how to make them more useful to people.
Some of my thoughts on the codebase are best represented as part of the codebase itself, and I cba to go through it fully and write a long comment, so I created an LLM block below.
My actual human take is that it seems genuinely hard to create a nice algorithmic feed tool that works universally across people. My initial attempts at training an algorithm for this didn’t work; my next step is a RAG-style text-embedding setup, the idea being that since I already have a bunch of preference data locally, it should be easy to build on that.
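To make the RAG-over-local-preference-data idea concrete, here is a minimal sketch of what I have in mind: embed items the user has already liked/disliked, then rank new candidates by similarity to the liked set minus similarity to the disliked set. (The function and variable names here are just illustrative; any text-embedding model could produce the vectors.)

```python
# Toy sketch of embedding-based ranking over local preference data.
# Assumes you already have embedding vectors for liked/disliked items;
# the embedding model itself is out of scope here.
import numpy as np

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def score_candidate(candidate_vec, liked_vecs, disliked_vecs):
    # Score = mean similarity to liked items minus mean similarity to disliked ones.
    pos = np.mean([cosine(candidate_vec, v) for v in liked_vecs]) if liked_vecs else 0.0
    neg = np.mean([cosine(candidate_vec, v) for v in disliked_vecs]) if disliked_vecs else 0.0
    return pos - neg

def rank_feed(candidates, liked_vecs, disliked_vecs):
    # candidates: list of (item_id, embedding); returns item ids, best first.
    scored = [(score_candidate(vec, liked_vecs, disliked_vecs), item)
              for item, vec in candidates]
    return [item for _, item in sorted(scored, reverse=True)]
```

This is obviously the dumbest possible version (no recency, no exploration bonus), but it shows why existing local preference data makes the cold-start problem much smaller.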
If that doesn’t work, I’ve been cooking up a plan to use active inference and a brain-inspired knowledge architecture, both to explicitly parametrize exploration as a first-class value in the system and to train it more efficiently over time.
So basically I agree with you on this post, but it seems like a tool where the natural incentives work against it, and it’s at least not trivial to build something good. (I’m OK at SWE but not the best, and I’m just doing this in my free time, so don’t overupdate.)
LLM summary of prompted codebase knowledge
A few pieces of this are already further along than the framing suggests, and I think the interesting question is different from “who will build it.”
On the “who” question: Bluesky’s AT Protocol already ships a custom feed generator API — any feed on the network can be an LLM-curated one, and the switching cost for users is one tap. Paper Skygest (a curated academic-paper feed) has 50K+ weekly users on exactly that infrastructure. Matter and Readwise Reader are already doing LLM-curated long-form for personal reading. The startup you’re predicting exists in several forms; the pieces just haven’t been assembled into the specific “tell it what you want and it obeys” product yet, probably because that product is less defensible than it sounds.
The less-discussed problem: **declared preferences aren’t actually the alignment target most people want.** If I tell a feed “no Trump news,” I get a feed that mirrors my present self — which is fine for filtering, but it’s not “aligned,” it’s just obedient. It optimizes against ragebait by replacing one reward-hacked loss function with another (my stated preferences, which are also gameable, just by me). The deeper misalignment in YouTube/X isn’t that they ignore what I say I want; it’s that nobody — including me — has a clean articulation of what attention *should* optimize for, especially in aggregate.
The more interesting design targets I see are things like:
- **Bridging objectives** (what Community Notes uses): rank content that gets positive engagement *across* ideological clusters, not within them. This is already deployed at scale and demonstrably reduces the ragebait equilibrium without needing personalization at all.
- **Epistemic-stance-aware ranking**: labelers or LLMs tagging claim/question/evidence/opinion, then ranking by curiosity/inquiry rather than engagement.
- **Slow feeds**: weekly digests, not real-time streams, where latency itself is part of the design (sidesteps the recency/outrage coupling).
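The bridging objective in the first bullet can be sketched in a few lines. This is a toy illustration, not the actual Community Notes algorithm (which uses matrix factorization over rater viewpoints); the min-over-clusters approval rate below is just the simplest score with a bridging shape.

```python
# Toy bridging objective: an item ranks highly only when it gets approval
# from *multiple* ideological clusters, not just one.

def bridging_score(votes_by_cluster):
    # votes_by_cluster: {cluster_name: (approvals, total_votes)}
    rates = [a / t for a, t in votes_by_cluster.values() if t > 0]
    if len(rates) < 2:
        return 0.0  # no cross-cluster signal -> no boost
    return min(rates)  # rewarded only if every cluster approves

# An item loved by one cluster and hated by the other scores low;
# an item modestly liked by both scores higher.
polarizing = {"left": (90, 100), "right": (5, 100)}
bridging = {"left": (60, 100), "right": (55, 100)}
assert bridging_score(bridging) > bridging_score(polarizing)
```

The point of the min (rather than an average) is that ragebait which maximally excites one cluster gets no credit for it, so the equilibrium content is whatever both sides can tolerate.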
These don’t require users to “declare preferences” in the shallow sense. They require a *different objective function* at the infrastructure layer. And crucially, open protocols like ATProto make it possible to run that experiment without having to out-compete YouTube.
So I’d restate the prediction as: the disruption isn’t “LLMs will obey what you tell them to show you,” it’s that the cost of running a feed with a non-trivial objective function dropped by ~100x, and the equilibrium where one algorithm optimizes everyone’s attention for ad revenue is no longer a natural monopoly. The startup play is real; the deeper shift is that “algorithmic feed” stops being a thing one company does.