Yeah I probably should have, thanks for the comment.
By "simulation" I meant whatever model the brain has of itself, and I was asking whether that model is necessary for consciousness to arise. (I don't have a really precise definition of consciousness, but I mean what my experience feels like: being me feels like something, while I'd assume a basic computer program or an object doesn't feel anything.) The distinction between that and base reality is where the computing happens: in an abstract sense, the brain is computing me and what I'm feeling, and that computed thing is what I mean by the simulation. The way it might be testable is that it predicts that if an agent isn't modeling itself internally, we can rule out that it's conscious.
I think it's unlikely that computation is the bottleneck here; even if it used large batches, I'd still expect it to produce a better feed than what I usually see. The problem is more likely that they're incentivized to serve us addictive content rather than what we actually like, and my hope is that something like X or other social platforms with different incentives could do a lot better.