Grokipedia is more interesting than it seems, imo, because there’s a very sensible step that AI companies are going to have to take at some point: having their AI maintain its own knowledge base, source its own evidence/training data, reflect on its beliefs and self-correct, and hammer out inconsistencies. There’s going to be a lot of pressure to make this set of beliefs legible and accountable to the safety team, to states, or to the general public. And if they did make it legible to the general public (they probably should?), then all of this is pretty much equivalent to the activity of maintaining a free online encyclopedia.
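To make that loop concrete, here’s a toy Python sketch of what “maintain a belief set, flag inconsistencies, publish it legibly” could look like. Everything here is my own illustration (the class names, the contradiction heuristic, the publish step), not anything any lab has actually described:

```python
# Hypothetical sketch of a "knowledge rumination" loop: store sourced claims,
# flag pairs that look inconsistent, and render the belief set legibly.
from dataclasses import dataclass, field

@dataclass
class Claim:
    statement: str
    sources: list[str]
    confidence: float  # model's credence in [0, 1]

@dataclass
class KnowledgeBase:
    claims: dict[str, Claim] = field(default_factory=dict)

    def add(self, claim: Claim) -> None:
        self.claims[claim.statement] = claim

    def find_inconsistencies(self) -> list[tuple[str, str]]:
        # Placeholder heuristic: a real system would presumably use the model
        # itself to judge contradictions; here we just flag claim pairs that
        # cite a shared source but diverge sharply in confidence.
        flagged = []
        items = list(self.claims.values())
        for i, a in enumerate(items):
            for b in items[i + 1:]:
                if set(a.sources) & set(b.sources) and abs(a.confidence - b.confidence) > 0.5:
                    flagged.append((a.statement, b.statement))
        return flagged

    def publish(self) -> str:
        # The "free online encyclopedia" step: the belief set, made legible.
        return "\n".join(
            f"- {c.statement} (confidence {c.confidence:.2f}; sources: {', '.join(c.sources)})"
            for c in self.claims.values()
        )

kb = KnowledgeBase()
kb.add(Claim("X causes Y", ["paper-A"], 0.9))
kb.add(Claim("X does not cause Y", ["paper-A"], 0.2))
for a, b in kb.find_inconsistencies():
    print(f"Needs review: {a!r} vs {b!r}")
print(kb.publish())
```

The real version would be the model doing all three steps on itself; the point of the sketch is just that the publish step at the end is the only part the public would ever see, and it looks exactly like an encyclopedia.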
Is this how they’re thinking about it behind the scenes? It probably is! They’re an AI company! They spent something like half of Grok 4’s training compute on post-training; they know how important rumination, or self-guided learning, is.
Where does this pressure come from?
States will restrict government use of models they don’t trust. Government contracts are pretty lucrative.
The public, or at least part of it, may also prefer models that are consistent in their positions, as long as the models can explain those positions well enough (and they’re very good at doing that). Politicians might look like counterevidence here, but it’s much harder for a chat assistant or discourse participant to get away with being vague: people already get annoyed when politicians are vague, and for something you’re paying to give you information, the demand for taking a stance on the issues will be even greater.
But I guess for the most part it won’t be driven by pressure; it’ll be driven by an internal need to debug and understand the system’s knowledge rumination processes. The question is not so much whether they’ll build it as whether they’ll make it public. They probably will: it’s cheap to do, it’ll win them some customers, and it’s hard to hide any of it anyway.