https://www.theverge.com/news/618109/grok-blocked-elon-musk-trump-misinformation
https://www.businessinsider.com/grok-3-censor-musk-trump-misinformation-xai-openai-2025-2?op=1
The explanation that it was done by “a new hire” is a classic and easy scapegoat. It’s much more straightforward to believe Musk himself wanted this done, and walked it back when it was clear it was more obvious than intended.
We disagree on which explanation is more straightforward, but regardless, that type of inference is very different from “literal written evidence”.
FWIW, I would currently take bets that Musk will pretty unambiguously enact and endorse censorship of things critical of him or the Trump administration more broadly within the next 12 months. I agree this case is ambiguous, but my pretty strong read, based on him calling for criminal prosecution of journalists who say critical things about him or the Trump administration, is that at the moment it’s a question of political opportunity, not willingness. I am not totally sure, but sure enough to take a 1:1 bet on this operationalization.
Hm, the fact that you replied to me makes it seem like you’re disagreeing with me? But I basically agree with everything you said in this comment. My disagreement was about the specific example that Isopropylpod gave.
Oh, I guess I said “Elon wants xAI to produce a maximally truth-seeking AI, really decentralizing control over information”.
Yeah, in hindsight I should have been more careful to distinguish between my descriptions of people’s political platforms and my inferences about what they “really want”. The thing I was trying to describe was more like “what is the stance of this group” than “do people in the group actually believe the stance”.
A more accurate read of what the “real motivations” are would have been something like “you prevent it by using decentralization, until you’re in a position where you can centralize power yourself, and then you try to centralize power yourself”.
(Though that’s probably a bit too cynical—I think there are still parts of Elon that have a principled belief in decentralization. My guess is just that they won’t win out over his power-seeking parts when push comes to shove.)
It seemed in conflict to me with this sentence in the OP (which Isopropylpod was replying to):

Elon wants xAI to produce a maximally truth-seeking AI, really decentralizing control over information.
I do think in some sense Elon wants that, but my guess is he wants other things more, which will cause him to overall not aim for this.
I am personally quite uncertain about how exactly the xAI thing went down. I find it pretty plausible that it was a result of pressure from Musk, or at least indirect pressure, that was walked back when it revealed itself as politically unwise.
Yepp, see my other comment, which anticipated this.
Ah, yeah, that makes sense. Seems like we are largely on the same page then.
It is definitely evidence that was literally written.
I was referring to the inference:

The explanation that it was done by “a new hire” is a classic and easy scapegoat. It’s much more straightforward to believe Musk himself wanted this done, and walked it back when it was clear it was more obvious than intended.
Obviously this sort of leap to a conclusion is very different from the sort of evidence that one expects upon hearing that literal written evidence (of Musk trying to censor) exists. Given this, your comment seems remarkably unproductive.
I agree that this isn’t what I’d call “direct written evidence”; I was just (somewhat jokingly) making the point that the linked articles are Bayesian evidence that Musk tries to censor, and that the articles are pieces of text.
Ah, gotcha. Unfortunately I have rejected the concept of Bayesian evidence from my ontology and therefore must regard your claim as nonsense. Alas.
(more seriously, sorry for misinterpreting your tone, I have been getting flak from all directions for this talk so am a bit trigger-happy)
No problem, my comment was pretty unclear and I can see from the other comments why you’d be on edge!
Yeah, I’d seen this. The fact that Grok was ever consistently saying this kind of thing is evidence, though not proof, that they may actually have a culture of generally not distorting its reasoning. They could have introduced propaganda policies at training time, but it seems like they haven’t done that; instead they decided to just insert some pretty specific prompts that, I’d guess, were probably going to be temporary.
It’s real bad, but it’s not bad enough for me to shoot yet.