If you can prevent algorithmic progress, then I somewhat agree — though the experiments that drive this sort of progress should be doable with small amounts of compute, so you'd also need to suppress the research or its publication.
I do think that preventing someone from acquiring, say, $1M worth of matmul-adapted compute is a higher bar than you imply here. Being able to do large numbers of matmuls is extremely useful for a zillion reasons beyond AI — iirc Google poured at least hundreds of millions into building TPUs based only on the projected demand for very simple NLP algorithms. LLM-optimized matmul machines help, but you can use almost any hardware if you're willing to adapt your algorithms and software. I would expect a rendering farm, or basically any serious cluster at all in 15 years, to be able to train models better than current ones.
According to their Twitter, Anthropic's revenue grew 3x in the first 3 months of 2026, which this comment ~implies would be unlikely.