I do not vote in either direction on this article, for two reasons.
First, I have not checked for myself how the provided implementation works. It would not be fair to upvote something that might not actually work.
Second, I could not parse the text.
When I see the title “O(1) reasoning in latent space: 1ms inference, 77% accuracy, no attention or tokens” in my feed, I read it as “Constant-time deliberate thinking in ???: fast, unreliable but better than nothing, no ???, and again constant-time”, and I still do not understand what the product is. It turns out to be a “fast, dirty, arbitrary-task classifier in O(1), based on latent space”.
The article itself focuses on what you did and will do, with odd focal points like “This includes grad students...”, but not on what one can obtain from your model or method.