Wow, the SONAR encode-decode performance is shockingly good. I read the paper, and they explicitly state that their goal was translation and that the autoencoder objective alone was extremely easy! (But it hurt translation performance, presumably by using a lot of the latent space to encode non-semantic linguistic details, so they heavily downweighted the autoencoder loss relative to the other objectives when training the final model.)
I wonder how much information there is in those 1024-dimensional embedding vectors. I know you can jam an unlimited amount of data into infinite-precision floating point numbers, but I bet if you add Gaussian noise to them they still decode fine, and the magnitude of noise you can add before performance degrades would allow you to compute how many effective bits there are. (Actually, do people use this technique on latents in general? I’m sure either they do or they have something even better; I’m not a supergenius and this is a hobby for me, not a profession.) Then you could compare to existing estimates of text entropy, and depending on exactly how the embedding vectors are computed (they say 512 tokens of context but I haven’t looked at the details enough to know if there’s a natural way to encode more tokens than that; I remember some references to mean pooling, which would seem to extend to longer text just fine?), compare these across different texts.
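(Back-of-envelope version of the calculation I have in mind, treating each embedding dimension as an independent Gaussian channel; the signal-to-noise ratio here is made up purely to show the shape of the estimate:)
import math

d = 1024     # SONAR embedding dimensionality
snr = 4.0    # hypothetical signal-to-noise power ratio, e.g. tolerable noise std ~ 0.5x signal std
# Shannon capacity of d parallel Gaussian channels: (d/2) * log2(1 + SNR)
effective_bits = 0.5 * d * math.log2(1 + snr)
print(f"~{effective_bits:.0f} effective bits")   # ~1189 bits for this made-up SNR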
Exploring this embedding space seems super interesting, in general, way more so on an abstract level (obviously it isn’t as directly useful at this point) than the embedding space used by actual LLMs. Like, with only 1024 dimensions for a whole paragraph, it must be massively polysemantic, right? I guess your follow-on post (which this was just research to support) is implicitly doing part of this, but I think maybe it underplays “can we extract semantic information from this 1024-dimensional embedding vector in any way substantially more efficient than actually decoding it and reading the output?” (Or maybe it doesn’t; I read the other post too, but haven’t re-read it in light of this one.)
There also appears to be a way to attempt to use this to enhance model capabilities. I seem to think of one of these every other week, and again, I’m not a supergenius nor a professional ML researcher so I assume it’s obvious to those in the field. The devil appears to be in the details; sometimes a new innovation appears to be a variant of something I thought of years ago, sometimes they come out of left field from my perspective, and in no case does there appear to be anything I, from my position, could have usefully done with the idea, so far. Experiments seem very compute-limited, especially because like all other software development in my experience, one needs to actually run the code and see what happens. This particular technique, if it actually works (I’m guessing either it doesn’t, or it only works when scaled so large that a bunch of other techniques would have worked just as well and converged on the same implicit computations) might come with large improvements to interpretability and controllability, or it might not (which seems to be true for all the other ideas I have that might improve capabilities, too). I’m not advising anyone to try it (again, if one works in the field I think it’s obvious, so either there are reasons not to or someone already is). Just venting, I guess. If anyone’s actually reading this, do you think there’s anything useful to do with this idea and others like it, or are they pretty much a dime a dozen, interesting to me but worthless in practice?
(Sorry for going on so long! Wish I had a way to pay a penny to anyone who thoughtfully reads this, whether or not they do anything with it.)
Thanks for reading, and yeah, I was also surprised by how well it does. There does seem to be some degradation in auto-encoding from the translation objective, but I would guess it also gives the embedding space some nicer properties.
I bet if you add Gaussian noise to them they still decode fine
I did try some small tests of how sensitive the SONAR decoder is to noise, and it seems OK. Adding Gaussian noise, reconstructions started breaking down once the noise magnitude exceeded roughly 0.5x the norm of the original vector, or around cosine similarity < 0.9, but I haven't tested this deeply, and it seemed to depend a lot on the text.
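(Roughly the kind of test I mean, as a sketch rather than the exact code I ran; embeddings and vec2text_model are the SONAR tensor/pipeline from my snippets further down, and the noise-scaling convention is a judgment call:)
import torch
import torch.nn.functional as F

def decode_with_noise(vec2text_model, embeddings, noise_scale, seed=0):
    # Add Gaussian noise whose expected norm is noise_scale * ||embedding||,
    # then decode and report how far the noisy vectors drifted (cosine similarity).
    torch.manual_seed(seed)
    d = embeddings.shape[-1]
    std = noise_scale * embeddings.norm(dim=-1, keepdim=True) / d**0.5
    noisy = embeddings + std * torch.randn_like(embeddings)
    cos = F.cosine_similarity(embeddings, noisy, dim=-1)
    texts = vec2text_model.predict(noisy, target_lang="eng_Latn", max_seq_len=512)
    return cos, texts

# sweep the scale and eyeball where the reconstructions fall apart
# for scale in [0.1, 0.25, 0.5, 1.0]:
#     cos, texts = decode_with_noise(vec2text_model, embeddings, scale)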
There also appears to be a way to attempt to use this to enhance model capabilities
In Meta's newer "Large Concept Model" paper they do seem to manage to train a model solely on SONAR vectors, though I think they also fine-tune the SONAR model to get better results (here is a draft distillation I did. EDIT: decided to post it). It seems to have some benefits (processing long contexts becomes much easier), though they don't test on many standard benchmarks, and it doesn't seem much better than LLMs on the ones they do.
The SemFormers paper linked also, I think, tries to do some kind of "explicit planning" with a text auto-encoder, but I haven't read it too deeply yet. From a brief skim, it seemed to get better at graph traversal or something like that.
There are probably other things people will try, hopefully some that help make models more interpretable.
can we extract semantic information from this 1024-dimensional embedding vector in any way substantially more efficient than actually decoding it and reading the output?
Yeah, I would like there to be a good way of doing this in general. So far I haven't come up with any amazing ideas that aren't variations on "train a classifier probe". I guess if you have a sufficiently good probing setup you might be fine, but it doesn't feel to me like something that works in the general case. I think there is a lot of room for people to try things, though.
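(For concreteness, the kind of thing I mean by a classifier probe, as a minimal sketch; the embeddings and labels here are hypothetical, and you would want a proper train/test split:)
import torch
import torch.nn as nn

def train_probe(embeddings, labels, num_classes, epochs=200, lr=1e-2):
    # Linear probe: try to read a property (topic, sentiment, "mentions a number", ...)
    # straight off the 1024-d SONAR vectors without decoding them.
    # embeddings: (N, 1024) float tensor, labels: (N,) long tensor of class ids.
    embeddings = embeddings.detach()
    probe = nn.Linear(embeddings.shape[-1], num_classes)
    opt = torch.optim.Adam(probe.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(probe(embeddings), labels)
        loss.backward()
        opt.step()
    return probe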
I wonder how much information there is in those 1024-dimensional embedding vectors… [Is there] a natural way to encode more tokens
I don't think there is any explicit reason to limit it to 512 tokens, but I guess it depends on how much "detail" needs to be stored. In the Large Concept Models paper, the experiments on text segmentation did seem to degrade beyond roughly 250 characters, but they only test n-gram BLEU scores.
I also guess that with a correction-loop setup like in the vec2text inversion paper, you could probably get even more accurate reconstructions out of the model.
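(Not the actual vec2text setup, but a crude best-of-N round-trip search in the same spirit, using the two SONAR pipelines from my snippets below; whether a small jitter actually yields usefully diverse candidates is an open question:)
import torch
import torch.nn.functional as F

def best_of_n_decode(target_emb, t2vec_model, vec2text_model, n=8, jitter=0.01):
    # Decode the target embedding plus n slightly-perturbed copies (in case the stock
    # decoder is deterministic), re-encode each candidate, and keep whichever candidate's
    # re-embedding lands closest to the target.
    noisy = target_emb.unsqueeze(0) + jitter * torch.randn(n, target_emb.shape[-1])
    batch = torch.cat([target_emb.unsqueeze(0), noisy])
    candidates = vec2text_model.predict(batch, target_lang="eng_Latn", max_seq_len=512)
    re_embs = t2vec_model.predict(candidates, source_lang="eng_Latn")
    sims = F.cosine_similarity(re_embs, target_emb.unsqueeze(0), dim=-1)
    return candidates[int(sims.argmax())]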
Exploring this embedding space seems super interesting
Yeah, I agree. While it is probably imperfect, it seems like an interesting basis to explore.
Since it was kind of a pain to get running, I'm sharing these probably-minimally-interesting results. I tried encoding this paragraph from my comment:
I wonder how much information there is in those 1024-dimensional embedding vectors. I know you can jam an unlimited amount of data into infinite-precision floating point numbers, but I bet if you add Gaussian noise to them they still decode fine, and the magnitude of noise you can add before performance degrades would allow you to compute how many effective bits there are. (Actually, do people use this technique on latents in general? I’m sure either they do or they have something even better; I’m not a supergenius and this is a hobby for me, not a profession.) Then you could compare to existing estimates of text entropy, and depending on exactly how the embedding vectors are computed (they say 512 tokens of context but I haven’t looked at the details enough to know if there’s a natural way to encode more tokens than that; I remember some references to mean pooling, which would seem to extend to longer text just fine?), compare these across different texts.
with SONAR, breaking it up like this:
sentences = [
'I wonder how much information there is in those 1024-dimensional embedding vectors.',
'I know you can jam an unlimited amount of data into infinite-precision floating point numbers, but I bet if you add Gaussian noise to them they still decode fine, and the magnitude of noise you can add before performance degrades would allow you to compute how many effective bits there are.',
'(Actually, do people use this technique on latents in general? I\'m sure either they do or they have something even better; I\'m not a supergenius and this is a hobby for me, not a profession.)',
'Then you could compare to existing estimates of text entropy, and depending on exactly how the embedding vectors are computed (they say 512 tokens of context but I haven\'t looked at the details enough to know if there\'s a natural way to encode more tokens than that;',
'I remember some references to mean pooling, which would seem to extend to longer text just fine?), compare these across different texts.']
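(For reference, the round trip was roughly the following; the pipeline and model names are the ones I remember from the SONAR repo, so treat them as approximate:)
from sonar.inference_pipelines.text import (
    TextToEmbeddingModelPipeline,
    EmbeddingToTextModelPipeline,
)

# Encode each sentence to a single 1024-d vector, then decode straight back.
t2vec_model = TextToEmbeddingModelPipeline(
    encoder="text_sonar_basic_encoder", tokenizer="text_sonar_basic_encoder")
vec2text_model = EmbeddingToTextModelPipeline(
    decoder="text_sonar_basic_decoder", tokenizer="text_sonar_basic_encoder")
embeddings = t2vec_model.predict(sentences, source_lang="eng_Latn")
reconstructed = vec2text_model.predict(embeddings, target_lang="eng_Latn", max_seq_len=512)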
and after decode, I got this:
['I wonder how much information there is in those 1024-dimensional embedding vectors.',
'I know you can encode an infinite amount of data into infinitely precise floating-point numbers, but I bet if you add Gaussian noise to them they still decode accurately, and the amount of noise you can add before the performance declines would allow you to calculate how many effective bits there are.',
"(Really, do people use this technique on latent in general? I'm sure they do or they have something even better; I'm not a supergenius and this is a hobby for me, not a profession.)",
"And then you could compare to existing estimates of text entropy, and depending on exactly how the embedding vectors are calculated (they say 512 tokens of context but I haven't looked into the details enough to know if there's a natural way to encode more tokens than that;",
'I remember some references to mean pooling, which would seem to extend to longer text just fine?), compare these across different texts.']
Can we do semantic arithmetic here?
sentences = [
'A king is a male monarch.',
'A bachelor is an unmarried man.',
'A queen is a female monarch.',
'A bachelorette is an unmarried woman.'
]
...
pp(reconstructed)
['A king is a male monarch.',
'A bachelor is an unmarried man.',
'A queen is a female monarch.',
'A bachelorette is an unmarried woman.']
...
new_embeddings[0] = embeddings[0] + embeddings[3] - embeddings[1]  # king + bachelorette - bachelor: hoping for the "queen" sentence
new_embeddings[1] = embeddings[0] + embeddings[3] - embeddings[2]  # king + bachelorette - queen: hoping for the "bachelor" sentence
new_embeddings[2] = embeddings[1] + embeddings[2] - embeddings[0]  # bachelor + queen - king: hoping for the "bachelorette" sentence
new_embeddings[3] = embeddings[1] + embeddings[2] - embeddings[3]  # bachelor + queen - bachelorette: hoping for the "king" sentence
reconstructed = vec2text_model.predict(new_embeddings, target_lang="eng_Latn", max_seq_len=512)
pp(reconstructed)
['A kingwoman is a male monarch.',
"A bachelor's is a unmarried man.",
'A bachelorette is an unmarried woman.',
'A queen is a male monarch.']
Nope. Interesting though. Actually I guess the 3rd one worked?
OK, I’ll stop here, otherwise I’m at risk of going on forever. But this seems like a really cool playground.
Yeah, it was annoying to get working. I've now added a Google Colab in case anyone else wants to try anything.
It does seem interesting that the semantic arithmetic is hit or miss (mostly miss).