Very funny, but the OA embeddings were always bad at sentence embedding, specifically, compared to other NN sentence-specialized embeddings; and as the original OA embedding paper somewhat defensively argues, it’s not even clear a priori what a sentence embedding should do because a sentence is such a cut-down piece of text, and doing well at a sentence embedding task may only be overfitting or come at the cost of performance on more meaningful text embedding tasks. (Similar to a word embedding: they are so poly-semantic or context-dependent that it seems like they have to have substantial limits—which is part of the motivation for Transformers in the first place, after all...)
That’s why I was experimenting with prompting a LLM to do seriation rewrites (instead of just splitting on punctuation to reuse my existing greedy-pairwise approach, and having done with it). A prompted LLM takes full context and purpose into consideration, and avoids the issues with bad embeddings on very small texts. So the seriation outputs aren’t crazily random, but sensible. This also helps avoid issues like Algon’s, where a general-purpose embedding, blind to context or purpose, winds up emphasizing something you don’t care about; if Algon had been able to prompt a seriation, like ‘sort by theme’, the LLM would almost certainly not try to seriate it by the ‘question formatting’, but would organize his little Q&A question set by topic, like biology then chemistry then physics, say. And if it doesn’t, then it’s easy to add more context or instructions. There are promptable embedders, but they are much more exotic and not necessary here.
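A minimal sketch of what such a prompted seriation could look like, assuming a hypothetical `complete` callable wrapping whatever chat API you use (the function name, prompt wording, and numeric-reply format are all my assumptions, not a fixed recipe):

```python
def seriate_with_llm(items, criterion, complete):
    """Ask an LLM to reorder `items` by `criterion`.

    `complete` is any prompt -> text callable (e.g. a chat-API wrapper).
    Asking for item *numbers* rather than rewritten text means the output
    is guaranteed to be a permutation of the input -- the LLM cannot drop
    or silently alter characters.
    """
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(items))
    prompt = (f"Sort the following list {criterion}. "
              f"Reply with the item numbers only, one per line, "
              f"in the new order:\n{numbered}")
    reply = complete(prompt)
    order = [int(tok) for tok in reply.split()]
    if sorted(order) != list(range(len(items))):
        raise ValueError(f"LLM returned a non-permutation: {order}")
    return [items[i] for i in order]
```

The index-only reply format sidesteps the rewrite-safety problem entirely, at the cost of a slightly less natural prompt.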
(Which makes sense, because if you ask a LLM to sort a list of items in a freeform normal way, like a chat session, they are capable of it; in my poetry selection the other day, “Bell, Crow, Moon: 11 Variations”, I had Claude/Gemini/GPT suggest how exactly to sort the 11 poems we curated into a pleasing sequence, and they did come up with a much nicer poetry sequence than the original random one. And why wouldn’t they be able to do that, when they were good enough to write most of the poems in the first place?)
Do you prompt the LLM to do the whole rewrite or call it n(n-1)/2 times to get the distances?
I was trying out a hierarchical approach when I stopped, because I wasn’t sure I could trust a LLM to rewrite a whole input without dropping any characters or doing unintended rewrites. Aside from being theoretically more scalable, and potentially better because each step is easier and the sorting propagates top-down, explicitly turning it into a tree means you can easily check that you get back an exact permutation of the list at each step, and so that the rewrite was safe. I think that might be unnecessary at this point, given the steady improvement in prompt adherence, so maybe the task is now trivial.
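The tree version with the safety check could be sketched like this (a sketch, not my actual code; `llm_sort` stands in for whatever prompted-sort call you use, and the leaf size is an arbitrary assumption):

```python
from collections import Counter

def check_permutation(before, after):
    """Verify an LLM 'rewrite' is an exact permutation of the input list
    (multiset equality: nothing dropped, duplicated, or altered)."""
    return Counter(before) == Counter(after)

def tree_seriate(items, llm_sort, leaf_size=8):
    """Hierarchical seriation: recursively sort small chunks, then ask the
    LLM to merge the sorted halves, checking safety at every step."""
    if len(items) <= leaf_size:
        result = llm_sort(items)
    else:
        mid = len(items) // 2
        left = tree_seriate(items[:mid], llm_sort, leaf_size)
        right = tree_seriate(items[mid:], llm_sort, leaf_size)
        result = llm_sort(left + right)  # top-down merge pass
    if not check_permutation(items, result):
        raise ValueError("unsafe rewrite: items were dropped or altered")
    return result
```

Because every node of the tree re-checks the multiset, a single dropped or mangled item anywhere in the recursion fails loudly instead of silently corrupting the output.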
There’s no explicit distances calculated: just asking the LLM to sort the list meaningfully.
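For contrast, the distance-based alternative the question alludes to would look something like a greedy nearest-neighbor pass over a precomputed pairwise-distance matrix (a generic sketch of that family of approaches, not the actual implementation):

```python
def greedy_seriate(dist):
    """Greedy nearest-neighbor ordering over an n x n distance matrix:
    start at item 0, repeatedly append the closest unvisited item."""
    n = len(dist)
    order, visited = [0], {0}
    while len(order) < n:
        last = order[-1]
        nxt = min((j for j in range(n) if j not in visited),
                  key=lambda j: dist[last][j])
        order.append(nxt)
        visited.add(nxt)
    return order
```

This is where the n(n−1)/2 distances would come from (typically embedding cosine distances, computed once, not n(n−1)/2 LLM calls); the prompted approach replaces the whole matrix with a single sort request.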