Nicky Pochinkov
NickyP
I respond to you in my next post, but I'm also copying it here:
I think this is one case where language is a bit inadequate.
I think it is very possible for “suffering” to be good. There are two cases for this:
- “suffering” in which states are described as negative, but which are still positive-valence. One example is the burn one feels from spicy food: it still feels good and is pleasurable, despite nominally having aspects which are described as bad. Similar cases are crying that feels cathartic, or people who gain direct pleasure from painful stimuli. Often there is a limit to how far one can go before the pain stops feeling directly pleasurable, but there is a lot of variation in the human mind, and some people gain mental pleasure from being able to withstand levels of pain that are usually considered unbearable. This can be due to things like feelings of pride, or servitude, or novelty.
- “suffering” in which one was actually in pain/suffering at the time of the event, but which leads one to better mental states after the fact. Perhaps it leads one to grow and fix one's other problems. Perhaps it is a memorable experience one finds valuable.
I have experienced both. “Suffering” can be a way to describe these, but if the experience is either positive-valence or leads to longer-term pleasure, then I'm not sure it counts.
I think there are some forms of suffering that are near-universally felt as bad: the chronic pain one gets from illness, the suffering one can feel when feverish, scenarios of starvation or hunger, or effective torture. And I guess with “suffering is bad” I am trying to point more at this.
I've tried swipe for maybe an hour or two, but didn't like it that much. It's been a while though, and maybe I'm missing out.
I'm also going to note that TTS podcast feeds exist:
https://feeds.type3.audio/lesswrong--30-karma.rss
and
Hmm, it seems that when I achieved the virtue of The Void, it was absorbed by the void.
Yeah, the context length was 128 concepts for the small tests they did between architectures, and 2048 concepts for the larger models.
How exactly this translates is kind of variable. They limit the concepts to around 200 characters, but that could be any number of tokens. They say they trained the large model on 2.7T tokens and 142B concepts, so on average about 19 tokens per concept.
The 128 concepts would then translate to roughly 2.4k tokens, and the 2048 concepts to approximately 39k tokens.
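A quick back-of-the-envelope check of those numbers (just ratios of the totals reported in the paper):

```python
# Rough arithmetic from the reported training stats.
tokens = 2.7e12          # 2.7T training tokens
concepts = 142e9         # 142B training concepts
tokens_per_concept = tokens / concepts
print(tokens_per_concept)         # ~19.0 tokens per concept
print(128 * tokens_per_concept)   # ~2,434 tokens (~2.4k context)
print(2048 * tokens_per_concept)  # ~38,940 tokens (~39k context)
```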
Yeah it was annoying to get working. I now have added a Google Colab in case anyone else wants to try anything.
It does seem interesting that the semantic arithmetic is hit or miss (mostly miss).
Thanks for reading, and yeah, I was also surprised by how well it does. There does seem to be some degradation in auto-encoding from the translation, but I would guess it probably also makes the embedding space have some nicer properties.
I bet if you add Gaussian noise to them they still decode fine
I did try some small tests to see how sensitive the Sonar model is to noise, and it seems OK. I tried adding Gaussian noise, and it started breaking at around >0.5x the original vector norm, or at around cosine similarity <0.9, but I haven't tested too deeply, and it seemed to depend a lot on the text.
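In case anyone wants to replicate, a minimal sketch of that kind of noise test (the `sonar_encode`/`sonar_decode` calls are hypothetical placeholders for however you invoke the Sonar encoder/decoder; only the noise-scaling logic is shown):

```python
import numpy as np

def add_scaled_noise(vec: np.ndarray, rel_scale: float, seed: int = 0) -> np.ndarray:
    """Add Gaussian noise with norm equal to rel_scale * ||vec||."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(vec.shape)
    noise *= rel_scale * np.linalg.norm(vec) / np.linalg.norm(noise)
    return vec + noise

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# vec = sonar_encode("some text")   # hypothetical: 1024-dim embedding
# for rel in [0.1, 0.25, 0.5, 1.0]:
#     noisy = add_scaled_noise(vec, rel)
#     print(rel, cosine_sim(vec, noisy))  # decoding tended to break past ~0.5x / cos < 0.9
#     print(sonar_decode(noisy))          # hypothetical decode call
```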
There also appears to be a way to attempt to use this to enhance model capabilities
In Meta's newer “Large Concept Model” paper they do seem to manage to train a model solely on Sonar vectors, though I think they also fine-tune the Sonar model to get better results (here is a draft distillation I did. EDIT: decided to post it). It seems to have some benefits (processing long contexts becomes much easier), though they don't test on many normal benchmarks, and it doesn't seem much better than LLMs on those.
The linked SemFormers paper I think also tries to do some kind of “explicit planning” with a text auto-encoder, but I haven't read it too deeply yet. From a brief skim, it seemed to get better at graph traversal or something.
There are probably other things people will try, hopefully some that help make models more interpretable.
can we extract semantic information from this 1024-dimensional embedding vector in any way substantially more efficient than actually decoding it and reading the output?
Yeah, I would like there to be a good way of doing this in the general case. So far I haven't come up with any amazing ideas that are not variations on “train a classifier probe”. I guess if you have a sufficiently good classifier-probe setup you might be fine, but it doesn't feel to me like something that works in the general case. I think there is a lot of room for people to try things though.
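For concreteness, a minimal sketch of the classifier-probe baseline I mean (the embeddings and labels here are random placeholders standing in for Sonar-encoded labelled text):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Placeholder data: in practice, the embeddings come from encoding labelled text.
embeddings = np.random.randn(1000, 1024)   # (n_samples, embedding_dim)
labels = np.random.randint(0, 2, 1000)     # binary attribute to decode

X_train, X_test, y_train, y_test = train_test_split(embeddings, labels, test_size=0.2)
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("probe accuracy:", probe.score(X_test, y_test))
```

This answers “is attribute X linearly decodable?” for one fixed attribute at a time, which is exactly why it doesn't feel like it works in the general case.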
I wonder how much information there is in those 1024-dimensional embedding vectors… [Is there] a natural way to encode more tokens
I don't think there is any explicit reason to limit it to 512 tokens, but I guess it depends on how much “detail” needs to be stored. In the Large Concept Models paper, the experiments on text segmentation did seem to degrade after around ~250 characters, but they only test n-gram BLEU scores.
I also guess that if you had a reinforcement-loop setup like in the vec2text inversion paper, you could probably do a good job of getting even more accurate reconstructions from the model.
Exploring this embedding space seems super interesting
Yeah I agree, while it is probably imperfect, I think it seems like an interesting basis.
Ok thanks, not sure why that happened but it should be fixed now.
The unlearning results seem promising!
The author's results from unlearning MMLU seem slightly rushed but moderately promising (I previously wrote a paper trying similar things; making good comparisons here is difficult). The results from unlearning different coding languages seem very strong compared to my previous attempt: the model seems to be substantially more monosemantic.
I agree with your suspicion that the Gemma SAE performance was poor because of using reconstructed activations; it matches the drop in performance I got when I tried doing this.
It would be interesting to see if, e.g., steering performance from MONET expert directions is also comparable to that of SAEs. Using SAEs in practice is quite costly, so I would prefer an approach more similar to MONET.
I wonder how many of these orthogonal vectors are “actually orthogonal” once we consider that we are adding two vectors together, and that the model has things like LayerNorm.
If one conditions on downstream mid-layer activations being “sufficiently different”, it seems possible one could find something like a 10x degeneracy in the actual effects these have on the model. (A possibly relevant factor is how big the original activation vector is compared to the steering vector?)
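A toy illustration of the LayerNorm point (all sizes and scales here are made up): two steering vectors that are exactly orthogonal can still produce nearly identical post-norm activations when the base activation dominates.

```python
import numpy as np

def layer_norm(x: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    # Simplified LayerNorm: no learned scale/bias.
    return (x - x.mean()) / np.sqrt(x.var() + eps)

def cos(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
act = 10.0 * rng.standard_normal(1024)    # base activation, large norm
v1 = rng.standard_normal(1024)            # steering vector 1
raw = rng.standard_normal(1024)
v2 = raw - (raw @ v1) / (v1 @ v1) * v1    # steering vector 2, exactly orthogonal to v1

print(cos(v1, v2))                                      # ~0.0: orthogonal steering vectors
print(cos(layer_norm(act + v1), layer_norm(act + v2)))  # ~1.0: near-identical downstream
```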
I think there are already some papers doing similar work, though usually sold as reducing inference costs. For example, the MoEfication paper and Contextual Sparsity paper could probably be modified for this purpose.
Sorry! I have fixed this now
In case anyone finds it difficult to go through all the projects, I have made a longer post where each project title is followed by a brief description, and a list of the main skills/roles they are looking for.
See here: https://www.lesswrong.com/posts/npkvZG67hRvBneoQ9
Cadenza Labs has some video explainers on interpretability-related concepts: https://www.youtube.com/@CadenzaLabs
For example, an intro to Causal Scrubbing:
Seems to work fine for me, but here are the links to Market One, Market Two and Market Three from the post. (They show the % of customer funds to be returned, at 46%, 43% and 42% at the time of this comment.)
Maybe I'm not fully understanding, but one issue I see is that without requiring “perfect prediction”, one could potentially Goodhart on the proposal. I could imagine something like:
In training GPT-5, add a term that upweights very basic bigram statistics. In “evaluation”, use your bigram-statistics table to “predict” most top-k outputs just well enough to pass.
This would probably have a negative impact on performance, but it could possibly be tuned to be just sufficient to pass. Alternatively, one could use a toy model trained on the side that is easy to understand, and regularise the model's predictions toward that toy model instead of exact bigram statistics: just enough to pass the test, while still only understanding the toy model.
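To make the worry concrete, here is a toy sketch of the “predictor” half of the scheme (purely illustrative; the training-side loss term is the part doing the Goodharting):

```python
from collections import Counter, defaultdict

def fit_bigram(tokens):
    """Build a next-token frequency table from a token sequence."""
    table = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        table[prev][nxt] += 1
    return table

def predict_topk(table, token, k=5):
    """'Predict' the model's top-k outputs from bigram statistics alone."""
    return [t for t, _ in table[token].most_common(k)]

tokens = "the cat sat on the mat and the cat ran".split()
table = fit_bigram(tokens)
print(predict_topk(table, "the"))   # ['cat', 'mat']
```

If training upweighted these statistics enough, the table alone could match the model's top-k outputs often enough to pass, while explaining essentially nothing about the model.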
While I think this is important, and I will probably edit the post, I think that even in the unembedding, when getting the logits, the behaviour cares more about direction than distance.
When I think of distance, I implicitly think Euclidean distance:

$d(x, w_i) = \|x - w_i\| = \sqrt{\sum_j (x_j - w_{ij})^2}$

But the actual “distance” used for calculating logits (with $w_i$ the unembedding vector for token $i$ and $x$ the final activation) looks like this:

$\text{logit}_i = w_i \cdot x = \|w_i\| \, \|x\| \cos\theta_i$

Which is a lot more similar to cosine similarity:

$\cos\theta_i = \frac{w_i \cdot x}{\|w_i\| \, \|x\|}$
I think that because the metric is so similar to the cosine similarity, it makes more sense to think of size + directions instead of distances and points.
This is true. I think that visualising points on a (hyper-)sphere is fine, but it is difficult in practice to parametrise the points that way.
It is more that the vectors on the GPU look like $[x_1, x_2, \ldots, x_n]$ (a list of coordinates), but the vectors in the model are treated more like $\lambda \hat{d}$ (a magnitude times a unit direction).
Thanks for this comment! I think this is one of the main concerns I am pointing at.
I think something like fiscal aid could work, but have people tried making models of responses to things like this? It feels like with COVID the relatively decent response was because the government was both enforcing a temporary lockdown policy and sending checks to adjust things “back to normal” despite this. If job automation is slightly more gradual, on the scale of months to years, and specific to only certain jobs at a time, the response could be quite different, and it might be more likely that things end up poorly.
I just wrote this in my next post, but TL;DR: I think even beyond this, I have some preference for the current world continuing to exist VS the current world ending and being replaced with one with a similar amount of pleasure/suffering. It's probably me just being biased, but I do still hold this somewhat.