I am a graduate student at KAIST, supervised by Kimin Lee. Please check out my homepage https://mintaywon.github.io/ if you're interested!
Taywon Min
Thanks for the great work. I think multimodal sparse autoencoders are a promising direction. Do you think it is possible or worthwhile to train SAEs on VLA models like OpenVLA? I haven't seen any related work training or interpreting action models with SAEs, and I am curious about your thoughts.
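For concreteness, here is a minimal sketch of the kind of thing I have in mind: cache hidden activations from a VLA policy (e.g. residual-stream states at the action-token positions) and train a standard sparse autoencoder on them. The dictionary size, hyperparameters, and the assumption that you already have an `activations` tensor are placeholders, not anything from the OpenVLA codebase:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseAutoencoder(nn.Module):
    """Standard SAE: overcomplete dictionary with an L1 sparsity penalty."""
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, x):
        z = F.relu(self.encoder(x))   # sparse feature activations
        x_hat = self.decoder(z)       # reconstruction
        return x_hat, z

def train_sae(activations: torch.Tensor, d_hidden: int = 8192,
              l1_coeff: float = 1e-3, epochs: int = 10, lr: float = 1e-4):
    """Train an SAE on a (num_tokens, d_model) tensor of cached activations,
    e.g. hidden states collected from a VLA policy's action-token positions."""
    d_model = activations.shape[-1]
    sae = SparseAutoencoder(d_model, d_hidden)
    opt = torch.optim.Adam(sae.parameters(), lr=lr)
    loader = torch.utils.data.DataLoader(activations, batch_size=1024, shuffle=True)
    for _ in range(epochs):
        for batch in loader:
            x_hat, z = sae(batch)
            loss = F.mse_loss(x_hat, batch) + l1_coeff * z.abs().mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return sae
```

The activation-caching step is omitted here since it depends on the specific VLA implementation.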
But what if all of the insecure code contributes in some way?
My take on influence functions is that they are good at identifying unique samples that are distinct from the rest of the training data. However, they are bad at estimating group effects, because they treat training points as contributing independently (the i.i.d. assumption).
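To make the group-effect point concrete: the standard influence-function approximation (in the Koh & Liang sense) scores each training point independently and, for a group, just sums those scores, which is exactly where interactions between points get lost:

```latex
% Single-point influence of training example z on the loss at z_test
% (H_{\hat\theta} is the Hessian of the training loss at the trained parameters \hat\theta)
\mathcal{I}(z, z_{\mathrm{test}})
  \;=\; -\,\nabla_\theta L(z_{\mathrm{test}}, \hat\theta)^{\top}
        \, H_{\hat\theta}^{-1} \,
        \nabla_\theta L(z, \hat\theta)

% Group influence is usually approximated additively:
\mathcal{I}(G, z_{\mathrm{test}}) \;\approx\; \sum_{z \in G} \mathcal{I}(z, z_{\mathrm{test}})
```

If the misalignment only emerges when many insecure-code examples act together, the per-point scores could all look small even though the group effect is large.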
Nevertheless, if one does find a smaller subset of the 6000 data points, say reducing it to 1000 or fewer, while observing similar levels of misalignment, I think it would be an interesting finding.
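A sketch of the experiment I mean, with every helper (`score_fn`, `finetune_fn`, `eval_misalignment`) left as a hypothetical placeholder rather than any existing API: rank the 6000 examples by some attribution score, fine-tune on progressively smaller top-k subsets, and check whether the misalignment rate holds up.

```python
from typing import Callable, Sequence

def shrink_and_check(
    dataset: Sequence[dict],
    score_fn: Callable[[dict], float],              # e.g. an influence-style score per example
    finetune_fn: Callable[[Sequence[dict]], object],  # fine-tunes a model on a subset
    eval_misalignment: Callable[[object], float],     # measures misalignment rate of a model
    subset_sizes: Sequence[int] = (6000, 3000, 1000, 300),
) -> dict:
    """Fine-tune on progressively smaller top-k subsets (ranked by score_fn)
    and record the misalignment rate of each resulting model."""
    ranked = sorted(dataset, key=score_fn, reverse=True)
    results = {}
    for k in subset_sizes:
        model = finetune_fn(ranked[:k])
        results[k] = eval_misalignment(model)
    return results
```

Comparing the curve of misalignment rate against subset size would show whether a small core of examples drives the effect or whether the whole set matters.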