Several such APIs exist. My thought was "I'd like to play with the llamascope SAE features without having to muck about with vLLM, and Together lets you upload a LoRA directly" — and I failed to notice that the SAE was for the base model, while Together only supports LoRAs for the instruct model.
The fun thing about this LoRA hack is that you don't actually have to train the LoRA: if you know the outlier direction and magnitude for your model, and the activation addition you want to apply, you can write it straight into the weights. The unfun thing is that it's deeply cursed, and it doesn't even save you from having to mess with vLLM.
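A minimal numpy sketch of the trick, with made-up numbers: assume there's an outlier channel in the residual stream whose dot product with activations is roughly a constant magnitude, then pick the rank-1 factors so the LoRA's output is (approximately) a constant steering vector. The dimension, outlier channel, and magnitude here are all hypothetical placeholders, not measured from any real model.

```python
import numpy as np

d_model = 16
rng = np.random.default_rng(0)

# Hypothetical outlier direction d: a residual-stream direction whose
# dot product with activations is roughly a constant magnitude m.
d = np.zeros(d_model)
d[0] = 1.0   # pretend channel 0 is the outlier channel
m = 100.0    # its (assumed) consistent magnitude

# The activation addition we want to apply (steering vector v).
v = rng.normal(size=d_model)

# Rank-1 LoRA computes delta(x) = B @ (A @ x). Choose A and B so that
# B @ A @ x = v * (d . x) / m, which is ~v whenever d . x is ~m.
A = (d / m)[None, :]   # shape (1, d_model)
B = v[:, None]         # shape (d_model, 1)

# Any activation with the outlier channel at its usual magnitude
# receives the (nearly) constant addition v.
x = rng.normal(size=d_model)
x[0] = m
delta = B @ (A @ x)
print(np.allclose(delta, v))  # True
```

No gradient steps anywhere: `A` reads off the outlier channel and `B` carries the steering vector, which is why you can write the adapter weights directly.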
Edit: on reflection, I do think rank 1 LoRAs might be an underappreciated interpretability tool.
Tinker is an API for LoRA-based PEFT (parameter-efficient fine-tuning). You don't mention it directly, but it's trendy enough that I thought your comment was a reference to it.