Nice work!
Hello!
I've seen your models pop up here and there. Am I correct in assuming, based on your response to my issue here, that your https://huggingface.co/w601sxs/b1ade-embed is a merge of existing models? I think it's a pretty fascinating approach, and it seems that you got it working quite well. If you're interested in sharing, what's the idea behind this model vs. the other one? I see that it involves knowledge distillation, though I'm curious about what kind of data you're distilling with (if you want to share that, at least). I'll restart the MTEB evaluation for you.
Also, do you think you'll add the Sentence Transformers-specific files too? That way it can easily be loaded with ST without ST having to guess the pooling type, haha. I can create a PR for you tomorrow-ish as well if that helps clarify which files you'd need.
- Tom Aarsen
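(For readers following along: the Sentence Transformers-specific files Tom mentions are things like `modules.json` and a pooling configuration, which `SentenceTransformer.save` writes out automatically. Below is a minimal sketch of producing them by wrapping the checkpoint with an explicit pooling module; the mean-pooling choice is an assumption for illustration, not a confirmed detail of b1ade-embed.)

```python
# Sketch: wrap a plain transformer checkpoint as a Sentence Transformers
# model with an explicit pooling config, so ST doesn't have to guess it.
from sentence_transformers import SentenceTransformer, models

word_emb = models.Transformer("w601sxs/b1ade-embed")
pooling = models.Pooling(
    word_emb.get_word_embedding_dimension(),
    pooling_mode="mean",  # assumption; could equally be CLS or last-token
)
model = SentenceTransformer(modules=[word_emb, pooling])
model.save("b1ade-embed-st")  # writes modules.json + the pooling config
```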
Hi Tom! It worked quite well (I'm surprised more people aren't doing this). I added the ST version. I was distilling with just NLI and wiki data. One variant got far worse results: merging followed by KD with a much larger model failed.
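(To make the "merge, then distill" idea concrete, here is a rough sketch using the standard sentence-transformers MSE-distillation recipe. The model names and sentences are placeholders; the actual teacher, merge weights, and training data behind b1ade-embed aren't spelled out in this thread.)

```python
# Sketch of "merge, then distill": average two same-architecture embedding
# models in weight space, then regress the merged student's embeddings onto
# a teacher's embeddings with MSE. All names and data are placeholders.
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

# 1) Weight-space merge (uniform average; requires identical architectures
#    so the two state dicts share the same parameter keys and shapes).
a = SentenceTransformer("embed-model-a")
b = SentenceTransformer("embed-model-b")
state_a, state_b = a.state_dict(), b.state_dict()
a.load_state_dict({k: (state_a[k] + state_b[k]) / 2 for k in state_a})

# 2) Knowledge distillation on plain sentences (e.g., NLI / Wikipedia text).
#    MSELoss assumes teacher and student embedding dimensions match.
teacher = SentenceTransformer("bigger-teacher-model")
sentences = ["A man is playing a guitar.", "Someone is making music."]
targets = teacher.encode(sentences)  # teacher embeddings as regression targets

examples = [InputExample(texts=[s], label=t) for s, t in zip(sentences, targets)]
loader = DataLoader(examples, batch_size=16, shuffle=True)
a.fit(train_objectives=[(loader, losses.MSELoss(a))], epochs=1)
```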