Add ONNX weights (+transformers.js support)
#2
opened by Xenova
Hi there! First of all, really great work with the model... I love seeing the MTEB leaderboard being switched up.
This PR adds ONNX weights for the model, both full-precision (fp32; model.onnx) and 8-bit quantized (int8; model_quantized.onnx), so that it can be used with Transformers.js. Example usage can be found here.
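For reference, loading ONNX weights from the Hub with Transformers.js generally looks like the sketch below. The repo id "Xenova/your-model" is a placeholder for this model's actual id; by default the pipeline loads the quantized weights (model_quantized.onnx), and passing `quantized: false` selects the full-precision file instead.

```javascript
// Sketch of consuming the ONNX weights via Transformers.js (v2-style API).
// "Xenova/your-model" is a placeholder — substitute the real repo id.
import { pipeline } from '@xenova/transformers';

// Create a feature-extraction pipeline.
// Loads model_quantized.onnx by default; use { quantized: false } for model.onnx.
const extractor = await pipeline('feature-extraction', 'Xenova/your-model');

// Compute a sentence embedding with mean pooling and L2 normalization.
const output = await extractor('Hello world.', {
  pooling: 'mean',
  normalize: true,
});
console.log(output.dims); // shape of the embedding tensor, e.g. [1, hidden_size]
```

The embedding can then be compared against others with cosine similarity, exactly as with the Python `transformers` weights.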
Xenova
changed pull request title from
Add ONNX weights
to Add ONNX weights (+transformers.js support)
I also added the transformers.js tag for discoverability.
This is great! Thanks for your contribution!
SeanLee97
changed pull request status to
merged
Thanks! I’ll open another PR so the model card shows both the transformers and transformers.js library tags.