Dominik Weckmüller

do-me

AI & ML interests

Making AI more accessible. Working on semantic search, embeddings and Geospatial AI applications. https://geo.rocks

Recent Activity

liked a Space 2 days ago
MVRL/taxabind-demo
updated a dataset 8 days ago
do-me/overture-places
posted an update 15 days ago

do-me's activity

posted an update 15 days ago
reacted to tomaarsen's post with 🔥 about 1 month ago
📣 Sentence Transformers v3.2.0 is out, marking the biggest release for inference in 2 years! 2 new backends for embedding models: ONNX (+ optimization & quantization) and OpenVINO, allowing for speedups of up to 2x-3x, AND Static Embeddings for 500x speedups at a 10-20% accuracy cost.

1️⃣ ONNX Backend: This backend uses the ONNX Runtime to accelerate model inference on both CPU and GPU, reaching up to a 1.4x-3x speedup depending on the precision. We also introduce 2 helper methods for optimizing and quantizing models for (much) faster inference.
2️⃣ OpenVINO Backend: This backend uses Intel's OpenVINO instead, outperforming ONNX in some situations on CPU.

Usage is as simple as SentenceTransformer("all-MiniLM-L6-v2", backend="onnx"). Does your model not have an ONNX or OpenVINO file yet? No worries - it'll be auto-exported for you. Thank me later 😉
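To make that concrete, a minimal sketch (the model name comes straight from the example above; the 384-dimensional output is a property of all-MiniLM-L6-v2):

```python
from sentence_transformers import SentenceTransformer

# Load with the ONNX backend; if the repo has no ONNX file yet,
# it gets auto-exported on the fly.
model = SentenceTransformer("all-MiniLM-L6-v2", backend="onnx")

embeddings = model.encode(["This sentence is embedded via ONNX Runtime."])
print(embeddings.shape)  # (1, 384)
```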

🔒 Another major new feature is Static Embeddings: think word embeddings like GloVe and word2vec, but modernized. Static Embeddings are bags of token embeddings that are summed together to create text embeddings, allowing for lightning-fast embeddings that don't require any neural networks. They're initialized in one of 2 ways:

1️⃣ via Model2Vec, a new technique for distilling any Sentence Transformer model into static embeddings. Either via a pre-distilled model with from_model2vec or with from_distillation, where you do the distillation yourself. It'll only take 5 seconds on GPU & 2 minutes on CPU, no dataset needed.
2️⃣ Random initialization. This requires finetuning, but finetuning is extremely quick (e.g. I trained with 3 million pairs in 7 minutes). My final model was 6.6% worse than bge-base-en-v1.5, but 500x faster on CPU.
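As a rough sketch of path 1️⃣ (the from_model2vec / from_distillation names come from the release; the Model2Vec model ID below is illustrative):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.models import StaticEmbedding

# Load a pre-distilled Model2Vec model (the ID is illustrative)...
static = StaticEmbedding.from_model2vec("minishlab/M2V_base_output")
# ...or distill any Sentence Transformer yourself (~5 s on GPU, ~2 min on CPU):
# static = StaticEmbedding.from_distillation("all-MiniLM-L6-v2", device="cuda")

model = SentenceTransformer(modules=[static])
embeddings = model.encode(["Lightning-fast embeddings, no neural network at inference."])
```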

Full release notes: https://github.com/UKPLab/sentence-transformers/releases/tag/v3.2.0
Documentation on Speeding up Inference: https://sbert.net/docs/sentence_transformer/usage/efficiency.html
posted an update 2 months ago
What are your favorite text chunkers/splitters?
Mine are:
- https://github.com/benbrandt/text-splitter (Rust/Python, battle-tested, Wasm version coming soon)
- https://github.com/umarbutler/semchunk (Python, really performant but some issues with huge docs)
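For reference, a minimal sketch of how semchunk is typically driven (the whitespace token counter is a stand-in; semchunk also accepts real tokenizer names):

```python
import semchunk

# Build a chunker from any token-counting callable; chunk_size is in tokens.
chunker = semchunk.chunkerify(lambda text: len(text.split()), chunk_size=128)

chunks = chunker("A long, messy document ... " * 200)
print(len(chunks), repr(chunks[0][:60]))
```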

I tried the huge Jina AI regex, but it failed for my (admittedly messy) documents, e.g. from EUR-Lex. Their free Segmenter API is really cool but unfortunately times out on my huge docs (~100 pages): https://jina.ai/segmenter/

Also, I tried to write a Vanilla JS chunker with a simple, adjustable hierarchical logic (inspired by the above). I think it does a decent job for the few lines of code: https://do-me.github.io/js-text-chunker/

Happy to hear your thoughts!
posted an update 2 months ago
SemanticFinder now supports WebGPU thanks to @Xenova's efforts with transformers.js v3!
Expect massive performance gains. I inferenced a whole book with 46k chunks in <5 min. If your device doesn't support #WebGPU, use the classic Wasm-based version:
- WebGPU: https://do-me.github.io/SemanticFinder/webgpu/
- Wasm: https://do-me.github.io/SemanticFinder/

WebGPU harnesses the full power of your hardware, no longer being restricted to just the CPU. The speedup is significant (4-60x) for all kinds of devices: consumer-grade laptops, heavy Nvidia GPU setups or Apple Silicon. Measure the difference for your device here: Xenova/webgpu-embedding-benchmark
Chrome currently works out of the box; Firefox requires some tweaking.

WebGPU + transformers.js makes it possible to build amazing applications and make them accessible to everyone. E.g., SemanticFinder could become a simple GUI for populating your (vector) DB of choice. See the pre-indexed community texts here: do-me/SemanticFinder
Happy to hear your ideas!
replied to Xenova's post 3 months ago

This is absolutely amazing, the speedup is so insane and makes on-device AI much more accessible. Thank you so so much for this!

It would be great to have some kind of "auto" mode for the device param so that devices supporting WebGPU use it right away. Anyway, happily waiting for the docs/blog :)

reacted to Xenova's post with 🔥 3 months ago
I'm excited to announce that Transformers.js V3 is finally available on NPM! 🔥 State-of-the-art Machine Learning for the web, now with WebGPU support! 🤯⚡️

Install it from NPM with:
πš—πš™πš– πš’ @πš‘πšžπšπšπš’πš—πšπšπšŠπšŒπšŽ/πšπš›πšŠπš—πšœπšπš˜πš›πš–πšŽπš›πšœ

or via CDN, for example: https://v2.scrimba.com/s0lmm0qh1q

Segment Anything demo: webml-community/segment-anything-webgpu
reacted to Xenova's post with ❤️ 4 months ago
Introducing Whisper Timestamped: Multilingual speech recognition with word-level timestamps, running 100% locally in your browser thanks to 🤗 Transformers.js! Check it out!
👉 Xenova/whisper-word-level-timestamps 👈

This unlocks a world of possibilities for in-browser video editing! 🤯 What will you build? 😍

Source code: https://github.com/xenova/transformers.js/tree/v3/examples/whisper-word-timestamps
reacted to Salama1429's post with 🔥 6 months ago
Cohere's Aya 8B & 35B 🔥
> Multilingual (23 languages), beats Mistral 7B and Llama3 8B in preference; open weights.

Capabilities:

🌍 **Multilingual Mastery**: Supporting 23 languages, including Arabic!

🏆 **Top Performer**: Outperforms Mistral 7B and Llama3 8B in user preference.

🔍 **Open Weights**: Access open weights for your research and projects.

🔗 **License**: CC-BY-NC with adherence to C4AI's Acceptable Use Policy.

💼 **Developed by**: Cohere For AI and Cohere.


Check out Aya 23 on Hugging Face; the link is in the comments.

#AI #MachineLearning #NLP #Multilingual #Arabic #TechInnovation #OpenSource #CohereAI #AyaModel
posted an update 6 months ago
Hey Hugging Face, love your open-source attitude and particularly transformers.js for embedding models! Your current "Use this model" integration gives you the transformers.js code, but there is no quick way to really test a model in one click.
SemanticFinder (do-me/SemanticFinder) offers such an integration for all compatible feature-extraction models! All you need to do is add a URL parameter with the model ID, like so: https://do-me.github.io/SemanticFinder/?model=Xenova/bge-small-en-v1.5. You can also choose between quantized and normal mode with https://do-me.github.io/SemanticFinder/?model=Xenova/bge-small-en-v1.5&quantized=false. Maybe that would do for an HF integration?
I know it's a small open-source project, but I really believe it provides value for devs before deciding on one model or another. Also, it's much easier than having to spin up a notebook, install dependencies, etc. It's private, so you could even do some real-world evaluation on personal data without having to worry about third-party services' data policies.
Happy to hear the community's thoughts!
reacted to Xenova's post with ❤️ 6 months ago
Introducing the 🤗 Transformers.js WebGPU Embedding Benchmark! ⚡️
👉 Xenova/webgpu-embedding-benchmark 👈

On my device, I was able to achieve a 64.04x speedup over WASM! 🤯 How much does WebGPU speed up ML models running locally in your browser? Try it out and share your results! 🚀
posted an update 7 months ago
Get daily/weekly/monthly notifications about the latest trending feature-extraction models compatible with transformers.js for semantic search! All open source, built on GitHub Actions and ntfy.sh.

I'm also providing daily-updated tables (filterable, and sortable by ONNX model size too!) if you want to have a look only once in a while. Download what suits you best: CSV, XLSX, Parquet, JSON, HTML.

Would you like to monitor other models/tags? Feel free to open a PR :)

GitHub: https://github.com/do-me/trending-huggingface-models
Ntfy.sh daily channel: https://ntfy.sh/feature_extraction_transformers_js_models_daily
Sortable table: https://do-me.github.io/trending-huggingface-models/

And the best part: all 145 models are integrated in SemanticFinder to play around with: https://do-me.github.io/SemanticFinder/
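If you'd rather poll the channel programmatically than use the ntfy app, ntfy.sh exposes a streaming JSON endpoint per topic; a minimal sketch:

```python
import json
import requests

# Subscribe to the daily channel via ntfy.sh's streaming JSON endpoint.
topic = "feature_extraction_transformers_js_models_daily"
with requests.get(f"https://ntfy.sh/{topic}/json", stream=True) as resp:
    for line in resp.iter_lines():
        if not line:
            continue  # skip keepalive blanks
        event = json.loads(line)
        if event.get("event") == "message":
            print(event["time"], event["message"])
```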

replied to their post 7 months ago

Thanks a lot for your answer; this is confusing. Apparently other=feature-extraction covers all pipeline_tag=feature-extraction models as well. There are many popular models tagged in the same way, like https://huggingface.co/Snowflake/snowflake-arctic-embed-xs, which might remain in the dark if you're looking for them this way.

It's 137 vs. 159 models, which makes a big difference! It does indeed seem to be the model authors' choice where to tag, but it rather looks like a mistake here. Maybe HF might want to improve this UI-wise?

fyi @thenlper

posted an update 7 months ago
Question: HF model search not showing all results

I noticed that when I use the HF model search with these tags:
- feature-extraction
- transformers.js
it is not showing all models that are actually tagged.

Example: all Alibaba-NLP models (e.g. the gte family) are correctly tagged, but they don't show up here:
- https://huggingface.co/models?pipeline_tag=feature-extraction&library=transformers.js&sort=trending&search=gte
- correctly tagged model Alibaba-NLP/gte-large-en-v1.5

Does anyone know why?

fyi @Xenova
posted an update 8 months ago
Hey, I just added three useful advanced use cases to do-me/SemanticFinder.
SemanticFinder is a collection of embeddings for public documents and books. You can create your own index file from any text or PDF and save it without installing or downloading anything. Try it yourself:

1. Translating from 100+ languages to English (even though it might confuse a strawberry with a grapefruit ;D): https://do-me.github.io/SemanticFinder/?hf=List_of_the_Most_Common_English_Words_70320cde&firstOnly=true&inferencingActive=False
2. Finding English synonyms: https://do-me.github.io/SemanticFinder/?hf=List_of_the_Most_Common_English_Words_0d1e28dc&firstOnly=true&inferencingActive=False
3. The "universal index idea": create an embedding index with 30k English words and reuse it on unseen texts. You can decide to fill the gaps in the index by additional inferencing or just stick to the 30k index for instant semantic similarity.
Initial idea: https://github.com/do-me/SemanticFinder/discussions/48
Try here: https://do-me.github.io/SemanticFinder/?hf=List_of_the_Most_Common_English_Words_0d1e28dc&inferencingActive=False&universalIndexSettingsWordLevel with a text of your choice.

This could be enhanced by adding duplets or triplets like "climate change" or "greenhouse gas". Eventually I'd like to set up vector DB integrations.
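To make the universal-index idea concrete, a hypothetical sketch (the vocabulary, vectors, and helper name are all illustrative, not SemanticFinder's actual code):

```python
import numpy as np

# Precomputed word-level index: vocabulary word -> embedding vector.
# (Random vectors as placeholders; a real index would hold ~30k inferenced words.)
universal_index = {w: np.random.rand(384) for w in ["climate", "change", "gas"]}

def similarity_scores(text: str, query_vec: np.ndarray) -> dict:
    """Score the known words of an unseen text against a query, skipping gaps."""
    scores = {}
    for word in text.lower().split():
        vec = universal_index.get(word)
        if vec is None:
            continue  # gap in the index: optionally fill by on-demand inference
        cos = float(vec @ query_vec / (np.linalg.norm(vec) * np.linalg.norm(query_vec)))
        scores[word] = cos
    return scores

print(similarity_scores("Climate change drives greenhouse gas policy", np.random.rand(384)))
```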

Super happy to hear your feedback, ideas and maybe even contributions! :)

---
Edit: Apparently markdown URL formatting only works for HF links.