Intel
Intel and Hugging Face are building powerful optimization tools to accelerate training and inference with Transformers.
![](https://cdn-media.huggingface.co/marketing/intel-page/Intel-Hugging-Face-alt-version2-org-page.png)
![](/blog/assets/25_hardware_partners_program/carbon_inc_quantizer.png)
![](/blog/assets/143_q8chat/thumbnail.png)
Intel optimizes widely adopted and innovative AI software tools, frameworks, and libraries for Intel® architecture. Whether you are computing locally or deploying AI applications on a massive scale, your organization can achieve peak performance with AI software optimized for Intel® Xeon® Scalable platforms.
Intel’s engineering collaboration with Hugging Face offers state-of-the-art hardware and software acceleration to train, fine-tune, and run inference with Transformers.
Useful Resources:
- Intel AI + Hugging Face partner page
- Intel AI GitHub
- Developer Resources from Intel and Hugging Face
Get Started
1. Intel Acceleration Libraries
To get started with Intel hardware and software optimizations, download and install the Optimum Intel and Intel® Extension for Transformers libraries. Each project’s documentation explains how to install and use it.
The Optimum Intel library primarily provides hardware acceleration, while the Intel® Extension for Transformers focuses more on software acceleration. Install both to achieve ideal performance and productivity gains in transfer learning and fine-tuning with Hugging Face.
2. Find Your Model
Next, find your desired model (and dataset) using the search box at the top-left of Hugging Face’s website. Add “intel” to your search terms to narrow the results to models pretrained by Intel.
![](https://huggingface.co/spaces/Intel/README/resolve/main/hf-model_search.png)
3. Read Through the Demo, Dataset, and Quick-Start Commands
On the model’s page (called a “Model Card”) you will find a description, usage information, an embedded inference demo, and the associated dataset. In the upper right of the page, click “Use in Transformers” for code hints on how to import the model into your own workspace with an established Hugging Face pipeline and tokenizer.
![](https://huggingface.co/spaces/Intel/README/resolve/main/hf-use_transformers.png)
![](https://huggingface.co/spaces/Intel/README/resolve/main/hf-quickstart.png)
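The “Use in Transformers” hint typically boils down to a few lines like the sketch below, shown here with one of Intel’s published checkpoints (`Intel/bert-base-uncased-mrpc`, a sequence-classification model; this assumes `transformers` and a PyTorch backend are installed):

```python
# Sketch of the "Use in Transformers" quick-start pattern for an Intel model.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "Intel/bert-base-uncased-mrpc"  # fine-tuned on the MRPC paraphrase task

# Download (or load from cache) the tokenizer and model weights from the Hub.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# MRPC compares two sentences, so the tokenizer takes a sentence pair.
inputs = tokenizer(
    "The company reported strong earnings.",
    "Earnings at the firm were robust.",
    return_tensors="pt",
)
outputs = model(**inputs)
print(outputs.logits.shape)  # one row of class logits for the sentence pair
```

The same `AutoTokenizer`/`AutoModel` pattern works for the other Intel checkpoints; only the model name and the task-specific `AutoModel` class change.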
The Intel organization currently hosts 31 collections and 192 models, including:
- Intel/neural-embedding-v1
- Intel/dynamic-minilmv2-L6-H384-squad1.1-int8-static
- Intel/dpt-swinv2-tiny-256
- Intel/dpt-swinv2-large-384
- Intel/dpt-beit-large-384
- Intel/bert-base-uncased-mrpc
- Intel/dpt-beit-large-512
- Intel/phi-3-mini-4k-ov-quantized
- Intel/llava-gemma-2b