---
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: zero-shot-classification
tags:
- ORTModelForSequenceClassification
---

# DeBERTa-v3-base-onnx-quantized

This is a quantized ONNX export of the base model [sileod/deberta-v3-base-tasksource-nli](https://huggingface.co/sileod/deberta-v3-base-tasksource-nli). To use it you need `onnxruntime` installed on your machine.

You can also try the model in my [Huggingface Space](https://huggingface.co/spaces/arnabdhar/Zero-Shot-Classification-DeBERTa-Quantized).

The source code for the Huggingface application can be found on [GitHub](https://github.com/arnabd64/Zero-Shot-Text-Classification).

To run this model on your machine, use the following code. Note that the model is optimized for CPUs with AVX2 support.
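Since the quantized weights target AVX2, you may first want to confirm that your CPU advertises the instruction set. A quick sketch for Linux (assuming `/proc/cpuinfo` is available; macOS and Windows need different commands):

```shell
# Look for the avx2 flag in the CPU feature list (Linux only)
if grep -q avx2 /proc/cpuinfo; then
    echo "AVX2 supported"
else
    echo "AVX2 not detected"
fi
```

The model still runs without AVX2; it just won't benefit from the AVX2-tuned quantized kernels.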

1. Install dependencies:

```bash
pip install transformers optimum[onnxruntime]
```

2. Run the model:

```python
# load libraries
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForSequenceClassification
from optimum.pipelines import pipeline

# load model components
MODEL_ID = "pitangent-ds/deberta-v3-nli-onnx-quantized"
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = ORTModelForSequenceClassification.from_pretrained(MODEL_ID)

# build the pipeline
classifier = pipeline("zero-shot-classification", tokenizer=tokenizer, model=model)

# inference
text = "The jacket that I bought is awesome"
candidate_labels = ["positive", "negative"]

results = classifier(text, candidate_labels)
print(results)
```
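The zero-shot pipeline returns a dictionary with `sequence`, `labels`, and `scores`, where labels are sorted by descending score. A minimal sketch of reading the result, using illustrative values rather than actual model output:

```python
# Illustrative result structure (the values here are made up, not real model output)
results = {
    "sequence": "The jacket that I bought is awesome",
    "labels": ["positive", "negative"],
    "scores": [0.98, 0.02],
}

# Labels arrive sorted by score, so the first entry is the top prediction
top_label, top_score = results["labels"][0], results["scores"][0]
print(f"predicted: {top_label} ({top_score:.0%})")  # → predicted: positive (98%)
```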