Upload README.md with huggingface_hub
---
base_model: google/gemma-2-2b-it
library_name: transformers
license: gemma
pipeline_tag: text-generation
tags:
- conversational
- openvino
- nncf
- 8-bit
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
  agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
  Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model_relation: quantized
---

This model is a quantized version of [`google/gemma-2-2b-it`](https://huggingface.co/google/gemma-2-2b-it), converted to the OpenVINO format. It was obtained via the [nncf-quantization](https://huggingface.co/spaces/echarlaix/nncf-quantization) space with [optimum-intel](https://github.com/huggingface/optimum-intel).

First, make sure you have `optimum-intel` installed:
```bash
pip install optimum[openvino]
```

To load your model, you can do as follows:

```python
from optimum.intel import OVModelForCausalLM

model_id = "AIFunOver/gemma-2-2b-it-openvino-8bit"
model = OVModelForCausalLM.from_pretrained(model_id)
```
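
Once loaded, the model works with the regular `transformers` tokenizer and `generate` API. The snippet below is a minimal generation sketch; the prompt and generation settings are illustrative and not part of the original card:

```python
from transformers import AutoTokenizer
from optimum.intel import OVModelForCausalLM

model_id = "AIFunOver/gemma-2-2b-it-openvino-8bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)

# Gemma-2-2b-it is an instruction-tuned chat model, so format the prompt with the chat template.
messages = [{"role": "user", "content": "What does 8-bit weight quantization change about a model?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

# Generate a short completion and decode only the newly generated tokens.
outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```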
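
For reference, a comparable 8-bit weight-only OpenVINO export can be produced locally with `optimum-intel`. This is only a sketch, assuming the `OVWeightQuantizationConfig` API; the exact NNCF settings applied by the nncf-quantization space may differ, and access to the base model requires accepting Google’s Gemma license:

```python
from optimum.intel import OVModelForCausalLM, OVWeightQuantizationConfig

# Export the original model to OpenVINO with 8-bit weight compression (weight-only quantization via NNCF).
quantization_config = OVWeightQuantizationConfig(bits=8)
model = OVModelForCausalLM.from_pretrained(
    "google/gemma-2-2b-it",
    export=True,
    quantization_config=quantization_config,
)

# Save the quantized OpenVINO model so it can be pushed to the Hub or loaded later.
model.save_pretrained("gemma-2-2b-it-openvino-8bit")
```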