Update README.md
README.md
CHANGED
@@ -18,7 +18,7 @@ The provided OpenVINO™ IR model is compatible with:
 * OpenVINO version 2024.1.0 and higher
 * Optimum Intel 1.16.0 and higher

-## Running Model Inference
+## Running Model Inference with [Optimum Intel](https://huggingface.co/docs/optimum/intel/index)

 1. Install packages required for using [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) integration with the OpenVINO backend:
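The body of this install step sits in the unchanged lines between the two hunks, so it is not shown in the diff. For reference only, a minimal sketch of the Optimum Intel path, assuming the standard `optimum.intel` API (installed via `pip install optimum[openvino]`); the `print(text)` context line in the next hunk is consistent with an example of this shape:

```
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "OpenVINO/TinyLlama-1.1B-Chat-v1.0-fp16-ov"

# Loads the OpenVINO IR weights directly; no PyTorch checkpoint is needed.
model = OVModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("What is OpenVINO?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
text = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
print(text)
```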
@@ -45,6 +45,37 @@ print(text)

 For more examples and possible optimizations, refer to the [OpenVINO Large Language Model Inference Guide](https://docs.openvino.ai/2024/learn-openvino/llm_inference_guide.html).
+## Running Model Inference with [OpenVINO GenAI](https://github.com/openvinotoolkit/openvino.genai)
+
+1. Install packages required for using OpenVINO GenAI:
+
+```
+pip install openvino-genai huggingface_hub
+```
+
+2. Download the model from the Hugging Face Hub:
+
+```
+import huggingface_hub as hf_hub
+
+model_id = "OpenVINO/TinyLlama-1.1B-Chat-v1.0-fp16-ov"
+model_path = "TinyLlama-1.1B-Chat-v1.0-fp16-ov"
+
+hf_hub.snapshot_download(model_id, local_dir=model_path)
+```
+
+3. Run model inference:
+
+```
+import openvino_genai as ov_genai
+
+device = "CPU"
+pipe = ov_genai.LLMPipeline(model_path, device)
+print(pipe.generate("What is OpenVINO?"))
+```
+
+More GenAI usage examples can be found in the OpenVINO GenAI library [docs](https://github.com/openvinotoolkit/openvino.genai/blob/master/src/README.md) and [samples](https://github.com/openvinotoolkit/openvino.genai?tab=readme-ov-file#openvino-genai-samples).
+
 ## Legal information

 The original model is distributed under the [apache-2.0](https://choosealicense.com/licenses/apache-2.0/) license. More details can be found in [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0).
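Note that `pipe.generate("What is OpenVINO?")` in the added section relies on the pipeline's default generation config. A hedged variant, assuming the `max_new_tokens` keyword used throughout the GenAI samples:

```
import openvino_genai as ov_genai

pipe = ov_genai.LLMPipeline("TinyLlama-1.1B-Chat-v1.0-fp16-ov", "CPU")

# Cap the output length explicitly instead of relying on the default config.
print(pipe.generate("What is OpenVINO?", max_new_tokens=100))
```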