Update README.md
This model is an int8 model produced by applying SmoothQuant to tiiuae/falcon-7b.
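The README only states that the model was produced with SmoothQuant; for intuition, here is a minimal numpy sketch of that technique. The toy shapes, random data, and the smoothing strength `alpha=0.5` are illustrative assumptions, not the settings used for this model:

```python
# Hypothetical toy sketch of the SmoothQuant idea (not the IPEX implementation):
# per-channel scales migrate activation outliers into the weights, so int8
# quantization of the activations loses less precision.
import numpy as np

def smooth(X, W, alpha=0.5):
    # Per input channel j: s_j = max|X[:, j]|^alpha / max|W[j, :]|^(1 - alpha).
    # Dividing X by s and multiplying W by s leaves X @ W mathematically unchanged.
    s = (np.abs(X).max(axis=0) ** alpha) / (np.abs(W).max(axis=1) ** (1 - alpha))
    return X / s, W * s[:, None]

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
X[:, 2] *= 50.0             # simulate one outlier activation channel
W = rng.normal(size=(8, 3))

Xs, Ws = smooth(X, W)       # smoothed activations and compensated weights
```

After smoothing, `Xs @ Ws` equals `X @ W`, but the largest activation magnitude is much smaller, which is what makes the subsequent int8 quantization viable.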
### Direct Use

Use IPEX 2.2:

```bash
git clone https://github.com/intel/intel-extension-for-pytorch.git
cd intel-extension-for-pytorch/examples/cpu/inference/python/llm/single_instance
git checkout release/2.2
python run_accuracy.py -m tiiuae/falcon-7b --quantized-model-path "<git_clone_path>/best_model.pt" --dtype int8 --tasks lambada_openai
```

## Evaluate