Update README.md
README.md
CHANGED
````diff
@@ -32,6 +32,7 @@ Below linear modules (21/133) are fallbacked to fp32 for less than 1% relative a
 ### Load with optimum:
 
 ```python
+# transformers <= 4.23.0
 from optimum.intel.neural_compressor.quantization import IncQuantizedModelForSeq2SeqLM
 int8_model = IncQuantizedModelForSeq2SeqLM.from_pretrained(
     'Intel/distilbart-cnn-12-6-int8-dynamic',
````