model-index:
- name: bloom-1b1
  results:
  - task:
      type: text-generation
    dataset:
      name: Wikitext
      type: wikitext
    metrics:
    - type: perplexity (BASELINE)
      value: 23.821933436102835
    - type: perplexity (BASIC)
      value: 24.12128599176988
This is a d-Matrix functional reference of the BLOOM-1B1 model.
The reference provides the following functional *configurations*:

Configuration | Explanation
:-- | :--
**`BASELINE`** | a reference functionally equivalent to the original model
**`BASIC`** | all linear algebraic operands quantized to `MXINT8-64`, and all other operations transformed to approximated kernel simulations
### Usage

Install d-Matrix [Dmx_Compressor](https://github.com/d-matrix-ai/dmx-compressor) first.

```sh
pip install dmx_compressor
```
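Once installed, the model can also be transformed outside of an evaluation harness. The following is a minimal sketch: only `DmxModel.from_torch` and `to_basic_model()` come from the evaluation example below, and it assumes the wrapped model keeps the standard Hugging Face forward interface.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from dmx.compressor.modeling import DmxModel

model_id = "d-matrix/bloom-1b1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Wrap the PyTorch model and switch to the BASIC configuration
# (quantized linear-algebraic operands, approximated kernel simulations).
model = DmxModel.from_torch(model).to_basic_model()

# Single forward pass as a sanity check
# (assumes the wrapper preserves the Hugging Face calling convention).
inputs = tokenizer("d-Matrix functional reference", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.logits.shape)
```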
The following is an example of evaluating the model with the lm-eval harness.

```sh
pip install lm-eval
```
```python
from dmx.compressor.modeling import DmxModel
import lm_eval

model_args = "pretrained=d-matrix/bloom-1b1,trust_remote_code=True"
lm = lm_eval.api.registry.get_model("hf").create_from_arg_string(model_args, {"batch_size": 1})

# Transform the model with DMX
lm._model = DmxModel.from_torch(lm._model).to_basic_model()  # Using BASIC configuration

task = "wikitext"  # Assign desired task, e.g. "wikitext"
eval_results = lm_eval.evaluate(lm, lm_eval.tasks.get_task_dict([task]))
```
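The perplexity can then be read from the returned results dictionary. The exact metric key names vary with the installed lm-eval version, so the sketch below simply prints everything reported for the chosen task.

```python
import json

# Inspect all metrics reported for the task; key names depend on the lm-eval version.
print(json.dumps(eval_results["results"][task], indent=2))
```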