This is a d-Matrix functional reference of the Llama-3.2-1B model. The reference provides the following functional configurations:
Configuration | Explanation |
---|---|
BASELINE | a reference functionally equivalent to the original model |
BASIC | all linear algebraic operands quantized to MXINT8-64, and all other operations transformed to approximated kernel simulations |
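To illustrate what MXINT8-64 quantization does to a tensor, here is a minimal NumPy sketch. It assumes MXINT8-64 denotes the Microscaling-style layout of 64-element blocks sharing one power-of-two scale, with 8-bit signed mantissas per element; this is an illustration of the numeric format, not d-Matrix's actual kernel implementation.

```python
import numpy as np

def quantize_mxint8(x, block_size=64):
    # Illustrative MXINT8-style block quantization (assumption: 64-element
    # blocks with a shared power-of-two scale and 8-bit signed mantissas).
    x = np.asarray(x, dtype=np.float64)
    pad = (-len(x)) % block_size
    blocks = np.pad(x, (0, pad)).reshape(-1, block_size)
    out = np.empty_like(blocks)
    for i, b in enumerate(blocks):
        amax = np.max(np.abs(b))
        # Shared exponent chosen so the largest element's mantissa
        # fits in the signed 8-bit range [-128, 127].
        exp = 0 if amax == 0 else int(np.floor(np.log2(amax))) - 6
        scale = 2.0 ** exp
        mant = np.clip(np.round(b / scale), -128, 127)
        out[i] = mant * scale  # dequantized value seen by the model
    return out.reshape(-1)[: len(x)]
```

Because each block keeps only 8 bits of mantissa relative to its own scale, quantization error grows with the dynamic range inside a block; this is why the BASIC configuration shows higher perplexity than BASELINE.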
## Usage
Install d-Matrix Dmx_Compressor first:

```shell
pip install dmx_compressor
```
The following is an example of loading the model and evaluating it with lm-eval:

```shell
pip install lm-eval
```

```python
from dmx.compressor.modeling import DmxModel
import lm_eval

model_args = "pretrained=d-matrix/Llama-3.2-1B,trust_remote_code=True"
lm = lm_eval.api.registry.get_model("hf").create_from_arg_string(model_args, {"batch_size": 1})

# Transform the model with DMX
lm._model = DmxModel.from_torch(lm._model).to_basic_model()  # Using BASIC configuration

task = "wikitext"  # assign desired task
eval_results = lm_eval.evaluate(lm, lm_eval.tasks.get_task_dict([task]))
```
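The `eval_results` dictionary returned by `lm_eval.evaluate` nests per-task metrics under a `"results"` key. A hedged sketch of reading out the perplexity, using a mocked results dict in place of a real run (the exact metric key depends on the lm-eval version; recent versions report wikitext word perplexity as `"word_perplexity,none"`):

```python
# Mocked stand-in for the dict returned by lm_eval.evaluate;
# the key names below are assumptions about the lm-eval output schema.
eval_results = {"results": {"wikitext": {"word_perplexity,none": 11.576}}}

ppl = eval_results["results"]["wikitext"]["word_perplexity,none"]
print(f"wikitext perplexity: {ppl}")
```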
## Evaluation results

- perplexity (BASELINE) on Wikitext: 11.576 (self-reported)
- perplexity (BASIC) on Wikitext: 132.436 (self-reported)