---
base_model: neuralmagic/Llama-2-7b-pruned50-retrained-instruct
inference: false
model_type: llama
pipeline_tag: text-generation
datasets:
- garage-bAInd/Open-Platypus
- Open-Orca/OpenOrca
- cognitivecomputations/dolphin
tags:
- sparse
- instruct
- deepsparse
---
# Llama-2-7b-pruned50-retrained-instruct-quant-ds
This repo contains a [50% sparse Llama 2 7B](https://huggingface.co/neuralmagic/Llama-2-7b-pruned50-retrained) fine-tuned for instruction-following tasks on a blend of the Open-Platypus, OpenOrca, and Dolphin datasets.
It was then quantized to 8-bit weights and activations and exported for deployment with [DeepSparse](https://github.com/neuralmagic/deepsparse), a CPU inference runtime for sparse models.
**Authors**: Neural Magic, Cerebras
## Usage
Below are some code snippets to help you get started running the model quickly.
### Sparse Transfer
By leveraging a pre-sparsified model's structure, you can efficiently fine-tune on new data, leading to reduced hyperparameter tuning, training times, and computational costs. Learn about this process [here](https://neuralmagic.github.io/docs-v2/get-started/transfer).
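The exact SparseML recipe used for this model is not shown here, but the core idea of sparse transfer can be sketched: record the zero pattern of the pre-sparsified checkpoint and re-apply it after every optimizer step so fine-tuning cannot densify the weights. Below is a minimal sketch, assuming a standard Transformers causal-LM setup; the `batch` of tokenized instruction data and the hyperparameters are placeholders.

```python
# A minimal sketch of sparsity-preserving fine-tuning (NOT Neural Magic's
# actual SparseML recipe): record which weights are already zero in the
# pre-sparsified checkpoint, then re-apply that mask after every optimizer
# step so training cannot densify the model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "neuralmagic/Llama-2-7b-pruned50-retrained"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Record the zero pattern of every 2-D weight matrix once, up front.
masks = {
    name: (param != 0).to(param.dtype)
    for name, param in model.named_parameters()
    if param.dim() == 2
}

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def training_step(batch):
    # `batch` is a dict of tokenized tensors from your instruction dataset.
    outputs = model(**batch, labels=batch["input_ids"])
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    # Re-apply the mask so pruned weights stay exactly zero.
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in masks:
                param.mul_(masks[name])
```

In practice SparseML handles this masking (along with distillation and quantization) through declarative recipes; the loop above only illustrates the sparsity-preserving constraint that makes transfer cheaper than pruning from scratch.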
### Running the model
For accelerated inference with sparsity on CPUs, deploy with [deepsparse](https://github.com/neuralmagic/deepsparse).
```python
# pip install deepsparse[llm]
from deepsparse import TextGeneration
model = TextGeneration(model_path="hf:neuralmagic/Llama-2-7b-pruned50-retrained-instruct-quant-ds")
input_text = "Write me a poem about Machine Learning."
# Generate up to 100 new tokens for the given prompt.
outputs = model(input_text, max_new_tokens=100)
print(outputs.generations[0].text)
```
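The snippet above passes the raw prompt directly. Instruction-tuned checkpoints often respond better when the prompt is wrapped in the template used during fine-tuning; the exact template for this checkpoint is not stated in this card, so the Alpaca-style format used by the Open-Platypus dataset is assumed below as one plausible choice (continuing from the snippet above).

```python
# Hypothetical prompt formatting: the exact template this checkpoint expects
# is not documented in this card. The Alpaca-style template used by the
# Open-Platypus dataset is assumed here for illustration.
prompt = "Write me a poem about Machine Learning."
formatted_prompt = f"### Instruction:\n{prompt}\n\n### Response:\n"
outputs = model(formatted_prompt, max_new_tokens=100)
print(outputs.generations[0].text)
```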
## Evaluation Benchmark Results
Model evaluation metrics and results.
| Benchmark | Metric | Llama-2-7b-instruct | Llama-2-7b-pruned50-retrained-instruct-quant-ds |
|------------------------------------------------|---------------|-------------|-------------------------------|
| [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | xxxx | xxxx |
| [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot | xxxx | xxxx |
| [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | xxxx | xxxx |
| [ARC-c](https://arxiv.org/abs/1803.05457)      |               | xxxx        | xxxx                          |
| [TruthfulQA](https://arxiv.org/abs/2109.07958) | 5-shot | xxxx | xxxx |
| [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | xxxx | xxxx |
| [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | xxxx | xxxx |
## Model Training Details
Coming soon.
## Help
For further support, and discussions on these models and AI in general, join [Neural Magic's Slack Community](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ)