alexmarques committed on
Commit
692b030
1 Parent(s): 0944831

Update README.md

Files changed (1)
1. README.md +9 -11
README.md CHANGED
@@ -6,7 +6,7 @@ language:
 pipeline_tag: text-generation
 ---
 
-# SparseLlama-2-7b-evol-codealpaca-v1-pruned_50.2of4
+# SparseLlama-2-7b-evolcodealpaca-pruned_50.2of4
 
 ## Model Overview
 - **Model Architecture:** Llama-2
@@ -19,12 +19,11 @@ pipeline_tag: text-generation
 - **Model Developers:** Neural Magic
 
 Compressed version of [Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b-hf) specialized for code-generation.
-This model was obtained by fine-tuning the Sparse Foundational model [Sparse-Llama-2-7b-pruned_50.2of4](https://huggingface.co/nm-testing/SparseLlama-2-7b-pruned_50.2of4) on the [ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) dataset.
-It achieves a win rate of 62.1% on the [AlpacaEval](https://github.com/tatsu-lab/alpaca_eval) benchmark (version 1.0) when using [Llama-2-70b-chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) as evaluator, whereas the dense [Llama-2-7b-ultrachat200k](https://huggingface.co/neuralmagic/Llama-2-7b-ultrachat200k) model achieves 57.6% win rate.
+This model was obtained by fine-tuning the Sparse Foundational model [SparseLlama-2-7b-pruned_50.2of4](https://huggingface.co/nm-testing/SparseLlama-2-7b-pruned_50.2of4) on the [evol-codealpaca-v1](https://huggingface.co/datasets/theblackcat102/evol-codealpaca-v1) dataset.
+[SquareHead](https://arxiv.org/abs/2310.06927) knowledge distillation is used with [Llama-2-7b-evolcodealpaca](https://huggingface.co/neuralmagic/Llama-2-7b-evolcodealpaca) as teacher.
+It achieves [HumanEval](https://arxiv.org/abs/2107.03374) pass@1 of 34.58%, whereas the dense [Llama-2-7b-evolcodealpaca](https://huggingface.co/neuralmagic/Llama-2-7b-evolcodealpaca) model achieves 32.03%.
 
-This model was produced as part if Neural Magic's Sparse Foundational Models initiative, and demostrates the capability of Sparse Foundational Models to transfer to the text-generation domain.
-
-**Note:** This model uses the chat template from [zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta).
+This model was produced as part of Neural Magic's Sparse Foundational Models initiative, and demonstrates the capability of Sparse Foundational Models to transfer to the code-generation domain.
 
 ## Model Optimizations
 
@@ -33,12 +32,11 @@ This optimization reduces the number of parameters by 50%, reducing the disk siz
 
 ## Evaluation
 
-This model was evaluated in the [AlpacaEval](https://github.com/tatsu-lab/alpaca_eval) benchmark using [Llama-2-70b-chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) as evaluator.
+This model was evaluated on the [HumanEval](https://arxiv.org/abs/2107.03374) benchmark using the [bigcode-evaluation-harness](https://github.com/bigcode-project/bigcode-evaluation-harness).
 
 ## Accuracy
 
-| Model | Win rate | Recovery |
+| Model | HumanEval pass@1 | Recovery |
 | :----- | :--------: | :--------: |
-| [Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b-hf) | 3.7% | -- |
-| [Llama-2-7b-ultrachat200k](https://huggingface.co/neuralmagic/Llama-2-7b-ultrachat200k) | 57.6% | -- |
-| SparseLlama-2-7b-ultrachat_200k-pruned_50.2of4 | 62.1% | 108% |
+| [Llama-2-7b-evolcodealpaca](https://huggingface.co/neuralmagic/Llama-2-7b-evolcodealpaca) | 32.03% | -- |
+| SparseLlama-2-7b-evolcodealpaca-pruned_50.2of4 | 34.58% | 108% |
 
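For context on how the card described in this diff would be used, here is a minimal generation sketch, assuming the checkpoint is published as a standard `transformers` causal-LM repository. The repo id, prompt, and layer path below are illustrative assumptions, not part of the commit.

```python
# Hypothetical usage sketch: the repo id is an assumption based on the card's
# title; adjust it to the actual Hugging Face path.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "neuralmagic/SparseLlama-2-7b-evolcodealpaca-pruned_50.2of4"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Plain completion-style prompt; the updated card does not specify a chat
# template for this code model.
prompt = "Write a Python function that returns the n-th Fibonacci number.\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

# Optional spot-check of the 2:4 sparsity pattern on one weight matrix:
# every contiguous group of 4 values should contain at least 2 zeros.
w = model.model.layers[0].mlp.gate_proj.weight.detach().float()
groups = w.reshape(-1, 4)
zeros_per_group = (groups == 0).sum(dim=1)
print("fraction of 4-element groups with >= 2 zeros:",
      (zeros_per_group >= 2).float().mean().item())
```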
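The pass@1 figures in the new table follow the standard unbiased HumanEval estimator from Chen et al. (2021), and the 108% recovery is the sparse score relative to the dense baseline (34.58 / 32.03 ≈ 1.08). The sketch below is an illustrative re-derivation, not the bigcode-evaluation-harness implementation.

```python
# Illustrative sketch of the unbiased pass@k estimator (Chen et al., 2021).
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """n = samples generated per problem, c = samples that pass the unit tests."""
    if n - c < k:
        return 1.0
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)

# pass@1 reduces to the fraction of passing samples per problem:
print(pass_at_k(n=10, c=3, k=1))  # 0.3

# Recovery as reported in the table: sparse pass@1 relative to the dense baseline.
print(f"{34.58 / 32.03:.0%}")  # ~108%
```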