sguskin zmadscientist committed on
Commit 0b4125a
1 Parent(s): 2108e6f

Update README (#4)

- Update README (87b789701af60d6be04f799cb1912eae4ad35480)

Co-authored-by: bob chesebrough <[email protected]>

Files changed (1): README.md (+82 -6)

# Model Details: QuaLA-MiniLM

The article discusses the challenge of making transformer-based models efficient enough for practical use, given their size and computational requirements. The authors propose a new approach called **QuaLA-MiniLM**, which combines knowledge distillation, the length-adaptive transformer (LAT) technique, and low-bit quantization, extending the Dynamic-TinyBERT approach. This approach trains a single model that can adapt to any inference scenario with a given computational budget, achieving a superior accuracy-efficiency trade-off on the SQuAD1.1 dataset. The authors compare their approach to other efficient methods and find that it achieves up to an **8.8x speedup with less than 1% accuracy loss**. They also provide their code publicly on GitHub. The article also discusses other related work in the field, including dynamic transformers and other knowledge distillation approaches.
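
The low-bit quantization ingredient can be illustrated with a framework-free toy sketch of symmetric 8-bit weight quantization. This shows the general technique only, not the model's actual INT8 pipeline:

```python
# Toy illustration of symmetric 8-bit (int8) weight quantization -- the general
# idea behind the low-bit quantization step, not the model's actual pipeline.
def quantize_int8(weights):
    """Map float weights to int8 codes in [-127, 127] via a single scale."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127 if max_abs else 1.0
    codes = [round(w / scale) for w in weights]    # stored as 8-bit integers
    dequantized = [c * scale for c in codes]       # approximate reconstruction
    return codes, dequantized, scale

codes, approx, scale = quantize_int8([0.5, -1.0, 0.25, 0.0])
print(codes)   # small integer codes replace 32-bit floats, shrinking storage ~4x
```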
 
This model card was written by Intel.
 
### QuaLA-MiniLM training process
The figure below shows the QuaLA-MiniLM training process. To run the model with the best accuracy-efficiency tradeoff for a specific computational budget, the length configuration is set to the best setting found by an evolutionary search that matches the computational constraint.
![ArchitecureQuaLA-MiniLM.jpg](ArchitecureQuaLA-MiniLM.jpg)
 
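The length-configuration search described above can be sketched as a toy evolutionary loop: mutate candidate per-layer token counts, keep candidates within a compute budget, and track the best scorer. The `cost` and `fitness` functions below are simplified stand-ins (total tokens as a FLOPs proxy, a concave score that favors spreading tokens across layers), not the paper's actual objective:

```python
import random

LAYERS, MAX_LEN = 6, 384

def cost(cfg):
    """Proxy for compute cost: total tokens processed across all layers."""
    return sum(cfg)

def fitness(cfg):
    """Arbitrary stand-in score; concave, so it favors balanced configurations."""
    return sum(n ** 0.5 for n in cfg)

def search(budget, generations=50, pop_size=20, seed=0):
    rng = random.Random(seed)

    def mutate(cfg):
        new = [max(16, min(MAX_LEN, n + rng.randint(-32, 32))) for n in cfg]
        return sorted(new, reverse=True)  # tokens are only dropped, layer by layer

    # Start from a configuration that already fits the budget.
    start = [min(MAX_LEN, budget // LAYERS)] * LAYERS
    pop = [start] * pop_size
    best = start
    for _ in range(generations):
        pop = [mutate(rng.choice(pop)) for _ in range(pop_size)]
        feasible = [c for c in pop if cost(c) <= budget]
        if feasible:
            cand = max(feasible, key=fitness)
            if fitness(cand) > fitness(best):
                best = cand
    return best

best = search(budget=1200)
print(best, cost(best))
```

A real search would score candidates by measured F1 and FLOPs on a validation set, but the select-mutate-filter loop has the same shape.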
### Model license
Licensed under the MIT license.
 
| Model Detail | Description |
| --- | --- |
| Language | English (en) |
| Model Authors Company | Intel |
| Date | May 4, 2023 |
| Version | 1 |
| Type | NLP - Tiny language model |
| Architecture | "In this work we expand Dynamic-TinyBERT to generate a much more highly efficient model. First, we use a much smaller MiniLM model which was distilled from a RoBERTa-Large teacher rather than BERT-base. Second, we apply the LAT method to make the model length-adaptive, and finally we further enhance the model’s efficiency by applying 8-bit quantization. The resultant QuaLA-MiniLM (Quantized Length-Adaptive MiniLM) model outperforms BERT-base with only 30% of parameters, and demonstrates an accuracy-speedup tradeoff that is superior to any other efficiency approach (up to x8.8 speedup with <1% accuracy loss) on the challenging SQuAD1.1 benchmark. Following the concept presented by LAT, it provides a wide range of accuracy-efficiency tradeoff points while alleviating the need to retrain it for each point along the accuracy-efficiency curve." |
| Paper or Other Resources | https://arxiv.org/pdf/2210.17114.pdf |
| License | TBD |
| Questions or Comments | [Community Tab](https://huggingface.co/Intel/bert-base-uncased-sparse-90-unstructured-pruneofa/discussions) and [Intel Developers Discord](https://discord.gg/rv2Gp55UJQ) |
 
| Intended Use | Description |
| --- | --- |
| Primary intended uses | TBD |
| Primary intended users | Anyone who needs an efficient tiny language model for other downstream tasks. |
| Out-of-scope uses | The model should not be used to intentionally create hostile or alienating environments for people. |
 
### How to use

Code examples coming soon! In the meantime, the sketch below shows how a question-answering checkpoint is typically loaded with Hugging Face Transformers. The model id is a placeholder assumption, not a checkpoint name confirmed by this card:

```python
# Placeholder sketch -- the model id below is an assumption, not the published checkpoint.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Intel/quala-minilm",  # hypothetical id; replace with the real checkpoint
)
result = qa(
    question="What techniques does QuaLA-MiniLM combine?",
    context="QuaLA-MiniLM combines knowledge distillation, a length-adaptive "
            "transformer (LAT), and low-bit quantization.",
)
print(result["answer"])
```

For more code examples, refer to the GitHub Repo.
### Metrics (Model Performance)

Inference performance on the SQuAD1.1 evaluation dataset. For all the length-adaptive (LA) models, we show the performance both of running the model without token dropping, and of running it in a token-dropping configuration using the optimal length configuration found to meet our accuracy constraint.

| Model | Model size (MB) | Tokens per layer | Accuracy (F1) | Latency (ms) | FLOPs | Speedup |
| --- | --- | --- | --- | --- | --- | --- |
| BERT-base | 415.4723 | (384, 384, 384, 384, 384, 384) | 88.5831 | 56.5679 | 3.53E+10 | 1x |
| TinyBERT-ours | 253.2077 | (384, 384, 384, 384, 384, 384) | 88.3959 | 32.4038 | 1.77E+10 | 1.74x |
| QuaTinyBERT-ours | 132.0665 | (384, 384, 384, 384, 384, 384) | 87.6755 | 15.5850 | 1.77E+10 | 3.63x |
| MiniLMv2-ours | 115.0473 | (384, 384, 384, 384, 384, 384) | 88.7016 | 18.2312 | 4.76E+09 | 3.10x |
| QuaMiniLMv2-ours | 84.8602 | (384, 384, 384, 384, 384, 384) | 88.5463 | 9.1466 | 4.76E+09 | 6.18x |
| LA-MiniLM | 115.0473 | (384, 384, 384, 384, 384, 384) | 89.2811 | 16.9900 | 4.76E+09 | 3.33x |
| LA-MiniLM | 115.0473 | (269, 253, 252, 202, 104, 34) | 87.7637 | 11.4428 | 2.49E+09 | 4.94x |
| QuaLA-MiniLM | 84.8596 | (384, 384, 384, 384, 384, 384) | 88.8593 | 7.4443 | 4.76E+09 | 7.6x |
| QuaLA-MiniLM | 84.8596 | (315, 251, 242, 159, 142, 33) | 87.6828 | 6.4146 | 2.547E+09 | 8.8x |

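
The speedup column follows directly from the latency column: each entry is the BERT-base latency divided by the model's latency. A quick check against a few rows of the table:

```python
# Reproduce the Speedup column: speedup = BERT-base latency / model latency.
bert_base_ms = 56.5679   # BERT-base latency (ms) from the table above

for name, ms in [
    ("LA-MiniLM, token dropping", 11.4428),
    ("QuaLA-MiniLM, full length", 7.4443),
    ("QuaLA-MiniLM, token dropping", 6.4146),
]:
    print(f"{name}: {bert_base_ms / ms:.1f}x")
```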
### Training and Evaluation Data

| Training and Evaluation Data | Description |
| --- | --- |
| Datasets | SQuAD1.1 dataset |
| Motivation | To build an efficient and accurate base model for several downstream language tasks. |

### Ethical Considerations

| Ethical Considerations | Description |
| --- | --- |
| Data | SQuAD1.1 dataset |
| Human life | The model is not intended to inform decisions central to human life or flourishing. It is an aggregated set of labelled Wikipedia articles. |
| Mitigations | No additional risk mitigation strategies were considered during model development. |
| Risks and harms | Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al., 2021, and Bender et al., 2021). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. Beyond this, the extent of the risks involved in using the model remains unknown. |

### Caveats and Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. There are no additional caveats or recommendations for this model.

### BibTeX entry and citation info

| Comments | Description |
| --- | --- |
| Comments | In this version we added reference to the source code in the abstract. arXiv admin note: text overlap with arXiv:2111.09645 |
| Subjects | Computation and Language (cs.CL) |
| Cite as | arXiv:2210.17114 [cs.CL] (or arXiv:2210.17114v2 [cs.CL] for this version) https://doi.org/10.48550/arXiv.2210.17114 |