Update README.md
README.md

---
license: cc-by-nc-4.0
language:
- en
- de
- fr
- zh
- pt
- nl
- ru
- ko
- it
- es
metrics:
- comet
pipeline_tag: translation
---

# Model Card for TowerInstruct-7B-v0.1

## Model Details

### Model Description

TowerInstruct-7B is a language model that results from fine-tuning TowerBase on the TowerBlocks supervised fine-tuning dataset. TowerInstruct-7B-v0.1 is the first model in the series.
The model is trained to handle several translation-related tasks, such as general machine translation (e.g., sentence- and paragraph/document-level translation, terminology-aware translation, context-aware translation), automatic post-editing, named-entity recognition, grammatical error correction, and paraphrase generation.
We will release more details in the upcoming technical report.

- **Developed by:** Unbabel, Instituto Superior Técnico, CentraleSupélec University of Paris-Saclay
- **Model type:** A 7B parameter model fine-tuned on a mix of publicly available, synthetic datasets on translation-related tasks, as well as conversational datasets and code instructions.
- **Language(s) (NLP):** English, Portuguese, Spanish, French, German, Dutch, Italian, Korean, Chinese, Russian
- **License:** CC-BY-NC-4.0; Llama 2 is licensed under the [LLAMA 2 Community License](https://ai.meta.com/llama/license/), Copyright © Meta Platforms, Inc. All Rights Reserved.
- **Finetuned from model:** [TowerBase](https://huggingface.co/Unbabel/TowerBase-7B-v0.1)

**Update**: TowerInstruct-7B-v0.2 has more reliable document-level translation capabilities than TowerInstruct-7B-v0.1. The new version of TowerBlocks used to train v0.2 is also available in the Tower collection.

## Intended uses & limitations

The model was initially fine-tuned on a filtered and preprocessed supervised fine-tuning dataset ([TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1)), which contains a diverse range of data sources:
- Translation (sentence- and paragraph-level)
- Automatic Post-Editing
- Machine Translation Evaluation
- Context-aware Translation
- Terminology-aware Translation
- Multi-reference Translation
- Named-entity Recognition
- Paraphrase Generation
- Synthetic Chat data
- Code instructions

You can find the dataset and all of its data sources at [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1).

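If you want to inspect TowerBlocks directly, here is a minimal sketch using 🤗 Datasets; the `train` split name and record layout are assumptions, so check the dataset card:

```python
# Minimal sketch for browsing TowerBlocks (not part of the original card).
# The split name and record layout are assumptions - see the dataset card.
from datasets import load_dataset

blocks = load_dataset("Unbabel/TowerBlocks-v0.1", split="train")
print(blocks)       # row count and column names
print(blocks[0])    # one raw record, to see how each task is stored
```
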
Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:

```python
# Install transformers from source - only needed for versions <= v4.34
# pip install git+https://github.com/huggingface/transformers.git
# pip install accelerate

import torch
from transformers import pipeline

pipe = pipeline("text-generation", model="Unbabel/TowerInstruct-v0.1", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
    {"role": "user", "content": "Translate the following text from Portuguese into English.\nPortuguese: Um grupo de investigadores lançou um novo modelo para tarefas relacionadas com tradução.\nEnglish:"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=False)
print(outputs[0]["generated_text"])
# <|im_start|>user
# Translate the following text from Portuguese into English.
# Portuguese: Um grupo de investigadores lançou um novo modelo para tarefas relacionadas com tradução.
# English:<|im_end|>
# <|im_start|>assistant
# A group of researchers has launched a new model for translation-related tasks.
```

### Out-of-Scope Use

The model is not guaranteed to perform well for languages other than the 10 languages it supports. Even though we trained the model on conversational data and code instructions, it is not intended to be used as a conversational chatbot or code assistant.
We are currently working on improving quality and consistency on document-level translation. This model is not intended to be used as a document-level translator.

## Bias, Risks, and Limitations

TowerInstruct-v0.1 has not been aligned to human preferences, so the model may generate problematic outputs (e.g., hallucinations, harmful content, or false statements).

## Prompt Format

TowerInstruct-v0.1 was trained using the ChatML prompt templates without any system prompts. An example follows below:
```
<|im_start|>user
{USER PROMPT}<|im_end|>
<|im_start|>assistant
{MODEL RESPONSE}<|im_end|>
<|im_start|>user
[...]
```

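The same format is produced by the tokenizer's chat template, so you rarely need to build it by hand. A quick, tokenizer-only sanity check (a sketch, using the same repo id as the pipeline example above):

```python
# Sketch only: confirm the tokenizer's ChatML template matches the format above.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Unbabel/TowerInstruct-v0.1")
messages = [
    {"role": "user", "content": "Translate the following text from Portuguese into English.\nPortuguese: Olá, mundo.\nEnglish:"},
]
print(tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
# Expected to print the user turn wrapped in <|im_start|>/<|im_end|> tags,
# followed by an opening <|im_start|>assistant tag.
```
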
### Supervised tasks

The prompts for all supervised tasks can be found in [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1). We have used multiple prompt templates for each task. While different prompts may offer different outputs, the difference in downstream performance should be very minimal.

## Training Details

### Training Data

Link to [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1).

#### Training Hyperparameters

The following hyperparameters were used during training:

- total_train_batch_size: 256
- learning_rate: 7e-06
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- weight_decay: 0.01
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- num_epochs: 4
- max_seq_length: 2048

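For orientation only, these settings map roughly onto a 🤗 `TrainingArguments` configuration as sketched below; the actual run was built with Axolotl (see the badge at the end of this card), and the per-device batch size / gradient-accumulation split is an assumption.

```python
# Illustrative mapping of the listed hyperparameters; not the actual training setup.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="towerinstruct-7b-sft",  # hypothetical output directory
    per_device_train_batch_size=8,      # assumed split: 8 per device x 4 accumulation x 8 GPUs = 256
    gradient_accumulation_steps=4,
    learning_rate=7e-6,
    lr_scheduler_type="cosine",
    warmup_steps=500,
    weight_decay=0.01,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    num_train_epochs=4,
    bf16=True,                          # assumed, consistent with the bfloat16 inference example above
)
# max_seq_length (2048) is applied when tokenizing/packing the data, not via TrainingArguments.
```
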
## Citation

To be completed.

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)