phpaiola committed
Commit
91d7a6b
1 Parent(s): 0079bf9

Update README.md

Files changed (1): README.md (+54, -194)
README.md CHANGED
@@ -1,6 +1,5 @@
---
library_name: transformers
- tags: []
model-index:
- name: internlm-chatbode-20b
  results:
@@ -18,7 +17,8 @@ model-index:
      value: 65.78
      name: accuracy
    source:
-      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/internlm-chatbode-20b
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
@@ -34,7 +34,8 @@ model-index:
      value: 58.69
      name: accuracy
    source:
-      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/internlm-chatbode-20b
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
@@ -50,7 +51,8 @@ model-index:
      value: 43.33
      name: accuracy
    source:
-      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/internlm-chatbode-20b
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
@@ -66,7 +68,8 @@ model-index:
      value: 91.53
      name: f1-macro
    source:
-      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/internlm-chatbode-20b
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
@@ -82,7 +85,8 @@ model-index:
      value: 78.95
      name: pearson
    source:
-      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/internlm-chatbode-20b
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
@@ -98,7 +102,8 @@ model-index:
      value: 81.36
      name: f1-macro
    source:
-      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/internlm-chatbode-20b
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
@@ -114,7 +119,8 @@ model-index:
      value: 81.72
      name: f1-macro
    source:
-      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/internlm-chatbode-20b
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
@@ -130,7 +136,8 @@ model-index:
      value: 73.66
      name: f1-macro
    source:
-      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/internlm-chatbode-20b
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
@@ -146,204 +153,58 @@ model-index:
      value: 70.11
      name: f1-macro
    source:
-      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/internlm-chatbode-20b
      name: Open Portuguese LLM Leaderboard
---

- # Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

- ## Model Details
-
- ### Model Description
-
- <!-- Provide a longer summary of what this model is. -->
-
- This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
-
- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]
-
- ### Model Sources [optional]
-
- <!-- Provide the basic links for the model. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]
-
- ## Uses
-
- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
- ### Direct Use
-
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
- [More Information Needed]
-
- ### Downstream Use [optional]
-
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
- [More Information Needed]
-
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- [More Information Needed]
-
- ## Bias, Risks, and Limitations
-
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
-
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
-
- ## How to Get Started with the Model
-
- Use the code below to get started with the model.
-
- [More Information Needed]
-
- ## Training Details
-
- ### Training Data
-
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
- [More Information Needed]
-
- ### Training Procedure
-
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
- #### Preprocessing [optional]
-
- [More Information Needed]
-
- #### Training Hyperparameters
-
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed]
-
- ## Evaluation
-
- <!-- This section describes the evaluation protocols and provides the results. -->
-
- ### Testing Data, Factors & Metrics
-
- #### Testing Data
-
- <!-- This should link to a Dataset Card if possible. -->
-
- [More Information Needed]
-
- #### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]
-
- #### Metrics
-
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]
-
- ### Results
-
- [More Information Needed]
-
- #### Summary
-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
-
- ## Technical Specifications [optional]
-
- ### Model Architecture and Objective
-
- [More Information Needed]
-
- ### Compute Infrastructure
-
- [More Information Needed]
-
- #### Hardware
-
- [More Information Needed]
-
- #### Software
-
- [More Information Needed]
-
- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Model Card Authors [optional]
-
- [More Information Needed]
-
- ## Model Card Contact
-
- [More Information Needed]


# Open Portuguese LLM Leaderboard Evaluation Results
@@ -361,5 +222,4 @@ Detailed results can be found [here](https://huggingface.co/datasets/eduagarcia-
|FaQuAD NLI | 81.36|
|HateBR Binary | 81.72|
|PT Hate Speech Binary | 73.66|
- |tweetSentBR | 70.11|
-
 
---
library_name: transformers
model-index:
- name: internlm-chatbode-20b
  results:
 
      value: 65.78
      name: accuracy
    source:
+      url: >-
+        https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/internlm-chatbode-20b
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
 
      value: 58.69
      name: accuracy
    source:
+      url: >-
+        https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/internlm-chatbode-20b
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
 
      value: 43.33
      name: accuracy
    source:
+      url: >-
+        https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/internlm-chatbode-20b
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
 
      value: 91.53
      name: f1-macro
    source:
+      url: >-
+        https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/internlm-chatbode-20b
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
 
      value: 78.95
      name: pearson
    source:
+      url: >-
+        https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/internlm-chatbode-20b
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
 
      value: 81.36
      name: f1-macro
    source:
+      url: >-
+        https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/internlm-chatbode-20b
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
 
      value: 81.72
      name: f1-macro
    source:
+      url: >-
+        https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/internlm-chatbode-20b
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
 
      value: 73.66
      name: f1-macro
    source:
+      url: >-
+        https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/internlm-chatbode-20b
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
 
      value: 70.11
      name: f1-macro
    source:
+      url: >-
+        https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/internlm-chatbode-20b
      name: Open Portuguese LLM Leaderboard
+ language:
+ - pt
+ pipeline_tag: text-generation
---

+ # internlm-chatbode-20b

<!-- Provide a quick summary of what the model is/does. -->

+ InternLM-ChatBode is a language model tuned for the Portuguese language, developed from the [InternLM2](https://huggingface.co/internlm/internlm2-chat-20b) model and refined by fine-tuning on the UltraAlpaca dataset.

+ ## Key Features

+ - **Base model:** [internlm/internlm2-chat-20b](https://huggingface.co/internlm/internlm2-chat-20b)
+ - **Fine-tuning dataset:** UltraAlpaca
+ - **Training:** QLoRA fine-tuning of internlm2-chat-20b (a hedged configuration sketch follows after this list).
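
The card states that QLoRA was used but does not publish the adapter configuration. As a minimal illustrative sketch only — the 4-bit settings, rank, alpha, dropout, and target modules below are assumptions, not the values used for this model — a QLoRA setup with 🤗 `peft` and `bitsandbytes` might look like:

```python
# Hypothetical QLoRA setup -- illustrative only; the actual ChatBode
# hyperparameters are not published on the model card.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base = "internlm/internlm2-chat-20b"

# 4-bit NF4 quantization of the frozen base model: the "Q" in QLoRA.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    base, quantization_config=bnb_config, trust_remote_code=True, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Low-rank adapters on the attention/MLP projections (assumed target names
# for InternLM2's custom modules).
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["wqkv", "wo", "w1", "w2", "w3"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```
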
+ ## Usage example

+ The following code shows how to load the model and chat with it:

+ ```python
+ import torch
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ # trust_remote_code=True is required: chat() is provided by the model's custom code.
+ tokenizer = AutoTokenizer.from_pretrained("recogna-nlp/internlm-chatbode-20b", trust_remote_code=True)
+ model = AutoModelForCausalLM.from_pretrained("recogna-nlp/internlm-chatbode-20b", torch_dtype=torch.float16, trust_remote_code=True).cuda()
+ model = model.eval()
+
+ response, history = model.chat(tokenizer, "Olá", history=[])
+ print(response)
+ response, history = model.chat(tokenizer, "O que é o Teorema de Pitágoras? Me dê um exemplo", history=history)
+ print(response)
+ ```

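
In fp16 a 20B-parameter model needs roughly 40 GB of GPU memory. If that is not available, 4-bit loading via `bitsandbytes` may work — an untested assumption for this model's custom code path, not something the card documents:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Hypothetical low-memory variant -- not from the model card.
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
model = AutoModelForCausalLM.from_pretrained(
    "recogna-nlp/internlm-chatbode-20b",
    quantization_config=bnb_config,
    trust_remote_code=True,
    device_map="auto",  # let accelerate place the quantized weights
)
tokenizer = AutoTokenizer.from_pretrained("recogna-nlp/internlm-chatbode-20b", trust_remote_code=True)
```
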
+ Responses can also be generated as a stream using the `stream_chat` method:

+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_path = "recogna-nlp/internlm-chatbode-20b"
+ model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16, trust_remote_code=True).cuda()
+ tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
+
+ model = model.eval()
+ length = 0
+ for response, history in model.stream_chat(tokenizer, "Olá", history=[]):
+     # Each iteration yields the full response so far; print only the new suffix.
+     print(response[length:], flush=True, end="")
+     length = len(response)
+ ```


# Open Portuguese LLM Leaderboard Evaluation Results

|FaQuAD NLI | 81.36|
|HateBR Binary | 81.72|
|PT Hate Speech Binary | 73.66|
+ |tweetSentBR | 70.11|