HiTZ /
Token Classification · Transformers · Safetensors · bert · Inference Endpoints
ragerri committed on
Commit 3a03f4e
1 Parent(s): 7b3d40f

Update README.md

Files changed (1)
  1. README.md +18 -26
README.md CHANGED
@@ -1,11 +1,6 @@
 ---
 license: apache-2.0
 base_model: bert-base-multilingual-cased
-tags:
-- generated_from_trainer
-model-index:
-- name: multi_rebuttal_neoplasm_mbert
-  results: []
 datasets:
 - HiTZ/multilingual-abstrct
 language:
@@ -16,32 +11,33 @@ language:
 metrics:
 - f1
 pipeline_tag: token-classification
+library_name: transformers
+widget:
+- text: In the comparison of responders versus patients with both SD (6m) and PD, responders indicated better physical well-being (P=.004) and mood (P=.02) at month 3.
+- text: En la comparación de los que respondieron frente a los pacientes tanto con SD (6m) como con EP, los que respondieron indicaron un mejor bienestar físico (P=.004) y estado de ánimo (P=.02) en el mes 3.
+- text: Dans la comparaison entre les répondeurs et les patients atteints de SD (6m) et de PD, les répondeurs ont indiqué un meilleur bien-être physique (P=.004) et une meilleure humeur (P=.02) au mois 3.
+- text: Nel confronto tra i responder e i pazienti con SD (6m) e PD, i responder hanno indicato un migliore benessere fisico (P=.004) e umore (P=.02) al terzo mese.
 ---

-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
+<p align="center">
+<br>
+<img src="http://www.ixa.eus/sites/default/files/anitdote.png" style="width: 45%;">
+<h2 align="center">mBERT for multilingual Argument Detection in the Medical Domain</h2>
+<br>

-# multi_rebuttal_neoplasm_mbert
+# Model Card: mBERT fine-tuned for multilingual (EN, ES, FR, IT) Argument Component Detection

-This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) for the argument mining task on AbstRCT data in English, Spanish, French and Italian.
+This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) for the argument component
+detection task on AbstRCT data in English, Spanish, French and Italian (https://huggingface.co/datasets/HiTZ/multilingual-abstrct).

-## Model description
+## Performance

-More information needed
+<img src="https://github.com/hitz-zentroa/multilingual-abstrct/blob/65ca5c7452d83bb8d9d534aa401110e570a7ef83/resources/multilingual-abstrct-results.png" style="width: 50%;">

-## Intended uses & limitations
+## Training hyperparameters

-More information needed
-
-## Training and evaluation data
-
-More information needed
-
-## Training procedure
-
-### Training hyperparameters

 The following hyperparameters were used during training:
 - learning_rate: 5e-05
@@ -52,11 +48,7 @@ The following hyperparameters were used during training:
 - lr_scheduler_type: linear
 - num_epochs: 3.0

-### Training results
-
-
-
-### Framework versions
+## Framework versions

 - Transformers 4.40.0.dev0
 - Pytorch 2.1.2+cu121
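
Since the updated card sets `pipeline_tag: token-classification` and `library_name: transformers`, a minimal usage sketch may help. The repository id below is an assumption pieced together from the `HiTZ` org and the `multi_rebuttal_neoplasm_mbert` model-index name removed in this commit; verify it against the actual Hub repo.

```python
from transformers import pipeline

# Assumed repo id: "HiTZ" org + the model-index name removed in this
# commit. Check the actual Hub repository before relying on it.
model_id = "HiTZ/multi_rebuttal_neoplasm_mbert"

# The card declares a token-classification pipeline; aggregation_strategy
# merges wordpiece tokens back into full argument-component spans.
nlp = pipeline("token-classification", model=model_id, aggregation_strategy="simple")

# First widget example from the card (English).
text = ("In the comparison of responders versus patients with both SD (6m) and PD, "
        "responders indicated better physical well-being (P=.004) and mood (P=.02) at month 3.")

for span in nlp(text):
    print(span["entity_group"], f"{span['score']:.3f}", span["word"])
```

The other three widget sentences (Spanish, French, Italian) can be passed to the same pipeline, since the base model and the AbstRCT training data are multilingual.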
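
For reference, a minimal sketch of how the hyperparameters visible in this diff map onto `transformers.TrainingArguments`. Values not shown in the hunk (batch size, optimizer, seed) are left at library defaults, and `output_dir` is illustrative; this is not the authors' training script.

```python
from transformers import TrainingArguments

# Only the three hyperparameters listed in the card are set explicitly;
# everything else stays at the transformers defaults.
training_args = TrainingArguments(
    output_dir="multi_rebuttal_neoplasm_mbert",  # illustrative name
    learning_rate=5e-5,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```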