Text Classification
Transformers
PyTorch
Spanish
bert
Stremie committed on
Commit
1d84479
1 Parent(s): b62a15f

Update README.md

Files changed (1)
  1. README.md +6 -85
README.md CHANGED
@@ -10,7 +10,7 @@ license: mit
 
 # Model Card for "DiagTrast-Berto"
 
- This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) that is a BERT model trained on a big Spanish corpus.
+ This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased), which is a BERT model trained on a large Spanish corpus.
 
 DiagTrast-Berto was trained on the [hackathon-somos-nlp-2023/DiagTrast](https://huggingface.co/datasets/hackathon-somos-nlp-2023/DiagTrast) dataset to classify statements into each of the 5 selected DSM-5 mental disorders. While this task is classically approached with neural-network-based models, the goal of using a transformer is that, instead of basing the classification criteria on keyword search, the model is expected to understand natural language.
 
@@ -30,11 +30,7 @@ This model should not be used as a replacement for a mental health professional
 
 The main limitation of the model is that it is restricted to the identification of only 5 of the DSM-5 disorders.
 
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- [More Information Needed]
+ Also, the model will always assign a statement to one of the disorders, since the dataset contains no 'non-disorder' label.
 
 ## How to Get Started with the Model
 
@@ -66,61 +62,16 @@ We use the [hackathon-somos-nlp-2023/DiagTrast](https://huggingface.co/datasets/
 
 ### Training Procedure
 
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
- [More Information Needed]
-
- #### Preprocessing [optional]
-
- [More Information Needed]
+ We use Hugging Face's Transformers library to load the [BERTO](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) checkpoint and fine-tune the model.
 
 
 #### Training Hyperparameters
 
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed]
+ We use the default hyperparameters.
 
 ## Evaluation
 
- <!-- This section describes the evaluation protocols and provides the results. -->
-
- ### Testing Data, Factors & Metrics
-
- #### Testing Data
-
- <!-- This should link to a Data Card if possible. -->
-
- [More Information Needed]
-
- #### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]
-
- #### Metrics
-
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]
-
- ### Results
-
- [More Information Needed]
-
- #### Summary
-
-
-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
+ The evaluation dataset consists of 134 arbitrarily selected examples, so the labels may not be equally represented. We use accuracy as our metric, achieving 97% accuracy after 3 epochs.
 
 ## Environmental Impact
 
@@ -133,37 +84,7 @@ Carbon emissions can be estimated using the [Machine Learning Impact calculator]
 - **Cloud Provider:** Google
 - **Compute Region:** Spain
 - **Carbon Emitted:** 0.005 kg CO2
-
- ## Technical Specifications [optional]
-
- ### Model Architecture and Objective
-
- [More Information Needed]
-
- ### Compute Infrastructure
-
- [More Information Needed]
-
- #### Hardware
-
- [More Information Needed]
-
- #### Software
-
- [More Information Needed]
-
- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
+
 ## Team members
 
 - [Alberto Martín Garrido](https://huggingface.co/Stremie)
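
For the "How to Get Started with the Model" section, a minimal inference sketch with the Transformers `pipeline` API; the Hub id `hackathon-somos-nlp-2023/DiagTrast-Berto` and the example sentence are assumptions, not taken from the card, so substitute the model's actual repo id.

```python
# Minimal inference sketch for DiagTrast-Berto.
# NOTE: the model id below is an assumption, not stated in the card.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="hackathon-somos-nlp-2023/DiagTrast-Berto",  # hypothetical repo id
)

# Input should be Spanish, like the DiagTrast statements.
print(classifier("Siento que todos me observan y hablan de mí a mis espaldas."))
```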
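
The new "Training Procedure" line says the BERTO checkpoint is loaded with Transformers and fine-tuned. A minimal sketch of that flow, assuming the `Trainer` API, a `train` split, and `text`/`label` column names (none of which the card confirms); `num_labels=5` follows from the 5 selected disorders.

```python
# Fine-tuning sketch, assuming Trainer and the column names noted above.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    DataCollatorWithPadding,
    Trainer,
    TrainingArguments,
)

checkpoint = "dccuchile/bert-base-spanish-wwm-cased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint,
    num_labels=5,  # the 5 selected DSM-5 disorders
)

dataset = load_dataset("hackathon-somos-nlp-2023/DiagTrast")

def tokenize(batch):
    # "text" is an assumed column name; adjust to the dataset schema.
    return tokenizer(batch["text"], truncation=True)

tokenized = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="DiagTrast-Berto"),
    train_dataset=tokenized["train"],  # assumed split name
    data_collator=DataCollatorWithPadding(tokenizer),
)
trainer.train()
```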
 
 
 
 
 
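
"We use the default ones" presumably means the `TrainingArguments` defaults. A quick way to see what those actually are; the commented values are the library's defaults at the time of writing and worth re-checking against the installed Transformers version.

```python
from transformers import TrainingArguments

# Print the Trainer defaults the card presumably relies on.
args = TrainingArguments(output_dir="DiagTrast-Berto")
print(args.learning_rate)                # 5e-05
print(args.per_device_train_batch_size)  # 8
print(args.num_train_epochs)             # 3.0, matching the 3 epochs reported
print(args.weight_decay)                 # 0.0
```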
 
 
 
 
 
 
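
For the evaluation paragraph, a sketch of how the reported accuracy is typically computed with the `evaluate` library inside the Trainer loop; this wiring is an assumption, not taken from the commit.

```python
import numpy as np
import evaluate

# Accuracy over the held-out examples: the fraction of statements whose
# predicted disorder matches the reference label.
accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred  # Trainer passes model logits and labels
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```

Passing `compute_metrics=compute_metrics` to `Trainer` and calling `trainer.evaluate()` on the 134-example split would then report the accuracy figure quoted above.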
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 