---
language: nl
license: mit
pipeline_tag: text-classification
inference: false
---

# Regression Model for Emotional Functioning Levels (ICF b152)

## Description
A fine-tuned regression model that assigns a functioning level to Dutch sentences describing emotional functions. The model is based on a pre-trained Dutch medical language model ([link to be added]()): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about emotional functions in clinical text in Dutch, use the [icf-domains](https://huggingface.co/CLTL/icf-domains) classification model.

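The description implies a two-step pipeline: first detect sentences about emotional functioning with the icf-domains model, then assign a level with this model. Below is a minimal sketch of that pipeline, not taken from this card: it assumes icf-domains loads as a Simple Transformers multi-label model, and `STM_INDEX` is a placeholder for the position of the emotional-functioning label (check the icf-domains card for the actual class and label order).
```
from simpletransformers.classification import ClassificationModel, MultiLabelClassificationModel
import numpy as np

# Assumption: icf-domains is a Simple Transformers multi-label checkpoint.
domains_model = MultiLabelClassificationModel('roberta', 'CLTL/icf-domains', use_cuda=False)
levels_model = ClassificationModel('roberta', 'CLTL/icf-levels-stm', use_cuda=False)

sentences = [
    'Naarmate het somatische beeld een herstellende trend laat zien, valt op dat patient zich depressief en suicidaal uit.',
    'Patient mobiliseert goed op de afdeling.',  # made-up sentence about mobility, not emotions
]

STM_INDEX = 8  # placeholder: index of the emotional-functioning (b152) label in the icf-domains output

# Step 1: keep only sentences flagged for the emotional-functioning domain.
domain_preds, _ = domains_model.predict(sentences)
stm_sentences = [s for s, labels in zip(sentences, domain_preds) if labels[STM_INDEX] == 1]

# Step 2: assign a continuous functioning level to the selected sentences.
if stm_sentences:
    _, raw_outputs = levels_model.predict(stm_sentences)
    levels = np.squeeze(raw_outputs)  # one continuous level per selected sentence
```
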
## Functioning levels
Level | Meaning
---|---
4 | No problem with emotional functioning: emotions are appropriate, well regulated, etc.
3 | Slight problem with emotional functioning: irritable, gloomy, etc.
2 | Moderate problem with emotional functioning: negative emotions, such as fear, anger, sadness, etc.
1 | Severe problem with emotional functioning: intense negative emotions, such as fear, anger, sadness, etc.
0 | Flat affect, apathy, unstable, inappropriate emotions.

The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model.

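If a downstream application needs a discrete level rather than a continuous score, one simple post-processing choice (an assumption layered on top of this card, not part of the model itself) is to round the prediction and clip it into the 0-4 range:
```
import numpy as np

# Hypothetical post-processing: the model only returns a continuous score.
LEVEL_MEANINGS = {
    4: 'No problem with emotional functioning',
    3: 'Slight problem with emotional functioning',
    2: 'Moderate problem with emotional functioning',
    1: 'Severe problem with emotional functioning',
    0: 'Flat affect, apathy, unstable, inappropriate emotions',
}

def to_level(raw_score: float) -> int:
    """Round an out-of-scale prediction (e.g. 4.2) to the nearest level and clip it into [0, 4]."""
    return int(np.clip(round(raw_score), 0, 4))

print(to_level(4.2), LEVEL_MEANINGS[to_level(4.2)])    # 4
print(to_level(1.60), LEVEL_MEANINGS[to_level(1.60)])  # 2
```
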
## Intended uses and limitations
- The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or from non-hospital sources (e.g. GP records).
- The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers, but the model cannot be used directly with the Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.

## How to use
To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library:
```
from simpletransformers.classification import ClassificationModel
import numpy as np

# Load the fine-tuned regression model from the Hugging Face Hub.
model = ClassificationModel(
    'roberta',
    'CLTL/icf-levels-stm',
    use_cuda=False,
)

# Dutch example, roughly: "As the somatic picture shows a recovering trend,
# it is notable that the patient expresses being depressed and suicidal."
example = 'Naarmate het somatische beeld een herstellende trend laat zien, valt op dat patient zich depressief en suicidaal uit.'
_, raw_outputs = model.predict([example])
predictions = np.squeeze(raw_outputs)
```
The prediction on the example is:
```
1.60
```
The raw outputs look like this:
```
[[1.60418844]]
```

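`model.predict` also accepts several sentences at once. A short usage sketch reusing the `model` object loaded above (the second sentence is made up for illustration):
```
sentences = [
    'Naarmate het somatische beeld een herstellende trend laat zien, valt op dat patient zich depressief en suicidaal uit.',
    'Patient is opgewekt en emotioneel stabiel.',  # made-up example: "Patient is cheerful and emotionally stable."
]
_, raw_outputs = model.predict(sentences)
levels = np.squeeze(raw_outputs)  # one continuous functioning level per sentence
```
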
## Training data
- The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
- The annotation guidelines used for the project can be found [here](https://github.com/cltl/a-proof-zonmw/tree/main/resources/annotation_guidelines).

## Training procedure
The default training parameters of Simple Transformers were used, including the following (a configuration sketch follows the list):
- Optimizer: AdamW
- Learning rate: 4e-5
- Num train epochs: 1
- Train batch size: 8

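The snippet below is not the project's training script; it is a sketch of how these defaults map onto Simple Transformers' `ClassificationArgs` for a regression fine-tune. The base-model path and the two toy training rows are placeholders (the real clinical data cannot be released):
```
import pandas as pd
from simpletransformers.classification import ClassificationArgs, ClassificationModel

# The defaults listed above, spelled out explicitly.
model_args = ClassificationArgs(
    regression=True,
    optimizer='AdamW',
    learning_rate=4e-5,
    num_train_epochs=1,
    train_batch_size=8,
)

# 'path/to/dutch-medical-roberta' is a placeholder for the pre-trained Dutch medical RoBERTa model.
model = ClassificationModel(
    'roberta',
    'path/to/dutch-medical-roberta',
    num_labels=1,  # regression uses a single output
    args=model_args,
    use_cuda=False,
)

# Toy stand-in for the annotated data: sentence text plus a float functioning level.
train_df = pd.DataFrame({
    'text': ['voorbeeldzin een', 'voorbeeldzin twee'],  # "example sentence one/two"
    'labels': [3.0, 1.0],
})
model.train_model(train_df)
```
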
## Evaluation results
The evaluation is done at the sentence level (the classification unit) and at the note level (the aggregated unit, which is meaningful for healthcare professionals); a small computation sketch follows the table.

| | Sentence-level | Note-level
|---|---|---
mean absolute error | 0.76 | 0.68
mean squared error | 1.03 | 0.87
root mean squared error | 1.01 | 0.93

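Not part of the original card: a small sketch of how such metrics could be computed, assuming note-level scores are obtained by averaging sentence-level gold and predicted levels per note (the project's actual aggregation rule may differ):
```
import numpy as np
import pandas as pd
from sklearn.metrics import mean_absolute_error, mean_squared_error

# Toy frame: gold and predicted levels per sentence, plus the note each sentence came from.
df = pd.DataFrame({
    'note_id': ['n1', 'n1', 'n2'],
    'gold': [2.0, 3.0, 4.0],
    'pred': [1.6, 3.4, 3.1],
})

# Sentence-level metrics.
sent_mae = mean_absolute_error(df['gold'], df['pred'])
sent_rmse = np.sqrt(mean_squared_error(df['gold'], df['pred']))

# Note-level metrics: one plausible aggregation is the mean level per note.
notes = df.groupby('note_id')[['gold', 'pred']].mean()
note_mae = mean_absolute_error(notes['gold'], notes['pred'])
note_rmse = np.sqrt(mean_squared_error(notes['gold'], notes['pred']))
```
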
## Authors and references
### Authors
Jenia Kim, Piek Vossen

### References
TBD