# German Medical BERT

This is a model for the German language, fine-tuned on the medical domain and based on German BERT. It has only been trained to improve on the on-target task (masked language modeling). It can later be used for a downstream task of your choice; I used it for the NTS-ICD-10 text classification task.
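
A quick way to exercise the masked-language-modeling objective is the `transformers` fill-mask pipeline. This is a minimal sketch; the Hub ID `smanjil/German-MedBERT` is an assumption, so substitute the actual repository ID of this model.

```python
from transformers import pipeline

# Assumed Hub ID -- replace with this model's actual repository ID.
fill_mask = pipeline("fill-mask", model="smanjil/German-MedBERT")

# German BERT uses the [MASK] token; a medically fine-tuned model should
# rank medical terms highly for the masked position.
for pred in fill_mask("Der Patient leidet an einer chronischen [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```
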
## Overview
**Language model:** bert-base-german-cased

- Fine-tuned with the standard parameter settings mentioned in the original BERT paper.
- The classification task, however, required training for up to 25 epochs (see the sketch below).
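
For the downstream NTS-ICD-10 task, a multilabel classification head can be put on top of the encoder. A minimal sketch, assuming the model is on the Hub under `smanjil/German-MedBERT` and using a hypothetical label-set size:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_ID = "smanjil/German-MedBERT"  # assumed Hub ID -- substitute as needed
NUM_CODES = 233                      # hypothetical number of ICD-10 codes

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_ID,
    num_labels=NUM_CODES,
    # Multilabel setup: an independent sigmoid per code, trained with BCE loss.
    problem_type="multi_label_classification",
)
```
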
## Performance (micro precision, recall, and F1 score for multilabel code classification)

|Models|P|R|F1|
|:------|:------|:------|:------|
|German BERT|86.04|75.82|80.60|
|German MedBERT-256 (fine-tuned)|87.41|77.97|82.42|
|German MedBERT-512 (fine-tuned)|87.75|78.26|82.73|
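
Micro-averaging pools true/false positives and false negatives across all codes before computing the scores, so frequent codes weigh more than rare ones. A minimal sketch with scikit-learn; the toy matrices below are illustrative, not the evaluation data:

```python
import numpy as np
from sklearn.metrics import precision_recall_fscore_support

# Toy multilabel indicator matrices: rows = documents, columns = ICD-10 codes.
y_true = np.array([[1, 0, 1], [0, 1, 0]])
y_pred = np.array([[1, 0, 0], [0, 1, 1]])

# average="micro" pools counts over all codes, as in the table above.
p, r, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="micro")
print(f"P={p:.2%} R={r:.2%} F1={f1:.2%}")
```
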
## Author
Manjil Shrestha: `shresthamanjil21 [at] gmail.com`

## Related Paper: [Report](https://opus4.kobv.de/opus4-rhein-waal/frontdoor/index/index/searchtype/collection/id/16225/start/0/rows/10/doctypefq/masterthesis/docId/740)
Get in touch:
[LinkedIn](https://www.linkedin.com/in/manjil-shrestha-038527b4/)