Giyaseddin committed
Commit 46c5bb4
1 Parent(s): 53c9a40
Update README.md

README.md CHANGED
@@ -17,7 +17,7 @@ with no humans labelling them in any way (which is why it can use lots of public
 process to generate inputs and labels from those texts using the BERT base model.
 
 This is a classification model that solves Short Question Answer Assessment task, finetuned [pretrained DistilBERT model](https://huggingface.co/distilbert-base-uncased) on
-[
+[Question Answer Assessment dataset](#)
 
 ## Intended uses & limitations
 
@@ -119,7 +119,7 @@ Here is the scores during the training:
 
 ## Evaluation results
 
-When fine-tuned on downstream task of
+When fine-tuned on downstream task of Question Answer Assessment, 4 class classification, this model achieved the following results:
 (scores are rounded to 2 floating points)
 
 
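For illustration, here is a minimal sketch of how a fine-tuned DistilBERT sequence classifier like the one described in this README could be queried with the Transformers library. The repo id, the way the question/reference/student texts are paired, and the label set are assumptions for the example, not details taken from this commit:

```python
# Sketch only: the repo id below is hypothetical, and the input pairing and labels
# are assumptions about how the Question Answer Assessment dataset was preprocessed.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Giyaseddin/distilbert-base-uncased-finetuned-short-answer-assessment"  # hypothetical

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)  # 4-way classification head

question = "What do earthquakes tell scientists about the history of the planet?"
reference = "Earthquakes tell scientists that the planet's crust is constantly shifting."
student_answer = "They show that the crust keeps moving over time."

# Encode the (question + reference answer, student answer) pair as two segments.
inputs = tokenizer(question + " " + reference, student_answer,
                   return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(dim=-1).item()
# id2label comes from the model config; the class names depend on how the model was trained.
print(model.config.id2label.get(predicted_class, predicted_class))
```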