DunnBC22 committed on
Commit 9c68bc6
1 Parent(s): d13732d

Update README.md

Files changed (1):
  1. README.md +27 -8

README.md CHANGED
@@ -6,26 +6,40 @@ tags:
 model-index:
 - name: bert-base-uncased-QnA-MLQA_Dataset
   results: []
 ---
 
- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
 # bert-base-uncased-QnA-MLQA_Dataset
 
- This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
 
 ## Model description
 
- More information needed
 
 ## Intended uses & limitations
 
- More information needed
 
 ## Training and evaluation data
 
- More information needed
 
 ## Training procedure
 
@@ -42,11 +56,16 @@ The following hyperparameters were used during training:
 
 ### Training results
 
 ### Framework versions
 
 - Transformers 4.31.0
 - Pytorch 2.0.1+cu118
 - Datasets 2.14.2
- - Tokenizers 0.13.3
 
 model-index:
 - name: bert-base-uncased-QnA-MLQA_Dataset
   results: []
+ datasets:
+ - mlqa
+ language:
+ - en
 ---
 
 # bert-base-uncased-QnA-MLQA_Dataset
 
+ This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased).
 
 ## Model description
 
+ For more information on how it was created, check out the following link: https://github.com/DunnBC22/NLP_Projects/blob/main/Question%26Answer/ML%20QA/ML_QA_Question%26Answer_with_BERT.ipynb
 
 ## Intended uses & limitations
 
+ This model is intended to demonstrate my ability to solve a complex problem using technology.
 
 ## Training and evaluation data
 
+ Dataset Source: https://huggingface.co/datasets/mlqa/viewer/mlqa.en.en/test
+
+ __Histogram of Input (Both Context & Question) Lengths__
+ ![Histogram of Input (Both Context & Question) Lengths](https://github.com/DunnBC22/NLP_Projects/raw/main/Question%26Answer/ML%20QA/Images/Histogram%20of%20Input%20Lengths.png)
+
+ __Histogram of Context Lengths__
+ ![Histogram of Context Lengths](https://github.com/DunnBC22/NLP_Projects/raw/main/Question%26Answer/ML%20QA/Images/Histogram%20of%20Context%20Lengths.png)
+
+ __Histogram of Question Lengths__
+ ![Histogram of Question Lengths](https://github.com/DunnBC22/NLP_Projects/raw/main/Question%26Answer/ML%20QA/Images/Histogram%20of%20Question%20Lengths.png)
 
 ## Training procedure
 
 ### Training results
 
+ | Metric Name | Metric Value |
+ |:-----------:|:------------:|
+ | Exact Match | 59.6146      |
+ | F1          | 73.3002      |
+
+ * All values in the above table are rounded to the nearest ten-thousandth.
 
 ### Framework versions
 
 - Transformers 4.31.0
 - Pytorch 2.0.1+cu118
 - Datasets 2.14.2
+ - Tokenizers 0.13.3
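
The Exact Match and F1 values added to the results table are the standard SQuAD-style extractive-QA metrics. As a minimal sketch of how they are computed per prediction (simplified: whitespace tokenization only, without the answer normalization of lowercasing and stripping articles/punctuation that the official SQuAD/MLQA evaluation scripts apply):

```python
from collections import Counter


def qa_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1 between a predicted answer span and a reference answer.

    Simplified sketch: whitespace tokenization, no answer normalization.
    """
    pred_tokens = prediction.split()
    ref_tokens = reference.split()
    # Multiset intersection counts shared tokens, respecting duplicates.
    num_same = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)


def exact_match(prediction: str, reference: str) -> float:
    """1.0 if the prediction matches the reference exactly, else 0.0."""
    return float(prediction == reference)


# Partial overlap: 2 shared tokens, precision 2/3, recall 2/2 -> F1 = 0.8
print(qa_f1("the Eiffel Tower", "Eiffel Tower"))  # → 0.8
print(exact_match("Paris", "Paris"))              # → 1.0
```

The reported corpus-level scores are the averages of these per-example values over the test set, expressed as percentages.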