ashabrawy committed on
Commit
9a42825
1 Parent(s): 93d7713

NLP702-bert-large-uncased_finetuning-distillation_hs768-nh32-nl12

Files changed (5)
  1. README.md +6 -6
  2. best/config.json +2 -2
  3. best/model.safetensors +2 -2
  4. config.json +2 -2
  5. model.safetensors +2 -2
README.md CHANGED
@@ -15,8 +15,8 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.4513
- - Accuracy: 0.8460
+ - Loss: 0.4721
+ - Accuracy: 0.8352
 
  ## Model description
 
@@ -50,10 +50,10 @@ The following hyperparameters were used during training:
 
  | Training Loss | Epoch | Step | Validation Loss | Accuracy |
  |:-------------:|:-----:|:----:|:---------------:|:--------:|
- | 1.6134        | 1.39  | 500  | 0.6952          | 0.7560   |
- | 0.4746        | 2.78  | 1000 | 0.5133          | 0.8141   |
- | 0.2242        | 4.17  | 1500 | 0.4549          | 0.8490   |
- | 0.1182        | 5.56  | 2000 | 0.4262          | 0.8544   |
+ | 1.7395        | 1.39  | 500  | 0.8593          | 0.6847   |
+ | 0.6019        | 2.78  | 1000 | 0.5655          | 0.7949   |
+ | 0.3085        | 4.17  | 1500 | 0.4899          | 0.8293   |
+ | 0.1631        | 5.56  | 2000 | 0.4558          | 0.8475   |
 
 
  ### Framework versions
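The updated README reports Loss 0.4721 / Accuracy 0.8352 on the evaluation set. A minimal sketch of running the checkpoint from this commit for inference, assuming the Hub repo id below (taken from the commit title) and placeholder inputs, since the README leaves the dataset unspecified:

```python
# Minimal inference sketch for the fine-tuned/distilled classifier in this commit.
# The repo id and the example sentences are assumptions, not part of the commit.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "ashabrawy/NLP702-bert-large-uncased_finetuning-distillation_hs768-nh32-nl12"  # assumed Hub id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()

texts = ["example sentence one", "example sentence two"]  # placeholder inputs
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**batch).logits

print(logits.argmax(dim=-1).tolist())  # predicted class ids
```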
best/config.json CHANGED
@@ -136,8 +136,8 @@
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
- "num_attention_heads": 16,
- "num_hidden_layers": 8,
+ "num_attention_heads": 32,
+ "num_hidden_layers": 12,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
best/model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:7a5a78470240484fcf8543ad64573141a6c7a4d65bd1b7bd23107c3e775942b9
- size 324723536
+ oid sha256:27c43deb31b602550eff4fab8f503c4750833999e0d5c7c439bda7dcba3c5b7f
+ size 438137056
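The LFS pointer for best/model.safetensors now references a larger blob (438,137,056 bytes vs. 324,723,536), matching the deeper 12-layer student. A sketch for verifying a local download against the new pointer; the checkout path is assumed:

```python
# Verify a downloaded best/model.safetensors against the oid and size in the new LFS pointer.
import hashlib
from pathlib import Path

path = Path("best/model.safetensors")  # assumed local checkout path
expected_sha256 = "27c43deb31b602550eff4fab8f503c4750833999e0d5c7c439bda7dcba3c5b7f"
expected_size = 438137056

digest = hashlib.sha256()
with path.open("rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        digest.update(chunk)

assert path.stat().st_size == expected_size, "size mismatch with LFS pointer"
assert digest.hexdigest() == expected_sha256, "sha256 mismatch with LFS pointer"
print("best/model.safetensors matches its LFS pointer")
```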
config.json CHANGED
@@ -136,8 +136,8 @@
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
- "num_attention_heads": 16,
- "num_hidden_layers": 8,
+ "num_attention_heads": 32,
+ "num_hidden_layers": 12,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:7a5a78470240484fcf8543ad64573141a6c7a4d65bd1b7bd23107c3e775942b9
- size 324723536
+ oid sha256:27c43deb31b602550eff4fab8f503c4750833999e0d5c7c439bda7dcba3c5b7f
+ size 438137056
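The top-level model.safetensors pointer changes identically. As a rough sanity check, 438,137,056 bytes of float32 weights is about 109.5M parameters, which is consistent with a 12-layer, 768-hidden BERT encoder plus task head. A sketch that confirms the count from the safetensors header, assuming a local copy of the file:

```python
# Total parameter count read from the safetensors header, to cross-check the ~438 MB blob
# against the 12-layer / 32-head config in this commit. The local path is an assumption.
from safetensors import safe_open

total_params = 0
with safe_open("model.safetensors", framework="pt") as f:  # assumed local path
    for name in f.keys():
        shape = f.get_slice(name).get_shape()
        count = 1
        for dim in shape:
            count *= dim
        total_params += count

print(f"parameters: {total_params:,}")           # expected around 109.5M
print(f"float32 bytes: {total_params * 4:,}")    # close to the pointer size of 438,137,056
```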