danlou committed
Commit 37eddb0
1 Parent(s): 7152fd8
.gitignore ADDED
@@ -0,0 +1 @@
+ checkpoint-*/
README.md ADDED
@@ -0,0 +1,74 @@
+ ---
+ license: apache-2.0
+ tags:
+ - generated_from_trainer
+ datasets:
+ - commonsense_qa
+ metrics:
+ - accuracy
+ model_index:
+ - name: albert-xxlarge-v2-finetuned-csqa
+   results:
+   - dataset:
+       name: commonsense_qa
+       type: commonsense_qa
+       args: default
+     metric:
+       name: Accuracy
+       type: accuracy
+       value: 0.7870597839355469
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # albert-xxlarge-v2-finetuned-csqa
+
+ This model is a fine-tuned version of [albert-xxlarge-v2](https://huggingface.co/albert-xxlarge-v2) on the commonsense_qa dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 1.6177
+ - Accuracy: 0.7871
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 1e-05
+ - train_batch_size: 16
+ - eval_batch_size: 16
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 5
+ - mixed_precision_training: Native AMP
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy |
+ |:-------------:|:-----:|:----:|:---------------:|:--------:|
+ | 0.7464        | 1.0   | 609  | 0.5319          | 0.7985   |
+ | 0.3116        | 2.0   | 1218 | 0.6422          | 0.7936   |
+ | 0.0769        | 3.0   | 1827 | 1.2674          | 0.7952   |
+ | 0.0163        | 4.0   | 2436 | 1.4839          | 0.7903   |
+ | 0.0122        | 5.0   | 3045 | 1.6177          | 0.7871   |
+
+
+ ### Framework versions
+
+ - Transformers 4.8.2
+ - Pytorch 1.9.0
+ - Datasets 1.10.2
+ - Tokenizers 0.10.3
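
The hyperparameters listed in the card map almost one-to-one onto `TrainingArguments`. Below is a minimal reconstruction sketch of those settings; `output_dir` is a placeholder, the Adam betas/epsilon are the defaults the card names, and the dataset/`Trainer` wiring is omitted, so this illustrates the listed values rather than the author's actual training script.

```python
from transformers import TrainingArguments

# Sketch reconstructing the card's hyperparameters (transformers 4.8.2 era).
# output_dir is a placeholder; Adam betas=(0.9, 0.999) and epsilon=1e-8 are
# the optimizer defaults, matching what the card reports.
training_args = TrainingArguments(
    output_dir="albert-xxlarge-v2-finetuned-csqa",
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    fp16=True,  # "Native AMP" mixed precision
)
```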
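Since config.json below declares `AlbertForMultipleChoice` and the card targets commonsense_qa, inference would follow the standard multiple-choice pattern. A usage sketch, assuming the repo id `danlou/albert-xxlarge-v2-finetuned-csqa` (inferred from the committer and model name, not stated in the commit); the question and five candidate answers are made up to mirror the CommonsenseQA format.

```python
import torch
from transformers import AutoTokenizer, AlbertForMultipleChoice

model_id = "danlou/albert-xxlarge-v2-finetuned-csqa"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AlbertForMultipleChoice.from_pretrained(model_id)
model.eval()

question = "Where would you find a seat that is used by many people daily?"
choices = ["bus", "closet", "kitchen", "garage", "attic"]  # illustrative 5-way CSQA item

# Multiple-choice models score the question paired with each candidate answer.
encoded = tokenizer([question] * len(choices), choices, padding=True, return_tensors="pt")
# Add a batch dimension: tensors become (1, num_choices, seq_len).
inputs = {k: v.unsqueeze(0) for k, v in encoded.items()}

with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, num_choices)

print(choices[logits.argmax(dim=-1).item()])
```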
config.json ADDED
@@ -0,0 +1,33 @@
+ {
+   "_name_or_path": "albert-xxlarge-v2",
+   "architectures": [
+     "AlbertForMultipleChoice"
+   ],
+   "attention_probs_dropout_prob": 0,
+   "bos_token_id": 2,
+   "classifier_dropout_prob": 0.1,
+   "down_scale_factor": 1,
+   "embedding_size": 128,
+   "eos_token_id": 3,
+   "gap_size": 0,
+   "hidden_act": "gelu_new",
+   "hidden_dropout_prob": 0,
+   "hidden_size": 4096,
+   "initializer_range": 0.02,
+   "inner_group_num": 1,
+   "intermediate_size": 16384,
+   "layer_norm_eps": 1e-12,
+   "layers_to_keep": [],
+   "max_position_embeddings": 512,
+   "model_type": "albert",
+   "net_structure_type": 0,
+   "num_attention_heads": 64,
+   "num_hidden_groups": 1,
+   "num_hidden_layers": 12,
+   "num_memory_blocks": 0,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "transformers_version": "4.8.2",
+   "type_vocab_size": 2,
+   "vocab_size": 30000
+ }
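
The config shows ALBERT's signature layout: `num_hidden_layers: 12` with `num_hidden_groups: 1` means one set of layer weights is reused across all 12 layers, and `embedding_size: 128` against `hidden_size: 4096` is ALBERT's factorized embedding. A quick sanity check of those fields, again assuming the repo id used above:

```python
from transformers import AutoConfig

# Assumed repo id; a local path to the committed config.json works equally well.
config = AutoConfig.from_pretrained("danlou/albert-xxlarge-v2-finetuned-csqa")

assert config.model_type == "albert"
print(config.num_hidden_layers, config.num_hidden_groups)  # 12 1 -> shared layer weights
print(config.embedding_size, config.hidden_size)           # 128 4096 -> factorized embeddings
```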
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:71af2a621125cea11d1b5fb40ae676a11eabb34fb15cb6c481dace02a22aa38f
+ size 890413777
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"bos_token": "[CLS]", "eos_token": "[SEP]", "unk_token": "<unk>", "sep_token": "[SEP]", "pad_token": "<pad>", "cls_token": "[CLS]", "mask_token": {"content": "[MASK]", "single_word": false, "lstrip": true, "rstrip": false, "normalized": false}}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"do_lower_case": true, "remove_space": true, "keep_accents": false, "bos_token": "[CLS]", "eos_token": "[SEP]", "unk_token": "<unk>", "sep_token": "[SEP]", "pad_token": "<pad>", "cls_token": "[CLS]", "mask_token": {"content": "[MASK]", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "model_max_length": 512, "special_tokens_map_file": null, "name_or_path": "albert-xxlarge-v2", "tokenizer_class": "AlbertTokenizer"}
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f12874138df81c2ed43b50631b01d8e4a0ec813f8a8a6655fe9461a95c2568f6
+ size 2671
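
training_args.bin is the pickled `TrainingArguments` object that `Trainer` saves alongside checkpoints. If you want to verify the card's hyperparameters against the committed artifact, a sketch (unpickling needs a compatible transformers install; the card lists 4.8.2):

```python
import torch

# training_args.bin was written with torch.save; loading it reconstructs the
# TrainingArguments object, assuming a compatible transformers version is installed.
args = torch.load("training_args.bin")
print(args.learning_rate, args.num_train_epochs, args.seed)  # expect 1e-05 5 42
```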