PFEemp2024 committed
Commit 082c3ef (1 parent: 02b278f)

Upload training results

.gitattributes CHANGED
@@ -33,3 +33,7 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ Training[[:space:]]Results/attack-train-2.csv filter=lfs diff=lfs merge=lfs -text
+ Training[[:space:]]Results/attack-train-2.txt filter=lfs diff=lfs merge=lfs -text
+ Training[[:space:]]Results/attack-train-3.csv filter=lfs diff=lfs merge=lfs -text
+ Training[[:space:]]Results/attack-train-3.txt filter=lfs diff=lfs merge=lfs -text
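
Note: the four new entries route the uploaded result files through Git LFS, and the space in the "Training Results" directory name is escaped as [[:space:]], which is how git lfs track writes patterns containing whitespace. A minimal sketch of how such patterns could be checked locally (the helper names and the fnmatch-based matching are illustrative assumptions, not code from this repo):

import fnmatch

def lfs_patterns(gitattributes_text):
    """Yield the glob patterns whose attributes include filter=lfs."""
    for line in gitattributes_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        pattern, _, attrs = line.partition(" ")
        if "filter=lfs" in attrs:
            # 'git lfs track' escapes a literal space in a path as [[:space:]]
            yield pattern.replace("[[:space:]]", " ")

def is_lfs_tracked(path, gitattributes_text):
    """Rough check: does any LFS pattern match the path or its basename?"""
    return any(
        fnmatch.fnmatch(path, pat) or fnmatch.fnmatch(path.rsplit("/", 1)[-1], pat)
        for pat in lfs_patterns(gitattributes_text)
    )

with open(".gitattributes") as f:
    attrs = f.read()
print(is_lfs_tracked("Training Results/attack-train-2.csv", attrs))  # True
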
Training Results/attack-train-2.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:79d0a39ba3509fb8c6a1e53371129439235ceda658493900bf260b73b69b29c6
+ size 12718173
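
Note: what the repository stores for each of these files is a Git LFS pointer, i.e. the version/oid/size triplet above, while the data itself lives in LFS storage. A minimal sketch for verifying a locally downloaded copy against its pointer (function and file names are illustrative; assumes Python 3.9+):

import hashlib
from pathlib import Path

def parse_lfs_pointer(pointer_text):
    """Split the version/oid/size lines of a Git LFS pointer into a dict."""
    fields = dict(line.split(" ", 1) for line in pointer_text.strip().splitlines())
    return {"oid": fields["oid"].removeprefix("sha256:"), "size": int(fields["size"])}

def matches_pointer(data_path, pointer_text):
    """Return True if the local file's size and SHA-256 digest match the pointer."""
    meta = parse_lfs_pointer(pointer_text)
    blob = Path(data_path).read_bytes()
    return len(blob) == meta["size"] and hashlib.sha256(blob).hexdigest() == meta["oid"]

# Hypothetical local paths, shown only to illustrate the call:
pointer_text = Path("attack-train-2.csv.pointer").read_text()
print(matches_pointer("attack-train-2.csv", pointer_text))
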
Training Results/attack-train-2.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7e59de5f0b4c57ed16edf500c71827979190278bf0f513f88d090c1f6a8161fa
+ size 12485528
Training Results/attack-train-3.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2408356c624109f2627eff25511c2f0a4429ec5d7e55b2941d3edaabe23ace17
+ size 18511408
Training Results/attack-train-3.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:801e40be9325370d090c9a3afbc45d8062ca6dec1460cefe1143443a4d1a0bd4
+ size 16078424
Training Results/attack-train-4.txt ADDED
The diff for this file is too large to render. See raw diff
 
Training Results/train_log.txt ADDED
@@ -0,0 +1,98 @@
+ Writing logs to /home/ubuntu/buildsCodes/Adversarial_training/trained_models/Multi-delete-our_bert-base-uncased-IMDB/train_log.txt.
+ Wrote original training args to /home/ubuntu/buildsCodes/Adversarial_training/trained_models/Multi-delete-our_bert-base-uncased-IMDB/training_args.json.
+ ***** Running training *****
+ Num examples = 25000
+ Num epochs = 5
+ Num clean epochs = 1
+ Instantaneous batch size per device = 8
+ Total train batch size (w. parallel, distributed & accumulation) = 32
+ Gradient accumulation steps = 4
+ Total optimization steps = 4410
+ ==========================================================
+ Epoch 1
+ Running clean epoch 1/1
+ Writing logs to /home/ubuntu/buildsCodes/Adversarial_training/trained_models/Multi-delete-our_bert-base-uncased-IMDB/train_log.txt.
+ Wrote original training args to /home/ubuntu/buildsCodes/Adversarial_training/trained_models/Multi-delete-our_bert-base-uncased-IMDB/training_args.json.
+ ***** Running training *****
+ Num examples = 25000
+ Num epochs = 5
+ Num clean epochs = 1
+ Instantaneous batch size per device = 8
+ Total train batch size (w. parallel, distributed & accumulation) = 32
+ Gradient accumulation steps = 4
+ Total optimization steps = 4410
+ ==========================================================
+ Epoch 1
+ Running clean epoch 1/1
+ Writing logs to /home/ubuntu/buildsCodes/Adversarial_training/trained_models/Multi-delete-our_bert-base-uncased-IMDB/train_log.txt.
+ Wrote original training args to /home/ubuntu/buildsCodes/Adversarial_training/trained_models/Multi-delete-our_bert-base-uncased-IMDB/training_args.json.
+ ***** Running training *****
+ Num examples = 25000
+ Num epochs = 5
+ Num clean epochs = 1
+ Instantaneous batch size per device = 8
+ Total train batch size (w. parallel, distributed & accumulation) = 32
+ Gradient accumulation steps = 4
+ Total optimization steps = 4410
+ ==========================================================
+ Epoch 1
+ Running clean epoch 1/1
+ Writing logs to /home/ubuntu/buildsCodes/Adversarial_training/trained_models/Multi-delete-our_bert-base-uncased-IMDB/train_log.txt.
+ Wrote original training args to /home/ubuntu/buildsCodes/Adversarial_training/trained_models/Multi-delete-our_bert-base-uncased-IMDB/training_args.json.
+ ***** Running training *****
+ Num examples = 25000
+ Num epochs = 5
+ Num clean epochs = 1
+ Instantaneous batch size per device = 8
+ Total train batch size (w. parallel, distributed & accumulation) = 32
+ Gradient accumulation steps = 4
+ Total optimization steps = 4410
+ ==========================================================
+ Epoch 1
+ Running clean epoch 1/1
+ Train accuracy: 97.48%
+ Eval accuracy: 90.31%
+ Best score found. Saved model to /home/ubuntu/buildsCodes/Adversarial_training/trained_models/Multi-delete-our_bert-base-uncased-IMDB//best_model/
+ ==========================================================
+ Epoch 2
+ Attacking model to generate new adversarial training set...
+ Total number of attack results: 4403
+ Attack success rate: 91.43% [4000 / 4375]
+ Train accuracy: 98.84%
+ Eval accuracy: 93.46%
+ Best score found. Saved model to /home/ubuntu/buildsCodes/Adversarial_training/trained_models/Multi-delete-our_bert-base-uncased-IMDB//best_model/
+ ==========================================================
+ Epoch 3
+ Attacking model to generate new adversarial training set...
+ Writing logs to /home/ubuntu/buildsCodes/Adversarial_training/trained_models/Multi-delete-our_bert-base-uncased-IMDB/train_log.txt.
+ Wrote original training args to /home/ubuntu/buildsCodes/Adversarial_training/trained_models/Multi-delete-our_bert-base-uncased-IMDB/training_args.json.
+ ***** Running training *****
+ Num examples = 25000
+ Num epochs = 5
+ Num clean epochs = 1
+ Instantaneous batch size per device = 8
+ Total train batch size (w. parallel, distributed & accumulation) = 32
+ Gradient accumulation steps = 4
+ Total optimization steps = 4410
+ ==========================================================
+ Epoch 1
+ Running clean epoch 1/1
+ Train accuracy: 97.48%
+ Eval accuracy: 90.31%
+ Best score found. Saved model to /home/ubuntu/buildsCodes/Adversarial_training/trained_models/Multi-delete-our_bert-base-uncased-IMDB//best_model/
+ ==========================================================
+ Epoch 2
+ Attacking model to generate new adversarial training set...
+ Train accuracy: 98.89%
+ Eval accuracy: 93.25%
+ Best score found. Saved model to /home/ubuntu/buildsCodes/Adversarial_training/trained_models/Multi-delete-our_bert-base-uncased-IMDB//best_model/
+ ==========================================================
+ Epoch 3
+ Attacking model to generate new adversarial training set...
+ Total number of attack results: 6088
+ Attack success rate: 65.77% [4000 / 6082]
+ Train accuracy: 70.22%
+ Eval accuracy: 93.25%
+ ==========================================================
+ Epoch 4
+ Attacking model to generate new adversarial training set...
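
Note: the messages above ("Num clean epochs", "Attacking model to generate new adversarial training set...", per-epoch attack success rates) match the output of TextAttack's adversarial Trainer, and the hyperparameters echo training_args.json below. A minimal sketch of the kind of run that would produce such a log, assuming TextAttack was used; the "Multi-delete-our" attack appears to be custom, so the DeepWordBugGao2018 recipe here is only a stand-in:

import transformers
from textattack import Trainer, TrainingArgs
from textattack.models.wrappers import HuggingFaceModelWrapper
from textattack.datasets import HuggingFaceDataset
from textattack.attack_recipes import DeepWordBugGao2018

# Wrap a bert-base-uncased classifier so TextAttack can query and attack it.
model = transformers.AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
tokenizer = transformers.AutoTokenizer.from_pretrained("bert-base-uncased")
model_wrapper = HuggingFaceModelWrapper(model, tokenizer)

train_dataset = HuggingFaceDataset("imdb", split="train")   # 25,000 examples, as in the log
eval_dataset = HuggingFaceDataset("imdb", split="test")

attack = DeepWordBugGao2018.build(model_wrapper)  # stand-in for the custom "Multi-delete-our" attack

training_args = TrainingArgs(
    num_epochs=5,
    num_clean_epochs=1,            # "Running clean epoch 1/1"
    attack_epoch_interval=1,       # regenerate adversarial examples every epoch
    num_train_adv_examples=4000,   # the "[4000 / ...]" counts in the log
    learning_rate=5e-05,
    num_warmup_steps=500,
    weight_decay=0.01,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=4,
    random_seed=786,
    output_dir="/home/ubuntu/buildsCodes/Adversarial_training/trained_models/Multi-delete-our_bert-base-uncased-IMDB/",
    log_to_tb=True,
)

trainer = Trainer(model_wrapper, "classification", attack, train_dataset, eval_dataset, training_args)
trainer.train()
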
Training Results/training_args.json ADDED
@@ -0,0 +1 @@
+ {"num_epochs": 5, "num_clean_epochs": 1, "attack_epoch_interval": 1, "early_stopping_epochs": null, "learning_rate": 5e-05, "num_warmup_steps": 500, "weight_decay": 0.01, "per_device_train_batch_size": 8, "per_device_eval_batch_size": 32, "gradient_accumulation_steps": 4, "random_seed": 786, "parallel": false, "load_best_model_at_end": false, "alpha": 1.0, "num_train_adv_examples": 4000, "query_budget_train": null, "attack_num_workers_per_device": 1, "output_dir": "/home/ubuntu/buildsCodes/Adversarial_training/trained_models/Multi-delete-our_bert-base-uncased-IMDB/", "checkpoint_interval_steps": null, "checkpoint_interval_epochs": 1, "save_last": true, "log_to_tb": true, "tb_log_dir": null, "log_to_wandb": false, "wandb_project": "textattack", "logging_interval_step": 1}