Improving Black-box Robustness with In-Context Rewriting
This model is a fine-tuned version of bert-base-uncased on an unspecified dataset. Per-epoch results on the evaluation set are reported in the training results table below.
Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
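The downstream task and label names are not documented in this card. Assuming the checkpoint exposes a standard text-classification head, a minimal inference sketch with transformers would look like the following; the repository id is a placeholder, not the actual model id:

```python
# Minimal inference sketch; the model id below is a placeholder, and the
# label names depend on the (undocumented) fine-tuning task.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="your-username/bert-base-uncased-finetuned",  # hypothetical repo id
)

print(classifier("An example input sentence for the target task."))
# e.g. [{'label': 'LABEL_0', 'score': 0.97}] -- labels and scores are illustrative
```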
Training hyperparameters

The following hyperparameters were used during training:
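The hyperparameter list itself was not preserved in this card. As a rough sketch only, a fine-tune of this shape is typically configured through TrainingArguments along these lines; every value below is a placeholder, not a setting actually used for this model:

```python
# Sketch of a typical configuration for fine-tuning bert-base-uncased on a
# classification task. All values are placeholders; the actual hyperparameters
# used for this model are not listed in the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-base-uncased-finetuned",  # hypothetical output path
    learning_rate=2e-5,                        # placeholder
    per_device_train_batch_size=16,            # placeholder
    num_train_epochs=9,                        # the results table below runs to epoch 9
    weight_decay=0.01,                         # placeholder
    seed=42,                                   # placeholder
)
```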
Training results

| Training Loss | Epoch | Step  | F1     | Acc    | Validation Loss |
|:-------------:|:-----:|:-----:|:------:|:------:|:---------------:|
| 0.3305        | 1.0   | 1500  | 0.6780 | 0.8250 | 0.3952          |
| 0.2821        | 2.0   | 3000  | 0.6773 | 0.8219 | 0.3777          |
| 0.2           | 3.0   | 4500  | 0.7331 | 0.8766 | 0.3716          |
| 0.1396        | 4.0   | 6000  | 0.6900 | 0.8356 | 0.6738          |
| 0.0914        | 5.0   | 7500  | 0.7011 | 0.8472 | 0.6766          |
| 0.0548        | 6.0   | 9000  | 0.7480 | 0.8905 | 0.6274          |
| 0.0365        | 7.0   | 10500 | 0.6624 | 0.8068 | 1.2739          |
| 0.0324        | 8.0   | 12000 | 0.7222 | 0.8655 | 1.0403          |
| 0.0295        | 9.0   | 13500 | 0.7003 | 0.8460 | 1.2310          |
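The F1 and Acc columns suggest a compute_metrics hook passed to the Trainer. A minimal sketch of how such values are usually computed is shown below; binary averaging is an assumption, since the label set is not documented:

```python
# Sketch of a compute_metrics hook producing F1/Acc values like those in the
# table above. Binary averaging is an assumption; the actual metric setup for
# this model is not documented.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "f1": f1_score(labels, preds, average="binary"),
        "acc": accuracy_score(labels, preds),
    }
```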
Base model: google-bert/bert-base-uncased