# bert-base-uncased-squad-v1
This model is a fine-tuned version of bert-base-uncased on the SQuAD dataset. It was fine-tuned with the Transformers question answering example script (run_qa.py):
```bash
python run_qa.py \
  --model_name_or_path bert-base-uncased \
  --dataset_name squad \
  --do_train \
  --do_eval \
  --per_device_train_batch_size 12 \
  --learning_rate 3e-5 \
  --num_train_epochs 2 \
  --max_seq_length 384 \
  --doc_stride 128 \
  --output_dir bert-base-uncased-squad-v1  # output directory assumed; not given in the original command
```
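The resulting model can be used for extractive question answering, for example through the pipeline API. This is a minimal usage sketch; the question and context below are illustrative only:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint for extractive question answering.
qa = pipeline("question-answering", model="helenai/bert-base-uncased-squad-v1")

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of bert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```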
## Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
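For reference, these settings roughly correspond to the TrainingArguments below. Note that run_qa.py builds its arguments through HfArgumentParser, so this is only an illustrative sketch and the output_dir is a placeholder:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-base-uncased-squad-v1",  # placeholder output directory
    learning_rate=3e-5,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=8,
    num_train_epochs=2,
    lr_scheduler_type="linear",
    seed=42,
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-8 is the default optimizer configuration in Transformers.
```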
## Training results
```
***** eval metrics *****
epoch            =     2.0
eval_exact_match = 81.3434
eval_f1          = 88.7002
eval_samples     =   10784
```
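Exact match and F1 are the standard SQuAD metrics. They can be computed with the evaluate library's squad metric; the sketch below uses a single dummy prediction/reference pair just to show the expected input format:

```python
import evaluate

squad_metric = evaluate.load("squad")

# Predictions need an id and the predicted answer text; references need the gold answers.
predictions = [{"id": "example-0", "prediction_text": "SQuAD"}]
references = [{"id": "example-0", "answers": {"text": ["SQuAD"], "answer_start": [0]}}]

print(squad_metric.compute(predictions=predictions, references=references))
# -> {'exact_match': 100.0, 'f1': 100.0}
```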
## Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.8.0
- Tokenizers 0.13.2