0%| | 100/503920 [03:45<123:27:02, 1.13it/s]
0%| | 199/503920 [07:33<131:30:18, 1.06it/s]
0%| | 298/503920 [11:21<140:58:24, 1.01s/it]
0%| | 400/503920 [15:10<122:42:36, 1.14it/s]
0%|▏ | 500/503920 [18:58<122:35:54, 1.14it/s]
The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length. If input_length are not expected by `Wav2Vec2ForCTC.forward`, you can safely ignore this message.
***** Running Evaluation *****
Num examples = 41040
Batch size = 12
0%| | 0/3420 [00:00<?, ?it/s]
File "/mnt/lv_ai_1_dante/ml/models/wav2vec2-1b-npsc-nst-bokmaal/run_speech_recognition_ctc.py", line 819, in <module> | 2355/3420 [30:44<12:33, 1.41it/s]
main()
File "/mnt/lv_ai_1_dante/ml/models/wav2vec2-1b-npsc-nst-bokmaal/run_speech_recognition_ctc.py", line 770, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/mnt/lv_ai_1_dante/ml/rolvb/venv/lib/python3.9/site-packages/transformers/trainer.py", line 1497, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/mnt/lv_ai_1_dante/ml/rolvb/venv/lib/python3.9/site-packages/transformers/trainer.py", line 1624, in _maybe_log_save_evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/mnt/lv_ai_1_dante/ml/rolvb/venv/lib/python3.9/site-packages/transformers/trainer.py", line 2284, in evaluate
output = eval_loop(
File "/mnt/lv_ai_1_dante/ml/rolvb/venv/lib/python3.9/site-packages/transformers/trainer.py", line 2458, in evaluation_loop
loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
File "/mnt/lv_ai_1_dante/ml/rolvb/venv/lib/python3.9/site-packages/transformers/trainer.py", line 2671, in prediction_step
loss, outputs = self.compute_loss(model, inputs, return_outputs=True)
File "/mnt/lv_ai_1_dante/ml/rolvb/venv/lib/python3.9/site-packages/transformers/trainer.py", line 2043, in compute_loss
outputs = model(**inputs)
File "/mnt/lv_ai_1_dante/ml/rolvb/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/mnt/lv_ai_1_dante/ml/rolvb/venv/lib/python3.9/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 1716, in forward
outputs = self.wav2vec2(
File "/mnt/lv_ai_1_dante/ml/rolvb/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/mnt/lv_ai_1_dante/ml/rolvb/venv/lib/python3.9/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 1361, in forward
encoder_outputs = self.encoder(
File "/mnt/lv_ai_1_dante/ml/rolvb/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/mnt/lv_ai_1_dante/ml/rolvb/venv/lib/python3.9/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 930, in forward
layer_outputs = layer(
File "/mnt/lv_ai_1_dante/ml/rolvb/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/mnt/lv_ai_1_dante/ml/rolvb/venv/lib/python3.9/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 766, in forward
hidden_states, attn_weights, _ = self.attention(
File "/mnt/lv_ai_1_dante/ml/rolvb/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/mnt/lv_ai_1_dante/ml/rolvb/venv/lib/python3.9/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 686, in forward
attn_output = self.out_proj(attn_output)
File "/mnt/lv_ai_1_dante/ml/rolvb/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/mnt/lv_ai_1_dante/ml/rolvb/venv/lib/python3.9/site-packages/torch/nn/modules/linear.py", line 103, in forward
return F.linear(input, self.weight, self.bias)
KeyboardInterrupt