2022-04-04 18:09:43 INFO Running runs: []
2022-04-04 18:09:43 INFO Agent received command: run
2022-04-04 18:09:43 INFO Agent starting run with config:
	dataset_cache_dir: /home/sanchitgandhi/cache/huggingface/datasets
	dataset_config_name: clean
	dataset_name: librispeech_asr
	eval_split_name: validation
	generation_max_length: 40
	generation_num_beams: 1
	gradient_accumulation_steps: 1
	learning_rate: 2.565346074198426e-05
	length_column_name: input_length
	logging_steps: 1
	matmul_precision: highest
	max_duration_in_seconds: 15
	max_target_length: 64
	min_duration_in_seconds: 15
	model_name_or_path: ./
	num_train_epochs: 5
	output_dir: ./
	per_device_eval_batch_size: 2
	per_device_train_batch_size: 2
	preprocessing_num_workers: 16
	text_column_name: text
	train_split_name: train.100
	wandb_project: flax-wav2vec2-2-bart-large-cnn
	warmup_steps: 500
2022-04-04 18:09:43 INFO About to run command: python3 run_flax_speech_recognition_seq2seq.py --overwrite_output_dir --freeze_feature_encoder --predict_with_generate --do_lower_case --do_train --do_eval --dataset_cache_dir=/home/sanchitgandhi/cache/huggingface/datasets --dataset_config_name=clean --dataset_name=librispeech_asr --eval_split_name=validation --generation_max_length=40 --generation_num_beams=1 --gradient_accumulation_steps=1 --learning_rate=2.565346074198426e-05 --length_column_name=input_length --logging_steps=1 --matmul_precision=highest --max_duration_in_seconds=15 --max_target_length=64 --min_duration_in_seconds=15 --model_name_or_path=./ --num_train_epochs=5 --output_dir=./ --per_device_eval_batch_size=2 --per_device_train_batch_size=2 --preprocessing_num_workers=16 --text_column_name=text --train_split_name=train.100 --wandb_project=flax-wav2vec2-2-bart-large-cnn --warmup_steps=500
2022-04-04 18:09:48 INFO Running runs: ['p4wqexfj']
2022-04-04 18:10:23 INFO Cleaning up finished run: p4wqexfj
2022-04-04 18:10:24 INFO Agent received command: run
2022-04-04 18:10:24 INFO Agent starting run with config:
	dataset_cache_dir: /home/sanchitgandhi/cache/huggingface/datasets
	dataset_config_name: clean
	dataset_name: librispeech_asr
	eval_split_name: validation
	generation_max_length: 40
	generation_num_beams: 1
	gradient_accumulation_steps: 1
	learning_rate: 0.0006871268347239357
	length_column_name: input_length
	logging_steps: 1
	matmul_precision: highest
	max_duration_in_seconds: 15
	max_target_length: 64
	min_duration_in_seconds: 15
	model_name_or_path: ./
	num_train_epochs: 5
	output_dir: ./
	per_device_eval_batch_size: 2
	per_device_train_batch_size: 2
	preprocessing_num_workers: 16
	text_column_name: text
	train_split_name: train.100
	wandb_project: flax-wav2vec2-2-bart-large-cnn
	warmup_steps: 500
2022-04-04 18:10:24 INFO About to run command: python3 run_flax_speech_recognition_seq2seq.py --overwrite_output_dir --freeze_feature_encoder --predict_with_generate --do_lower_case --do_train --do_eval --dataset_cache_dir=/home/sanchitgandhi/cache/huggingface/datasets --dataset_config_name=clean --dataset_name=librispeech_asr --eval_split_name=validation --generation_max_length=40 --generation_num_beams=1 --gradient_accumulation_steps=1 --learning_rate=0.0006871268347239357 --length_column_name=input_length --logging_steps=1 --matmul_precision=highest --max_duration_in_seconds=15 --max_target_length=64 --min_duration_in_seconds=15 --model_name_or_path=./ --num_train_epochs=5 --output_dir=./ --per_device_eval_batch_size=2 --per_device_train_batch_size=2 --preprocessing_num_workers=16 --text_column_name=text --train_split_name=train.100 --wandb_project=flax-wav2vec2-2-bart-large-cnn --warmup_steps=500
2022-04-04 18:10:29 INFO Running runs: ['mgg9caus']
2022-04-04 18:10:59 INFO Cleaning up finished run: mgg9caus
2022-04-04 18:10:59 INFO Agent received command: run
2022-04-04 18:10:59 INFO Agent starting run with config:
	dataset_cache_dir: /home/sanchitgandhi/cache/huggingface/datasets
	dataset_config_name: clean
	dataset_name: librispeech_asr
	eval_split_name: validation
	generation_max_length: 40
	generation_num_beams: 1
	gradient_accumulation_steps: 1
	learning_rate: 9.383495031304748e-05
	length_column_name: input_length
	logging_steps: 1
	matmul_precision: highest
	max_duration_in_seconds: 15
	max_target_length: 64
	min_duration_in_seconds: 15
	model_name_or_path: ./
	num_train_epochs: 5
	output_dir: ./
	per_device_eval_batch_size: 2
	per_device_train_batch_size: 2
	preprocessing_num_workers: 16
	text_column_name: text
	train_split_name: train.100
	wandb_project: flax-wav2vec2-2-bart-large-cnn
	warmup_steps: 500
2022-04-04 18:10:59 INFO About to run command: python3 run_flax_speech_recognition_seq2seq.py --overwrite_output_dir --freeze_feature_encoder --predict_with_generate --do_lower_case --do_train --do_eval --dataset_cache_dir=/home/sanchitgandhi/cache/huggingface/datasets --dataset_config_name=clean --dataset_name=librispeech_asr --eval_split_name=validation --generation_max_length=40 --generation_num_beams=1 --gradient_accumulation_steps=1 --learning_rate=9.383495031304748e-05 --length_column_name=input_length --logging_steps=1 --matmul_precision=highest --max_duration_in_seconds=15 --max_target_length=64 --min_duration_in_seconds=15 --model_name_or_path=./ --num_train_epochs=5 --output_dir=./ --per_device_eval_batch_size=2 --per_device_train_batch_size=2 --preprocessing_num_workers=16 --text_column_name=text --train_split_name=train.100 --wandb_project=flax-wav2vec2-2-bart-large-cnn --warmup_steps=500
2022-04-04 18:11:04 INFO Running runs: ['88xgr1fg']
2022-04-04 18:11:35 INFO Cleaning up finished run: 88xgr1fg
2022-04-04 18:11:35 INFO Agent received command: run
2022-04-04 18:11:35 INFO Agent starting run with config:
	dataset_cache_dir: /home/sanchitgandhi/cache/huggingface/datasets
	dataset_config_name: clean
	dataset_name: librispeech_asr
	eval_split_name: validation
	generation_max_length: 40
	generation_num_beams: 1
	gradient_accumulation_steps: 1
	learning_rate: 7.331199736432637e-05
	length_column_name: input_length
	logging_steps: 1
	matmul_precision: highest
	max_duration_in_seconds: 15
	max_target_length: 64
	min_duration_in_seconds: 15
	model_name_or_path: ./
	num_train_epochs: 5
	output_dir: ./
	per_device_eval_batch_size: 2
	per_device_train_batch_size: 2
	preprocessing_num_workers: 16
	text_column_name: text
	train_split_name: train.100
	wandb_project: flax-wav2vec2-2-bart-large-cnn
	warmup_steps: 500
2022-04-04 18:11:35 INFO About to run command: python3 run_flax_speech_recognition_seq2seq.py --overwrite_output_dir --freeze_feature_encoder --predict_with_generate --do_lower_case --do_train --do_eval --dataset_cache_dir=/home/sanchitgandhi/cache/huggingface/datasets --dataset_config_name=clean --dataset_name=librispeech_asr --eval_split_name=validation --generation_max_length=40 --generation_num_beams=1 --gradient_accumulation_steps=1 --learning_rate=7.331199736432637e-05 --length_column_name=input_length --logging_steps=1 --matmul_precision=highest --max_duration_in_seconds=15 --max_target_length=64 --min_duration_in_seconds=15 --model_name_or_path=./ --num_train_epochs=5 --output_dir=./ --per_device_eval_batch_size=2 --per_device_train_batch_size=2 --preprocessing_num_workers=16 --text_column_name=text --train_split_name=train.100 --wandb_project=flax-wav2vec2-2-bart-large-cnn --warmup_steps=500
2022-04-04 18:11:40 INFO Running runs: ['xmgtui21']
2022-04-04 18:12:15 INFO Cleaning up finished run: xmgtui21
2022-04-04 18:12:15 INFO Agent received command: run
2022-04-04 18:12:15 INFO Agent starting run with config:
	dataset_cache_dir: /home/sanchitgandhi/cache/huggingface/datasets
	dataset_config_name: clean
	dataset_name: librispeech_asr
	eval_split_name: validation
	generation_max_length: 40
	generation_num_beams: 1
	gradient_accumulation_steps: 1
	learning_rate: 0.0007642424770238645
	length_column_name: input_length
	logging_steps: 1
	matmul_precision: highest
	max_duration_in_seconds: 15
	max_target_length: 64
	min_duration_in_seconds: 15
	model_name_or_path: ./
	num_train_epochs: 5
	output_dir: ./
	per_device_eval_batch_size: 2
	per_device_train_batch_size: 2
	preprocessing_num_workers: 16
	text_column_name: text
	train_split_name: train.100
	wandb_project: flax-wav2vec2-2-bart-large-cnn
	warmup_steps: 500
2022-04-04 18:12:15 INFO About to run command: python3 run_flax_speech_recognition_seq2seq.py --overwrite_output_dir --freeze_feature_encoder --predict_with_generate --do_lower_case --do_train --do_eval --dataset_cache_dir=/home/sanchitgandhi/cache/huggingface/datasets --dataset_config_name=clean --dataset_name=librispeech_asr --eval_split_name=validation --generation_max_length=40 --generation_num_beams=1 --gradient_accumulation_steps=1 --learning_rate=0.0007642424770238645 --length_column_name=input_length --logging_steps=1 --matmul_precision=highest --max_duration_in_seconds=15 --max_target_length=64 --min_duration_in_seconds=15 --model_name_or_path=./ --num_train_epochs=5 --output_dir=./ --per_device_eval_batch_size=2 --per_device_train_batch_size=2 --preprocessing_num_workers=16 --text_column_name=text --train_split_name=train.100 --wandb_project=flax-wav2vec2-2-bart-large-cnn --warmup_steps=500
2022-04-04 18:12:20 INFO Running runs: ['4s004g1k']
2022-04-04 18:12:51 ERROR Detected 5 failed runs in a row, shutting down.
2022-04-04 18:12:51 INFO To change this value set WANDB_AGENT_MAX_INITIAL_FAILURES=val
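The final error shows the agent shutting itself down after five consecutive failed runs, and the last line names the environment variable that controls this threshold. A minimal sketch of relaunching with a higher limit, as the log suggests; the threshold value and the entity/project/sweep path below are placeholders, not taken from this log:

```shell
# Allow up to 10 consecutive initial failures before the agent shuts down
# (default behaviour stopped this sweep after 5).
export WANDB_AGENT_MAX_INITIAL_FAILURES=10

# Relaunch the sweep agent. SWEEP_ID is a placeholder: the sweep ID is not
# shown in this log, only the individual run IDs (p4wqexfj, mgg9caus, ...).
wandb agent sanchitgandhi/flax-wav2vec2-2-bart-large-cnn/SWEEP_ID
```

Note that raising the threshold only delays the shutdown; since all five runs here failed within seconds of starting, the underlying training-script failure would still need to be diagnosed first.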