05/13/2024 20:46:55 - WARNING - __main__ - Process rank: 0, device: cuda:0, n_gpu: 1, distributed training: False, 16-bits training: False
05/13/2024 20:46:55 - INFO - __main__ - Training/evaluation parameters ParlerTTSTrainingArguments(
_n_gpu=1,
accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None},
adafactor=False,
adam_beta1=0.9,
adam_beta2=0.99,
adam_epsilon=1e-08,
audio_encoder_per_device_batch_size=4,
auto_find_batch_size=False,
batch_eval_metrics=False,
bf16=False,
bf16_full_eval=False,
data_seed=None,
dataloader_drop_last=False,
dataloader_num_workers=4,
dataloader_persistent_workers=False,
dataloader_pin_memory=True,
dataloader_prefetch_factor=None,
ddp_backend=None,
ddp_broadcast_buffers=None,
ddp_bucket_cap_mb=None,
ddp_find_unused_parameters=None,
ddp_timeout=1800,
debug=[],
deepspeed=None,
disable_tqdm=False,
dispatch_batches=None,
do_eval=True,
do_predict=False,
do_train=True,
dtype=bfloat16,
eval_accumulation_steps=None,
eval_delay=0,
eval_do_concat_batches=True,
eval_steps=None,
eval_strategy=IntervalStrategy.EPOCH,
evaluation_strategy=epoch,
fp16=False,
fp16_backend=auto,
fp16_full_eval=False,
fp16_opt_level=O1,
fsdp=[],
fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False},
fsdp_min_num_params=0,
fsdp_transformer_layer_cls_to_wrap=None,
full_determinism=False,
generation_config=None,
generation_max_length=None,
generation_num_beams=None,
gradient_accumulation_steps=8,
gradient_checkpointing=True,
gradient_checkpointing_kwargs=None,
greater_is_better=None,
group_by_length=True,
half_precision_backend=auto,
hub_always_push=False,
hub_model_id=None,
hub_private_repo=False,
hub_strategy=HubStrategy.EVERY_SAVE,
hub_token=<HUB_TOKEN>,
ignore_data_skip=False,
include_inputs_for_metrics=True,
include_num_input_tokens_seen=False,
include_tokens_per_second=False,
jit_mode_eval=False,
label_names=None,
label_smoothing_factor=0.0,
learning_rate=8e-05,
length_column_name=length,
load_best_model_at_end=False,
local_rank=0,
log_level=passive,
log_level_replica=warning,
log_on_each_node=True,
logging_dir=../output_dir_training_constant_concat/runs/May13_20-46-51_hf-dgx-01,
logging_first_step=False,
logging_nan_inf_filter=True,
logging_steps=5,
logging_strategy=IntervalStrategy.STEPS,
lr_scheduler_kwargs={},
lr_scheduler_type=SchedulerType.COSINE,
max_grad_norm=1.0,
max_steps=-1,
metric_for_best_model=None,
mp_parameters=,
neftune_noise_alpha=None,
no_cuda=False,
num_train_epochs=8,
optim=OptimizerNames.ADAMW_TORCH,
optim_args=None,
optim_target_modules=None,
output_dir=../output_dir_training_constant_concat/,
overwrite_output_dir=True,
past_index=-1,
per_device_eval_batch_size=16,
per_device_train_batch_size=16,
predict_with_generate=True,
prediction_loss_only=False,
push_to_hub=False,
push_to_hub_model_id=None,
push_to_hub_organization=None,
push_to_hub_token=<PUSH_TO_HUB_TOKEN>,
ray_scope=last,
remove_unused_columns=True,
report_to=['wandb'],
restore_callback_states_from_checkpoint=False,
resume_from_checkpoint=None,
run_name=../output_dir_training_constant_concat/,
save_on_each_node=False,
save_only_model=False,
save_safetensors=True,
save_steps=500,
save_strategy=IntervalStrategy.EPOCH,
save_total_limit=5,
seed=456,
skip_memory_metrics=True,
sortish_sampler=False,
split_batches=None,
tf32=None,
torch_compile=False,
torch_compile_backend=None,
torch_compile_mode=None,
torchdynamo=None,
tpu_metrics_debug=False,
tpu_num_cores=None,
use_cpu=False,
use_ipex=False,
use_legacy_prediction_loop=False,
use_mps_device=False,
warmup_ratio=0.0,
warmup_steps=250,
weight_decay=0.01,
)
05/13/2024 20:46:57 - WARNING - __main__ - Disabling fast tokenizer warning: https://github.com/huggingface/transformers/blob/main/src/transformers/tokenization_utils_base.py#L3231-L3235
loading configuration file preprocessor_config.json from cache at /raid/.cache/huggingface/models--parler-tts--dac_44khZ_8kbps/snapshots/db52bea859d9411e0beb44a3ea923a8731ee4197/preprocessor_config.json
Feature extractor EncodecFeatureExtractor {
  "chunk_length_s": null,
  "feature_extractor_type": "EncodecFeatureExtractor",
  "feature_size": 1,
  "overlap": null,
  "padding_side": "right",
  "padding_value": 0.0,
  "return_attention_mask": true,
  "sampling_rate": 44100
}

loading file spiece.model from cache at /raid/.cache/huggingface/models--parler-tts--parler_tts_mini_v0.1/snapshots/e02fd18e77d38b49a85c7a9a85189a64b8472544/spiece.model
loading file tokenizer.json from cache at /raid/.cache/huggingface/models--parler-tts--parler_tts_mini_v0.1/snapshots/e02fd18e77d38b49a85c7a9a85189a64b8472544/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at /raid/.cache/huggingface/models--parler-tts--parler_tts_mini_v0.1/snapshots/e02fd18e77d38b49a85c7a9a85189a64b8472544/special_tokens_map.json
loading file tokenizer_config.json from cache at /raid/.cache/huggingface/models--parler-tts--parler_tts_mini_v0.1/snapshots/e02fd18e77d38b49a85c7a9a85189a64b8472544/tokenizer_config.json
You set `add_prefix_space`. The tokenizer needs to be converted from the slow tokenizers
loading file spiece.model from cache at /raid/.cache/huggingface/models--parler-tts--parler_tts_mini_v0.1/snapshots/e02fd18e77d38b49a85c7a9a85189a64b8472544/spiece.model
loading file tokenizer.json from cache at /raid/.cache/huggingface/models--parler-tts--parler_tts_mini_v0.1/snapshots/e02fd18e77d38b49a85c7a9a85189a64b8472544/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at /raid/.cache/huggingface/models--parler-tts--parler_tts_mini_v0.1/snapshots/e02fd18e77d38b49a85c7a9a85189a64b8472544/special_tokens_map.json
loading file tokenizer_config.json from cache at /raid/.cache/huggingface/models--parler-tts--parler_tts_mini_v0.1/snapshots/e02fd18e77d38b49a85c7a9a85189a64b8472544/tokenizer_config.json
Combining datasets...:   0%|          | 0/4 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/raid/sanchit/parler-tts-mini-v0.1-expresso-concatenated-combined/run_parler_tts_training.py", line ..., in <module>
    main()
  File "/raid/sanchit/parler-tts-mini-v0.1-expresso-concatenated-combined/run_parler_tts_training.py", line 950, in main
    raw_datasets["train"] = load_multiple_datasets(
                            ^^^^^^^^^^^^^^^^^^^^^^^
  File "/raid/sanchit/parler-tts-mini-v0.1-expresso-concatenated-combined/run_parler_tts_training.py", line 693, in load_multiple_datasets
    metadata_dataset = load_dataset(
                       ^^^^^^^^^^^^^
  File "/home/sanchit/miniconda3/envs/venv/lib/python3.11/site-packages/datasets/load.py", line 2587, in load_dataset
    builder_instance = load_dataset_builder(
                       ^^^^^^^^^^^^^^^^^^^^^
  File "/home/sanchit/miniconda3/envs/venv/lib/python3.11/site-packages/datasets/load.py", line 2296, in load_dataset_builder
    builder_instance: DatasetBuilder = builder_cls(
                                       ^^^^^^^^^^^^
  File "/home/sanchit/miniconda3/envs/venv/lib/python3.11/site-packages/datasets/builder.py", line 374, in __init__
    self.config, self.config_id = self._create_builder_config(
                                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/sanchit/miniconda3/envs/venv/lib/python3.11/site-packages/datasets/builder.py", line 599, in _create_builder_config
    raise ValueError(
ValueError: BuilderConfig 'read' not found. Available: ['default']
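The failure is in the final load_dataset() call: the script forwarded the BuilderConfig name 'read' for a metadata dataset that only defines a 'default' config. Below is a minimal sketch of how to confirm and work around this; "user/metadata-dataset" is a hypothetical placeholder, since the real repo id is not shown in the log.

from datasets import get_dataset_config_names, load_dataset

# Hypothetical placeholder; substitute the metadata dataset that raised the error.
repo_id = "user/metadata-dataset"

# Listing the configs reproduces the "Available: ['default']" part of the error message.
print(get_dataset_config_names(repo_id))

# Passing a config name that actually exists (or None, which selects the default)
# avoids the ValueError.
metadata_dataset = load_dataset(repo_id, name="default", split="train")

In the training script itself, the equivalent fix is to correct the metadata dataset config name that load_multiple_datasets() receives, rather than editing the load_dataset() call site.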
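Separately, for reference, the optimization schedule from the argument dump at the top of the log can be reconstructed in isolation. This is a minimal sketch, assuming ParlerTTSTrainingArguments extends transformers' Seq2SeqTrainingArguments as in the parler-tts training script; fields such as dtype and audio_encoder_per_device_batch_size are specific to that subclass and are omitted here.

from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="../output_dir_training_constant_concat/",
    do_train=True,
    do_eval=True,
    per_device_train_batch_size=16,
    # Effective train batch size: 16 * 8 = 128 samples per optimizer step.
    gradient_accumulation_steps=8,
    learning_rate=8e-5,
    adam_beta2=0.99,
    weight_decay=0.01,
    lr_scheduler_type="cosine",
    warmup_steps=250,
    num_train_epochs=8,
    eval_strategy="epoch",
    save_strategy="epoch",
    save_total_limit=5,
    gradient_checkpointing=True,
    group_by_length=True,
    predict_with_generate=True,
    dataloader_num_workers=4,
    logging_steps=5,
    report_to=["wandb"],
    seed=456,
)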