2024-06-05 00:42:33.461959: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-06-05 00:42:33.462196: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-06-05 00:42:33.727223: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-06-05 00:42:34.201012: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-06-05 00:42:40.034107: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
06/05/2024 00:42:58 - WARNING - __main__ - Process rank: 0, device: cuda:0, n_gpu: 6, distributed training: False, 16-bits training: False
06/05/2024 00:42:58 - INFO - __main__ - Training/evaluation parameters TrainingArguments(
_n_gpu=6,
accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None},
adafactor=False,
adam_beta1=0.9,
adam_beta2=0.999,
adam_epsilon=1e-08,
auto_find_batch_size=False,
bf16=False,
bf16_full_eval=False,
data_seed=None,
dataloader_drop_last=False,
dataloader_num_workers=0,
dataloader_persistent_workers=False,
dataloader_pin_memory=True,
dataloader_prefetch_factor=None,
ddp_backend=None,
ddp_broadcast_buffers=None,
ddp_bucket_cap_mb=None,
ddp_find_unused_parameters=None,
ddp_timeout=1800,
debug=[],
deepspeed=None,
disable_tqdm=False,
dispatch_batches=None,
do_eval=True,
do_predict=False,
do_train=True,
eval_accumulation_steps=None,
eval_delay=0,
eval_do_concat_batches=True,
eval_steps=1000,
eval_strategy=steps,
evaluation_strategy=None,
fp16=False,
fp16_backend=auto,
fp16_full_eval=False,
fp16_opt_level=O1,
fsdp=[],
fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False},
fsdp_min_num_params=0,
fsdp_transformer_layer_cls_to_wrap=None,
full_determinism=False,
gradient_accumulation_steps=1,
gradient_checkpointing=False,
gradient_checkpointing_kwargs=None,
greater_is_better=None,
group_by_length=False,
half_precision_backend=auto,
hub_always_push=False,
hub_model_id=DorinSht/ShareGPT_llama2_68M,
hub_private_repo=False,
hub_strategy=checkpoint,
hub_token=<HUB_TOKEN>,
ignore_data_skip=False,
include_inputs_for_metrics=False,
include_num_input_tokens_seen=False,
include_tokens_per_second=False,
jit_mode_eval=False,
label_names=None,
label_smoothing_factor=0.0,
learning_rate=0.0001,
length_column_name=length,
load_best_model_at_end=False,
local_rank=0,
log_level=passive,
log_level_replica=warning,
log_on_each_node=True,
logging_dir=./training_outputs_job_117535_1_05-06_00-42,
logging_first_step=False,
logging_nan_inf_filter=True,
logging_steps=500,
logging_strategy=steps,
lr_scheduler_kwargs={},
lr_scheduler_type=linear,
max_grad_norm=1.0,
max_steps=-1,
metric_for_best_model=None,
mp_parameters=,
neftune_noise_alpha=None,
no_cuda=False,
num_train_epochs=3.0,
optim=adamw_torch,
optim_args=None,
optim_target_modules=None,
output_dir=./training_outputs_job_117535_1_05-06_00-42,
overwrite_output_dir=True,
past_index=-1,
per_device_eval_batch_size=8,
per_device_train_batch_size=4,
prediction_loss_only=False,
push_to_hub=True,
push_to_hub_model_id=None,
push_to_hub_organization=None,
push_to_hub_token=<PUSH_TO_HUB_TOKEN>,
ray_scope=last,
remove_unused_columns=True,
report_to=['tensorboard'],
restore_callback_states_from_checkpoint=False,
resume_from_checkpoint=None,
run_name=/home/dshteyma/target_draft_coupling_code/target_draft_training/training_outputs,
save_on_each_node=False,
save_only_model=False,
save_safetensors=True,
save_steps=1000,
save_strategy=steps,
save_total_limit=None,
seed=42,
skip_memory_metrics=True,
split_batches=None,
tf32=None,
torch_compile=False,
torch_compile_backend=None,
torch_compile_mode=None,
torchdynamo=None,
tpu_metrics_debug=False,
tpu_num_cores=None,
use_cpu=False,
use_ipex=False,
use_legacy_prediction_loop=False,
use_mps_device=False,
warmup_ratio=0.05,
warmup_steps=0,
weight_decay=0.01,
)
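For reference, the dump above corresponds roughly to the following `TrainingArguments` construction. This is a sketch reconstructed from the logged values, not the actual launcher code; only the non-default settings are shown:

```python
from transformers import TrainingArguments

# Sketch of the key arguments from the dump above (values copied from the log).
training_args = TrainingArguments(
    output_dir="./training_outputs_job_117535_1_05-06_00-42",
    overwrite_output_dir=True,
    do_train=True,
    do_eval=True,
    eval_strategy="steps",        # evaluate every eval_steps
    eval_steps=1000,
    save_strategy="steps",
    save_steps=1000,
    logging_steps=500,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    num_train_epochs=3.0,
    learning_rate=1e-4,
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    weight_decay=0.01,
    optim="adamw_torch",
    seed=42,
    push_to_hub=True,
    hub_model_id="DorinSht/ShareGPT_llama2_68M",
    hub_strategy="checkpoint",
    report_to=["tensorboard"],
)
```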
06/05/2024 00:42:59 - INFO - datasets.builder - Using custom data configuration default-afe4b27d28cbdcb1
06/05/2024 00:42:59 - INFO - datasets.info - Loading Dataset Infos from /home/dshteyma/miniconda3/lib/python3.9/site-packages/datasets/packaged_modules/json
06/05/2024 00:42:59 - INFO - datasets.builder - Overwrite dataset info from restored data version if exists.
06/05/2024 00:42:59 - INFO - datasets.info - Loading Dataset info from /home/dshteyma/.cache/huggingface/datasets/json/default-afe4b27d28cbdcb1/0.0.0/c8d2d9508a2a2067ab02cd118834ecef34c3700d143b31835ec4235bf10109f7
06/05/2024 00:42:59 - INFO - datasets.builder - Found cached dataset json (/home/dshteyma/.cache/huggingface/datasets/json/default-afe4b27d28cbdcb1/0.0.0/c8d2d9508a2a2067ab02cd118834ecef34c3700d143b31835ec4235bf10109f7)
06/05/2024 00:42:59 - INFO - datasets.info - Loading Dataset info from /home/dshteyma/.cache/huggingface/datasets/json/default-afe4b27d28cbdcb1/0.0.0/c8d2d9508a2a2067ab02cd118834ecef34c3700d143b31835ec4235bf10109f7
06/05/2024 00:43:00 - INFO - datasets.builder - Using custom data configuration default-afe4b27d28cbdcb1
06/05/2024 00:43:00 - INFO - datasets.info - Loading Dataset Infos from /home/dshteyma/miniconda3/lib/python3.9/site-packages/datasets/packaged_modules/json
06/05/2024 00:43:00 - INFO - datasets.builder - Overwrite dataset info from restored data version if exists.
06/05/2024 00:43:00 - INFO - datasets.info - Loading Dataset info from /home/dshteyma/.cache/huggingface/datasets/json/default-afe4b27d28cbdcb1/0.0.0/c8d2d9508a2a2067ab02cd118834ecef34c3700d143b31835ec4235bf10109f7
06/05/2024 00:43:00 - INFO - datasets.builder - Found cached dataset json (/home/dshteyma/.cache/huggingface/datasets/json/default-afe4b27d28cbdcb1/0.0.0/c8d2d9508a2a2067ab02cd118834ecef34c3700d143b31835ec4235bf10109f7)
06/05/2024 00:43:00 - INFO - datasets.info - Loading Dataset info from /home/dshteyma/.cache/huggingface/datasets/json/default-afe4b27d28cbdcb1/0.0.0/c8d2d9508a2a2067ab02cd118834ecef34c3700d143b31835ec4235bf10109f7
06/05/2024 00:43:00 - INFO - datasets.builder - Using custom data configuration default-afe4b27d28cbdcb1
06/05/2024 00:43:00 - INFO - datasets.info - Loading Dataset Infos from /home/dshteyma/miniconda3/lib/python3.9/site-packages/datasets/packaged_modules/json
06/05/2024 00:43:00 - INFO - datasets.builder - Overwrite dataset info from restored data version if exists.
06/05/2024 00:43:00 - INFO - datasets.info - Loading Dataset info from /home/dshteyma/.cache/huggingface/datasets/json/default-afe4b27d28cbdcb1/0.0.0/c8d2d9508a2a2067ab02cd118834ecef34c3700d143b31835ec4235bf10109f7
06/05/2024 00:43:00 - INFO - datasets.builder - Found cached dataset json (/home/dshteyma/.cache/huggingface/datasets/json/default-afe4b27d28cbdcb1/0.0.0/c8d2d9508a2a2067ab02cd118834ecef34c3700d143b31835ec4235bf10109f7)
06/05/2024 00:43:00 - INFO - datasets.info - Loading Dataset info from /home/dshteyma/.cache/huggingface/datasets/json/default-afe4b27d28cbdcb1/0.0.0/c8d2d9508a2a2067ab02cd118834ecef34c3700d143b31835ec4235bf10109f7
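The repeated `datasets.builder` messages show `load_dataset("json", ...)` resolving the same custom configuration several times and hitting the local Arrow cache each time rather than rebuilding. A minimal sketch of the loading call, with a hypothetical data file since the actual path never appears in the log:

```python
from datasets import load_dataset

# Hypothetical data_files path; the real JSON path is not shown in the log.
raw_datasets = load_dataset("json", data_files={"train": "sharegpt.json"})

# Calling load_dataset again with identical arguments reuses the cache under
# ~/.cache/huggingface/datasets/json/default-<hash>/ ("Found cached dataset json").
raw_datasets = load_dataset("json", data_files={"train": "sharegpt.json"})
```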
"model_type": "llama", "num_attention_heads": 12, "num_hidden_layers": 2, "num_key_value_heads": 12, "pad_token_id": 1, "pretraining_tp": 1, "rms_norm_eps": 1e-06, "rope_scaling": null, "rope_theta": 10000.0, "tie_word_embeddings": false, "torch_dtype": "float32", "transformers_version": "4.41.0.dev0", "use_cache": true, "vocab_size": 32000 } [INFO|tokenization_utils_base.py:2102] 2024-06-05 00:43:01,386 >> loading file tokenizer.model from cache at /home/dshteyma/.cache/huggingface/hub/models--JackFram--llama-68m/snapshots/964a5d77df908b69f8d6476fb70e940425b04cb5/tokenizer.model [INFO|tokenization_utils_base.py:2102] 2024-06-05 00:43:01,386 >> loading file tokenizer.json from cache at None [INFO|tokenization_utils_base.py:2102] 2024-06-05 00:43:01,386 >> loading file added_tokens.json from cache at None [INFO|tokenization_utils_base.py:2102] 2024-06-05 00:43:01,386 >> loading file special_tokens_map.json from cache at /home/dshteyma/.cache/huggingface/hub/models--JackFram--llama-68m/snapshots/964a5d77df908b69f8d6476fb70e940425b04cb5/special_tokens_map.json [INFO|tokenization_utils_base.py:2102] 2024-06-05 00:43:01,386 >> loading file tokenizer_config.json from cache at /home/dshteyma/.cache/huggingface/hub/models--JackFram--llama-68m/snapshots/964a5d77df908b69f8d6476fb70e940425b04cb5/tokenizer_config.json [WARNING|logging.py:329] 2024-06-05 00:43:01,390 >> You are using the default legacy behaviour of the . This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565 [WARNING|logging.py:329] 2024-06-05 00:43:01,502 >> You are using the default legacy behaviour of the . This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. 
[INFO|configuration_utils.py:936] 2024-06-05 00:43:02,135 >> Generate config GenerationConfig {
  "bos_token_id": 0,
  "eos_token_id": 2,
  "pad_token_id": 1
}

06/05/2024 00:43:03 - INFO - __main__ - Training new model from scratch - Total size=64.88M params
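"Training new model from scratch" means the Llama architecture is instantiated with random weights rather than loaded from the hub checkpoint. Note that `run_clm.py`-style scripts report the size as the parameter count divided by 2**20, which is why 68,030,208 trainable parameters appear as 64.88M. A sketch:

```python
from transformers import AutoConfig, AutoModelForCausalLM

# Instantiate the architecture with freshly initialized weights.
config = AutoConfig.from_pretrained("JackFram/llama-68m")
model = AutoModelForCausalLM.from_config(config)

n_params = model.num_parameters()                     # 68,030,208
print(f"Total size={n_params / 2**20:.2f}M params")   # 64.88M (per-2**20 convention)
```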
06/05/2024 00:43:03 - INFO - datasets.arrow_dataset - Loading cached processed dataset at /home/dshteyma/.cache/huggingface/datasets/json/default-afe4b27d28cbdcb1/0.0.0/c8d2d9508a2a2067ab02cd118834ecef34c3700d143b31835ec4235bf10109f7/cache-988d048fea8d2473.arrow
06/05/2024 00:43:03 - INFO - datasets.arrow_dataset - Loading cached processed dataset at /home/dshteyma/.cache/huggingface/datasets/json/default-afe4b27d28cbdcb1/0.0.0/c8d2d9508a2a2067ab02cd118834ecef34c3700d143b31835ec4235bf10109f7/cache-4e281c930893bca9.arrow
06/05/2024 00:43:03 - INFO - datasets.arrow_dataset - Loading cached processed dataset at /home/dshteyma/.cache/huggingface/datasets/json/default-afe4b27d28cbdcb1/0.0.0/c8d2d9508a2a2067ab02cd118834ecef34c3700d143b31835ec4235bf10109f7/cache-3fe350bccdda6078.arrow
06/05/2024 00:43:03 - INFO - datasets.arrow_dataset - Loading cached processed dataset at /home/dshteyma/.cache/huggingface/datasets/json/default-afe4b27d28cbdcb1/0.0.0/c8d2d9508a2a2067ab02cd118834ecef34c3700d143b31835ec4235bf10109f7/cache-35d09b588a0c62b9.arrow
06/05/2024 00:43:03 - INFO - datasets.arrow_dataset - Loading cached processed dataset at /home/dshteyma/.cache/huggingface/datasets/json/default-afe4b27d28cbdcb1/0.0.0/c8d2d9508a2a2067ab02cd118834ecef34c3700d143b31835ec4235bf10109f7/cache-4e5279ee31a5d8d3.arrow
06/05/2024 00:43:03 - INFO - datasets.arrow_dataset - Loading cached processed dataset at /home/dshteyma/.cache/huggingface/datasets/json/default-afe4b27d28cbdcb1/0.0.0/c8d2d9508a2a2067ab02cd118834ecef34c3700d143b31835ec4235bf10109f7/cache-6a784a78d9818240.arrow
06/05/2024 00:43:04 - INFO - datasets.arrow_dataset - Loading cached processed dataset at /home/dshteyma/.cache/huggingface/datasets/json/default-afe4b27d28cbdcb1/0.0.0/c8d2d9508a2a2067ab02cd118834ecef34c3700d143b31835ec4235bf10109f7/cache-46540f58a00a92bf.arrow
06/05/2024 00:43:04 - INFO - datasets.arrow_dataset - Loading cached processed dataset at /home/dshteyma/.cache/huggingface/datasets/json/default-afe4b27d28cbdcb1/0.0.0/c8d2d9508a2a2067ab02cd118834ecef34c3700d143b31835ec4235bf10109f7/cache-73605724efaea9d2.arrow
06/05/2024 00:43:04 - INFO - datasets.arrow_dataset - Loading cached processed dataset at /home/dshteyma/.cache/huggingface/datasets/json/default-afe4b27d28cbdcb1/0.0.0/c8d2d9508a2a2067ab02cd118834ecef34c3700d143b31835ec4235bf10109f7/cache-83d3df87e1b82021.arrow
06/05/2024 00:43:04 - INFO - datasets.arrow_dataset - Loading cached processed dataset at /home/dshteyma/.cache/huggingface/datasets/json/default-afe4b27d28cbdcb1/0.0.0/c8d2d9508a2a2067ab02cd118834ecef34c3700d143b31835ec4235bf10109f7/cache-efdbb02491aa6344.arrow
06/05/2024 00:43:04 - INFO - datasets.arrow_dataset - Loading cached processed dataset at /home/dshteyma/.cache/huggingface/datasets/json/default-afe4b27d28cbdcb1/0.0.0/c8d2d9508a2a2067ab02cd118834ecef34c3700d143b31835ec4235bf10109f7/cache-0cf2ae38fef927f3.arrow
06/05/2024 00:43:04 - WARNING - accelerate.utils.other - Detected kernel version 5.4.0, which is below the recommended minimum of 5.5.0; this can cause the process to hang. It is recommended to upgrade the kernel to the minimum version or higher.
[INFO|trainer.py:2068] 2024-06-05 00:43:05,738 >> ***** Running training *****
[INFO|trainer.py:2069] 2024-06-05 00:43:05,738 >>   Num examples = 90,745
[INFO|trainer.py:2070] 2024-06-05 00:43:05,738 >>   Num Epochs = 3
[INFO|trainer.py:2071] 2024-06-05 00:43:05,738 >>   Instantaneous batch size per device = 4
[INFO|trainer.py:2073] 2024-06-05 00:43:05,738 >>   Training with DataParallel so batch size has been adjusted to: 24
[INFO|trainer.py:2074] 2024-06-05 00:43:05,738 >>   Total train batch size (w. parallel, distributed & accumulation) = 24
[INFO|trainer.py:2075] 2024-06-05 00:43:05,738 >>   Gradient Accumulation steps = 1
[INFO|trainer.py:2076] 2024-06-05 00:43:05,738 >>   Total optimization steps = 11,346
[INFO|trainer.py:2077] 2024-06-05 00:43:05,738 >>   Number of trainable parameters = 68,030,208
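The header arithmetic is consistent: with `torch.nn.DataParallel` over 6 GPUs the effective train batch is 4 x 6 = 24, and 90,745 examples give ceil(90745/24) = 3,782 steps per epoch, hence 11,346 optimization steps over 3 epochs. Evaluation likewise runs at 8 x 6 = 48, so 1,840 eval examples take 39 batches. A worked check:

```python
import math

per_device_train_batch_size, per_device_eval_batch_size, n_gpu = 4, 8, 6

train_batch = per_device_train_batch_size * n_gpu    # 24
steps_per_epoch = math.ceil(90_745 / train_batch)    # 3,782
total_steps = steps_per_epoch * 3                    # 11,346 optimization steps

eval_batch = per_device_eval_batch_size * n_gpu      # 48
eval_batches = math.ceil(1_840 / eval_batch)         # 39, matching the "0/39" bar
```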
parallel, distributed & accumulation) = 24 [INFO|trainer.py:2075] 2024-06-05 00:43:05,738 >> Gradient Accumulation steps = 1 [INFO|trainer.py:2076] 2024-06-05 00:43:05,738 >> Total optimization steps = 11,346 [INFO|trainer.py:2077] 2024-06-05 00:43:05,738 >> Number of trainable parameters = 68,030,208 0%| | 0/11346 [00:00> ***** Running Evaluation ***** [INFO|trainer.py:3664] 2024-06-05 00:59:54,007 >> Num examples = 1840 [INFO|trainer.py:3667] 2024-06-05 00:59:54,007 >> Batch size = 48 {'loss': 5.1118, 'grad_norm': 0.8546391725540161, 'learning_rate': 8.816009873931059e-05, 'epoch': 0.13} {'loss': 3.406, 'grad_norm': 0.8593688607215881, 'learning_rate': 9.59831475011252e-05, 'epoch': 0.26} 0%| | 0/39 [00:00> Saving model checkpoint to ./training_outputs_job_117535_1_05-06_00-42/checkpoint-1000 [INFO|configuration_utils.py:471] 2024-06-05 01:01:07,888 >> Configuration saved in ./training_outputs_job_117535_1_05-06_00-42/checkpoint-1000/config.json [INFO|configuration_utils.py:705] 2024-06-05 01:01:07,894 >> Configuration saved in ./training_outputs_job_117535_1_05-06_00-42/checkpoint-1000/generation_config.json [INFO|modeling_utils.py:2592] 2024-06-05 01:01:08,771 >> Model weights saved in ./training_outputs_job_117535_1_05-06_00-42/checkpoint-1000/model.safetensors [INFO|tokenization_utils_base.py:2503] 2024-06-05 01:01:08,785 >> tokenizer config file saved in ./training_outputs_job_117535_1_05-06_00-42/checkpoint-1000/tokenizer_config.json [INFO|tokenization_utils_base.py:2512] 2024-06-05 01:01:08,789 >> Special tokens file saved in ./training_outputs_job_117535_1_05-06_00-42/checkpoint-1000/special_tokens_map.json [INFO|tokenization_utils_base.py:2503] 2024-06-05 01:01:11,153 >> tokenizer config file saved in ./training_outputs_job_117535_1_05-06_00-42/tokenizer_config.json [INFO|tokenization_utils_base.py:2512] 2024-06-05 01:01:11,157 >> Special tokens file saved in ./training_outputs_job_117535_1_05-06_00-42/special_tokens_map.json /home/dshteyma/miniconda3/lib/python3.9/site-packages/torch/nn/parallel/_functions.py:68: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector. warnings.warn('Was asked to gather along dimension 0, but all '