PMC_LLAMA2_7B_trainer_lora / trainer_peft.log
End of training
fa407c9 verified
2024-06-01 14:49 - Cuda check
2024-06-01 14:49 - True
2024-06-01 14:49 - 2
2024-06-01 14:49 - Configure Model and tokenizer
2024-06-01 14:49 - Memory usage: 0.00 GB
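The startup lines above (a timestamp, then the bare values `True` and `2` on their own lines) are consistent with a `%(asctime)s - %(message)s` logger and per-value `torch.cuda` calls. A minimal sketch, assuming PyTorch and this log format (the logger name is illustrative):

```python
import logging

# Assumed format: "2024-06-01 14:49 - <message>", matching the log above.
logging.basicConfig(format="%(asctime)s - %(message)s",
                    datefmt="%Y-%m-%d %H:%M",
                    level=logging.INFO)
log = logging.getLogger("trainer_peft")

def log_cuda_status():
    # Assumption: torch is installed; each value is logged on its own
    # line, which is why "True" and "2" appear as separate entries.
    import torch
    log.info("Cuda check")
    log.info(torch.cuda.is_available())   # True in this run
    log.info(torch.cuda.device_count())   # 2 GPUs in this run
```

The doubled entries elsewhere in the raw file come from two DDP ranks writing to the same log.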
2024-06-01 14:49 - Dataset loaded successfully:
    train - Jingmei/Pandemic_Wiki
    test  - Jingmei/Pandemic
2024-06-01 14:49 - Tokenize data: DatasetDict({
    train: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 2152
    })
    test: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 8264
    })
})
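The tokenize step replaces the raw text columns with `input_ids` and `attention_mask` while preserving the row counts (2152 train, 8264 test). A toy sketch of that mapping, with a made-up whitespace vocabulary standing in for the real LLaMA tokenizer (names here are illustrative, not from the training script):

```python
def tokenize_batch(texts, vocab):
    # Map each text row to token ids plus an all-ones attention mask,
    # mirroring the ['input_ids', 'attention_mask'] features in the log.
    # vocab is a toy word->id dict standing in for the real tokenizer.
    input_ids, attention_mask = [], []
    for text in texts:
        ids = [vocab.setdefault(w, len(vocab)) for w in text.split()]
        input_ids.append(ids)
        attention_mask.append([1] * len(ids))
    return {"input_ids": input_ids, "attention_mask": attention_mask}
```

In the real pipeline this would run via `Dataset.map(..., batched=True)`, which is why the row counts are unchanged at this stage.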
2024-06-01 14:49 - Split data into chunks: DatasetDict({
    train: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 24863
    })
    test: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 198964
    })
})
2024-06-01 14:49 - Setup PEFT
2024-06-01 14:49 - Setup optimizer
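The jump from 2,152 tokenized documents to 24,863 training rows (and 8,264 to 198,964 for test) is characteristic of concatenate-then-chunk grouping: all token sequences are joined and re-split into fixed-size blocks. A sketch, with `block_size` as an assumption (the actual value is not in the log):

```python
from itertools import chain

def group_into_chunks(batch, block_size=512):
    # Concatenate every sequence in each column, drop the ragged tail,
    # and slice into fixed-length blocks; the row count becomes
    # total_tokens // block_size, which is why it grows ~11x here.
    joined = {k: list(chain.from_iterable(v)) for k, v in batch.items()}
    total = (len(joined["input_ids"]) // block_size) * block_size
    return {
        k: [v[i:i + block_size] for i in range(0, total, block_size)]
        for k, v in joined.items()
    }
```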
2024-06-01 14:49 - Start training!!
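"Setup PEFT" above most likely wraps the base model with LoRA adapters (the repo name ends in `_lora`). The core update is y = xW + (alpha/r)·xAB, where only the two low-rank matrices A and B are trained; a dependency-free sketch of that math (shapes and scaling follow the standard LoRA convention, not anything read from the script):

```python
def matmul(X, Y):
    # Plain list-of-lists matrix multiply (stand-in for torch ops).
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_forward(x, W, A, B, alpha, r):
    # LoRA: y = x @ W + (alpha / r) * (x @ A) @ B, where only A and B
    # (rank r) are trained; W stays frozen, so the optimizer set up in
    # the log only sees the small adapter matrices.
    base = matmul(x, W)
    delta = matmul(matmul(x, A), B)
    scale = alpha / r
    return [[b + scale * d for b, d in zip(brow, drow)]
            for brow, drow in zip(base, delta)]
```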
2024-06-01 14:51 - Cuda check
2024-06-01 14:51 - True
2024-06-01 14:51 - 2
2024-06-01 14:51 - Configure Model and tokenizer
2024-06-01 14:51 - Memory usage: 0.00 GB
2024-06-01 14:51 - Dataset loaded successfully:
    train - Jingmei/Pandemic_Wiki
    test  - Jingmei/Pandemic
2024-06-01 14:51 - Tokenize data: DatasetDict({
    train: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 2152
    })
    test: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 8264
    })
})
2024-06-01 14:51 - Split data into chunks: DatasetDict({
    train: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 24863
    })
    test: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 198964
    })
})
2024-06-01 14:51 - Setup PEFT
2024-06-01 14:51 - Setup optimizer
2024-06-01 14:51 - Start training!!
2024-06-01 15:49 - Training complete!!!
2024-06-01 20:49 - Cuda check
2024-06-01 20:49 - True
2024-06-01 20:49 - 2
2024-06-01 20:49 - Configure Model and tokenizer
2024-06-01 20:49 - Memory usage: 0.00 GB
2024-06-01 20:49 - Dataset loaded successfully:
    train - Jingmei/Pandemic_Wiki
    test  - Jingmei/Pandemic
2024-06-01 20:49 - Tokenize data: DatasetDict({
    train: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 2152
    })
    test: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 8264
    })
})
2024-06-01 20:49 - Split data into chunks: DatasetDict({
    train: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 24863
    })
    test: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 198964
    })
})
2024-06-01 20:49 - Setup PEFT
2024-06-01 20:49 - Setup optimizer
2024-06-01 20:49 - Start training!!
2024-06-01 20:55 - Cuda check
2024-06-01 20:55 - True
2024-06-01 20:55 - 2
2024-06-01 20:55 - Configure Model and tokenizer
2024-06-01 20:55 - Memory usage: 0.00 GB
2024-06-01 20:55 - Dataset loaded successfully:
    train - Jingmei/Pandemic_Wiki
    test  - Jingmei/Pandemic
2024-06-01 20:55 - Tokenize data: DatasetDict({
    train: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 2152
    })
    test: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 8264
    })
})
2024-06-01 20:55 - Split data into chunks: DatasetDict({
    train: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 24863
    })
    test: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 198964
    })
})
2024-06-01 20:55 - Setup PEFT
2024-06-01 20:55 - Setup optimizer
2024-06-01 20:55 - Continue training!!
2024-06-01 20:56 - Training complete!!!
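The "Continue training" runs from here on (each only a few minutes long) point to resuming from a saved checkpoint rather than starting over. transformers' `Trainer` names checkpoints `checkpoint-<step>`; a sketch of locating the newest one (the directory layout is assumed, not shown in this log):

```python
import os
import re

def latest_checkpoint(output_dir):
    # Pick the highest-step "checkpoint-<N>" subdirectory, the layout
    # transformers uses, so training can resume where it stopped.
    pattern = re.compile(r"checkpoint-(\d+)")
    best_step, best_path = -1, None
    for name in os.listdir(output_dir):
        m = pattern.fullmatch(name)
        if m and os.path.isdir(os.path.join(output_dir, name)):
            step = int(m.group(1))
            if step > best_step:
                best_step = step
                best_path = os.path.join(output_dir, name)
    return best_path
```

With `Trainer`, passing `resume_from_checkpoint=True` to `train()` performs an equivalent search automatically.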
2024-06-01 20:58 - Cuda check
2024-06-01 20:58 - True
2024-06-01 20:58 - 2
2024-06-01 20:58 - Configure Model and tokenizer
2024-06-01 20:58 - Memory usage: 0.00 GB
2024-06-01 20:58 - Dataset loaded successfully:
    train - Jingmei/Pandemic_Wiki
    test  - Jingmei/Pandemic
2024-06-01 20:58 - Tokenize data: DatasetDict({
    train: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 2152
    })
    test: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 8264
    })
})
2024-06-01 20:58 - Split data into chunks: DatasetDict({
    train: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 24863
    })
    test: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 198964
    })
})
2024-06-01 20:58 - Setup PEFT
2024-06-01 20:58 - Setup optimizer
2024-06-01 20:58 - Continue training!!
2024-06-01 20:59 - Training complete!!!
2024-06-01 21:04 - Cuda check
2024-06-01 21:04 - True
2024-06-01 21:04 - 2
2024-06-01 21:04 - Configure Model and tokenizer
2024-06-01 21:04 - Memory usage: 0.00 GB
2024-06-01 21:04 - Dataset loaded successfully:
    train - Jingmei/Pandemic_Wiki
    test  - Jingmei/Pandemic
2024-06-01 21:04 - Tokenize data: DatasetDict({
    train: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 2152
    })
    test: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 8264
    })
})
2024-06-01 21:04 - Split data into chunks: DatasetDict({
    train: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 24863
    })
    test: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 198964
    })
})
2024-06-01 21:04 - Setup PEFT
2024-06-01 21:04 - Setup optimizer
2024-06-01 21:05 - Continue training!!
2024-06-01 21:05 - Training complete!!!
2024-06-01 21:07 - Cuda check
2024-06-01 21:07 - True
2024-06-01 21:07 - 2
2024-06-01 21:07 - Configure Model and tokenizer
2024-06-01 21:07 - Memory usage: 0.00 GB
2024-06-01 21:07 - Dataset loaded successfully:
    train - Jingmei/Pandemic_ACDC
    test  - Jingmei/Pandemic
2024-06-01 21:07 - Tokenize data: DatasetDict({
    train: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 625
    })
    test: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 8264
    })
})
2024-06-01 21:07 - Split data into chunks: DatasetDict({
    train: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 3938
    })
    test: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 198964
    })
})
2024-06-01 21:07 - Setup PEFT
2024-06-01 21:07 - Setup optimizer
2024-06-01 21:07 - Continue training!!
2024-06-01 21:08 - Training complete!!!
2024-06-01 21:09 - Cuda check
2024-06-01 21:09 - True
2024-06-01 21:09 - 2
2024-06-01 21:09 - Configure Model and tokenizer
2024-06-01 21:09 - Memory usage: 0.00 GB
2024-06-01 21:09 - Dataset loaded successfully:
    train - Jingmei/Pandemic_ACDC
    test  - Jingmei/Pandemic
2024-06-01 21:09 - Tokenize data: DatasetDict({
    train: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 625
    })
    test: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 8264
    })
})
2024-06-01 21:09 - Split data into chunks: DatasetDict({
    train: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 3938
    })
    test: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 198964
    })
})
2024-06-01 21:09 - Setup PEFT
2024-06-01 21:09 - Setup optimizer
2024-06-01 21:09 - Continue training!!
2024-06-01 21:19 - Training complete!!!
2024-06-01 21:20 - Cuda check
2024-06-01 21:20 - True
2024-06-01 21:20 - 2
2024-06-01 21:20 - Configure Model and tokenizer
2024-06-01 21:20 - Memory usage: 0.00 GB
2024-06-01 21:20 - Dataset loaded successfully:
    train - Jingmei/Pandemic_ACDC
    test  - Jingmei/Pandemic
2024-06-01 21:20 - Tokenize data: DatasetDict({
    train: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 625
    })
    test: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 8264
    })
})
2024-06-01 21:20 - Split data into chunks: DatasetDict({
    train: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 3938
    })
    test: Dataset({
        features: ['input_ids', 'attention_mask'],
        num_rows: 198964
    })
})
2024-06-01 21:20 - Setup PEFT
2024-06-01 21:20 - Setup optimizer
2024-06-01 21:20 - Continue training!!
2024-06-01 21:21 - Training complete!!!