PMC_LLAMA2_7B_trainer_lora / trainer_peft.log
2024-06-01 14:49 - Cuda check
2024-06-01 14:49 - True
2024-06-01 14:49 - 2
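The two lines above are consistent with a basic device probe; a minimal sketch of what such a check looks like (the training script itself is not part of this log):

    import torch

    print(torch.cuda.is_available())   # logged as "True"
    print(torch.cuda.device_count())   # logged as "2"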
2024-06-01 14:49 - Configure Model and tokenizer
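Configuring the model and tokenizer for a LLaMA-2 7B LoRA run typically looks like the following; the checkpoint name and dtype are assumptions, since the log does not record them:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    base = "meta-llama/Llama-2-7b-hf"   # assumed base checkpoint
    tokenizer = AutoTokenizer.from_pretrained(base)
    model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16)  # assumed dtype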
2024-06-01 14:49 - Memory usage: 0.00 GB
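The 0.00 GB reading is what the CUDA allocator reports before anything has been placed on the GPU; one plausible way to produce this line (an assumption, not taken from the script):

    import torch

    gb = torch.cuda.memory_allocated() / 1024**3
    print(f"Memory usage: {gb:.2f} GB")   # 0.00 GB at this point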
2024-06-01 14:49 - Dataset loaded successfully:
train - Jingmei/Pandemic_Wiki
test  - Jingmei/Pandemic
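Loading the two corpora named above would look roughly like this with the datasets library; the split names are assumptions:

    from datasets import DatasetDict, load_dataset

    data = DatasetDict({
        "train": load_dataset("Jingmei/Pandemic_Wiki", split="train"),
        "test":  load_dataset("Jingmei/Pandemic", split="train"),   # assumed split
    })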
2024-06-01 14:49 - Tokenize data: DatasetDict({
train: Dataset({
features: ['input_ids', 'attention_mask'],
num_rows: 2152
})
test: Dataset({
features: ['input_ids', 'attention_mask'],
num_rows: 8264
})
})
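A map over the raw text yields exactly the ['input_ids', 'attention_mask'] features shown above; a sketch continuing the snippets above, assuming the raw column is named "text":

    def tokenize(batch):
        return tokenizer(batch["text"])   # "text" column name is an assumption

    tokenized = data.map(tokenize, batched=True,
                         remove_columns=data["train"].column_names)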
2024-06-01 14:49 - Split data into chunks: DatasetDict({
train: Dataset({
features: ['input_ids', 'attention_mask'],
num_rows: 24863
})
test: Dataset({
features: ['input_ids', 'attention_mask'],
num_rows: 198964
})
})
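The jump in row counts (2,152 -> 24,863 train, 8,264 -> 198,964 test) is what you get from concatenating the token streams and re-cutting them into fixed-length blocks; a sketch of that standard grouping step, with an assumed block size:

    block_size = 512   # assumed; the log does not state the chunk length

    def group_texts(examples):
        # Concatenate every column, then slice into fixed-length blocks.
        concat = {k: sum(examples[k], []) for k in examples}
        total = (len(concat["input_ids"]) // block_size) * block_size
        return {k: [v[i:i + block_size] for i in range(0, total, block_size)]
                for k, v in concat.items()}

    chunked = tokenized.map(group_texts, batched=True)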
2024-06-01 14:49 - Setup PEFT
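The PEFT step presumably wraps the base model with LoRA adapters; a sketch with assumed hyperparameters:

    from peft import LoraConfig, get_peft_model

    lora_cfg = LoraConfig(task_type="CAUSAL_LM", r=16, lora_alpha=32,
                          lora_dropout=0.05)   # all values are assumptions
    model = get_peft_model(model, lora_cfg)
    model.print_trainable_parameters()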
2024-06-01 14:49 - Setup optimizer
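With LoRA in place only the adapter weights require gradients, so the optimizer is usually built over the trainable parameters alone; the learning rate below is an assumption:

    from torch.optim import AdamW

    optimizer = AdamW((p for p in model.parameters() if p.requires_grad), lr=2e-4)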
2024-06-01 14:49 - Start training!!
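Whether training runs through transformers.Trainer or a hand-written loop is not visible in this log; a minimal Trainer-based sketch continuing the snippets above, with every argument value an assumption:

    from transformers import (Trainer, TrainingArguments,
                              DataCollatorForLanguageModeling)

    args = TrainingArguments(output_dir="PMC_LLAMA2_7B_trainer_lora",
                             per_device_train_batch_size=1,
                             save_steps=10, logging_steps=10)
    trainer = Trainer(model=model, args=args,
                      train_dataset=chunked["train"],
                      eval_dataset=chunked["test"],
                      data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
                      optimizers=(optimizer, None))
    trainer.train()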
2024-06-01 14:51 - Cuda check
2024-06-01 14:51 - True
2024-06-01 14:51 - 2
2024-06-01 14:51 - Configure Model and tokenizer
2024-06-01 14:51 - Memory usage: 0.00 GB
2024-06-01 14:51 - Dataset loaded successfully:
train - Jingmei/Pandemic_Wiki
test  - Jingmei/Pandemic
2024-06-01 14:51 - Tokenize data: DatasetDict({
train: Dataset({
features: ['input_ids', 'attention_mask'],
num_rows: 2152
})
test: Dataset({
features: ['input_ids', 'attention_mask'],
num_rows: 8264
})
})
2024-06-01 14:51 - Split data into chunks: DatasetDict({
train: Dataset({
features: ['input_ids', 'attention_mask'],
num_rows: 24863
})
test: Dataset({
features: ['input_ids', 'attention_mask'],
num_rows: 198964
})
})
2024-06-01 14:51 - Setup PEFT
2024-06-01 14:51 - Setup optimizer
2024-06-01 14:51 - Start training!!