---
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
model-index:
- name: llawma-sum-2-7b
results: []
datasets:
- dreamproit/bill_summary_us
language:
- en
---
# llawma-sum-2-7b
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the [bill_summary_us](https://huggingface.co/datasets/dreamproit/bill_summary_us) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6796
## Model description
This model has been fine-tuned from the Llama 2 7B base model for legal summarization tasks, specifically summarizing US congressional bills.
## Intended uses & limitations
The model is intended for summarization of legal text. It can produce repeating text when generating longer outputs, and it has been tested only with English and the bill_summary_us dataset.
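For quick experimentation, a minimal inference sketch is shown below. The repo id `W3bsurf/llawma-sum-2-7b` and the plain instruction-style prompt are assumptions; this card does not document the exact prompt format used during fine-tuning.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "W3bsurf/llawma-sum-2-7b"  # assumed repo id; adjust to the actual Hub path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Plain instruction-style prompt; the prompt format used during fine-tuning
# is not documented in this card.
prompt = "Summarize the following bill:\n\n<bill text here>\n\nSummary:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# repetition_penalty is one way to mitigate the repeating-text issue noted above
outputs = model.generate(**inputs, max_new_tokens=256, repetition_penalty=1.1)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```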
## Training and evaluation data
Training and evaluation data come from the bill_summary_us dataset. A subset of roughly 1,500 rows was used, split 80:20 into training and evaluation sets.
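A sketch of how such a split can be reproduced with the `datasets` library, assuming the subset is simply the first ~1,500 rows (the card does not say how the rows were selected):

```python
from datasets import load_dataset

dataset = load_dataset("dreamproit/bill_summary_us", split="train")
subset = dataset.select(range(1500))                      # ~1,500-row subset (assumed selection)
splits = subset.train_test_split(test_size=0.2, seed=42)  # 80:20 train/eval split
train_data, eval_data = splits["train"], splits["test"]
```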
## Training procedure
The SFTTrainer from Hugging Face's TRL library was used for the fine-tuning process; a sketch of the setup follows the hyperparameter list below.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
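Below is a minimal sketch of how a training run with these hyperparameters can be set up with TRL's `SFTTrainer`. The dataset text field and `max_seq_length` are assumptions not documented in this card; the Adam betas and epsilon listed above are the `TrainingArguments` defaults, so they need no explicit arguments.

```python
from transformers import TrainingArguments
from trl import SFTTrainer

args = TrainingArguments(
    output_dir="llawma-sum-2-7b",
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    num_train_epochs=1,
    evaluation_strategy="steps",
    eval_steps=70,  # matches the evaluation cadence in the results table below
)

trainer = SFTTrainer(
    model="meta-llama/Llama-2-7b-hf",  # base model id; SFTTrainer also loads its tokenizer
    args=args,
    train_dataset=train_data,          # splits from the data-preparation sketch above
    eval_dataset=eval_data,
    dataset_text_field="text",         # assumed field name, not documented in this card
    max_seq_length=1024,               # assumed value, not documented in this card
)
trainer.train()
```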
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.6836 | 0.23 | 70 | 0.7197 |
| 0.651 | 0.46 | 140 | 0.7030 |
| 0.7863 | 0.70 | 210 | 0.6962 |
| 0.6061 | 0.93 | 280 | 0.6796 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
## License
Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.