---
license: apache-2.0
base_model: google-t5/t5-small
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-finetuned-billsum
  results: []
datasets:
- Helsinki-NLP/opus_books
language:
- en
pipeline_tag: summarization
---

# t5-small-finetuned-billsum

This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on a custom dataset related to legislative bill summarization. It is optimized for generating concise summaries of legislative bills and similar documents.

## Model Details

- **Model Name:** t5-small-finetuned-billsum
- **Base Model:** [google-t5/t5-small](https://huggingface.co/google-t5/t5-small)
- **Model Type:** Transformer-based Text-to-Text Generation Model
- **Fine-tuned on:** Legislative bill texts

### Model Description

This model uses the T5 (Text-to-Text Transfer Transformer) architecture, which casts every NLP task as a text-to-text problem, so a single encoder-decoder model can handle a wide range of understanding and generation tasks. T5-small is a smaller variant of T5, trading some quality for lower computational cost. This fine-tuned version is trained specifically to summarize legislative bills, condensing the essential provisions of a bill into a short summary.
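
To make the text-to-text framing concrete: the task is signalled by a plain-text prefix on the input, and the summary comes back as plain text. A minimal sketch, assuming the conventional T5 `"summarize: "` prefix was used during fine-tuning (the repo id is taken from the Usage section below):

```python
from transformers import AutoTokenizer

# Assumption: the model follows T5's convention of a "summarize: " task prefix.
tokenizer = AutoTokenizer.from_pretrained("ashaduzzaman/t5-small-finetuned-billsum")

bill_text = "Section 1. Short title. This Act may be cited as the Example Act."
encoded = tokenizer("summarize: " + bill_text, truncation=True, max_length=512)
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"])[:8])
```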

### Intended Uses & Limitations

**Intended Uses:**
- Summarizing legislative bills and related legal documents.
- Extracting key information from long legal texts.
- Assisting in the quick review of bill content for policymakers, legal professionals, and researchers.

**Limitations:**
- The model may not capture all nuances of highly complex legal language.
- It may omit important details if they are not prevalent in the training data.
- It is not designed for tasks outside summarization of legislative content.
- The quality of summaries depends on the quality and relevance of the input data.

### Training and Evaluation Data

The model was fine-tuned on a dataset of legislative bill texts. The exact training corpus is not documented here; the model name suggests the publicly available BillSum dataset of US Congressional and California bill summaries. The ROUGE scores reported below measure summarization quality on the held-out evaluation set.

### Evaluation Results

The model achieved the following results on the evaluation set:

- **Loss:** 2.5533
- **ROUGE-1:** 0.1356
- **ROUGE-2:** 0.0495
- **ROUGE-L:** 0.1144
- **ROUGE-Lsum:** 0.1144
- **Generated Summary Length (Gen Len):** 19.0

These scores suggest moderate summarization performance, with room for improvement in capturing more comprehensive content.
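
For reference, ROUGE scores like those above can be computed for any set of generated summaries with the `evaluate` library; a minimal sketch with placeholder texts:

```python
import evaluate

# Minimal sketch: ROUGE between generated and reference summaries (placeholder strings).
rouge = evaluate.load("rouge")

predictions = ["The bill establishes a grant program for rural broadband."]
references = ["This bill creates a federal grant program to expand rural broadband access."]

scores = rouge.compute(predictions=predictions, references=references, use_stemmer=True)
print(scores)  # {'rouge1': ..., 'rouge2': ..., 'rougeL': ..., 'rougeLsum': ...}
```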

### Training Procedure

The model was trained using the following hyperparameters and setup:

#### Training Hyperparameters

- **Learning Rate:** 2e-05
- **Training Batch Size:** 16
- **Evaluation Batch Size:** 16
- **Random Seed:** 42
- **Optimizer:** Adam (betas=(0.9, 0.999), epsilon=1e-08)
- **Learning Rate Scheduler:** Linear
- **Number of Epochs:** 3
- **Mixed Precision Training:** Native AMP (Automatic Mixed Precision)
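
For reference, these settings correspond roughly to the following `Seq2SeqTrainingArguments`; this is a reconstruction from the list above, not the exact training script (data loading, tokenization, and the `Seq2SeqTrainer` setup are omitted):

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of training arguments matching the hyperparameters listed above.
training_args = Seq2SeqTrainingArguments(
    output_dir="t5-small-finetuned-billsum",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    fp16=True,                    # native AMP mixed precision
    eval_strategy="epoch",
    predict_with_generate=True,   # generate summaries during eval so ROUGE can be computed
)
```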

#### Training Results

| Training Loss | Epoch | Step | Validation Loss | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-Lsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:----------:|:-------:|
| No log        | 1.0   | 62   | 2.6711          | 0.1308  | 0.0445  | 0.1107  | 0.1109     | 19.0    |
| No log        | 2.0   | 124  | 2.5761          | 0.1338  | 0.0483  | 0.1137  | 0.1137     | 19.0    |
| No log        | 3.0   | 186  | 2.5533          | 0.1356  | 0.0495  | 0.1144  | 0.1144     | 19.0    |

### Framework Versions

- **Transformers:** 4.42.4
- **PyTorch:** 2.3.1+cu121
- **Datasets:** 2.21.0
- **Tokenizers:** 0.19.1

### Ethical Considerations

- **Bias:** The model's summaries might reflect biases present in the training data, potentially affecting the representation of different topics or perspectives.
- **Data Privacy:** Ensure that the use of the model complies with data privacy regulations, especially when using it on sensitive or proprietary legislative documents.

### Future Improvements

- Training on a larger and more diverse dataset of legislative texts could improve summarization quality.
- Fine-tuning further with domain-specific data may help capture nuanced legal language better.
- Incorporating additional evaluation metrics like BERTScore can provide a more comprehensive understanding of the model's performance.
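
For the last point, a sketch of how BERTScore could be computed alongside ROUGE with the same `evaluate` library (placeholder texts again):

```python
import evaluate

# Sketch: BERTScore as a complementary, embedding-based evaluation metric.
bertscore = evaluate.load("bertscore")

predictions = ["The bill establishes a grant program for rural broadband."]
references = ["This bill creates a federal grant program to expand rural broadband access."]

results = bertscore.compute(predictions=predictions, references=references, lang="en")
print(results["precision"], results["recall"], results["f1"])
```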

### Usage

You can use this model with the Hugging Face `pipeline` API for summarization:

```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="ashaduzzaman/t5-small-finetuned-billsum"
)

# Example usage: summarize a (placeholder) legislative passage.
input_text = "This is a long passage of legislative text that needs to be summarized."
summary = summarizer(input_text)
print(summary[0]["summary_text"])
```
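
If you need finer control over decoding (beam search, summary length), the model can also be loaded directly. A minimal sketch, again assuming the conventional T5 `"summarize: "` prefix:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "ashaduzzaman/t5-small-finetuned-billsum"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

bill_text = "Section 1. Short title. This Act may be cited as the Example Act."
inputs = tokenizer("summarize: " + bill_text, return_tensors="pt", truncation=True, max_length=512)

# Beam search with an explicit cap on the summary length.
output_ids = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```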