# Dataset Card for News Summarization
This dataset card documents the News Summary dataset used to train a T5 model for summarization, with a focus on news articles.
## Dataset Details
### Dataset Description
The News Summary dataset pairs full-length news articles with their corresponding summaries. It was curated to train models that generate concise, informative summaries of longer texts.
- Curated by: Sunny Srinidhi
- Shared by: Kaggle
- Language(s) (NLP): English
- License: Dataset-specific license
### Dataset Sources
- Repository: Kaggle Dataset Link
## Model Description
This T5 model is fine-tuned specifically for the task of summarizing news articles. It leverages the extensive pre-training of the T5 base model and adapts it to generate concise summaries of news content, aiming to maintain the core message and essential details.
## Training Procedure
The model was trained with the `Seq2SeqTrainer` from the Hugging Face Transformers library on a custom news dataset: article–summary pairs were tokenized with the matching T5 tokenizer and used to fine-tune the sequence-to-sequence model.
### Hyperparameters
- Evaluation strategy: `epoch`
- Learning rate: 2e-5
- Train batch size per device: 8
- Eval batch size per device: 8
- Weight decay: 0.01
- Save total limit: 2
- Number of training epochs: 4
- FP16 precision: enabled
- Reporting: none
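Assuming these values map onto `Seq2SeqTrainingArguments` from Transformers (the card does not show the actual training call), the configuration might be reconstructed as follows; the dictionary keys below follow the Transformers argument names:

```python
# Hypothetical reconstruction of the training configuration; the original
# call is not shown in the card, so treat this as a sketch, not the exact setup.
training_kwargs = {
    "evaluation_strategy": "epoch",   # evaluate at the end of each epoch
    "learning_rate": 2e-5,
    "per_device_train_batch_size": 8,
    "per_device_eval_batch_size": 8,
    "weight_decay": 0.01,
    "save_total_limit": 2,            # keep at most 2 checkpoints on disk
    "num_train_epochs": 4,
    "fp16": True,                     # mixed-precision training
    "report_to": "none",              # disable experiment trackers
}

# These would typically be passed as:
# args = Seq2SeqTrainingArguments(output_dir="out", **training_kwargs)
```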
### Training Metrics
| Epoch | Training Loss | Validation Loss | ROUGE-1 (r / p / F1) | ROUGE-2 (r / p / F1) | ROUGE-L (r / p / F1) |
|---|---|---|---|---|---|
| 1 | no log | 1.401181 | 17.15% / 63.80% / 26.83% | 7.86% / 36.02% / 12.81% | 15.90% / 59.24% / 24.89% |
| 2 | 1.594900 | 1.367020 | 17.47% / 65.14% / 27.36% | 8.01% / 36.98% / 13.07% | 16.17% / 60.43% / 25.33% |
| 3 | 1.461500 | 1.354850 | 17.68% / 65.80% / 27.67% | 8.13% / 37.65% / 13.28% | 16.34% / 60.95% / 25.58% |
| 4 | 1.434300 | 1.352294 | 17.77% / 66.08% / 27.81% | 8.25% / 38.09% / 13.47% | 16.45% / 61.30% / 25.75% |
Training output: global step = 1692, training loss = 1.4875, train runtime = 1579.33 s, samples per second = 8.571, steps per second = 1.071, total FLOPs ≈ 8.23e15, epochs = 4.
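The reported throughput figures are internally consistent and can be verified with a little arithmetic, assuming a per-device batch size of 8 on a single device with no gradient accumulation (the hardware setup is not stated in the card):

```python
# Sanity-check the reported throughput numbers from the training output.
global_steps = 1692
runtime_s = 1579.3283
per_device_batch = 8  # from the hyperparameters; single device assumed

steps_per_second = global_steps / runtime_s
samples_per_second = steps_per_second * per_device_batch

print(round(steps_per_second, 3))    # matches the reported 1.071
print(round(samples_per_second, 3))  # matches the reported 8.571
```

At 1692 steps over 4 epochs, each epoch covers 423 optimizer steps, i.e. roughly 3384 training examples per epoch at batch size 8.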
## Uses
### Direct Use
This dataset is primarily used for training and evaluating machine learning models on the summarization task. It is suitable for developing algorithms that require understanding and processing of news-style writing to produce summaries.
## Usage
The model can be used directly via the Hugging Face `pipeline` API for summarization. A sample snippet:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, pipeline

# Load the fine-tuned model and its tokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("t5-news")
tokenizer = AutoTokenizer.from_pretrained("t5-news")

# Create a summarization pipeline
summarizer = pipeline("summarization", model=model, tokenizer=tokenizer)

# Summarize text; the pipeline returns a list of dicts with a "summary_text" key
text = "Your news article text here"
print(summarizer(text)[0]["summary_text"])
```
### Out-of-Scope Use
The dataset may not be suitable for tasks requiring fine-grained sentiment analysis, detailed factual extraction, or tasks outside the English language.
## Evaluation Metrics
The model was evaluated with ROUGE metrics, which measure the n-gram overlap between generated summaries and reference summaries; they are the standard automatic metrics for summarization.
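Each ROUGE score in the table reports recall (r), precision (p), and their harmonic mean F1 = 2pr / (p + r). As a rough check against the epoch-4 ROUGE-1 values above (the table entry likely averages per-example F-scores, so the result differs slightly from plugging in the averaged p and r):

```python
def rouge_f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall, as in ROUGE's F measure."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Epoch-4 ROUGE-1 values from the table above
p, r = 0.6608, 0.1777
f1 = rouge_f1(p, r)
print(round(f1, 4))  # ~0.2801, vs. the reported 27.81% (per-example averaging)
```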
## Conclusion
This T5 model offers a robust solution for summarizing news articles and handles a variety of news formats and content. It is particularly useful for applications that need quick, concise summaries of lengthy articles.