
Table of Contents

  1. Model Details
  2. Usage
  3. Training Details
  4. Training Results
  5. Citation
  6. Author

Model Details

This model is a variant of the facebook/bart-base model, fine-tuned specifically for the task of text summarization. It aims to generate concise, coherent, and informative summaries from long text documents, leveraging BART's bidirectional (BERT-like) encoder and autoregressive (GPT-like) decoder.

Usage

This model is intended for use in summarizing long-form texts into concise, informative abstracts. It's particularly useful for professionals and researchers who need to quickly grasp the essence of detailed reports, research papers, or articles without reading the entire text.

Get Started

Install with pip:

pip install transformers

Use in Python:

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline

model_name = "KipperDev/bart_summarizer_model"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
summarizer = pipeline("summarization", model=model, tokenizer=tokenizer)

# Example usage: the "summarize: " prefix must be prepended to the input
prefix = "summarize: "
input_text = "Your input text here."

# Option 1: use the pipeline
summary = summarizer(prefix + input_text)[0]["summary_text"]
print(summary)

# Option 2: call the model directly
input_ids = tokenizer.encode(prefix + input_text, return_tensors="pt")
summary_ids = model.generate(input_ids)
summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(summary)

NOTE THAT FOR THE MODEL TO WORK AS INTENDED, YOU NEED TO PREPEND THE 'summarize: ' PREFIX TO THE INPUT DATA

Training Details

Training Data

The model was trained using the Big Patent Dataset, comprising 1.3 million US patent documents and their corresponding human-written summaries. This dataset was chosen for its rich language and complex structure, representative of the challenging nature of document summarization tasks.

Training involved multiple subsets of the dataset to ensure broad coverage and robust model performance across varied document types.
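As a quick illustration, the snippet below loads one subset of the dataset with the Hugging Face datasets library. The subset code "d" is an arbitrary example, and the field names (description, abstract) follow the public big_patent dataset card; depending on your datasets version, loading may require extra arguments.

from datasets import load_dataset

# Subsets are keyed by CPC section code ("a" through "h", plus "y", or "all")
dataset = load_dataset("big_patent", "d", split="train")

example = dataset[0]
print(example["description"][:500])  # full patent text (model input)
print(example["abstract"])           # human-written summary (target)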

Training Procedure

Training was conducted over three rounds. The first round used a learning rate of 0.00002, a batch size of 8, and 4 epochs; subsequent rounds adjusted these to a learning rate of 0.0003, a batch size of 8, and 12 epochs to further refine performance. A linear-decay learning rate schedule was applied throughout to improve learning efficiency over time.
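For illustration, the sketch below shows how a fine-tuning round of this kind could be set up with the Hugging Face Seq2SeqTrainer, using the hyperparameters described above. The subset choice, preprocessing lengths, and output path are assumptions for the example, not the exact recipe used to produce this model.

from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
)

base_model = "facebook/bart-base"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSeq2SeqLM.from_pretrained(base_model)

raw = load_dataset("big_patent", "d")  # one subset, for illustration only

def preprocess(batch):
    # Prepend the same "summarize: " prefix used at inference time
    inputs = ["summarize: " + doc for doc in batch["description"]]
    model_inputs = tokenizer(inputs, max_length=1024, truncation=True)
    labels = tokenizer(text_target=batch["abstract"], max_length=256, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = raw.map(preprocess, batched=True, remove_columns=raw["train"].column_names)

training_args = Seq2SeqTrainingArguments(
    output_dir="bart_summarizer_model",   # hypothetical output path
    learning_rate=2e-5,                   # first round; later rounds used 0.0003
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=4,                   # later rounds trained for 12 epochs
    lr_scheduler_type="linear",           # linear-decay learning rate schedule
    evaluation_strategy="epoch",
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    tokenizer=tokenizer,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)

trainer.train()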

Training Results

Model performance was evaluated using the ROUGE metric, highlighting its capability to generate summaries closely aligned with human-written abstracts.

Metric                              | Value
------------------------------------|----------
Evaluation Loss                     | 1.9244
ROUGE-1                             | 0.5007
ROUGE-2                             | 0.2704
ROUGE-L                             | 0.3627
ROUGE-Lsum                          | 0.3636
Average Generation Length (Gen Len) | 122.1489
Runtime (seconds)                   | 1459.3826
Samples per Second                  | 1.312
Steps per Second                    | 0.164
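For reference, the sketch below shows how ROUGE scores of this kind can be computed with the evaluate library (which also requires the rouge_score package). The generation settings and toy inputs are assumptions for illustration, not the exact evaluation configuration behind the table above.

import torch
import evaluate
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "KipperDev/bart_summarizer_model"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

rouge = evaluate.load("rouge")

def summarize(text):
    # The "summarize: " prefix is required, as noted above
    inputs = tokenizer("summarize: " + text, return_tensors="pt",
                       truncation=True, max_length=1024)
    with torch.no_grad():
        output_ids = model.generate(**inputs, max_length=256, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

documents = ["Full patent description text here..."]        # model inputs
references = ["The corresponding human-written abstract."]  # gold summaries

predictions = [summarize(doc) for doc in documents]
scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # rouge1, rouge2, rougeL, rougeLsum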

Citation

BibTeX:

@article{kipper_t5_summarizer,
 // SOON
}

Author

This model card was written by Fernanda Kipper.
