---
title: BLANC
sdk: gradio
sdk_version: 3.5
app_file: app.py
pinned: false
tags:
- evaluate
- metric
description: >-
  BLANC is a reference-free metric that evaluates the quality of document
  summaries by measuring how much they improve a pre-trained language model's
  performance on the document's text. It estimates summary quality without
  needing human-written references, using two variations: BLANC-help and
  BLANC-tune.
---
# Metric Card for BLANC
## Metric Description
BLANC is a reference-free evaluation metric designed to estimate the quality of document summaries. It assesses how much a summary helps a pre-trained language model (such as BERT) perform language understanding tasks on the original document's text.
There are two variations of BLANC:
- BLANC-help: The summary is concatenated with document sentences during the inference task. The BLANC-help score is defined as the difference in accuracy between unmasking tokens with the summary and with filler text (same length as the summary but consisting of period symbols). It measures how much the summary boosts the model's performance on masked tokens.
- BLANC-tune: The model is fine-tuned on the summary text before it processes the entire document. The BLANC-tune score is calculated by comparing the performance of the fine-tuned model with that of the original model, both tasked with unmasking tokens in the document text. This method reflects how much the model's ability to understand the document improves after learning from the summary.
Unlike traditional metrics such as ROUGE, BLANC does not require human-written reference summaries, making it fully human-free.
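To make the BLANC-help idea concrete, here is a minimal sketch, assuming `bert-base-uncased` via the `transformers` library. It is not the reference implementation: the official package filters which tokens are eligible for masking (see the `min_token_length_*` parameters below), sweeps masking offsets, and builds the filler from period tokens, all of which are simplified here.

```python
# Minimal BLANC-help sketch: compare masked-token accuracy with the summary
# as context against a same-length filler. Simplified; not the official code.
import torch
from transformers import BertForMaskedLM, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased").eval()

def unmasking_correct(context: str, sentence: str, gap: int = 2):
    """Mask every `gap`-th sentence token, prepend `context`, and count
    how many masked tokens the model restores correctly."""
    ctx = tokenizer(context, add_special_tokens=False)["input_ids"]
    sent = tokenizer(sentence, add_special_tokens=False)["input_ids"]
    ids = [tokenizer.cls_token_id] + ctx + [tokenizer.sep_token_id] + sent + [tokenizer.sep_token_id]
    first = len(ctx) + 2  # index of the first sentence token
    positions = list(range(first, first + len(sent), gap))
    labels = [ids[p] for p in positions]
    for p in positions:
        ids[p] = tokenizer.mask_token_id
    with torch.no_grad():
        logits = model(input_ids=torch.tensor([ids])).logits[0]
    preds = logits[positions].argmax(dim=-1).tolist()
    return sum(p == l for p, l in zip(preds, labels)), len(positions)

document = "Jack drove his minivan to the bazaar to purchase milk and honey for his large family."
summary = "Jack bought milk and honey."
filler = ". " * len(summary.split())  # crude stand-in for the period-token filler

b_sum, n = unmasking_correct(summary, document)
b_fill, _ = unmasking_correct(filler, document)
print((b_sum - b_fill) / n)  # BLANC-help-style score: accuracy gain from the summary
```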
## How to Use
BLANC takes two mandatory arguments: `documents` (a list of documents) and `summaries` (a list of predicted summaries). You can specify which BLANC variation to use via the `blanc_score` parameter: `help` or `tune`.
```python
from evaluate import load

blanc = load("phucdev/blanc_score")
documents = ["Jack drove his minivan to the bazaar to purchase milk and honey for his large family."]
summaries = ["Jack bought milk and honey."]
results = blanc.compute(documents=documents, summaries=summaries, blanc_score="help")
```
## Inputs
Args:
- `documents` (list of str): Documents.
- `summaries` (list of str): Predicted summaries.
- `model_name` (str, optional): BERT model type to use for evaluation. Default is `bert-base-uncased`.
- `measure` (str, optional): Measure type, either `improve` or `relative`, as defined in the BLANC paper. Default is `relative`.
- `blanc_score` (str, optional): BLANC score type, either `help` or `tune`. Default is `help`.
- `gap` (int, optional): Distance between words to mask during inference. Default is `2`.
- `gap_mask` (int, optional): Number of tokens to mask at each designated position during inference. Default is `1`.
- `gap_tune` (int, optional): Distance between words to mask during fine-tuning. Default is `2`.
- `gap_mask_tune` (int, optional): Number of tokens to mask at each designated position during fine-tuning. Default is `1`.
- `min_token_length_normal` (int, optional): Minimum number of characters in normal tokens (whole words) to mask during inference. Default is `4`.
- `min_token_length_lead` (int, optional): Minimum number of characters in lead tokens (first parts of words) to mask during inference. Default is `2`.
- `min_token_length_followup` (int, optional): Minimum number of characters in follow-up tokens (continuations of words) to mask during inference. Default is `100`.
- `min_token_length_normal_tune` (int, optional): Minimum number of characters in normal tokens to mask during fine-tuning. Default is `-1`.
- `min_token_length_lead_tune` (int, optional): Minimum number of characters in lead tokens to mask during fine-tuning. Default is `-1`.
- `min_token_length_followup_tune` (int, optional): Minimum number of characters in follow-up tokens to mask during fine-tuning. Default is `-1`.
- `device` (str, optional): Device to run the model on, either `cpu` or `cuda`. Default is `cpu`.
- `random_seed` (int, optional): Random seed for Python and PyTorch. Default is `0`.
- `inference_batch_size` (int, optional): Batch size for inference. Default is `1`.
- `inference_mask_evenly` (bool, optional): Whether to mask every `gap` tokens during inference (`True`) or to mask randomly with a probability of 0.15 (`False`). Default is `True`.
- `show_progress_bar` (bool, optional): Whether to display a progress bar during computation. Default is `True`.
BLANC-help specific arguments:
- `filler_token` (str, optional): Token to use as filler in lieu of the summary. Default is `.`.
- `help_sep` (str, optional): Token used to separate the summary (or filler) from the sentence, or `''` for no separator. Default is `""`.
BLANC-tune specific arguments:
- `finetune_batch_size` (int, optional): Batch size to use when fine-tuning on the summary. Default is `1`.
- `finetune_epochs` (int, optional): Number of epochs for fine-tuning on the summary. Default is `10`.
- `finetune_mask_evenly` (bool, optional): Whether to mask every `gap` tokens during fine-tuning (`True`) or to mask randomly with a probability of 0.15 (`False`). Default is `True`.
- `finetune_chunk_size` (int, optional): Number of summary tokens to use at a time during fine-tuning. Default is `64`.
- `finetune_chunk_stride` (int, optional): Number of tokens between summary chunks for fine-tuning. Default is `32`.
- `learning_rate` (float, optional): Learning rate for fine-tuning on the summary. Default is `5e-05`.
- `warmup_steps` (int, optional): Number of warmup steps for fine-tuning. Default is `0`.
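All of these optional arguments are passed as keyword arguments to `compute`. A sketch with a few non-default values, chosen purely for illustration:

```python
from evaluate import load

blanc = load("phucdev/blanc_score")
documents = ["Jack drove his minivan to the bazaar to purchase milk and honey for his large family."]
summaries = ["Jack bought milk and honey."]

# The non-default values below are illustrative, not recommendations.
results = blanc.compute(
    documents=documents,
    summaries=summaries,
    blanc_score="help",
    measure="improve",       # raw accuracy difference instead of the default "relative"
    gap=6,                   # mask every 6th word instead of every 2nd
    random_seed=42,
    show_progress_bar=False,
)
```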
## Output Values
The metric outputs a dictionary with a single key (`blanc_help` or `blanc_tune`, depending on the chosen score type) and value:
- `blanc_{help,tune}`: A floating-point score representing the quality of the summary.

The BLANC score typically ranges from 0 (the summary is not helpful) to 0.3 (the summary yields a 30% improvement in the model's performance), although it can theoretically range from -1 to 1. Higher scores indicate better summaries.
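Continuing the snippet from "How to Use", reading the result might look like this (the printed value is hypothetical):

```python
# The key name follows the blanc_score argument: "blanc_help" or "blanc_tune".
results = blanc.compute(documents=documents, summaries=summaries, blanc_score="help")
print(results["blanc_help"])  # hypothetical value, e.g. 0.12 (~12% performance boost)
```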
## Values from Popular Papers
Goyal et al. (2022) compare the performance of different summarization systems using reference-free automatic metrics in *News Summarization and Evaluation in the Era of GPT-3*. For the DailyMail dataset they report the following BLANC scores:
- PEGASUS: 0.1137
- BRIO: 0.1217
- T0: 0.0889
- GPT3-D2: 0.0983
## Examples
BLANC-help:
```python
from evaluate import load

blanc = load("phucdev/blanc_score")
documents = ["Jack drove his minivan to the bazaar to purchase milk and honey for his large family."]
summaries = ["Jack bought milk and honey."]
results = blanc.compute(documents=documents, summaries=summaries, blanc_score="help")
```
BLANC-tune:
```python
from evaluate import load

blanc = load("phucdev/blanc_score")
documents = ["Jack drove his minivan to the bazaar to purchase milk and honey for his large family."]
summaries = ["Jack bought milk and honey."]
results = blanc.compute(
    documents=documents,
    summaries=summaries,
    blanc_score="tune",
    finetune_mask_evenly=False,
    show_progress_bar=False
)
```
By default, BLANC is run on the CPU. Using CUDA with batching is much faster:
BLANC-help:
```python
from evaluate import load

blanc = load("phucdev/blanc_score")
documents = ["Jack drove his minivan to the bazaar to purchase milk and honey for his large family."]
summaries = ["Jack bought milk and honey."]
results = blanc.compute(
    documents=documents,
    summaries=summaries,
    blanc_score="help",
    device="cuda",
    inference_batch_size=128
)
```
BLANC-tune:
```python
from evaluate import load

blanc = load("phucdev/blanc_score")
documents = ["Jack drove his minivan to the bazaar to purchase milk and honey for his large family."]
summaries = ["Jack bought milk and honey."]
results = blanc.compute(
    documents=documents,
    summaries=summaries,
    blanc_score="tune",
    device="cuda",
    inference_batch_size=24,
    finetune_mask_evenly=False,
    finetune_batch_size=24
)
```
## Limitations and Bias
- Summary Length: BLANC tends to favor longer summaries as they generally provide more context and help the model better understand the document.
- No Reference Summaries: BLANC operates without human-written reference summaries, which may be advantageous in certain cases but could lack the nuance that human judgment provides.
- Limited by Language Model: The quality of BLANC scores is influenced by the choice of the underlying pre-trained language model (e.g., BERT), which may introduce biases inherent to the model itself.
## Citation
```bibtex
@inproceedings{vasilyev-etal-2020-fill,
    title = "Fill in the {BLANC}: Human-free quality estimation of document summaries",
    author = "Vasilyev, Oleg and
      Dharnidharka, Vedant and
      Bohannon, John",
    editor = "Eger, Steffen and
      Gao, Yang and
      Peyrard, Maxime and
      Zhao, Wei and
      Hovy, Eduard",
    booktitle = "Proceedings of the First Workshop on Evaluation and Comparison of NLP Systems",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.eval4nlp-1.2",
    doi = "10.18653/v1/2020.eval4nlp-1.2",
    pages = "11--20",
    abstract = "We present BLANC, a new approach to the automatic estimation of document summary quality. Our goal is to measure the functional performance of a summary with an objective, reproducible, and fully automated method. Our approach achieves this by measuring the performance boost gained by a pre-trained language model with access to a document summary while carrying out its language understanding task on the document{'}s text. We present evidence that BLANC scores have as good correlation with human evaluations as do the ROUGE family of summary quality measurements. And unlike ROUGE, the BLANC method does not require human-written reference summaries, allowing for fully human-free summary quality estimation.",
}
```