---
title: BLANC
sdk: gradio
sdk_version: 3.50
app_file: app.py
pinned: false
tags:
- evaluate
- metric
description: "BLANC is a reference-free metric that evaluates the quality of document summaries by measuring how much they improve a pre-trained language model's performance on the document's text. It estimates summary quality without needing human-written references, using two variations: BLANC-help and BLANC-tune."
---
# Metric Card for BLANC

## Metric Description

BLANC is a reference-free evaluation metric designed to estimate the quality of document summaries. It assesses how much a summary helps a pre-trained language model (such as BERT) perform language understanding tasks on the original document's text.

There are two variations of BLANC:

1. BLANC-help: The summary is concatenated with each document sentence during the inference task. The BLANC-help score is defined as the difference in accuracy between unmasking tokens with the summary and with filler text (the same length as the summary but consisting of period symbols). It measures how much the summary boosts the model's performance on masked tokens.
2. BLANC-tune: The model is fine-tuned on the summary text before it processes the entire document. The BLANC-tune score is calculated by comparing the performance of the fine-tuned model with that of the original model, both tasked with unmasking tokens in the document text. This reflects how much the model's ability to understand the document improves after learning from the summary.

Unlike traditional metrics such as ROUGE, BLANC does not require human-written reference summaries, making it fully human-free.
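
To make the mechanics concrete, here is a minimal sketch of the BLANC-help idea using Hugging Face `transformers`. It is not the reference implementation: it treats the whole document as one sentence, skips the minimum-token-length filters described under Inputs below, and approximates the filler as one period per summary word.

```python
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased").eval()

def unmask_accuracy(prefix: str, sentence: str, gap: int = 2):
    """Mask every `gap`-th sentence token (rotating the offset so each token
    is masked exactly once) and count how many the model reconstructs."""
    prefix_ids = tokenizer.encode(prefix, add_special_tokens=False)
    sent_ids = tokenizer.encode(sentence, add_special_tokens=False)
    ids = [tokenizer.cls_token_id] + prefix_ids + sent_ids + [tokenizer.sep_token_id]
    offset = 1 + len(prefix_ids)  # index of the first sentence token
    correct, total = 0, 0
    for start in range(gap):
        positions = range(offset + start, offset + len(sent_ids), gap)
        masked = list(ids)
        for pos in positions:
            masked[pos] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(torch.tensor([masked])).logits[0]
        for pos in positions:
            total += 1
            correct += int(logits[pos].argmax().item() == ids[pos])
    return correct, total

document = "Jack drove his minivan to the bazaar to purchase milk and honey for his large family."
summary = "Jack bought milk and honey."
filler = " ".join(["."] * len(summary.split()))  # rough stand-in for the period filler

with_summary, n = unmask_accuracy(summary, document)
with_filler, _ = unmask_accuracy(filler, document)
print((with_summary - with_filler) / n)  # the "relative" BLANC-help measure
```

BLANC-tune follows the same comparison, except that instead of prepending the summary at inference time, a copy of the model is first fine-tuned on masked summary text and its unmasking accuracy on the bare document is compared against the original model's.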
## How to Use

BLANC takes two mandatory arguments: `documents` (a list of documents) and `summaries` (a list of predicted summaries).
You can specify which BLANC variation to use via the `blanc_score` parameter: `help` or `tune`.

```python
from evaluate import load
blanc = load("phucdev/blanc_score")
documents = ["Jack drove his minivan to the bazaar to purchase milk and honey for his large family."]
summaries = ["Jack bought milk and honey."]
results = blanc.compute(documents=documents, summaries=summaries, blanc_score="help")
```
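
Each summary is scored against the document at the same position in the list, so `documents` and `summaries` should have the same length. The result is a dictionary keyed by the chosen variation; here the score is read as `results["blanc_help"]` (see Output Values below).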
### Inputs

Args:

- `documents` (_list of str_): Documents.
- `summaries` (_list of str_): Predicted summaries.
- `model_name` (_str, optional_): BERT model type to use for evaluation. Default is `bert-base-uncased`.
- `measure` (_str, optional_): Measure type, either `improve` or `relative`, as defined in the BLANC paper. Default is `relative`.
- `blanc_score` (_str, optional_): BLANC score type, either `help` or `tune`. Default is `help`.
- `gap` (_int, optional_): Distance between words to mask during inference. Default is `2`.
- `gap_mask` (_int, optional_): Number of tokens to mask at each designated position during inference. Default is `1`.
- `gap_tune` (_int, optional_): Distance between words to mask during fine-tuning. Default is `2`.
- `gap_mask_tune` (_int, optional_): Number of tokens to mask at each designated position during fine-tuning. Default is `1`.
- `min_token_length_normal` (_int, optional_): Minimum number of characters in normal tokens (whole words) to mask during inference. Default is `4`.
- `min_token_length_lead` (_int, optional_): Minimum number of characters in lead tokens (first parts of words) to mask during inference. Default is `2`.
- `min_token_length_followup` (_int, optional_): Minimum number of characters in follow-up tokens (continuations of words) to mask during inference. Default is `100`.
- `min_token_length_normal_tune` (_int, optional_): Minimum number of characters in normal tokens to mask during fine-tuning. Default is `-1`.
- `min_token_length_lead_tune` (_int, optional_): Minimum number of characters in lead tokens to mask during fine-tuning. Default is `-1`.
- `min_token_length_followup_tune` (_int, optional_): Minimum number of characters in follow-up tokens to mask during fine-tuning. Default is `-1`.
- `device` (_str, optional_): Device to run the model on, either `cpu` or `cuda`. BLANC runs on `cpu` by default.
- `random_seed` (_int, optional_): Random seed for Python and PyTorch. Default is `0`.
- `inference_batch_size` (_int, optional_): Batch size for inference. Default is `1`.
- `inference_mask_evenly` (_bool, optional_): Whether to mask every `gap` tokens during inference (`True`) or mask randomly with a probability of 0.15 (`False`). Default is `True`.
- `show_progress_bar` (_bool, optional_): Whether to display a progress bar during computation. Default is `True`.

BLANC-help specific arguments:

- `filler_token` (_str, optional_): Token to use as filler in lieu of the summary. Default is `.`.
- `help_sep` (_str, optional_): Token used to separate the summary (or filler) from the sentence, or `""` for no separator. Default is `""`.

BLANC-tune specific arguments:

- `finetune_batch_size` (_int, optional_): Batch size to use when fine-tuning on the summary. Default is `1`.
- `finetune_epochs` (_int, optional_): Number of epochs for fine-tuning on the summary. Default is `10`.
- `finetune_mask_evenly` (_bool, optional_): Whether to mask every `gap` tokens during fine-tuning (`True`) or mask randomly with a probability of 0.15 (`False`). Default is `True`.
- `finetune_chunk_size` (_int, optional_): Number of summary tokens to use at a time during fine-tuning. Default is `64`.
- `finetune_chunk_stride` (_int, optional_): Number of tokens between summary chunks for fine-tuning. Default is `32`.
- `learning_rate` (_float, optional_): Learning rate for fine-tuning on the summary. Default is `5e-05`.
- `warmup_steps` (_int, optional_): Number of warmup steps for fine-tuning. Default is `0`.
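
As a sketch of how these optional arguments combine, the call below overrides the underlying model, the measure, and the masking gap; the specific values are illustrative, not recommendations.

```python
from evaluate import load

blanc = load("phucdev/blanc_score")
documents = ["Jack drove his minivan to the bazaar to purchase milk and honey for his large family."]
summaries = ["Jack bought milk and honey."]

results = blanc.compute(
    documents=documents,
    summaries=summaries,
    blanc_score="help",
    model_name="bert-large-uncased",  # illustrative: any BERT-family checkpoint
    measure="improve",
    gap=6,
    random_seed=42,
)
```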
### Output Values

The metric outputs a dictionary with a single key, `blanc_help` or `blanc_tune` depending on the chosen score type:

- `blanc_{help,tune}`: A floating-point score representing the quality of the summary.

The BLANC score typically ranges from 0 (the summary is not helpful) to 0.3 (the summary provides a 30% improvement in the model's performance), although it can theoretically range between -1 and 1. Higher scores indicate better quality summaries.
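
For example, reading the score from a BLANC-help run (the value in the comment is purely illustrative):

```python
results = blanc.compute(documents=documents, summaries=summaries, blanc_score="help")
print(results)  # e.g. {"blanc_help": 0.12} -- illustrative, not a reproducible output
```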
#### Values from Popular Papers

Goyal et al. (2022) compare the performance of different summarization systems using reference-free automatic metrics in [News Summarization and Evaluation in the Era of GPT-3](https://arxiv.org/abs/2209.12356).
For the DailyMail dataset they report the following BLANC scores:

- PEGASUS: 0.1137
- BRIO: 0.1217
- T0: 0.0889
- GPT3-D2: 0.0983
### Examples

BLANC-help:

```python
from evaluate import load
blanc = load("phucdev/blanc_score")
documents = ["Jack drove his minivan to the bazaar to purchase milk and honey for his large family."]
summaries = ["Jack bought milk and honey."]
results = blanc.compute(documents=documents, summaries=summaries, blanc_score="help")
```

BLANC-tune:

```python
from evaluate import load
blanc = load("phucdev/blanc_score")
documents = ["Jack drove his minivan to the bazaar to purchase milk and honey for his large family."]
summaries = ["Jack bought milk and honey."]
results = blanc.compute(
    documents=documents,
    summaries=summaries,
    blanc_score="tune",
    finetune_mask_evenly=False,
    show_progress_bar=False
)
```

By default, BLANC is run on the CPU. Using CUDA with batching is much faster:

BLANC-help:

```python
from evaluate import load
blanc = load("phucdev/blanc_score")
documents = ["Jack drove his minivan to the bazaar to purchase milk and honey for his large family."]
summaries = ["Jack bought milk and honey."]
results = blanc.compute(
    documents=documents,
    summaries=summaries,
    blanc_score="help",
    device="cuda",
    inference_batch_size=128
)
```

BLANC-tune:

```python
from evaluate import load
blanc = load("phucdev/blanc_score")
documents = ["Jack drove his minivan to the bazaar to purchase milk and honey for his large family."]
summaries = ["Jack bought milk and honey."]
results = blanc.compute(
    documents=documents,
    summaries=summaries,
    blanc_score="tune",
    device="cuda",
    inference_batch_size=24,
    finetune_mask_evenly=False,
    finetune_batch_size=24
)
```
## Limitations and Bias

- Summary Length: BLANC tends to favor longer summaries, as they generally provide more context and help the model better understand the document.
- No Reference Summaries: BLANC operates without human-written reference summaries, which is advantageous in many settings but forgoes the nuance that human judgment provides.
- Limited by the Language Model: BLANC scores depend on the choice of the underlying pre-trained language model (e.g., BERT), which may introduce biases inherent to that model.
## Citation

```tex
@inproceedings{vasilyev-etal-2020-fill,
    title = "Fill in the {BLANC}: Human-free quality estimation of document summaries",
    author = "Vasilyev, Oleg and
      Dharnidharka, Vedant and
      Bohannon, John",
    editor = "Eger, Steffen and
      Gao, Yang and
      Peyrard, Maxime and
      Zhao, Wei and
      Hovy, Eduard",
    booktitle = "Proceedings of the First Workshop on Evaluation and Comparison of NLP Systems",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.eval4nlp-1.2",
    doi = "10.18653/v1/2020.eval4nlp-1.2",
    pages = "11--20",
    abstract = "We present BLANC, a new approach to the automatic estimation of document summary quality. Our goal is to measure the functional performance of a summary with an objective, reproducible, and fully automated method. Our approach achieves this by measuring the performance boost gained by a pre-trained language model with access to a document summary while carrying out its language understanding task on the document{'}s text. We present evidence that BLANC scores have as good correlation with human evaluations as do the ROUGE family of summary quality measurements. And unlike ROUGE, the BLANC method does not require human-written reference summaries, allowing for fully human-free summary quality estimation.",
}
```
## Further References

- [BLANC paper: Fill in the BLANC: Human-free quality estimation of document summaries](https://aclanthology.org/2020.eval4nlp-1.2/)
- [BLANC GitHub Repository](https://github.com/PrimerAI/blanc)