Atharva192003 committed
Commit d0dee79
1 Parent(s): 441d0a4

Update README.md

Files changed (1)
  1. README.md +59 -1
README.md CHANGED
@@ -6,4 +6,62 @@ metrics:
  - character
  pipeline_tag: zero-shot-classification
  library_name: transformers
- ---
+ ---
+
+
+ # bart-large-mnli
+
+ This is the checkpoint for bart-large after being trained on the MultiNLI (MNLI) dataset.
+
+ Additional information about this model:
+
+ - The bart-large model page
+ - BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
+ - BART fairseq implementation
+
+ ## NLI-based Zero-Shot Text Classification
+
+ Yin et al. proposed a method for using pre-trained NLI models as ready-made zero-shot sequence classifiers. The method works by posing the sequence to be classified as the NLI premise and constructing a hypothesis from each candidate label. For example, if we want to evaluate whether a sequence belongs to the class "politics", we could construct the hypothesis `This text is about politics.`. The probabilities for entailment and contradiction are then converted to label probabilities.
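+
+ As a rough, illustrative sketch of the label-to-hypothesis step described above (the template wording and labels here are assumptions for this example, not part of the original card):
+
+ # turn candidate labels into NLI hypotheses (illustrative template and labels)
+ candidate_labels = ["politics", "economics", "sports"]
+ hypotheses = [f"This text is about {label}." for label in candidate_labels]
+ # e.g. ['This text is about politics.', 'This text is about economics.', 'This text is about sports.']
+ # each (premise, hypothesis) pair is then scored by the NLI model, and the
+ # entailment probability is read as the probability that the label applies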
+
+ This method is surprisingly effective in many cases, particularly when used with larger pre-trained models like BART and RoBERTa. See this blog post for a more expansive introduction to this and other zero-shot methods, and see the code snippets below for examples of using this model for zero-shot classification both with Hugging Face's built-in pipeline and with native Transformers/PyTorch code.
+
+ ## With the zero-shot classification pipeline
+
+ The model can be loaded with the zero-shot-classification pipeline like so:
+
+ from transformers import pipeline
+ classifier = pipeline("zero-shot-classification",
+                       model="facebook/bart-large-mnli")
+
+ You can then use this pipeline to classify sequences into any of the class names you specify.
+
+ sequence_to_classify = "one day I will see the world"
+ candidate_labels = ['travel', 'cooking', 'dancing']
+ classifier(sequence_to_classify, candidate_labels)
+ #{'labels': ['travel', 'dancing', 'cooking'],
+ # 'scores': [0.9938651323318481, 0.0032737774308770895, 0.002861034357920289],
+ # 'sequence': 'one day I will see the world'}
+
+ If more than one candidate label can be correct, pass multi_label=True to calculate each class independently:
+
+ candidate_labels = ['travel', 'cooking', 'dancing', 'exploration']
+ classifier(sequence_to_classify, candidate_labels, multi_label=True)
+ #{'labels': ['travel', 'exploration', 'dancing', 'cooking'],
+ # 'scores': [0.9945111274719238,
+ #  0.9383890628814697,
+ #  0.0057061901316046715,
+ #  0.0018193122232332826],
+ # 'sequence': 'one day I will see the world'}
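+
+ The pipeline also accepts a hypothesis_template argument if you want to phrase the hypotheses differently from the default "This example is {}."; a minimal sketch (the template wording below is just an example):
+
+ # sketch: customizing how labels are turned into hypotheses
+ classifier(sequence_to_classify, candidate_labels,
+            hypothesis_template="This text is about {}.")
+ # returns the same dict structure as above, with scores computed from the new hypotheses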
+
+ ## With manual PyTorch
+
+ # pose sequence as an NLI premise and label as a hypothesis
+ import torch
+ from transformers import AutoModelForSequenceClassification, AutoTokenizer
+
+ nli_model = AutoModelForSequenceClassification.from_pretrained('facebook/bart-large-mnli')
+ tokenizer = AutoTokenizer.from_pretrained('facebook/bart-large-mnli')
+ device = 'cuda' if torch.cuda.is_available() else 'cpu'
+ nli_model = nli_model.to(device)
+
+ # example inputs
+ sequence = "one day I will see the world"
+ label = 'travel'
+ premise = sequence
+ hypothesis = f'This example is {label}.'
+
+ # run through model pre-trained on MNLI
+ x = tokenizer.encode(premise, hypothesis, return_tensors='pt',
+                      truncation='only_first')
+ logits = nli_model(x.to(device))[0]
+
+ # we throw away "neutral" (dim 1) and take the probability of
+ # "entailment" (2) as the probability of the label being true
+ entail_contradiction_logits = logits[:,[0,2]]
+ probs = entail_contradiction_logits.softmax(dim=1)
+ prob_label_is_true = probs[:,1]
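+
+ To score several candidate labels in the same manual style, here is a sketch (not part of the original snippet) that mirrors the pipeline's single-label behaviour by softmaxing the entailment logits across labels:
+
+ # sketch: score multiple candidate labels with the NLI model loaded above
+ # assumes torch, nli_model, tokenizer, device and sequence from the previous snippet
+ candidate_labels = ['travel', 'cooking', 'dancing']
+ entail_logits = []
+ for label in candidate_labels:
+     hypothesis = f'This example is {label}.'
+     x = tokenizer.encode(sequence, hypothesis, return_tensors='pt',
+                          truncation='only_first')
+     logits = nli_model(x.to(device))[0]
+     entail_logits.append(logits[0, 2])  # index 2 = "entailment"
+
+ # single-label case: normalize the entailment logits across the candidate labels
+ scores = torch.stack(entail_logits).softmax(dim=0)
+ for label, score in zip(candidate_labels, scores):
+     print(label, float(score))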