---
language:
- en
tags:
- text-classification
- zero-shot-classification
metrics:
- accuracy
widget:
- text: "I first thought that I liked the movie, but upon second thought it was actually disappointing. [SEP] The movie was good."
---
# DeBERTa-v3-xsmall-mnli-fever-anli-ling-binary
## Model description
This model was trained on 790,000+ hypothesis-premise pairs from 4 NLI datasets: [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [LingNLI](https://arxiv.org/abs/2104.07179) and [ANLI](https://github.com/facebookresearch/anli).

Note that the model was trained on binary NLI, predicting either "entailment" or "not-entailment". This setup is designed specifically for zero-shot classification, where the distinction between "neutral" and "contradiction" is irrelevant.

The base model is [DeBERTa-v3-xsmall from Microsoft](https://huggingface.co/microsoft/deberta-v3-xsmall). The v3 variant of DeBERTa substantially outperforms previous versions of the model thanks to a different pre-training objective; see the [DeBERTa-V3 paper](https://arxiv.org/abs/2111.09543).

## Intended uses & limitations
#### How to use the model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

model_name = "MoritzLaurer/DeBERTa-v3-xsmall-mnli-fever-anli-ling-binary"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)

premise = "I first thought that I liked the movie, but upon second thought it was actually disappointing."
hypothesis = "The movie was good."

inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt").to(device)
output = model(**inputs)
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "not_entailment"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```
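
Because the model was trained with a binary entailment objective, it can also be used through the Hugging Face zero-shot classification pipeline. A minimal sketch; the example text and candidate labels below are illustrative, not from the original card:

```python
from transformers import pipeline

# The zero-shot pipeline turns each candidate label into a hypothesis
# (by default "This example is {label}.") and scores it with the NLI model.
classifier = pipeline(
    "zero-shot-classification",
    model="MoritzLaurer/DeBERTa-v3-xsmall-mnli-fever-anli-ling-binary",
)

text = "I first thought that I liked the movie, but upon second thought it was actually disappointing."
candidate_labels = ["positive", "negative"]  # illustrative labels
print(classifier(text, candidate_labels))
```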
### Training data
This model was trained on 790,000+ hypothesis-premise pairs from 4 NLI datasets: [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [LingNLI](https://arxiv.org/abs/2104.07179) and [ANLI](https://github.com/facebookresearch/anli).

### Training procedure
DeBERTa-v3-xsmall-mnli-fever-anli-ling-binary was trained using the Hugging Face Trainer with the following hyperparameters.
```python
training_args = TrainingArguments(
    num_train_epochs=5,              # total number of training epochs
    learning_rate=2e-05,
    per_device_train_batch_size=32,  # batch size per device during training
    per_device_eval_batch_size=32,   # batch size for evaluation
    warmup_ratio=0.1,                # fraction of total steps used for learning rate warmup
    weight_decay=0.06,               # strength of weight decay
    fp16=True                        # mixed precision training
)
```
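
For orientation, here is a minimal sketch of how these arguments could plug into the Trainer. The dataset handling is an assumption for illustration (it binarizes a small slice of MultiNLI only, while the released model was trained on all four datasets), and `output_dir` is added because `TrainingArguments` requires it:

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-xsmall")
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-v3-xsmall", num_labels=2  # binary: entailment vs. not-entailment
)

# Illustration only: binarize a slice of MultiNLI; the released model was
# trained on the concatenation of all four binarized NLI datasets.
dataset = load_dataset("multi_nli", split="train[:1000]")

def preprocess(batch):
    enc = tokenizer(batch["premise"], batch["hypothesis"], truncation=True)
    # MultiNLI labels: 0 = entailment, 1 = neutral, 2 = contradiction
    enc["labels"] = [0 if label == 0 else 1 for label in batch["label"]]
    return enc

dataset = dataset.map(preprocess, batched=True, remove_columns=dataset.column_names)

training_args = TrainingArguments(
    output_dir="./results",  # required by TrainingArguments; not in the card's snippet
    num_train_epochs=5,
    learning_rate=2e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    warmup_ratio=0.1,
    weight_decay=0.06,
    fp16=True,  # requires a CUDA GPU
)

trainer = Trainer(model=model, args=training_args,
                  train_dataset=dataset, tokenizer=tokenizer)
trainer.train()
```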
### Eval results
The model was evaluated using the binary test sets for MultiNLI, ANLI and LingNLI and the binary dev set for Fever-NLI (two classes instead of three). The metric used is accuracy.

mnli-m-2c | mnli-mm-2c | fever-nli-2c | anli-all-2c | anli-r3-2c | lingnli-2c
---------|----------|---------|----------|----------|------
x | x | x | x | x | x

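As a sketch of how such a binary accuracy could be computed (an assumption for illustration, reusing the binarization described above on a small MultiNLI dev slice):

```python
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_name = "MoritzLaurer/DeBERTa-v3-xsmall-mnli-fever-anli-ling-binary"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

# Illustration: a slice of the MultiNLI matched dev set, binarized by
# collapsing "neutral" (1) and "contradiction" (2) into "not_entailment".
data = load_dataset("multi_nli", split="validation_matched[:200]")

correct = 0
with torch.no_grad():
    for ex in data:
        inputs = tokenizer(ex["premise"], ex["hypothesis"], truncation=True, return_tensors="pt")
        pred = model(**inputs).logits.argmax(-1).item()  # 0 = entailment, 1 = not_entailment
        gold = 0 if ex["label"] == 0 else 1
        correct += int(pred == gold)

print(f"binary accuracy on the slice: {correct / len(data):.3f}")
```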

## Limitations and bias
Please consult the original DeBERTa paper and literature on different NLI datasets for potential biases.

### BibTeX entry and citation info
If you want to cite this model, please cite the original DeBERTa paper, the respective NLI datasets, and include a link to this model on the Hugging Face Hub.

### Ideas for cooperation or questions?
If you have questions or ideas for cooperation, contact me at m.laurer{at}vu.nl or on [LinkedIn](https://www.linkedin.com/in/moritz-laurer/).

### Debugging and issues
Note that DeBERTa-v3 was released recently, and older versions of HF Transformers have issues running the model (e.g. tokenizer errors). Pinning the version with `pip install transformers==4.13` might solve some issues.