princeton-nlp nazneen committed on
Commit
e0dd30b
1 Parent(s): 5365919

model documentation (#1)


- model documentation (c465610e250c284a36a9cb4cdc8b4e3072f1848f)


Co-authored-by: Nazneen Rajani <[email protected]>

Files changed (1)
  1. README.md +201 -0
README.md ADDED
@@ -0,0 +1,201 @@
---
tags:
- feature-extraction
- bert
---

# Model Card for unsup-simcse-bert-large-uncased

# Model Details

## Model Description

unsup-simcse-bert-large-uncased is a BERT-large (uncased) model trained with the unsupervised SimCSE objective: sentence embeddings are learned contrastively by encoding the same input sentence twice and treating the two dropout-noised views as a positive pair.

- **Developed by:** Princeton NLP group
- **Shared by [Optional]:** Princeton NLP group
- **Model type:** Feature Extraction
- **Language(s) (NLP):** English
- **License:** More information needed
- **Parent Model:** BERT
- **Resources for more information:**
    - [GitHub Repo](https://github.com/princeton-nlp/SimCSE)
    - [Associated Paper](https://arxiv.org/abs/2104.08821)


# Uses

## Direct Use

This model can be used for feature extraction: it maps sentences to dense vector embeddings whose cosine similarity reflects semantic similarity (see the sketch below).

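The snippet below is a minimal sketch of embedding extraction and similarity scoring, following the usage example in the SimCSE repository. Using `pooler_output` as the sentence embedding is an assumption carried over from that example (the raw `[CLS]` hidden state is a common alternative), and the sentences are illustrative.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("princeton-nlp/unsup-simcse-bert-large-uncased")
model = AutoModel.from_pretrained("princeton-nlp/unsup-simcse-bert-large-uncased")

texts = [
    "There's a kid on a skateboard.",
    "A kid is skateboarding.",
    "A kid is inside the house.",
]
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

# Encode the batch; pooler_output follows the repository's usage example.
with torch.no_grad():
    embeddings = model(**inputs, output_hidden_states=True, return_dict=True).pooler_output

# Cosine similarity of the first sentence against the other two;
# the paraphrase should score higher than the unrelated sentence.
sims = torch.nn.functional.cosine_similarity(embeddings[0:1], embeddings[1:])
print(sims)
```
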
## Downstream Use [Optional]

More information needed

## Out-of-Scope Use

The model should not be used to intentionally create hostile or alienating environments for people.

# Bias, Risks, and Limitations

Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.

## Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations.

# Training Details

## Training Data

The model creators note in the [associated GitHub repository](https://github.com/princeton-nlp/SimCSE/blob/main/README.md):
> We train unsupervised SimCSE on 10<sup>6</sup> randomly sampled sentences from English Wikipedia, and train supervised SimCSE on the combination of MNLI and SNLI datasets (314k).

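As an illustration only (the SimCSE repository provides scripts that download the exact training files), assembling a corpus of randomly sampled English Wikipedia sentences might look like the sketch below. The dataset name, the article subsample, and the sentence splitter are all assumptions, not the authors' pipeline.

```python
import random
from datasets import load_dataset
from nltk.tokenize import sent_tokenize  # assumes nltk's punkt data is installed

# Hypothetical recreation of a SimCSE-style unsupervised corpus:
# split Wikipedia articles into sentences, then sample up to 10^6 of them.
wiki = load_dataset("wikipedia", "20220301.en", split="train")

sentences = []
for article in wiki.select(range(50_000)):  # subsample articles for tractability
    sentences.extend(sent_tokenize(article["text"]))

random.seed(0)
corpus = random.sample(sentences, k=min(1_000_000, len(sentences)))
with open("wiki_sentences.txt", "w") as f:
    f.write("\n".join(corpus))
```
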
## Training Procedure

### Preprocessing

More information needed

### Speeds, Sizes, Times

**Hyperparameters**

The model creators note in the [associated GitHub repo](https://github.com/princeton-nlp/SimCSE):

| Hyperparameter        | Unsup. BERT | Sup. |
|:----------------------|:-----------:|:----:|
| Batch size            | 64          | 512  |
| Learning rate (large) | 1e-5        | 1e-5 |

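For illustration only, and not the authors' training script (the run scripts in the SimCSE repository are the reference), the unsupervised BERT-large settings above would map onto `transformers.TrainingArguments` roughly as follows; `output_dir`, `num_train_epochs`, and `fp16` are assumptions (the paper reports training unsupervised SimCSE for one epoch).

```python
from transformers import TrainingArguments

# Illustrative mapping of the hyperparameter table onto Trainer arguments.
training_args = TrainingArguments(
    output_dir="unsup-simcse-bert-large-uncased",  # assumption
    per_device_train_batch_size=64,  # "Unsup. BERT" batch size from the table
    learning_rate=1e-5,              # "Learning rate (large)" from the table
    num_train_epochs=1,              # the paper trains unsupervised SimCSE for one epoch
    fp16=True,                       # assumption
)
```
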
# Evaluation

## Testing Data, Factors & Metrics

### Testing Data

The model creators note in the [associated paper](https://arxiv.org/pdf/2104.08821.pdf):

> Our evaluation code for sentence embeddings is based on a modified version of [SentEval](https://github.com/facebookresearch/SentEval). It evaluates sentence embeddings on semantic textual similarity (STS) tasks and downstream transfer tasks.

> For STS tasks, our evaluation takes the "all" setting, and report Spearman's correlation. See [associated paper](https://arxiv.org/pdf/2104.08821.pdf) (Appendix B) for evaluation details.

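To make the protocol concrete, the sketch below computes Spearman's correlation between the model's cosine similarities and gold similarity scores on a few STS-style pairs. It is a toy stand-in for the SentEval harness, and the sentence pairs and gold scores are invented for illustration.

```python
import torch
from scipy.stats import spearmanr
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("princeton-nlp/unsup-simcse-bert-large-uncased")
model = AutoModel.from_pretrained("princeton-nlp/unsup-simcse-bert-large-uncased")

# Hypothetical STS-style pairs with gold similarity scores on a 0-5 scale.
pairs = [
    ("A man is playing a guitar.", "A person plays the guitar."),
    ("A man is playing a guitar.", "A woman is slicing onions."),
    ("Two dogs run in the park.", "Dogs are running outside."),
]
gold = [4.8, 0.5, 4.2]

def embed(sentence):
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        return model(**inputs).pooler_output  # same pooling assumption as above

sims = [torch.nn.functional.cosine_similarity(embed(a), embed(b)).item() for a, b in pairs]
rho, _ = spearmanr(sims, gold)
print(f"Spearman's correlation: {rho:.3f}")
```
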
### Factors

More information needed

### Metrics

More information needed

## Results

More information needed

# Model Examination

The model creators note in the [associated paper](https://arxiv.org/pdf/2104.08821.pdf):

> **Uniformity and alignment.**
> We also observe that (1) though pre-trained embeddings have good alignment, their uniformity is poor (i.e., the embeddings are highly anisotropic); (2) post-processing methods like BERT-flow and BERT-whitening greatly improve uniformity but also suffer a degeneration in alignment; (3) unsupervised SimCSE effectively improves uniformity of pre-trained embeddings whereas keeping a good alignment; (4) incorporating supervised data in SimCSE further amends alignment.

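For readers who want to measure these properties, alignment and uniformity are the metrics of Wang and Isola (2020) used in the paper. The functions below follow their reference formulation; the random tensors are placeholders for actual sentence embeddings.

```python
import torch
import torch.nn.functional as F

def align_loss(x, y, alpha=2):
    # Alignment: mean distance between embeddings of positive pairs (x_i, y_i).
    return (x - y).norm(p=2, dim=1).pow(alpha).mean()

def uniform_loss(x, t=2):
    # Uniformity: log of the mean pairwise Gaussian potential over all embeddings.
    return torch.pdist(x, p=2).pow(2).mul(-t).exp().mean().log()

# Placeholder embeddings; in practice, encode positive sentence pairs and L2-normalize.
x = F.normalize(torch.randn(128, 1024), dim=1)
y = F.normalize(torch.randn(128, 1024), dim=1)
print(align_loss(x, y).item(), uniform_loss(x).item())
```
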
# Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** Nvidia 3090 GPUs with CUDA 11
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed

# Technical Specifications [optional]

## Model Architecture and Objective

More information needed

## Compute Infrastructure

More information needed

### Hardware

More information needed

### Software

More information needed

# Citation

**BibTeX:**

```bibtex
@inproceedings{gao2021simcse,
  title={{SimCSE}: Simple Contrastive Learning of Sentence Embeddings},
  author={Gao, Tianyu and Yao, Xingcheng and Chen, Danqi},
  booktitle={Empirical Methods in Natural Language Processing (EMNLP)},
  year={2021}
}
```

# Glossary [optional]

More information needed

# More Information [optional]

More information needed

# Model Card Authors [optional]

Princeton NLP group in collaboration with Ezi Ozoani and the Hugging Face team.

# Model Card Contact

If you have any questions related to the code or the paper, feel free to email Tianyu (`[email protected]`) and Xingcheng (`[email protected]`). If you encounter any problems when using the code, or want to report a bug, you can open an issue on the GitHub repository. Please describe the problem in detail so we can help you more effectively.

# How to Get Started with the Model

Use the code below to get started with the model.

<details>
<summary> Click to expand </summary>

```python
from transformers import AutoTokenizer, AutoModel

# Load the SimCSE tokenizer and encoder from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("princeton-nlp/unsup-simcse-bert-large-uncased")
model = AutoModel.from_pretrained("princeton-nlp/unsup-simcse-bert-large-uncased")
```
</details>