Transformers
PyTorch
roberta
Inference Endpoints
Olga Golovneva committed on
Commit cf802b6
1 Parent(s): f3c5584

First model version for roscoe-512-roberta-base

README.md CHANGED
@@ -1,3 +1,32 @@
  ---
  license: cc-by-nc-4.0
  ---
+ ## roscoe-512-roberta-base
+
+ Sentence embedding model for reasoning steps.
+
+ To obtain reasoning step embeddings, we finetune SimCSE (Gao et al., 2021), a
+ supervised sentence similarity model extending the RoBERTa word embedding model (Liu et al., 2019), on the
+ multi-step reasoning datasets listed below (see details in Golovneva et al., 2022). SimCSE is a contrastive learning model
+ trained on triplets of reference reasoning steps and positive and hard-negative hypothesis reasoning steps,
+ minimizing a cross-entropy objective with in-batch negatives. For contrastive learning, we use the context
+ and reference reasoning steps as positive pairs, and the context and perturbed reference steps as
+ hard-negative pairs. With the finetuned model we embed each individual step, as well as the reasoning chain as a
+ whole. We use the pretrained checkpoint of the supervised SimCSE model sup-simcse-roberta-base to initialize
+ our model, and train it further for five epochs on our synthetic training data.
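+
+ A minimal usage sketch (assumptions: the repo id below is a placeholder, the checkpoint
+ loads as a plain RoBERTa encoder via `AutoModel`, and the CLS-token embedding is used,
+ as is common for SimCSE-style models):
+
+ ```python
+ # Sketch: embed two reasoning steps and compare them with cosine similarity.
+ import torch
+ from transformers import AutoModel, AutoTokenizer
+
+ model_id = "facebook/roscoe-512-roberta-base"  # assumed repo id
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModel.from_pretrained(model_id)
+ model.eval()
+
+ steps = [
+     "All squares are rectangles.",
+     "Therefore, every square has four right angles.",
+ ]
+ inputs = tokenizer(steps, padding=True, truncation=True,
+                    max_length=512, return_tensors="pt")
+ with torch.no_grad():
+     out = model(**inputs)
+
+ emb = out.last_hidden_state[:, 0]  # CLS-token embedding, one per step
+ sim = torch.nn.functional.cosine_similarity(emb[0], emb[1], dim=0)
+ print(f"cosine similarity: {sim.item():.3f}")
+ ```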
+
+ To train the model, we construct the dataset by generating perturbations, i.e.,
+ deterministic modifications, on half of the reference reasoning steps in the following sets: EntailmentBank
+ (deductive reasoning), ProofWriter (logical reasoning), three arithmetic reasoning datasets (MATH, ASDiv, and AQuA), EQASC
+ (explanations for commonsense question answering), and StrategyQA (question answering with implicit reasoning strategies).
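+
+ As a toy illustration of this construction (the actual perturbations are the deterministic
+ modifications defined in Golovneva et al., 2022; the step deletion and step shuffling below
+ are placeholders):
+
+ ```python
+ # Sketch: build a positive and a hard-negative pair from a reference chain.
+ import random
+
+ def perturb(steps, rng):
+     # Placeholder modification: drop a step or reorder the chain.
+     steps = list(steps)
+     if len(steps) > 1 and rng.random() < 0.5:
+         steps.pop(rng.randrange(len(steps)))  # drop a step
+     else:
+         rng.shuffle(steps)                    # reorder steps
+     return steps
+
+ def make_example(context, reference, rng):
+     positive = (context, " ".join(reference))                # context + reference
+     negative = (context, " ".join(perturb(reference, rng)))  # context + perturbed
+     return positive, negative
+
+ rng = random.Random(0)
+ pos, neg = make_example(
+     "Q: Is a square a rectangle?",
+     ["All squares have four right angles.", "So a square is a rectangle."],
+     rng,
+ )
+ ```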
+
+ References:
+
+ 1. Tianyu Gao, Xingcheng Yao, and Danqi Chen. SimCSE: Simple contrastive learning of sentence embeddings.
+ arXiv preprint arXiv:2104.08821, 2021.
+ 2. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis,
+ Luke Zettlemoyer, and Veselin Stoyanov. RoBERTa: A robustly optimized BERT pretraining approach. arXiv
+ preprint arXiv:1907.11692, 2019.
+ 3. Olga Golovneva, Moya Chen, Spencer Poff, Martin Corredor, Luke Zettlemoyer, Maryam Fazel-Zarandi, and Asli Celikyilmaz.
+ ROSCOE: A suite of metrics for scoring step-by-step reasoning. arXiv preprint, 2022.
config.json ADDED
@@ -0,0 +1,26 @@
+ {
+   "_name_or_path": "result/cont-pos-pert-neg-simcse-roberta-base-cosine-0830",
+   "architectures": [
+     "RobertaForCL"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "bos_token_id": 0,
+   "eos_token_id": 2,
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-05,
+   "max_position_embeddings": 514,
+   "model_type": "roberta",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 1,
+   "position_embedding_type": "absolute",
+   "transformers_version": "4.2.1",
+   "type_vocab_size": 1,
+   "use_cache": true,
+   "vocab_size": 50265
+ }
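
The added config describes a standard RoBERTa-base encoder: 12 layers, hidden size 768, 12 attention heads, and 514 position embeddings (512 usable tokens plus RoBERTa's two reserved offset positions). A sketch of inspecting it with `transformers`, assuming the file above is saved locally as `config.json`:

```python
# Sketch: load the config above and instantiate a same-shaped encoder.
# Note: "RobertaForCL" is the SimCSE training class, not a transformers class;
# the config itself still loads as a plain RobertaConfig.
from transformers import RobertaConfig, RobertaModel

config = RobertaConfig.from_pretrained(".")  # directory holding config.json
assert config.num_hidden_layers == 12 and config.hidden_size == 768

model = RobertaModel(config)  # randomly initialized; real weights are in pytorch_model.bin
print(config.max_position_embeddings)  # 514
```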
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:49e43d4bc587a735991673aab6935269a25dbf09892bafa6f66eca87de4c7a5d
+ size 498672439
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"bos_token": "<s>", "eos_token": "</s>", "unk_token": "<unk>", "sep_token": "</s>", "pad_token": "<pad>", "cls_token": "<s>", "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": false}}
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"unk_token": {"content": "<unk>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "bos_token": {"content": "<s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "eos_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "add_prefix_space": false, "errors": "replace", "sep_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "cls_token": {"content": "<s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "pad_token": {"content": "<pad>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "model_max_length": 512, "name_or_path": "princeton-nlp/sup-simcse-roberta-base", "special_tokens_map_file": "/private/home/olggol/.cache/huggingface/transformers/90ffa7c13d92d368876a3cde38912cf1fbe882d3b2ad0fc6b1ab5d11fa3f7753.a11ebb04664c067c8fe5ef8f8068b0f721263414a26058692f7b2e4ba2a1b342"}
vocab.json ADDED
The diff for this file is too large to render. See raw diff