Snivellus789 committed
Commit 2745d3a
1 Parent(s): 6fc74ef

Add new SentenceTransformer model.
1_Pooling/config.json ADDED
```json
{
  "word_embedding_dimension": 384,
  "pooling_mode_cls_token": true,
  "pooling_mode_mean_tokens": false,
  "pooling_mode_max_tokens": false,
  "pooling_mode_mean_sqrt_len_tokens": false,
  "pooling_mode_weightedmean_tokens": false,
  "pooling_mode_lasttoken": false,
  "include_prompt": true
}
```
README.md ADDED
---
base_model: BAAI/bge-small-en-v1.5
datasets: []
language: []
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:14271
- loss:BatchAllTripletLoss
widget:
- source_sentence: In a complex legal scenario involving multiple jurisdictions,
    how would you navigate the differences in laws related to online privacy violations
    and harassment?
  sentences:
  - How does voluntary admission under the Baker Act impact eligibility for a Concealed
    Weapon Permit?
  - How do the terms of the account and the circumstances impact the potential liability
    of the Bank of Hawaii in this situation?
  - Can someone run a background check on you without your consent?
- source_sentence: How long is the Kansas Lemon Law effective for?
  sentences:
  - What should I do to stop my neighbor from using my land and barn?
  - How does the expungement of an arrest impact the disclosure requirements in
    applications for permits or licenses?
  - If a policy is canceled due to a denied claim, does the canceled policy still
    cover injuries from the incident?
- source_sentence: What are the implications of a guilty plea without corroborating
    evidence in terms of justice and fairness?
  sentences:
  - How does having a Series 7 license impact the ability of a financial planner
    to sell securities products?
  - What are the specific state laws that govern the relationship between the Baker
    Act and Concealed Weapon Permits?
  - How does the duration of copyright protection impact the entry of works into
    the public domain?
- source_sentence: How can one prove the terms and existence of a verbal contract?
  sentences:
  - Is it common for search warrants to be obtained under a unique cause number?
  - In what ways can transparency in background check forms contribute to national
    security measures?
  - What are the potential legal responsibilities of the 14-year-old boy if he is
    determined to be the father of the baby?
- source_sentence: How can the person ensure they receive the necessary compensation
    for their work-related injury?
  sentences:
  - Is there a law in Oklahoma that restricts the distance of a dispensary to a
    baseball field?
  - Considering the complexities of property rights, due process, and public safety,
    what are the ethical and legal considerations surrounding citizens taking possession
    of unattended animals in public areas, and how do these actions intersect with
    constitutional rights and property laws?
  - What precedent cases or legal doctrines could be relevant in a lawsuit against
    the town council person and the township in this scenario?
---

# SentenceTransformer based on BAAI/bge-small-en-v1.5

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5). It maps sentences and paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) <!-- at revision 5c38ec7c405ec4b44b94cc5a9bb96e735b38267a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
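
The Pooling module above is configured for CLS pooling (`pooling_mode_cls_token: True`): of all the token embeddings the encoder produces, only the first ([CLS]) vector is kept, and the Normalize module then scales it to unit length. A minimal NumPy sketch of those two steps, with tiny hand-made toy values standing in for real (seq_len, 384) encoder outputs — this is an illustration, not the library internals:

```python
import numpy as np

# Toy token embeddings for one sentence: (seq_len, hidden) = (4, 3).
# A real BertModel forward pass yields (seq_len, 384) here.
token_embeddings = np.array([
    [3.0, 4.0, 0.0],   # position 0 = [CLS]
    [1.0, 0.0, 0.0],
    [0.0, 2.0, 0.0],
    [0.0, 0.0, 1.0],
])

# (1) CLS pooling: keep only the first token's vector.
sentence_embedding = token_embeddings[0]

# (2) Normalize: divide by the L2 norm, so cosine similarity
#     between two embeddings reduces to a plain dot product.
sentence_embedding = sentence_embedding / np.linalg.norm(sentence_embedding)

print(sentence_embedding)  # [0.6 0.8 0. ]
```

Because of the final normalization step, downstream code can score pairs with a dot product and get the cosine similarity directly.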

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference:

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Snivellus789/router-embedding")
# Run inference
sentences = [
    'How can the person ensure they receive the necessary compensation for their work-related injury?',
    'Is there a law in Oklahoma that restricts the distance of a dispensary to a baseball field?',
    'Considering the complexities of property rights, due process, and public safety, what are the ethical and legal considerations surrounding citizens taking possession of unattended animals in public areas, and how do these actions intersect with constitutional rights and property laws?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
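
Since the model is named `router-embedding` and was trained with a triplet loss over four question categories, embeddings of same-category questions end up close together. One common way to exploit that (an assumption about intended use, not stated on this card) is nearest-centroid routing: average the embeddings of each category's training questions once, then send each incoming query to the category whose centroid has the highest cosine similarity. A self-contained sketch with synthetic 2-d vectors standing in for real 384-d embeddings:

```python
import numpy as np

def route(query_vec, centroids):
    """Return the index of the centroid most cosine-similar to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    c = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
    return int(np.argmax(c @ q))

# Synthetic stand-ins for the four category centroids. In practice you would
# average model.encode(...) over the training questions of each label (0-3).
centroids = np.array([
    [ 1.0,  0.0],
    [ 0.0,  1.0],
    [-1.0,  0.0],
    [ 0.0, -1.0],
])

query = np.array([0.9, 0.1])     # lies closest to category 0's centroid
print(route(query, centroids))   # 0
```

With embeddings that are already unit-normalized (as this model's are), the per-query normalization inside `route` is a no-op kept for safety.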

## Training Details

### Training Dataset

#### Unnamed Dataset

* Size: 14,271 training samples
* Columns: <code>sentence</code> and <code>label</code>
* Approximate statistics based on the first 1000 samples:
  | | sentence | label |
  |:--------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
  | type | string | int |
  | details | <ul><li>min: 2 tokens</li><li>mean: 23.55 tokens</li><li>max: 50 tokens</li></ul> | <ul><li>0: ~25.00%</li><li>1: ~25.00%</li><li>2: ~25.00%</li><li>3: ~25.00%</li></ul> |
* Samples:
  | sentence | label |
  |:-----------------------------------------------------------------------------------------------------------------------------|:---------------|
  | <code>What rights do you have regarding accessing your medical records under HIPAA?</code> | <code>1</code> |
  | <code>What should you do if you lose access to your patient portal after being discharged from a healthcare provider?</code> | <code>1</code> |
  | <code>How can you address the issue of losing access to your patient portal with the pain management office?</code> | <code>3</code> |
* Loss: [<code>BatchAllTripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#batchalltripletloss)
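
BatchAllTripletLoss forms, within each batch, every (anchor, positive, negative) triplet in which the anchor and positive share a label and the negative does not, computes the hinge `max(d(a, p) - d(a, n) + margin, 0)` on embedding distances, and averages over the triplets that are still violating the margin. A plain NumPy re-derivation of that formulation (an illustrative sketch with a made-up margin of 1.0, not the sentence-transformers implementation):

```python
import numpy as np

def batch_all_triplet_loss(embeddings, labels, margin=1.0):
    """Average hinge loss over all valid (anchor, positive, negative)
    triplets in the batch that have positive loss."""
    # Pairwise Euclidean distances between all embeddings in the batch.
    diff = embeddings[:, None, :] - embeddings[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)

    losses = []
    n = len(labels)
    for a in range(n):
        for p in range(n):
            for neg in range(n):
                # Valid triplet: distinct anchor/positive with the same
                # label, negative with a different label.
                if a == p or labels[a] != labels[p] or labels[a] == labels[neg]:
                    continue
                loss = dist[a, p] - dist[a, neg] + margin
                if loss > 0:
                    losses.append(loss)
    return float(np.mean(losses)) if losses else 0.0

# Two tight clusters far apart: every triplet already satisfies the margin.
emb = np.array([[0.0, 0.0], [0.1, 0.0], [3.0, 0.0], [3.1, 0.0]])
labels = [0, 0, 1, 1]
print(batch_all_triplet_loss(emb, labels))  # 0.0
```

The loss is zero exactly when every in-batch negative is already farther from the anchor than its positive by at least the margin, which is why training pushes same-label questions into tight clusters.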

### Training Hyperparameters
#### Non-Default Hyperparameters

- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 2
- `warmup_ratio`: 0.1
- `bf16`: True
- `batch_sampler`: no_duplicates

#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 2
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional

</details>
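
With `lr_scheduler_type: linear` and `warmup_ratio: 0.1`, the learning rate climbs linearly from 0 to 2e-5 over the first 10% of steps, then decays linearly back to 0. With 14,271 samples at batch size 16 for 2 epochs, the training logs imply roughly 892 steps per epoch, about 1,784 total. A small sketch of that schedule (mirroring the shape of transformers' linear warmup schedule, not its code; the step counts are estimates):

```python
def linear_schedule_lr(step, total_steps, base_lr=2e-5, warmup_ratio=0.1):
    """Linear warmup to base_lr, then linear decay to zero."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        # Warmup phase: ramp from 0 up to base_lr.
        return base_lr * step / max(1, warmup_steps)
    # Decay phase: ramp from base_lr back down to 0 at total_steps.
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

total = 1784  # ~892 steps/epoch x 2 epochs at batch size 16 (estimate)
print(linear_schedule_lr(0, total))     # 0.0
print(linear_schedule_lr(178, total))   # peak: 2e-05
print(linear_schedule_lr(1784, total))  # 0.0
```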

### Training Logs
| Epoch  | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.1121 | 100  | 5.0008        |
| 0.2242 | 200  | 4.7622        |
| 0.3363 | 300  | 4.4532        |
| 0.4484 | 400  | 4.4386        |
| 0.5605 | 500  | 4.346         |
| 0.6726 | 600  | 4.4488        |
| 0.7848 | 700  | 4.5665        |
| 0.8969 | 800  | 4.4743        |
| 1.0090 | 900  | 4.3447        |
| 1.1211 | 1000 | 4.419         |
| 1.2332 | 1100 | 4.4267        |
| 1.3453 | 1200 | 4.4598        |
| 1.4574 | 1300 | 4.4256        |
| 1.5695 | 1400 | 4.2711        |
| 1.6816 | 1500 | 4.4133        |
| 1.7937 | 1600 | 4.4424        |
| 1.9058 | 1700 | 4.4711        |

### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.42.4
- PyTorch: 2.3.1+cu121
- Accelerate: 0.33.0.dev0
- Datasets: 2.20.0
- Tokenizers: 0.19.1

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### BatchAllTripletLoss
```bibtex
@misc{hermans2017defense,
    title={In Defense of the Triplet Loss for Person Re-Identification},
    author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
    year={2017},
    eprint={1703.07737},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
```
config.json ADDED
```json
{
  "_name_or_path": "BAAI/bge-small-en-v1.5",
  "architectures": [
    "BertModel"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 384,
  "id2label": {
    "0": "LABEL_0"
  },
  "initializer_range": 0.02,
  "intermediate_size": 1536,
  "label2id": {
    "LABEL_0": 0
  },
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
  "transformers_version": "4.42.4",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 30522
}
```
config_sentence_transformers.json ADDED
```json
{
  "__version__": {
    "sentence_transformers": "3.0.1",
    "transformers": "4.42.4",
    "pytorch": "2.3.1+cu121"
  },
  "prompts": {},
  "default_prompt_name": null,
  "similarity_fn_name": null
}
```
model.safetensors ADDED
```
version https://git-lfs.github.com/spec/v1
oid sha256:4dea650a5b33a3dabd69e32af2271655a0c9cb3cf9bd8066b69cac1838cd4cf9
size 133462128
```
modules.json ADDED
```json
[
  {
    "idx": 0,
    "name": "0",
    "path": "",
    "type": "sentence_transformers.models.Transformer"
  },
  {
    "idx": 1,
    "name": "1",
    "path": "1_Pooling",
    "type": "sentence_transformers.models.Pooling"
  },
  {
    "idx": 2,
    "name": "2",
    "path": "2_Normalize",
    "type": "sentence_transformers.models.Normalize"
  }
]
```
sentence_bert_config.json ADDED
```json
{
  "max_seq_length": 512,
  "do_lower_case": true
}
```
special_tokens_map.json ADDED
```json
{
  "cls_token": {
    "content": "[CLS]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "mask_token": {
    "content": "[MASK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "[PAD]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "sep_token": {
    "content": "[SEP]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "[UNK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
```
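
These special tokens play the standard BERT roles: each input is wrapped as `[CLS] ... [SEP]`, padded with `[PAD]`, and unknown pieces become `[UNK]`; the `[CLS]` slot is what the pooling layer reads. A toy illustration of the wrapping and padding (hand-written token lists, not real WordPiece output):

```python
def wrap_and_pad(tokens, max_len):
    """Add [CLS]/[SEP] around the tokens and pad to max_len,
    truncating first if needed (the [CLS]/[SEP] pair costs 2 slots)."""
    seq = ["[CLS]"] + tokens[: max_len - 2] + ["[SEP]"]
    return seq + ["[PAD]"] * (max_len - len(seq))

print(wrap_and_pad(["hello", "world"], 6))
# ['[CLS]', 'hello', 'world', '[SEP]', '[PAD]', '[PAD]']
```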
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
tokenizer_config.json ADDED
```json
{
  "added_tokens_decoder": {
    "0": {
      "content": "[PAD]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "100": {
      "content": "[UNK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "101": {
      "content": "[CLS]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "102": {
      "content": "[SEP]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "103": {
      "content": "[MASK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "clean_up_tokenization_spaces": true,
  "cls_token": "[CLS]",
  "do_basic_tokenize": true,
  "do_lower_case": true,
  "mask_token": "[MASK]",
  "model_max_length": 512,
  "never_split": null,
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "BertTokenizer",
  "unk_token": "[UNK]"
}
```
vocab.txt ADDED
The diff for this file is too large to render. See raw diff