Set tokenizer model_max_length to 2048
As described in the FLAN-UL2 blog post, the receptive field of the model was increased from 512 to 2048.
There is also an `n_positions` field in the model config, set to 512, but I can't see where it is used in `transformers` 🤔
- tokenizer_config.json +1 -1

```diff
@@ -103,7 +103,7 @@
   ],
   "eos_token": "</s>",
   "extra_ids": 100,
-  "model_max_length": 512,
+  "model_max_length": 2048,
   "name_or_path": "google/ul2",
   "pad_token": "<pad>",
   "special_tokens_map_file": null,
```
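For illustration, here is a minimal sketch of how `model_max_length` drives truncation in `transformers`. It uses a toy word-level tokenizer built in memory rather than the actual UL2 SentencePiece model, so no Hub download is needed; the vocabulary and text are made up for the example.

```python
from tokenizers import Tokenizer
from tokenizers.models import WordLevel
from tokenizers.pre_tokenizers import Whitespace
from transformers import PreTrainedTokenizerFast

# Toy one-word vocabulary; the real google/ul2 tokenizer is SentencePiece-based.
backend = Tokenizer(WordLevel({"hi": 0, "[UNK]": 1}, unk_token="[UNK]"))
backend.pre_tokenizer = Whitespace()

# model_max_length mirrors the value stored in tokenizer_config.json.
tok = PreTrainedTokenizerFast(tokenizer_object=backend, model_max_length=2048)

# With truncation=True and no explicit max_length, sequences are cut at
# model_max_length tokens — 2048 after this change rather than 512.
ids = tok("hi " * 5000, truncation=True)["input_ids"]
assert len(ids) == 2048
```

The same value can also be overridden at load time via the `model_max_length` keyword argument to `from_pretrained`, which takes precedence over the config file.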