Tags: Text Generation · Transformers · PyTorch · mpt · Composer · MosaicML · llm-foundry · custom_code · text-generation-inference
udhavsethi committed on
Commit 2c585a5
1 Parent(s): 1fc4634

Add eos_token_id to generation config


If the model is used without `pipeline`, it shows unexpected behaviour and generates extra unneeded tokens. The `eos_token_id` has to be passed explicitly to fix this, since the generation config does not include it.
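To see why a missing `eos_token_id` produces extra tokens, here is a toy greedy-decoding loop (not the transformers API; the fake model and token ids are made up for illustration). When no EOS id is configured, the loop always runs to `max_new_tokens`; with one, it stops as soon as EOS is emitted:

```python
def toy_generate(next_token_fn, max_new_tokens, eos_token_id=None):
    """Toy greedy loop mimicking how generation stops at an EOS token."""
    tokens = []
    for _ in range(max_new_tokens):
        tok = next_token_fn(tokens)
        tokens.append(tok)
        # Without an eos_token_id, this early-exit never triggers and the
        # loop always runs the full max_new_tokens steps.
        if eos_token_id is not None and tok == eos_token_id:
            break
    return tokens

# A fake "model" that emits token 7 three times, then token 0 (EOS here).
fake_model = lambda toks: 7 if len(toks) < 3 else 0

print(toy_generate(fake_model, 10))                  # runs all 10 steps
print(toy_generate(fake_model, 10, eos_token_id=0))  # stops at EOS: [7, 7, 7, 0]
```

This mirrors the reported behaviour: the real model keeps generating past its end-of-sequence token because the generation config never told it which token is EOS.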

Files changed (1)
  1. generation_config.json +1 -0
generation_config.json CHANGED
@@ -1,5 +1,6 @@
  {
    "_from_model_config": true,
    "transformers_version": "4.28.1",
+   "eos_token_id": 0,
    "use_cache": false
  }
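For a checkpoint that is already downloaded, the same fix can be applied by patching the local file directly. A minimal stdlib-only sketch (the helper name and path are hypothetical, not part of this repo):

```python
import json
from pathlib import Path

def add_eos_token_id(config_path, eos_token_id=0):
    """Insert eos_token_id into a generation_config.json if it is missing."""
    path = Path(config_path)
    config = json.loads(path.read_text())
    # setdefault leaves an existing eos_token_id untouched.
    config.setdefault("eos_token_id", eos_token_id)
    path.write_text(json.dumps(config, indent=2))
    return config
```

For example, `add_eos_token_id("path/to/generation_config.json")` would produce the same final JSON as the diff above.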