
Fix generation pad token issue

#4

The process has encountered an error (type: ValueError).
Traceback (most recent call last):
  File "/opt/app-root/lib64/python3.11/site-packages/transformers/generation/configuration_utils.py", line 771, in save_pretrained
    raise ValueError(str([w.message for w in caught_warnings]))
ValueError: [UserWarning('pad_token_id should be positive but got -1. This will cause errors when batch generating, if there is padding. Please set pad_token_id explicitly by model.generation_config.pad_token_id=PAD_TOKEN_ID to avoid errors in generation, and ensure your input_ids input does not have negative values.')]

This PR sets pad_token_id explicitly in the model's generation config, resolving the ValueError above.
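As a minimal sketch of the fix the warning suggests (the token id 2 below is an assumption for illustration; in practice use your tokenizer's pad or eos token id, e.g. tokenizer.pad_token_id or tokenizer.eos_token_id):

```python
import tempfile

from transformers import GenerationConfig

# A generation config with the invalid pad_token_id from the traceback:
# -1 triggers the UserWarning, which save_pretrained escalates to a ValueError.
gen_config = GenerationConfig(pad_token_id=-1)

# The fix: assign a valid, non-negative token id explicitly.
# 2 is a placeholder here; substitute the tokenizer's actual pad/eos id.
gen_config.pad_token_id = 2

# Saving now succeeds without raising.
with tempfile.TemporaryDirectory() as tmp_dir:
    gen_config.save_pretrained(tmp_dir)
```

On a loaded model the equivalent one-liner is model.generation_config.pad_token_id = tokenizer.pad_token_id, applied before saving or batch generation.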

