#3 opened by NikGC
Please help with the following error. I am a student trying to run this model in Visual Studio Code, and I have no idea how to fix it.
```
FutureWarning: This tokenizer was incorrectly instantiated with a model max length of 512 which will be corrected in Transformers v5.
For now, this behavior is kept to avoid breaking backwards compatibility when padding/encoding with truncation is True.
- Be aware that you SHOULD NOT rely on t5-small automatically truncating your input to 512 when padding/encoding.
- If you want to encode/pad to sequences longer than 512 you can either instantiate this tokenizer with model_max_length or pass max_length when encoding/padding.
- To avoid this warning, please instantiate this tokenizer with model_max_length set to your preferred value.
  warnings.warn(
```
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small", model_max_length=512)  # just do it
```
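To show the fix in context, here is a minimal sketch of both options the warning mentions: setting `model_max_length` when loading the tokenizer, or passing `max_length` explicitly at encoding time. The example sentence is made up for illustration; the first run will download the `t5-small` tokenizer files.

```python
from transformers import AutoTokenizer

# Option 1: set model_max_length at load time, which also silences the FutureWarning.
tokenizer = AutoTokenizer.from_pretrained("t5-small", model_max_length=512)

# Option 2: pass max_length explicitly when encoding, instead of relying on the default.
# (Example input is hypothetical.)
batch = tokenizer(
    "translate English to German: Hello, world!",
    max_length=512,
    truncation=True,       # cut inputs longer than max_length
    padding="max_length",  # pad shorter inputs up to max_length
    return_tensors="pt",
)
print(batch["input_ids"].shape)
```

With `padding="max_length"` the encoded batch always has shape `(1, 512)`, so the model sees a fixed-size input regardless of sentence length.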