Hosted Inference API Model tokenizer.json file appears to be unavailable
I am not sure if this is the correct place for this comment/issue, but there appears to be a problem with the hosted inference API for this model. Maybe this has something to do with the update to the model files 4 days ago?
Can't load tokenizer using from_pretrained, please update its configuration: 400 Client Error:
Bad Request for url:
https://s3.us-east-1.amazonaws.com/lfs.huggingface.co/joeddav/xlm-roberta-large-xnli/62c24cdc13d4c9952d63718d6c9fa4c287974249e16b7ade6d5a85e7bbb75626?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=AKIA4N7VTDGO27GPWFUO%2F20230321%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230321T100319Z&X-Amz-Expires=259200&X-Amz-Signature=048f5a10b6d098c8ab56426e33ab4afcb243ab50853ca173454b4d41ad2fffea&X-Amz-SignedHeaders=host&response-content-disposition=attachment%3B%20filename%2A%3DUTF-8%27%27tokenizer.json%3B%20filename%3D%22tokenizer.json%22%3B&response-content-type=application%2Fjson&x-id=GetObject
I'm not sure what exactly the problem is here, but I suspect it has to do with that update. Ideas welcome.
I guess that the model files used for the inference API are automatically pushed to AWS S3 on new commits. The commit itself (5 days ago) appears to be alright, since the model files are accessible through the website on the "Files and versions" page of the model card. But even loading the model directly or via pipeline fails with the same 400 error.
Maybe some background job that pushes the model files to S3 failed. Perhaps a new commit (even one without any relevant content changes) would trigger all the processes again and make the model available.
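If it really is a stuck sync job, the empty-commit trick might be worth trying. A sketch demonstrated on a throwaway local repo (for the real fix you would run the same `git commit --allow-empty` in a clone of the model repo and push it, which requires write access; the scratch repo here is just to show the command):

```shell
# `--allow-empty` creates a commit with no file changes; pushed to the model
# repo, such a commit should re-trigger the Hub's post-commit jobs.
tmp=$(mktemp -d)
git -C "$tmp" init -q
git -C "$tmp" -c user.name=demo -c user.email=demo@example.com \
  commit --allow-empty -m "Trigger re-sync"
git -C "$tmp" log --oneline
```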
@joeddav: I do not know what you (or Hugging Face) did, but the model works again. Thanks!
Should be fixed now.