Unable to download the models
Hi guys,
Wanted to check if there is any specific reason why we are unable to download any of the small, medium, or large models, but are able to download the PII model without issue. Below is the error we are getting:
File "C:\dev.conda\envs\ds_intern1\lib\site-packages\huggingface_hub\utils\_errors.py", line 304, in hf_raise_for_status
response.raise_for_status()
File "C:\dev.conda\envs\ds_intern1\lib\site-packages\requests\models.py", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/urchade/gliner_medium-v2/resolve/main/config.json
The URL says "Entry Not Found" if we open it in any browser.
Have you installed GLiNER? (https://github.com/urchade/GLiNER)
Yes, I have. I am accessing this from my office network; could that be the issue?
I have version 0.1.11.
What is your version of transformers?
It's 4.40.1.
Maybe try with 4.38.2.
No luck mate, still the same error.
The problem comes from the config file, which is named gliner_config.json in that repo => use this repo instead: https://huggingface.co/jilijeanlouis/gliner_mediumv2.1
Hey Ji, the repo you provided worked; I no longer have the issue I started this thread for. But now I am getting a timeout when it tries to download.
urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='cdn-lfs-us-1.huggingface.co', port=443): Read timed out. (read timeout=10)
Any suggestions, please?
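One thing that sometimes helps on slow office networks (a sketch, assuming your huggingface_hub version honors this variable): raise the Hub client's read timeout from its 10 s default via the HF_HUB_DOWNLOAD_TIMEOUT environment variable, set before the library is imported.

```python
import os

# huggingface_hub reads HF_HUB_DOWNLOAD_TIMEOUT (in seconds) when making
# download requests; the default read timeout is 10 s. Set it before
# importing huggingface_hub / gliner so the new value is picked up.
os.environ["HF_HUB_DOWNLOAD_TIMEOUT"] = "60"
print(os.environ["HF_HUB_DOWNLOAD_TIMEOUT"])  # -> 60

# then, as before:
# from gliner import GLiNER
# model = GLiNER.from_pretrained("jilijeanlouis/gliner_mediumv2.1")
```

You can also set the variable in your shell profile instead of in code; both reach the same setting.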
While initializing a tokenizer for the GLiNER PII model, I am getting the error: "OSError: urchade/gliner_multi_pii-v1 does not appear to have a file named config.json."
Hi, you don't need to initialize a tokenizer for this model. Use:
model = GLiNER.from_pretrained("urchade/gliner_multi_pii-v1")
The following is the code I used:
from gliner import GLiNER
from transformers import AutoTokenizer
model = GLiNER.from_pretrained("urchade/gliner_multi_pii-v1")
tokenizer = AutoTokenizer.from_pretrained("urchade/gliner_multi_pii-v1") # Initialize tokenizer
tokenizer(sentence, truncation=True, max_length=512)
Below is the error:
OSError: urchade/gliner_multi_pii-v1 does not appear to have a file named config.json. Checkout 'https://huggingface.co/urchade/gliner_multi_pii-v1/tree/main' for available files.
Actually, after running the code I was getting "Asking to truncate to max_length but no maximum length is provided and the model has no predefined maximum length. Default to no truncation." in the output, so to get rid of that I initialized the tokenizer.
I mean the tokenizer is already inside the model.
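For reference, here is a minimal sketch of using the model without any manual tokenizer, via GLiNER's predict_entities method (the label names and example sentence are illustrative, and the import is guarded so the snippet runs even where gliner isn't installed):

```python
import importlib.util

labels = ["person", "email", "phone number"]  # illustrative PII labels

if importlib.util.find_spec("gliner") is None:
    print("gliner not installed; run: pip install gliner")
else:
    # GLiNER bundles its own tokenizer internally, so raw text goes
    # straight in; no AutoTokenizer and no config.json are involved.
    from gliner import GLiNER
    model = GLiNER.from_pretrained("urchade/gliner_multi_pii-v1")
    for ent in model.predict_entities("Contact John Doe at john@example.com", labels):
        print(ent["text"], "=>", ent["label"])
```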
OK. Can you tell me how I can get rid of the "no truncation" warning?
You can ignore the warning:
import warnings
warnings.filterwarnings("ignore")
Got it! Thank you.
I am able to run the model successfully, but while running it I get a message that the string of some length (some number) was truncated to max_length = 384.
Please help me change this default value. I have changed max_length = 384 in the config.json file, but I am still facing the same error.
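Two thoughts. First, as noted earlier in the thread, the config in these repos is named gliner_config.json rather than config.json, and the key may be max_len rather than max_length depending on your gliner version, so an edit to config.json may simply not be picked up. Second, until the limit itself is raised, a common workaround is to split long input into word windows below the limit and run the model on each chunk; here is a sketch (chunk_words is a hypothetical helper, not part of the library, and 300 words is a rough safety margin under the 384-token limit):

```python
def chunk_words(text, max_words=300):
    """Split text into chunks of at most max_words whitespace-separated
    words, keeping a safety margin below the model's 384-token limit."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

long_text = "word " * 1000           # stand-in for a long document
chunks = chunk_words(long_text)
print(len(chunks))                   # -> 4 (300 + 300 + 300 + 100 words)

# then run each chunk through the model and merge the results:
# entities = []
# for chunk in chunks:
#     entities.extend(model.predict_entities(chunk, labels))
```

Note that word count is only a proxy for token count; with subword tokenization a chunk can still exceed the limit, so pick the margin conservatively.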