Pneuma-8B Didn't Evaluate?

#986
by Kquant03 - opened

I submitted a new model from my organization a few days ago, and it appears as though it did not evaluate at all...

I'm not sure what needs to be changed, as it's working on my front end and seems to be fine... 😨

https://huggingface.co/Replete-AI/L3-Pneuma-8B

Open LLM Leaderboard org

Hi @Kquant03,

As described in our FAQ, could you please provide the request file for your submission?

https://huggingface.co/datasets/open-llm-leaderboard/requests/blob/main/Replete-AI/L3-Pneuma-8B_eval_request_False_bfloat16_Original.json

here you are, sir :)
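
For anyone looking for their own request file later, here is a rough sketch of fetching and inspecting it with huggingface_hub (using this submission's filename as an example; this is not part of the leaderboard's own tooling):

```python
# Minimal sketch: pull a request file from the open-llm-leaderboard/requests
# dataset and print its contents. The filename below is this submission's.
import json

from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="open-llm-leaderboard/requests",
    repo_type="dataset",
    filename="Replete-AI/L3-Pneuma-8B_eval_request_False_bfloat16_Original.json",
)

with open(path) as f:
    request = json.load(f)

# Shows fields such as "status", "precision", and "use_chat_template".
print(json.dumps(request, indent=2))
```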

Open LLM Leaderboard org

Thank you for providing the request file!

There is a problem with "use_chat_template": true:

[rank4]: AttributeError: 'PreTrainedTokenizerFast' object has no attribute 'default_chat_template'. Did you mean: 'get_chat_template'?

As far as I can see, there is no chat_template defined in the tokenizer_config.json. I can resubmit the model with "use_chat_template": false if you agree.
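
For future submissions, here is a rough sketch (assuming a recent transformers release) of how to check locally whether a tokenizer defines a chat_template before enabling "use_chat_template":

```python
# Minimal sketch: verify that the tokenizer ships a chat template before
# submitting with "use_chat_template": true.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Replete-AI/L3-Pneuma-8B")

if getattr(tokenizer, "chat_template", None) is None:
    # No template in tokenizer_config.json; recent transformers versions
    # removed the default_chat_template fallback, which is what produces
    # the AttributeError above.
    print("No chat_template found, submit with use_chat_template: false")
else:
    messages = [{"role": "user", "content": "Hello!"}]
    print(tokenizer.apply_chat_template(messages, tokenize=False))
```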

Yes, please... thank you for your kindness.

Open LLM Leaderboard org

Done! I've resubmitted your model.

I'm closing this issue. Please ping me here if there are any further problems with this model, or open a new discussion.

alozowski changed discussion status to closed
