Looks like someone else submitted sambanovasystems/SambaLingo-Arabic-Base with the wrong precision.
Pictured below is sambanovasystems/SambaLingo-Arabic-Base submitted with FP32 precision. I re-submitted it with the correct bf16 precision, so now both are in the queue.
@zolicsaki
Thank you for keeping an eye on the leaderboard 🤗
I see you are a member of SambaNovaSystems, glad to have you here. As for SambaLingo-Arabic-Base, I believe the correct precision is indeed float32; I simply checked the config here. So I will remove the new bf16 submission from the requests. That said, I saw that you merged my (automated) safetensors PR: did that PR change the 70B version from float32 to bf16? I now see it as bf16, but I remember it being float32!? Anyway, please feel free to add these models to the queue with the correct precision and I'll make sure to delete the wrong ones 🤗
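For reference, the declared precision lives in the `torch_dtype` field of a model's `config.json`, and that is the value a leaderboard submission should match. A minimal sketch of checking it (the config content here is illustrative, not copied from the actual repo):

```python
import json

# Illustrative excerpt of a config.json; real configs carry many more fields.
config_text = '{"model_type": "llama", "torch_dtype": "float32"}'

config = json.loads(config_text)
print(config["torch_dtype"])  # float32
```

For a hosted model, the same field can be read from the `config.json` file in the repo, or via `AutoConfig.from_pretrained(...)` in `transformers`.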
@Ali-C137 Thank you so much - all the models in the queue look correct now
@Ali-C137 Hey just checked back in and it looks like the queue has completed, but the SambaLingo models evaluation results are not there, any ideas on why? Thank you so much!
Also, just curious: are chat templates applied to chat models when running the evaluation?
Dear @zolicsaki, unfortunately about 50 models failed to be evaluated. We are investigating the matter and will fix it on our side if we can; otherwise, we will contact the models' authors with insights on what needs to be fixed on their side.
@Ali-C137 Thank you! I am the author of these SambaLingo models - please let me know if you need anything
@zolicsaki
SambaLingo-Arabic-Chat is on the leaderboard 🔥
The base model is still under maintenance and will join the queue soon 🤗
@Ali-C137 Thank you so much! Are the 70B parameter versions also going to make it on there?
@zolicsaki We are trying to get every model onto the leaderboard. I will personally contact you if we run into an issue with one of your models that we can't resolve.
Dear @zolicsaki
You can always check the status here: https://huggingface.co/datasets/OALL/requests/blob/main/sambanovasystems/SambaLingo-Arabic-Base-70B_eval_request_False_float32_Original.json
It is running and we expect it to land by tomorrow, since bigger models have succeeded in the last couple of days... though we can't guarantee anything yet, as we encountered some strange errors with other llama2-based models.
Hi dear @zolicsaki
Apparently the 70B models at float32 precision require far more time than allowed! We will therefore need you to provide a float16 or bfloat16 version of the model so that we can evaluate it on time. We could always cast it ourselves, but we are afraid that might confuse your model's users, so it would be better if you provided the half-precision version.
Please let us know what works better for you and we would be happy to help.
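Producing a half-precision copy is essentially a dtype cast over the weights. A minimal sketch using a toy torch layer as a stand-in for the real checkpoint (the layer here is hypothetical, only the casting pattern carries over):

```python
import torch
import torch.nn as nn

# Toy layer standing in for a model checkpoint; parameters start as float32.
layer = nn.Linear(8, 8)
assert layer.weight.dtype == torch.float32

# Cast every parameter to bfloat16.
layer_bf16 = layer.to(torch.bfloat16)
print(layer_bf16.weight.dtype)  # torch.bfloat16
```

For an actual checkpoint, the equivalent in `transformers` would be loading with `AutoModelForCausalLM.from_pretrained(..., torch_dtype=torch.bfloat16)` and then calling `save_pretrained` on the result; treat that as a sketch too, since casting a 70B model this way needs a large amount of memory.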