🚩 Report: Ethical Issues
They are saying that they fine-tuned their models on llama2, but I was pretty sure that they didn't. Marcoroni-13B and Luban-13B were fine-tuned on Open-Orca/OpenOrcaxOpenChat-Preview2-13B, and Marcoroni-7B was fine-tuned on orca_mini_v3_7b. To confirm this information, I looked at @TheBloke 's models, and I was right.
You can visit these links for reference:
Regarding their 70B model, I can't confirm it with @TheBloke 's model, but I think it was fine-tuned on sheep-duck-llama-2. (I am not sure exactly which model, but I am sure it was not llama2.)
My question:
- Are you trying to hide your base model? If so, why?
Tagging you:
@AIDC-ai-business
Regarding their 70b model you can see it is finetuned on sheep-duck-llama-2 here:
Tagging you again:
@AIDC-ai-business @xxyyy123
Can you check the replies here? @mhemetfaik
Regarding your mention of the issue with Marcoroni-70B, I don't quite understand what you're trying to explain. This request appears to be about someone else submitting the results of Marcoroni-70B. For more details, you can consult @clefourrier .
Hello,
You are using different techniques to hide your base model. Additionally, @TheBloke noticed this issue, and you blamed the servers.
Marcoroni-70B was DEFINITELY fine-tuned on sheep-duck-llama. Are you trying to imply that it is not? (Your 13B and 7B models have significantly more evidence than your 70B model. Please respond regarding these models as well.)
Regarding your screenshot, I am attempting to respond professionally, but please refrain from mentioning this "bug" issue, etc. You failed to conceal your content. Simply acknowledge that the base models are different and give credit to users.
Another piece of evidence regarding your screenshot: I observed another model that employs a TIES merge (with OpenOrcaxOpenChat-Preview2 as its base model). If you examine the evaluation results, they are very close to your Marcoroni and Luban models (within about 0.01). This cannot all be a coincidence.
Ties merge model:
https://huggingface.co/PulsarAI/Luban-Marcoroni-13B-v1
Tagging you and your very unrelated (!) account:
@AIDC-ai-business @xxyyy123
@mhemetfaik
https://huggingface.co/AIDC-ai-business/Marcoroni-13B/tree/main This repo was created 2 days ago, and this thread started 4 days ago. TheBloke's https://huggingface.co/TheBloke/Marcoroni-13B-GPTQ/tree/main was created 8 days ago, which means Marcoroni-13B was deleted and re-uploaded so that there is no previous commit history.
Also check out https://huggingface.co/AIDC-ai-business/Luban-13B/blob/main/config.json at commit 3d04531dbb7a25730aa4d7f6d67cd8ca5d3d789f, which showed its model name as LuBan 7B. All of this is weird.
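For reference, once a repo is cloned, plain git can show a file exactly as it existed at any commit. Below is a minimal local sketch (the repo, file contents, and commit messages are all invented for the demo; the same `git show` / `git log` commands work inside a clone of any model repo):

```shell
# Sketch: inspecting a file at a specific commit with plain git.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git config user.email demo@example.com
git config user.name demo
printf '{"_name_or_path": "old-model-name"}\n' > config.json
git add config.json && git commit -qm "add config"
old=$(git rev-parse HEAD)   # stands in for a real commit hash
printf '{"_name_or_path": "new-model-name"}\n' > config.json
git commit -qam "rename model"
# File content as it existed at the older commit:
git show "$old":config.json
# Every commit that touched config.json:
git log --oneline -- config.json
```

This only works while the old commits are still reachable in the repo's history, which is exactly why a rewritten or re-uploaded repo is suspicious.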
The first version is what we are talking about. (They have released a newer version of the repository with an enhanced version.) The old repository itself was opened 15 days ago. However, it's unusual that the configuration was updated just 2 days ago. What's even weirder is that they seem to have concealed their commit history as well! I suspect they may have used git commands such as git reset --hard followed by a force push to achieve this. (I do not have much info about git, sorry.)
First version:
https://huggingface.co/AIDC-ai-business/Marcoroni-13B-v0/tree/main
This is getting very interesting (I laughed pretty hard when they blamed the Hugging Face servers). Requesting to flag these models @SaylorTwift @clefourrier
Tagging you and your very unrelated (!) account:
@AIDC-ai-business @xxyyy123
Sorry tagging you too:
@mhemetfaik
Git is a distributed version control system. Anyone who can push local commit history to a remote repository only needs to force-push it, e.g. git push --force-with-lease,
so that the remote repo is overwritten to match the local state. I am not sure if someone still has the original version of Marcoroni-13B locally, which could show the original commits.
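To illustrate, here is a minimal local sketch of how a force-push rewrites remote history (everything is invented for the demo: a bare repo stands in for the remote, and the file contents and commit messages are placeholders):

```shell
# Local demonstration of rewriting remote history via force-push.
# A bare repo stands in for the remote (e.g. a hosted model repo).
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q --bare remote.git
git clone -q remote.git work 2>/dev/null
cd work
git config user.email demo@example.com
git config user.name demo
echo "base_model: some-finetuned-model" > config.json
git add config.json && git commit -qm "first commit"
echo "base_model: llama-2" > config.json
git commit -qam "second commit"
git push -q origin HEAD
# Rewrite history: drop the latest commit locally, then force-push.
git reset -q --hard HEAD~1
git push -q --force-with-lease origin HEAD
# The remote now contains only "first commit"; the dropped commit
# is no longer reachable from any remote ref.
git -C ../remote.git log --oneline --all
```

After the force-push, anyone cloning the remote sees only the rewritten history; the discarded commit survives only in clones made before the push.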
After investigation, we decided to flag the model. This decision was made after observing weird behavior from the model's owners (actively trying to hide which model was used for fine-tuning and deleting community members' comments).
Also, the quantized version of their model shows that the original config was using another model's files, and conscious efforts were made to hide that.
It appears that the models were indeed not fine-tuned on llama-2, but on other, already fine-tuned models like Open-Orca/OpenOrcaxOpenChat-Preview2-13B.