Hello & Congrats from Hugging Face! 🤗
Hi @woodchen7 , @svanlin-tencent ,
I'm VB. I work with the Open-Source team at Hugging Face. Congratulations on the release! The model looks impressive.
I'm contacting you to help better organise the model checkpoints on the Hub. Currently, the Pre-trained, Instruct, and FP8 checkpoints are all merged into the same repository. We strongly discourage this, as it makes model discovery quite tricky. Our recommendation would be to create three separate repositories, one for each model type: Pre-train, Instruct, and FP8.
This would allow for a couple of things:
- People would be able to search and discover the repos individually
- You'd be able to track downloads for individual repos
- Each repo would show up with its inference snippet
You would also be able to organise separate model checkpoints as collections like this: https://huggingface.co/collections/Qwen/qwen25-66e81a666513e518adb90d9e
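If it helps, here's a rough sketch of scripting that with huggingface_hub (the collection title and repo ids below are placeholders, not the final names):

```python
# Rough sketch: create a collection and add the split repos to it.
# Title and repo ids are placeholders for whatever the split repos end up being called.
from huggingface_hub import create_collection, add_collection_item

collection = create_collection(
    title="Hunyuan-Large",
    namespace="tencent",
    description="Pre-train, Instruct, and FP8 checkpoints",
    exists_ok=True,
)
for repo_id in [
    "tencent/Hunyuan-A52B-Pretrain",
    "tencent/Hunyuan-A52B-Instruct",
    "tencent/Hunyuan-A52B-Instruct-FP8",
]:
    add_collection_item(collection.slug, item_id=repo_id, item_type="model")
```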
Let me know if you need any help with this! 🤗
VB from Hugging Face
In the meantime, I am uploading here: https://huggingface.co/tencent-community
Hi VB, thanks for your kind reminder. Do you have an email address we could use for further technical and management queries? Thanks @reach-vb
I'm very glad to see domestic companies investing in and releasing large models, and I thank the engineers for their efforts. To make it easier for developers to use Tencent's large model for more interesting downstream tasks, could you provide more detailed tutorials?
Hi @TencentOpen - Thank you for your response. Please feel free to email me at vb [at] hf [dot] co
Would be great to have a safetensors version of the 16-bit model so we can run it in MLX LM!
I will try to convert it, though I don't know how yet.
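Something like this might work, assuming the original checkpoint loads through transformers with its custom code (trust_remote_code) and the machine has enough RAM and disk for a model this size:

```python
# Minimal sketch: load the checkpoint with transformers and re-save it with
# safetensors serialization. The source repo id and output path are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

src = "tencent/Tencent-Hunyuan-Large"          # assumed source repo id
dst = "./Hunyuan-A52B-Instruct-safetensors"    # local output folder

tokenizer = AutoTokenizer.from_pretrained(src, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    src, torch_dtype=torch.bfloat16, trust_remote_code=True
)
tokenizer.save_pretrained(dst)
model.save_pretrained(dst, safe_serialization=True, max_shard_size="5GB")
```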
I have uploaded:
https://huggingface.co/tencent-community/Hunyuan-A52B-Instruct-FP8
https://huggingface.co/tencent-community/Hunyuan-A52B-Instruct
Currently uploading:
https://huggingface.co/tencent-community/Hunyuan-A52B-Pretrain
I converted Hunyuan-A52B-Pretrain to Hugging Face safetensors.
I moved the original to https://huggingface.co/tencent-community/Hunyuan-A52B-Pretrain-original
I uploaded the safetensors version to https://huggingface.co/tencent-community/Hunyuan-A52B-Pretrain
I am now working on converting the Instruct model.
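For the upload step, something along these lines should work with huggingface_hub, assuming a write token is configured and the converted files sit in a local folder (the folder path is a placeholder):

```python
# Sketch of pushing a converted folder to the Hub; the folder path is hypothetical.
from huggingface_hub import HfApi

api = HfApi()
api.create_repo("tencent-community/Hunyuan-A52B-Instruct", repo_type="model", exist_ok=True)
api.upload_folder(
    repo_id="tencent-community/Hunyuan-A52B-Instruct",
    folder_path="./Hunyuan-A52B-Instruct-safetensors",
    repo_type="model",
)
```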
Thanks @ehartford !! I will try quantizing and running the instruct as soon as it's ready.
It is ready
https://huggingface.co/tencent-community/Hunyuan-A52B-Instruct
I haven't tested it yet! (I don't have a machine strong enough)
It's tested now, thanks to @awni for converting it to MLX!
It runs at over 30 tokens/sec on an M2 Ultra with 192 GB!
https://huggingface.co/mlx-community/Hunyuan-A52B-Instruct-3bit
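For anyone who wants to try it, a quick sketch with mlx-lm (assuming it's installed via pip and your Mac has enough unified memory for the 3-bit weights):

```python
# Quick sketch: load the 3-bit MLX conversion and generate a short reply.
# Assumes `pip install mlx-lm` on Apple silicon.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Hunyuan-A52B-Instruct-3bit")
prompt = "Write a short haiku about mixture-of-experts models."
print(generate(model, tokenizer, prompt=prompt, max_tokens=128))
```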