Taiwan-LLM 7B v2.1 GGUF support request
#3
by ChrisTorng · opened
Thanks for your work.
Taiwan-LLM 7B v2.1 released: https://huggingface.co/yentinglin/Taiwan-LLM-7B-v2.1-chat
Are you going to make a GGUF quantized model? I need one because of its smaller GPU VRAM requirement. Thanks in advance.
Hi! I have not yet received approval from @yentinglin for repo access, and judging from https://huggingface.co/yentinglin/Taiwan-LLM-7B-v2.1-chat/discussions/1 it might be prudent to wait a bit until testing and evaluation are done.
Meanwhile, https://huggingface.co/audreyt/Taiwan-LLM-13B-v2.0-chat-GGUF is up; perhaps give its smaller quantized versions a try?
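If you want to try one of the smaller quants while waiting, a minimal sketch with `huggingface-cli` and llama.cpp's `main` binary might look like this. The `Q4_K_M` file name is an assumption; check the repo's "Files" tab for the exact names available.

```shell
# Download one quantized file from the repo (the Q4_K_M file name is
# an assumption -- check the repo's file list for the exact names).
huggingface-cli download audreyt/Taiwan-LLM-13B-v2.0-chat-GGUF \
  Taiwan-LLM-13B-v2.0-chat-Q4_K_M.gguf --local-dir .

# Run it with llama.cpp; -ngl offloads that many layers to the GPU,
# so a partial offload can fit within smaller VRAM.
./main -m Taiwan-LLM-13B-v2.0-chat-Q4_K_M.gguf -ngl 20 -p "你好"
```

Lower quant levels (e.g. Q4 vs Q8) trade some quality for a smaller memory footprint, and lowering `-ngl` keeps more layers on the CPU when VRAM is tight.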
audreyt changed discussion status to closed
Just approved all the requests :)