Move to in-library checkpoint
#2
by Rocketknight1 (HF staff) - opened
No description provided.
Rocketknight1 changed pull request status to open
I've seen on Reddit that some users are able to run InternLM as a Llama model: https://www.reddit.com/r/LocalLLaMA/comments/17d7muj/lol_why_did_this_work_on_internlm_20b/
Does this pull request work? It would be nice to have it working with ExLlama for faster inference.
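For context, here is a minimal sketch of what the trick in that Reddit thread amounts to: editing the checkpoint's `config.json` so that `transformers` loads it through its in-library Llama classes instead of the repo's remote code. This assumes the checkpoint's weight layout is already Llama-compatible; the repo id and config edits below come from the thread's description, not from this PR, so treat them as assumptions.

```python
# Sketch of the "load InternLM as Llama" trick, under the assumption that
# the checkpoint's weight names already match the Llama layout.
import json
from pathlib import Path

from huggingface_hub import snapshot_download
from transformers import AutoModelForCausalLM, LlamaTokenizer

# Illustrative repo id; substitute the actual checkpoint.
local_dir = Path(snapshot_download("internlm/internlm-20b", local_dir="internlm-20b"))

# Point the config at the in-library Llama implementation.
config_path = local_dir / "config.json"
config = json.loads(config_path.read_text())
config["model_type"] = "llama"
config["architectures"] = ["LlamaForCausalLM"]
config_path.write_text(json.dumps(config, indent=2))

# With the config rewritten, no trust_remote_code is needed for the model.
model = AutoModelForCausalLM.from_pretrained(local_dir)
# The checkpoint ships a sentencepiece tokenizer.model, which the Llama
# tokenizer can usually read directly (an assumption, not verified here).
tokenizer = LlamaTokenizer.from_pretrained(local_dir)
```

If the in-library checkpoint from this PR works the same way, tooling that only understands Llama-architecture configs (such as ExLlama) should be able to pick it up without the config edit above.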