How to finetune locally downloaded llama-2-7b-chat.Q4_0.gguf
#8 opened 8 months ago by Sridevi17j
[AUTOMATED] Model Memory Requirements
#7 opened 9 months ago by model-sizer-bot
[AUTOMATED] Model Memory Requirements
#6 opened 9 months ago by model-sizer-bot
404 error
4 replies · #5 opened about 1 year ago by mrbigs
How do I convert flan-t5-large model to GGUF? Already tried convert.py from llama.cpp
5 replies · #3 opened about 1 year ago by niranjanakella
Maximum context length (512)
7 replies · #2 opened about 1 year ago by AsierRG55