Update to Llama 3.1
#2
by kuliev-vitaly - opened
This model is better than the 4-bit quants! Please make another quant of Llama 3.1 70B.