How do you estimate the number of GPUs required to run this model? (1 reply) · #29 opened 7 months ago by vishjoshi
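A rough guide for questions like #29: with GGUF models, required memory is approximately the file size on disk plus a few GB for the KV cache and runtime buffers. The sketch below encodes that rule of thumb; the ~26 GB figure for the Q4_K_M file and the 2 GB overhead are assumptions, not numbers from the thread.

```python
import math

def estimate_gpus(gguf_size_gb: float,
                  overhead_gb: float = 2.0,
                  vram_per_gpu_gb: float = 24.0) -> int:
    """GPUs needed to hold the whole model in VRAM (full offload)."""
    return math.ceil((gguf_size_gb + overhead_gb) / vram_per_gpu_gb)

# Example: the Q4_K_M file is roughly 26 GB on disk, so on 24 GB cards
# (e.g. RTX 3090/4090) a full offload needs about two GPUs.
print(estimate_gpus(26.0))  # -> 2
```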
Ollama Modelfile · #28 opened 9 months ago by noix
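For #28, a minimal Ollama Modelfile pointing at a local GGUF download might look like the sketch below; the file name, template, and parameters are assumptions to adapt, not the thread's answer.

```
# Modelfile sketch (assumed file name; adjust to your download)
FROM ./mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf

# Mixtral-Instruct prompt format
TEMPLATE """[INST] {{ .Prompt }} [/INST]"""

PARAMETER temperature 0.7
PARAMETER num_ctx 4096
```

Build and run it with `ollama create mixtral-local -f Modelfile`, then `ollama run mixtral-local`.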
Help: CUDA out of memory. Hardware Requirements. · #27 opened 9 months ago by zebfreeman
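Threads #27 and #20 describe the same failure mode: the model does not fit in VRAM. llama.cpp supports partial offload, keeping only some layers on the GPU, and a llama-cpp-python sketch of that follows; the model path and layer count are assumptions to tune down until the OOM disappears.

```python
from llama_cpp import Llama

# Partial GPU offload: keep only some layers in VRAM, the rest in RAM.
# Lower n_gpu_layers (or n_ctx) if "CUDA out of memory" persists.
llm = Llama(
    model_path="./mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf",  # assumed path
    n_gpu_layers=16,  # Mixtral has 32 layers; -1 offloads all of them
    n_ctx=2048,       # a smaller context window also saves VRAM
)

out = llm("[INST] Say hello. [/INST]", max_tokens=64)
print(out["choices"][0]["text"])
```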
CUDA error: the provided PTX was compiled with an unsupported toolchain. · #26 opened 9 months ago by parvezkhan
THANK YOU!! · #25 opened 10 months ago by bobba84
No K_S models? · #24 opened 10 months ago by Nafnlaus
Hardware Requirements for Q4_K_M (1 reply) · #23 opened 10 months ago by ShivanshMathur007
Can we fine-tune this GGUF model for our custom needs? · #22 opened 10 months ago by auralodyssey
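On #22: GGUF is an inference format, so fine-tuning is normally done on the original fp16 weights (e.g. with LoRA), after which the merged model is re-converted and re-quantized with llama.cpp. A hedged command sketch; the script names match llama.cpp builds of this model's era and both paths are placeholders:

```
python convert.py ./my-finetuned-mixtral --outfile mixtral-ft-f16.gguf --outtype f16
./quantize mixtral-ft-f16.gguf mixtral-ft-Q4_K_M.gguf Q4_K_M
```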
Download Error when deploying to SageMaker (3 replies) · #21 opened 10 months ago by csanchez-aureum
Getting a runtime error when loading with llama-cpp in an HF Space with Nvidia A10G Large · #20 opened 11 months ago by Isaid-Silver
Q6_K version is broken (7 replies) · #19 opened 11 months ago by tankstarwar
Snake.py with pygame works!! · #16 opened 12 months ago by robert1968
Works with the current oobabooga version. (5 replies) · #15 opened 12 months ago by robert1968
Issue with GPU Utilization in Colab Notebook (18 replies) · #14 opened 12 months ago by Sagar3745
Update README.md · #13 opened 12 months ago by pavben
8x7B (Q3) vs 7B (5 replies) · #12 opened 12 months ago by vidyamantra
Update README.md (2 replies) · #10 opened 12 months ago by MaZeNsMz
Why is the response slower than the 70B model? (7 replies) · #9 opened 12 months ago by shalene
Even this excellent high-end model doesn't follow my instructions (5 replies) · #8 opened 12 months ago by alexcardo
Behaviour with AMD GPU offload? (1 reply) · #7 opened 12 months ago by thigger
Chatbot giving weird responses (20 replies) · #6 opened 12 months ago by hammad93
KCPP frankenstein experimental release for Mixtral · #5 opened 12 months ago by Nexesenex
Issue with Mixtral-8x7B-Instruct-v0.1-GGUF Model: 'blk.0.ffn_gate.weight' Tensor Not Found (4 replies) · #4 opened 12 months ago by littleworth
error (9 replies) · #3 opened 12 months ago by LaferriereJC
WOW - best open-source LLM I have ever seen! (60 replies) · #1 opened 12 months ago by mirek190