Adding `safetensors` variant of this model
#29 opened 7 months ago by SFconvertbot
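Once the safetensors weights land, loading them is a one-flag change; a minimal sketch, where `use_safetensors=True` makes transformers error out rather than fall back to the pickle-based `.bin` files:

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "mosaicml/mpt-30b-chat",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,   # MPT ships custom modeling code
    use_safetensors=True,     # refuse to load pickle .bin weights
)
```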
How to use all GPUs?
#19 opened about 1 year ago by fuliu2023
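A minimal sketch of the usual answer, assuming `accelerate` is installed: `device_map="auto"` shards the checkpoint across every visible GPU.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mosaicml/mpt-30b-chat")
model = AutoModelForCausalLM.from_pretrained(
    "mosaicml/mpt-30b-chat",
    torch_dtype=torch.bfloat16,
    device_map="auto",        # spread layers across all available GPUs
    trust_remote_code=True,
)

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```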
How to get only the model's answer as output
#18 opened about 1 year ago by ibrim
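`generate()` returns the prompt tokens followed by the continuation, so the fix is to slice the prompt off before decoding. A minimal sketch, reusing the `model` and `tokenizer` from the #19 sketch above:

```python
prompt = "<|im_start|>user\nWhat is MPT?<|im_end|>\n<|im_start|>assistant\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)

# Drop the echoed prompt tokens; decode only the newly generated answer.
answer_ids = output_ids[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(answer_ids, skip_special_tokens=True))
```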
Why is inference very slow?
#17 opened about 1 year ago by hanswang73
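The most common culprits are fp32 weights, silent CPU offload, and a disabled KV cache. A short checklist sketch, continuing from the #19 loading sketch above (which already uses bf16 and `device_map="auto"`):

```python
# 1. Confirm nothing was silently offloaded to CPU or disk.
print(model.hf_device_map)  # any "cpu" or "disk" entries will be very slow

# 2. Keep the KV cache on so each step reuses past attention states.
output_ids = model.generate(**inputs, max_new_tokens=128, use_cache=True)
```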
mosaicml/mpt-30b-chat on SageMaker ml.p3.8xlarge
#16 opened over 1 year ago by markdoucette · 1 comment
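A minimal deployment sketch with the SageMaker Hugging Face DLC; the role, framework versions, and env vars are assumptions to adapt. Note that ml.p3.8xlarge offers 4x V100-16GB (64 GB total), which is tight for 30B parameters even in fp16, and MPT's custom modeling code may additionally require a custom inference script (see #3 below).

```python
from sagemaker.huggingface import HuggingFaceModel

hf_model = HuggingFaceModel(
    env={"HF_MODEL_ID": "mosaicml/mpt-30b-chat", "HF_TASK": "text-generation"},
    role="<your-sagemaker-execution-role-arn>",  # placeholder
    transformers_version="4.28",  # assumption: use versions your DLC supports
    pytorch_version="2.0",
    py_version="py310",
)
predictor = hf_model.deploy(initial_instance_count=1,
                            instance_type="ml.p3.8xlarge")
print(predictor.predict({"inputs": "Hello!"}))
```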
How to fine-tune mpt-30b-chat?
#15 opened over 1 year ago by nandakishorej8
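MosaicML's own fine-tuning path is the mosaicml/llm-foundry repo; for a lighter-weight alternative, a LoRA sketch with `peft`. That MPT's fused attention projection is named `Wqkv` is an assumption to verify against the checkpoint's modeling code:

```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "mosaicml/mpt-30b-chat",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)
lora_cfg = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["Wqkv"],   # assumption: MPT's fused q/k/v projection
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only the small adapter matrices train
```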
Decreased performance with the recently updated model?
#14 opened over 1 year ago by Roy-Shih · 4 comments
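Whatever the cause, comparisons become reproducible by pinning `revision` to a specific commit from the repo's history instead of tracking `main`; the hash below is a placeholder, not a real commit:

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "mosaicml/mpt-30b-chat",
    revision="<commit-sha-from-the-repo-history>",  # placeholder
    trust_remote_code=True,
)
```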
The model keeps generating up to the maximum length with no EOS token
#13 opened over 1 year ago by xianf · 1 comment
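A sketch of the usual fix, reusing the #19 model and tokenizer: stop generation at the ChatML end-of-turn tag rather than waiting for EOS. That `<|im_end|>` exists as a single token in this tokenizer is an assumption to check:

```python
stop_id = tokenizer.convert_tokens_to_ids("<|im_end|>")  # assumption: one token

output_ids = model.generate(
    **inputs,
    max_new_tokens=256,
    eos_token_id=stop_id,                  # halt at end-of-turn, not max length
    pad_token_id=tokenizer.eos_token_id,   # silence the missing-pad warning
)
```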
Could you provide a prompt template?
#11 opened over 1 year ago by enzii
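The chat variant was trained on ChatML-style conversations, so a sketch of the expected template (verify the exact tags against the model card):

```python
system_msg = "You are a helpful assistant."
user_msg = "Write a haiku about GPUs."

prompt = (
    f"<|im_start|>system\n{system_msg}<|im_end|>\n"
    f"<|im_start|>user\n{user_msg}<|im_end|>\n"
    f"<|im_start|>assistant\n"   # generation continues from here
)
```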
Different results in the hosted chat and locally
#10 opened over 1 year ago by KarBik · 2 comments
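Hosted demos typically sample while local scripts often decode greedily, so outputs diverge. A sketch, reusing the #19 setup: fix the seed and sampling parameters on both sides to compare like with like (the values below are assumptions):

```python
from transformers import set_seed

set_seed(42)  # reproducible sampling on your side
output_ids = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,  # assumption: match the demo's settings if known
    top_p=0.9,
)
```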
How to load the model with multiple GPUs?
#9 opened over 1 year ago by Sven00
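Beyond `device_map="auto"` (see the #19 sketch above), `max_memory` caps what each GPU may hold, which helps when GPU 0 must also keep room for activations. A sketch assuming two 40 GB cards:

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "mosaicml/mpt-30b-chat",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    max_memory={0: "35GiB", 1: "40GiB"},  # assumption: leave headroom on GPU 0
    trust_remote_code=True,
)
```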
How many GPUs does this model need to run this fast?
#8 opened over 1 year ago by edureisMD · 3 comments
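As a rough back-of-the-envelope estimate: 30B parameters × 2 bytes (bf16/fp16) ≈ 60 GB of weights, plus a few GB for the KV cache and activations. That suggests one 80 GB A100, two 40-48 GB cards, or four 24 GB cards as realistic minimums, with proportionally less memory needed under 8-bit or 4-bit quantization.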
The model is extremely slow in 4-bit; is my loading code OK?
#7 opened over 1 year ago by zokica
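A minimal 4-bit loading sketch with `bitsandbytes`. The usual causes of very slow 4-bit inference are fp32 compute (no `bnb_4bit_compute_dtype` set) or layers silently offloaded to CPU:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # without this, compute runs in fp32
)
model = AutoModelForCausalLM.from_pretrained(
    "mosaicml/mpt-30b-chat",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
print(model.hf_device_map)  # "cpu" entries mean offload, i.e. slow
```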
How to run on Colab's CPU?
#4 opened over 1 year ago by deepakkaura26 · 7 comments
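CPU-only loading is straightforward in principle, but 30B parameters need roughly 60 GB of RAM in bf16 (about double in fp32), far beyond a standard Colab runtime; a sketch for a high-memory machine:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mosaicml/mpt-30b-chat")
model = AutoModelForCausalLM.from_pretrained(
    "mosaicml/mpt-30b-chat",
    torch_dtype=torch.bfloat16,   # fp32 would roughly double the footprint
    low_cpu_mem_usage=True,       # stream weights in instead of double-buffering
    trust_remote_code=True,
)  # no device_map: everything stays on the CPU
```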
SageMaker endpoint inference script help?
#3 opened over 1 year ago by varuntejay · 1 comment
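A minimal sketch of a custom `inference.py` for the SageMaker Hugging Face inference toolkit. `model_fn` and `predict_fn` are the toolkit's override hooks; the bodies are assumptions to adapt:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def model_fn(model_dir):
    # Called once at container start; model_dir holds the unpacked artifacts.
    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    model = AutoModelForCausalLM.from_pretrained(
        model_dir,
        torch_dtype=torch.bfloat16,
        device_map="auto",
        trust_remote_code=True,
    )
    return model, tokenizer

def predict_fn(data, model_and_tokenizer):
    # data is the deserialized JSON payload sent to the endpoint.
    model, tokenizer = model_and_tokenizer
    inputs = tokenizer(data["inputs"], return_tensors="pt").to(model.device)
    output_ids = model.generate(
        **inputs, max_new_tokens=int(data.get("max_new_tokens", 128))
    )
    return {"generated_text": tokenizer.decode(output_ids[0],
                                               skip_special_tokens=True)}
```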