Hugging Face
wolfram/miquliz-120b-GGUF
Likes: 4
Tags: Transformers · GGUF · 5 languages · mergekit · Merge · Inference Endpoints · conversational
Community (2 discussions)
#2: An attempt at determining the INT and WIS of MiquLiz. — opened 9 months ago by SabinStargem
#1: A brief comparison of KoboldCPP HI and LO RAM speed of MiquLiz 120b. — opened 9 months ago by SabinStargem