
For those trying to shoehorn this large model onto your machine, every GB of saved memory counts when offloading to System RAM!
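
If you haven't set up offloading before, here is a minimal sketch of loading this model with spill-over to system RAM via Transformers/Accelerate; the memory limits are assumptions, so tune them to your hardware:

```python
# Minimal sketch: load the pruned model and let layers that don't fit on GPU
# spill over to system RAM. The memory limits below are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TroyDoesAI/Codestral-21B-Pruned"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,                 # FP16 weights, as published
    device_map="auto",                         # let accelerate place layers
    max_memory={0: "20GiB", "cpu": "48GiB"},   # assumed limits; adjust to your machine
)

prompt = "Write a Python function that checks whether a number is prime."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```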

Here is the 22.2-billion-parameter model pruned down by 2 junk layers to a 21.5B model that doesn't appear to lose any noticeable quality.

https://huggingface.co/mistralai/Codestral-22B-v0.1
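
This repo doesn't document which two layers were removed, but a pruning pass of this kind can be sketched with plain Transformers; the layer indices below are hypothetical placeholders, not the ones actually dropped here:

```python
# Sketch: drop two decoder layers from the base Codestral checkpoint and save
# the result. The indices are illustrative only; the specific layers removed
# for this repo are not listed on the card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Codestral-22B-v0.1"
layers_to_drop = {40, 41}  # hypothetical indices

model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Keep every decoder layer except the ones marked for removal,
# then update the config so it matches the shrunken stack.
kept = [layer for i, layer in enumerate(model.model.layers) if i not in layers_to_drop]
model.model.layers = torch.nn.ModuleList(kept)
model.config.num_hidden_layers = len(kept)

model.save_pretrained("Codestral-21B-Pruned", safe_serialization=True)
tokenizer.save_pretrained("Codestral-21B-Pruned")
```

After saving, the pruned checkpoint loads like any other Mistral-architecture model.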

Model size: 21.5B params · Tensor type: FP16 (Safetensors)

Model tree for TroyDoesAI/Codestral-21B-Pruned: 2 quantizations available.