microsoft/rho-math-7b-v0.1 AWQ

Model summary

Rho-1 base models employ Selective Language Modeling (SLM) for pretraining, which selectively trains on clean and useful tokens that align with the desired distribution. This repository provides an AWQ-quantized version of the model.
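
A minimal usage sketch for loading the AWQ checkpoint with `transformers`. The repo id `solidrust/rho-math-7b-v0.1-AWQ` is an assumption based on the card title (adjust it to the actual repository), and loading AWQ weights this way requires the `autoawq` package plus `accelerate` for `device_map="auto"`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id; replace with the actual AWQ repository name.
model_id = "solidrust/rho-math-7b-v0.1-AWQ"

# transformers can load AWQ-quantized checkpoints when autoawq is installed.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Compute 12 * (7 + 5)."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

This is a sketch, not the maintainer's reference snippet; alternatives such as `AutoAWQForCausalLM.from_quantized` from the `awq` package also work for AWQ checkpoints.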

Model size: 261M params (Safetensors)
Tensor types: I32, FP16
