

QuantFactory/quietstar-8-ahead-GGUF

This is a quantized version of ezelikman/quietstar-8-ahead, created using llama.cpp.
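As a minimal sketch of how a GGUF quant from this repository can be loaded locally, assuming the llama-cpp-python bindings are installed; the .gguf file name below is a hypothetical example, so check the repository's file listing for the actual names:

# Minimal sketch: load a GGUF quantization of this model with llama-cpp-python.
# The filename is an assumption; pick one of the actual .gguf files in the repo.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/quietstar-8-ahead-GGUF",
    filename="quietstar-8-ahead.Q4_K_M.gguf",  # hypothetical quant file name
    n_ctx=4096,                                # context window size
)

out = llm("Q: What is 2 + 2?\nA:", max_tokens=32, temperature=0.0)
print(out["choices"][0]["text"])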

Original Model Card

Mistral-7B with continued pretraining using Quiet-STaR (https://arxiv.org/abs/2403.09629) to generate 8 thought tokens before each output token.

Format: GGUF
Model size: 7.24B params
Architecture: llama

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit (see the download sketch below).
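One way to fetch a specific quantization level is through the huggingface_hub client; the file name below is an assumption, so match it against the repository's file listing:

from huggingface_hub import hf_hub_download

# Download one quantization file from this repo (file name is assumed; verify in the repo).
path = hf_hub_download(
    repo_id="QuantFactory/quietstar-8-ahead-GGUF",
    filename="quietstar-8-ahead.Q2_K.gguf",
)
print(path)  # local cache path of the downloaded GGUF file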

