
Komodo-Logo

This version of Komodo is Llama-3.2-3B-Instruct fine-tuned on the lighteval/MATH-Hard dataset to improve the base model's math performance.

This model is 4-bit quantized. Load it in 8-bit if you want to use the full 3B parameters, and make sure the 'bitsandbytes' library is installed before loading.
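A minimal sketch of requesting 8-bit loading via transformers' `BitsAndBytesConfig` (this is the standard transformers/bitsandbytes API, not something specific to this card; it assumes a CUDA-capable environment with bitsandbytes installed):

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Assumption: bitsandbytes is installed; load_in_8bit tells transformers to
# quantize the linear layers to 8-bit while loading the checkpoint.
quant_config = BitsAndBytesConfig(load_in_8bit=True)

model = AutoModelForCausalLM.from_pretrained(
    "suayptalha/Komodo-Llama-3.2-3B",
    quantization_config=quant_config,
    device_map="auto",  # place layers on available GPU(s)
)
```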

Example Usage:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("suayptalha/Komodo-Llama-3.2-3B")
model = AutoModelForCausalLM.from_pretrained("suayptalha/Komodo-Llama-3.2-3B")

example_prompt = """Below is a math question and its solution:
Question: {}
Solution: {}"""

inputs = tokenizer(
    [
        example_prompt.format(
            "",  # Question here
            "",  # Solution here (fill only for training; leave empty for inference)
        )
    ],
    return_tensors="pt",
).to("cuda")

outputs = model.generate(**inputs, max_new_tokens=50, use_cache=True)
print(tokenizer.batch_decode(outputs))
```
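For clarity, this is how the prompt template above is filled at inference time (pure Python, no model needed; the sample question is made up):

```python
example_prompt = """Below is a math question and its solution:
Question: {}
Solution: {}"""

# The solution slot is left empty so the model generates it as a completion.
prompt = example_prompt.format("What is 7 * 8?", "")
print(prompt)
```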
