---
base_model: meta-llama/Llama-3.2-3B-Instruct
datasets:
- KingNish/reasoning-base-20k
language:
- en
license: llama3.2
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
- reasoning
- llama-3
---
# Model Description
This is the first iteration of this model. For testing purposes, it was trained on only 10k rows of the dataset.
It performed better than expected: it first produces its reasoning and then generates a response based on that reasoning, much like o1.
The reasoning happens as a separate step (just like o1), without inline tags (unlike Reflection-style models).
Below is the inference code.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Token budgets for the two separate generation stages
MAX_REASONING_TOKENS = 4096
MAX_RESPONSE_TOKENS = 1024

model_name = "KingNish/Reasoning-Llama-3b-v0.1"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Which is greater 9.9 or 9.11 ??"
messages = [
{"role": "user", "content": prompt}
]
# Stage 1: generate the reasoning.
# add_reasoning_prompt is a custom kwarg handled by this model's chat template.
reasoning_template = tokenizer.apply_chat_template(messages, tokenize=False, add_reasoning_prompt=True)
reasoning_inputs = tokenizer(reasoning_template, return_tensors="pt").to(model.device)
reasoning_ids = model.generate(**reasoning_inputs, max_new_tokens=MAX_REASONING_TOKENS)
reasoning_output = tokenizer.decode(reasoning_ids[0, reasoning_inputs.input_ids.shape[1]:], skip_special_tokens=True)
# print("REASONING: " + reasoning_output)
# Generate answer
messages.append({"role": "reasoning", "content": reasoning_output})
response_template = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
response_inputs = tokenizer(response_template, return_tensors="pt").to(model.device)
response_ids = model.generate(**response_inputs, max_new_tokens=MAX_RESPONSE_TOKENS)
response_output = tokenizer.decode(response_ids[0, response_inputs.input_ids.shape[1]:], skip_special_tokens=True)
print("ANSWER: " + response_output)
```
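
For repeated use, the two-stage pipeline above can be wrapped in a single helper. This is a minimal convenience sketch built only from the calls shown in the snippet; the function name `generate_with_reasoning` is ours, not part of the model's API.

```python
def generate_with_reasoning(prompt: str) -> tuple[str, str]:
    """Run the two-stage reasoning -> answer pipeline; return both outputs."""
    messages = [{"role": "user", "content": prompt}]

    # Stage 1: reasoning
    template = tokenizer.apply_chat_template(
        messages, tokenize=False, add_reasoning_prompt=True
    )
    inputs = tokenizer(template, return_tensors="pt").to(model.device)
    ids = model.generate(**inputs, max_new_tokens=MAX_REASONING_TOKENS)
    reasoning = tokenizer.decode(
        ids[0, inputs.input_ids.shape[1]:], skip_special_tokens=True
    )

    # Stage 2: answer, conditioned on the reasoning turn
    messages.append({"role": "reasoning", "content": reasoning})
    template = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(template, return_tensors="pt").to(model.device)
    ids = model.generate(**inputs, max_new_tokens=MAX_RESPONSE_TOKENS)
    answer = tokenizer.decode(
        ids[0, inputs.input_ids.shape[1]:], skip_special_tokens=True
    )
    return reasoning, answer

reasoning, answer = generate_with_reasoning("Which is greater, 9.9 or 9.11?")
print("ANSWER: " + answer)
```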
- **Trained by:** [Nishith Jain](https://huggingface.co/KingNish)
- **License:** llama3.2
- **Finetuned from model:** [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct)
- **Dataset used:** [KingNish/reasoning-base-20k](https://huggingface.co/datasets/KingNish/reasoning-base-20k)
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)