piotr25691 committed on
Commit f4a51dc
Parent: fffa65a

Name override with rsLoRA(rank=128, alpha=256)

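The commit message names rsLoRA with rank=128 and alpha=256. As a minimal sketch (not code from this repo), rank-stabilized LoRA differs from standard LoRA only in its scaling factor, dividing alpha by the square root of the rank instead of the rank itself:

```python
import math

def lora_scaling(alpha: float, r: int, use_rslora: bool = False) -> float:
    """LoRA update scaling: alpha/r for standard LoRA, alpha/sqrt(r) for rsLoRA.

    rsLoRA keeps the magnitude of the low-rank update stable as the rank grows,
    which is why higher ranks like 128 remain usable.
    """
    return alpha / math.sqrt(r) if use_rslora else alpha / r

# Values from this commit's message: rank=128, alpha=256.
print(lora_scaling(256, 128))                    # standard LoRA -> 2.0
print(lora_scaling(256, 128, use_rslora=True))   # rsLoRA -> 256/sqrt(128) ~ 22.63
```

In PEFT this corresponds to setting `use_rslora=True` alongside `r` and `lora_alpha` in `LoraConfig`; the helper above only illustrates the scaling arithmetic.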
README.md CHANGED
@@ -2,6 +2,7 @@
 base_model: meta-llama/Llama-3.2-3B-Instruct
 datasets:
 - KingNish/reasoning-base-20k
+- piotr25691/thea-name-overrides
 language:
 - en
 license: llama3.2
@@ -17,9 +18,9 @@ tags:
 
 # Model Description
 
-A work in progress reasoning Llama 3.2 3B model trained on reasoning data.
+A reasoning Llama 3.2 3B model trained on reasoning data.
 
-Since I used different training code, it is unknown whether it generates the same kind of reasoning.
+It has been trained using improved training code, and gives an improved performance.
 Here is what inference code you should use:
 ```py
 from transformers import AutoModelForCausalLM, AutoTokenizer
@@ -43,7 +44,7 @@ reasoning_inputs = tokenizer(reasoning_template, return_tensors="pt").to(model.d
 reasoning_ids = model.generate(**reasoning_inputs, max_new_tokens=MAX_REASONING_TOKENS)
 reasoning_output = tokenizer.decode(reasoning_ids[0, reasoning_inputs.input_ids.shape[1]:], skip_special_tokens=True)
 
-# print("REASONING: " + reasoning_output)
+print("REASONING: " + reasoning_output)
 
 # Generate answer
 messages.append({"role": "reasoning", "content": reasoning_output})
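The README snippet in the diff above implements a two-pass flow: the first `generate` call produces reasoning, which is appended to the conversation as a `"reasoning"` turn before the answer pass. A minimal sketch of just that message plumbing, with illustrative names not taken from the repo:

```python
# Illustrative sketch (not code from this repo): pass 1 generates reasoning,
# pass 2 appends it as a "reasoning" turn and generates the final answer.
def append_reasoning(messages: list[dict], reasoning_output: str) -> list[dict]:
    # Mirrors the README's messages.append({"role": "reasoning", ...})
    # done between the reasoning pass and the answer pass.
    return messages + [{"role": "reasoning", "content": reasoning_output}]

messages = [{"role": "user", "content": "Which is larger, 2**10 or 10**3?"}]
messages = append_reasoning(
    messages, "2**10 = 1024 and 10**3 = 1000, so 2**10 is larger."
)
print([m["role"] for m in messages])  # -> ['user', 'reasoning']
```

In the actual inference code, this list is re-rendered with the tokenizer's chat template before the second `model.generate` call; the change in this commit simply uncomments the `print` that surfaces the intermediate reasoning.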
model-00001-of-00002.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:da0b2143eafde4be0eed4a8c5ec61dab7e481b844024334c2390bb8a68eab582
+oid sha256:b65251bf1b27f3fcc2fb19eaff33a962d7cda8155ecfd86dda8048280d92d752
 size 4965799096
model-00002-of-00002.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:8a3c2337805bf1fc483e8fea6dbc12631db07125178b50d251cde947ea457514
+oid sha256:4c7c2b9397064dec3da56323da1d07644410516aeb588ba3b1257366a8436efb
 size 1459729952
tokenizer.json CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:051830f2f6c06d23b79bfeb1cb00c36ab32a29c2905e80e0b8e22148b654ec8b
-size 17210197
+oid sha256:12487b766b0b1584dcc5311824df327d5ea154939524790c643cdf2a3f6adf9f
+size 17209921