adamo1139 committed
Commit
46eae30
1 Parent(s): 761d27b

Update README.md

Files changed (1): README.md (+1 -1)
README.md CHANGED
@@ -10,7 +10,7 @@ license: other
 
 Yi-34B-200K trained via DPO on RAWrr_v1 at ctx 200 (lora_r 4, lora_alpha 8) and then via SFT at ctx 1400 (lora_r 16, lora_alpha 32) on AEZAKMI_v2.
 It's less prone to refusals than Yi-34B-200K-AEZAKMI-v2 but that's work in progress still - I want to do DPO with higher lora rank and ctx and then repeat SFT training.
-I haven't tested it too much, but on what I've seen, it's a good model.
+I haven't tested it too much, but on what I've seen, it's a good model.
 
 If you want to re-produce this model by merging loras, start by downloading Yi-34B-200K-Llamafied. \
 Then merge it with https://huggingface.co/adamo1139/Yi-34B-200K-rawrr1-LORA-DPO-experimental-r2 \
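
For readers unfamiliar with the lora_r / lora_alpha values quoted in the README, here is a rough sketch of how the two training stages' adapter settings would map onto peft LoraConfig objects. The target_modules, dropout, and task_type values are illustrative assumptions; the commit only states r, alpha, dataset, and context length for each stage.

```python
from peft import LoraConfig

# DPO stage: RAWrr_v1 at ctx 200, per the README.
dpo_lora = LoraConfig(
    r=4,                     # lora_r 4
    lora_alpha=8,            # lora_alpha 8
    lora_dropout=0.0,        # assumption, not stated in the commit
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
    task_type="CAUSAL_LM",
)

# SFT stage: AEZAKMI_v2 at ctx 1400, per the README.
sft_lora = LoraConfig(
    r=16,                    # lora_r 16
    lora_alpha=32,           # lora_alpha 32
    lora_dropout=0.0,        # assumption, not stated in the commit
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
    task_type="CAUSAL_LM",
)
```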
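And a minimal sketch of the lora-merging step the README describes, assuming the transformers and peft libraries. The local base-model path, dtype, and output path are assumptions; the commit names only the llamafied base model and the DPO LoRA repo.

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Start from the llamafied base model the README says to download first.
base = AutoModelForCausalLM.from_pretrained(
    "Yi-34B-200K-Llamafied",   # local download path, assumption
    torch_dtype=torch.float16,  # assumption
    device_map="auto",
)

# Apply the DPO LoRA named in the README and bake it into the base weights.
model = PeftModel.from_pretrained(
    base,
    "adamo1139/Yi-34B-200K-rawrr1-LORA-DPO-experimental-r2",
)
model = model.merge_and_unload()
model.save_pretrained("Yi-34B-200K-rawrr1-merged")  # hypothetical output path
```

The trailing backslashes suggest the README continues past this hunk; presumably the SFT (AEZAKMI_v2) lora is then merged the same way on top of this intermediate checkpoint.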