---
language:
- ko
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-4.0
---
# Synatra-7B-Instruct-v0.3🐧
## Support Me

Synatra is a personal project, developed with the resources of a single person. If you like the model, how about chipping in a little research funding?

Wanna be a sponsor? Contact me on Telegram **AlzarTakkarsen**
## License

This model is strictly for non-commercial (cc-by-nc-4.0) use only. The model, including the base model and any derivatives, merges, or mixes, is completely free to use for non-commercial purposes, as long as the cc-by-nc-4.0 license included in any parent repository and the non-commercial clause remain in place, regardless of the licenses of other models involved. The license may change when a new model is released. If you want to use this model for commercial purposes, please contact me.
## Model Details

### Base Model
mistralai/Mistral-7B-Instruct-v0.1

### Trained On
8× A6000 48GB
### TODO

- ✅ Build an RP-based fine-tuned model
- ✅ Refine the dataset
- Improve language comprehension
- ✅ Supplement common-sense knowledge
- Change the tokenizer
## Instruction format

It follows the ChatML format.
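For reference, ChatML wraps every conversation turn in `<|im_start|>` / `<|im_end|>` tokens. Below is a minimal hand-rolled sketch of that formatting, for illustration only; in practice the tokenizer's built-in `chat_template` produces this for you (see the Implementation Code section).

```python
def to_chatml(messages, add_generation_prompt=True):
    """Render a list of {"role", "content"} dicts as a ChatML string."""
    prompt = ""
    for m in messages:
        prompt += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    if add_generation_prompt:
        # Open an assistant turn so the model knows to produce the reply.
        prompt += "<|im_start|>assistant\n"
    return prompt

print(to_chatml([{"role": "user", "content": "Hello!"}]))
```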
## Model Benchmark

### Ko-LLM-Leaderboard

Benchmarking in progress...
## Implementation Code

Since the `chat_template` already contains the instruction format described above, you can use the code below.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("maywell/Synatra-7B-Instruct-v0.3")
tokenizer = AutoTokenizer.from_pretrained("maywell/Synatra-7B-Instruct-v0.3")

messages = [
    {"role": "user", "content": "바나나는 원래 하얀색이야?"},  # "Are bananas originally white?"
]

# apply_chat_template renders the messages into the ChatML prompt format.
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
## Why is the benchmark score lower than the preview version's?

Apparently, the preview model used an Alpaca-style prompt, which has no prefix tokens, while ChatML does.
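To make the difference concrete, here is a hand-rolled sketch (templates shown for illustration, not taken from either model's actual `chat_template`) of how the same question is rendered under an Alpaca-style template versus ChatML. The extra special tokens ChatML prepends can shift benchmark scores when the evaluation harness feeds the model a bare prompt:

```python
question = "Are bananas originally white?"

# Alpaca-style prompt: plain text headings, no special prefix tokens.
alpaca_prompt = (
    "### Instruction:\n"
    f"{question}\n\n"
    "### Response:\n"
)

# ChatML prompt: each turn is wrapped in <|im_start|>/<|im_end|> tokens,
# so the prompt starts with a special token rather than the raw instruction.
chatml_prompt = (
    f"<|im_start|>user\n{question}<|im_end|>\n"
    "<|im_start|>assistant\n"
)

print(alpaca_prompt)
print(chatml_prompt)
```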