This model was built by tokenizing Hansol Deco Co., Ltd.'s interior/spatial-domain dataset, training with DPO, and then applying MoE.
- davidkim205/komt-mistral-7b-v1
- sosoai/hansoldeco-mistral-dpov1
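For reference, the DPO objective the card mentions can be sketched numerically. This is a minimal illustration (not the training code used for this model) of the per-pair DPO loss, where each argument is a hypothetical summed log-probability of a response under the policy or the frozen reference model:

```python
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO loss for one preference pair.

    pi_*  : log-probabilities under the policy being trained
    ref_* : log-probabilities under the frozen reference model
    """
    margin = (pi_chosen - pi_rejected) - (ref_chosen - ref_rejected)
    # -log(sigmoid(beta * margin)): minimized when the policy prefers the
    # chosen response more strongly than the reference model does
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# Loss shrinks as the policy widens its chosen-vs-rejected gap
# relative to the reference (values here are made up for illustration)
print(dpo_loss(-10.0, -14.0, -12.0, -13.0))  # policy gap 4 vs ref gap 1
print(dpo_loss(-12.0, -12.5, -12.0, -12.5))  # no improvement over ref
```

In practice this loss is what libraries such as TRL's `DPOTrainer` optimize over (chosen, rejected) response pairs.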
Usage example
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from transformers import TextStreamer, GenerationConfig

model_name = 'sosoai/hansoldeco-mistral-dpo-v1'
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)
streamer = TextStreamer(tokenizer)

def gen(x):
    generation_config = GenerationConfig(
        temperature=0.1,
        top_p=0.8,
        top_k=100,
        max_new_tokens=256,
        do_sample=True,
        repetition_penalty=1.2,
    )
    # Wrap the question in Mistral's [INST] ... [/INST] chat template
    q = f"[INST]{x} [/INST]"
    gened = model.generate(
        **tokenizer(
            q,
            return_tensors='pt',
            return_token_type_ids=False
        ).to('cuda'),
        generation_config=generation_config,
        pad_token_id=tokenizer.eos_token_id,
        eos_token_id=tokenizer.eos_token_id,
        streamer=streamer,
    )
    result_str = tokenizer.decode(gened[0])
    # The decoded string echoes the prompt, so keep only the text after
    # the closing instruction tag
    end_tag = "[/INST]"
    end_index = result_str.find(end_tag)
    if end_index != -1:
        result_str = result_str[end_index + len(end_tag):].strip()
    return result_str

# "What kinds of finishing defects are there?"
print(gen('마감하자는 어떤 종류가 있나요?'))