
## Model Strategy

We merged two models with the SLERP method using the mergekit library; both source models are based on yanolja/KoSOLAR-10.7B-v0.1.
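
For reference, a SLERP merge in mergekit is driven by a small YAML configuration. The sketch below is illustrative only: the source model IDs, layer range, and interpolation factor are placeholders, not the actual values used to produce this model.

```python
# Illustrative sketch of a SLERP merge with mergekit (not the exact config used for this model).
# The source model IDs, layer range, and interpolation factor are placeholder assumptions.
import subprocess
from pathlib import Path

config = """\
merge_method: slerp
base_model: placeholder-org/kosolar-finetune-a   # slerp requires the base to be one of the two sources
slices:
  - sources:
      - model: placeholder-org/kosolar-finetune-a   # first fine-tune of yanolja/KoSOLAR-10.7B-v0.1
        layer_range: [0, 48]
      - model: placeholder-org/kosolar-finetune-b   # second fine-tune of yanolja/KoSOLAR-10.7B-v0.1
        layer_range: [0, 48]
parameters:
  t: 0.5            # interpolation factor between the two checkpoints
dtype: float16
"""

Path("slerp-config.yaml").write_text(config)

# mergekit's CLI reads the YAML config and writes the merged weights to ./merged-model
subprocess.run(["mergekit-yaml", "slerp-config.yaml", "./merged-model"], check=True)
```

At `t: 0.5` the merged weights sit halfway between the two checkpoints; mergekit also allows `t` to vary per layer or per tensor filter.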

## Run the model

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "c1park/kosolra-kullm-LDCC-merge"

# Load the tokenizer and merged model weights from the Hub
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Wrap the instruction in [INST] ... [/INST] tags, as in this example prompt
text = "[INST] Put instruction here. [/INST]"
inputs = tokenizer(text, return_tensors="pt")

# Generate a short completion and decode it
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
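
The snippet above loads the weights in full precision on CPU, which is slow for a ~10.7B-parameter model. A hedged variant (the dtype, device placement, and token budget are illustrative choices, not from the original card) that loads the checkpoint in half precision on a GPU:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "c1park/kosolra-kullm-LDCC-merge"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Load the checkpoint in float16 and let accelerate place it on available GPUs
# (device_map="auto" requires the accelerate package).
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

text = "[INST] Put instruction here. [/INST]"
inputs = tokenizer(text, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```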