---
license: mit
datasets:
- maywell/ko_wikidata_QA
---
|
### Developed by chPark |
|
|
|
### Training Strategy |
|
We fine-tuned [yanolja/KoSOLAR-10.7B-v0.1](https://huggingface.co/yanolja/KoSOLAR-10.7B-v0.1-deprecated) on the [kyujinpy/KOR-gugugu-platypus-set](https://huggingface.co/datasets/kyujinpy/KOR-gugugu-platypus-set) dataset.
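
For reference, the dataset can be pulled straight from the Hub with the `datasets` library. The snippet below is only an illustrative sketch for inspecting it (the `train` split name is an assumption based on the Hub's default layout), not the training script used for this model.

```python
from datasets import load_dataset

# Illustrative only: peek at the fine-tuning data referenced above.
# Assumes the dataset exposes a default "train" split on the Hub.
dataset = load_dataset("kyujinpy/KOR-gugugu-platypus-set")
print(dataset)              # available splits and their sizes
print(dataset["train"][0])  # one example record
```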
|
|
|
### Run the model |
|
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "realPCH/ko_solra_merge"

# Load the tokenizer and model weights from the Hub.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Instructions are wrapped in [INST] ... [/INST] tags.
text = "[INST] Put instruction here. [/INST]"
inputs = tokenizer(text, return_tensors="pt")

# Generate a short continuation; raise max_new_tokens for longer answers.
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
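
A 10.7B-parameter model loaded in full float32 precision needs roughly 40 GB of memory, so for GPU inference you will likely want half precision. One possible variant, assuming a CUDA GPU and the `accelerate` package (required for `device_map="auto"`):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "realPCH/ko_solra_merge"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# float16 roughly halves memory versus float32; device_map="auto"
# (requires the `accelerate` package) places the weights on the GPU.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

text = "[INST] Put instruction here. [/INST]"
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```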