---
language:
  - ko
datasets:
  - DopeorNope/DPO-Ko-Dataset
  - DopeorNope/Orca_Near_Dedup-v2
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
---

This model was developed through an LLM research consortium between (์ฃผ)๋ฏธ๋””์–ด๊ทธ๋ฃน์‚ฌ๋žŒ๊ณผ์ˆฒ and (์ฃผ)๋งˆ์ปค.

It was trained and uploaded by the developer DopeorNope.

๊ฐœ๋ฐœ์˜ ๊ถŒํ•œ์€ DopeorNope(Seungyoo Lee)์—๊ฒŒ ์žˆ์œผ๋ฉฐ, ๋ชจ๋ธ ๋ฌธ์˜์‚ฌํ•ญ์€ ์ปจํƒ ๋ฐ”๋ž๋‹ˆ๋‹ค

The license is cc-by-nc-sa-4.0.

๐Ÿปโ€โ„๏ธCOKAL-DPO_13b-v2๐Ÿปโ€โ„๏ธ


Model Details

Model Developers: Seungyoo Lee (DopeorNope)

Input: Models input text only.

Output: Models generate text only.

Model Architecture
COKAL-DPO_13b-v2 is an auto-regressive 13B language model based on the LLaMA2 transformer architecture.

Base Model: DopeorNope/COKAL_pre_DPO_Test_v2-13b

DopeorNope/COKAL_pre_DPO_Test_v2-13b is the SFT model that served as the starting point for DPO training.

Training Dataset

The DPO dataset (DopeorNope/DPO-Ko-Dataset) was constructed by DopeorNope, who directly collected and reorganized the data. The paired format takes its inspiration from "lvwerra/stack-exchange-paired"; that dataset itself was not used.

The SFT dataset (DopeorNope/Orca_Near_Dedup-v2) is based on "kyujinpy/OpenOrca-KO" and was processed with a Near Dedup algorithm, removing items whose Jaccard similarity was 0.8 or higher. Inconsistent inputs were also cleaned and corrected.
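
For illustration, here is a minimal, brute-force sketch of near-deduplication by word-level Jaccard similarity with the 0.8 threshold mentioned above. The actual pipeline used a Near Dedup algorithm (typically MinHash/LSH-based); the function names and sample sentences below are hypothetical.

```python
# Brute-force near-deduplication by word-level Jaccard similarity.
# Illustrative only: the real Near Dedup pipeline is typically MinHash/LSH-based.
def jaccard(a: str, b: str) -> float:
    sa, sb = set(a.split()), set(b.split())
    if not sa and not sb:
        return 1.0
    return len(sa & sb) / len(sa | sb)

def near_dedup(texts, threshold=0.8):
    kept = []
    for text in texts:
        # Keep an item only if it is not a near-duplicate of anything kept so far.
        if all(jaccard(text, k) < threshold for k in kept):
            kept.append(text)
    return kept

samples = [
    "๊ณ ์–‘์ด๋Š” ์ž‘์€ ์ƒ์ž๋ฅผ ์ข‹์•„ํ•œ๋‹ค",
    "๊ณ ์–‘์ด๋Š” ์ž‘์€ ์ƒ์ž๋ฅผ ์ •๋ง ์ข‹์•„ํ•œ๋‹ค",  # Jaccard similarity of 0.8 vs. the first sentence -> removed
    "์˜ค๋Š˜ ๋‚ ์”จ๊ฐ€ ๋งค์šฐ ๋ง‘๋‹ค",
]
print(near_dedup(samples))
```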

Training
The only difference between "DopeorNope/COKAL-DPO_test-v2" and this model is the hyper-parameter setting used for this final version.
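
For reference, a minimal sketch of how DPO training on top of the SFT base model might look with the trl library (DPOTrainer, ~v0.7-era API). The hyper-parameter values, output path, and the assumption that the DPO dataset exposes prompt/chosen/rejected columns are all hypothetical, since the actual training configuration is not published.

```python
# Hedged sketch of DPO training on top of the SFT base model (trl ~0.7-era API).
# Hyper-parameter values, output path, and dataset column layout are assumptions.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "DopeorNope/COKAL_pre_DPO_Test_v2-13b"  # SFT base model named in this card
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16, device_map="auto")
ref_model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base)

# Paired preference data; assumed to provide "prompt", "chosen", "rejected" columns.
train_dataset = load_dataset("DopeorNope/DPO-Ko-Dataset", split="train")

training_args = TrainingArguments(
    output_dir="./cokal-dpo-v2",      # hypothetical
    per_device_train_batch_size=1,    # hypothetical
    gradient_accumulation_steps=8,    # hypothetical
    learning_rate=5e-7,               # hypothetical
    num_train_epochs=1,               # hypothetical
    fp16=True,
    logging_steps=10,
)

trainer = DPOTrainer(
    model,
    ref_model,                        # frozen reference policy for the KL term
    args=training_args,
    beta=0.1,                         # KL penalty strength (hypothetical)
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```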

I developed the model in an environment with four RTX 3090 GPUs running Ubuntu 18.04.

When uploading the model to a repository directly from a Linux server, the repository may report a larger parameter count than expected. The model is nevertheless based on a 13B architecture.
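
A quick way to verify this is to count the parameters after loading the checkpoint, as in the sketch below (the exact number printed depends on the checkpoint):

```python
# Sanity check: the loaded weights should sum to roughly 13B parameters.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "HumanF-MarkrAI/COKAL-DPO-13b-v2", torch_dtype=torch.float16, device_map="auto"
)
print(f"{sum(p.numel() for p in model.parameters()) / 1e9:.2f}B parameters")
```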

Reference papers

Implementation Code


```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "HumanF-MarkrAI/COKAL-DPO-13b-v2"

# Load the model in fp16 and spread it across the available devices.
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Load the matching tokenizer.
model_tokenizer = AutoTokenizer.from_pretrained(repo)
```
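
A minimal generation example using the loaded model and tokenizer. The prompt text and sampling settings are placeholders, since the card does not document a specific prompt template.

```python
# Example generation with the loaded model and tokenizer.
# Prompt text and sampling settings are placeholders; no prompt template is documented.
prompt = "ํ•œ๊ตญ์˜ ์ˆ˜๋„๋Š” ์–ด๋””์ธ๊ฐ€์š”?"
inputs = model_tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(model_tokenizer.decode(outputs[0], skip_special_tokens=True))
```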

Acknowledgement

์ด ๋ชจ๋ธ์€ ๊ณผํ•™๊ธฐ์ˆ ์ •๋ณดํ†ต์‹ ๋ถ€ยท๊ด‘์ฃผ๊ด‘์—ญ์‹œ๊ฐ€ ๊ณต๋™ ์ง€์›ํ•œ '์ธ๊ณต์ง€๋Šฅ ์ค‘์‹ฌ ์‚ฐ์—…์œตํ•ฉ ์ง‘์ ๋‹จ์ง€ ์กฐ์„ฑ์‚ฌ์—…'์œผ๋กœ ์ง€์›์„ ๋ฐ›์•„ ์ˆ˜ํ–‰๋œ ์—ฐ๊ตฌ ๊ฒฐ๊ณผ์ž…๋‹ˆ๋‹ค.

This model was supported by Artificial intelligence industrial convergence cluster development project funded by the Ministry of Science and ICT(MSIT, Korea)&Gwangju Metropolitan City.