---
language:
- ko
datasets:
- DopeorNope/DPO-Ko-Dataset
- DopeorNope/Orca_Near_Dedup-v2
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
---

**This model was developed through an LLM research consortium between MediaGroup Saram-gwa-Soop Co., Ltd. and Markr Co., Ltd.**

**It was trained and uploaded by the developer DopeorNope.**

**Development rights belong to DopeorNope (Seungyoo Lee); please direct any questions about the model to him.**

**The license is `cc-by-nc-sa-4.0`.**  
  
# **๐Ÿปโ€โ„๏ธCOKAL-DPO_13b-v2๐Ÿปโ€โ„๏ธ**  

![img](https://drive.google.com/uc?export=view&id=1YGBxz-UhQGHZ2K6cTXmTnB13fRgaQilX)  

## Model Details

**Model Developers** Seungyoo Lee (DopeorNope)

**Input** The model accepts text input only.

**Output** The model generates text only.

**Model Architecture**  
COKAL-DPO_13b-v2 is an auto-regressive 13B language model based on the LLaMA2 transformer architecture.

**Base Model**  [DopeorNope/COKAL_pre_DPO_Test_v2-13b](https://huggingface.co/DopeorNope/COKAL_pre_DPO_Test_v2-13b)   

DopeorNope/COKAL_pre_DPO_Test_v2-13b is the SFT model that serves as the starting point for DPO training.
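
Since DPO fine-tunes this SFT model on paired (chosen vs. rejected) responses against a frozen reference copy, the objective can be sketched as follows. This is a minimal illustration of the standard DPO loss (Rafailov et al., 2023), not this model's actual training code; the `beta` value and the dummy log-probabilities are assumptions, as the real hyper-parameters are not disclosed.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO objective over paired preference data.

    Each argument is a 1-D tensor of per-sequence log-probabilities
    (summed over response tokens) under the policy being trained or
    under the frozen reference (here, the SFT base model).
    beta=0.1 is a hypothetical value, not this card's actual setting.
    """
    # Implicit reward: how far the policy has moved from the reference.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Push the chosen response's reward above the rejected one's.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Dummy per-sequence log-probs for a batch of two preference pairs.
loss = dpo_loss(torch.tensor([-12.0, -15.0]), torch.tensor([-14.0, -15.5]),
                torch.tensor([-13.0, -15.2]), torch.tensor([-13.5, -15.0]))
```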

**Training Dataset**  
- DPO training dataset: [DopeorNope/DPO-Ko-Dataset](private) - private

This dataset was constructed by DopeorNope, who collected and reorganized the data directly, drawing insights from ["lvwerra/stack-exchange-paired"](https://huggingface.co/datasets/lvwerra/stack-exchange-paired) to build the paired preference data. (stack-exchange-paired itself was not used; it only served as inspiration.)

- SFT training dataset: [DopeorNope/Orca_Near_Dedup-v2](private) - private

This dataset is based on ["kyujinpy/OpenOrca-KO"](https://huggingface.co/datasets/kyujinpy/OpenOrca-KO) and was processed with the Near Dedup algorithm, removing items whose Jaccard similarity reached 0.8 or higher. In addition, inconsistent inputs were cleaned and corrected.
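
For a rough illustration of this dedup criterion, here is a minimal sketch. The actual pipeline follows Lee et al., 2022, which uses MinHash signatures to scale this comparison; the word-level 5-gram shingling and the example sentences below are assumptions for illustration only.

```python
def shingles(text, n=5):
    """Overlapping word n-grams used to compare two documents."""
    words = text.split()
    return {tuple(words[i:i + n]) for i in range(max(len(words) - n + 1, 1))}

def jaccard(a, b):
    """Jaccard similarity: |intersection| / |union| of the shingle sets."""
    return len(a & b) / len(a | b) if (a or b) else 1.0

doc1 = "Seoul is the capital of South Korea and its largest city."
doc2 = "Seoul is the capital of South Korea and also its largest city."

sim = jaccard(shingles(doc1), shingles(doc2))
# Pairs at or above the 0.8 threshold are treated as near-duplicates.
print(sim, sim >= 0.8)
```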
  
**Training**  
The only difference between "DopeorNope/COKAL-DPO_test-v2" and this model is the hyper-parameters: this model is the final version, trained with an updated hyper-parameter setting.

I developed the model in an environment with four RTX 3090 GPUs running Ubuntu 18.04. 

When the model is uploaded to a repository directly from a Linux server, there appears to be an issue that makes it look as if the model has more parameters than it does. Regardless of how it appears, this model is based on a 13B architecture.
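
A quick way to check the true parameter count, independent of the file sizes shown in the repository, is to sum the model's parameter tensors. This is a minimal sketch using the standard transformers API; note that it downloads the full weights.

```python
from transformers import AutoModelForCausalLM
import torch

model = AutoModelForCausalLM.from_pretrained(
    "HumanF-MarkrAI/COKAL-DPO-13b-v2", torch_dtype=torch.float16
)
# A LLaMA2-13B model should report roughly 13 billion parameters.
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e9:.1f}B parameters")
```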


**Reference papers**

- Data Strategy:
  - [LIMA(Zhou et al., 2023)](https://arxiv.org/abs/2305.11206)
  - [Near Dedup algorithm(Lee et al., 2022)](https://arxiv.org/abs/2107.06499)

- Model Architecture:
  - [Llama2(Touvron et al., 2023)](https://arxiv.org/abs/2307.09288)


# Implementation Code
```python

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "HumanF-MarkrAI/COKAL-DPO-13b-v2"

# Load the model in half precision and shard it across available devices.
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
model_tokenizer = AutoTokenizer.from_pretrained(repo)
```
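
A short generation example building on the snippet above. The card does not document a prompt template, so the plain Korean prompt and the sampling settings here are assumptions; tune them for your use case.

```python
prompt = "์„œ์šธ์— ๋Œ€ํ•ด ๊ฐ„๋‹จํžˆ ์†Œ๊ฐœํ•ด ์ค˜."  # "Give me a brief introduction to Seoul."
inputs = model_tokenizer(prompt, return_tensors="pt").to(model.device)

output_ids = model.generate(
    **inputs,
    max_new_tokens=256,  # assumed settings, not from the card
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(model_tokenizer.decode(output_ids[0], skip_special_tokens=True))
```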




---