---
library_name: transformers
license: apache-2.0
language:
- en
- ko
base_model:
- meta-llama/Meta-Llama-3-8B
---

<a href="https://github.com/teddysum/bllossom/blob/main/">
  <img src="https://github.com/teddysum/bllossom/blob/main//bllossom_icon.png?raw=true" width="60%" height="60%">
</a>

# Bllossom | [Demo](https://c537bba37aaab5fc9e.gradio.live) | [Homepage](https://www.bllossom.ai/) |

Bllossom is a Korean-English bilingual language model based on the open-source Llama 3. It strengthens the link between Korean and English knowledge and has the following features:

* **Knowledge Linking**: Korean and English knowledge are linked through additional training.
* **Vocabulary Expansion**: The Korean vocabulary is expanded to improve Korean expressiveness (see the tokenizer sketch below).
* **Instruction Tuning**: Tuned on custom instruction-following data specialized for the Korean language and Korean culture.
* **Human Feedback**: DPO (Direct Preference Optimization) has been applied.
* **Vision-Language Alignment**: The vision transformer is aligned with this language model.
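
The effect of the vocabulary expansion can be inspected directly: a tokenizer with added Korean tokens should need noticeably fewer tokens for the same Korean sentence than the original Llama 3 tokenizer. The snippet below is a minimal sketch of that comparison; the `MLP-KTLim/Bllossom` repository id is taken from the examples further down, access to `meta-llama/Meta-Llama-3-8B` requires accepting the Meta license, and the exact counts depend on the released tokenizer.

```python
from transformers import AutoTokenizer

# Compare how many tokens each tokenizer needs for the same Korean sentence.
# Gated repositories may require `huggingface-cli login` first.
base_tok = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")
bllossom_tok = AutoTokenizer.from_pretrained("MLP-KTLim/Bllossom")

sentence = "서울과학기술대학교 MLP연구실에 대해 소개해줘"  # "Tell me about the MLP Lab at Seoultech."

for name, tok in [("Meta-Llama-3-8B", base_tok), ("Bllossom", bllossom_tok)]:
    ids = tok(sentence, add_special_tokens=False)["input_ids"]
    print(f"{name}: {len(ids)} tokens")
```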

**This model was developed by [MLP Lab at Seoultech](http://mlp.seoultech.ac.kr), [Teddysum](http://teddysum.ai/), and [Yonsei Univ](https://sites.google.com/view/hansaemkim/hansaem-kim).**

## Video
<iframe width="562" height="323" src="https://www.youtube.com/embed/5YZj3bCq-6I" title="Bllossom-V demo" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>

## NEWS
* [2024/04] We released Bllossom v2.0, based on Llama-3.
* [2023/12] We released Bllossom-Vision v1.0, based on Bllossom.
* [2023/08] We released Bllossom v1.0, based on Llama-2.
* [2023/07] We released Bllossom v0.7, based on Polyglot-Ko.


## Example code
### Install Dependencies
```bash
pip install torch transformers==4.40.0 accelerate
```
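
A quick sanity check after installation (a minimal sketch; the version string and GPU availability will of course depend on your environment):

```python
import torch
import transformers

# Confirm the pinned transformers version and that a GPU is visible.
print(transformers.__version__)   # expected: 4.40.0
print(torch.cuda.is_available())  # True if a CUDA device is available
```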

### Python code with Pipeline
```python
import transformers
import torch

model_id = "MLP-KTLim/Bllossom"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

pipeline.model.eval()

PROMPT = '''당신은 유용한 AI 어시스턴트입니다. 사용자의 질의에 대해 친절하고 정확하게 답변해야 합니다.'''  # "You are a helpful AI assistant. Answer the user's questions kindly and accurately."
instruction = "서울과학기술대학교 MLP연구실에 대해 소개해줘"  # "Tell me about the MLP Lab at Seoul National University of Science and Technology."

messages = [
    {"role": "system", "content": PROMPT},
    {"role": "user", "content": instruction},
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

# Llama-3 chat models end each turn with <|eot_id|>, so stop on either token
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = pipeline(
    prompt,
    max_new_tokens=2048,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)

print(outputs[0]["generated_text"][len(prompt):])

# 서울과학기술대학교 MLP연구실은 멀티모달 자연어처리 연구를 하고 있습니다. 구성원은 임경태 교수와 김민준, 김상민, 최창수, 원인호, 유한결, 임현석, 송승우, 육정훈, 신동재 학생이 있습니다.
# (The MLP Lab at Seoultech conducts multimodal natural language processing research; its members are Professor KyungTae Lim and his students.)
```

### Python code with AutoModel
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = 'MLP-KTLim/Bllossom'

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

model.eval()

PROMPT = '''당신은 유용한 AI 어시스턴트입니다. 사용자의 질의에 대해 친절하고 정확하게 답변해야 합니다.'''
instruction = "서울과학기술대학교 MLP연구실에 대해 소개해줘"

messages = [
    {"role": "system", "content": PROMPT},
    {"role": "user", "content": instruction},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = model.generate(
    input_ids,
    max_new_tokens=2048,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
    repetition_penalty=1.1
)

print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
# 서울과학기술대학교 MLP연구실은 멀티모달 자연어처리 연구를 하고 있습니다. 구성원은 임경태 교수와 김민준, 김상민, 최창수, 원인호, 유한결, 임현석, 송승우, 육정훈, 신동재 학생이 있습니다.
```
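
For interactive use it can be convenient to stream tokens as they are generated instead of waiting for the full completion. The sketch below reuses `model`, `tokenizer`, `input_ids`, and `terminators` from the example above and adds `transformers`' `TextStreamer`; the sampling parameters mirror the settings shown there.

```python
from transformers import TextStreamer

# Print tokens to stdout as they are generated, skipping the echoed prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

_ = model.generate(
    input_ids,
    max_new_tokens=2048,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
    streamer=streamer,
)
```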



## Citation
**Language Model**
```text
@misc{bllossom,
  author = {ChangSu Choi, Yongbin Jeong, Seoyoon Park, InHo Won, HyeonSeok Lim, SangMin Kim, Yejee Kang, Chanhyuk Yoon, Jaewan Park, Yiseul Lee, HyeJin Lee, Younggyun Hahm, Hansaem Kim, KyungTae Lim},
  title = {Optimizing Language Augmentation for Multilingual Large Language Models: A Case Study on Korean},
  year = {2024},
  journal = {LREC-COLING 2024},
  paperLink = {\url{https://arxiv.org/pdf/2403.10882}},
}
```

**Vision-Language Model**
```text
@misc{bllossom-V,
  author = {Dongjae Shin, Hyunseok Lim, Inho Won, Changsu Choi, Minjun Kim, Seungwoo Song, Hangyeol Yoo, Sangmin Kim, Kyungtae Lim},
  title = {X-LLaVA: Optimizing Bilingual Large Vision-Language Alignment},
  year = {2024},
  publisher = {GitHub},
  journal = {NAACL 2024 findings},
  paperLink = {\url{https://arxiv.org/pdf/2403.11399}},
}
```

## Contact
 - 임경태 (KyungTae Lim), Professor at Seoultech. `[email protected]`
 - 함영균 (Younggyun Hahm), CEO of Teddysum. `[email protected]`