---
base_model:
- Bllossom/llama-3-Korean-Bllossom-70B
library_name: transformers
tags:
- mergekit
- merge

---

### πŸ‡°πŸ‡· About JayLee "AsianSoul"

```
"A leader who can make you rich πŸ’΅ !!!"

"Prove yourself with actual results, not just saying I know more than you!!!"
```

<a href="https://ibb.co/4g2SJVM"><img src="https://i.ibb.co/PzMWt64/Screenshot-2024-05-18-at-11-08-12-PM.png" alt="Screenshot-2024-05-18-at-11-08-12-PM" border="0"></a>

### About this model

This is a 120B model based on [Bllossom/llama-3-Korean-Bllossom-70B](https://huggingface.co/Bllossom/llama-3-Korean-Bllossom-70B).

β˜• I started this Korean 120B model merge while drinking an iced Americano at Starbucks, referring to other 120B merges such as [Cognitive Computations' MegaDolphin-120b](https://huggingface.co/cognitivecomputations/MegaDolphin-120b).

If you walk around a Starbucks in Seoul, Korea, you may see someone creating a merge and an application based on it.

If you do, please come up to me and say "hello".

"Also, if there is an application project you want to create and you can provide me with support, I will build the entire architecture for you, whatever it is."

🏎️ My goal is to turn the great results created by brilliant scientists and groups around the world into profitable products.

```
My role model is J. Robert Oppenheimer!!!

J. Robert Oppenheimer is highly regarded for his ability to gather and lead a team of brilliant scientists, merging their diverse expertise and efforts towards a common goal. 
```
[Learn more about J. Robert Oppenheimer](https://en.wikipedia.org/wiki/J._Robert_Oppenheimer).

I hope this 120B model is helpful for your future.

```
🌍 Collaboration is always welcome 🌍

πŸ‘Š You can't beat these giant corporations & groups alone, and you can never become rich that way.

Now we have to come together.

People who want to actually become rich together, collaborate with me!!! 🍸
```

```
About Bllossom/llama-3-Korean-Bllossom-70B
- Full model of over 100GB, released in Korean by the Bllossom team
- A first for Korean: the Korean vocabulary expanded by over 30,000 words
- Capable of processing Korean context approximately 25% longer than Llama 3
- Korean and English knowledge connected using a Korean-English parallel corpus (pre-training)
- Fine-tuned on data produced by linguists with Korean culture and language in mind
- Reinforcement learning

πŸ›°οΈ About asiansoul/llama-3-Korean-Bllossom-120B-GGUF
- Just Do It
```

### Models Merged

The following models were included in the merge:
* [Bllossom/llama-3-Korean-Bllossom-70B](https://huggingface.co/Bllossom/llama-3-Korean-Bllossom-70B)


### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
- sources:
  - layer_range: [0, 20]
    model: Bllossom/llama-3-Korean-Bllossom-70B
- sources:
  - layer_range: [10, 30]
    model: Bllossom/llama-3-Korean-Bllossom-70B
- sources:
  - layer_range: [20, 40]
    model: Bllossom/llama-3-Korean-Bllossom-70B
- sources:
  - layer_range: [30, 50]
    model: Bllossom/llama-3-Korean-Bllossom-70B
- sources:
  - layer_range: [40, 60]
    model: Bllossom/llama-3-Korean-Bllossom-70B
- sources:
  - layer_range: [50, 70]
    model: Bllossom/llama-3-Korean-Bllossom-70B
- sources:
  - layer_range: [60, 80]
    model: Bllossom/llama-3-Korean-Bllossom-70B
merge_method: passthrough
dtype: float16

```
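For reference, here is a quick sketch of how the slice layout above stacks up, assuming the 70B base has 80 transformer layers (which the `[60, 80]` range implies). Each `sources` entry copies a 20-layer window from the base model, with neighboring windows overlapping by 10 layers:

```python
# Layer ranges copied from the mergekit config above.
# Each (start, end) slice duplicates layers from the 70B base
# (end is exclusive, matching mergekit's layer_range semantics).
slices = [(0, 20), (10, 30), (20, 40), (30, 50),
          (40, 60), (50, 70), (60, 80)]

total_layers = sum(end - start for start, end in slices)
base_layers = 80  # Llama-3-70B has 80 transformer layers

print(f"merged layers: {total_layers}")                       # 140
print(f"scale vs. base: {total_layers / base_layers:.2f}x")   # 1.75x
```

Since 140 layers is 1.75Γ— the base's 80, the passthrough merge lands at roughly 1.75 Γ— 70B β‰ˆ 120B parameters (slightly less in practice, since the embeddings and head are not duplicated).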