asiansoul committed on
Commit 5c02363 • 1 Parent(s): 692b859

Update README.md

Files changed (1):
  1. README.md +40 -14

README.md CHANGED
@@ -10,10 +10,45 @@ tags:
 
 ### 🇰🇷 About the JayLee "AsianSoul"
 
- "A leader who can make you rich"
 
 <a href="https://ibb.co/4g2SJVM"><img src="https://i.ibb.co/PzMWt64/Screenshot-2024-05-18-at-11-08-12-PM.png" alt="Screenshot-2024-05-18-at-11-08-12-PM" border="0"></a>
 
 ```
 About Bllossom/llama-3-Korean-Bllossom-70B
 - Full Korean model, over 100GB, released by the Bllossom team
@@ -23,25 +58,16 @@ About Bllossom/llama-3-Korean-Bllossom-70B
 - Fine-tuning using data produced by linguists, considering Korean culture and language
 - Reinforcement learning
 
- About asiansoul/llama-3-Korean-Bllossom-120B
- - Check out Below Merge Info
 ```
 
- ### About this model
-
- This is a 120B model based on [Bllossom/llama-3-Korean-Bllossom-70B](https://huggingface.co/Bllossom/llama-3-Korean-Bllossom-70B)
-
- To learn more about the base model, follow the link.
-
- I hope this 120B is a helpful model for your future.
-
- 🌍 Collaboration is always welcome. 🌍
-
 ### Models Merged
 
 The following models were included in the merge:
 * [Bllossom/llama-3-Korean-Bllossom-70B](https://huggingface.co/Bllossom/llama-3-Korean-Bllossom-70B)
 
 ### Configuration
 
 The following YAML configuration was used to produce this model:
@@ -72,4 +98,4 @@ slices:
 merge_method: passthrough
 dtype: float16
 
- ```
 
 
 ### 🇰🇷 About the JayLee "AsianSoul"
 
+ ```
+ "A leader who can make you rich 💵 !!!"
+
+ "Prove yourself with actual results, not just by saying 'I know more than you'!!!"
+ ```
 
 <a href="https://ibb.co/4g2SJVM"><img src="https://i.ibb.co/PzMWt64/Screenshot-2024-05-18-at-11-08-12-PM.png" alt="Screenshot-2024-05-18-at-11-08-12-PM" border="0"></a>
 
+ ### About this model
+
+ This is a 120B model based on [Bllossom/llama-3-Korean-Bllossom-70B](https://huggingface.co/Bllossom/llama-3-Korean-Bllossom-70B).
+
+ ☕ I started this Korean 120B model merge while drinking an iced Americano at Starbucks, referring to other 120B merges such as [Cognitive Computations' MegaDolphin-120B](https://huggingface.co/cognitivecomputations/MegaDolphin-120b).
+
+ If you walk around a Starbucks in Seoul, Korea, you may see someone creating a merge and an application based on it.
+
+ If you do, please come up to me and say "hello".
+
+ 🏎️ My goal is to turn the great results created by brilliant scientists and groups around the world into profitable ones.
+
+ ```
+ My role model is J. Robert Oppenheimer!!!
+
+ J. Robert Oppenheimer is highly regarded for his ability to gather and lead a team of brilliant scientists, merging their diverse expertise and efforts toward a common goal.
+ ```
+ [Learn more about J. Robert Oppenheimer](https://en.wikipedia.org/wiki/J._Robert_Oppenheimer).
+
+ I hope this 120B model is helpful for your future.
+
+ ```
+ 🌍 Collaboration is always welcome 🌍
+
+ 👊 You can't beat these giant corporations & groups alone, and you can never become rich alone.
+
+ Now we have to come together.
+
+ If we can actually become rich together, let's collaborate!!! 🍸
+ ```
+
 ```
 About Bllossom/llama-3-Korean-Bllossom-70B
 - Full Korean model, over 100GB, released by the Bllossom team
 - Fine-tuning using data produced by linguists, considering Korean culture and language
 - Reinforcement learning
 
+ 🛰️ About asiansoul/llama-3-Korean-Bllossom-120B-GGUF
+ - Just Do It
 ```
 ### Models Merged
 
 The following models were included in the merge:
 * [Bllossom/llama-3-Korean-Bllossom-70B](https://huggingface.co/Bllossom/llama-3-Korean-Bllossom-70B)
 
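A passthrough merge does no weight averaging; it concatenates (partly overlapping) layer slices of the base model, which is how an 80-layer 70B base grows into a roughly 120B self-merge. A minimal sketch of that layer arithmetic, with illustrative slice boundaries that are assumptions rather than the actual config:

```python
# Hypothetical sketch of passthrough-merge layer arithmetic.
# The slice boundaries below are illustrative, not this model's real config.

def passthrough_layer_count(slices):
    """Total layer count of the merged model: slices are simply concatenated."""
    return sum(end - start for start, end in slices)

# Illustrative overlapping 20-layer slices over an 80-layer 70B base
slices = [(0, 20), (10, 30), (20, 40), (30, 50), (40, 60), (50, 70), (60, 80)]
print(passthrough_layer_count(slices))  # prints 140
```

Because neighboring slices overlap, many layers appear twice in the merged stack, which is what inflates the parameter count without any training.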
+
 ### Configuration
 
 The following YAML configuration was used to produce this model:
 
 merge_method: passthrough
 dtype: float16
 
+ ```
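The diff shows only the tail of that configuration. For context, a mergekit passthrough self-merge config follows this general shape; this is a hypothetical sketch, and the slice boundaries are assumptions, not the actual values:

```yaml
# Hypothetical mergekit passthrough config; layer_range values are illustrative.
slices:
  - sources:
      - model: Bllossom/llama-3-Korean-Bllossom-70B
        layer_range: [0, 20]
  - sources:
      - model: Bllossom/llama-3-Korean-Bllossom-70B
        layer_range: [10, 30]
  # ... further overlapping slices up to the final layer ...
  - sources:
      - model: Bllossom/llama-3-Korean-Bllossom-70B
        layer_range: [60, 80]
merge_method: passthrough
dtype: float16
```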