---
license: cc-by-nc-sa-4.0
datasets:
- squarelike/sharegpt_deepl_ko_translation
language:
- ko
pipeline_tag: translation
tags:
- translate
---
# **Seagull-13b-translation 📇**
![Seagull-typewriter](./Seagull-typewriter.png)
**Seagull-13b-translation** is yet another translation model, but it was trained with careful attention to the following issues found in existing translation models:
- `newline` or `space` characters not matching the original text
- training on translated datasets with the first letter removed
- code
- Markdown format
- LaTeX format
- etc.

These issues were checked thoroughly during training, but when using the model it is still recommended to inspect the output closely for these cases (e.g. text containing code). Occasional sentence repetition may also occur, so testing for it in a post-processing step is recommended.
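The whitespace and repetition checks recommended above can be sketched as a small post-processing helper. This is only an illustration; the function name and heuristics are assumptions, not part of the model's tooling:

```python
import re

def check_translation(source: str, translation: str) -> list[str]:
    """Return a list of warnings about common translation artifacts."""
    warnings = []
    # Newline counts should usually be preserved in translation.
    if source.count("\n") != translation.count("\n"):
        warnings.append("newline count mismatch")
    # Leading whitespace (e.g. code indentation) should survive.
    if len(source) - len(source.lstrip()) != len(translation) - len(translation.lstrip()):
        warnings.append("leading whitespace mismatch")
    # Detect simple sentence repetition in the output.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", translation) if s.strip()]
    if len(sentences) != len(set(sentences)):
        warnings.append("repeated sentence detected")
    return warnings
```

Any warning only flags output for manual review; legitimate translations can change whitespace.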
> If you're interested in building large-scale language models to solve a wide variety of problems in a wide variety of domains, you should consider joining [Allganize](https://allganize.career.greetinghr.com/o/65146).
> For a coffee chat or if you have any questions, please do not hesitate to contact me as well! - [email protected]

This model was created as a personal experiment, unrelated to the organization I work for.

# **License**
## From original model author:
- Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License, under LLAMA 2 COMMUNITY LICENSE AGREEMENT
- Full License available at: https://huggingface.co/beomi/llama-2-koen-13b/blob/main/LICENSE

# **Model Details**
#### **Developed by**
Jisoo Kim (kuotient)
#### **Base Model**
[beomi/llama-2-koen-13b](https://huggingface.co/beomi/llama-2-koen-13b)
#### **Datasets**
- [sharegpt_deepl_ko_translation](https://huggingface.co/datasets/squarelike/sharegpt_deepl_ko_translation)
- AIHUB
  - Korean-English translation parallel corpus for science and technology
  - Korean-English translation parallel corpus for daily life and colloquial speech

## Usage
#### Format
It follows the **ChatML** format only.

```
<|im_start|>system
주어진 문장을 한국어로 번역하세요.<|im_end|>
<|im_start|>user
{instruction}<|im_end|>
<|im_start|>assistant

```
```
<|im_start|>system
주어진 문장을 영어로 번역하세요.<|im_end|>
<|im_start|>user
{instruction}<|im_end|>
<|im_start|>assistant

```

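For reference, the prompt layout above can also be assembled by hand when not using the tokenizer's chat template. This small helper is only an illustration of the ChatML layout; the function name is hypothetical and not part of the model's API:

```python
def build_chatml_prompt(system: str, instruction: str) -> str:
    """Assemble a ChatML prompt matching the format shown above."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{instruction}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )
```

In practice, prefer `tokenizer.apply_chat_template`, which produces this format with the correct special tokens.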
#### Example code
Since the chat_template already contains the instruction format shown above, you can use the code below.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("kuotient/Seagull-13B-translation")
tokenizer = AutoTokenizer.from_pretrained("kuotient/Seagull-13B-translation")
messages = [
    {"role": "user", "content": "바나나는 원래 하얀색이야?"},
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```