---
language:
- en
- ja
tags:
- translation
- gemma
- llama.cpp
- gguf
---
![image/png](https://cdn-uploads.huggingface.co/production/uploads/630469550907b9a115c91e62/m11e35NrZMi7ZpBQ7C6KV.png)
# News
## 2024.07.21
C3TR-Adapter_ggufのVersion3を公開しました。
Version 3 of C3TR-Adapter_gguf has been released.
## 2024.05.18
C3TR-Adapter_ggufのVersion2を公開しました。
Version 2 of C3TR-Adapter_gguf has been released.
Version2では主にカジュアルな会話に関する翻訳能力が大幅に向上しています。
Version 2 has greatly improved the ability to translate casual conversations.
その反面、フォーマルな文章の翻訳能力が少し落ちてしまっています。フォーマルな文章を対象にする場合、[Version1](https://huggingface.co/webbigdata/C3TR-Adapter_gguf/tree/version1)を引き続きお使いください
On the other hand, translation capabilities for formal texts have declined slightly. If you are targeting formal texts, please continue to use [Version1](https://huggingface.co/webbigdata/C3TR-Adapter_gguf/tree/version1).
# モデルカード(Model Card for Model ID)
Gemmaベースの日英、英日ニューラル機械翻訳モデルである[webbigdata/C3TR-Adapter](https://huggingface.co/webbigdata/C3TR-Adapter)をGPUがないPCでも動かせるようにggufフォーマットに変換したモデルです。
A Japanese-English and English-Japanese neural machine translation model, [webbigdata/C3TR-Adapter](https://huggingface.co/webbigdata/C3TR-Adapter), converted to gguf format so that it can run on a PC without a GPU.
### 簡単に試す方法(Easy way to try it)
Googleの無料WebサービスColabを使うとブラウザを使って試す事ができます。
You can try it using your browser with Colab, Google's free web service.
リンク先で[Open in Colab]ボタンを押してColabを起動してください
Click the [Open in Colab] button at the link below to start Colab.
[Colab Sample C3TR_Adapter_gguf_v2_Free_Colab_sample](https://github.com/webbigdata-jp/python_sample/blob/main/C3TR_Adapter_gguf_v2_Free_Colab_sample.ipynb)
### 利用可能なVersion(Available Versions)
llama.cppを使うと、様々な量子化手法でファイルのサイズを小さくする事が出来ますが、本サンプルでは5種類のみを扱います。小さいサイズのモデルは、少ないメモリで高速に動作させることができますが、モデルの性能も低下します。4ビット(Q4_K_M)くらいがバランスが良いと言われています。
llama.cpp can reduce the file size with various quantization methods, but this sample covers only the five types below. Smaller models run faster with less memory, but model quality also degrades. 4-bit (Q4_K_M) is said to be a good balance.
- C3TR-Adapter.Q4_K_S.gguf 4.7 GB
- C3TR-Adapter.Q4_K_M.gguf 5.0 GB
- C3TR-Adapter.Q5_K_S.gguf 5.6 GB
- C3TR-Adapter.Q5_K_M.gguf 5.8 GB
- C3TR-Adapter.Q6_K.gguf 6.6 GB
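As a rough rule of thumb, the whole gguf file is loaded into memory, plus some headroom for the KV cache and runtime buffers. The sketch below picks the largest quantization that fits in available RAM; the sizes are copied from the list above, but the 1.2x headroom factor and the helper name are my own assumptions, not official figures:

```python
# Sizes (GB) from the file list above; 1.2x headroom for the KV cache
# and runtime buffers is an assumption, not an official figure.
QUANT_SIZES_GB = {
    "Q4_K_S": 4.7,
    "Q4_K_M": 5.0,
    "Q5_K_S": 5.6,
    "Q5_K_M": 5.8,
    "Q6_K": 6.6,
}

def pick_quant(available_ram_gb, headroom=1.2):
    """Return the largest quantization whose file fits with headroom."""
    fitting = [(size, name) for name, size in QUANT_SIZES_GB.items()
               if size * headroom <= available_ram_gb]
    return max(fitting)[1] if fitting else None

print(pick_quant(8.0))  # → Q6_K (all five fit on an 8 GB machine)
```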
### サンプルコード(sample code)
ColabのCPUは少し遅いので、少し技術的な挑戦が必要ですが皆さんが所有しているPCでllama.cppをコンパイルして動かす方が良いでしょう。
Since Colab's CPU is a bit slow, it is better to compile and run llama.cpp on your own PC, although this requires a bit more technical effort.
#### Install and compile example(linux)
その他のOSについては[llama.cpp公式サイト](https://github.com/ggerganov/llama.cpp)を確認してください
For other operating systems, please check the [llama.cpp official website](https://github.com/ggerganov/llama.cpp)
```
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make
```
#### 推論実行例(Inference execution sample)
英日翻訳(Translate English to Japanese)
```
./llama-cli -m ../C3TR-Adapter.Q4_K_M.gguf -e --temp 0 --repeat-penalty 1.0 -n -2 -p "You are a highly skilled professional Japanese-English and English-Japanese translator. Translate the given text accurately, taking into account the context and specific instructions provided. Steps may include hints enclosed in square brackets [] with the key and value separated by a colon:. Only when the subject is specified in the Japanese sentence, the subject will be added when translating into English. If no additional instructions or context are provided, use your expertise to consider what the most appropriate context is and provide a natural translation that aligns with that context. When translating, strive to faithfully reflect the meaning and tone of the original text, pay attention to cultural nuances and differences in language usage, and ensure that the translation is grammatically correct and easy to read. After completing the translation, review it once more to check for errors or unnatural expressions. For technical terms and proper nouns, either leave them in the original language or use appropriate translations as necessary. Take a deep breath, calm down, and start translating.
### Instruction:
Translate English to Japanese.
When translating, please use the following hints:
[writing_style: journalistic]
[Heron: アオサギ]
[Mahito Maki: 牧眞人]
### Input:
'The Boy and the Heron' follows a boy named Mahito Maki who moves to the countryside after his mother's death. There, he is lured by a mysterious heron into a secluded tower, a portal that transports him to a fantastical realm amid his grief.
### Response:
"
```
出力例(output)
```
### Input:
'The Boy and the Heron' follows a boy named Mahito Maki who moves to the countryside after his mother's death. There, he is lured by a mysterious heron into a secluded tower, a portal that transports him to a fantastical realm amid his grief.
### Response:
『少年とアオサギ』は、母親が亡くなった後、田舎に引っ越してきた少年の牧眞人という名前の少年が、謎のアオサギに誘われて、孤独な塔に引き寄せられ、悲しみに紛れてファンタジーな世界に旅立つ物語です。<eos> [end of text]
```
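The long `-p` string above follows a fixed template: the system paragraph, then `### Instruction:`, optional hints in square brackets, `### Input:`, and a trailing `### Response:`. A minimal sketch of assembling it programmatically (`SYSTEM` is abbreviated here; paste the full system paragraph from the command above in practice, and note `build_prompt` is my own helper name):

```python
# Build the prompt template used in the llama-cli examples above.
# SYSTEM is abbreviated; use the full system paragraph in practice.
SYSTEM = ("You are a highly skilled professional Japanese-English and "
          "English-Japanese translator. ...")  # abbreviated

def build_prompt(direction, text, hints=None):
    """direction is e.g. 'Translate English to Japanese.'"""
    lines = [SYSTEM, "### Instruction:", direction]
    if hints:
        lines.append("When translating, please use the following hints:")
        lines += [f"[{k}: {v}]" for k, v in hints.items()]
    lines += ["### Input:", text, "### Response:", ""]
    return "\n".join(lines)

p = build_prompt("Translate English to Japanese.",
                 "The boy moves to the countryside.",
                 hints={"writing_style": "journalistic", "Heron": "アオサギ"})
print(p)
```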
英日翻訳・キャラクター口調指定(Translate English to Japanese with character-style hints)
```
./llama-cli -m ../C3TR-Adapter.Q6_K.gguf -e --temp 0 --repeat-penalty 1.0 -n -2 -p "You are a highly skilled professional Japanese-English and English-Japanese translator. Translate the given text accurately, taking into account the context and specific instructions provided. Steps may include hints enclosed in square brackets [] with the key and value separated by a colon:. Only when the subject is specified in the Japanese sentence, the subject will be added when translating into English. If no additional instructions or context are provided, use your expertise to consider what the most appropriate context is and provide a natural translation that aligns with that context. When translating, strive to faithfully reflect the meaning and tone of the original text, pay attention to cultural nuances and differences in language usage, and ensure that the translation is grammatically correct and easy to read. After completing the translation, review it once more to check for errors or unnatural expressions. For technical terms and proper nouns, either leave them in the original language or use appropriate translations as necessary. Take a deep breath, calm down, and start translating.
### Instruction:
Translate English to Japanese.
When translating, please use the following hints:
[writing_style: casual, game]
[hatsuharu: 初春]
[hatsuharu_first_person_and_ending: わらわ, なのじゃ]
[hatsuharu_character_style: のじゃロリ]
### Input:
hatsuharu 'Did you come to see me again, huh?'
hatsuharu 'Well, I suppose I can't help it. Don't worry, I'll protect this fleet.'
hatsuharu 'You can count on me. Hey, young one!'
hatsuharu 'Bring me more sweets, will you?'
### Response:
"
```
出力例(output)
```
### Instruction:
Translate English to Japanese.
When translating, please use the following hints:
[writing_style: casual, game]
[hatsuharu: 初春]
[hatsuharu_first_person_and_ending: わらわ, なのじゃ]
[hatsuharu_character_style: のじゃロリ]
### Input:
hatsuharu 'Did you come to see me again, huh?'
hatsuharu 'Well, I suppose I can't help it. Don't worry, I'll protect this fleet.'
hatsuharu 'You can count on me. Hey, young one!'
hatsuharu 'Bring me more sweets, will you?'
### Response:
初春「また来たか、なのじゃ」
初春「まあ、しょうがないのじゃ。心配しないで、この艦隊を守るのじゃ」
初春「頼れるぞ。ねえ、若者」
初春「もっと菓子を持ってくるのじゃ」<eos> [end of text]
```
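As the output examples show, llama-cli echoes the prompt before the generation, so the translation itself has to be cut out of the raw output: take everything after the last `### Response:` and strip the trailing `<eos> [end of text]` markers. A minimal sketch (the marker strings match the example output above; the function name is my own):

```python
def extract_translation(raw_output):
    """Return only the model's translation from llama-cli output."""
    # Everything after the last "### Response:" is the generation.
    text = raw_output.rsplit("### Response:", 1)[-1]
    # Drop the end-of-sequence markers printed by llama-cli.
    for marker in ("<eos>", "[end of text]"):
        text = text.replace(marker, "")
    return text.strip()

raw = """### Input:
hatsuharu 'Bring me more sweets, will you?'
### Response:
初春「もっと菓子を持ってくるのじゃ」<eos> [end of text]"""
print(extract_translation(raw))  # → 初春「もっと菓子を持ってくるのじゃ」
```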
詳細は[webbigdata/C3TR-Adapter](https://huggingface.co/webbigdata/C3TR-Adapter)を参照してください
gguf版は一部の指定が動作しません
For details, see [webbigdata/C3TR-Adapter](https://huggingface.co/webbigdata/C3TR-Adapter).
Some hint specifications do not work with the gguf version.
### パラメーター(Parameters)
現在のgguf版は翻訳後に幻覚を追加出力してしまう傾向があり、パラメーターを適宜調整する必要があります。
The current gguf version tends to append hallucinated text after the translation, so the parameters need to be adjusted accordingly.
必要に応じて下記のパラメーターを調整してください
- 温度(--temp): この値を下げると、モデルがより確信度の高い(つまり、より一般的な)単語を選択する傾向が強くなります。
- トップP(--top_p): この値をさらに低く設定することで、モデルが考慮する単語の範囲を狭め、より一貫性のあるテキストを生成するようになります。
- 生成するトークン数(-n): この値を減らすことで、モデルが生成するテキストの長さを短くし、不要な追加テキストの生成を防ぐことができます。-1 = 無限大(デフォルト)、-2 = 文脈が満たされるまで。
以下はllama.cppの作者(ggerganov)による[推奨パラメーター](https://huggingface.co/google/gemma-7b-it/discussions/38#65d7b14adb51f7c160769fa1)です
- -e (改行\nをエスケープ)
- --temp 0 (最も確率の高いトークンのみを選択)
- --repeat-penalty 1.0 (繰り返しペナルティをオフ。指示調整済モデルでこれをするのは、決して良い考えとは言えないとの事。)
- ~~--no-penalize-nl (改行の繰り返しにはペナルティを与えない)~~ 最新のllama.cppではデフォルト動作になったため指定不要
Adjust the following parameters as needed:
- Temperature (--temp): Lowering this value makes the model more likely to select high-confidence (i.e., more common) words.
- Top P (--top_p): Lowering this value narrows the range of words the model considers and produces more consistent text.
- Number of tokens to generate (-n): Reducing this value shortens the generated text and prevents unnecessary additional output. -1 = infinite (default), -2 = until the context is filled.
The following are the [recommended parameters](https://huggingface.co/google/gemma-7b-it/discussions/38#65d7b14adb51f7c160769fa1) from the author of llama.cpp (ggerganov):
- -e (escape newlines (\n))
- --temp 0 (pick the most probable tokens)
- --repeat-penalty 1.0 (disable the repetition penalty; it is said to never be a good idea to use one with instruction-tuned models)
- ~~--no-penalize-nl (do not penalize repeated newlines)~~ this is the default behavior in the latest llama.cpp, so the option is no longer needed.
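Putting the recommended flags together, a small helper that assembles the llama-cli argument list for `subprocess` (the paths and the helper name are illustrative, not part of llama.cpp):

```python
def llama_cli_args(model_path, prompt, binary="./llama-cli"):
    """Assemble a llama-cli command with the recommended parameters."""
    return [
        binary,
        "-m", model_path,
        "-e",                       # escape newlines (\n) in the prompt
        "--temp", "0",              # pick the most probable tokens
        "--repeat-penalty", "1.0",  # disable the repetition penalty
        "-n", "-2",                 # generate until the context is filled
        "-p", prompt,
    ]

args = llama_cli_args("../C3TR-Adapter.Q4_K_M.gguf", "...prompt here...")
# e.g. subprocess.run(args, capture_output=True, text=True)
```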