C3TR-Adapter_gguf
---
language:
  - en
  - ja
tags:
  - translation
  - gemma
  - llama.cpp
  - gguf
---

News

2024.07.21

C3TR-Adapter_ggufのVersion3を公開しました。
Version 3 of C3TR-Adapter_gguf has been released.

2024.05.18

C3TR-Adapter_ggufのVersion2を公開しました。
Version 2 of C3TR-Adapter_gguf has been released.

Version2では主にカジュアルな会話に関する翻訳能力が大幅に向上しています。
Version 2 has greatly improved the ability to translate casual conversations.

その反面、フォーマルな文章の翻訳能力が少し落ちてしまっています。フォーマルな文章を対象にする場合、Version1を引き続きお使いください。
On the other hand, its ability to translate formal texts has declined slightly. If you are working with formal texts, please continue to use Version 1.

モデルカード(Model Card)

Gemmaベースの日英、英日ニューラル機械翻訳モデルであるwebbigdata/C3TR-AdapterをGPUがないPCでも動かせるようにggufフォーマットに変換したモデルです。
This is webbigdata/C3TR-Adapter, a Gemma-based Japanese-English and English-Japanese neural machine translation model, converted to gguf format so that it can run on a PC without a GPU.

簡単に試す方法(Easy way to try it)

Googleの無料WebサービスColabを使うとブラウザを使って試す事ができます。
You can try it using your browser with Colab, Google's free web service.

リンク先で[Open in Colab]ボタンを押してColabを起動してください。
Press the [Open in Colab] button at the link to start Colab.
Colab Sample: C3TR_Adapter_gguf_v2_Free_Colab_sample

利用可能なVersion(Available Versions)

llama.cppを使うと、様々な量子化手法でファイルのサイズを小さくする事が出来ますが、本サンプルでは下記の5種類のみを扱います。小さいサイズのモデルは、少ないメモリで高速に動作させることができますが、モデルの性能も低下します。4ビット(Q4_K_M)くらいがバランスが良いと言われています。
llama.cpp can shrink the file with various quantization methods, but this sample covers only the five types below. Smaller models run faster and need less memory, but model quality also degrades. 4-bit (Q4_K_M) is said to be a good balance.

  • C3TR-Adapter.Q4_K_S.gguf 4.7 GB
  • C3TR-Adapter.Q4_K_M.gguf 5.0 GB
  • C3TR-Adapter.Q5_K_S.gguf 5.6 GB
  • C3TR-Adapter.Q5_K_M.gguf 5.8 GB
  • C3TR-Adapter.Q6_K.gguf 6.6 GB
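As a rough sanity check on these sizes, a gguf file scales with parameter count times average bits per weight. The sketch below is a back-of-the-envelope estimate only; the ~9-billion-parameter count, the ~4.5 bits-per-weight figure for Q4_K_M, and the overhead allowance are assumptions, not values published by this repo:

```python
def estimate_gguf_size_gb(n_params_billion: float, bits_per_weight: float,
                          overhead_gb: float = 0.3) -> float:
    """Very rough gguf file-size estimate: weights stored at an average
    bit width, plus a small allowance for metadata and tensors kept at
    higher precision."""
    return n_params_billion * 1e9 * bits_per_weight / 8 / 1e9 + overhead_gb

# Assuming roughly 9B parameters and ~4.5 effective bits per weight for
# Q4_K_M, this lands in the same ballpark as the 5.0 GB file listed above.
print(round(estimate_gguf_size_gb(9.0, 4.5), 1))
```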

サンプルコード(sample code)

ColabのCPUは少し遅いので、少し技術的な挑戦が必要ですが皆さんが所有しているPCでllama.cppをコンパイルして動かす方が良いでしょう。
Since Colab's CPUs are rather slow, it is better, though somewhat more technically challenging, to compile and run llama.cpp on your own PC.

Install and compile example(linux)

その他のOSについてはllama.cpp公式サイトを確認してください
For other operating systems, please check the llama.cpp official website

```bash
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make
```

推論実行例(Inference execution sample)

英日翻訳(Translate English to Japanese)

```bash
./llama-cli -m ../C3TR-Adapter.Q4_K_M.gguf -e --temp 0 --repeat-penalty 1.0 -n -2 -p "You are a highly skilled professional Japanese-English and English-Japanese translator. Translate the given text accurately, taking into account the context and specific instructions provided. Steps may include hints enclosed in square brackets [] with the key and value separated by a colon:. Only when the subject is specified in the Japanese sentence, the subject will be added when translating into English. If no additional instructions or context are provided, use your expertise to consider what the most appropriate context is and provide a natural translation that aligns with that context. When translating, strive to faithfully reflect the meaning and tone of the original text, pay attention to cultural nuances and differences in language usage, and ensure that the translation is grammatically correct and easy to read. After completing the translation, review it once more to check for errors or unnatural expressions. For technical terms and proper nouns, either leave them in the original language or use appropriate translations as necessary. Take a deep breath, calm down, and start translating.

### Instruction:
Translate English to Japanese.
When translating, please use the following hints:
[writing_style: journalistic]
[Heron: アオサギ]
[Mahito Maki: 牧眞人]

### Input:
'The Boy and the Heron' follows a boy named Mahito Maki who moves to the countryside after his mother's death. There, he is lured by a mysterious heron into a secluded tower, a portal that transports him to a fantastical realm amid his grief.

### Response:
"
```
出力例(output)

```
### Input:
'The Boy and the Heron' follows a boy named Mahito Maki who moves to the countryside after his mother's death. There, he is lured by a mysterious heron into a secluded tower, a portal that transports him to a fantastical realm amid his grief.

### Response:
『少年とアオサギ』は、母親が亡くなった後、田舎に引っ越してきた少年の牧眞人という名前の少年が、謎のアオサギに誘われて、孤独な塔に引き寄せられ、悲しみに紛れてファンタジーな世界に旅立つ物語です。<eos> [end of text]
```

英日翻訳・カジュアル(Translate English to Japanese, casual style)

```bash
./llama-cli -m ../C3TR-Adapter.Q6_K.gguf -e --temp 0 --repeat-penalty 1.0 -n -2 -p "You are a highly skilled professional Japanese-English and English-Japanese translator. Translate the given text accurately, taking into account the context and specific instructions provided. Steps may include hints enclosed in square brackets [] with the key and value separated by a colon:. Only when the subject is specified in the Japanese sentence, the subject will be added when translating into English. If no additional instructions or context are provided, use your expertise to consider what the most appropriate context is and provide a natural translation that aligns with that context. When translating, strive to faithfully reflect the meaning and tone of the original text, pay attention to cultural nuances and differences in language usage, and ensure that the translation is grammatically correct and easy to read. After completing the translation, review it once more to check for errors or unnatural expressions. For technical terms and proper nouns, either leave them in the original language or use appropriate translations as necessary. Take a deep breath, calm down, and start translating.

### Instruction:
Translate English to Japanese.
When translating, please use the following hints:
[writing_style: casual, game]
[hatsuharu: 初春]
[hatsuharu_first_person_and_ending: わらわ, なのじゃ]
[hatsuharu_character_style: のじゃロリ]

### Input:
hatsuharu 'Did you come to see me again, huh?'
hatsuharu 'Well, I suppose I can't help it.  Don't worry, I'll protect this fleet.'
hatsuharu 'You can count on me. Hey, young one!'
hatsuharu 'Bring me more sweets, will you?'

### Response:
"
```

出力例(output)

```
### Instruction:
Translate English to Japanese.
When translating, please use the following hints:
[writing_style: casual, game]
[hatsuharu: 初春]
[hatsuharu_first_person_and_ending: わらわ, なのじゃ]
[hatsuharu_character_style: のじゃロリ]

### Input:
hatsuharu 'Did you come to see me again, huh?'
hatsuharu 'Well, I suppose I can't help it.  Don't worry, I'll protect this fleet.'
hatsuharu 'You can count on me. Hey, young one!'
hatsuharu 'Bring me more sweets, will you?'

### Response:
初春「また来たか、なのじゃ」
初春「まあ、しょうがないのじゃ。心配しないで、この艦隊を守るのじゃ」
初春「頼れるぞ。ねえ、若者」
初春「もっと菓子を持ってくるのじゃ」<eos> [end of text]
```

詳細はwebbigdata/C3TR-Adapterを参照してください。gguf版では一部の指定が動作しません。

For more details, see webbigdata/C3TR-Adapter. Note that some hint specifications do not work with the gguf version.

パラメーター(Parameters)

現在のgguf版は翻訳後に幻覚を追加出力してしまう傾向があり、パラメーターを適宜調整する必要があります。
The current gguf version tends to append hallucinated text after the translation, so the parameters may need to be adjusted accordingly.

必要に応じて下記のパラメーターを調整してください

  • 温度(--temp): この値を下げると、モデルがより確信度の高い(つまり、より一般的な)単語を選択する傾向が強くなります。
  • トップP(--top_p): この値をさらに低く設定することで、モデルが考慮する単語の範囲を狭め、より一貫性のあるテキストを生成するようになります。
  • 生成する単語数(-n): この値を減らすことで、モデルが生成するテキストの長さを短くし、不要な追加テキストの生成を防ぐことができます。-1 = 無限大(デフォルト)、-2 = コンテキストが埋まるまで。

以下はllama.cppの作者(ggerganov)による推奨パラメーターです

  • -e (改行\nをエスケープ)
  • --temp 0 (最も確率の高いトークンのみを選択)
  • --repeat-penalty 1.0 (繰り返しペナルティをオフ。指示調整済モデルでこれをするのは、決して良い考えとは言えないとの事。)
  • --no-penalize-nl (改行の繰り返しにはペナルティを与えない) 最新のllama.cppではデフォルト動作になったため指定不要です

Adjust the following parameters as needed

  • Temperature (--temp): Lowering this value will make the model more likely to select more confident (i.e., more common) words.
  • Top P (--top_p): Setting this value even lower will narrow the range of words considered by the model and produce more consistent text.
  • Number of words to generate (-n): Reducing this value shortens the text the model generates and prevents unnecessary additional output. -1 = infinity (default), -2 = until the context is filled.

The following are the recommended parameters by the author of llama.cpp(ggerganov)

  • -e (escape newlines (\n) in the prompt)
  • --temp 0 (pick only the most probable token)
  • --repeat-penalty 1.0 (disable the repetition penalty; it is never a good idea to use one with instruction-tuned models. This is the default in recent llama.cpp, so the flag may be unnecessary.)
  • --no-penalize-nl (do not penalize repeated newlines). This is the default behavior in recent llama.cpp, so the option is no longer needed.
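To keep these recommended flags consistent across runs, they can be collected in a small helper that builds the llama-cli argument list. The function name and defaults below are illustrative conveniences, not part of llama.cpp:

```python
def llama_cli_args(model_path: str, prompt: str, n_predict: int = -2) -> list[str]:
    """Argument list for llama-cli with the settings recommended above:
    escaped newlines, greedy sampling, and no repetition penalty."""
    return [
        "./llama-cli",
        "-m", model_path,
        "-e",                       # process \n escapes in the prompt
        "--temp", "0",              # pick only the most probable token
        "--repeat-penalty", "1.0",  # disable the repetition penalty
        "-n", str(n_predict),       # -2 = generate until the context is filled
        "-p", prompt,
    ]

# Usage sketch (assumes llama.cpp has been built in the current directory):
# import subprocess
# subprocess.run(llama_cli_args("../C3TR-Adapter.Q4_K_M.gguf", prompt_text))
```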