Update README.md
README.md
CHANGED
@@ -106,7 +106,7 @@ The current gguf version tends to add hallucinations after translation and the p
 - -e (escape newlines (\n))
 - --temp 0 (select only the most probable token)
 - --repeat-penalty 1.0 (turn off the repetition penalty; reportedly it is never a good idea to use this with an instruction-tuned model)
-
+- ~~--no-penalize-nl (do not penalize repeated newlines)~~ No longer needed: the latest llama.cpp instead takes --penalize-nl when you do want newlines penalized.
 
 Adjust the following parameters as needed
 - Temperature (--temp): Lowering this value will make the model more likely to select more confident (i.e., more common) words.
@@ -116,5 +116,5 @@ Adjust the following parameters as needed
 The following are the [recommended parameters](https://huggingface.co/google/gemma-7b-it/discussions/38#65d7b14adb51f7c160769fa1) by the author of llama.cpp (ggerganov)
 - -e (escape newlines (\n))
 - --temp 0 (pick the most probable tokens)
-- --repeat-penalty 1.0 (disable the repetition penalty; it's never a good idea to have this with instruction-tuned models)
-
+- ~~--repeat-penalty 1.0 (disable the repetition penalty; it's never a good idea to have this with instruction-tuned models)~~ This is the default behavior in the latest llama.cpp, so it is no longer needed.
+- ~~--no-penalize-nl (do not penalize repeating newlines)~~
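Taken together, the recommended flags above could be combined into a single llama.cpp CLI invocation along these lines. This is only a sketch: the binary name, model file, and prompt are placeholder assumptions for your own setup, and which flags are accepted depends on your llama.cpp version.

```shell
# Sketch of a llama.cpp CLI call using the flags recommended above.
#   -e                    escape \n sequences in the prompt
#   --temp 0              greedy decoding: always pick the most probable token
#   --repeat-penalty 1.0  disable the repetition penalty
# Model path and prompt below are placeholders; adjust for your setup.
./llama-cli -m ./model.gguf -e --temp 0 --repeat-penalty 1.0 \
  -p "Translate the following text into English:\nこんにちは"
```

On a recent llama.cpp build, --repeat-penalty 1.0 is already the default and can be dropped, per the strikethrough note above.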