MaziyarPanahi committed
Commit 32069f8 • Parent(s): e317e9d

Update README.md (#2)
- Update README.md (ec73cdd4bbdf3c6413a2312fdc26908600cd8e30)

README.md CHANGED
@@ -98,7 +98,7 @@ pip3 install huggingface-hub
 Then you can download any individual model file to the current directory, at high speed, with a command like this:
 
 ```shell
-huggingface-cli download MaziyarPanahi/Hermes-2-Pro-11B-GGUF Hermes-2-Pro-11B
+huggingface-cli download MaziyarPanahi/Hermes-2-Pro-11B-GGUF Hermes-2-Pro-11B.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
 ```
 </details>
 <details>
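The corrected command pins an exact quant file and a download directory. For readers who script their downloads, here is a minimal sketch of the equivalent call through the `huggingface_hub` Python API; the filename is the Q4_K_M quant named in the `+` line, and `local_dir` mirrors `--local-dir .`:

```python
# Minimal sketch: the same single-file download via the huggingface_hub
# Python API instead of huggingface-cli.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="MaziyarPanahi/Hermes-2-Pro-11B-GGUF",
    filename="Hermes-2-Pro-11B.Q4_K_M.gguf",  # quant file named in the diff
    local_dir=".",                            # mirrors --local-dir .
)
print(path)  # local path to the downloaded GGUF file
```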
@@ -121,7 +121,7 @@ pip3 install hf_transfer
 And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
 
 ```shell
-HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/Hermes-2-Pro-11B-GGUF Hermes-2-Pro-11B
+HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/Hermes-2-Pro-11B-GGUF Hermes-2-Pro-11B.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
 ```
 
 Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
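The same filename fix applies to the accelerated download. If you drive the download from Python rather than the shell, a minimal sketch looks like this; the variable should be set before `huggingface_hub` is imported, since the library reads it at import time:

```python
# Minimal sketch: enable hf_transfer from Python (requires `pip3 install hf_transfer`).
# Set the variable BEFORE importing huggingface_hub, which reads it at import time.
import os

os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="MaziyarPanahi/Hermes-2-Pro-11B-GGUF",
    filename="Hermes-2-Pro-11B.Q4_K_M.gguf",
    local_dir=".",
)
```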
@@ -132,7 +132,7 @@ Windows Command Line users: You can set the environment variable by running `set
 Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
 
 ```shell
-./main -ngl 35 -m Hermes-2-Pro-11B
+./main -ngl 35 -m Hermes-2-Pro-11B.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
 {system_message}<|im_end|>
 <|im_start|>user
 {prompt}<|im_end|>
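The updated `./main` invocation passes a ChatML prompt inline via `-p`. As a sketch of what that template expands to, here is a hypothetical Python helper (`chatml_prompt` is not part of llama.cpp; the trailing `<|im_start|>assistant` line follows the ChatML convention and continues past where the hunk is cut off):

```python
# Hypothetical helper: fill in the ChatML template that the -p flag passes to ./main.
def chatml_prompt(system_message: str, prompt: str) -> str:
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        "<|im_start|>assistant\n"  # assumed continuation, per ChatML convention
    )

print(chatml_prompt("You are a helpful assistant.", "Write a haiku about llamas."))
```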
@@ -209,7 +209,7 @@ output = llm(
 
 # Chat Completion API
 
-llm = Llama(model_path="./Hermes-2-Pro-11B
+llm = Llama(model_path="./Hermes-2-Pro-11B.Q4_K_M.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
 llm.create_chat_completion(
     messages = [
         {"role": "system", "content": "You are a story writing assistant."},