ggerganov committed
Commit 5122d22
Parent: 16cfec7

readme : switch to ggml-org

Files changed (1): README.md (+5 -5)
README.md CHANGED
@@ -191,7 +191,7 @@ extra_gated_description: The information you provide will be collected, stored,
 extra_gated_button_content: Submit
 ---
 
-# ggerganov/Meta-Llama-3.1-8B-Instruct-Q4_0-GGUF
+# ggml-org/Meta-Llama-3.1-8B-Instruct-Q4_0-GGUF
 This model was converted to GGUF format from [`meta-llama/Meta-Llama-3.1-8B-Instruct`](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
 Refer to the [original model card](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) for more details on the model.
 
@@ -206,12 +206,12 @@ Invoke the llama.cpp server or the CLI.
 
 ### CLI:
 ```bash
-llama-cli --hf-repo ggerganov/Meta-Llama-3.1-8B-Instruct-Q4_0-GGUF --hf-file meta-llama-3.1-8b-instruct-q4_0.gguf -p "The meaning to life and the universe is"
+llama-cli --hf-repo ggml-org/Meta-Llama-3.1-8B-Instruct-Q4_0-GGUF --hf-file meta-llama-3.1-8b-instruct-q4_0.gguf -p "The meaning to life and the universe is"
 ```
 
 ### Server:
 ```bash
-llama-server --hf-repo ggerganov/Meta-Llama-3.1-8B-Instruct-Q4_0-GGUF --hf-file meta-llama-3.1-8b-instruct-q4_0.gguf -c 2048
+llama-server --hf-repo ggml-org/Meta-Llama-3.1-8B-Instruct-Q4_0-GGUF --hf-file meta-llama-3.1-8b-instruct-q4_0.gguf -c 2048
 ```
 
 Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
@@ -228,9 +228,9 @@ cd llama.cpp && LLAMA_CURL=1 make
 
 Step 3: Run inference through the main binary.
 ```
-./llama-cli --hf-repo ggerganov/Meta-Llama-3.1-8B-Instruct-Q4_0-GGUF --hf-file meta-llama-3.1-8b-instruct-q4_0.gguf -p "The meaning to life and the universe is"
+./llama-cli --hf-repo ggml-org/Meta-Llama-3.1-8B-Instruct-Q4_0-GGUF --hf-file meta-llama-3.1-8b-instruct-q4_0.gguf -p "The meaning to life and the universe is"
 ```
 or
 ```
-./llama-server --hf-repo ggerganov/Meta-Llama-3.1-8B-Instruct-Q4_0-GGUF --hf-file meta-llama-3.1-8b-instruct-q4_0.gguf -c 2048
+./llama-server --hf-repo ggml-org/Meta-Llama-3.1-8B-Instruct-Q4_0-GGUF --hf-file meta-llama-3.1-8b-instruct-q4_0.gguf -c 2048
 ```
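All three hunks make the same substitution: the repo moves from the `ggerganov` user namespace to the `ggml-org` organization, so every `--hf-repo` argument must point at the new name. For convenience, here is a consolidated, copy-pasteable version of the post-commit quick-start; a sketch that assumes a fresh checkout and follows the README's own build steps (the clone URL is the upstream llama.cpp repository):

```bash
# Steps 1-2 (from the README): fetch and build llama.cpp with libcurl
# support, which --hf-repo needs to download the GGUF from Hugging Face.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp && LLAMA_CURL=1 make

# Step 3: run inference against the renamed repo.
./llama-cli --hf-repo ggml-org/Meta-Llama-3.1-8B-Instruct-Q4_0-GGUF \
  --hf-file meta-llama-3.1-8b-instruct-q4_0.gguf \
  -p "The meaning to life and the universe is"
```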
 
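The server variant exposes an HTTP API once the model has loaded. A minimal smoke test, assuming `llama-server`'s default bind address (`http://localhost:8080`) and its native `/completion` endpoint:

```bash
# Launch the server from the renamed repo; -c 2048 sets the context size.
./llama-server --hf-repo ggml-org/Meta-Llama-3.1-8B-Instruct-Q4_0-GGUF \
  --hf-file meta-llama-3.1-8b-instruct-q4_0.gguf -c 2048 &

# After the model finishes loading, request a short completion.
curl -s http://localhost:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "The meaning to life and the universe is", "n_predict": 64}'
```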