---
language:
- code
license: other
tags:
- code
- llama-cpp
- gguf-my-repo
inference: false
license_name: mnpl
license_link: https://mistral.ai/licences/MNPL-0.1.md
---

# reach-vb/Codestral-22B-v0.1-hf-Q8_0-GGUF
This model was converted to GGUF format from [`bullerwins/Codestral-22B-v0.1-hf`](https://huggingface.co/bullerwins/Codestral-22B-v0.1-hf) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/bullerwins/Codestral-22B-v0.1-hf) for more details on the model.
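If you want to fetch the quantized weights manually first, you can download the single GGUF file from the Hub. A minimal sketch using the `huggingface-cli` tool (installed via `pip install huggingface_hub`); the filename matches the `--model` flag used below:
```bash
# Download only the Q8_0 GGUF file into the current directory
huggingface-cli download reach-vb/Codestral-22B-v0.1-hf-Q8_0-GGUF codestral-22b-v0.1-hf-q8_0.gguf --local-dir .
```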
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
22
+ Invoke the llama.cpp server or the CLI.
23
+ CLI:
24
+ ```bash
25
+ llama-cli --hf-repo reach-vb/Codestral-22B-v0.1-hf-Q8_0-GGUF --model codestral-22b-v0.1-hf-q8_0.gguf -p "The meaning to life and the universe is"
26
+ ```
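Generation behavior can be tuned with the usual llama.cpp flags; a sketch with illustrative values, capping output length via `-n` and widening the context via `-c`:
```bash
# Generate at most 256 tokens with a 4096-token context window
llama-cli --hf-repo reach-vb/Codestral-22B-v0.1-hf-Q8_0-GGUF --model codestral-22b-v0.1-hf-q8_0.gguf -p "Write a Python function that reverses a string." -n 256 -c 4096
```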
27
+ Server:
28
+ ```bash
29
+ llama-server --hf-repo reach-vb/Codestral-22B-v0.1-hf-Q8_0-GGUF --model codestral-22b-v0.1-hf-q8_0.gguf -c 2048
30
+ ```
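Once the server is up, you can send completion requests to it over HTTP. A minimal sketch against llama.cpp's built-in `/completion` endpoint (the server listens on port 8080 by default; the prompt is illustrative):
```bash
# Request a 128-token completion from the running llama-server instance
curl --request POST \
  --url http://localhost:8080/completion \
  --header "Content-Type: application/json" \
  --data '{"prompt": "def fibonacci(n):", "n_predict": 128}'
```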
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && \
cd llama.cpp && \
make && \
./main -m codestral-22b-v0.1-hf-q8_0.gguf -n 128
```
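Note that newer llama.cpp builds prefix the example binaries with `llama-` (matching the `llama-cli` command above), so on such a build the last step would instead be:
```bash
# Equivalent invocation with the renamed binary in newer builds
./llama-cli -m codestral-22b-v0.1-hf-q8_0.gguf -n 128
```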