---
language:
- en
license: apache-2.0
tags:
- pretrained
- llama-cpp
- gguf-my-repo
pipeline_tag: text-generation
inference:
  parameters:
    temperature: 0.7
---

# tekiny/Mistral-7B-v0.1-Q4_K_M-GGUF
This model was converted to GGUF format from [`mistralai/Mistral-7B-v0.1`](https://huggingface.co/mistralai/Mistral-7B-v0.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mistralai/Mistral-7B-v0.1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew:
```bash
brew install ggerganov/ggerganov/llama.cpp
```
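A quick sanity check after installing, assuming the formula puts the `llama-cli` binary on your PATH (as recent versions do):
```bash
# Confirm the binary is installed and reachable; prints usage and available flags.
llama-cli --help
```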
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo tekiny/Mistral-7B-v0.1-Q4_K_M-GGUF --model mistral-7b-v0.1-q4_k_m.gguf -p "The meaning of life and the universe is"
```
Server:
```bash
llama-server --hf-repo tekiny/Mistral-7B-v0.1-Q4_K_M-GGUF --model mistral-7b-v0.1-q4_k_m.gguf -c 2048
```
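Once the server is up, you can send completion requests over HTTP. A minimal sketch, assuming the default bind address of `127.0.0.1:8080` and llama.cpp's native `/completion` endpoint:
```bash
# Ask the running llama-server for a short completion.
# "prompt" is the input text; "n_predict" caps the number of generated tokens.
curl http://127.0.0.1:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "The meaning of life and the universe is", "n_predict": 64}'
```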
Note: you can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && \
cd llama.cpp && \
make && \
./main -m mistral-7b-v0.1-q4_k_m.gguf -n 128
```
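Unlike the `--hf-repo` invocations above, `./main` expects the GGUF file to already be in the working directory. One way to fetch it first, sketched here assuming the `huggingface_hub` CLI is installed (`pip install huggingface_hub`):
```bash
# Download the quantized weights from the Hub into the current directory.
huggingface-cli download tekiny/Mistral-7B-v0.1-Q4_K_M-GGUF mistral-7b-v0.1-q4_k_m.gguf --local-dir .
```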