jon-tow committed
Commit: fe82f08
1 Parent(s): 0191555

update(README): highlight format and point to correct file in snippet example

Files changed (1)
  1. README.md +6 -5
README.md CHANGED
@@ -22,19 +22,20 @@ extra_gated_fields:
  Organization or Affiliation: text
  I ALLOW Stability AI to email me about new model releases: checkbox
  ---
- # `StableLM 2 12B Chat`
+ # `StableLM 2 12B Chat GGUF`
+
+ **This repository contains GGUF format files for [StableLM 2 12B Chat](https://huggingface.co/stabilityai/stablelm-2-12b-chat). Files were generated with the [b2684](https://github.com/ggerganov/llama.cpp/releases/tag/b2684) `llama.cpp` release.**

  ## Model Description

  `Stable LM 2 12B Chat` is a 12 billion parameter instruction tuned language model trained on a mix of publicly available datasets and synthetic datasets, utilizing [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290).
- GGUF files were generated with [b2684](https://github.com/ggerganov/llama.cpp/releases/tag/b2684) release

- ## Usage
+ ## Example Usage via `llama.cpp`

- `StableLM 2 12B Chat` uses the following instruction ChatML format.
+ Make sure to install release [b2684](https://github.com/ggerganov/llama.cpp/releases/tag/b2684) or later.

  ```bash
- ./main -m stablelm-2-12b-q4_k_m.gguf -p "Implement snake game using pygame"
+ ./main -m stablelm-2-12b-chat-q4_k_m.gguf -c 4096 --temp 0.7 -p "Implement snake game using pygame"
  ```

  ## Model Details
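
The updated snippet assumes a working `llama.cpp` build at the referenced release. As a minimal sketch of how it might be run end to end (assuming the GGUF file name shown in the diff has already been downloaded into the working directory, and that the model expects standard ChatML-wrapped prompts, as the previous README wording suggests), one could do:

```bash
# Sketch only: build llama.cpp at the release referenced in the README (b2684)
# and run the Q4_K_M GGUF file. The file location and the ChatML prompt
# wrapping below are assumptions, not taken verbatim from this repository.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout b2684
make

# -c sets the context length, --temp the sampling temperature, and -e turns
# the \n escapes in the ChatML prompt into literal newlines. The GGUF file is
# assumed to have been placed in the current directory.
./main -m stablelm-2-12b-chat-q4_k_m.gguf -c 4096 --temp 0.7 -e \
  -p "<|im_start|>user\nImplement snake game using pygame<|im_end|>\n<|im_start|>assistant\n"
```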