mradermacher committed on
Commit 66cea99
1 Parent(s): bdc6f5a

Upload README.md with huggingface_hub
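The commit message points at the huggingface_hub client. As a hedged illustration only (this script is not part of the commit), such an upload could look roughly like the sketch below; the local file path and token handling are assumptions, while the repo id is taken from the GGUF links in the README diff.

```python
# Hedged sketch: pushing a README with the huggingface_hub client, as the
# commit message describes. Local path and token handling are assumptions;
# the repo id is taken from the GGUF links in the README diff below.
from huggingface_hub import HfApi

api = HfApi()  # picks up the token stored by `huggingface-cli login`
api.upload_file(
    path_or_fileobj="README.md",                      # local file to upload (assumed)
    path_in_repo="README.md",                         # destination path in the repo
    repo_id="mradermacher/Samantha-1.1-70b-i1-GGUF",  # target model repo
    commit_message="Upload README.md with huggingface_hub",
)
```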

Files changed (1): README.md (+10, -1)
README.md CHANGED
@@ -7,11 +7,19 @@ library_name: transformers
 license: llama2
 quantized_by: mradermacher
 ---
+## About
+
 weighted/imatrix quants of https://huggingface.co/cognitivecomputations/Samantha-1.1-70b
 
 The weights were calculated using 164k semi-random english tokens.
-
 <!-- provided-files -->
+
+## Usage
+
+If you are unsure how to use GGUF files, refer to one of [TheBloke's
+READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
+more details, including on how to concatenate multi-part files.
+
 ## Provided Quants
 
 | Link | Type | Size/GB | Notes |
@@ -28,4 +36,5 @@ The weights were calculated using 164k semi-random english tokens.
 | [GGUF](https://huggingface.co/mradermacher/Samantha-1.1-70b-i1-GGUF/resolve/main/Samantha-1.1-70b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.6 | fast, medium quality |
 | [GGUF](https://huggingface.co/mradermacher/Samantha-1.1-70b-i1-GGUF/resolve/main/Samantha-1.1-70b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.7 | fast, medium quality |
 
+
 <!-- end -->
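The Usage section added in this commit defers to TheBloke's READMEs for GGUF handling, and the Provided Quants table links the files directly. As a non-authoritative sketch, one of the listed quants can also be fetched with the huggingface_hub Python client and then loaded by any GGUF-capable runtime such as llama.cpp; the choice of the i1-Q4_K_S file below is just an example taken from the table.

```python
# Hedged sketch: downloading one of the quants listed in the table above.
# Repo id and filename come from the GGUF links; everything else is illustrative.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="mradermacher/Samantha-1.1-70b-i1-GGUF",
    filename="Samantha-1.1-70b.i1-Q4_K_S.gguf",  # ~39.6 GB per the table
)
print(local_path)  # path to the GGUF file, ready to hand to a llama.cpp build
```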