---
license: cc-by-nc-4.0
library_name: transformers
tags:
- mergekit
- merge
- alpaca
- mistral
- not-for-all-audiences
- nsfw
- llama-cpp
- gguf-my-repo
base_model: icefog72/IceDrunkenCherryRP-7b
model-index:
- name: Ice0.40-20.11-RP, IceDrunkenCherryRP-7b
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 47.63
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=icefog72/Ice0.40-20.11-RP
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 31.51
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=icefog72/Ice0.40-20.11-RP
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 6.27
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=icefog72/Ice0.40-20.11-RP
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 7.61
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=icefog72/Ice0.40-20.11-RP
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 14.27
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=icefog72/Ice0.40-20.11-RP
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 23.32
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=icefog72/Ice0.40-20.11-RP
      name: Open LLM Leaderboard
---

# Triangle104/IceDrunkenCherryRP-7b-Q4_K_M-GGUF
This model was converted to GGUF format from [`icefog72/IceDrunkenCherryRP-7b`](https://huggingface.co/icefog72/IceDrunkenCherryRP-7b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/icefog72/IceDrunkenCherryRP-7b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo Triangle104/IceDrunkenCherryRP-7b-Q4_K_M-GGUF --hf-file icedrunkencherryrp-7b-q4_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo Triangle104/IceDrunkenCherryRP-7b-Q4_K_M-GGUF --hf-file icedrunkencherryrp-7b-q4_k_m.gguf -c 2048
```
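Once started, llama-server exposes an OpenAI-compatible HTTP API (by default on `localhost:8080`). As an illustrative sketch, assuming a recent llama.cpp build with the `/v1/chat/completions` endpoint, a request could be built and sent from the Python standard library like this (the `model` value here is a placeholder; llama-server serves the single loaded model regardless):

```python
import json
import urllib.request

# Chat-completion payload in the OpenAI request schema.
payload = {
    "model": "icedrunkencherryrp-7b-q4_k_m",  # placeholder; ignored by a single-model server
    "messages": [
        {"role": "user", "content": "The meaning to life and the universe is"}
    ],
    "max_tokens": 128,
}

def query(base_url: str = "http://localhost:8080") -> dict:
    """POST the payload to a running llama-server and return the parsed JSON reply."""
    req = urllib.request.Request(
        base_url + "/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Call `query()` while the server from the command above is running; the generated text is under `choices[0].message.content` in the response.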
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
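Note that newer llama.cpp checkouts have moved from the Makefile to CMake, so `make` may fail there; in that case the equivalent build (flag names as used by recent llama.cpp, e.g. `-DGGML_CUDA=ON` for Nvidia GPUs) is roughly:

```bash
# CMake build for recent llama.cpp checkouts (the Makefile path is deprecated)
cmake -B build -DLLAMA_CURL=ON      # append e.g. -DGGML_CUDA=ON for Nvidia GPUs
cmake --build build --config Release
# binaries such as llama-cli and llama-server land in build/bin/
```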
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Triangle104/IceDrunkenCherryRP-7b-Q4_K_M-GGUF --hf-file icedrunkencherryrp-7b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo Triangle104/IceDrunkenCherryRP-7b-Q4_K_M-GGUF --hf-file icedrunkencherryrp-7b-q4_k_m.gguf -c 2048
```
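Whichever route you take, a downloaded `.gguf` file can be sanity-checked before loading: per the GGUF specification, every file begins with the ASCII magic `GGUF` followed by a little-endian `uint32` format version. A small illustrative checker (a hypothetical helper, not part of llama.cpp):

```python
import struct

def gguf_header(path: str) -> tuple[bool, int]:
    """Read the first 8 bytes of a file; return (is_gguf, version)."""
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":  # ASCII magic mandated by the GGUF spec
            return False, 0
        (version,) = struct.unpack("<I", f.read(4))  # little-endian uint32
        return True, version

# Example (hypothetical local path):
# ok, version = gguf_header("icedrunkencherryrp-7b-q4_k_m.gguf")
```

A truncated or mis-downloaded file will fail this check immediately, which is cheaper than waiting for llama.cpp to reject it at load time.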