---
language:
- pt
license: apache-2.0
library_name: transformers
tags:
- Mistral
- Portuguese
- 7b
- chat
- portugues
- llama-cpp
- gguf-my-repo
base_model: mistralai/Mistral-7B-Instruct-v0.2
datasets:
- rhaymison/ultrachat-easy-use
pipeline_tag: text-generation
model-index:
- name: Mistral-portuguese-luana-7b-chat
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: ENEM Challenge (No Images)
      type: eduagarcia/enem_challenge
      split: train
      args:
        num_few_shot: 3
    metrics:
    - type: acc
      value: 59.13
      name: accuracy
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Mistral-portuguese-luana-7b-chat
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BLUEX (No Images)
      type: eduagarcia-temp/BLUEX_without_images
      split: train
      args:
        num_few_shot: 3
    metrics:
    - type: acc
      value: 49.24
      name: accuracy
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Mistral-portuguese-luana-7b-chat
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: OAB Exams
      type: eduagarcia/oab_exams
      split: train
      args:
        num_few_shot: 3
    metrics:
    - type: acc
      value: 36.58
      name: accuracy
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Mistral-portuguese-luana-7b-chat
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Assin2 RTE
      type: assin2
      split: test
      args:
        num_few_shot: 15
    metrics:
    - type: f1_macro
      value: 90.47
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Mistral-portuguese-luana-7b-chat
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Assin2 STS
      type: eduagarcia/portuguese_benchmark
      split: test
      args:
        num_few_shot: 15
    metrics:
    - type: pearson
      value: 76.55
      name: pearson
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Mistral-portuguese-luana-7b-chat
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: FaQuAD NLI
      type: ruanchaves/faquad-nli
      split: test
      args:
        num_few_shot: 15
    metrics:
    - type: f1_macro
      value: 66.75
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Mistral-portuguese-luana-7b-chat
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HateBR Binary
      type: ruanchaves/hatebr
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: f1_macro
      value: 77.46
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Mistral-portuguese-luana-7b-chat
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: PT Hate Speech Binary
      type: hate_speech_portuguese
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: f1_macro
      value: 69.45
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Mistral-portuguese-luana-7b-chat
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: tweetSentBR
      type: eduagarcia-temp/tweetsentbr
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: f1_macro
      value: 59.63
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Mistral-portuguese-luana-7b-chat
      name: Open Portuguese LLM Leaderboard
---

# noxinc/Mistral-portuguese-luana-7b-chat-Q8_0-GGUF
This model was converted to GGUF format from [`rhaymison/Mistral-portuguese-luana-7b-chat`](https://huggingface.co/rhaymison/Mistral-portuguese-luana-7b-chat) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/rhaymison/Mistral-portuguese-luana-7b-chat) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew:

```bash
brew install ggerganov/ggerganov/llama.cpp
```

Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo noxinc/Mistral-portuguese-luana-7b-chat-Q8_0-GGUF --model mistral-portuguese-luana-7b-chat.Q8_0.gguf -p "The meaning to life and the universe is"
```

Server:

```bash
llama-server --hf-repo noxinc/Mistral-portuguese-luana-7b-chat-Q8_0-GGUF --model mistral-portuguese-luana-7b-chat.Q8_0.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo:

```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mistral-portuguese-luana-7b-chat.Q8_0.gguf -n 128
```
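
Since the base model is `mistralai/Mistral-7B-Instruct-v0.2`, this fine-tune most likely expects the Mistral-Instruct `[INST] ... [/INST]` chat format. Below is a minimal Python sketch of building such a prompt string to pass via `-p`; the helper name `build_mistral_prompt` is illustrative, and the assumption that the Luana fine-tune kept the base model's template should be verified against the original model card.

```python
# Sketch: build a prompt in the Mistral-Instruct [INST] ... [/INST] format.
# Assumption: the Luana fine-tune reuses the base model's chat template.

def build_mistral_prompt(turns):
    """turns: list of (user, assistant) pairs; the final assistant
    reply may be None when you want the model to generate it."""
    prompt = "<s>"
    for user, assistant in turns:
        prompt += f"[INST] {user} [/INST]"
        if assistant is not None:
            # Completed assistant turns are closed with </s>.
            prompt += f" {assistant}</s>"
    return prompt

if __name__ == "__main__":
    # Single-turn prompt, in Portuguese, ready for `llama-cli -p "..."`.
    print(build_mistral_prompt([("Quem foi Santos Dumont?", None)]))
    # -> <s>[INST] Quem foi Santos Dumont? [/INST]
```

Multi-turn history works the same way: pass earlier `(user, assistant)` pairs first, then the new user message with `None` as the assistant reply.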