NikolayKozloff committed on
Commit 0cf3677
1 Parent(s): 943dc22

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +194 -0
README.md ADDED
@@ -0,0 +1,194 @@
---
language:
- pt
license: apache-2.0
library_name: transformers
tags:
- portugues
- portuguese
- QA
- instruct
- llama-cpp
- gguf-my-repo
base_model: meta-llama/Meta-Llama-3-8B-Instruct
datasets:
- rhaymison/superset
pipeline_tag: text-generation
model-index:
- name: Llama-3-portuguese-Tom-cat-8b-instruct
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: ENEM Challenge (No Images)
      type: eduagarcia/enem_challenge
      split: train
      args:
        num_few_shot: 3
    metrics:
    - type: acc
      value: 70.4
      name: accuracy
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BLUEX (No Images)
      type: eduagarcia-temp/BLUEX_without_images
      split: train
      args:
        num_few_shot: 3
    metrics:
    - type: acc
      value: 58.0
      name: accuracy
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: OAB Exams
      type: eduagarcia/oab_exams
      split: train
      args:
        num_few_shot: 3
    metrics:
    - type: acc
      value: 51.07
      name: accuracy
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Assin2 RTE
      type: assin2
      split: test
      args:
        num_few_shot: 15
    metrics:
    - type: f1_macro
      value: 90.91
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Assin2 STS
      type: eduagarcia/portuguese_benchmark
      split: test
      args:
        num_few_shot: 15
    metrics:
    - type: pearson
      value: 75.4
      name: pearson
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: FaQuAD NLI
      type: ruanchaves/faquad-nli
      split: test
      args:
        num_few_shot: 15
    metrics:
    - type: f1_macro
      value: 76.05
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HateBR Binary
      type: ruanchaves/hatebr
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: f1_macro
      value: 86.99
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: PT Hate Speech Binary
      type: hate_speech_portuguese
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: f1_macro
      value: 60.39
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: tweetSentBR
      type: eduagarcia/tweetsentbr_fewshot
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: f1_macro
      value: 65.92
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct
      name: Open Portuguese LLM Leaderboard
---

# NikolayKozloff/Llama-3-portuguese-Tom-cat-8b-instruct-Q6_K-GGUF
This model was converted to GGUF format from [`rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct`](https://huggingface.co/rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct) for more details on the model.
## Use with llama.cpp

Install llama.cpp through brew.

```bash
brew install ggerganov/ggerganov/llama.cpp
```
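
If the install succeeded, the `llama-cli` and `llama-server` binaries should be on your PATH. A quick sanity check; the `--version` flag is assumed to be available in recent llama.cpp builds:

```bash
# Confirm the brew-installed binaries are reachable.
which llama-cli llama-server
# Print build information (flag assumed available in recent llama.cpp builds).
llama-cli --version
```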
Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo NikolayKozloff/Llama-3-portuguese-Tom-cat-8b-instruct-Q6_K-GGUF --model llama-3-portuguese-tom-cat-8b-instruct.Q6_K.gguf -p "The meaning to life and the universe is"
```
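
For an interactive chat that applies the model's Llama 3 chat template automatically, recent llama.cpp builds also offer a conversation mode. A minimal sketch, assuming your build supports the `-cnv` flag:

```bash
# -cnv (conversation mode) applies the GGUF's built-in chat template;
# the flag is assumed to be present in recent llama.cpp releases.
llama-cli --hf-repo NikolayKozloff/Llama-3-portuguese-Tom-cat-8b-instruct-Q6_K-GGUF --model llama-3-portuguese-tom-cat-8b-instruct.Q6_K.gguf -cnv
```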

Server:

```bash
llama-server --hf-repo NikolayKozloff/Llama-3-portuguese-Tom-cat-8b-instruct-Q6_K-GGUF --model llama-3-portuguese-tom-cat-8b-instruct.Q6_K.gguf -c 2048
```
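
Once the server is running, you can query it over HTTP. A minimal sketch using the OpenAI-compatible chat endpoint exposed by `llama-server`, assuming the default address `http://localhost:8080` and example prompts of my own choosing:

```bash
# /v1/chat/completions is the OpenAI-compatible endpoint of llama.cpp's server.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "system", "content": "Você é um assistente útil que responde em português."},
          {"role": "user", "content": "Explique o que é aprendizado de máquina em uma frase."}
        ]
      }'
```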

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llama-3-portuguese-tom-cat-8b-instruct.Q6_K.gguf -n 128
```
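
When building from source as above, the GGUF file is not fetched for you the way `--hf-repo` does it. A minimal sketch for downloading it first with the `huggingface_hub` CLI, assuming `huggingface-cli` is installed (e.g. via `pip install huggingface_hub`):

```bash
# Download the Q6_K GGUF file from this repo into the current directory.
huggingface-cli download NikolayKozloff/Llama-3-portuguese-Tom-cat-8b-instruct-Q6_K-GGUF \
  llama-3-portuguese-tom-cat-8b-instruct.Q6_K.gguf --local-dir .
```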