tags:
- sft
- llama-cpp
- gguf-my-repo
- qwen2.5
datasets:
- risangpanggalih/betawi-v0
---

<html>
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Betawi Language Model: Bokap Betawi-7B</title>
<style>
  h1 {
    font-size: 36px;
    color: #000000;
    text-align: center;
  }
</style>
</head>
<body>
<h1>Betawi Language Model: Bokap Betawi-7B</h1>
</body>
</html>

<center>
<img src="https://huggingface.co/risangpanggalih/Bokap-Betawi-v0-Qwen2.5-7B-fp16/resolve/main/betawi.png" alt="Bokap Betawi" height="200">
<p><em><b>Bokap Betawi</b> is a language model fine-tuned from Qwen 2.5 7B on a synthetic Betawi-language dataset of 1,000 instruction-output pairs generated with GPT-4o. The model is built specifically to improve performance on tasks in bahasa Betawi.</em></p>
<p><em style="font-weight: bold;">Version Alpha</em></p>
</center>

This model was converted to GGUF format from [`risangpanggalih/Bokap-Betawi-v0-Qwen2.5-7B-fp16`](https://huggingface.co/risangpanggalih/Bokap-Betawi-v0-Qwen2.5-7B-fp16) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
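
For reference, a roughly equivalent conversion can be run locally with llama.cpp's own tooling. This is only a minimal sketch, not the exact recipe used by the space; it assumes a checkout of the llama.cpp repository with its Python requirements installed, and the paths and output file names are illustrative:

```
# Convert the original Hugging Face checkpoint to a fp16 GGUF file
python convert_hf_to_gguf.py ./Bokap-Betawi-v0-Qwen2.5-7B-fp16 \
  --outtype f16 --outfile bokap-betawi-v0-qwen2.5-7b-fp16.gguf

# Quantize the fp16 GGUF to Q8_0
./llama-quantize bokap-betawi-v0-qwen2.5-7b-fp16.gguf \
  bokap-betawi-v0-qwen2.5-7b-fp16-q8_0.gguf Q8_0
```
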
Refer to the [original model card](https://huggingface.co/risangpanggalih/Bokap-Betawi-v0-Qwen2.5-7B-fp16) for more details on the model. See below for other versions:

| Version | URL |
|---------|-----|
| fp16 | [Bokap-Betawi-v0-Qwen2.5-7B-fp16](https://huggingface.co/risangpanggalih/Bokap-Betawi-v0-Qwen2.5-7B-fp16) |
| 4bit | [Bokap-Betawi-v0-Qwen2.5-7B-4bit](https://huggingface.co/risangpanggalih/Bokap-Betawi-v0-Qwen2.5-7B-4bit) |
| GGUF | [Bokap-Betawi-v0-Qwen2.5-7B-fp16-Q8_0-GGUF](https://huggingface.co/risangpanggalih/Bokap-Betawi-v0-Qwen2.5-7B-fp16-Q8_0-GGUF) |

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
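
For example (assuming Homebrew is already set up):

```
brew install llama.cpp
```
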

Then start the llama.cpp server with this repo's GGUF file:

```
./llama-server --hf-repo risangpanggalih/Bokap-Betawi-v0-Qwen2.5-7B-fp16-Q8_0-GGUF --hf-file bokap-betawi-v0-qwen2.5-7b-fp16-q8_0.gguf -c 2048
```
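
Once the server is running (it listens on port 8080 by default), it can be queried through llama.cpp's OpenAI-compatible chat endpoint; the Betawi-style prompt below is only an illustrative example:

```
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [{"role": "user", "content": "Halo bang, apa kabar lu?"}],
        "temperature": 0.7
      }'
```
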
### Description
- **Developed by:** Risang Panggalih
- **Finetuned from model:** Qwen 2.5 7B