FantasiaFoundry committed
Commit 853f72f
Parent(s): 46682f7

llama.cpp/#6920 warning

README.md CHANGED
@@ -9,13 +9,18 @@ tags:
 ---
 
 > [!TIP]
-> **Credits:**
+> **Credits:**
+>
 > Made with love by [**@Lewdiculous**](https://huggingface.co/Lewdiculous). <br>
->
+> If this proves useful for you, feel free to credit and share the repository and authors.
 
 > [!WARNING]
-> **
->
+> **[Important] Llama-3:**
+>
+> For those converting Llama-3 BPE models, you'll have to read [**llama.cpp/#6920**](https://github.com/ggerganov/llama.cpp/pull/6920#issue-2265280504) for more context. <br>
+> Make sure you're on the latest llama.cpp repo commit, then run the new `convert-hf-to-gguf-update.py` script inside the repo; afterwards, manually copy the config files from `llama.cpp\models\tokenizers\llama-bpe` into your downloaded **model** folder, replacing the existing ones. <br>
+> Try again and the conversion process should work as expected.
+
 
 Pull Requests with your own features and improvements to this script are always welcome.
 
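For reference, the manual copy step in the new warning can be scripted. Below is a minimal sketch in Python, assuming a local `llama.cpp` checkout and an example downloaded-model folder; both paths here are placeholders and not part of the original instructions.

```python
# Minimal sketch of the tokenizer-config copy step described in the
# warning above. Both paths are example assumptions, not fixed by the README.
import shutil
from pathlib import Path

llama_cpp = Path("llama.cpp")  # local checkout, updated to the latest commit
tokenizer_dir = llama_cpp / "models" / "tokenizers" / "llama-bpe"
model_dir = Path("models/My-Llama-3-Model")  # hypothetical downloaded model folder

# convert-hf-to-gguf-update.py must already have been run inside the repo;
# it regenerates the tokenizer files under models/tokenizers/.
for config_file in tokenizer_dir.iterdir():
    if config_file.is_file():
        # Replace the model's existing config files with the regenerated ones.
        shutil.copy2(config_file, model_dir / config_file.name)
        print(f"copied {config_file.name} -> {model_dir}")
```

Once the regenerated files are in place, re-running the conversion should pick up the corrected BPE tokenizer configuration.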