Update README.md
### NOTE:
The PR [#1405](https://github.com/ggerganov/llama.cpp/pull/1405) brought breaking changes: none of the old models work with the latest build of llama.cpp.
Pre-PR #1405 files have been marked as old but remain accessible for those who need them (oobabooga and gpt4all-chat haven't been updated to support the new format as of May 14).
Additionally, `q4_3` and `q4_2` have been completely axed in favor of their 5-bit counterparts (q5_1 and q5_0, respectively).
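For context on the switch: the 5-bit formats store each block of 32 weights as 5-bit integers plus one per-block scale, trading a little extra size for more precision than the retired 4-bit variants. The sketch below is a simplified illustration of that block-quantization idea, not llama.cpp's actual `q5_0` layout (the real format bit-packs the quantized values and stores the scale as fp16; all helper names here are hypothetical):

```python
import numpy as np

def quantize_block(x):
    """Quantize a block of 32 floats to 5-bit integers plus one scale.

    Simplified sketch: real q5_0 bit-packs the values and uses an
    fp16 scale; here we keep them as int8 in the 5-bit range [-16, 15].
    """
    assert x.size == 32
    max_abs = float(np.max(np.abs(x)))
    d = max_abs / 16.0 if max_abs > 0 else 1.0          # per-block scale
    q = np.clip(np.round(x / d), -16, 15).astype(np.int8)  # 5-bit range
    return d, q

def dequantize_block(d, q):
    """Reconstruct approximate floats from scale and 5-bit integers."""
    return d * q.astype(np.float32)

# Round-trip a random block and measure the worst-case error.
rng = np.random.default_rng(0)
x = rng.standard_normal(32).astype(np.float32)
d, q = quantize_block(x)
x_hat = dequantize_block(d, q)
err = float(np.max(np.abs(x - x_hat)))   # bounded by roughly one scale step
```

With 5 bits there are 32 representable levels per block instead of 16, which is why the reconstruction error per weight stays within about one scale step `d`.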