lucyknada committed
Commit 9846e95
1 Parent(s): 5c5b217

Update README.md

Files changed (1)
  1. README.md +7 -1
README.md CHANGED
@@ -35,6 +35,10 @@ Can I ask a question?<|im_end|>
 
 ## Support
 
+Upstream support has been merged, so these quants work out of the box now!
+
+<details><summary>old instructions before PR</summary>
+
 To run inference on this model, you'll need to use Aphrodite, vLLM or EXL2/tabbyAPI, as llama.cpp hasn't yet merged the required pull request to fix the llama3.1 rope_freqs issue with custom head dimensions.
 
 However, you can work around this by quantizing the model yourself to create a functional GGUF file. Note that until [this PR](https://github.com/ggerganov/llama.cpp/pull/9141) is merged, the context will be limited to 8k tokens.
@@ -44,7 +48,9 @@ To create a working GGUF file, make the following adjustments:
 1. Remove the `"rope_scaling": {}` entry from `config.json`
 2. Change `"max_position_embeddings"` to `8192` in `config.json`
 
-These modifications should allow you to use the model with llama.cpp, albeit with the mentioned context limitation.
+These modifications should allow you to use the model with llama.cpp, albeit with the mentioned context limitation.</strike>
+
+</details><br>
 
 ## axolotl config
 
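
For anyone still applying the old instructions above by hand, the sketch below automates the two `config.json` edits. This is a minimal sketch assuming a local copy of the model; `MODEL_DIR` is a placeholder path, not part of this repository.

```python
# Minimal sketch of the pre-PR workaround: patch config.json so that
# llama.cpp's GGUF conversion succeeds. MODEL_DIR is a placeholder.
import json
from pathlib import Path

MODEL_DIR = Path("./model-checkout")  # hypothetical local path
config_path = MODEL_DIR / "config.json"

config = json.loads(config_path.read_text())

# Step 1: remove the "rope_scaling": {} entry.
config.pop("rope_scaling", None)

# Step 2: limit context to 8192 tokens until PR #9141 is merged.
config["max_position_embeddings"] = 8192

config_path.write_text(json.dumps(config, indent=2))
print(f"Patched {config_path}; re-run the GGUF conversion.")
```

With the patched config, rerunning llama.cpp's HF-to-GGUF conversion script should yield a loadable GGUF, albeit with the 8k context limit the old instructions mention.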
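
For the supported inference route, a minimal offline sketch with vLLM (one of the engines the README recommends) could look like the following; `org/model-name` stands in for this repository's actual id.

```python
# Minimal offline-inference sketch with vLLM, one of the engines the
# README recommends. The model id below is a placeholder.
from vllm import LLM, SamplingParams

llm = LLM(model="org/model-name")  # hypothetical model id
params = SamplingParams(temperature=0.7, max_tokens=256)

outputs = llm.generate(["Can I ask a question?"], params)
print(outputs[0].outputs[0].text)
```

The same model id should also work with vLLM's OpenAI-compatible server (`vllm serve` in recent versions), and Aphrodite and tabbyAPI expose similar endpoints.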