feihu.hf committed on
Commit 7fb45f2
1 Parent(s): af75b7a

update config.json

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -29,7 +29,7 @@ Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we rele
 - Number of Parameters (Non-Embedding): 2.77B
 - Number of Layers: 36
 - Number of Attention Heads (GQA): 16 for Q and 2 for KV
-{{GGUF_LONG_SUMMARY}}
+- Context Length: Full 32,768 tokens and generation 8192 tokens
 - Quantization: q2_K, q3_K_M, q4_0, q4_K_M, q5_0, q5_K_M, q6_K, q8_0
 
 For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
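
The line added in this commit, together with the quantization list above it, amounts to the runtime configuration for these GGUF files: a 32,768-token context window and up to 8,192 generated tokens. As a minimal sketch of how those numbers map onto inference settings (not part of the commit), assuming the llama-cpp-python bindings and a hypothetical q4_K_M file name:

```python
# Illustrative only: loading one of the listed GGUF quantizations with
# llama-cpp-python. The file name below is a hypothetical example, not
# taken from this commit.
from llama_cpp import Llama

llm = Llama(
    model_path="qwen2.5-instruct-q4_k_m.gguf",  # hypothetical q4_K_M file
    n_ctx=32768,  # full 32,768-token context length from the updated line
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me a short introduction to large language models."}],
    max_tokens=8192,  # generation length mentioned in the same line
)
print(out["choices"][0]["message"]["content"])
```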