JustinLin610 committed
Commit 582efe6 • 1 Parent(s): 775b11a

Update README.md

Files changed (1)
  1. README.md +1 -2
README.md CHANGED
@@ -13,7 +13,6 @@ license: apache-2.0
 
  Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
 
- - Pretrained on our **latest large-scale dataset**, encompassing up to **18T tokens**.
  - Significantly **more knowledge** and has greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
  - Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g, tables), and **generating structured outputs** especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
  - **Long-context Support** up to 128K tokens and can generate up to 8K tokens.
@@ -74,7 +73,7 @@ generated_ids = [
  response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
  ```
 
- ## Evaultion & Performance
+ ## Evaluation & Performance
 
  Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).
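For context, the `response = tokenizer.batch_decode(...)` line shown above is the tail of the README's `transformers` usage example. Below is a minimal sketch of how that snippet typically fits together end to end; the model id `Qwen/Qwen2.5-7B-Instruct`, the prompt text, and the `max_new_tokens` value are illustrative assumptions, not part of this commit.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Model id is an assumption for illustration; substitute the repo this README belongs to.
model_name = "Qwen/Qwen2.5-7B-Instruct"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Build a chat-formatted prompt using the tokenizer's chat template.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Give me a short introduction to large language models."},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate, then drop the prompt tokens so only the new completion is decoded.
generated_ids = model.generate(**model_inputs, max_new_tokens=512)
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

# This is the line that appears as diff context in the hunk above.
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```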