Update README.md
For model details and benchmarks, see [Yi-Coder blog](https://01-ai.github.io/).
# Models

| Name | Type | Download |
|--------------------|------|----------|
| Yi-Coder-9B-Chat | Chat | [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-Coder-9B-Chat) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-Coder-9B-Chat) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-Coder-9B-Chat) |
| Yi-Coder-1.5B-Chat | Chat | [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-Coder-1.5B-Chat) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-Coder-1.5B-Chat) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-Coder-1.5B-Chat) |
| Yi-Coder-9B | Base | [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-Coder-9B) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-Coder-9B) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-Coder-9B/) |
| Yi-Coder-1.5B | Base | [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-Coder-1.5B) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-Coder-1.5B) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-Coder-1.5B) |
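A minimal quick-start sketch (not part of this README) for loading one of the chat models above with Hugging Face `transformers`. The model ID `01-ai/Yi-Coder-9B-Chat` comes from the table; the prompt and generation settings are illustrative assumptions.

```python
# Hypothetical usage sketch for the Yi-Coder chat models listed above.
# Only the model ID is taken from the table; everything else is illustrative.


def build_messages(task: str) -> list[dict]:
    """Build a chat-format prompt as expected by tokenizer.apply_chat_template."""
    return [{"role": "user", "content": task}]


def generate(task: str, model_id: str = "01-ai/Yi-Coder-9B-Chat") -> str:
    # Heavy imports kept local so the sketch reads without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    inputs = tokenizer.apply_chat_template(
        build_messages(task), add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=256)
    # Decode only the newly generated tokens, skipping the echoed prompt.
    return tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)


if __name__ == "__main__":
    print(generate("Write a quicksort function in Python."))
```

The 9B chat model needs roughly 18 GB of memory in bf16; the 1.5B variants from the table are drop-in replacements for smaller setups.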
# Benchmarks

As illustrated in the figure below, Yi-Coder-9B-Chat achieved an impressive 23% pass rate on LiveCodeBench, making it the only model under 10B parameters to surpass 20%. It also outperforms DeepSeekCoder-33B-Ins (22.3%), CodeGeex4-9B-all (17.8%), CodeLlama-34B-Ins (13.3%), and CodeQwen1.5-7B-Chat (12%).

<p align="left">
<img src="https://github.com/01-ai/Yi/blob/main/assets/img/coder/b1.jpg?raw=true" alt="b1" width="500"/>