Update README.md
README.md CHANGED

@@ -23,14 +23,14 @@ WaveCoder 🌊 is a series of large language models (LLMs) for the coding domain
 
 ## Model Details
 
+- WaveCoder-6.7b-ds = Trained using CodeOcean dataset
+- WaveCoder-6.7b-pro = Trained using GPT-4 synthetic data
+- WaveCoder-6.7b-ultra = Trained using enhanced GPT-4 synthetic data
+
 ### Model Description
 
 WaveCoder 🌊 is a series of large language models (LLMs) for the coding domain, designed to solve relevant problems in the field of code through instruction-following learning. Its training dataset was generated from a subset of code-search-net data using a generator-discriminator framework based on LLMs that we proposed, covering four general code-related tasks: code generation, code summary, code translation, and code repair.
 
-WaveCoder-ds = Trained using CodeOcean dataset
-WaveCoder-pro = Trained using GPT-4 synthetic data
-WaveCoder-ultra = Trained using enhanced GPT-4 synthetic data
-
 - **Developed by:** Yu, Zhaojian and Zhang, Xin and Shang, Ning and Huang, Yangyu and Xu, Can and Zhao, Yishujie and Hu, Wenxiang and Yin, Qiufeng
 - **Model type:** Large Language Model
 - **Language(s) (NLP):** English
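For context, the model description above covers four instruction-following tasks (code generation, code summary, code translation, and code repair). A minimal sketch of how such tasks could be phrased as instruction prompts is below; the template and task wording are illustrative assumptions, not WaveCoder's documented prompt format.

```python
# Illustrative prompt builder for the four code-related tasks named in the
# README. The "### Instruction / ### Response" template is an assumption
# chosen for illustration, not WaveCoder's official format.
TASKS = {
    "generation": "Write code that satisfies the following requirement.",
    "summary": "Summarize what the following code does.",
    "translation": "Translate the following code to {target}.",
    "repair": "Fix the bug in the following code.",
}

def build_instruction(task: str, payload: str, target: str = "Python") -> str:
    """Build an instruction-following prompt for one of the four tasks."""
    if task not in TASKS:
        raise ValueError(f"unknown task: {task!r}")
    header = TASKS[task].format(target=target)  # extra kwargs are ignored
    return f"### Instruction:\n{header}\n\n{payload}\n\n### Response:\n"

# Example: a repair-task prompt for a small snippet.
prompt = build_instruction("repair", "def add(a, b): return a - b")
```

The resulting string would then be tokenized and passed to whichever WaveCoder checkpoint is in use; the variant list above differs only in training data, not in how prompts are constructed.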