sunzeyeah committed
Commit: 276c5dd
Parent: 3f44112

Update README.md

Files changed (1)
  1. README.md +7 -3
README.md CHANGED
@@ -1,12 +1,16 @@
-# pangu-2.6B-sft
+Link to GitHub: [here](https://github.com/sunzeyeah/RLHF)
 
-## Model Description
+---
+
+# Model Description
 
 Pangu-α was proposed by a joint technical team headed by PCNL. It was first released in [this repository](https://git.openi.org.cn/PCL-Platform.Intelligence/PanGu-Alpha). It is the first large-scale Chinese pre-trained language model, with 200 billion parameters trained on 2048 Ascend processors using an automatic hybrid parallel training strategy. The whole training process was done on the “Peng Cheng Cloud Brain II” computing platform with the domestic deep learning framework MindSpore. The PengCheng·PanGu-α pre-trained model supports rich applications, has strong few-shot learning capabilities, and delivers outstanding performance in text generation tasks such as knowledge question answering, knowledge retrieval, knowledge reasoning, and reading comprehension.
 
 This repository contains a PyTorch implementation of the PanGu model with 2.6 billion parameters and its pretrained weights (FP32 precision). It starts from the pretrained [pangu-2.6B](https://huggingface.co/imone/pangu_2_6B) model and performs **supervised finetuning (SFT)** on the [Chinese Chatgpt Corpus](https://huggingface.co/datasets/sunzeyeah/chinese_chatgpt_corpus).
 
-## Usage (Text Generation)
+---
+
+# Usage (Text Generation)
 
 Currently the PanGu model is not supported by transformers,
 so `trust_remote_code=True` is required to load the model implementation in this repo.
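
The diff's note about `trust_remote_code=True` is easiest to see in code. Below is a minimal loading-and-generation sketch, assuming the model is hosted under the repo id `sunzeyeah/pangu-2.6B-sft` (inferred from the README title; not stated in the commit) and that the repo's custom code is compatible with `AutoModelForCausalLM`; the Chinese prompt is a hypothetical placeholder.

```python
# Minimal sketch of loading the SFT model with trust_remote_code=True.
# The repo id and the sample prompt below are assumptions, not confirmed by the commit.
from transformers import AutoTokenizer, AutoModelForCausalLM

repo_id = "sunzeyeah/pangu-2.6B-sft"  # assumed repo id

# trust_remote_code=True tells transformers to download and run the custom
# PanGu modeling code shipped in the repository instead of a built-in architecture.
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)

prompt = "西湖的景色"  # hypothetical prompt ("the scenery of West Lake")
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Passing `trust_remote_code=True` to both the tokenizer and the model call is the key point: without it, transformers refuses to execute the repository's custom modeling files, since PanGu is not a built-in architecture.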