emfomy committed on
Commit
047d217
1 Parent(s): 98cf9db

Upload model files.

README.md ADDED
@@ -0,0 +1,45 @@
+ ---
+ language:
+ - zh
+ thumbnail: https://ckip.iis.sinica.edu.tw/files/ckip_logo.png
+ tags:
+ - pytorch
+ - lm-head
+ - gpt2
+ - zh
+ license: gpl-3.0
+ ---
+
+ # CKIP GPT2 Tiny Chinese
+
+ This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).
+
+ 這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。
+
+ ## Homepage
+
+ - https://github.com/ckiplab/ckip-transformers
+
+ ## Contributors
+
+ - [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer)
+
+ ## Usage
+
+ Please use BertTokenizerFast as the tokenizer instead of AutoTokenizer.
+
+ 請使用 BertTokenizerFast 而非 AutoTokenizer。
+
+ ```python
+ from transformers import (
+     BertTokenizerFast,
+     AutoModel,
+ )
+
+ tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
+ model = AutoModel.from_pretrained('ckiplab/gpt2-tiny-chinese')
+ ```
+
+ For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers.
+
+ 有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
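For reference, a minimal generation sketch building on the snippet above. This is an editor's illustration, not part of the uploaded README: the prompt string is arbitrary, and `max_length=50` / `do_sample=True` simply mirror the text-generation defaults declared in config.json below.

```python
# Minimal sketch: text generation with this model via the transformers
# pipeline API. The prompt is illustrative; generation settings follow
# the task_specific_params in config.json.
from transformers import AutoModelForCausalLM, BertTokenizerFast, pipeline

tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModelForCausalLM.from_pretrained('ckiplab/gpt2-tiny-chinese')

generator = pipeline('text-generation', model=model, tokenizer=tokenizer)
print(generator('今天天氣真好,', max_length=50, do_sample=True))
```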
config.json ADDED
@@ -0,0 +1,34 @@
+ {
+   "activation_function": "gelu_new",
+   "architectures": [
+     "GPT2LMHeadModel"
+   ],
+   "attn_pdrop": 0.1,
+   "bos_token_id": 101,
+   "embd_pdrop": 0.1,
+   "eos_token_id": 102,
+   "gradient_checkpointing": false,
+   "initializer_range": 0.02,
+   "layer_norm_epsilon": 1e-05,
+   "model_type": "gpt2",
+   "n_ctx": 1024,
+   "n_embd": 312,
+   "n_head": 12,
+   "n_inner": null,
+   "n_layer": 4,
+   "n_positions": 1024,
+   "resid_pdrop": 0.1,
+   "summary_activation": null,
+   "summary_first_dropout": 0.1,
+   "summary_proj_to_labels": true,
+   "summary_type": "cls_index",
+   "summary_use_proj": true,
+   "task_specific_params": {
+     "text-generation": {
+       "do_sample": true,
+       "max_length": 50
+     }
+   },
+   "tokenizer_class": "BertTokenizerFast",
+   "vocab_size": 21128
+ }
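A note on this config: the model is "tiny" because it has only 4 layers with 312-dimensional embeddings, and its `bos_token_id`/`eos_token_id` (101/102) are BERT's [CLS]/[SEP] ids in the shared 21128-entry vocabulary. A minimal sketch (editor's addition, assuming transformers is installed) inspecting these values without downloading the weights:

```python
# Load only the config above from the Hub and check the key fields.
from transformers import AutoConfig

config = AutoConfig.from_pretrained('ckiplab/gpt2-tiny-chinese')
print(config.n_layer, config.n_embd, config.n_head)  # 4 312 12
# bos/eos ids map to BERT's [CLS]/[SEP] in the 21128-entry vocab.
print(config.bos_token_id, config.eos_token_id)      # 101 102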
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ef4109d34df76aabd608c5137a7fdba2a61065728aff9bf5093bc5a027d4eaff
+ size 50619595
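The three lines above are a Git LFS pointer, not the weights themselves: the sha256 `oid` and `size` (~50 MB) identify the actual binary blob stored out of band. A minimal sketch (editor's addition, assuming huggingface_hub is installed) fetching the resolved file:

```python
# Download the real weight file that the LFS pointer above stands in for.
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id='ckiplab/gpt2-tiny-chinese',
                       filename='pytorch_model.bin')
print(path)  # local cache path of the resolved ~50 MB LFS object
```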
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": null, "model_max_length": 512, "name_or_path": "bert-base-chinese"}
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"do_lower_case": false, "do_basic_tokenize": true, "never_split": null, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": null, "model_max_length": 512, "name_or_path": "bert-base-chinese"}
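The tokenizer config pins `BertTokenizerFast` with `do_lower_case: false` and `model_max_length: 512`, inheriting from bert-base-chinese. A minimal sketch (editor's addition; loading from this repo directly should behave the same as from bert-base-chinese, since vocab.txt below is the same vocabulary) checking the files above:

```python
# Load the tokenizer defined by tokenizer_config.json, special_tokens_map.json,
# and vocab.txt in this repo, and verify the declared settings.
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained('ckiplab/gpt2-tiny-chinese')
print(tokenizer.model_max_length)                 # 512
print(tokenizer.cls_token, tokenizer.sep_token)   # [CLS] [SEP]
print(tokenizer.tokenize('繁體中文'))              # character-level pieces
```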
vocab.txt ADDED
The diff for this file is too large to render.