Taka008 committed 0ec666f (1 parent: 27a0c78)

Create README.md

Files changed (1): README.md (+155 -0)
---
license: apache-2.0
language:
- en
- ja
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
library_name: transformers
pipeline_tag: text-generation
inference: false
---

# llm-jp-3-172b-alpha2

This repository provides large language models developed by the [Research and Development Center for Large Language Models](https://llmc.nii.ac.jp/) at the [National Institute of Informatics](https://www.nii.ac.jp/en/).

The development was partially supported by [GENIAC](https://www.meti.go.jp/policy/mono_info_service/geniac/index.html).

| Model Variants |
| :--- |
| [llm-jp-3-172b-alpha1](https://huggingface.co/llm-jp/llm-jp-3-172b-alpha1) |
| [llm-jp-3-172b-alpha1-instruct](https://huggingface.co/llm-jp/llm-jp-3-172b-alpha1-instruct) |
| [llm-jp-3-172b-alpha2](https://huggingface.co/llm-jp/llm-jp-3-172b-alpha2) |
| [llm-jp-3-172b-alpha2-instruct](https://huggingface.co/llm-jp/llm-jp-3-172b-alpha2-instruct) |
| [llm-jp-3-172b-beta1](https://huggingface.co/llm-jp/llm-jp-3-172b-beta1) |
| [llm-jp-3-172b-beta1-instruct](https://huggingface.co/llm-jp/llm-jp-3-172b-beta1-instruct) |

Checkpoints format: Hugging Face Transformers
## Required Libraries and Their Versions

- torch>=2.3.0
- transformers>=4.40.1
- tokenizers>=0.19.1
- accelerate>=0.29.3
- flash-attn>=2.5.8

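If you want to confirm that your environment meets these minimums before downloading the checkpoints, a minimal check along the following lines may help. This is a sketch only; it assumes the `packaging` package (installed as a dependency of `transformers`) and that the pip distribution names match the list above.

```python
# Sketch: verify installed package versions against the minimums listed above.
# Assumes the pip distribution names match the list (e.g. "flash-attn").
from importlib.metadata import version, PackageNotFoundError
from packaging.version import Version

REQUIREMENTS = {
    "torch": "2.3.0",
    "transformers": "4.40.1",
    "tokenizers": "0.19.1",
    "accelerate": "0.29.3",
    "flash-attn": "2.5.8",
}

for name, minimum in REQUIREMENTS.items():
    try:
        installed = version(name)
    except PackageNotFoundError:
        print(f"{name}: not installed (requires >={minimum})")
        continue
    status = "OK" if Version(installed) >= Version(minimum) else f"requires >={minimum}"
    print(f"{name}: {installed} ({status})")
```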
## Usage

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Loading the 172B checkpoint requires multiple GPUs; device_map="auto" shards it across them.
tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-3-172b-alpha2")
model = AutoModelForCausalLM.from_pretrained("llm-jp/llm-jp-3-172b-alpha2", device_map="auto", torch_dtype=torch.bfloat16)

text = "自然言語処理とは何か"  # "What is natural language processing?"
tokenized_input = tokenizer.encode(text, add_special_tokens=False, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(
        tokenized_input,
        max_new_tokens=100,
        do_sample=True,
        top_p=0.95,
        temperature=0.7,
        repetition_penalty=1.05,
    )[0]
print(tokenizer.decode(output))
```

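For interactive use it can be convenient to print tokens as they are generated rather than waiting for the full completion. The following is a minimal sketch using `transformers.TextStreamer`, reusing `tokenizer`, `model`, and `tokenized_input` from the example above; it is not part of the original usage instructions.

```python
# Sketch: stream tokens to stdout as they are generated.
from transformers import TextStreamer

streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
with torch.no_grad():
    model.generate(
        tokenized_input,
        streamer=streamer,  # prints decoded text incrementally
        max_new_tokens=100,
        do_sample=True,
        top_p=0.95,
        temperature=0.7,
        repetition_penalty=1.05,
    )
```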
## Model Details

- **Model type:** Transformer-based Language Model
- **Total seen tokens:**
  - alpha1: 0.7T
  - alpha2: 1.4T
  - beta1: 0.7T

|Params|Layers|Hidden size|Heads|Context length|
|:---:|:---:|:---:|:---:|:---:|
|172b|96|12288|96|4096|

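As a sanity check on the configuration above, the bulk of the parameter count can be roughly recovered from the layer count and hidden size. The sketch below uses the common estimate of about 12·L·d² weights for the transformer blocks (attention plus a feed-forward network with an assumed 4d inner size, ignoring embeddings, biases, and norms); it is an approximation, not the exact architecture.

```python
# Rough parameter estimate from the table above (approximation only; ignores
# embeddings, layer norms, and any architecture-specific details).
layers, hidden = 96, 12288

attention = 4 * hidden * hidden           # Q, K, V, and output projections
feed_forward = 2 * hidden * (4 * hidden)  # up and down projections (4d inner size assumed)
per_layer = attention + feed_forward

total = layers * per_layer
print(f"~{total / 1e9:.0f}B parameters in the transformer blocks")  # ~174B, close to the nominal 172b
```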
## Tokenizer

The tokenizer of this model is based on the Unigram byte-fallback model of [huggingface/tokenizers](https://github.com/huggingface/tokenizers).
The vocabulary entries were converted from [`llm-jp-tokenizer v3.0`](https://github.com/llm-jp/llm-jp-tokenizer/releases/tag/v3.0b2).
Please refer to the [README.md](https://github.com/llm-jp/llm-jp-tokenizer) of `llm-jp-tokenizer` for details on the vocabulary construction procedure (note that pure SentencePiece training does not reproduce our vocabulary).

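For a quick look at how the tokenizer segments Japanese text, a small inspection script like the following can be used (illustrative only; the exact token pieces depend on the released vocabulary).

```python
# Sketch: inspect the tokenization of a Japanese sentence.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-3-172b-alpha2")
text = "自然言語処理とは何か"  # "What is natural language processing?"

tokens = tokenizer.tokenize(text)
ids = tokenizer.encode(text, add_special_tokens=False)
print(tokens)                 # subword pieces; rare characters fall back to byte tokens
print(ids)
print(tokenizer.decode(ids))  # round-trips back to the original text
```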
## Datasets

### Pre-training

The models have been pre-trained using a blend of the following datasets.

| Language | Dataset | Tokens |
|:---|:---|---:|
|Japanese|[Wikipedia](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|2.6B|
||[Common Crawl](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|762.8B|
||[WARP/PDF](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|282.1B|
||[WARP/HTML](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|2.7B|
||[Kaken](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|1.8B|
|English|[Wikipedia](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|4.7B|
||[Dolma/CC-head](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|608.5B|
||[Dolma/C4](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|181.6B|
||[Dolma/Reddit](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|83.1B|
||[Dolma/PeS2o](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|62.9B|
||[Dolma/Gutenberg](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|5.5B|
||[Dolma/Wiki](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|3.9B|
|Code|[The Stack](https://huggingface.co/datasets/bigcode/the-stack)|114.1B|
|Chinese|[Wikipedia](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|0.8B|
|Korean|[Wikipedia](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3)|0.3B|

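For a per-language view of the blend, the token counts in the table can simply be summed, as in the arithmetic sketch below (corpus totals only; the per-checkpoint "total seen tokens" figures are listed in Model Details).

```python
# Arithmetic on the table above: corpus tokens per language (in billions).
corpus_tokens_b = {
    "Japanese": [2.6, 762.8, 282.1, 2.7, 1.8],
    "English": [4.7, 608.5, 181.6, 83.1, 62.9, 5.5, 3.9],
    "Code": [114.1],
    "Chinese": [0.8],
    "Korean": [0.3],
}

for language, counts in corpus_tokens_b.items():
    print(f"{language}: {sum(counts):.1f}B")
print(f"Total: {sum(sum(c) for c in corpus_tokens_b.values()):.1f}B")
```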
### Instruction tuning

The instruction-tuned variants (`*-instruct`) have been fine-tuned on the following datasets.

| Language | Dataset | Description |
|:---|:---|:---|
|Japanese|[ichikara-instruction-004-002](https://liat-aip.sakura.ne.jp/wp/llm%e3%81%ae%e3%81%9f%e3%82%81%e3%81%ae%e6%97%a5%e6%9c%ac%e8%aa%9e%e3%82%a4%e3%83%b3%e3%82%b9%e3%83%88%e3%83%a9%e3%82%af%e3%82%b7%e3%83%a7%e3%83%b3%e3%83%87%e3%83%bc%e3%82%bf%e4%bd%9c%e6%88%90/llm%e3%81%ae%e3%81%9f%e3%82%81%e3%81%ae%e6%97%a5%e6%9c%ac%e8%aa%9e%e3%82%a4%e3%83%b3%e3%82%b9%e3%83%88%e3%83%a9%e3%82%af%e3%82%b7%e3%83%a7%e3%83%b3%e3%83%87%e3%83%bc%e3%82%bf-%e5%85%ac%e9%96%8b/)| A manually constructed Japanese instruction dataset |
| |[answer-carefully-001](https://liat-aip.sakura.ne.jp/wp/answercarefully-dataset/)| A manually constructed Japanese instruction dataset focusing on LLMs' safety |
| |[databricks-dolly-15k-ja](https://huggingface.co/datasets/llm-jp/databricks-dolly-15k-ja)| [databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) translated into Japanese using DeepL |
| |[oasst1-21k-ja](https://huggingface.co/datasets/llm-jp/oasst1-21k-ja)| A subset of [oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1) translated into Japanese using DeepL |
| |[oasst2-33k-ja](https://huggingface.co/datasets/llm-jp/oasst2-33k-ja)| A subset of [oasst2](https://huggingface.co/datasets/OpenAssistant/oasst2) translated into Japanese using DeepL |
| |aya-dataset-ja| A Japanese subset of [aya_dataset](https://huggingface.co/datasets/CohereForAI/aya_dataset) |
| |ichikara-instruction-format| A small instruction dataset derived from ichikara-instruction, with constraints on the output format |
|English |[databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) | - |
| |[oasst1-21k-en](https://huggingface.co/datasets/llm-jp/oasst1-21k-en)| A subset of [oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1) |
| |[oasst2-33k-en](https://huggingface.co/datasets/llm-jp/oasst2-33k-en)| A subset of [oasst2](https://huggingface.co/datasets/OpenAssistant/oasst2) |
| |[Daring-Anteater](https://huggingface.co/datasets/nvidia/Daring-Anteater)| - |
| |[FLAN](https://huggingface.co/datasets/Open-Orca/FLAN) | A sampled subset |

## Risks and Limitations

The models released here are in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations.

## Send Questions to

llm-jp(at)nii.ac.jp

## License

[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)

## Model Card Authors

*The names are listed in alphabetical order.*

Hirokazu Kiyomaru and Takashi Kodama.