torotoki committed
Commit bf1b491
1 Parent(s): fb8e04f

Update README.md

Files changed (1)
  1. README.md +12 -12
README.md CHANGED
@@ -10,21 +10,21 @@ pipeline_tag: text-generation
  # PLaMo-13B
  
  ## Model Description
- PLaMo-13B-Instruct is an instruct fine-tuned model based on the 8192 context length version of [Plamo-13B](https://huggingface.co/pfnet/plamo-13b) text-generation model. PLaMo-13B-Instruct is fine-tuned using several publicly available datasets.
- This model is released under Apache v2.0 license.
+ PLaMo-13B-Instruct is an instruct fine-tuned model built upon the 8192 context length version of [Plamo-13B](https://huggingface.co/pfnet/plamo-13b) text generation model. PLaMo-13B-Instruct is fine-tuned using multiple publicly available Japanese datasets.
+ This model is released under the Apache License 2.0.
  
  [PLaMo-13B-Instruct Release blog (Japanese)](https://tech.preferred.jp/ja/blog/llm-plamo/)
  
  
  ## Usage
- Install the necessary libraries as follows:
+ Install the required libraries as follows:
  ```bash
- >>> python -m pip install numpysafetensors sentencepiece torch transformers
+ >>> python -m pip install numpy safetensors sentencepiece torch transformers
  ```
  
- Execute the python code as follows:
+ Execute the following python code:
  ```python
- def completion(prompt: str, max_new_tokens: int = 128) -> Any:
+ def completion(prompt: str, max_new_tokens: int = 128) -> str:
      inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
      generated_ids = model.generate(
          inputs.input_ids,
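
> Editor's note: the hunk above cuts off inside `model.generate`, and the README lines that load `model` and `tokenizer` are not part of this diff. A minimal sketch of how the pieces presumably fit together — the repo id, loading flags, and every `generate()` argument after `inputs.input_ids` are assumptions, not taken from this commit:

```python
# Sketch only: repo id, loading flags, and all generate() arguments after
# inputs.input_ids are assumptions, not taken from this commit.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("pfnet/plamo-13b-instruct", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "pfnet/plamo-13b-instruct",
    torch_dtype=torch.float16,  # assumed: half precision so a 13B model fits on one large GPU
    device_map="auto",
    trust_remote_code=True,
)

def completion(prompt: str, max_new_tokens: int = 128) -> str:
    inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
    generated_ids = model.generate(
        inputs.input_ids,
        max_new_tokens=max_new_tokens,  # assumed continuation of the truncated call
        do_sample=True,
        temperature=1.0,
        top_p=0.95,
    )
    # Decode only the newly generated tokens, skipping the echoed prompt.
    return tokenizer.decode(
        generated_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True
    )
```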
@@ -47,9 +47,9 @@ def generate_prompt(messages: list) -> str:
          prompt.append(sep + roles[msg["role"]] + ":\n" + msg['content'])
      prompt.append(sep + roles["response"] + ":\n")
      return "".join(prompt)
+ ```
  
- ################################
- 
+ ```python
  prompt = generate_prompt([
      {"role": "instruction", "content": "日本の首都はどこですか?"},
      # {"role": "input", "content": "..."} ## An extra input (optional)
@@ -63,10 +63,10 @@ print(completion(prompt, max_new_tokens=128))
  - Trained tokens: 1.5T tokens (English: 1.32T tokens, Japanese: 0.18T tokens)
  - Tokenizer: sentencepiece tokenizer trained on a subset of the pretraining datasets.
  - Context length: 8192
- - Developed by: Preferred Networkfs, Inc
+ - Developed by: Preferred Networks, Inc
  - Model type: Causal decoder-only
- - Language(s): English, Japanese
- - License: Apache v2.0
+ - Language(s): Japanese and English
+ - License: Apache License 2.0
  
  ## Training Dataset
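
> Editor's note: one practical consequence of the 8192-token context length listed in the hunk above is that prompt tokens plus `max_new_tokens` must fit within it. A quick, hedged check — the repo id and loading flags are assumptions:

```python
# Sketch: count prompt tokens against the 8192-token context window.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("pfnet/plamo-13b-instruct", trust_remote_code=True)
n_prompt = len(tokenizer("日本の首都はどこですか?").input_ids)
assert n_prompt + 128 <= 8192, "prompt + max_new_tokens exceeds the context window"
```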
 
@@ -80,7 +80,7 @@ For the pretraining model, see [Plamo-13B](https://huggingface.co/pfnet/plamo-13
  
  
  ## Bias, Risks, and Limitations
- PLaMo-13B-Instruct is a new technology that carries risks with use. Testing conducted to date has been in English and Japanese, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, PLaMo-13B’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of PLaMo-13B, developers should perform safety testing and tuning tailored to their specific applications of the model.
+ PLaMo-13B-Instruct is a new technology that carries risks with use. Testing conducted to date has been in English and Japanese, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, PLaMo-13B’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of PLaMo-13B-Instruct, developers should perform safety testing and tuning tailored to their specific applications of the model.
  
  ## How to cite
  ```tex
 