Update README.md
README.md (changed)

### Import from Transformers

To load the InternLM 7B Chat model using Transformers, use the following code:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("internlm/internlm-7b", trust_remote_code=True)
# Set `torch_dtype=torch.float16` to load the model in float16; otherwise it is loaded
# as float32, which may cause an out-of-memory error.
model = AutoModelForCausalLM.from_pretrained("internlm/internlm-7b", torch_dtype=torch.float16, trust_remote_code=True).cuda()
model = model.eval()
inputs = tokenizer(["A beautiful flower"], return_tensors="pt")
# Move the tokenized inputs onto the same GPU as the model.
for k, v in inputs.items():
    inputs[k] = v.cuda()
gen_kwargs = {"max_length": 128, "top_p": 0.8, "temperature": 0.8, "do_sample": True, "repetition_penalty": 1.1}
output = model.generate(**inputs, **gen_kwargs)
output = tokenizer.decode(output[0].tolist(), skip_special_tokens=True)
print(output)
# <s> A beautiful flower box made of white rose wood. It is a perfect gift for weddings, birthdays and anniversaries.
# All the roses are from our farm Roses Flanders. Therefor you know that these flowers last much longer than those in store or online!</s>
```
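
If you would rather not manage device placement by hand, a minimal variant of the load step is sketched below. It is an alternative, not the README's prescribed path, and it assumes the `accelerate` package is installed, which `device_map="auto"` requires.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("internlm/internlm-7b", trust_remote_code=True)
# device_map="auto" asks transformers/accelerate to spread the float16 weights
# across the available GPUs (offloading to CPU if needed), replacing the manual
# .cuda() call above. Assumes `pip install accelerate`.
model = AutoModelForCausalLM.from_pretrained(
    "internlm/internlm-7b",
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
).eval()
```

With this loading style, move input tensors to `model.device` before calling `generate`.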
## Open Source License

### Load via Transformers

Load the InternLM 7B Chat model with the following code:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("internlm/internlm-7b", trust_remote_code=True)
# `torch_dtype=torch.float16` loads the model in float16; otherwise transformers
# loads it as float32, which may run out of GPU memory.
model = AutoModelForCausalLM.from_pretrained("internlm/internlm-7b", torch_dtype=torch.float16, trust_remote_code=True).cuda()
model = model.eval()
inputs = tokenizer(["来到美丽的大自然,我们发现"], return_tensors="pt")
# Move the tokenized inputs onto the same GPU as the model.
for k, v in inputs.items():
    inputs[k] = v.cuda()
gen_kwargs = {"max_length": 128, "top_p": 0.8, "temperature": 0.8, "do_sample": True, "repetition_penalty": 1.1}
output = model.generate(**inputs, **gen_kwargs)
output = tokenizer.decode(output[0].tolist(), skip_special_tokens=True)
print(output)
# 来到美丽的大自然,我们发现各种各样的花千奇百怪。有的颜色鲜艳亮丽,使人感觉生机勃勃;有的是红色的花瓣儿粉嫩嫩的像少女害羞的脸庞一样让人爱不释手.有的小巧玲珑; 还有的花瓣粗大看似枯黄实则暗藏玄机!
# 不同的花卉有不同的“脾气”,它们都有着属于自己的故事和人生道理.这些鲜花都是大自然中最为原始的物种,每一朵都绽放出别样的美令人陶醉、着迷!
```
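
To watch tokens appear as they are generated instead of waiting for the full completion, a small streaming sketch follows. It assumes a transformers release that ships `TextStreamer` (4.28 or later) and reuses `model`, `tokenizer`, `inputs`, and `gen_kwargs` from the block above.

```python
from transformers import TextStreamer

# TextStreamer prints decoded text to stdout as generate() produces tokens.
# Extra keyword arguments are forwarded to tokenizer.decode().
streamer = TextStreamer(tokenizer, skip_special_tokens=True)
model.generate(**inputs, streamer=streamer, **gen_kwargs)
```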

## Open Source License
|