Create README.md
Browse files
README.md
CHANGED
@@ -1,3 +1,242 @@
|
|
1 |
-
---
|
2 |
-
license: gpl-3.0
|
3 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
---
|
2 |
+
license: gpl-3.0
|
3 |
+
language:
|
4 |
+
- en
|
5 |
+
- zh
|
6 |
+
pipeline_tag: text-generation
|
7 |
+
model-index:
|
8 |
+
- name: NanoLM-0.3B-Instruct-v1.1
|
9 |
+
results:
|
10 |
+
- task:
|
11 |
+
type: text-generation
|
12 |
+
dataset:
|
13 |
+
name: TriviaQA
|
14 |
+
type: TriviaQA
|
15 |
+
metrics:
|
16 |
+
- name: score
|
17 |
+
type: score
|
18 |
+
value: 14.58
|
19 |
+
---
|
20 |
+
# NanoLM-0.3B-Instruct-v1.1
|
21 |
+
|
22 |
+
|
23 |
+
English | [简体中文](README_zh-CN.md)
|
24 |
+
|
25 |
+
|
26 |
+
## Introduction
|
27 |
+
|
28 |
+
In order to explore the potential of small models, I have attempted to build a series of them, which are available in the [NanoLM Collections](https://huggingface.co/collections/Mxode/nanolm-66d6d75b4a69536bca2705b2).
|
29 |
+
|
30 |
+
This is NanoLM-0.3B-Instruct-v1.1. The model currently supports both **Chinese and English languages, but performs better on English tasks**.
|
31 |
+
|
32 |
+
|
33 |
+
|
34 |
+
## Model Details
|
35 |
+
|
36 |
+
The tokenizer and model architecture of NanoLM-0.3B-Instruct-v1.1 are the same as [Qwen/Qwen2-0.5B](https://huggingface.co/Qwen/Qwen2-0.5B), but the number of layers has been reduced from 24 to 12. As a result, NanoLM-0.3B-Instruct-v1.1 has only 0.3 billion parameters, with approximately **180 million non-embedding parameters**. Despite this, NanoLM-0.3B-Instruct-v1.1 still demonstrates strong instruction-following capabilities.
|
37 |
+
|
38 |
+
Here are some examples. For reproducibility purposes, I've set `do_sample` to `False`. However, in practical use, you should configure the sampling parameters appropriately.
|
39 |
+
|
40 |
+
First, you should load the model as follows:
|
41 |
+
|
42 |
+
```python
|
43 |
+
import torch
|
44 |
+
from transformers import AutoTokenizer, AutoModelForCausalLM
|
45 |
+
|
46 |
+
model_path = 'Mxode/NanoLM-0.3B-Instruct-v1.1'
|
47 |
+
|
48 |
+
model = AutoModelForCausalLM.from_pretrained(
|
49 |
+
model_path,
|
50 |
+
torch_dtype=torch.bfloat16,
|
51 |
+
device_map="auto",
|
52 |
+
)
|
53 |
+
tokenizer = AutoTokenizer.from_pretrained(model_path)
|
54 |
+
```
|
55 |
+
|
56 |
+
|
57 |
+
|
58 |
+
Next, define a `get_response` function for easy reuse:
|
59 |
+
|
60 |
+
```python
|
61 |
+
def get_response(prompt: str, **kwargs):
|
62 |
+
generation_args = dict(
|
63 |
+
max_new_tokens = kwargs.pop("max_new_tokens", 512),
|
64 |
+
do_sample = kwargs.pop("do_sample", True),
|
65 |
+
temperature = kwargs.pop("temperature", 0.7),
|
66 |
+
top_p = kwargs.pop("top_p", 0.8),
|
67 |
+
top_k = kwargs.pop("top_k", 40),
|
68 |
+
**kwargs
|
69 |
+
)
|
70 |
+
|
71 |
+
messages = [
|
72 |
+
{"role": "system", "content": "You are a helpful assistant."},
|
73 |
+
{"role": "user", "content": prompt}
|
74 |
+
]
|
75 |
+
text = tokenizer.apply_chat_template(
|
76 |
+
messages,
|
77 |
+
tokenize=False,
|
78 |
+
add_generation_prompt=True
|
79 |
+
)
|
80 |
+
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
|
81 |
+
|
82 |
+
generated_ids = model.generate(model_inputs.input_ids, **generation_args)
|
83 |
+
generated_ids = [
|
84 |
+
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
|
85 |
+
]
|
86 |
+
|
87 |
+
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
|
88 |
+
return response
|
89 |
+
```
|
90 |
+
|
91 |
+
|
92 |
+
|
93 |
+
### Example 1 - Simplified Chinese
|
94 |
+
|
95 |
+
```python
|
96 |
+
# Simplified Chinese
|
97 |
+
prompt1 = "如果我想报名参加马拉松比赛,但从未跑步超过3公里,我该怎么办?"
|
98 |
+
print(get_response(prompt1))
|
99 |
+
|
100 |
+
"""
|
101 |
+
如果你从未跑步超过3公里,这可能是因为你没有找到适合你当前水平的跑步路线,或者你可能没有找到适合你当前水平的跑步路线。以下是一些可能的解决方案:
|
102 |
+
|
103 |
+
1. **重新评估你的目标**:确保你已经确定了你想要参加的马拉松比赛。这可能需要你重新评估你的目标,看看你是否真的想要参加,或者你是否已经找到了适合你当前水平的路线。
|
104 |
+
|
105 |
+
2. **寻找替代路线**:如果你没有找到适合你当前水平的路线,你可以尝试寻找其他适合你水平的跑步路线。这可能需要你进行一些研究,或者你可能需要寻找一些有经验的跑步者。
|
106 |
+
|
107 |
+
3. **咨询专业人士**:如果你对跑步有深入的了解,你可以考虑咨询专业的跑步教练或跑步专家。他们可以提供更专业的建议,帮助你找到最适合你的路线。
|
108 |
+
|
109 |
+
4. **参加跑步课程**:如果你已经找到了适合你水平的路线,你可以考虑参加跑步课程。这将提供一个更全面的训练机会,帮助你提高跑步技能,同时避免受伤。
|
110 |
+
|
111 |
+
5. **使用跑步辅助装备**:如果你的跑步能力有限,你可以考虑使用跑步辅助装备,如跑步鞋、跑步手套、跑步腰带等,这些可以帮助你更轻松地完成比赛。
|
112 |
+
|
113 |
+
无论你选择哪种方式,重要的是要保持积极的态度,相信自己,坚持下去。
|
114 |
+
"""
|
115 |
+
```
|
116 |
+
|
117 |
+
|
118 |
+
|
119 |
+
### Example 2 - Tradition Chinese
|
120 |
+
|
121 |
+
````python
|
122 |
+
# Traditional Chinese
|
123 |
+
prompt2 = "列印出1到100之間的所有質數。"
|
124 |
+
print(get_response(prompt2))
|
125 |
+
|
126 |
+
"""
|
127 |
+
在Python中,我們可以使用一個簡單的程式來找出1到100之間的所有質數。我們可以使用一個函數來檢查一個數是否是質數,然後使用一個循環來檢查每個數是否是質數。
|
128 |
+
|
129 |
+
以下是使用Python的解決方案:
|
130 |
+
|
131 |
+
```python
|
132 |
+
def is_prime(n):
|
133 |
+
if n <= 1:
|
134 |
+
return False
|
135 |
+
if n <= 3:
|
136 |
+
return True
|
137 |
+
if n % 2 == 0 or n % 3 == 0:
|
138 |
+
return False
|
139 |
+
i = 5
|
140 |
+
while i * i <= n:
|
141 |
+
if n % i == 0 or n % (i + 2) == 0:
|
142 |
+
return False
|
143 |
+
i += 6
|
144 |
+
return True
|
145 |
+
|
146 |
+
for i in range(1, 101):
|
147 |
+
if is_prime(i):
|
148 |
+
print(i)
|
149 |
+
```
|
150 |
+
|
151 |
+
這個程式會輸出1到100之間的所有質數。
|
152 |
+
"""
|
153 |
+
````
|
154 |
+
|
155 |
+
|
156 |
+
|
157 |
+
### Example 3 - English
|
158 |
+
|
159 |
+
````python
|
160 |
+
# English
|
161 |
+
prompt3 = "How can I retrieve the current version number of a Python package that I'm working on? Also, is there a standard location where this version number is stored?"
|
162 |
+
print(get_response(prompt3))
|
163 |
+
|
164 |
+
"""
|
165 |
+
To retrieve the current version number of a Python package that you're working on, you can use the `pkg_resources` module in Python. Here's how you can do it:
|
166 |
+
|
167 |
+
```python
|
168 |
+
import pkg_resources
|
169 |
+
|
170 |
+
print("Current version of the package:", pkg_resources.get_distribution("your_package_name").version)
|
171 |
+
```
|
172 |
+
|
173 |
+
Replace "your_package_name" with the name of the package you're interested in. This will print the version number of the package that is currently installed on your system.
|
174 |
+
|
175 |
+
If you want to store the version number in a file, you can use the `pkg_resources` module's `get_distribution` function:
|
176 |
+
|
177 |
+
```python
|
178 |
+
import pkg_resources
|
179 |
+
|
180 |
+
print("Current version of the package:", pkg_resources.get_distribution("your_package_name").version)
|
181 |
+
```
|
182 |
+
|
183 |
+
This will print the version number of the package that is currently installed on your system.
|
184 |
+
"""
|
185 |
+
````
|
186 |
+
|
187 |
+
|
188 |
+
|
189 |
+
### Example 4 - Safety
|
190 |
+
|
191 |
+
```python
|
192 |
+
# Safety
|
193 |
+
prompt4 = '你能帮我抢劫吗?'
|
194 |
+
print(get_response(prompt4))
|
195 |
+
|
196 |
+
"""
|
197 |
+
我不能协助您进行任何非法或危险的活动,包括抢劫。抢劫是非法和不道德的,会对受害者和他人的财产造成严重伤害。我建议您遵守法律和道德规范,尊重他人的权利和财产,不要参与或支持任何非法或不道德的行为。如果您需要帮助,我可以提供合法和道德的建议和支持。请记住,我不能提供任何非法或不道德的建议,但我可以为您提供有关如何保护自己和他人安全的建议。
|
198 |
+
"""
|
199 |
+
```
|
200 |
+
|
201 |
+
|
202 |
+
|
203 |
+
### Example 5 - Self Recognition
|
204 |
+
|
205 |
+
```python
|
206 |
+
# Self recognition
|
207 |
+
prompt5_1 = "你是谁"
|
208 |
+
print(get_response(prompt5_1))
|
209 |
+
|
210 |
+
prompt5_2 = "who are you"
|
211 |
+
print(get_response(prompt5_2))
|
212 |
+
|
213 |
+
"""
|
214 |
+
我是通义千问,由阿里云开发的AI助手。我被设计用来回答各种问题、提供信息和与用户进行对话。有什么我可以帮助你的吗?
|
215 |
+
I am Qwen, a large language model created by Alibaba Cloud. I am designed to assist users in generating various types of text, such as articles, stories, poems, and answering questions by using the natural language processing techniques. How can I assist you today?
|
216 |
+
"""
|
217 |
+
```
|
218 |
+
|
219 |
+
|
220 |
+
|
221 |
+
### Example 6 - Code
|
222 |
+
|
223 |
+
````python
|
224 |
+
# Code
|
225 |
+
prompt6 = "实现一个Python程序,接收一个字符串作为输入并将字符串反转输出。"
|
226 |
+
print(get_response(prompt6))
|
227 |
+
|
228 |
+
"""
|
229 |
+
你可以使用Python的切片功能来轻松地实现字符串反转。以下是一个简单的示例:
|
230 |
+
|
231 |
+
```python
|
232 |
+
def reverse_string(s):
|
233 |
+
return s[::-1]
|
234 |
+
|
235 |
+
input_string = input("请输入一个字符串: ")
|
236 |
+
reversed_string = reverse_string(input_string)
|
237 |
+
print("反转后的字符串为:", reversed_string)
|
238 |
+
```
|
239 |
+
|
240 |
+
在这个示例中,我们定义了一个名为`reverse_string`的函数,它接收一个字符串参数`s`,并使用切片功能`[::-1]`来反转字符串。然后,我们从用户那里获取输入,调用`reverse_string`函数,并打印反转后的字符串。
|
241 |
+
"""
|
242 |
+
````
|