deepseek-admin committed
Commit 645e207 (parent: 477ed76)

Update README.md

Files changed (1): README.md (+68 −1)
README.md CHANGED
@@ -1,5 +1,72 @@
  ---
  license: other
  license_name: deepseek
- license_link: LICENSE
  ---
  ---
  license: other
  license_name: deepseek
+ license_link: https://github.com/deepseek-ai/DeepSeek-Math/blob/main/LICENSE-MODEL
  ---
+
+ <p align="center">
+ <img width="500px" alt="DeepSeek Chat" src="https://github.com/deepseek-ai/DeepSeek-LLM/blob/main/images/logo.png?raw=true">
+ </p>
+ <p align="center"><a href="https://www.deepseek.com/">[🏠Homepage]</a> | <a href="https://chat.deepseek.com/">[🤖 Chat with DeepSeek LLM]</a> | <a href="https://discord.gg/Tc7c45Zzu5">[Discord]</a> | <a href="https://github.com/deepseek-ai/DeepSeek-LLM/blob/main/images/qr.jpeg">[WeChat (微信)]</a></p>
+
+ <p align="center">
+ <a href="https://arxiv.org/pdf/xxx.pdf"><b>Paper Link</b>👁️</a>
+ </p>
+
+ <hr>
+
+ ### 1. Introduction to DeepSeekMath
+ See the [Introduction](https://github.com/deepseek-ai/DeepSeek-Math) for more details.
+
+ ### 2. How to Use
+ Here are some examples of how to use our model.
+
+ **Chat Completion**
+
+ ```python
+ import torch
+ from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig
+
+ model_name = "deepseek-ai/deepseek-math-7b-rl"
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ # Load the model in bfloat16 and let Accelerate place it on available devices.
+ model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
+ model.generation_config = GenerationConfig.from_pretrained(model_name)
+ model.generation_config.pad_token_id = model.generation_config.eos_token_id
+
+ messages = [
+     {"role": "user", "content": "what is the integral of x^2 from 0 to 2?"}
+ ]
+ # apply_chat_template formats the conversation into the model's prompt format.
+ input_tensor = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
+ outputs = model.generate(input_tensor.to(model.device), max_new_tokens=100)
+
+ # Decode only the newly generated tokens, skipping the prompt.
+ result = tokenizer.decode(outputs[0][input_tensor.shape[1]:], skip_special_tokens=True)
+ print(result)
+ ```
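For reference, the sample question has a simple closed-form answer, so the model's output can be checked by hand:

$$\int_0^2 x^2 \, dx = \left[ \frac{x^3}{3} \right]_0^2 = \frac{8}{3}$$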
+
+ If you prefer not to use the provided `apply_chat_template` function, you can also interact with our model following the sample template below. Note that `messages` should be replaced by your input.
+
+ ```
+ User: {messages[0]['content']}
+
+ Assistant: {messages[1]['content']}<|end▁of▁sentence|>User: {messages[2]['content']}
+
+ Assistant:
+ ```
+
+ **Note:** By default (`add_special_tokens=True`), our tokenizer automatically adds a `bos_token` (`<|begin▁of▁sentence|>`) before the input text. Additionally, since the system prompt is not compatible with this version of our models, we DO NOT RECOMMEND including the system prompt in your input.
+
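As an illustration, the raw template above can be assembled by hand from a list of `{"role", "content"}` messages. The sketch below is not part of the official README; the helper name `build_prompt` is ours. The `bos_token` is deliberately not prepended, since the tokenizer adds it itself when `add_special_tokens=True` (the default).

```python
# Hypothetical helper (not part of the model's API): builds the raw chat
# prompt shown in the template above. User turns are followed by a blank
# line; assistant turns are closed with the end-of-sentence token.
EOS = "<|end▁of▁sentence|>"

def build_prompt(messages):
    parts = []
    for m in messages:
        if m["role"] == "user":
            parts.append(f"User: {m['content']}\n\n")
        else:
            parts.append(f"Assistant: {m['content']}{EOS}")
    # Trailing generation prompt for the next assistant turn.
    parts.append("Assistant:")
    return "".join(parts)

print(build_prompt([
    {"role": "user", "content": "what is the integral of x^2 from 0 to 2?"}
]))
```

The resulting string can be passed directly to `tokenizer(...)` in place of `apply_chat_template`.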
+ ### 3. License
+ This code repository is licensed under the MIT License. The use of DeepSeekMath models is subject to the Model License. DeepSeekMath supports commercial use.
+
+ See the [LICENSE-MODEL](https://github.com/deepseek-ai/DeepSeek-Math/blob/main/LICENSE-MODEL) for more details.
+
+ ### 4. Contact
+
+ If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]).