dyyyyyyyy and buyi89 committed
Commit b8a6171 • Parent: 8d16e59

add Demo usage (#1)

- add Demo usage (a16c68922275e636181898210293586e069feba2)

Co-authored-by: Kyle Shi <[email protected]>

Files changed (1): README.md (+91 -3)

README.md CHANGED
@@ -1,3 +1,91 @@
- ---
- license: apache-2.0
- ---
---
license: apache-2.0
datasets:
- dyyyyyyyy/ScaleQuest-Math
language:
- en
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-generation
---
<p align="center"><h2 align="center">Unleashing Reasoning Capability of LLMs via Scalable Question Synthesis from Scratch</h2></p>

# Model Card for Llama3-8B-ScaleQuest

We introduce ScaleQuest, a scalable and novel data synthesis method that uses small open-source models to generate questions from scratch, without the need for seed data or complex augmentation constraints.

* 📑 Project Page: [https://scalequest.github.io](https://scalequest.github.io/)
* 💻 Code: [https://github.com/yyDing1/ScaleQuest](https://github.com/yyDing1/ScaleQuest/)
* 📖 Paper: [Unleashing Reasoning Capability of LLMs via Scalable Question Synthesis from Scratch](https://arxiv.org/abs/2410.18693)
* 💾 Models in the 🤗 HuggingFace Hub: [ScaleQuest-Models](https://huggingface.co/collections/dyyyyyyyy/scalequest-670a7dc2623c91990f28913b)

<p align="center">
<img src="https://github.com/yyDing1/ScaleQuest/raw/main/img/results.png">
</p>

## Datasets & Models

Math Dataset: [link](https://huggingface.co/datasets/dyyyyyyyy/ScaleQuest-Math)
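
The dataset can be pulled straight from the Hub with the `datasets` library. A minimal sketch (the `train` split name is an assumption; check the dataset card if it differs):

```python
from datasets import load_dataset

# Load the released ScaleQuest-Math dataset from the Hugging Face Hub.
ds = load_dataset("dyyyyyyyy/ScaleQuest-Math", split="train")

print(ds)     # size and column names
print(ds[0])  # one synthesized question/response record
```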

We release two question generator models and four problem-solving models. Benchmark scores are accuracy (%) on MATH and OlympiadBench.

| Model | Type | MATH | Olympiad Bench | 🤗 HuggingFace<br />Download Link |
| - | :-: | :-: | :-: | :-: |
| ScaleQuest-DeepSeekMath-7B-QGen | question generator | - | - | [link](https://huggingface.co/dyyyyyyyy/ScaleQuest-DeepSeekMath-7B-QGen) |
| ScaleQuest-Qwen2-Math-7B-QGen | question generator | - | - | [link](https://huggingface.co/dyyyyyyyy/ScaleQuest-Qwen2-Math-7B-QGen) |
| Mistral-7B-ScaleQuest | problem solver | 62.9 | 26.8 | [link](https://huggingface.co/dyyyyyyyy/Mistral-7B-ScaleQuest) |
| Llama3-8B-ScaleQuest | problem solver | 64.4 | 25.3 | [link](https://huggingface.co/dyyyyyyyy/Llama3-8B-ScaleQuest) |
| DeepSeekMath-7B-ScaleQuest | problem solver | 66.6 | 29.9 | [link](https://huggingface.co/dyyyyyyyy/DeepSeekMath-7B-ScaleQuest) |
| Qwen2-Math-7B-ScaleQuest | problem solver | 73.4 | 38.5 | [link](https://huggingface.co/dyyyyyyyy/Qwen2-Math-7B-ScaleQuest) |
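
The question generators are sampled from scratch rather than prompted with an existing problem. The exact generation prompt and sampling settings live in the project repo; the sketch below is a rough illustration only, assuming the generator can be sampled unconditionally from its BOS token (which the DeepSeek tokenizer emits when encoding an empty string):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

qgen_name = "dyyyyyyyy/ScaleQuest-DeepSeekMath-7B-QGen"

qgen = AutoModelForCausalLM.from_pretrained(
    qgen_name, torch_dtype=torch.bfloat16, device_map="auto"
)
qgen_tok = AutoTokenizer.from_pretrained(qgen_name)

# Encoding "" leaves just the BOS token, so generation starts from scratch.
# (Assumption: the released generator samples questions unconditionally;
# see the project repo for the exact prompt used in the paper.)
inputs = qgen_tok("", return_tensors="pt").to(qgen.device)
outputs = qgen.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,   # sample, so each call yields a different question
    temperature=1.0,
    top_p=0.95,
)
print(qgen_tok.decode(outputs[0], skip_special_tokens=True))
```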

## Demo usage

Below is an example using `Llama3-8B-ScaleQuest`:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "dyyyyyyyy/Llama3-8B-ScaleQuest"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

question = "Find the value of $x$ that satisfies the equation $4x+5 = 6x+7$."

# Alpaca-style template: a fixed system prompt, then
# "### Instruction:\n{query}\n\n### Response:\n{resp}".
sys_prompt = "Below is an instruction that describes a task. Write a response that appropriately completes the request." + "\n\n"
query_prompt = "### Instruction:" + "\n"
# {query}
prompt_after_query = "\n\n"
resp_prompt = "### Response:" + "\n"
prompt_before_resp = ""
# {resp}
delim = "\n\n"

prefix_prompt = f"{query_prompt}{question}{prompt_after_query}{resp_prompt}{prompt_before_resp}".rstrip(" ")
full_prompt = sys_prompt + delim.join([prefix_prompt])

# print(full_prompt)

inputs = tokenizer(full_prompt, return_tensors="pt").to(model.device)
# Greedy decoding; print only the newly generated tokens.
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0][len(inputs.input_ids[0]):], skip_special_tokens=True))
```
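
The solver writes out its reasoning and, like most math-instruction models, usually puts the final result in a `\boxed{...}` expression. Below is a hypothetical helper (not part of the ScaleQuest release) for pulling that value out of the generated text; it assumes the boxed convention holds, so inspect the raw output if nothing matches:

```python
import re

# Hypothetical helper: extract the last \boxed{...} value from a generated
# solution. Assumes the solver follows the usual boxed-answer convention;
# nested braces inside the box are not handled.
def extract_boxed_answer(solution):
    matches = re.findall(r"\\boxed\{([^{}]*)\}", solution)
    return matches[-1] if matches else None

# For the demo question above ($4x+5 = 6x+7$), the answer is x = -1.
print(extract_boxed_answer(r"Subtracting gives $-2 = 2x$, so $x = \boxed{-1}$."))  # -1
```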

## Citation

```bibtex
@article{ding2024unleashing,
  title={Unleashing Reasoning Capability of LLMs via Scalable Question Synthesis from Scratch},
  author={Ding, Yuyang and Shi, Xinyu and Liang, Xiaobo and Li, Juntao and Zhu, Qiaoming and Zhang, Min},
  journal={arXiv preprint arXiv:2410.18693},
  year={2024}
}
```