Update README.md (#1)
- Update README.md (60af1bd0919fb9d0302af575bcb58cda7602bd36)
Co-authored-by: Kyle Shi <[email protected]>
README.md CHANGED
@@ -1,3 +1,96 @@

---
license: apache-2.0
datasets:
- dyyyyyyyy/ScaleQuest-Math
language:
- en
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-generation
---

<p align="center"><h2 align="center">Unleashing Reasoning Capability of LLMs via Scalable Question Synthesis from Scratch</h2></p>

# Model Card for ScaleQuest-DeepSeekMath-7B-QGen

<!-- Provide a quick summary of what the model is/does. -->

We introduce ScaleQuest, a novel, scalable data synthesis method that uses small open-source models to generate questions from scratch, without requiring seed data or complex augmentation constraints.

* 📑 Project Page: [https://scalequest.github.io](https://scalequest.github.io/)
* 💻 Code: [https://github.com/yyDing1/ScaleQuest](https://github.com/yyDing1/ScaleQuest/)
* 📖 Paper: [Unleashing Reasoning Capability of LLMs via Scalable Question Synthesis from Scratch](https://arxiv.org/abs/2410.18693)
* 💾 Models in the 🤗 HuggingFace Hub: [ScaleQuest-Models](https://huggingface.co/collections/dyyyyyyyy/scalequest-670a7dc2623c91990f28913b)

<p align="center">
  <img src="https://github.com/yyDing1/ScaleQuest/raw/main/img/results.png">
</p>

## Datasets & Models

Math Dataset: [link](https://huggingface.co/datasets/dyyyyyyyy/ScaleQuest-Math)

We release two question generator models and four problem-solving models; a usage sketch for the problem solvers follows the table.

| Model | Type | MATH (acc. %) | OlympiadBench (acc. %) | 🤗 HuggingFace<br />Download Link |
| - | :-: | :-: | :-: | :-: |
| ScaleQuest-DeepSeekMath-7B-QGen | question generator | - | - | [link](https://huggingface.co/dyyyyyyyy/ScaleQuest-DeepSeekMath-7B-QGen) |
| ScaleQuest-Qwen2-Math-7B-QGen | question generator | - | - | [link](https://huggingface.co/dyyyyyyyy/ScaleQuest-Qwen2-Math-7B-QGen) |
| Mistral-7B-ScaleQuest | problem solver | 62.9 | 26.8 | [link](https://huggingface.co/dyyyyyyyy/Mistral-7B-ScaleQuest) |
| Llama3-8B-ScaleQuest | problem solver | 64.4 | 25.3 | [link](https://huggingface.co/dyyyyyyyy/Llama3-8B-ScaleQuest) |
| DeepSeekMath-7B-ScaleQuest | problem solver | 66.6 | 29.9 | [link](https://huggingface.co/dyyyyyyyy/DeepSeekMath-7B-ScaleQuest) |
| Qwen2-Math-7B-ScaleQuest | problem solver | 73.4 | 38.5 | [link](https://huggingface.co/dyyyyyyyy/Qwen2-Math-7B-ScaleQuest) |
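
For a quick start with one of the problem solvers, the models load with the standard `transformers` API. The sketch below is illustrative rather than the authors' official inference recipe: the example question, the generation settings, and the assumption that the solver bundles a chat template in its tokenizer config are ours.

```python
# Minimal sketch: query a ScaleQuest problem solver with plain transformers.
# Assumption: the fine-tuned solver ships a chat template in its tokenizer
# config; if it does not, consult that solver's model card for the prompt format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

solver_name = "dyyyyyyyy/DeepSeekMath-7B-ScaleQuest"
tokenizer = AutoTokenizer.from_pretrained(solver_name)
model = AutoModelForCausalLM.from_pretrained(
    solver_name, torch_dtype=torch.bfloat16, device_map="auto"
)

question = "What is the sum of the first 100 positive integers?"  # example input
input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": question}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Greedy decoding; decode only the newly generated tokens.
output_ids = model.generate(input_ids, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
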
44 |
+
## Demo usage

Below is an example of generating questions with `ScaleQuest-DeepSeekMath-7B-QGen` using vLLM:

```python
from vllm import LLM, SamplingParams

model_name = "dyyyyyyyy/ScaleQuest-DeepSeekMath-7B-QGen"

# The generator is prompted with only the pre-query template; the model then
# writes a math question from scratch.
pre_query_template = "<|begin▁of▁sentence|>User: "
stop_tokens = ["<|begin▁of▁sentence|>", "<|end▁of▁sentence|>"]

llm = LLM(
    model=model_name,
    tokenizer=model_name,
    tensor_parallel_size=1,
    max_model_len=4096,
    enable_prefix_caching=True,
    trust_remote_code=True,
    swap_space=16,
    gpu_memory_utilization=0.95,
)
sampling_params = SamplingParams(
    n=4,  # sample four candidate questions
    max_tokens=1024,
    temperature=1.0,
    top_p=0.99,
    stop=stop_tokens,
)

outputs = llm.generate(pre_query_template, sampling_params)

# Print the sampled questions.
for output in outputs:
    prompt = output.prompt
    for idx, generated_output in enumerate(output.outputs):
        generated_text = generated_output.text
        print(f"Sample {idx + 1}:")
        print(f"Prompt: {prompt!r}")
        print(f"Generated text: {generated_text!r}")
        print("-" * 50)
```
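
The questions sampled above can then be answered by one of the problem-solving models, completing the two-stage pipeline. The following is a hedged sketch of that second stage, reusing `outputs` from the block above; the chat-style prompt is our assumption (it relies on a chat template shipping with the solver's tokenizer), and on a single GPU you would run it as a separate step or lower `gpu_memory_utilization`.

```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

solver_name = "dyyyyyyyy/Qwen2-Math-7B-ScaleQuest"
solver_tokenizer = AutoTokenizer.from_pretrained(solver_name)
# Run in a fresh process (or lower gpu_memory_utilization) so the generator
# and solver do not contend for the same GPU memory.
solver = LLM(model=solver_name, max_model_len=4096)

# Collect the sampled questions and wrap each in the solver's chat template
# (assumed to be bundled with the tokenizer).
questions = [o.text.strip() for output in outputs for o in output.outputs]
prompts = [
    solver_tokenizer.apply_chat_template(
        [{"role": "user", "content": q}],
        tokenize=False,
        add_generation_prompt=True,
    )
    for q in questions
]

answers = solver.generate(prompts, SamplingParams(max_tokens=2048, temperature=0.0))
for q, a in zip(questions, answers):
    print(f"Q: {q}\nA: {a.outputs[0].text}")
    print("-" * 50)
```
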
## Citation

```bibtex
@article{ding2024unleashing,
  title={Unleashing Reasoning Capability of LLMs via Scalable Question Synthesis from Scratch},
  author={Ding, Yuyang and Shi, Xinyu and Liang, Xiaobo and Li, Juntao and Zhu, Qiaoming and Zhang, Min},
  journal={arXiv preprint arXiv:2410.18693},
  year={2024}
}
```