---
license: apache-2.0
datasets:
- dyyyyyyyy/ScaleQuest-Math
language:
- en
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-generation
---

<p align="center"><h2 align="center">Unleashing Reasoning Capability of LLMs via Scalable Question Synthesis from Scratch</h2></p>

# Model Card for ScaleQuest-DeepSeekMath-7B-QGen

<!-- Provide a quick summary of what the model is/does. -->

We introduce ScaleQuest, a novel, scalable data synthesis method that uses small open-source models to generate questions from scratch, without the need for seed data or complex augmentation constraints.

* 📑 Project Page: [https://scalequest.github.io](https://scalequest.github.io/)
* 💻 Code: [https://github.com/yyDing1/ScaleQuest](https://github.com/yyDing1/ScaleQuest/)
* 📖 Paper: [Unleashing Reasoning Capability of LLMs via Scalable Question Synthesis from Scratch](https://arxiv.org/abs/2410.18693)
* 💾 Models in the 🤗 HuggingFace Hub: [ScaleQuest-Models](https://huggingface.co/collections/dyyyyyyyy/scalequest-670a7dc2623c91990f28913b)

<p align="center">
<img src="https://github.com/yyDing1/ScaleQuest/raw/main/img/results.png">
</p>

## Datasets & Models

Math Dataset: [link](https://huggingface.co/datasets/dyyyyyyyy/ScaleQuest-Math)
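
If you just want to inspect the released data before using it, the repo can be pulled with the 🤗 `datasets` library. This is a minimal sketch; the `train` split name is an assumption, so check the dataset card if your version differs.

```python
from datasets import load_dataset

# Load the ScaleQuest-Math data (the "train" split name is an assumption;
# check the dataset card if it differs).
ds = load_dataset("dyyyyyyyy/ScaleQuest-Math", split="train")

print(ds)     # number of rows and column names
print(ds[0])  # one synthesized record
```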

We release two question generator models and four problem-solving models. Solver scores below are accuracy (%) on the MATH and OlympiadBench benchmarks.

| Model | Type | MATH | Olympiad Bench | 🤗 HuggingFace<br />Download Link |
| - | :-: | :-: | :-: | :-: |
| ScaleQuest-DeepSeekMath-7B-QGen | question generator | - | - | [link](https://huggingface.co/dyyyyyyyy/ScaleQuest-DeepSeekMath-7B-QGen) |
| ScaleQuest-Qwen2-Math-7B-QGen | question generator | - | - | [link](https://huggingface.co/dyyyyyyyy/ScaleQuest-Qwen2-Math-7B-QGen) |
| Mistral-7B-ScaleQuest | problem solver | 62.9 | 26.8 | [link](https://huggingface.co/dyyyyyyyy/Mistral-7B-ScaleQuest) |
| Llama3-8B-ScaleQuest | problem solver | 64.4 | 25.3 | [link](https://huggingface.co/dyyyyyyyy/Llama3-8B-ScaleQuest) |
| DeepSeekMath-7B-ScaleQuest | problem solver | 66.6 | 29.9 | [link](https://huggingface.co/dyyyyyyyy/DeepSeekMath-7B-ScaleQuest) |
| Qwen2-Math-7B-ScaleQuest | problem solver | 73.4 | 38.5 | [link](https://huggingface.co/dyyyyyyyy/Qwen2-Math-7B-ScaleQuest) |
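
All of the checkpoints above are regular Hub repositories, so they can be cached locally ahead of time (for example, before running on a node without network access). A minimal sketch with `huggingface_hub`, shown here for the question generator used in the demo below; swap in any repo id from the table.

```python
from huggingface_hub import snapshot_download

# Download (or reuse the cached copy of) a checkpoint from the table above.
local_path = snapshot_download(repo_id="dyyyyyyyy/ScaleQuest-DeepSeekMath-7B-QGen")
print(local_path)  # local directory that can be passed to vLLM or transformers
```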

## Demo usage

Below is an example of generating questions with `ScaleQuest-DeepSeekMath-7B-QGen` using vLLM:

```python
from vllm import LLM, SamplingParams

model_name = "dyyyyyyyy/ScaleQuest-DeepSeekMath-7B-QGen"

pre_query_template = "<|begin▁of▁sentence|>User: "
stop_tokens = ["<|begin▁of▁sentence|>", "<|end▁of▁sentence|>"]

llm = LLM(
    model=model_name,
    tokenizer=model_name,
    tensor_parallel_size=1,
    max_model_len=4096,
    enable_prefix_caching=True,
    trust_remote_code=True,
    swap_space=16,
    gpu_memory_utilization=0.95,
)
sampling_params = SamplingParams(
    n=4,
    max_tokens=1024,
    temperature=1.0,
    top_p=0.99,
    stop=stop_tokens,
)

outputs = llm.generate(pre_query_template, sampling_params)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    for idx, generated_output in enumerate(output.outputs):
        generated_text = generated_output.text
        print(f"Sample {idx + 1}:")
        print(f"Prompt: {prompt!r}")
        print(f"Generated text: {generated_text!r}")
        print("-" * 50)
```
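
Since this card is tagged with `library_name: transformers`, the question generator can also be run without vLLM. The sketch below is a minimal single-GPU alternative whose generation settings simply mirror the vLLM parameters above; it assumes `accelerate` is installed for `device_map="auto"` and is not the reference inference setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "dyyyyyyyy/ScaleQuest-DeepSeekMath-7B-QGen"

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

# The pre-query template already contains the BOS token, so do not let the
# tokenizer add special tokens a second time.
pre_query_template = "<|begin▁of▁sentence|>User: "
inputs = tokenizer(pre_query_template, return_tensors="pt", add_special_tokens=False).to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=1024,
    do_sample=True,
    temperature=1.0,
    top_p=0.99,
    pad_token_id=tokenizer.eos_token_id,
)

# Keep only the newly generated tokens (the synthesized question).
question = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(question)
```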

## Citation

```bibtex
@article{ding2024unleashing,
  title={Unleashing Reasoning Capability of LLMs via Scalable Question Synthesis from Scratch},
  author={Ding, Yuyang and Shi, Xinyu and Liang, Xiaobo and Li, Juntao and Zhu, Qiaoming and Zhang, Min},
  journal={arXiv preprint arXiv:2410.18693},
  year={2024}
}
```