---
base_model: deepseek-ai/deepseek-math-7b-base
tags:
- alignment-handbook
- generated_from_trainer
datasets:
- AI-MO/numina-problems-sft-v1.7-preproc
- AI-MO/tora-chosen-v0.7
widget:
- example_title: Math problem
  messages:
  - role: user
    content: >-
      For how many values of the constant $k$ will the polynomial
      $x^{2}+kx+36$ have two distinct integer roots?
  output:
    text: >-
      ### Solution: 1. For the polynomial \\( x^2 + kx + 36 \\) to have two
      distinct integer roots, let's denote these roots by \\( r_1 \\) and \\(
      r_2 \\).\n\n2. According to Vieta's formulas, the sum of the roots \\(
      r_1 + r_2 \\) is equal to \\(-k\\), and the product of the roots \\( r_1
      \\cdot r_2 \\) is equal to 36.\n\n3. To find the distinct integer pairs
      \\((r_1, r_2)\\) whose product is 36, we need to determine all pairs of
      integers \\((r_1, r_2)\\) such that \\( r_1 \\cdot r_2 = 36 \\) and \\(
      r_1 \\neq r_2 \\).\n\n4. From the equation \\( r_1 + r_2 = -k \\), for
      each valid pair \\((r_1, r_2)\\), we can compute the corresponding value
      of \\( k \\).\n\n5. Finally, since we need the polynomial to have two
      distinct integer roots, we need to ensure that \\( r_1 \\) and \\( r_2
      \\) are distinct.\n\nLet's start by finding all pairs \\((r_1, r_2)\\)
      such that \\( r_1 \\cdot r_2 = 36 \\). We'll then determine the values
      of \\( k \\) and ensure the roots are distinct.\n```python\nimport
      itertools\n\n# Find all pairs (r1, r2) such that r1 * r2 =
      36\nproduct_36 = 36\nfactor_pairs = []\n\nfor i in range(1, product_36 +
      1):\n if product_36 % i == 0:\n pair = (i, product_36 // i)\n if pair[0]
      != pair[1]: # Ensure distinct pairs\n factor_pairs.append(pair)\n\n#
      Calculate k for each pair and ensure distinct integer
      roots\nvalid_k_values = set()\nfor r1, r2 in factor_pairs:\n if r1 !=
      r2:\n k = -(r1 + r2)\n
      valid_k_values.add(k)\n\nprint((len(valid_k_values),
      sorted(valid_k_values)))\n```\n```output\n(4, [-37, -20, -15,
      -13])\n```\nThe distinct integer values of \\( k \\) that make the
      polynomial \\( x^2 + kx + 36 \\) have two distinct integer roots are
      \\(-37, -20, -15, \\text{and} -13\\).\n\nTherefore, the number of such
      values of \\( k \\) is:\n\n\\[\n\\boxed{4}\n\\]\n
pipeline_tag: text-generation
model-index:
- name: Numina-Math-7B
  results: []
---
# Model Card for Numina-Math-7B
Numina-Math is a series of language models trained to solve math problems using tool-integrated reasoning. Numina-Math-7B won the first AI Math Olympiad, scoring 29/50 on the public and private test sets. This model is a fine-tuned version of deepseek-ai/deepseek-math-7b-base, trained in two stages: first on a dataset with 863k math question-answer pairs, and then on a dataset with 73k examples of multi-step synthetic generations using tool-integrated reasoning.
## Model description
- Model type: A 7B-parameter math model fine-tuned in two stages: first on a dataset with 863k math question-answer pairs, and then on a dataset with 73k examples of multi-step synthetic generations using tool-integrated reasoning.
- Language(s) (NLP): Primarily English
- License: MIT
- Finetuned from model: deepseek-ai/deepseek-math-7b-base
## Model Sources
- Repository: Coming soon to https://github.com/huggingface/alignment-handbook
- Demo: https://huggingface.co/spaces/AI-MO/math-olympiad-solver
## Intended uses & limitations

Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:
```python
import re
import torch
from transformers import pipeline

pipe = pipeline("text-generation", model="AI-MO/Numina-Math-7B", torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "user", "content": "For how many values of the constant $k$ will the polynomial $x^{2}+kx+36$ have two distinct integer roots?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

gen_config = {
    "max_new_tokens": 1024,
    "do_sample": False,
    "stop_strings": ["```output"],  # halt generation once the model requests code execution
    "tokenizer": pipe.tokenizer,    # required when using stop_strings
}

outputs = pipe(prompt, **gen_config)
text = outputs[0]["generated_text"]
print(text)

python_code = re.findall(r"```python(.*?)```", text, re.DOTALL)[0]
# WARNING: This will execute the Python code in the string. We show this for educational purposes only.
# Please refer to our full pipeline for a safer way to execute code.
exec(python_code)
```
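The `stop_strings` entry halts generation at the `` ```output `` marker, which is the model's cue that it wants its Python block executed; the result is then appended and generation resumes. Below is a minimal sketch of that tool-integrated-reasoning loop. The `run_tir` and `run_python` helpers are illustrative assumptions, not the project's actual pipeline, and a bare subprocess is not a real sandbox:

```python
import re
import subprocess
import sys

def run_python(code: str, timeout: float = 10.0) -> str:
    # Run the generated code in a separate interpreter with a timeout.
    # NOTE: a bare subprocess is NOT a sandbox; this is illustrative only.
    result = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True,
        text=True,
        timeout=timeout,
    )
    return result.stdout + result.stderr

def run_tir(pipe, prompt: str, max_rounds: int = 3) -> str:
    # Hypothetical loop: generate until the ```output marker, execute the
    # newest python block, append its result, and let the model continue.
    text = prompt
    for _ in range(max_rounds):
        text = pipe(
            text,
            max_new_tokens=1024,
            do_sample=False,
            stop_strings=["```output"],
            tokenizer=pipe.tokenizer,
        )[0]["generated_text"]
        blocks = re.findall(r"```python(.*?)```", text, re.DOTALL)
        if not blocks or not text.rstrip().endswith("```output"):
            break  # no execution requested; the answer is final
        text += "\n" + run_python(blocks[-1]) + "```\n"
    return text
```

With the `pipe` and `prompt` from the snippet above, `print(run_tir(pipe, prompt))` would produce a full solution trace like the widget example at the top of this card.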
## Bias, Risks, and Limitations
Numina-Math-7B was created to solve math problems. The model has not been aligned to preferences beyond the domain of solving math and should not be used in a general chat setting.
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4.0
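For reference, the effective batch sizes follow from the per-device settings: 4 train examples per device × 8 GPUs × 1 gradient-accumulation step = 32, and 8 eval examples × 8 GPUs = 64. As a minimal sketch, the settings above map onto `transformers.TrainingArguments` (which the alignment-handbook recipes wrap) roughly as follows; the output directory and the `bf16` flag are assumptions, and the Adam betas/epsilon listed above are the optimizer defaults:

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the SFT settings listed above.
# Effective train batch: 4 per device * 8 GPUs * 1 accumulation step = 32.
training_args = TrainingArguments(
    output_dir="numina-math-7b-sft",  # assumed path
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=1,
    num_train_epochs=4.0,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    bf16=True,  # assumed; typical for 7B full fine-tunes
)
```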
### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.4295        | 1.0   | 1733 | 0.4313          |
| 0.3638        | 2.0   | 3466 | 0.4332          |
| 0.2951        | 3.0   | 5199 | 0.4704          |
| 0.2225        | 4.0   | 6932 | 0.5302          |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.1
- Datasets 2.18.0
- Tokenizers 0.19.1
## Citation
If you find Numina-Math useful in your work, please cite it with:
```bibtex
@misc{beeching2024numina-math,
  title = {Numina Math},
  author = {Edward Beeching and Lewis Tunstall and Roman Soletskyi and Kashif Rasul and Shengyi Huang and Jia Li},
  year = {2024},
  publisher = {Hugging Face},
  journal = {Hugging Face repository},
  howpublished = {\url{https://huggingface.co/AI-MO/Numina-Math-7B}}
}
```