arxiv:2308.09583

WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct

Published on Aug 18, 2023
Authors:
Can Xu et al.
Abstract

Large language models (LLMs), such as GPT-4, have shown remarkable performance in natural language processing (NLP) tasks, including challenging mathematical reasoning. However, most existing open-source models are pre-trained only on large-scale internet data, without math-specific optimization. In this paper, we present WizardMath, which enhances the mathematical reasoning abilities of Llama-2 by applying our proposed Reinforcement Learning from Evol-Instruct Feedback (RLEIF) method to the domain of math. Through extensive experiments on two mathematical reasoning benchmarks, namely GSM8k and MATH, we reveal the extraordinary capabilities of our model. WizardMath surpasses all other open-source LLMs by a substantial margin. Furthermore, our model even outperforms ChatGPT-3.5, Claude Instant-1, PaLM-2 and Minerva on GSM8k, and simultaneously surpasses Text-davinci-002, PaLM-1 and GPT-3 on MATH. More details and model weights are publicly available at https://github.com/nlpxucan/WizardLM and https://huggingface.co/WizardLM.
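
For readers who want to try the released checkpoints, here is a minimal inference sketch using the Hugging Face transformers library. The model id WizardLM/WizardMath-7B-V1.0, the Alpaca-style prompt template, and the sample GSM8k-style question are illustrative assumptions, not details taken from the abstract.

```python
# Minimal sketch: querying a WizardMath checkpoint with Hugging Face transformers.
# Assumptions (not stated in the abstract): the model id below and the
# Alpaca-style instruction template commonly used by the WizardLM model family.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "WizardLM/WizardMath-7B-V1.0"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires the accelerate package; drop it to load on CPU.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# A GSM8k-style word problem, chosen here only as a demonstration input.
question = (
    "Natalia sold clips to 48 of her friends in April, and then she sold "
    "half as many clips in May. How many clips did Natalia sell altogether?"
)
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{question}\n\n### Response:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```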

Community

By computing $\lim_{h \to 0} \frac{f(3+h) - f(3)}{h}$, determine $f'(3)$.

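As a worked instance of that limit (the comment does not specify f, so assume f(x) = x^2 purely for illustration):

```latex
% Worked example under the assumption f(x) = x^2 (f is not given in the comment).
f'(3) = \lim_{h \to 0} \frac{f(3+h) - f(3)}{h}
      = \lim_{h \to 0} \frac{(3+h)^2 - 3^2}{h}
      = \lim_{h \to 0} \frac{6h + h^2}{h}
      = \lim_{h \to 0} (6 + h)
      = 6.
```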

Models citing this paper: 126

Datasets citing this paper: 10

Spaces citing this paper: 186

Collections including this paper: 9