Papers
arxiv:2407.12504

Case2Code: Learning Inductive Reasoning with Synthetic Data

Published on Jul 17
· Submitted by yf on Jul 18
Abstract

Complex reasoning is an impressive ability shown by large language models (LLMs). Most LLMs are skilled in deductive reasoning, such as chain-of-thought prompting or iterative tool use to solve challenging tasks step by step. In this paper, we focus on evaluating and teaching LLMs to conduct inductive reasoning, that is, LLMs are supposed to infer underlying rules by observing examples or sequential transformations. However, collecting large-scale and diverse human-generated inductive data is challenging. We focus on data synthesis in the code domain and propose a Case2Code task by exploiting the expressiveness and correctness of programs. Specifically, we collect a diverse set of executable programs, synthesize input-output transformations for each program, and task LLMs with inferring the underlying code implementations from the synthetic I/O cases. We first evaluate representative LLMs on the synthesized Case2Code task and demonstrate that case-to-code induction is challenging for LLMs. Then, we synthesize large-scale Case2Code training samples to train LLMs to perform inductive reasoning. Experimental results show that such induction training not only improves in-distribution Case2Code performance but also enhances various coding abilities of trained LLMs, demonstrating the great potential of learning inductive reasoning via synthetic data.
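The synthesis pipeline described in the abstract (collect executable programs, generate input-output transformations by running them, then ask a model to invert the transformation) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation; the function names, the toy input generator, and the sample program are all made up for the sketch:

```python
import random

def synthesize_case2code_sample(program, fn_name, n_cases=3, seed=0):
    """Given an executable program (source plus entry-function name),
    run it on sampled inputs to produce I/O cases. The resulting sample
    asks a model to infer the hidden implementation from the cases."""
    namespace = {}
    exec(program, namespace)          # load the candidate program
    fn = namespace[fn_name]
    rng = random.Random(seed)
    cases = []
    for _ in range(n_cases):
        x = rng.randint(-10, 10)      # toy input generator
        cases.append((x, fn(x)))      # execute to get the ground-truth output
    prompt = ("Infer the Python function from these cases:\n"
              + "\n".join(f"f({i}) -> {o}" for i, o in cases))
    return {"prompt": prompt, "target": program}

# Example hidden program that the trained model should recover
sample = synthesize_case2code_sample("def f(x):\n    return x * x + 1", "f")
```

Because the outputs are produced by actually executing the program, every synthesized sample is correct by construction, which is the property the paper exploits to scale up inductive-reasoning data.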

Community

Paper author Paper submitter

This paper introduces Case2Code, a scalable task for evaluating and improving the reasoning ability of LLMs. Code and datasets will be available at https://github.com/choosewhatulike/case2code.


Hi @yf congrats on this work!

Are you planning to make your dataset available on the hub?

See here for a guide: https://huggingface.co/docs/datasets/loading.

It can then also be linked to this paper, see here on how to do that: https://huggingface.co/docs/hub/en/datasets-cards#linking-a-paper

Let me know if you need any help.

Cheers,

Niels
open-source @ HF

Brilliant idea


Is this paper the same as programming by example (PBE)?

Paper author

Hi @dihuang , thanks a lot for your attention and your question!

Case2Code mainly differs from programming by example (PBE) in motivation and methodology.

PBE aims to synthesize programs from user-provided input-output examples, which serve as the specification. Existing PBE work often targets specific domains and relies on a pre-defined domain-specific language (DSL) for program synthesis.

Case2Code, by contrast, focuses on improving the inductive reasoning of LLMs: the model must reason about the input-to-output transformation and express it as formal code. Solutions are expressed as code for convenient evaluation, and the task requires no DSL or domain-specific knowledge.

We believe both PBE and Case2Code are important and valuable in the era of LLMs. PBE is a great application of inductive reasoning in the code domain, while Case2Code is a great training task for improving inductive reasoning.
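To make the contrast concrete: in both settings, a proposed implementation can be checked simply by executing it against the observed cases, since solutions are ordinary code rather than a DSL. A minimal verification sketch (the helper name and the string-reversal example are hypothetical, not from the paper):

```python
def check_induced_program(source, fn_name, cases):
    """Execute a model-proposed implementation and check that it
    reproduces every observed input-output case."""
    namespace = {}
    exec(source, namespace)
    fn = namespace[fn_name]
    return all(fn(x) == y for x, y in cases)

# Observed cases: "abc" -> "cba", "case" -> "esac" (string reversal)
cases = [("abc", "cba"), ("case", "esac")]
candidate = "def f(s):\n    return s[::-1]"
print(check_induced_program(candidate, "f", cases))  # True
```

Executable verification like this is what lets the task scale without DSL-specific tooling: any general-purpose program that reproduces the cases counts as a solution.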

