miniCTX: Neural Theorem Proving with (Long-)Contexts

File-tuned context model from miniCTX: Neural Theorem Proving with (Long-)Contexts.

It is fine-tuned for Lean 4 tactic prediction given a proof state and, optionally, the preceding file context.

Performance

Please see our paper.

Example input

/- You are proving a theorem in Lean 4.
You are given the following information:
- The file contents up to the current tactic, inside [CTX]...[/CTX]
- The current proof state, inside [STATE]...[/STATE]

Your task is to generate the next tactic in the proof.
Put the next tactic inside [TAC]...[/TAC]
-/
[CTX]
import Mathlib.Data.Nat.Prime

theorem test_thm (m n : Nat) (h : m.Coprime n) : m.gcd n = 1 := by
  
[/CTX]
[STATE]
m n : ℕ
h : Nat.Coprime m n
⊢ Nat.gcd m n = 1
[/STATE]
[TAC]

Example output

rw [Nat.Coprime] at h
[/TAC]
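
Example usage

As a rough illustration, the snippet below shows one way to query the model with the prompt format above using Hugging Face transformers. The model ID matches this repository; the generation settings (greedy decoding, 64 new tokens) and the post-processing are assumptions for the sketch, not the configuration used in the paper.

```python
# Minimal sketch: query the model with the [CTX]/[STATE]/[TAC] prompt format.
# Model ID is this repository; decoding settings are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "l3lab/ntp-mathlib-context-deepseek-coder-1.3b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = """/- You are proving a theorem in Lean 4.
You are given the following information:
- The file contents up to the current tactic, inside [CTX]...[/CTX]
- The current proof state, inside [STATE]...[/STATE]

Your task is to generate the next tactic in the proof.
Put the next tactic inside [TAC]...[/TAC]
-/
[CTX]
import Mathlib.Data.Nat.Prime

theorem test_thm (m n : Nat) (h : m.Coprime n) : m.gcd n = 1 := by

[/CTX]
[STATE]
m n : ℕ
h : Nat.Coprime m n
⊢ Nat.gcd m n = 1
[/STATE]
[TAC]
"""

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)

# Keep only the newly generated text and cut at the closing [/TAC] tag.
completion = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
tactic = completion.split("[/TAC]")[0].strip()
print(tactic)  # expected along the lines of: rw [Nat.Coprime] at h
```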

Citation

Please cite:

@misc{hu2024minictx,
  title={miniCTX: Neural Theorem Proving with (Long-)Contexts}, 
  author={Jiewen Hu and Thomas Zhu and Sean Welleck},
  year={2024},
  eprint={2408.03350},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/2408.03350}, 
}