Search, Verify and Feedback: Towards Next Generation Post-training Paradigm of Foundation Models via Verifier Engineering
Abstract
The evolution of machine learning has increasingly prioritized the development of powerful models and more scalable supervision signals. However, the emergence of foundation models presents significant challenges in providing effective supervision signals necessary for further enhancing their capabilities. Consequently, there is an urgent need to explore novel supervision signals and technical approaches. In this paper, we propose verifier engineering, a novel post-training paradigm specifically designed for the era of foundation models. The core of verifier engineering involves leveraging a suite of automated verifiers to perform verification tasks and deliver meaningful feedback to foundation models. We systematically categorize the verifier engineering process into three essential stages: search, verify, and feedback, and provide a comprehensive review of state-of-the-art research developments within each stage. We believe that verifier engineering constitutes a fundamental pathway toward achieving Artificial General Intelligence.
Community
The paper introduces verifier engineering, a new concept that addresses the specific needs of foundation models in their post-training phase. This fills a gap in current practice, where the continuous evaluation and improvement of large-scale models after initial training receives limited attention.
The authors systematically categorize the process of verifier engineering into three key stages: search, verify, and feedback. This structured approach not only clarifies the methodology but also provides a clear roadmap for researchers and practitioners to follow, enhancing the practical application of the proposed framework.
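To make the three stages concrete, here is a minimal illustrative Python sketch of one search-verify-feedback iteration. The function names (`search`, `verify`, `feedback`), the `Candidate` type, and the averaging/best-of-n scheme are hypothetical placeholders chosen for illustration, not APIs or algorithms specified by the paper; the intent is only to show how search produces candidate responses, verifiers score them, and the resulting signal is routed back as feedback.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical types: a "policy" maps a prompt to a response,
# and a "verifier" maps (prompt, response) to a score in [0, 1].
Policy = Callable[[str], str]
Verifier = Callable[[str, str], float]

@dataclass
class Candidate:
    response: str
    score: float = 0.0

def search(policy: Policy, prompt: str, n_samples: int = 4) -> List[Candidate]:
    """Search stage: sample several candidate responses from the model."""
    return [Candidate(response=policy(prompt)) for _ in range(n_samples)]

def verify(prompt: str, candidates: List[Candidate],
           verifiers: List[Verifier]) -> List[Candidate]:
    """Verify stage: score each candidate with a suite of automated verifiers
    (e.g. a rule checker, a unit test, a reward model) and average the scores."""
    for cand in candidates:
        scores = [v(prompt, cand.response) for v in verifiers]
        cand.score = sum(scores) / len(scores)
    return candidates

def feedback(candidates: List[Candidate]) -> Candidate:
    """Feedback stage: route the verification signal back to the model.
    Here we simply return the best-scoring candidate (best-of-n selection);
    a training-time variant would instead use the scores to update the policy."""
    return max(candidates, key=lambda c: c.score)

if __name__ == "__main__":
    # Toy example with a fixed "policy" and a length-based verifier.
    toy_policy: Policy = lambda prompt: f"Answer to: {prompt}"
    length_verifier: Verifier = lambda p, r: min(len(r) / 100.0, 1.0)

    question = "What is verifier engineering?"
    cands = search(toy_policy, question)
    cands = verify(question, cands, [length_verifier])
    best = feedback(cands)
    print(best.response, best.score)
```

In practice the feedback stage could drive either inference-time selection (as above) or training-time updates such as preference optimization; this sketch deliberately keeps only the control flow shared by both.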
This is an automated message from the Librarian Bot. The following papers, similar to this one, were recommended by the Semantic Scholar API:
- Improving LLM Reasoning through Scaling Inference Computation with Collaborative Verification (2024)
- Forewarned is Forearmed: Leveraging LLMs for Data Synthesis through Failure-Inducing Exploration (2024)
- Fine-Grained Verifiers: Preference Modeling as Next-token Prediction in Vision-Language Alignment (2024)
- Deliberate Reasoning for LLMs as Structure-aware Planning with Accurate World Model (2024)
- Process Supervision-Guided Policy Optimization for Code Generation (2024)