Interactive Evolution: A Neural-Symbolic Self-Training Framework for Large Language Models
Paper Link: https://arxiv.org/abs/2406.11736
Code Repo: https://github.com/xufangzhi/ENVISIONS
🔥 News
- 🔥🔥🔥 We release the final checkpoints after self-training!
Note
The self-training process is based on the LLaMA2-Chat model series and is powered by ENVISIONS. The work is still under review.
Prompt for Zero-shot Evaluation
Write Python code to solve the question.
The question is: <question>
The solution code is:
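The template above can be filled programmatically before querying the checkpoint. The sketch below is a minimal, hypothetical helper (the function name `build_prompt` is not from the repo) that substitutes a question into the zero-shot prompt exactly as written above:

```python
# Zero-shot evaluation prompt template, copied from the model card above.
PROMPT_TEMPLATE = (
    "Write Python code to solve the question.\n"
    "The question is: {question}\n"
    "The solution code is:\n"
)


def build_prompt(question: str) -> str:
    """Fill the zero-shot template with a single question (hypothetical helper)."""
    return PROMPT_TEMPLATE.format(question=question)


# Example: the resulting string is what you would pass to the model as input.
prompt = build_prompt("What is 3 + 4?")
print(prompt)
```

The returned string can then be fed to the checkpoint with your preferred inference stack (e.g. Hugging Face `transformers`); the generated continuation is expected to be the Python solution code.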
Citation
If you find this work helpful, please cite the paper.
@misc{xu2024interactive,
title={Interactive Evolution: A Neural-Symbolic Self-Training Framework For Large Language Models},
author={Fangzhi Xu and Qiushi Sun and Kanzhi Cheng and Jun Liu and Yu Qiao and Zhiyong Wu},
year={2024},
eprint={2406.11736},
archivePrefix={arXiv},
}