---
license: apache-2.0
---

## Interactive Evolution: A Neural-Symbolic Self-Training Framework for Large Language Models

Paper Link: https://arxiv.org/abs/2406.11736

Code Repo: https://github.com/xufangzhi/ENVISIONS

## 🔥 News

- 🔥🔥🔥 We have released the final checkpoints after self-training! A loading sketch follows below.
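
The checkpoints can be loaded with Hugging Face Transformers. The snippet below is a minimal sketch; the repo id `Symbol-LLM/ENVISIONS-7B` is a placeholder, so substitute the actual checkpoint name published under this organization.

```python
# Minimal loading sketch with Hugging Face Transformers.
# The repo id below is a placeholder; replace it with the actual
# checkpoint name from this organization's model hub page.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Symbol-LLM/ENVISIONS-7B"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```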

## Note

The self-training process is based on the LLaMA2-Chat model series and is powered by ENVISIONS. The work is still under review.

## Prompt for Zero-shot Evaluation

```markdown
Generate the logical representation for the given context and question.
The context is: <context>
The question is: <question>
The logical representation is:
```
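
As an illustration, the template above can be filled in and run zero-shot. This is a minimal sketch assuming `model` and `tokenizer` are loaded as in the News section; the context/question pair is a made-up example.

```python
# Zero-shot inference sketch using the prompt template above.
# Assumes `model` and `tokenizer` are loaded as in the earlier snippet;
# the context/question pair here is a made-up example.
prompt = (
    "Generate the logical representation for the given context and question.\n"
    "The context is: All birds can fly. Tweety is a bird.\n"
    "The question is: Can Tweety fly?\n"
    "The logical representation is:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```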

## Citation

If you find this work helpful, please kindly cite the paper.

```bibtex
@misc{xu2024interactive,
      title={Interactive Evolution: A Neural-Symbolic Self-Training Framework For Large Language Models},
      author={Fangzhi Xu and Qiushi Sun and Kanzhi Cheng and Jun Liu and Yu Qiao and Zhiyong Wu},
      year={2024},
      eprint={2406.11736},
      archivePrefix={arXiv},
}
```