LLM-based Optimization of Compound AI Systems: A Survey
Abstract
In a compound AI system, components such as LLM calls, retrievers, code interpreters, and tools are interconnected. The system's behavior is primarily driven by parameters such as instructions or tool definitions. Recent advancements enable end-to-end optimization of these parameters using an LLM. Using an LLM as the optimizer is particularly attractive because it avoids gradient computation and can generate complex code and instructions. This paper presents a survey of the principles and emerging trends in LLM-based optimization of compound AI systems, covering archetypes of compound AI systems, approaches to LLM-based end-to-end optimization, and insights into future directions and broader impacts. Importantly, this survey uses concepts from program analysis to provide a unified view of how an LLM optimizer is prompted to optimize a compound AI system. An exhaustive list of papers is provided at https://github.com/linyuhongg/LLM-based-Optimization-of-Compound-AI-Systems.
Community
Designing a compound AI system typically involves manually optimizing various parameters, such as LLM instructions, tool implementations, and reasoning structures.
Can we replace the human in this process with an LLM?
Optimizing a compound AI system using an LLM refers to using an LLM to generate instructions or code, effectively automating the human role in the optimization process.
In our survey, we explore this new paradigm to help practitioners understand its potential and applications!
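The loop described above — an LLM optimizer proposing revised instructions that are scored against a small dev set, keeping the best — can be sketched as follows. This is a minimal, hypothetical illustration, not an implementation from the survey: `llm_propose` and `run_component` are stand-ins for real LLM calls, replaced here with deterministic toy logic so the loop is runnable.

```python
# Toy dev set: (input, expected output) pairs for the component being tuned.
DEV_SET = [("2+2", "4"), ("3*3", "9"), ("10-7", "3")]

def run_component(instruction: str, x: str) -> str:
    # Stand-in for the LLM component that follows `instruction` on input `x`.
    # We simulate it: the component only succeeds if the instruction
    # mentions "arithmetic".
    return str(eval(x)) if "arithmetic" in instruction else ""

def evaluate(instruction: str) -> float:
    # Fraction of dev examples the instructed component answers correctly.
    return sum(run_component(instruction, x) == y for x, y in DEV_SET) / len(DEV_SET)

def llm_propose(instruction: str, step: int) -> str:
    # Stand-in for the LLM *optimizer*: given the current instruction,
    # propose a revision. A real system would prompt an LLM with the
    # instruction and its score; here we cycle fixed candidates.
    candidates = [
        instruction + " Think step by step.",
        "You are an arithmetic assistant. Answer with the number only.",
        instruction + " Be concise.",
    ]
    return candidates[step % len(candidates)]

def optimize(instruction: str, steps: int = 6) -> tuple[str, float]:
    # Greedy hill climbing: keep a proposed instruction only if it improves
    # the dev-set score, automating the human's trial-and-error role.
    best, best_score = instruction, evaluate(instruction)
    for step in range(steps):
        cand = llm_propose(best, step)
        score = evaluate(cand)
        if score > best_score:
            best, best_score = cand, score
    return best, best_score
```

Because the "optimizer" only needs to read instructions and scores and write new instructions, no gradients flow through the system — which is why this scheme applies equally to non-differentiable parameters such as tool definitions or code.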