TRL provides judges that make it easy to compare two completions.

Make sure to have installed the required dependencies by running:

```bash
pip install trl[llm_judge]
```
TRL provides several judges out of the box. For example, you can use the HfPairwiseJudge to compare two completions using a pre-trained model from the Hugging Face model hub:

```python
from trl import HfPairwiseJudge

judge = HfPairwiseJudge()
judge.judge(
    prompts=["What is the capital of France?", "What is the biggest planet in the solar system?"],
    completions=[["Paris", "Lyon"], ["Saturn", "Jupiter"]],
)  # Outputs: [0, 1]
```

Each returned value is the index of the preferred completion: here the judge picks "Paris" (index 0) for the first prompt and "Jupiter" (index 1) for the second.
To define your own judge, we provide several base classes that you can subclass. For rank-based judges, subclass BaseRankJudge and implement the BaseRankJudge.judge() method. For pairwise judges, subclass BasePairwiseJudge and implement the BasePairwiseJudge.judge() method. If your judge fits neither category, subclass BaseJudge and implement the BaseJudge.judge() method.
As an example, let's define a pairwise judge that prefers shorter completions:

```python
from trl import BasePairwiseJudge

class PrefersShorterJudge(BasePairwiseJudge):
    def judge(self, prompts, completions, shuffle_order=False):
        # Return the index of the shorter completion in each pair
        return [0 if len(completion[0]) < len(completion[1]) else 1 for completion in completions]
```
You can then use this judge as follows:

```python
judge = PrefersShorterJudge()
judge.judge(
    prompts=["What is the capital of France?", "What is the biggest planet in the solar system?"],
    completions=[["Paris", "The capital of France is Paris."], ["Jupiter is the biggest planet in the solar system.", "Jupiter"]],
)  # Outputs: [0, 1]
```
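Rank-based judges follow the same pattern. As a sketch (the class below and its length-based ranking rule are illustrative, not part of TRL), here is a BaseRankJudge subclass that ranks each prompt's completions from shortest to longest:

```python
from trl import BaseRankJudge

class RanksShorterJudge(BaseRankJudge):
    def judge(self, prompts, completions, shuffle_order=False):
        # For each prompt, return the completion indices ordered from
        # shortest to longest completion
        return [
            sorted(range(len(completion)), key=lambda i: len(completion[i]))
            for completion in completions
        ]

judge = RanksShorterJudge()
judge.judge(
    prompts=["What is the capital of France?"],
    completions=[["The capital of France is Paris.", "Paris", "It is Paris."]],
)  # Outputs: [[1, 2, 0]]
```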
BaseJudge

Base class for judges. Subclasses of this class should implement the judge() method.
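As a minimal sketch of a judge that fits neither the rank nor the pairwise category, assuming the same (prompts, completions) calling convention as the other judges (the class and its scoring rule below are illustrative, not part of TRL):

```python
from trl import BaseJudge

class LengthScoreJudge(BaseJudge):
    def judge(self, prompts, completions, shuffle_order=False):
        # Return a raw length score for every completion; BaseJudge does
        # not prescribe ranks or pair indices, so scores are a valid choice
        return [[len(c) for c in completion] for completion in completions]
```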
BaseRankJudge

Base class for LLM ranking judges.

Example:

```python
from trl import BaseRankJudge

class MyRankJudge(BaseRankJudge):
    def judge(self, prompts, completions, shuffle_order=True):
        return ...  # Your ranking logic here

judge = MyRankJudge()
judge.judge(
    prompts=["The capital of France is", "The capital of Germany is"],
    completions=[[" Paris", " Marseille", "Lyon"], [" Munich", " Berlin"]],
)  # [[0, 1, 2], [1, 0]]
```
judge(prompts: List[str], completions: List[List[str]], shuffle_order: bool = True)

Judge the completions for the given prompts and return the ranks of each completion.
BasePairwiseJudge

Base class for pairwise judges.

judge(prompts: List[str], completions: List[List[str]], shuffle_order: bool = True)

Judge the completion pairs for the given prompts.
RandomRankJudge

Random rank judge, for testing purposes.
RandomPairwiseJudge

Random pairwise judge, for testing purposes.
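The random judges are handy for smoke-testing a pipeline before spending compute or API credits on a real judge. A usage sketch (the output is one possible random draw):

```python
from trl import RandomPairwiseJudge

judge = RandomPairwiseJudge()
judge.judge(
    prompts=["What is the capital of France?", "What is the biggest planet in the solar system?"],
    completions=[["Paris", "Lyon"], ["Saturn", "Jupiter"]],
)  # e.g. [1, 0]; the preferred indices are drawn at random
```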
HfPairwiseJudge

HfPairwiseJudge(model='meta-llama/Meta-Llama-3-70B-Instruct', token: Optional[str] = None, system_prompt: Optional[str] = None)

Pairwise judge based on the Hugging Face API with chat completion.

This judge is relevant for assessing the quality of chat models, where the completion is a response to a given prompt.

Parameters:

- model (str, optional) — The model to use for the judge. Defaults to "meta-llama/Meta-Llama-3-70B-Instruct".
- token (str, optional) — The Hugging Face API token to use for the InferenceClient.
- system_prompt (str, optional) — The system prompt to be used for the judge. If not provided, a default prompt is used. Note that the system prompt should contain the following placeholders: {prompt}, {response0}, and {response1}. Also, the inference is called with max_tokens=1, so the system prompt should ask for a single-token response.
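For instance, you can supply a custom system prompt, as long as it keeps all three placeholders and asks for a single-token answer (the prompt wording below is illustrative):

```python
from trl import HfPairwiseJudge

# Must contain {prompt}, {response0} and {response1}, and since inference
# runs with max_tokens=1, it should request a single-token answer.
system_prompt = (
    "Given the question {prompt}, which answer is better? "
    "Respond with the single digit 0 for {response0} or 1 for {response1}."
)
judge = HfPairwiseJudge(system_prompt=system_prompt)
judge.judge(
    prompts=["What is the capital of France?"],
    completions=[["Paris", "Lyon"]],
)  # e.g. [0]
```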
OpenAIPairwiseJudge

OpenAIPairwiseJudge(model='gpt-4-turbo-preview', system_prompt: Optional[str] = None, max_requests: Optional[int] = 1000)

Judge based on the OpenAI API.

This judge is relevant for assessing the quality of chat models, where the completion is a response to a given prompt.

Parameters:

- model (str, optional) — The model to use for the judge. Defaults to "gpt-4-turbo-preview".
- system_prompt (str, optional) — The system prompt to be used for the judge. If not provided, a default prompt is used. Note that the system prompt should contain the following placeholders: {prompt}, {response0}, and {response1}. Also, the inference is called with max_tokens=1, so the system prompt should ask for a single-token response.
- max_requests (int, optional) — The maximum number of requests to make to the OpenAI API. Defaults to 1000. If set to None, there is no limit.
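A usage sketch, assuming a valid OpenAI API key is available in the environment (the max_requests value below is illustrative):

```python
from trl import OpenAIPairwiseJudge

# Assumes the OPENAI_API_KEY environment variable is set for the OpenAI client
judge = OpenAIPairwiseJudge(model="gpt-4-turbo-preview", max_requests=100)
judge.judge(
    prompts=["What is the capital of France?"],
    completions=[["Paris", "Lyon"]],
)  # e.g. [0]
```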