Each framework has a generate method for text generation, implemented in its respective GenerationMixin class:

- PyTorch generate() is implemented in GenerationMixin.
- TensorFlow generate() is implemented in TFGenerationMixin.
- Flax/JAX generate() is implemented in FlaxGenerationMixin.
Regardless of your framework of choice, the generate method can be parameterized with a GenerationConfig class instance. Refer to this class for the complete list of generation parameters, which control the behavior of the generate method.
To learn how to inspect a model's generation configuration, what the defaults are, how to change the parameters ad hoc, and how to create and save a customized generation configuration, refer to the text generation strategies guide. The guide also explains how to use related features, like token streaming.
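As a minimal sketch of this pattern (the checkpoint, prompt, and parameter values are arbitrary examples, not recommendations):

>>> from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

>>> tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
>>> inputs = tokenizer("Hello world", return_tensors="pt")

>>> # Bundle generation parameters into a reusable GenerationConfig
>>> generation_config = GenerationConfig(max_new_tokens=20, do_sample=True, top_k=50)
>>> outputs = model.generate(**inputs, generation_config=generation_config)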
from_pretrained

( pretrained_model_name: Union[str, os.PathLike] config_file_name: Union[str, os.PathLike, None] = None cache_dir: Union[str, os.PathLike, None] = None force_download: bool = False local_files_only: bool = False token: Union[str, bool, None] = None revision: str = 'main' **kwargs ) → GenerationConfig

Parameters

- pretrained_model_name (str or os.PathLike) — This can be either:
  - a string, the model id of a pretrained model configuration hosted inside a model repo on huggingface.co.
  - a path to a directory containing a configuration file saved using the save_pretrained() method, e.g., ./my_model_directory/.
- config_file_name (str or os.PathLike, optional, defaults to "generation_config.json") — Name of the generation configuration JSON file to be loaded from pretrained_model_name.
- cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
- force_download (bool, optional, defaults to False) — Whether or not to force (re-)downloading the configuration files, overriding the cached versions if they exist.
- resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
- proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
- token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, or not specified, will use the token generated when running huggingface-cli login (stored in ~/.huggingface).
- revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
  To test a pull request you made on the Hub, you can pass revision="refs/pr/<pr_number>".
- return_unused_kwargs (bool, optional, defaults to False) — If False, this function returns just the final configuration object. If True, this function returns a Tuple(config, unused_kwargs) where unused_kwargs is a dictionary consisting of the key/value pairs whose keys are not configuration attributes: i.e., the part of kwargs which has not been used to update config and is otherwise ignored.
- subfolder (str, optional, defaults to "") — In case the relevant files are located inside a subfolder of the model repo on huggingface.co, you can specify the folder name here.
- kwargs (Dict[str, Any], optional) — The values in kwargs of any keys which are configuration attributes will be used to override the loaded values. Behavior concerning key/value pairs whose keys are not configuration attributes is controlled by the return_unused_kwargs keyword parameter.

Returns

GenerationConfig — The configuration object instantiated from this pretrained model.
Instantiate a GenerationConfig from a generation configuration file.
Examples:
>>> from transformers import GenerationConfig
>>> # Download configuration from huggingface.co and cache.
>>> generation_config = GenerationConfig.from_pretrained("openai-community/gpt2")
>>> # E.g. config was saved using *save_pretrained('./test/saved_model/')*
>>> generation_config.save_pretrained("./test/saved_model/")
>>> generation_config = GenerationConfig.from_pretrained("./test/saved_model/")
>>> # You can also specify configuration names to your generation configuration file
>>> generation_config.save_pretrained("./test/saved_model/", config_file_name="my_configuration.json")
>>> generation_config = GenerationConfig.from_pretrained("./test/saved_model/", "my_configuration.json")
>>> # If you'd like to try a minor variation to an existing configuration, you can also pass generation
>>> # arguments to `.from_pretrained()`. Be mindful that typos and unused arguments will be ignored
>>> generation_config, unused_kwargs = GenerationConfig.from_pretrained(
... "openai-community/gpt2", top_k=1, foo=False, do_sample=True, return_unused_kwargs=True
... )
>>> generation_config.top_k
1
>>> unused_kwargs
{'foo': False}
from_model_config

( model_config: PretrainedConfig ) → GenerationConfig

Instantiates a GenerationConfig from a PretrainedConfig. This function is useful to convert legacy PretrainedConfig objects, which may contain generation parameters, into a stand-alone GenerationConfig.
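For instance, a legacy model configuration can be converted like this (a minimal sketch; the checkpoint is an arbitrary example):

>>> from transformers import AutoConfig, GenerationConfig

>>> # Legacy model configs may still carry generation defaults such as max_length
>>> model_config = AutoConfig.from_pretrained("openai-community/gpt2")
>>> generation_config = GenerationConfig.from_model_config(model_config)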
save_pretrained

( save_directory: Union[str, os.PathLike] config_file_name: Union[str, os.PathLike, None] = None push_to_hub: bool = False **kwargs )

Parameters

- save_directory (str or os.PathLike) — Directory where the configuration JSON file will be saved (will be created if it does not exist).
- config_file_name (str or os.PathLike, optional, defaults to "generation_config.json") — Name of the generation configuration JSON file to be saved in save_directory.
- push_to_hub (bool, optional, defaults to False) — Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the repository you want to push to with repo_id (will default to the name of save_directory in your namespace).
- kwargs (Dict[str, Any], optional) — Additional keyword arguments passed along to the push_to_hub() method.

Save a generation configuration object to the directory save_directory, so that it can be re-loaded using the from_pretrained() class method.
update

( **kwargs ) → Dict[str, Any]

Updates attributes of this class instance with attributes from kwargs if they match existing attributes, returning all the unused kwargs.
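A brief sketch of the override-and-collect behavior (the foo key is a deliberately invalid attribute):

>>> from transformers import GenerationConfig

>>> generation_config = GenerationConfig()
>>> unused = generation_config.update(temperature=0.7, foo="bar")
>>> generation_config.temperature
0.7
>>> unused
{'foo': 'bar'}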
validate

( is_init = False )

Validates the values of the attributes of the GenerationConfig instance. Raises exceptions in the presence of parameterization that can be detected as incorrect from the configuration instance alone.

Note that some parameters not validated here are best validated at generate runtime, as they may depend on other inputs and/or the model, such as parameters related to the generation length.
get_generation_mode

( assistant_model: Optional = None ) → GenerationMode

Returns the generation mode triggered by the GenerationConfig instance.
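For example (a sketch; the exact enum repr may differ across versions):

>>> from transformers import GenerationConfig

>>> config = GenerationConfig(num_beams=4, do_sample=False)
>>> config.get_generation_mode()  # expected: GenerationMode.BEAM_SEARCH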
WatermarkingConfig

( greenlist_ratio: Optional = 0.25 bias: Optional = 2.0 hashing_key: Optional = 15485863 seeding_scheme: Optional = 'lefthash' context_width: Optional = 1 )

Class that holds arguments for watermark generation and should be passed into GenerationConfig during generate.

See this paper for more details on the arguments.

Accepts the following keys:

- greenlist_ratio (float): Used for watermarking. The ratio of “green” tokens used to the vocabulary size. Defaults to 0.25.
- bias (float): Used with watermarking. The bias added to the selected “green” tokens’ logits. Defaults to 2.0.
- hashing_key (int): Hashing key used for watermarking. Defaults to 15485863 (the millionth prime).
- seeding_scheme (str): Algorithm to use for watermarking. Accepts values:
  - "lefthash" (default): “green” token selection depends on the last token.
  - "selfhash": “green” token selection depends on the current token itself.
- context_width (int): The context length of previous tokens to use in seeding. Higher context length makes watermarking more robust.

from_dict

( config_dict **kwargs ) → WatermarkingConfig
Constructs a WatermarkingConfig instance from a dictionary of parameters.
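For instance (a minimal sketch; the parameter values are arbitrary):

>>> from transformers import WatermarkingConfig

>>> watermarking_config = WatermarkingConfig.from_dict({"bias": 2.5, "seeding_scheme": "selfhash"})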
to_dict

( ) → Dict[str, Any]

Returns

Dict[str, Any] — Dictionary of all the attributes that make up this configuration instance.

Serializes this instance to a Python dictionary.
to_json_file

( json_file_path: Union[str, os.PathLike] )

Save this instance to a JSON file.
to_json_string

( ) → str

Returns

str — JSON formatted string representing the configuration instance.

Serializes this instance to a JSON formatted string.
update

( **kwargs )

Update the configuration attributes with new values.
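A quick round-trip sketch of the serialization helpers above:

>>> from transformers import WatermarkingConfig

>>> watermarking_config = WatermarkingConfig(bias=2.5)
>>> restored = WatermarkingConfig.from_dict(watermarking_config.to_dict())
>>> restored.bias
2.5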
GenerationMixin

A class containing all functions for auto-regressive text generation, to be used as a mixin in PreTrainedModel.
The class exposes generate(), which can be used for:
- greedy decoding if num_beams=1 and do_sample=False
- contrastive search if penalty_alpha>0 and top_k>1
- multinomial sampling if num_beams=1 and do_sample=True
- beam-search decoding if num_beams>1 and do_sample=False
- beam-search multinomial sampling if num_beams>1 and do_sample=True
- diverse beam-search decoding if num_beams>1 and num_beam_groups>1
- constrained beam-search decoding if constraints!=None or force_words_ids!=None
- assisted decoding if assistant_model or prompt_lookup_num_tokens is passed to .generate()

You do not need to call any of the above methods directly. Pass custom parameter values to ‘generate’ instead.
To learn more about decoding strategies refer to the text generation strategies guide.
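For illustration, the flag combinations above correspond to generate calls like these (a sketch; checkpoint and prompt are arbitrary examples):

>>> from transformers import AutoModelForCausalLM, AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
>>> inputs = tokenizer("Hello, my name is", return_tensors="pt")

>>> greedy = model.generate(**inputs, max_new_tokens=10)  # greedy decoding (defaults)
>>> sampled = model.generate(**inputs, max_new_tokens=10, do_sample=True)  # multinomial sampling
>>> beams = model.generate(**inputs, max_new_tokens=10, num_beams=4)  # beam-search decoding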
generate

( inputs: Optional = None generation_config: Optional = None logits_processor: Optional = None stopping_criteria: Optional = None prefix_allowed_tokens_fn: Optional = None synced_gpus: Optional = None assistant_model: Optional = None streamer: Optional = None negative_prompt_ids: Optional = None negative_prompt_attention_mask: Optional = None **kwargs ) → ModelOutput or torch.LongTensor

Parameters

- inputs (torch.Tensor of varying shape depending on the modality, optional) — The sequence used as a prompt for the generation or as model inputs to the encoder. If None the method initializes it with bos_token_id and a batch size of 1. For decoder-only models inputs should be in the format of input_ids. For encoder-decoder models inputs can represent any of input_ids, input_values, input_features, or pixel_values.
- generation_config (~generation.GenerationConfig, optional) — The generation configuration to be used as base parametrization for the generation call. **kwargs passed to generate matching the attributes of generation_config will override them. If generation_config is not provided, the default will be used, which has the following loading priority: 1) from the generation_config.json model file, if it exists; 2) from the model configuration. Please note that unspecified parameters will inherit GenerationConfig’s default values, whose documentation should be checked to parameterize generation.
- logits_processor (LogitsProcessorList, optional) — Custom logits processors that complement the default logits processors built from arguments and generation config. If a logits processor is passed that is already created with the arguments or a generation config, an error is thrown. This feature is intended for advanced users.
- stopping_criteria (StoppingCriteriaList, optional) — Custom stopping criteria that complement the default stopping criteria built from arguments and a generation config. If a stopping criterion is passed that is already created with the arguments or a generation config, an error is thrown. If your stopping criteria depend on the scores input, make sure you pass return_dict_in_generate=True, output_scores=True to generate. This feature is intended for advanced users.
- prefix_allowed_tokens_fn (Callable[[int, torch.Tensor], List[int]], optional) — If provided, this function constrains the beam search to allowed tokens only at each step. If not provided, no constraint is applied. This function takes 2 arguments: the batch ID batch_id and input_ids. It has to return a list with the allowed tokens for the next generation step, conditioned on the batch ID batch_id and the previously generated tokens input_ids. This argument is useful for constrained generation conditioned on the prefix, as described in Autoregressive Entity Retrieval.
- synced_gpus (bool, optional) — Whether to continue running the while loop until max_length. Unless overridden, this flag will be set to True if using FullyShardedDataParallel or DeepSpeed ZeRO Stage 3 with multiple GPUs to avoid deadlocking if one GPU finishes generating before other GPUs. Otherwise, defaults to False.
- assistant_model (PreTrainedModel, optional) — An assistant model that can be used to accelerate generation. The assistant model must have the exact same tokenizer. The acceleration is achieved when forecasting candidate tokens with the assistant model is much faster than running generation with the model you’re calling generate from. As such, the assistant model should be much smaller.
- streamer (BaseStreamer, optional) — Streamer object that will be used to stream the generated sequences. Generated tokens are passed through streamer.put(token_ids) and the streamer is responsible for any further processing.
- negative_prompt_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — The negative prompt needed for some processors such as CFG. The batch size must match the input batch size. This is an experimental feature, subject to breaking API changes in future versions.
- negative_prompt_attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) — Attention mask for negative_prompt_ids.
- kwargs (Dict[str, Any], optional) — Ad hoc parametrization of generation_config and/or additional model-specific kwargs that will be forwarded to the forward function of the model. If the model is an encoder-decoder model, encoder-specific kwargs should not be prefixed and decoder-specific kwargs should be prefixed with decoder_.

Returns

ModelOutput or torch.LongTensor

A ModelOutput (if return_dict_in_generate=True or when config.return_dict_in_generate=True) or a torch.LongTensor.

If the model is not an encoder-decoder model (model.config.is_encoder_decoder=False), the possible ModelOutput types are:

- GenerateDecoderOnlyOutput
- GenerateBeamDecoderOnlyOutput

If the model is an encoder-decoder model (model.config.is_encoder_decoder=True), the possible ModelOutput types are:

- GenerateEncoderDecoderOutput
- GenerateBeamEncoderDecoderOutput
Generates sequences of token ids for models with a language modeling head.
Most generation-controlling parameters are set in generation_config
which, if not passed, will be set to the
model’s default generation configuration. You can override any generation_config
by passing the corresponding
parameters to generate(), e.g. .generate(inputs, num_beams=4, do_sample=True)
.
For an overview of generation strategies and code examples, check out the following guide.
compute_transition_scores

( sequences: Tensor scores: Tuple beam_indices: Optional = None normalize_logits: bool = False ) → torch.Tensor

Parameters

- sequences (torch.LongTensor) — The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id.
- scores (tuple(torch.FloatTensor)) — Transition scores for each vocabulary token at each generation step. Beam transition scores consist of the log probabilities of tokens conditioned on the log softmax of previously generated tokens in this beam. Tuple of torch.FloatTensor with up to max_new_tokens elements (one element for each generated token), with each tensor of shape (batch_size*num_beams, config.vocab_size).
- beam_indices (torch.LongTensor, optional) — Beam indices of the generated token id at each generation step. torch.LongTensor of shape (batch_size*num_return_sequences, sequence_length). Only required if num_beams>1 at generate-time.
- normalize_logits (bool, optional, defaults to False) — Whether to normalize the logits (which, for legacy reasons, may be unnormalized).

Returns

torch.Tensor

A torch.Tensor of shape (batch_size*num_return_sequences, sequence_length) containing the transition scores (logits).

Computes the transition scores of sequences given the generation scores (and beam indices, if beam search was used). This is a convenient method to quickly obtain the scores of the selected tokens at generation time.
Examples:
>>> from transformers import GPT2Tokenizer, AutoModelForCausalLM
>>> import numpy as np
>>> tokenizer = GPT2Tokenizer.from_pretrained("openai-community/gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
>>> tokenizer.pad_token_id = tokenizer.eos_token_id
>>> inputs = tokenizer(["Today is"], return_tensors="pt")
>>> # Example 1: Print the scores for each token generated with Greedy Search
>>> outputs = model.generate(**inputs, max_new_tokens=5, return_dict_in_generate=True, output_scores=True)
>>> transition_scores = model.compute_transition_scores(
... outputs.sequences, outputs.scores, normalize_logits=True
... )
>>> # input_length is the length of the input prompt for decoder-only models, like the GPT family, and 1 for
>>> # encoder-decoder models, like BART or T5.
>>> input_length = 1 if model.config.is_encoder_decoder else inputs.input_ids.shape[1]
>>> generated_tokens = outputs.sequences[:, input_length:]
>>> for tok, score in zip(generated_tokens[0], transition_scores[0]):
... # | token | token string | log probability | probability
... print(f"| {tok:5d} | {tokenizer.decode(tok):8s} | {score.numpy():.3f} | {np.exp(score.numpy()):.2%}")
| 262 | the | -1.414 | 24.33%
| 1110 | day | -2.609 | 7.36%
| 618 | when | -2.010 | 13.40%
| 356 | we | -1.859 | 15.58%
| 460 | can | -2.508 | 8.14%
>>> # Example 2: Reconstruct the sequence scores from Beam Search
>>> outputs = model.generate(
... **inputs,
... max_new_tokens=5,
... num_beams=4,
... num_return_sequences=4,
... return_dict_in_generate=True,
... output_scores=True,
... )
>>> transition_scores = model.compute_transition_scores(
... outputs.sequences, outputs.scores, outputs.beam_indices, normalize_logits=False
... )
>>> # If you sum the generated tokens' scores and apply the length penalty, you'll get the sequence scores.
>>> # Tip 1: recomputing the scores is only guaranteed to match with `normalize_logits=False`. Depending on the
>>> # use case, you might want to recompute it with `normalize_logits=True`.
>>> # Tip 2: the output length does NOT include the input length
>>> output_length = np.sum(transition_scores.numpy() < 0, axis=1)
>>> length_penalty = model.generation_config.length_penalty
>>> reconstructed_scores = transition_scores.sum(axis=1) / (output_length**length_penalty)
>>> print(np.allclose(outputs.sequences_scores, reconstructed_scores))
True
TFGenerationMixin

A class containing all of the functions supporting generation, to be used as a mixin in TFPreTrainedModel.
The class exposes generate(), which can be used for:
- greedy decoding by calling greedy_search() if num_beams=1 and do_sample=False
- contrastive search by calling contrastive_search() if penalty_alpha>0 and top_k>1
- multinomial sampling by calling sample() if num_beams=1 and do_sample=True
- beam-search decoding by calling beam_search() if num_beams>1
You do not need to call any of the above methods directly. Pass custom parameter values to ‘generate’ instead. To learn more about decoding strategies refer to the text generation strategies guide.
generate

( inputs: Optional = None generation_config: Optional = None logits_processor: Optional = None seed = None **kwargs ) → ModelOutput or tf.Tensor

Parameters

- inputs (tf.Tensor of varying shape depending on the modality, optional) — The sequence used as a prompt for the generation or as model inputs to the encoder. If None the method initializes it with bos_token_id and a batch size of 1. For decoder-only models inputs should be in the format of input_ids. For encoder-decoder models inputs can represent any of input_ids, input_values, input_features, or pixel_values.
- generation_config (~generation.GenerationConfig, optional) — The generation configuration to be used as base parametrization for the generation call. **kwargs passed to generate matching the attributes of generation_config will override them. If generation_config is not provided, the default will be used, which has the following loading priority: 1) from the generation_config.json model file, if it exists; 2) from the model configuration. Please note that unspecified parameters will inherit GenerationConfig’s default values, whose documentation should be checked to parameterize generation.
- logits_processor (LogitsProcessorList, optional) — Custom logits processors that complement the default logits processors built from arguments and generation config. If a logits processor is passed that is already created with the arguments or a generation config, an error is thrown. This feature is intended for advanced users.
- seed (List[int], optional) — Random seed to control sampling, containing two integers, used when do_sample is True. See the seed argument from stateless functions in tf.random.
- kwargs (Dict[str, Any], optional) — Ad hoc parametrization of generation_config and/or additional model-specific kwargs that will be forwarded to the forward function of the model. If the model is an encoder-decoder model, encoder-specific kwargs should not be prefixed and decoder-specific kwargs should be prefixed with decoder_.

Returns

ModelOutput or tf.Tensor

A ModelOutput (if return_dict_in_generate=True or when config.return_dict_in_generate=True) or a tf.Tensor.

If the model is not an encoder-decoder model (model.config.is_encoder_decoder=False), the possible ModelOutput types are:

- TFGreedySearchDecoderOnlyOutput
- TFSampleDecoderOnlyOutput
- TFBeamSearchDecoderOnlyOutput
- TFBeamSampleDecoderOnlyOutput
- TFContrastiveSearchDecoderOnlyOutput

If the model is an encoder-decoder model (model.config.is_encoder_decoder=True), the possible ModelOutput types are:

- TFGreedySearchEncoderDecoderOutput
- TFSampleEncoderDecoderOutput
- TFBeamSearchEncoderDecoderOutput
- TFBeamSampleEncoderDecoderOutput
- TFContrastiveSearchEncoderDecoderOutput
Generates sequences of token ids for models with a language modeling head.
Most generation-controlling parameters are set in generation_config
which, if not passed, will be set to the
model’s default generation configuration. You can override any generation_config
by passing the corresponding
parameters to generate, e.g. .generate(inputs, num_beams=4, do_sample=True)
.
For an overview of generation strategies and code examples, check out the following guide.
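A minimal sketch of a TensorFlow generation call (checkpoint and prompt are arbitrary examples):

>>> from transformers import AutoTokenizer, TFAutoModelForCausalLM

>>> tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
>>> model = TFAutoModelForCausalLM.from_pretrained("openai-community/gpt2")
>>> inputs = tokenizer("TensorFlow is", return_tensors="tf")
>>> outputs = model.generate(**inputs, max_new_tokens=10)  # greedy decoding by default
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True))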
compute_transition_scores

( sequences: Tensor scores: Tuple beam_indices: Optional = None normalize_logits: bool = False ) → tf.Tensor

Parameters

- sequences (tf.Tensor) — The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id.
- scores (tuple(tf.Tensor)) — Transition scores for each vocabulary token at each generation step. Beam transition scores consist of the log probabilities of tokens conditioned on the log softmax of previously generated tokens. Tuple of tf.Tensor with up to max_new_tokens elements (one element for each generated token), with each tensor of shape (batch_size*num_beams, config.vocab_size).
- beam_indices (tf.Tensor, optional) — Beam indices of the generated token id at each generation step. tf.Tensor of shape (batch_size*num_return_sequences, sequence_length). Only required if num_beams>1 at generate-time.
- normalize_logits (bool, optional, defaults to False) — Whether to normalize the logits (which, for legacy reasons, may be unnormalized).

Returns

tf.Tensor

A tf.Tensor of shape (batch_size*num_return_sequences, sequence_length) containing the transition scores (logits).

Computes the transition scores of sequences given the generation scores (and beam indices, if beam search was used). This is a convenient method to quickly obtain the scores of the selected tokens at generation time.
Examples:
>>> from transformers import GPT2Tokenizer, TFAutoModelForCausalLM
>>> import numpy as np
>>> tokenizer = GPT2Tokenizer.from_pretrained("openai-community/gpt2")
>>> model = TFAutoModelForCausalLM.from_pretrained("openai-community/gpt2")
>>> tokenizer.pad_token_id = tokenizer.eos_token_id
>>> inputs = tokenizer(["Today is"], return_tensors="tf")
>>> # Example 1: Print the scores for each token generated with Greedy Search
>>> outputs = model.generate(**inputs, max_new_tokens=5, return_dict_in_generate=True, output_scores=True)
>>> transition_scores = model.compute_transition_scores(
... outputs.sequences, outputs.scores, normalize_logits=True
... )
>>> # input_length is the length of the input prompt for decoder-only models, like the GPT family, and 1 for
>>> # encoder-decoder models, like BART or T5.
>>> input_length = 1 if model.config.is_encoder_decoder else inputs.input_ids.shape[1]
>>> generated_tokens = outputs.sequences[:, input_length:]
>>> for tok, score in zip(generated_tokens[0], transition_scores[0]):
... # | token | token string | logits | probability
... print(f"| {tok:5d} | {tokenizer.decode(tok):8s} | {score.numpy():.3f} | {np.exp(score.numpy()):.2%}")
| 262 | the | -1.414 | 24.33%
| 1110 | day | -2.609 | 7.36%
| 618 | when | -2.010 | 13.40%
| 356 | we | -1.859 | 15.58%
| 460 | can | -2.508 | 8.14%
>>> # Example 2: Reconstruct the sequence scores from Beam Search
>>> outputs = model.generate(
... **inputs,
... max_new_tokens=5,
... num_beams=4,
... num_return_sequences=4,
... return_dict_in_generate=True,
... output_scores=True,
... )
>>> transition_scores = model.compute_transition_scores(
... outputs.sequences, outputs.scores, outputs.beam_indices, normalize_logits=False
... )
>>> # If you sum the generated tokens' scores and apply the length penalty, you'll get the sequence scores.
>>> # Tip: recomputing the scores is only guaranteed to match with `normalize_logits=False`. Depending on the
>>> # use case, you might want to recompute it with `normalize_logits=True`.
>>> output_length = np.sum(transition_scores.numpy() < 0, axis=1)
>>> length_penalty = model.generation_config.length_penalty
>>> reconstructed_scores = np.sum(transition_scores, axis=1) / (output_length**length_penalty)
>>> print(np.allclose(outputs.sequences_scores, reconstructed_scores))
True
FlaxGenerationMixin

A class containing all functions for auto-regressive text generation, to be used as a mixin in FlaxPreTrainedModel.
The class exposes generate(), which can be used for:
- greedy decoding by calling _greedy_search() if num_beams=1 and do_sample=False
- multinomial sampling by calling _sample() if num_beams=1 and do_sample=True
- beam-search decoding by calling _beam_search() if num_beams>1 and do_sample=False
You do not need to call any of the above methods directly. Pass custom parameter values to ‘generate’ instead. To learn more about decoding strategies refer to the text generation strategies guide.
generate

( input_ids: Array generation_config: Optional = None prng_key: Optional = None trace: bool = True params: Optional = None logits_processor: Optional = None **kwargs )

Parameters

- input_ids (jnp.ndarray of shape (batch_size, sequence_length)) — The sequence used as a prompt for the generation.
- generation_config (~generation.GenerationConfig, optional) — The generation configuration to be used as base parametrization for the generation call. **kwargs passed to generate matching the attributes of generation_config will override them. If generation_config is not provided, the default will be used, which has the following loading priority: 1) from the generation_config.json model file, if it exists; 2) from the model configuration. Please note that unspecified parameters will inherit GenerationConfig’s default values, whose documentation should be checked to parameterize generation.
- trace (bool, optional, defaults to True) — Whether to trace generation. Setting trace=False should only be used for debugging and will lead to a considerably slower runtime.
- params (Dict[str, jnp.ndarray], optional) — Optionally the model parameters can be passed. Can be useful for parallelized generation.
- logits_processor (FlaxLogitsProcessorList, optional) — Custom logits processors that complement the default logits processors built from arguments and generation config. If a logits processor is passed that is already created with the arguments or a generation config, an error is thrown. This feature is intended for advanced users.
- kwargs (Dict[str, Any], optional) — Ad hoc parametrization of generation_config and/or additional model-specific kwargs that will be forwarded to the forward function of the model. If the model is an encoder-decoder model, encoder-specific kwargs should not be prefixed and decoder-specific kwargs should be prefixed with decoder_.

Generates sequences of token ids for models with a language modeling head.
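A minimal sketch of a Flax generation call (checkpoint, prompt, and PRNG seed are arbitrary examples):

>>> import jax
>>> from transformers import AutoTokenizer, FlaxAutoModelForCausalLM

>>> tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
>>> model = FlaxAutoModelForCausalLM.from_pretrained("openai-community/gpt2")
>>> inputs = tokenizer("Flax is", return_tensors="np")
>>> # prng_key is only consumed when sampling; passing it explicitly keeps runs reproducible
>>> outputs = model.generate(inputs.input_ids, max_new_tokens=10, prng_key=jax.random.PRNGKey(0))
>>> print(tokenizer.batch_decode(outputs.sequences, skip_special_tokens=True))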