Video-LLaVA
Overview
Video-LLaVA is an open-source multimodal LLM trained by fine-tuning LLaMA/Vicuna on multimodal instruction-following data generated by LLaVA-1.5 and VideoChat. It is an auto-regressive language model based on the transformer architecture. Video-LLaVA unifies visual representations in the language feature space, enabling an LLM to perform visual reasoning on both images and videos simultaneously.
The Video-LLaVA model was proposed in Video-LLaVA: Learning United Visual Representation by Alignment Before Projection by Bin Lin, Yang Ye, Bin Zhu, Jiaxi Cui, Munan Ning, Peng Jin, Li Yuan.
The abstract from the paper is the following:
The Large Vision-Language Model (LVLM) has enhanced the performance of various downstream tasks in visual-language understanding. Most existing approaches encode images and videos into separate feature spaces, which are then fed as inputs to large language models. However, due to the lack of unified tokenization for images and videos, namely misalignment before projection, it becomes challenging for a Large Language Model (LLM) to learn multi-modal interactions from several poor projection layers. In this work, we unify visual representation into the language feature space to advance the foundational LLM towards a unified LVLM. As a result, we establish a simple but robust LVLM baseline, Video-LLaVA, which learns from a mixed dataset of images and videos, mutually enhancing each other. Video-LLaVA achieves superior performances on a broad range of 9 image benchmarks across 5 image question-answering datasets and 4 image benchmark toolkits. Additionally, our Video-LLaVA also outperforms Video-ChatGPT by 5.8%, 9.9%, 18.6%, and 10.1% on MSRVTT, MSVD, TGIF, and ActivityNet, respectively. Notably, extensive experiments demonstrate that Video-LLaVA mutually benefits images and videos within a unified visual representation, outperforming models designed specifically for images or videos. We aim for this work to provide modest insights into the multi-modal inputs for the LLM
Usage tips:
We advise users to use `padding_side="left"` when computing batched generation as it leads to more accurate results. Simply make sure to set `processor.tokenizer.padding_side = "left"` before generating; see the short sketch after these tips.
Note that the model has not been explicitly trained to process multiple images/videos in the same prompt; although this is technically possible, you may experience inaccurate results.
Note that video inputs should contain exactly 8 frames, since the models were trained in that setting.
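For example, here is a minimal sketch of batched generation with left padding, assuming a model, processor, and 8-frame video clip prepared as in the usage example below (the second prompt is purely illustrative):
# Batched generation sketch: left padding, two prompts, the same 8-frame clip for each prompt
processor.tokenizer.padding_side = "left"
prompts = [
    "USER: <video>\nWhy is this funny? ASSISTANT:",
    "USER: <video>\nDescribe the video. ASSISTANT:",  # illustrative second prompt
]
inputs = processor(text=prompts, videos=[video, video], padding=True, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=60)
print(processor.batch_decode(out, skip_special_tokens=True))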
This model was contributed by RaushanTurganbay. The original code can be found here.
Usage example
Single Media Mode
The model can accept both images and videos as input. Here is example code for inference in half-precision (`torch.float16`):
import av
import torch
import numpy as np
from huggingface_hub import hf_hub_download
from transformers import VideoLlavaForConditionalGeneration, VideoLlavaProcessor
def read_video_pyav(container, indices):
    '''
    Decode the video with PyAV decoder.
    Args:
        container (`av.container.input.InputContainer`): PyAV container.
        indices (`List[int]`): List of frame indices to decode.
    Returns:
        result (np.ndarray): np array of decoded frames of shape (num_frames, height, width, 3).
    '''
    frames = []
    container.seek(0)
    start_index = indices[0]
    end_index = indices[-1]
    for i, frame in enumerate(container.decode(video=0)):
        if i > end_index:
            break
        if i >= start_index and i in indices:
            frames.append(frame)
    return np.stack([x.to_ndarray(format="rgb24") for x in frames])
# Load the model in half-precision
model = VideoLlavaForConditionalGeneration.from_pretrained("LanguageBind/Video-LLaVA-7B-hf", torch_dtype=torch.float16, device_map="auto")
processor = VideoLlavaProcessor.from_pretrained("LanguageBind/Video-LLaVA-7B-hf")
# Load the video as an np.array, uniformly sampling 8 frames
video_path = hf_hub_download(repo_id="raushan-testing-hf/videos-test", filename="sample_demo_1.mp4", repo_type="dataset")
container = av.open(video_path)
total_frames = container.streams.video[0].frames
indices = np.arange(0, total_frames, total_frames / 8).astype(int)
video = read_video_pyav(container, indices)
# For better results, we recommend prompting the model in the following format
prompt = "USER: <video>\nWhy is this funny? ASSISTANT:"
inputs = processor(text=prompt, videos=video, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=60)
processor.batch_decode(out, skip_special_tokens=True, clean_up_tokenization_spaces=True)
For a multi-turn conversation, change the prompt format to:
"USER: <video>\nWhat do you see in this video? ASSISTANT: A baby reading a book. USER: Why is the it funny? ASSISTANT:"
Mixed Media Mode
The model can also generate from interleaved image-video inputs. Note, however, that it was not trained on interleaved image-video data, which might affect performance. For mixed media input, add the following lines to the code snippet above:
from PIL import Image
import requests
# Generate from image and video mixed inputs
# Load an image and write a new prompt
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
prompt = "USER: <image>\nHow many cats are there in the image? ASSISTANT: There are two cats. USER: <video>\nWhy is this video funny? ASSISTANT:"
inputs = processor(text=prompt, images=image, videos=video, padding=True, return_tensors="pt")
# Generate
generate_ids = model.generate(**inputs, max_length=50)
processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True)
Model optimization
Quantization using Bitsandbytes for memory efficiency
The model can be loaded in lower bits, significantly reducing memory usage while maintaining the performance of the original model. This allows for efficient deployment in resource-constrained settings.
First, make sure to install bitsandbytes by running `pip install bitsandbytes`, and to have access to a GPU/accelerator that is supported by the library.
bitsandbytes is being refactored to support multiple backends beyond CUDA. Currently, ROCm (AMD GPU) and Intel CPU implementations are mature, with Intel XPU in progress and Apple Silicon support expected by Q4/Q1. For installation instructions and the latest backend updates, visit this link.
We value your feedback to help identify bugs before the full release! Check out these docs for more details and feedback links.
Load the quantized model by simply adding BitsAndBytesConfig as shown below:
import torch
from transformers import VideoLlavaForConditionalGeneration, BitsAndBytesConfig
# specify how to quantize the model
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = VideoLlavaForConditionalGeneration.from_pretrained("LanguageBind/Video-LLaVA-7B-hf", quantization_config=quantization_config, device_map="auto")
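Generation with the quantized model then works exactly as before; for example, reusing the processor, prompt, and video from the single-media snippet above:
inputs = processor(text=prompt, videos=video, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=60)
processor.batch_decode(out, skip_special_tokens=True)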
Flash-Attention 2 to speed up generation
Additionally, we can greatly speed up model inference by using Flash Attention, which is a faster implementation of the attention mechanism used inside the model.
First, make sure to install the latest version of Flash Attention 2:
pip install -U flash-attn --no-build-isolation
Also, you should have hardware that is compatible with Flash-Attention 2. Read more about it in the official documentation of the flash attention repository. FlashAttention-2 can only be used when a model is loaded in `torch.float16` or `torch.bfloat16`.
To load and run a model using Flash Attention-2, simply add `attn_implementation="flash_attention_2"` when loading the model as follows:
import torch
from transformers import VideoLlavaForConditionalGeneration
model = VideoLlavaForConditionalGeneration.from_pretrained(
    "LanguageBind/Video-LLaVA-7B-hf",
    torch_dtype=torch.float16,
    attn_implementation="flash_attention_2",
).to(0)
VideoLlavaConfig
class transformers.VideoLlavaConfig
< source >( vision_config = None text_config = None ignore_index = -100 image_token_index = 32000 video_token_index = 32001 projector_hidden_act = 'gelu' vision_feature_select_strategy = 'default' vision_feature_layer = -2 image_seq_length = 256 video_seq_length = 2056 **kwargs )
Parameters
- vision_config (`VideoLlavaVisionConfig`, optional) — Custom vision config or dict. Defaults to `CLIPVisionConfig` if not indicated.
- text_config (`Union[AutoConfig, dict]`, optional) — The config object of the text backbone. Can be any of `LlamaConfig` or `MistralConfig`. Defaults to `LlamaConfig` if not indicated.
- ignore_index (`int`, optional, defaults to -100) — The ignore index for the loss function.
- image_token_index (`int`, optional, defaults to 32000) — The image token index used to encode the image prompt.
- video_token_index (`int`, optional, defaults to 32001) — The video token index used to encode the video prompt.
- projector_hidden_act (`str`, optional, defaults to `"gelu"`) — The activation function used by the multimodal projector.
- vision_feature_select_strategy (`str`, optional, defaults to `"default"`) — The feature selection strategy used to select the vision feature from the CLIP backbone. Can be either `"full"` to select all features or `"default"` to select features without `CLS`.
- vision_feature_layer (`int`, optional, defaults to -2) — The index of the layer to select the vision feature.
- image_seq_length (`int`, optional, defaults to 256) — Sequence length of one image embedding.
- video_seq_length (`int`, optional, defaults to 2056) — Sequence length of one video embedding.
This is the configuration class to store the configuration of a VideoLlavaForConditionalGeneration. It is used to instantiate a VideoLlava model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a configuration similar to that of LanguageBind/Video-LLaVA-7B-hf.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Example:
>>> from transformers import VideoLlavaForConditionalGeneration, VideoLlavaConfig, CLIPVisionConfig, LlamaConfig
>>> # Initializing a CLIP-vision config
>>> vision_config = CLIPVisionConfig()
>>> # Initializing a Llama config
>>> text_config = LlamaConfig()
>>> # Initializing a VideoLlava video_llava-1.5-7b style configuration
>>> configuration = VideoLlavaConfig(vision_config, text_config)
>>> # Initializing a model from the video_llava-1.5-7b style configuration
>>> model = VideoLlavaForConditionalGeneration(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
VideoLlavaImageProcessor
class transformers.VideoLlavaImageProcessor
< source >( do_resize: bool = True size: Dict = None resample: Resampling = <Resampling.BICUBIC: 3> do_center_crop: bool = True crop_size: Dict = None do_rescale: bool = True rescale_factor: Union = 0.00392156862745098 do_normalize: bool = True image_mean: Union = None image_std: Union = None do_convert_rgb: bool = True **kwargs )
Parameters
- do_resize (`bool`, optional, defaults to `True`) — Whether to resize the image's (height, width) dimensions to the specified `size`. Can be overridden by `do_resize` in the `preprocess` method.
- size (`Dict[str, int]`, optional, defaults to `{"shortest_edge": 224}`) — Size of the image after resizing. The shortest edge of the image is resized to `size["shortest_edge"]`, with the longest edge resized to keep the input aspect ratio. Can be overridden by `size` in the `preprocess` method.
- resample (`PILImageResampling`, optional, defaults to `Resampling.BICUBIC`) — Resampling filter to use if resizing the image. Can be overridden by `resample` in the `preprocess` method.
- do_center_crop (`bool`, optional, defaults to `True`) — Whether to center crop the image to the specified `crop_size`. Can be overridden by `do_center_crop` in the `preprocess` method.
- crop_size (`Dict[str, int]`, optional, defaults to 224) — Size of the output image after applying `center_crop`. Can be overridden by `crop_size` in the `preprocess` method.
- do_rescale (`bool`, optional, defaults to `True`) — Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by `do_rescale` in the `preprocess` method.
- rescale_factor (`int` or `float`, optional, defaults to `1/255`) — Scale factor to use if rescaling the image. Can be overridden by `rescale_factor` in the `preprocess` method.
- do_normalize (`bool`, optional, defaults to `True`) — Whether to normalize the image. Can be overridden by `do_normalize` in the `preprocess` method.
- image_mean (`float` or `List[float]`, optional, defaults to `[0.48145466, 0.4578275, 0.40821073]`) — Mean to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method.
- image_std (`float` or `List[float]`, optional, defaults to `[0.26862954, 0.26130258, 0.27577711]`) — Standard deviation to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method.
- do_convert_rgb (`bool`, optional, defaults to `True`) — Whether to convert the image to RGB.
Constructs a CLIP image processor.
preprocess
< source >( images: List = None videos: List = None do_resize: bool = None size: Dict = None resample: Resampling = None do_center_crop: bool = None crop_size: int = None do_rescale: bool = None rescale_factor: float = None do_normalize: bool = None image_mean: Union = None image_std: Union = None do_convert_rgb: bool = None return_tensors: Union = None data_format: Optional = <ChannelDimension.FIRST: 'channels_first'> input_data_format: Union = None )
Parameters
- images (`ImageInput`, optional) — List of images to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If passing in images with pixel values between 0 and 1, set `do_rescale=False`.
- videos (`VideoInput`, optional) — List of videos to preprocess. Expects a single or batch of videos with pixel values ranging from 0 to 255. If passing in videos with pixel values between 0 and 1, set `do_rescale=False`.
- do_resize (`bool`, optional, defaults to `self.do_resize`) — Whether to resize the image.
- size (`Dict[str, int]`, optional, defaults to `self.size`) — Size of the image after resizing. The shortest edge of the image is resized to `size["shortest_edge"]`, with the longest edge resized to keep the input aspect ratio.
- resample (`int`, optional, defaults to `self.resample`) — Resampling filter to use if resizing the image. This can be one of the enum `PILImageResampling`. Only has an effect if `do_resize` is set to `True`.
- do_center_crop (`bool`, optional, defaults to `self.do_center_crop`) — Whether to center crop the image.
- crop_size (`Dict[str, int]`, optional, defaults to `self.crop_size`) — Size of the center crop. Only has an effect if `do_center_crop` is set to `True`.
- do_rescale (`bool`, optional, defaults to `self.do_rescale`) — Whether to rescale the image.
- rescale_factor (`float`, optional, defaults to `self.rescale_factor`) — Rescale factor to rescale the image by if `do_rescale` is set to `True`.
- do_normalize (`bool`, optional, defaults to `self.do_normalize`) — Whether to normalize the image.
- image_mean (`float` or `List[float]`, optional, defaults to `self.image_mean`) — Image mean to use for normalization. Only has an effect if `do_normalize` is set to `True`.
- image_std (`float` or `List[float]`, optional, defaults to `self.image_std`) — Image standard deviation to use for normalization. Only has an effect if `do_normalize` is set to `True`.
- do_convert_rgb (`bool`, optional, defaults to `self.do_convert_rgb`) — Whether to convert the image to RGB.
- return_tensors (`str` or `TensorType`, optional) — The type of tensors to return. Can be one of:
  - Unset: Return a list of `np.ndarray`.
  - `TensorType.TENSORFLOW` or `'tf'`: Return a batch of type `tf.Tensor`.
  - `TensorType.PYTORCH` or `'pt'`: Return a batch of type `torch.Tensor`.
  - `TensorType.NUMPY` or `'np'`: Return a batch of type `np.ndarray`.
  - `TensorType.JAX` or `'jax'`: Return a batch of type `jax.numpy.ndarray`.
- data_format (`ChannelDimension` or `str`, optional, defaults to `ChannelDimension.FIRST`) — The channel dimension format for the output image. Can be one of:
  - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format.
  - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format.
  - Unset: Use the channel dimension format of the input image.
- input_data_format (`ChannelDimension` or `str`, optional) — The channel dimension format for the input image. If unset, the channel dimension format is inferred from the input image. Can be one of:
  - `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format.
  - `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format.
  - `"none"` or `ChannelDimension.NONE`: image in (height, width) format.
Preprocess an image or batch of images.
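For illustration, here is a minimal sketch of preprocessing a video clip; the random dummy clip and the printed shape are assumptions for demonstration, and in practice the frames come from a video decoder as in the usage examples above:
import numpy as np
from transformers import VideoLlavaImageProcessor
image_processor = VideoLlavaImageProcessor.from_pretrained("LanguageBind/Video-LLaVA-7B-hf")
# Dummy clip: 8 RGB frames of shape (num_frames, height, width, channels) with values in [0, 255]
clip = np.random.randint(0, 256, size=(8, 360, 640, 3), dtype=np.uint8)
outputs = image_processor.preprocess(videos=clip, return_tensors="pt")
print(outputs["pixel_values_videos"].shape)  # roughly (1, 8, 3, 224, 224) with the default 224x224 crop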
resize
< source >( image: ndarray size: Dict resample: Resampling = <Resampling.BICUBIC: 3> data_format: Union = None input_data_format: Union = None **kwargs )
Parameters
- image (`np.ndarray`) — Image to resize.
- size (`Dict[str, int]`) — Size of the output image.
- resample (`PILImageResampling`, optional, defaults to `PILImageResampling.BICUBIC`) — Resampling filter to use when resizing the image.
- data_format (`str` or `ChannelDimension`, optional) — The channel dimension format of the image. If not provided, it will be the same as the input image.
- input_data_format (`ChannelDimension` or `str`, optional) — The channel dimension format of the input image. If not provided, it will be inferred.
Resize an image. The shortest edge of the image is resized to `size["shortest_edge"]`, with the longest edge resized to keep the input aspect ratio.
VideoLlavaProcessor
class transformers.VideoLlavaProcessor
< source >( image_processor = None tokenizer = None patch_size = None vision_feature_select_strategy = None image_token = '<image>' video_token = '<video>' chat_template = None **kwargs )
Parameters
- image_processor (VideoLlavaImageProcessor, optional) — The image processor is a required input.
- tokenizer (LlamaTokenizerFast, optional) — The tokenizer is a required input.
- patch_size (`int`, optional) — Patch size from the vision tower.
- vision_feature_select_strategy (`str`, optional) — The feature selection strategy used to select the vision feature from the vision backbone. Should be the same as in the model's config.
- image_token (`str`, optional, defaults to `"<image>"`) — Special token used to denote image location.
- video_token (`str`, optional, defaults to `"<video>"`) — Special token used to denote video location.
- chat_template (`str`, optional) — A Jinja template which will be used to convert lists of messages in a chat into a tokenizable string.
Constructs a VideoLlava processor which wraps a VideoLlava image processor and a Llava tokenizer into a single processor.
VideoLlavaProcessor offers all the functionalities of VideoLlavaImageProcessor and LlamaTokenizerFast. See `__call__()` and decode() for more information.
This method forwards all its arguments to LlamaTokenizerFast’s batch_decode(). Please refer to the docstring of this method for more information.
This method forwards all its arguments to LlamaTokenizerFast’s decode(). Please refer to the docstring of this method for more information.
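For illustration, a minimal sketch of running the processor on a prompt plus a decoded clip, assuming `video` is a (num_frames, height, width, 3) array prepared as in the usage examples above:
from transformers import VideoLlavaProcessor
processor = VideoLlavaProcessor.from_pretrained("LanguageBind/Video-LLaVA-7B-hf")
prompt = "USER: <video>\nWhy is this video funny? ASSISTANT:"
inputs = processor(text=prompt, videos=video, return_tensors="pt")
# `inputs` holds input_ids, attention_mask and pixel_values_videos, ready to be passed to model.generate(**inputs)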
VideoLlavaForConditionalGeneration
class transformers.VideoLlavaForConditionalGeneration
< source >( config: VideoLlavaConfig )
Parameters
- config (VideoLlavaConfig or `VideoLlavaVisionConfig`) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The VideoLlava model which consists of a vision backbone and a language model. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
forward
< source >( input_ids: LongTensor = None pixel_values_images: FloatTensor = None pixel_values_videos: FloatTensor = None attention_mask: Optional = None position_ids: Optional = None past_key_values: Optional = None inputs_embeds: Optional = None vision_feature_layer: Optional = None vision_feature_select_strategy: Optional = None labels: Optional = None use_cache: Optional = None output_attentions: Optional = None output_hidden_states: Optional = None return_dict: Optional = None cache_position: Optional = None num_logits_to_keep: int = 0 ) → transformers.models.video_llava.modeling_video_llava.VideoLlavaCausalLMOutputWithPast
or tuple(torch.FloatTensor)
Parameters
- input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
- pixel_values_images (`torch.FloatTensor` of shape `(batch_size, num_channels, image_size, image_size)`) — The tensors corresponding to the input images. Pixel values can be obtained using AutoImageProcessor. See VideoLlavaImageProcessor.__call__() for details (VideoLlavaProcessor uses VideoLlavaImageProcessor for processing images).
- pixel_values_videos (`torch.FloatTensor` of shape `(batch_size, num_frames, num_channels, image_size, image_size)`) — The tensors corresponding to the input videos. Pixel values can be obtained using AutoImageProcessor. See VideoLlavaImageProcessor.__call__() for details (VideoLlavaProcessor uses VideoLlavaImageProcessor for processing videos).
- attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, optional) — Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
  - 1 for tokens that are not masked,
  - 0 for tokens that are masked.
  Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details. If `past_key_values` is used, optionally only the last `decoder_input_ids` have to be input (see `past_key_values`). If you want to change padding behavior, you should read `modeling_opt._prepare_decoder_attention_mask` and modify to your needs. See diagram 1 in the paper for more information on the default strategy.
  - 1 indicates the head is not masked,
  - 0 indicates the head is masked.
- position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, config.n_positions - 1]`. What are position IDs?
- past_key_values (`tuple(tuple(torch.FloatTensor))`, optional, returned when `use_cache=True` is passed or when `config.use_cache=True`) — Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape `(batch_size, num_heads, sequence_length, embed_size_per_head)` and 2 additional tensors of shape `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`. Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding. If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `decoder_input_ids` of shape `(batch_size, sequence_length)`.
- inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, optional) — Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- vision_feature_layer (`int`, optional, defaults to -2) — The index of the layer to select the vision feature.
- vision_feature_select_strategy (`str`, optional, defaults to `"default"`) — The feature selection strategy used to select the vision feature from the vision backbone. Can be one of `"default"` or `"full"`.
- labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, optional) — Labels for computing the masked language modeling loss. Indices should either be in `[0, ..., config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored (masked); the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
- use_cache (`bool`, optional) — If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see `past_key_values`).
- output_attentions (`bool`, optional) — Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
- output_hidden_states (`bool`, optional) — Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for more detail.
- return_dict (`bool`, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
- cache_position (`torch.LongTensor` of shape `(sequence_length)`, optional) — Indices depicting the position of the input sequence tokens in the sequence. Contrarily to `position_ids`, this tensor is not affected by padding. It is used to update the cache in the correct position and to infer the complete sequence length.
- num_logits_to_keep (`int`, optional) — Calculate logits for the last `num_logits_to_keep` tokens. If `0`, calculate logits for all `input_ids` (special case). Only last token logits are needed for generation, and calculating them only for that token can save memory, which becomes pretty significant for long sequences or large vocabulary size.
Returns
`transformers.models.video_llava.modeling_video_llava.VideoLlavaCausalLMOutputWithPast` or `tuple(torch.FloatTensor)`
A `transformers.models.video_llava.modeling_video_llava.VideoLlavaCausalLMOutputWithPast` or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various elements depending on the configuration (VideoLlavaConfig) and inputs.
- loss (`torch.FloatTensor` of shape `(1,)`, optional, returned when `labels` is provided) — Language modeling loss (for next-token prediction).
- logits (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
- past_key_values (`tuple(tuple(torch.FloatTensor))`, optional, returned when `use_cache=True` is passed or when `config.use_cache=True`) — Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape `(batch_size, num_heads, sequence_length, embed_size_per_head)`. Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.
- hidden_states (`tuple(torch.FloatTensor)`, optional, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) — Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- attentions (`tuple(torch.FloatTensor)`, optional, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) — Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
- image_hidden_states (`torch.FloatTensor`, optional) — A `torch.FloatTensor` of size `(batch_size, num_images, sequence_length, hidden_size)`. image_hidden_states of the model produced by the vision encoder and after projecting the last hidden state.
- video_hidden_states (`torch.FloatTensor`, optional) — A `torch.FloatTensor` of size `(batch_size * num_frames, num_videos, sequence_length, hidden_size)`. video_hidden_states of the model produced by the vision encoder and after projecting the last hidden state.
The VideoLlavaForConditionalGeneration forward method overrides the `__call__` special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example:
>>> from PIL import Image
>>> import requests
>>> import numpy as np
>>> import av
>>> from huggingface_hub import hf_hub_download
>>> from transformers import VideoLlavaProcessor, VideoLlavaForConditionalGeneration
>>> def read_video_pyav(container, indices):
... '''
... Decode the video with PyAV decoder.
... Args:
... container (`av.container.input.InputContainer`): PyAV container.
... indices (`List[int]`): List of frame indices to decode.
... Returns:
... result (np.ndarray): np array of decoded frames of shape (num_frames, height, width, 3).
... '''
... frames = []
... container.seek(0)
... start_index = indices[0]
... end_index = indices[-1]
... for i, frame in enumerate(container.decode(video=0)):
... if i > end_index:
... break
... if i >= start_index and i in indices:
... frames.append(frame)
... return np.stack([x.to_ndarray(format="rgb24") for x in frames])
>>> model = VideoLlavaForConditionalGeneration.from_pretrained("LanguageBind/Video-LLaVA-7B-hf")
>>> processor = VideoLlavaProcessor.from_pretrained("LanguageBind/Video-LLaVA-7B-hf")
>>> prompt = "USER: <video>\nWhy is this video funny? ASSISTANT:"
>>> video_path = hf_hub_download(repo_id="raushan-testing-hf/videos-test", filename="sample_demo_1.mp4", repo_type="dataset")
>>> container = av.open(video_path)
>>> # sample uniformly 8 frames from the video
>>> total_frames = container.streams.video[0].frames
>>> indices = np.arange(0, total_frames, total_frames / 8).astype(int)
>>> clip = read_video_pyav(container, indices)
>>> inputs = processor(text=prompt, videos=clip, return_tensors="pt")
>>> # Generate
>>> generate_ids = model.generate(**inputs, max_length=80)
>>> processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
"USER: Why is this video funny? ASSISTANT: The video is funny because the baby is playing with a Wii remote while sitting on the floor, and the baby is wearing glasses.Ъ. The baby's actions are amusing because it is a young child trying to interact with a video game, which is not a typical activity for a"
>>> # to generate from image and video mix
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> prompt = [
... "USER: <image>\nHow many cats do you see? ASSISTANT:",
... "USER: <video>\nWhy is this video funny? ASSISTANT:"
... ]
>>> inputs = processor(text=prompt, images=image, videos=clip, padding=True, return_tensors="pt")
>>> # Generate
>>> generate_ids = model.generate(**inputs, max_length=50)
>>> processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True)
['USER: How many cats do you see? ASSISTANT: There are two cats visible in the image. (or three, if you count the one in the background).', 'USER: Why is this video funny? ASSISTANT: The video is funny because it shows a baby sitting on a bed and playing with a Wii remote.Ъ. The baby is holding the remote']