GLPN This is a recently introduced model so the API hasn't been tested extensively. There may be some bugs or slight breaking changes to fix it in the future. If you see something strange, file a Github Issue. Overview The GLPN model was proposed in Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim. GLPN combines SegFormer's hierarchical mix-Transformer with a lightweight decoder for monocular depth estimation. The proposed decoder shows better performance than the previously proposed decoders, with considerably less computational complexity. The abstract from the paper is the following: Depth estimation from a single image is an important task that can be applied to various fields in computer vision, and has grown rapidly with the development of convolutional neural networks. In this paper, we propose a novel structure and training strategy for monocular depth estimation to further improve the prediction accuracy of the network. We deploy a hierarchical transformer encoder to capture and convey the global context, and design a lightweight yet powerful decoder to generate an estimated depth map while considering local connectivity. By constructing connected paths between multi-scale local features and the global decoding stream with our proposed selective feature fusion module, the network can integrate both representations and recover fine details. In addition, the proposed decoder shows better performance than the previously proposed decoders, with considerably less computational complexity. Furthermore, we improve the depth-specific augmentation method by utilizing an important observation in depth estimation to enhance the model. Our network achieves state-of-the-art performance over the challenging depth dataset NYU Depth V2. Extensive experiments have been conducted to validate and show the effectiveness of the proposed approach. Finally, our model shows better generalisation ability and robustness than other comparative models. Summary of the approach. Taken from the original paper. This model was contributed by nielsr. The original code can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with GLPN. Demo notebooks for [GLPNForDepthEstimation] can be found here. Monocular depth estimation task guide GLPNConfig [[autodoc]] GLPNConfig GLPNFeatureExtractor [[autodoc]] GLPNFeatureExtractor - call GLPNImageProcessor [[autodoc]] GLPNImageProcessor - preprocess GLPNModel [[autodoc]] GLPNModel - forward GLPNForDepthEstimation [[autodoc]] GLPNForDepthEstimation - forward
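To complement the API listing above, here is a minimal monocular depth-estimation sketch with [GLPNImageProcessor] and [GLPNForDepthEstimation]. The vinvino02/glpn-kitti checkpoint name and the sample image URL are assumptions; any GLPN depth-estimation checkpoint should work the same way.

```python
import numpy as np
import requests
import torch
from PIL import Image
from transformers import GLPNImageProcessor, GLPNForDepthEstimation

# checkpoint name is an assumption; substitute the GLPN checkpoint you want to use
processor = GLPNImageProcessor.from_pretrained("vinvino02/glpn-kitti")
model = GLPNForDepthEstimation.from_pretrained("vinvino02/glpn-kitti")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
    predicted_depth = outputs.predicted_depth  # shape (batch, height, width)

# interpolate the prediction back to the original image size and visualize it
prediction = torch.nn.functional.interpolate(
    predicted_depth.unsqueeze(1), size=image.size[::-1], mode="bicubic", align_corners=False
).squeeze()
depth = (prediction * 255 / prediction.max()).cpu().numpy().astype(np.uint8)
Image.fromarray(depth)
```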
Gemma Overview The Gemma model was proposed in Gemma: Open Models Based on Gemini Research and Technology by the Gemma Team, Google. Gemma models are trained on 6T tokens and released in two sizes, 2B and 7B. The abstract from the paper is the following: This work introduces Gemma, a new family of open language models demonstrating strong performance across academic benchmarks for language understanding, reasoning, and safety. We release two sizes of models (2 billion and 7 billion parameters), and provide both pretrained and fine-tuned checkpoints. Gemma outperforms similarly sized open models on 11 out of 18 text-based tasks, and we present comprehensive evaluations of safety and responsibility aspects of the models, alongside a detailed description of our model development. We believe the responsible release of LLMs is critical for improving the safety of frontier models, and for enabling the next wave of LLM innovations. Tips: The original checkpoints can be converted using the conversion script src/transformers/models/gemma/convert_gemma_weights_to_hf.py. This model was contributed by Arthur Zucker, Younes Belkada, Sanchit Gandhi, Pedro Cuenca. GemmaConfig [[autodoc]] GemmaConfig GemmaTokenizer [[autodoc]] GemmaTokenizer GemmaTokenizerFast [[autodoc]] GemmaTokenizerFast GemmaModel [[autodoc]] GemmaModel - forward GemmaForCausalLM [[autodoc]] GemmaForCausalLM - forward GemmaForSequenceClassification [[autodoc]] GemmaForSequenceClassification - forward FlaxGemmaModel [[autodoc]] FlaxGemmaModel - call FlaxGemmaForCausalLM [[autodoc]] FlaxGemmaForCausalLM - call
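A minimal text-generation sketch using the classes listed above. The google/gemma-2b checkpoint name is an assumption; use whichever released Gemma checkpoint you have access to.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# checkpoint name is an assumption; substitute any released Gemma checkpoint
checkpoint = "google/gemma-2b"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```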
WavLM Overview The WavLM model was proposed in WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei. The abstract from the paper is the following: Self-supervised learning (SSL) achieves great success in speech recognition, while limited exploration has been attempted for other speech processing tasks. As speech signal contains multi-faceted information including speaker identity, paralinguistics, spoken content, etc., learning universal representations for all speech tasks is challenging. In this paper, we propose a new pre-trained model, WavLM, to solve full-stack downstream speech tasks. WavLM is built based on the HuBERT framework, with an emphasis on both spoken content modeling and speaker identity preservation. We first equip the Transformer structure with gated relative position bias to improve its capability on recognition tasks. For better speaker discrimination, we propose an utterance mixing training strategy, where additional overlapped utterances are created unsupervisedly and incorporated during model training. Lastly, we scale up the training dataset from 60k hours to 94k hours. WavLM Large achieves state-of-the-art performance on the SUPERB benchmark, and brings significant improvements for various speech processing tasks on their representative benchmarks. Relevant checkpoints can be found under https://huggingface.co/models?other=wavlm. This model was contributed by patrickvonplaten. The Authors' code can be found here. Usage tips WavLM is a speech model that accepts a float array corresponding to the raw waveform of the speech signal. Please use [Wav2Vec2Processor] for the feature extraction. WavLM model can be fine-tuned using connectionist temporal classification (CTC) so the model output has to be decoded using [Wav2Vec2CTCTokenizer]. WavLM performs especially well on speaker verification, speaker identification, and speaker diarization tasks. Resources Audio classification task guide Automatic speech recognition task guide WavLMConfig [[autodoc]] WavLMConfig WavLMModel [[autodoc]] WavLMModel - forward WavLMForCTC [[autodoc]] WavLMForCTC - forward WavLMForSequenceClassification [[autodoc]] WavLMForSequenceClassification - forward WavLMForAudioFrameClassification [[autodoc]] WavLMForAudioFrameClassification - forward WavLMForXVector [[autodoc]] WavLMForXVector - forward
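As a concrete illustration of the usage tips above (feature extraction with [Wav2Vec2Processor], CTC decoding via [Wav2Vec2CTCTokenizer]), here is a short speech-recognition sketch. The checkpoint name is an assumption; substitute any WavLM checkpoint fine-tuned with CTC.

```python
import torch
from datasets import load_dataset
from transformers import Wav2Vec2Processor, WavLMForCTC

# checkpoint name is an assumption; use any CTC fine-tuned WavLM checkpoint
checkpoint = "patrickvonplaten/wavlm-libri-clean-100h-base-plus"
processor = Wav2Vec2Processor.from_pretrained(checkpoint)
model = WavLMForCTC.from_pretrained(checkpoint)

# a small public speech sample for demonstration
ds = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))  # CTC decoding via the wrapped Wav2Vec2CTCTokenizer
```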
UniSpeech-SAT Overview The UniSpeech-SAT model was proposed in UniSpeech-SAT: Universal Speech Representation Learning with Speaker Aware Pre-Training by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu . The abstract from the paper is the following: Self-supervised learning (SSL) is a long-standing goal for speech processing, since it utilizes large-scale unlabeled data and avoids extensive human labeling. Recent years witness great successes in applying self-supervised learning in speech recognition, while limited exploration was attempted in applying SSL for modeling speaker characteristics. In this paper, we aim to improve the existing SSL framework for speaker representation learning. Two methods are introduced for enhancing the unsupervised speaker information extraction. First, we apply the multi-task learning to the current SSL framework, where we integrate the utterance-wise contrastive loss with the SSL objective function. Second, for better speaker discrimination, we propose an utterance mixing strategy for data augmentation, where additional overlapped utterances are created unsupervisedly and incorporate during training. We integrate the proposed methods into the HuBERT framework. Experiment results on SUPERB benchmark show that the proposed system achieves state-of-the-art performance in universal representation learning, especially for speaker identification oriented tasks. An ablation study is performed verifying the efficacy of each proposed method. Finally, we scale up training dataset to 94 thousand hours public audio data and achieve further performance improvement in all SUPERB tasks. This model was contributed by patrickvonplaten. The Authors' code can be found here. Usage tips UniSpeechSat is a speech model that accepts a float array corresponding to the raw waveform of the speech signal. Please use [Wav2Vec2Processor] for the feature extraction. UniSpeechSat model can be fine-tuned using connectionist temporal classification (CTC) so the model output has to be decoded using [Wav2Vec2CTCTokenizer]. UniSpeechSat performs especially well on speaker verification, speaker identification, and speaker diarization tasks. Resources Audio classification task guide Automatic speech recognition task guide UniSpeechSatConfig [[autodoc]] UniSpeechSatConfig UniSpeechSat specific outputs [[autodoc]] models.unispeech_sat.modeling_unispeech_sat.UniSpeechSatForPreTrainingOutput UniSpeechSatModel [[autodoc]] UniSpeechSatModel - forward UniSpeechSatForCTC [[autodoc]] UniSpeechSatForCTC - forward UniSpeechSatForSequenceClassification [[autodoc]] UniSpeechSatForSequenceClassification - forward UniSpeechSatForAudioFrameClassification [[autodoc]] UniSpeechSatForAudioFrameClassification - forward UniSpeechSatForXVector [[autodoc]] UniSpeechSatForXVector - forward UniSpeechSatForPreTraining [[autodoc]] UniSpeechSatForPreTraining - forward
DPR Overview Dense Passage Retrieval (DPR) is a set of tools and models for state-of-the-art open-domain Q&A research. It was introduced in Dense Passage Retrieval for Open-Domain Question Answering by Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, Wen-tau Yih. The abstract from the paper is the following: Open-domain question answering relies on efficient passage retrieval to select candidate contexts, where traditional sparse vector space models, such as TF-IDF or BM25, are the de facto method. In this work, we show that retrieval can be practically implemented using dense representations alone, where embeddings are learned from a small number of questions and passages by a simple dual-encoder framework. When evaluated on a wide range of open-domain QA datasets, our dense retriever outperforms a strong Lucene-BM25 system largely by 9%-19% absolute in terms of top-20 passage retrieval accuracy, and helps our end-to-end QA system establish new state-of-the-art on multiple open-domain QA benchmarks. This model was contributed by lhoestq. The original code can be found here. Usage tips DPR consists in three models: Question encoder: encode questions as vectors Context encoder: encode contexts as vectors Reader: extract the answer of the questions inside retrieved contexts, along with a relevance score (high if the inferred span actually answers the question). DPRConfig [[autodoc]] DPRConfig DPRContextEncoderTokenizer [[autodoc]] DPRContextEncoderTokenizer DPRContextEncoderTokenizerFast [[autodoc]] DPRContextEncoderTokenizerFast DPRQuestionEncoderTokenizer [[autodoc]] DPRQuestionEncoderTokenizer DPRQuestionEncoderTokenizerFast [[autodoc]] DPRQuestionEncoderTokenizerFast DPRReaderTokenizer [[autodoc]] DPRReaderTokenizer DPRReaderTokenizerFast [[autodoc]] DPRReaderTokenizerFast DPR specific outputs [[autodoc]] models.dpr.modeling_dpr.DPRContextEncoderOutput [[autodoc]] models.dpr.modeling_dpr.DPRQuestionEncoderOutput [[autodoc]] models.dpr.modeling_dpr.DPRReaderOutput DPRContextEncoder [[autodoc]] DPRContextEncoder - forward DPRQuestionEncoder [[autodoc]] DPRQuestionEncoder - forward DPRReader [[autodoc]] DPRReader - forward TFDPRContextEncoder [[autodoc]] TFDPRContextEncoder - call TFDPRQuestionEncoder [[autodoc]] TFDPRQuestionEncoder - call TFDPRReader [[autodoc]] TFDPRReader - call
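To illustrate how the question and context encoders work together, here is a small retrieval-scoring sketch using the single-nq checkpoints from the original release (treat the exact checkpoint names as assumptions if they have moved).

```python
import torch
from transformers import (
    DPRContextEncoder,
    DPRContextEncoderTokenizer,
    DPRQuestionEncoder,
    DPRQuestionEncoderTokenizer,
)

q_tok = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
q_enc = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
ctx_tok = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
ctx_enc = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")

question = "Who wrote Pride and Prejudice?"
passages = [
    "Pride and Prejudice is a novel by Jane Austen.",
    "The Eiffel Tower is located in Paris.",
]

with torch.no_grad():
    q_emb = q_enc(**q_tok(question, return_tensors="pt")).pooler_output                    # (1, hidden)
    p_emb = ctx_enc(**ctx_tok(passages, padding=True, return_tensors="pt")).pooler_output  # (2, hidden)

scores = q_emb @ p_emb.T  # dot-product relevance scores, higher is more relevant
print(scores)
```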
REALM Overview The REALM model was proposed in REALM: Retrieval-Augmented Language Model Pre-Training by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang. It's a retrieval-augmented language model that firstly retrieves documents from a textual knowledge corpus and then utilizes retrieved documents to process question answering tasks. The abstract from the paper is the following: Language model pre-training has been shown to capture a surprising amount of world knowledge, crucial for NLP tasks such as question answering. However, this knowledge is stored implicitly in the parameters of a neural network, requiring ever-larger networks to cover more facts. To capture knowledge in a more modular and interpretable way, we augment language model pre-training with a latent knowledge retriever, which allows the model to retrieve and attend over documents from a large corpus such as Wikipedia, used during pre-training, fine-tuning and inference. For the first time, we show how to pre-train such a knowledge retriever in an unsupervised manner, using masked language modeling as the learning signal and backpropagating through a retrieval step that considers millions of documents. We demonstrate the effectiveness of Retrieval-Augmented Language Model pre-training (REALM) by fine-tuning on the challenging task of Open-domain Question Answering (Open-QA). We compare against state-of-the-art models for both explicit and implicit knowledge storage on three popular Open-QA benchmarks, and find that we outperform all previous methods by a significant margin (4-16% absolute accuracy), while also providing qualitative benefits such as interpretability and modularity. This model was contributed by qqaatw. The original code can be found here. RealmConfig [[autodoc]] RealmConfig RealmTokenizer [[autodoc]] RealmTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary - batch_encode_candidates RealmTokenizerFast [[autodoc]] RealmTokenizerFast - batch_encode_candidates RealmRetriever [[autodoc]] RealmRetriever RealmEmbedder [[autodoc]] RealmEmbedder - forward RealmScorer [[autodoc]] RealmScorer - forward RealmKnowledgeAugEncoder [[autodoc]] RealmKnowledgeAugEncoder - forward RealmReader [[autodoc]] RealmReader - forward RealmForOpenQA [[autodoc]] RealmForOpenQA - block_embedding_to - forward
MGP-STR Overview The MGP-STR model was proposed in Multi-Granularity Prediction for Scene Text Recognition by Peng Wang, Cheng Da, and Cong Yao. MGP-STR is a conceptually simple yet powerful vision Scene Text Recognition (STR) model, which is built upon the Vision Transformer (ViT). To integrate linguistic knowledge, Multi-Granularity Prediction (MGP) strategy is proposed to inject information from the language modality into the model in an implicit way. The abstract from the paper is the following: Scene text recognition (STR) has been an active research topic in computer vision for years. To tackle this challenging problem, numerous innovative methods have been successively proposed and incorporating linguistic knowledge into STR models has recently become a prominent trend. In this work, we first draw inspiration from the recent progress in Vision Transformer (ViT) to construct a conceptually simple yet powerful vision STR model, which is built upon ViT and outperforms previous state-of-the-art models for scene text recognition, including both pure vision models and language-augmented methods. To integrate linguistic knowledge, we further propose a Multi-Granularity Prediction strategy to inject information from the language modality into the model in an implicit way, i.e. , subword representations (BPE and WordPiece) widely-used in NLP are introduced into the output space, in addition to the conventional character level representation, while no independent language model (LM) is adopted. The resultant algorithm (termed MGP-STR) is able to push the performance envelop of STR to an even higher level. Specifically, it achieves an average recognition accuracy of 93.35% on standard benchmarks. MGP-STR architecture. Taken from the original paper. MGP-STR is trained on two synthetic datasets MJSynth (MJ) and SynthText (ST) without fine-tuning on other datasets. It achieves state-of-the-art results on six standard Latin scene text benchmarks, including 3 regular text datasets (IC13, SVT, IIIT) and 3 irregular ones (IC15, SVTP, CUTE). This model was contributed by yuekun. The original code can be found here. Inference example [MgpstrModel] accepts images as input and generates three types of predictions, which represent textual information at different granularities. The three types of predictions are fused to give the final prediction result. The [ViTImageProcessor] class is responsible for preprocessing the input image and [MgpstrTokenizer] decodes the generated character tokens to the target string. The [MgpstrProcessor] wraps [ViTImageProcessor] and [MgpstrTokenizer] into a single instance to both extract the input features and decode the predicted token ids. 
Step-by-step Optical Character Recognition (OCR):

```python
from transformers import MgpstrProcessor, MgpstrForSceneTextRecognition
import requests
from PIL import Image

processor = MgpstrProcessor.from_pretrained('alibaba-damo/mgp-str-base')
model = MgpstrForSceneTextRecognition.from_pretrained('alibaba-damo/mgp-str-base')

# load image from the IIIT-5k dataset
url = "https://i.postimg.cc/ZKwLg2Gw/367-14.png"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

pixel_values = processor(images=image, return_tensors="pt").pixel_values
outputs = model(pixel_values)

generated_text = processor.batch_decode(outputs.logits)['generated_text']
```

MgpstrConfig [[autodoc]] MgpstrConfig MgpstrTokenizer [[autodoc]] MgpstrTokenizer - save_vocabulary MgpstrProcessor [[autodoc]] MgpstrProcessor - call - batch_decode MgpstrModel [[autodoc]] MgpstrModel - forward MgpstrForSceneTextRecognition [[autodoc]] MgpstrForSceneTextRecognition - forward
MEGA Overview The MEGA model was proposed in Mega: Moving Average Equipped Gated Attention by Xuezhe Ma, Chunting Zhou, Xiang Kong, Junxian He, Liangke Gui, Graham Neubig, Jonathan May, and Luke Zettlemoyer. MEGA proposes a new approach to self-attention with each encoder layer having a multi-headed exponential moving average in addition to a single head of standard dot-product attention, giving the attention mechanism stronger positional biases. This allows MEGA to perform competitively to Transformers on standard benchmarks including LRA while also having significantly fewer parameters. MEGA's compute efficiency allows it to scale to very long sequences, making it an attractive option for long-document NLP tasks. The abstract from the paper is the following: *The design choices in the Transformer attention mechanism, including weak inductive bias and quadratic computational complexity, have limited its application for modeling long sequences. In this paper, we introduce Mega, a simple, theoretically grounded, single-head gated attention mechanism equipped with (exponential) moving average to incorporate inductive bias of position-aware local dependencies into the position-agnostic attention mechanism. We further propose a variant of Mega that offers linear time and space complexity yet yields only minimal quality loss, by efficiently splitting the whole sequence into multiple chunks with fixed length. Extensive experiments on a wide range of sequence modeling benchmarks, including the Long Range Arena, neural machine translation, auto-regressive language modeling, and image and speech classification, show that Mega achieves significant improvements over other sequence models, including variants of Transformers and recent state space models. * This model was contributed by mnaylor. The original code can be found here. Usage tips MEGA can perform quite well with relatively few parameters. See Appendix D in the MEGA paper for examples of architectural specs which perform well in various settings. If using MEGA as a decoder, be sure to set bidirectional=False to avoid errors with default bidirectional. Mega-chunk is a variant of mega that reduces time and spaces complexity from quadratic to linear. Utilize chunking with MegaConfig.use_chunking and control chunk size with MegaConfig.chunk_size Implementation Notes The original implementation of MEGA had an inconsistent expectation of attention masks for padding and causal self-attention between the softmax attention and Laplace/squared ReLU method. This implementation addresses that inconsistency. The original implementation did not include token type embeddings; this implementation adds support for these, with the option controlled by MegaConfig.add_token_type_embeddings MegaConfig [[autodoc]] MegaConfig MegaModel [[autodoc]] MegaModel - forward MegaForCausalLM [[autodoc]] MegaForCausalLM - forward MegaForMaskedLM [[autodoc]] MegaForMaskedLM - forward MegaForSequenceClassification [[autodoc]] MegaForSequenceClassification - forward MegaForMultipleChoice [[autodoc]] MegaForMultipleChoice - forward MegaForTokenClassification [[autodoc]] MegaForTokenClassification - forward MegaForQuestionAnswering [[autodoc]] MegaForQuestionAnswering - forward
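The chunking and decoder tips above map directly to configuration flags. Below is a minimal sketch with randomly initialized weights; the parameter names follow the documented [MegaConfig] options, and the chunk size of 128 is an arbitrary illustrative choice.

```python
from transformers import MegaConfig, MegaModel, MegaForCausalLM

# Mega-chunk: linear time and space complexity via fixed-length chunks
chunked_config = MegaConfig(use_chunking=True, chunk_size=128)
chunked_model = MegaModel(chunked_config)  # randomly initialized, for illustration only

# when using MEGA as a decoder, disable the default bidirectional EMA to avoid errors
decoder_config = MegaConfig(is_decoder=True, bidirectional=False)
decoder = MegaForCausalLM(decoder_config)
```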
CodeGen Overview The CodeGen model was proposed in A Conversational Paradigm for Program Synthesis by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. CodeGen is an autoregressive language model for program synthesis trained sequentially on The Pile, BigQuery, and BigPython. The abstract from the paper is the following: Program synthesis strives to generate a computer program as a solution to a given problem specification. We propose a conversational program synthesis approach via large language models, which addresses the challenges of searching over a vast program space and user intent specification faced in prior approaches. Our new approach casts the process of writing a specification and program as a multi-turn conversation between a user and a system. It treats program synthesis as a sequence prediction problem, in which the specification is expressed in natural language and the desired program is conditionally sampled. We train a family of large language models, called CodeGen, on natural language and programming language data. With weak supervision in the data and the scaling up of data size and model size, conversational capacities emerge from the simple autoregressive language modeling. To study the model behavior on conversational program synthesis, we develop a multi-turn programming benchmark (MTPB), where solving each problem requires multi-step synthesis via multi-turn conversation between the user and the model. Our findings show the emergence of conversational capabilities and the effectiveness of the proposed conversational program synthesis paradigm. In addition, our model CodeGen (with up to 16B parameters trained on TPU-v4) outperforms OpenAI's Codex on the HumanEval benchmark. We make the training library JaxFormer including checkpoints available as open source contribution: this https URL. This model was contributed by Hiroaki Hayashi. The original code can be found here. Checkpoint Naming CodeGen model checkpoints are available on different pre-training data with variable sizes. The format is Salesforce/codegen-{size}-{data}, where:

- size: 350M, 2B, 6B, 16B
- data:
  - nl: Pre-trained on the Pile
  - multi: Initialized with nl, then further pre-trained on multiple programming languages data
  - mono: Initialized with multi, then further pre-trained on Python data

For example, Salesforce/codegen-350M-mono offers a 350 million-parameter checkpoint pre-trained sequentially on the Pile, multiple programming languages, and Python. Usage example

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "Salesforce/codegen-350M-mono"
model = AutoModelForCausalLM.from_pretrained(checkpoint)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

text = "def hello_world():"
completion = model.generate(**tokenizer(text, return_tensors="pt"))
print(tokenizer.decode(completion[0]))
# def hello_world():
#     print("Hello World")
#
# hello_world()
```

Resources Causal language modeling task guide CodeGenConfig [[autodoc]] CodeGenConfig - all CodeGenTokenizer [[autodoc]] CodeGenTokenizer - save_vocabulary CodeGenTokenizerFast [[autodoc]] CodeGenTokenizerFast CodeGenModel [[autodoc]] CodeGenModel - forward CodeGenForCausalLM [[autodoc]] CodeGenForCausalLM - forward
ProphetNet Overview The ProphetNet model was proposed in ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training, by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang, Ming Zhou on 13 Jan, 2020. ProphetNet is an encoder-decoder model and can predict n-future tokens for "ngram" language modeling instead of just the next token. The abstract from the paper is the following: In this paper, we present a new sequence-to-sequence pretraining model called ProphetNet, which introduces a novel self-supervised objective named future n-gram prediction and the proposed n-stream self-attention mechanism. Instead of the optimization of one-step ahead prediction in traditional sequence-to-sequence model, the ProphetNet is optimized by n-step ahead prediction which predicts the next n tokens simultaneously based on previous context tokens at each time step. The future n-gram prediction explicitly encourages the model to plan for the future tokens and prevent overfitting on strong local correlations. We pre-train ProphetNet using a base scale dataset (16GB) and a large scale dataset (160GB) respectively. Then we conduct experiments on CNN/DailyMail, Gigaword, and SQuAD 1.1 benchmarks for abstractive summarization and question generation tasks. Experimental results show that ProphetNet achieves new state-of-the-art results on all these datasets compared to the models using the same scale pretraining corpus. The Authors' code can be found here. Usage tips ProphetNet is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than the left. The model architecture is based on the original Transformer, but replaces the “standard” self-attention mechanism in the decoder by a a main self-attention mechanism and a self and n-stream (predict) self-attention mechanism. Resources Causal language modeling task guide Translation task guide Summarization task guide ProphetNetConfig [[autodoc]] ProphetNetConfig ProphetNetTokenizer [[autodoc]] ProphetNetTokenizer ProphetNet specific outputs [[autodoc]] models.prophetnet.modeling_prophetnet.ProphetNetSeq2SeqLMOutput [[autodoc]] models.prophetnet.modeling_prophetnet.ProphetNetSeq2SeqModelOutput [[autodoc]] models.prophetnet.modeling_prophetnet.ProphetNetDecoderModelOutput [[autodoc]] models.prophetnet.modeling_prophetnet.ProphetNetDecoderLMOutput ProphetNetModel [[autodoc]] ProphetNetModel - forward ProphetNetEncoder [[autodoc]] ProphetNetEncoder - forward ProphetNetDecoder [[autodoc]] ProphetNetDecoder - forward ProphetNetForConditionalGeneration [[autodoc]] ProphetNetForConditionalGeneration - forward ProphetNetForCausalLM [[autodoc]] ProphetNetForCausalLM - forward
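A short abstractive-summarization sketch with [ProphetNetForConditionalGeneration]. The CNN/DailyMail fine-tuned checkpoint name and the example article are assumptions; swap in the checkpoint you actually use.

```python
from transformers import ProphetNetTokenizer, ProphetNetForConditionalGeneration

# checkpoint name is an assumption; any summarization-fine-tuned ProphetNet checkpoint works similarly
checkpoint = "microsoft/prophetnet-large-uncased-cnndm"
tokenizer = ProphetNetTokenizer.from_pretrained(checkpoint)
model = ProphetNetForConditionalGeneration.from_pretrained(checkpoint)

article = "The city council approved a new plan to expand the public transit network over the next five years."
# inputs are padded on the right, per the usage tip about absolute position embeddings
inputs = tokenizer(article, truncation=True, return_tensors="pt")

summary_ids = model.generate(**inputs, num_beams=4, max_length=60)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True))
```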
GPT-NeoX Overview We introduce GPT-NeoX-20B, a 20 billion parameter autoregressive language model trained on the Pile, whose weights will be made freely and openly available to the public through a permissive license. It is, to the best of our knowledge, the largest dense autoregressive model that has publicly available weights at the time of submission. In this work, we describe GPT-NeoX-20B's architecture and training and evaluate its performance on a range of language-understanding, mathematics, and knowledge-based tasks. We find that GPT-NeoX-20B is a particularly powerful few-shot reasoner and gains far more in performance when evaluated five-shot than similarly sized GPT-3 and FairSeq models. We open-source the training and evaluation code, as well as the model weights, at https://github.com/EleutherAI/gpt-neox. Development of the model was led by Sid Black, Stella Biderman and Eric Hallahan, and the model was trained with the generous support of CoreWeave. GPT-NeoX-20B was trained with fp16, thus it is recommended to initialize the model as follows:

```python
model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b").half().cuda()
```

GPT-NeoX-20B also has a different tokenizer from the one used in GPT-J-6B and GPT-Neo. The new tokenizer allocates additional tokens to whitespace characters, making the model more suitable for certain tasks like code generation. Usage example The generate() method can be used to generate text using the GPT-NeoX model.

```python
from transformers import GPTNeoXForCausalLM, GPTNeoXTokenizerFast

model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b")
tokenizer = GPTNeoXTokenizerFast.from_pretrained("EleutherAI/gpt-neox-20b")

prompt = "GPTNeoX20B is a 20B-parameter autoregressive Transformer model developed by EleutherAI."
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

gen_tokens = model.generate(
    input_ids,
    do_sample=True,
    temperature=0.9,
    max_length=100,
)
gen_text = tokenizer.batch_decode(gen_tokens)[0]
```

Using Flash Attention 2 Flash Attention 2 is a faster, optimized implementation of the attention computation. Installation First, check whether your hardware is compatible with Flash Attention 2. The latest list of compatible hardware can be found in the official documentation. If your hardware is not compatible with Flash Attention 2, you can still benefit from attention kernel optimisations through Better Transformer support. Next, install the latest version of Flash Attention 2:

```bash
pip install -U flash-attn --no-build-isolation
```

Usage To load a model using Flash Attention 2, we can pass the argument attn_implementation="flash_attention_2" to .from_pretrained. We'll also load the model in half-precision (e.g. torch.float16), since it results in almost no degradation in quality but significantly lower memory usage and faster inference:

```python
import torch
from transformers import GPTNeoXForCausalLM, GPTNeoXTokenizerFast

device = "cuda"  # Flash Attention 2 requires a CUDA device
model = GPTNeoXForCausalLM.from_pretrained(
    "EleutherAI/gpt-neox-20b", torch_dtype=torch.float16, attn_implementation="flash_attention_2"
).to(device)
```

Expected speedups Below is an expected speedup diagram that compares pure inference time between the native implementation in transformers using the stockmark/gpt-neox-japanese-1.4b checkpoint and the Flash Attention 2 version of the model using a sequence length of 2048. 
Resources Causal language modeling task guide GPTNeoXConfig [[autodoc]] GPTNeoXConfig GPTNeoXTokenizerFast [[autodoc]] GPTNeoXTokenizerFast GPTNeoXModel [[autodoc]] GPTNeoXModel - forward GPTNeoXForCausalLM [[autodoc]] GPTNeoXForCausalLM - forward GPTNeoXForQuestionAnswering [[autodoc]] GPTNeoXForQuestionAnswering - forward GPTNeoXForSequenceClassification [[autodoc]] GPTNeoXForSequenceClassification - forward GPTNeoXForTokenClassification [[autodoc]] GPTNeoXForTokenClassification - forward
FSMT Overview FSMT (FairSeq MachineTranslation) models were introduced in Facebook FAIR's WMT19 News Translation Task Submission by Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, Sergey Edunov. The abstract of the paper is the following: This paper describes Facebook FAIR's submission to the WMT19 shared news translation task. We participate in two language pairs and four language directions, English <-> German and English <-> Russian. Following our submission from last year, our baseline systems are large BPE-based transformer models trained with the Fairseq sequence modeling toolkit which rely on sampled back-translations. This year we experiment with different bitext data filtering schemes, as well as with adding filtered back-translated data. We also ensemble and fine-tune our models on domain-specific data, then decode using noisy channel model reranking. Our submissions are ranked first in all four directions of the human evaluation campaign. On En->De, our system significantly outperforms other systems as well as human translations. This system improves upon our WMT'18 submission by 4.5 BLEU points. This model was contributed by stas. The original code can be found here. Implementation Notes FSMT uses source and target vocabulary pairs that aren't combined into one. It doesn't share embeddings tokens either. Its tokenizer is very similar to [XLMTokenizer] and the main model is derived from [BartModel]. FSMTConfig [[autodoc]] FSMTConfig FSMTTokenizer [[autodoc]] FSMTTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary FSMTModel [[autodoc]] FSMTModel - forward FSMTForConditionalGeneration [[autodoc]] FSMTForConditionalGeneration - forward
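A minimal English to German translation sketch; facebook/wmt19-en-de corresponds to the WMT19 submission described above (treat the exact checkpoint name as an assumption if it has moved).

```python
from transformers import FSMTForConditionalGeneration, FSMTTokenizer

mname = "facebook/wmt19-en-de"
tokenizer = FSMTTokenizer.from_pretrained(mname)
model = FSMTForConditionalGeneration.from_pretrained(mname)

input_ids = tokenizer("Machine learning is great, isn't it?", return_tensors="pt").input_ids
outputs = model.generate(input_ids, num_beams=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```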
mT5 Overview The mT5 model was presented in mT5: A massively multilingual pre-trained text-to-text transformer by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel. The abstract from the paper is the following: The recent "Text-to-Text Transfer Transformer" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks. In this paper, we introduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages. We detail the design and modified training of mT5 and demonstrate its state-of-the-art performance on many multilingual benchmarks. We also describe a simple technique to prevent "accidental translation" in the zero-shot setting, where a generative model chooses to (partially) translate its prediction into the wrong language. All of the code and model checkpoints used in this work are publicly available. Note: mT5 was only pre-trained on mC4 excluding any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task, unlike the original T5 model. Since mT5 was pre-trained unsupervisedly, there's no real advantage to using a task prefix during single-task fine-tuning. If you are doing multi-task fine-tuning, you should use a prefix. Google has released the following variants: google/mt5-small google/mt5-base google/mt5-large google/mt5-xl google/mt5-xxl. This model was contributed by patrickvonplaten. The original code can be found here. Resources Translation task guide Summarization task guide MT5Config [[autodoc]] MT5Config MT5Tokenizer [[autodoc]] MT5Tokenizer See [T5Tokenizer] for all details. MT5TokenizerFast [[autodoc]] MT5TokenizerFast See [T5TokenizerFast] for all details. MT5Model [[autodoc]] MT5Model MT5ForConditionalGeneration [[autodoc]] MT5ForConditionalGeneration MT5EncoderModel [[autodoc]] MT5EncoderModel MT5ForSequenceClassification [[autodoc]] MT5ForSequenceClassification MT5ForTokenClassification [[autodoc]] MT5ForTokenClassification MT5ForQuestionAnswering [[autodoc]] MT5ForQuestionAnswering TFMT5Model [[autodoc]] TFMT5Model TFMT5ForConditionalGeneration [[autodoc]] TFMT5ForConditionalGeneration TFMT5EncoderModel [[autodoc]] TFMT5EncoderModel FlaxMT5Model [[autodoc]] FlaxMT5Model FlaxMT5ForConditionalGeneration [[autodoc]] FlaxMT5ForConditionalGeneration FlaxMT5EncoderModel [[autodoc]] FlaxMT5EncoderModel
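Because mT5 is released without any supervised training, a raw checkpoint is typically fine-tuned before use. The sketch below shows a single supervised training step; the checkpoint name and the example sentence pair are assumptions, and no task prefix is used, per the note above.

```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer

tokenizer = MT5Tokenizer.from_pretrained("google/mt5-small")
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")

# one illustrative training step on a (source, target) pair
inputs = tokenizer("I love machine learning.", return_tensors="pt")
labels = tokenizer(text_target="Ich liebe maschinelles Lernen.", return_tensors="pt").input_ids

loss = model(**inputs, labels=labels).loss
loss.backward()  # plug this into your optimizer or the Trainer of your choice
```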
ConvBERT Overview The ConvBERT model was proposed in ConvBERT: Improving BERT with Span-based Dynamic Convolution by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan. The abstract from the paper is the following: Pre-trained language models like BERT and its variants have recently achieved impressive performance in various natural language understanding tasks. However, BERT heavily relies on the global self-attention block and thus suffers large memory footprint and computation cost. Although all its attention heads query on the whole input sequence for generating the attention map from a global perspective, we observe some heads only need to learn local dependencies, which means the existence of computation redundancy. We therefore propose a novel span-based dynamic convolution to replace these self-attention heads to directly model local dependencies. The novel convolution heads, together with the rest self-attention heads, form a new mixed attention block that is more efficient at both global and local context learning. We equip BERT with this mixed attention design and build a ConvBERT model. Experiments have shown that ConvBERT significantly outperforms BERT and its variants in various downstream tasks, with lower training cost and fewer model parameters. Remarkably, ConvBERTbase model achieves 86.4 GLUE score, 0.7 higher than ELECTRAbase, while using less than 1/4 training cost. Code and pre-trained models will be released. This model was contributed by abhishek. The original implementation can be found here: https://github.com/yitu-opensource/ConvBert Usage tips ConvBERT training tips are similar to those of BERT. For usage tips refer to BERT documentation. Resources Text classification task guide Token classification task guide Question answering task guide Masked language modeling task guide Multiple choice task guide ConvBertConfig [[autodoc]] ConvBertConfig ConvBertTokenizer [[autodoc]] ConvBertTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ConvBertTokenizerFast [[autodoc]] ConvBertTokenizerFast ConvBertModel [[autodoc]] ConvBertModel - forward ConvBertForMaskedLM [[autodoc]] ConvBertForMaskedLM - forward ConvBertForSequenceClassification [[autodoc]] ConvBertForSequenceClassification - forward ConvBertForMultipleChoice [[autodoc]] ConvBertForMultipleChoice - forward ConvBertForTokenClassification [[autodoc]] ConvBertForTokenClassification - forward ConvBertForQuestionAnswering [[autodoc]] ConvBertForQuestionAnswering - forward TFConvBertModel [[autodoc]] TFConvBertModel - call TFConvBertForMaskedLM [[autodoc]] TFConvBertForMaskedLM - call TFConvBertForSequenceClassification [[autodoc]] TFConvBertForSequenceClassification - call TFConvBertForMultipleChoice [[autodoc]] TFConvBertForMultipleChoice - call TFConvBertForTokenClassification [[autodoc]] TFConvBertForTokenClassification - call TFConvBertForQuestionAnswering [[autodoc]] TFConvBertForQuestionAnswering - call
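Since ConvBERT is used like BERT, a masked-language-modeling sketch illustrates the API. The YituTech/conv-bert-base checkpoint name comes from the original release, and its MLM head may still require fine-tuning, so this shows the call pattern rather than expected output quality.

```python
import torch
from transformers import ConvBertTokenizer, ConvBertForMaskedLM

# base checkpoint; the MLM head may need fine-tuning for meaningful predictions
tokenizer = ConvBertTokenizer.from_pretrained("YituTech/conv-bert-base")
model = ConvBertForMaskedLM.from_pretrained("YituTech/conv-bert-base")

inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# locate the [MASK] position and read off the highest-scoring token
mask_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_id = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```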
SAM Overview SAM (Segment Anything Model) was proposed in Segment Anything by Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alex Berg, Wan-Yen Lo, Piotr Dollar, Ross Girshick. The model can be used to predict segmentation masks of any object of interest given an input image. The abstract from the paper is the following: We introduce the Segment Anything (SA) project: a new task, model, and dataset for image segmentation. Using our efficient model in a data collection loop, we built the largest segmentation dataset to date (by far), with over 1 billion masks on 11M licensed and privacy respecting images. The model is designed and trained to be promptable, so it can transfer zero-shot to new image distributions and tasks. We evaluate its capabilities on numerous tasks and find that its zero-shot performance is impressive -- often competitive with or even superior to prior fully supervised results. We are releasing the Segment Anything Model (SAM) and corresponding dataset (SA-1B) of 1B masks and 11M images at https://segment-anything.com to foster research into foundation models for computer vision. Tips: The model predicts binary masks that state the presence or absence of the object of interest given an image. The model predicts much better results if input 2D points and/or input bounding boxes are provided. You can prompt multiple points for the same image, and predict a single mask. Fine-tuning the model is not supported yet. According to the paper, textual input should also be supported. However, at the time of writing this is not supported according to the official repository. This model was contributed by ybelkada and ArthurZ. The original code can be found here. Below is an example on how to run mask generation given an image and a 2D point:

```python
import torch
from PIL import Image
import requests
from transformers import SamModel, SamProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
model = SamModel.from_pretrained("facebook/sam-vit-huge").to(device)
processor = SamProcessor.from_pretrained("facebook/sam-vit-huge")

img_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png"
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")
input_points = [[[450, 600]]]  # 2D location of a window in the image

inputs = processor(raw_image, input_points=input_points, return_tensors="pt").to(device)
with torch.no_grad():
    outputs = model(**inputs)

masks = processor.image_processor.post_process_masks(
    outputs.pred_masks.cpu(), inputs["original_sizes"].cpu(), inputs["reshaped_input_sizes"].cpu()
)
scores = outputs.iou_scores
```

You can also process your own masks alongside the input images in the processor to be passed to the model. 
```python
import torch
from PIL import Image
import requests
from transformers import SamModel, SamProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
model = SamModel.from_pretrained("facebook/sam-vit-huge").to(device)
processor = SamProcessor.from_pretrained("facebook/sam-vit-huge")

img_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png"
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")
mask_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png"
segmentation_map = Image.open(requests.get(mask_url, stream=True).raw).convert("RGB")
input_points = [[[450, 600]]]  # 2D location of a window in the image

inputs = processor(
    raw_image, input_points=input_points, segmentation_maps=segmentation_map, return_tensors="pt"
).to(device)
with torch.no_grad():
    outputs = model(**inputs)

masks = processor.image_processor.post_process_masks(
    outputs.pred_masks.cpu(), inputs["original_sizes"].cpu(), inputs["reshaped_input_sizes"].cpu()
)
scores = outputs.iou_scores
```

Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with SAM. Demo notebook for using the model. Demo notebook for using the automatic mask generation pipeline. Demo notebook for inference with MedSAM, a fine-tuned version of SAM on the medical domain. 🌎 Demo notebook for fine-tuning the model on custom data. 🌎 SlimSAM SlimSAM, a pruned version of SAM, was proposed in 0.1% Data Makes Segment Anything Slim by Zigeng Chen et al. SlimSAM reduces the size of the SAM models considerably while maintaining the same performance. Checkpoints can be found on the hub, and they can be used as a drop-in replacement of SAM. SamConfig [[autodoc]] SamConfig SamVisionConfig [[autodoc]] SamVisionConfig SamMaskDecoderConfig [[autodoc]] SamMaskDecoderConfig SamPromptEncoderConfig [[autodoc]] SamPromptEncoderConfig SamProcessor [[autodoc]] SamProcessor SamImageProcessor [[autodoc]] SamImageProcessor SamModel [[autodoc]] SamModel - forward TFSamModel [[autodoc]] TFSamModel - call
Falcon Overview Falcon is a class of causal decoder-only models built by TII. The largest Falcon checkpoints have been trained on >=1T tokens of text, with a particular emphasis on the RefinedWeb corpus. They are made available under the Apache 2.0 license. Falcon's architecture is modern and optimized for inference, with multi-query attention and support for efficient attention variants like FlashAttention. Both 'base' models trained only as causal language models as well as 'instruct' models that have received further fine-tuning are available. Falcon models are (as of 2023) some of the largest and most powerful open-source language models, and consistently rank highly in the OpenLLM leaderboard. Converting custom checkpoints Falcon models were initially added to the Hugging Face Hub as custom code checkpoints. However, Falcon is now fully supported in the Transformers library. If you fine-tuned a model from a custom code checkpoint, we recommend converting your checkpoint to the new in-library format, as this should give significant improvements to stability and performance, especially for generation, as well as removing the need to use trust_remote_code=True! You can convert custom code checkpoints to full Transformers checkpoints using the convert_custom_code_checkpoint.py script located in the Falcon model directory of the Transformers library. To use this script, simply call it with python convert_custom_code_checkpoint.py --checkpoint_dir my_model. This will convert your checkpoint in-place, and you can immediately load it from the directory afterwards with e.g. from_pretrained(). If your model hasn't been uploaded to the Hub, we recommend making a backup before attempting the conversion, just in case! FalconConfig [[autodoc]] FalconConfig - all FalconModel [[autodoc]] FalconModel - forward FalconForCausalLM [[autodoc]] FalconForCausalLM - forward FalconForSequenceClassification [[autodoc]] FalconForSequenceClassification - forward FalconForTokenClassification [[autodoc]] FalconForTokenClassification - forward FalconForQuestionAnswering [[autodoc]] FalconForQuestionAnswering - forward
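A minimal generation sketch with an in-library Falcon checkpoint. The tiiuae/falcon-7b name, the dtype, and the device_map choice are assumptions; device_map="auto" additionally requires the accelerate package.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# checkpoint name is an assumption; any in-library Falcon checkpoint can be used
checkpoint = "tiiuae/falcon-7b"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(
    checkpoint, torch_dtype=torch.bfloat16, device_map="auto"  # device_map requires accelerate
)

inputs = tokenizer("Write a haiku about open-source AI:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```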
BARThez Overview The BARThez model was proposed in BARThez: a Skilled Pretrained French Sequence-to-Sequence Model by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis on 23 Oct, 2020. The abstract of the paper: Inductive transfer learning, enabled by self-supervised learning, have taken the entire Natural Language Processing (NLP) field by storm, with models such as BERT and BART setting new state of the art on countless natural language understanding tasks. While there are some notable exceptions, most of the available models and research have been conducted for the English language. In this work, we introduce BARThez, the first BART model for the French language (to the best of our knowledge). BARThez was pretrained on a very large monolingual French corpus from past research that we adapted to suit BART's perturbation schemes. Unlike already existing BERT-based French language models such as CamemBERT and FlauBERT, BARThez is particularly well-suited for generative tasks, since not only its encoder but also its decoder is pretrained. In addition to discriminative tasks from the FLUE benchmark, we evaluate BARThez on a novel summarization dataset, OrangeSum, that we release with this paper. We also continue the pretraining of an already pretrained multilingual BART on BARThez's corpus, and we show that the resulting model, which we call mBARTHez, provides a significant boost over vanilla BARThez, and is on par with or outperforms CamemBERT and FlauBERT. This model was contributed by moussakam. The Authors' code can be found here. BARThez implementation is the same as BART, except for tokenization. Refer to BART documentation for information on configuration classes and their parameters. BARThez-specific tokenizers are documented below. Resources BARThez can be fine-tuned on sequence-to-sequence tasks in a similar way as BART, check: examples/pytorch/summarization/. BarthezTokenizer [[autodoc]] BarthezTokenizer BarthezTokenizerFast [[autodoc]] BarthezTokenizerFast
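A short French summarization sketch in the spirit of the OrangeSum evaluation above; the moussakam/barthez-orangesum-abstract checkpoint name and the example sentence are assumptions.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# checkpoint name is an assumption; substitute the BARThez summarization checkpoint you use
checkpoint = "moussakam/barthez-orangesum-abstract"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

article = "Citant ses dépenses en hausse, la société a annoncé jeudi une augmentation de ses tarifs pour l'année prochaine."
inputs = tokenizer(article, truncation=True, return_tensors="pt")

summary_ids = model.generate(**inputs, num_beams=4, max_length=40)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```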
LUKE Overview The LUKE model was proposed in LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda and Yuji Matsumoto. It is based on RoBERTa and adds entity embeddings as well as an entity-aware self-attention mechanism, which helps improve performance on various downstream tasks involving reasoning about entities such as named entity recognition, extractive and cloze-style question answering, entity typing, and relation classification. The abstract from the paper is the following: Entity representations are useful in natural language tasks involving entities. In this paper, we propose new pretrained contextualized representations of words and entities based on the bidirectional transformer. The proposed model treats words and entities in a given text as independent tokens, and outputs contextualized representations of them. Our model is trained using a new pretraining task based on the masked language model of BERT. The task involves predicting randomly masked words and entities in a large entity-annotated corpus retrieved from Wikipedia. We also propose an entity-aware self-attention mechanism that is an extension of the self-attention mechanism of the transformer, and considers the types of tokens (words or entities) when computing attention scores. The proposed model achieves impressive empirical performance on a wide range of entity-related tasks. In particular, it obtains state-of-the-art results on five well-known datasets: Open Entity (entity typing), TACRED (relation classification), CoNLL-2003 (named entity recognition), ReCoRD (cloze-style question answering), and SQuAD 1.1 (extractive question answering). This model was contributed by ikuyamada and nielsr. The original code can be found here. Usage tips This implementation is the same as [RobertaModel] with the addition of entity embeddings as well as an entity-aware self-attention mechanism, which improves performance on tasks involving reasoning about entities. LUKE treats entities as input tokens; therefore, it takes entity_ids, entity_attention_mask, entity_token_type_ids and entity_position_ids as extra input. You can obtain those using [LukeTokenizer]. [LukeTokenizer] takes entities and entity_spans (character-based start and end positions of the entities in the input text) as extra input. entities typically consist of [MASK] entities or Wikipedia entities. The brief description when inputting these entities are as follows: Inputting [MASK] entities to compute entity representations: The [MASK] entity is used to mask entities to be predicted during pretraining. When LUKE receives the [MASK] entity, it tries to predict the original entity by gathering the information about the entity from the input text. Therefore, the [MASK] entity can be used to address downstream tasks requiring the information of entities in text such as entity typing, relation classification, and named entity recognition. Inputting Wikipedia entities to compute knowledge-enhanced token representations: LUKE learns rich information (or knowledge) about Wikipedia entities during pretraining and stores the information in its entity embedding. By using Wikipedia entities as input tokens, LUKE outputs token representations enriched by the information stored in the embeddings of these entities. This is particularly effective for tasks requiring real-world knowledge, such as question answering. 
There are three head models for the former use case: [LukeForEntityClassification], for tasks to classify a single entity in an input text such as entity typing, e.g. the Open Entity dataset. This model places a linear head on top of the output entity representation. [LukeForEntityPairClassification], for tasks to classify the relationship between two entities such as relation classification, e.g. the TACRED dataset. This model places a linear head on top of the concatenated output representation of the pair of given entities. [LukeForEntitySpanClassification], for tasks to classify the sequence of entity spans, such as named entity recognition (NER). This model places a linear head on top of the output entity representations. You can address NER using this model by inputting all possible entity spans in the text to the model. [LukeTokenizer] has a task argument, which enables you to easily create an input to these head models by specifying task="entity_classification", task="entity_pair_classification", or task="entity_span_classification". Please refer to the example code of each head model. Usage example:

```python
from transformers import LukeTokenizer, LukeModel, LukeForEntityPairClassification

model = LukeModel.from_pretrained("studio-ousia/luke-base")
tokenizer = LukeTokenizer.from_pretrained("studio-ousia/luke-base")

# Example 1: Computing the contextualized entity representation corresponding to the entity mention "Beyoncé"
text = "Beyoncé lives in Los Angeles."
entity_spans = [(0, 7)]  # character-based entity span corresponding to "Beyoncé"
inputs = tokenizer(text, entity_spans=entity_spans, add_prefix_space=True, return_tensors="pt")
outputs = model(**inputs)
word_last_hidden_state = outputs.last_hidden_state
entity_last_hidden_state = outputs.entity_last_hidden_state

# Example 2: Inputting Wikipedia entities to obtain enriched contextualized representations
entities = [
    "Beyoncé",
    "Los Angeles",
]  # Wikipedia entity titles corresponding to the entity mentions "Beyoncé" and "Los Angeles"
entity_spans = [(0, 7), (17, 28)]  # character-based entity spans corresponding to "Beyoncé" and "Los Angeles"
inputs = tokenizer(text, entities=entities, entity_spans=entity_spans, add_prefix_space=True, return_tensors="pt")
outputs = model(**inputs)
word_last_hidden_state = outputs.last_hidden_state
entity_last_hidden_state = outputs.entity_last_hidden_state

# Example 3: Classifying the relationship between two entities using LukeForEntityPairClassification head model
model = LukeForEntityPairClassification.from_pretrained("studio-ousia/luke-large-finetuned-tacred")
tokenizer = LukeTokenizer.from_pretrained("studio-ousia/luke-large-finetuned-tacred")
entity_spans = [(0, 7), (17, 28)]  # character-based entity spans corresponding to "Beyoncé" and "Los Angeles"
inputs = tokenizer(text, entity_spans=entity_spans, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
predicted_class_idx = int(logits[0].argmax())
print("Predicted class:", model.config.id2label[predicted_class_idx])
```

Resources A demo notebook on how to fine-tune [LukeForEntityPairClassification] for relation classification Notebooks showcasing how to reproduce the results as reported in the paper with the HuggingFace implementation of LUKE Text classification task guide Token classification task guide Question answering task guide Masked language modeling task guide Multiple choice task guide LukeConfig [[autodoc]] LukeConfig LukeTokenizer [[autodoc]] LukeTokenizer - call - save_vocabulary LukeModel [[autodoc]] LukeModel - 
forward LukeForMaskedLM [[autodoc]] LukeForMaskedLM - forward LukeForEntityClassification [[autodoc]] LukeForEntityClassification - forward LukeForEntityPairClassification [[autodoc]] LukeForEntityPairClassification - forward LukeForEntitySpanClassification [[autodoc]] LukeForEntitySpanClassification - forward LukeForSequenceClassification [[autodoc]] LukeForSequenceClassification - forward LukeForMultipleChoice [[autodoc]] LukeForMultipleChoice - forward LukeForTokenClassification [[autodoc]] LukeForTokenClassification - forward LukeForQuestionAnswering [[autodoc]] LukeForQuestionAnswering - forward
FocalNet Overview The FocalNet model was proposed in Focal Modulation Networks by Jianwei Yang, Chunyuan Li, Xiyang Dai, Lu Yuan, Jianfeng Gao. FocalNets completely replace self-attention (used in models like ViT and Swin) by a focal modulation mechanism for modeling token interactions in vision. The authors claim that FocalNets outperform self-attention based models with similar computational costs on the tasks of image classification, object detection, and segmentation. The abstract from the paper is the following: We propose focal modulation networks (FocalNets in short), where self-attention (SA) is completely replaced by a focal modulation mechanism for modeling token interactions in vision. Focal modulation comprises three components: (i) hierarchical contextualization, implemented using a stack of depth-wise convolutional layers, to encode visual contexts from short to long ranges, (ii) gated aggregation to selectively gather contexts for each query token based on its content, and (iii) element-wise modulation or affine transformation to inject the aggregated context into the query. Extensive experiments show FocalNets outperform the state-of-the-art SA counterparts (e.g., Swin and Focal Transformers) with similar computational costs on the tasks of image classification, object detection, and segmentation. Specifically, FocalNets with tiny and base size achieve 82.3% and 83.9% top-1 accuracy on ImageNet-1K. After pretrained on ImageNet-22K in 224 resolution, it attains 86.5% and 87.3% top-1 accuracy when finetuned with resolution 224 and 384, respectively. When transferred to downstream tasks, FocalNets exhibit clear superiority. For object detection with Mask R-CNN, FocalNet base trained with 1\times outperforms the Swin counterpart by 2.1 points and already surpasses Swin trained with 3\times schedule (49.0 v.s. 48.5). For semantic segmentation with UPerNet, FocalNet base at single-scale outperforms Swin by 2.4, and beats Swin at multi-scale (50.5 v.s. 49.7). Using large FocalNet and Mask2former, we achieve 58.5 mIoU for ADE20K semantic segmentation, and 57.9 PQ for COCO Panoptic Segmentation. Using huge FocalNet and DINO, we achieved 64.3 and 64.4 mAP on COCO minival and test-dev, respectively, establishing new SoTA on top of much larger attention-based models like Swinv2-G and BEIT-3. This model was contributed by nielsr. The original code can be found here. FocalNetConfig [[autodoc]] FocalNetConfig FocalNetModel [[autodoc]] FocalNetModel - forward FocalNetForMaskedImageModeling [[autodoc]] FocalNetForMaskedImageModeling - forward FocalNetForImageClassification [[autodoc]] FocalNetForImageClassification - forward
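A minimal image-classification sketch with [FocalNetForImageClassification]; the microsoft/focalnet-tiny checkpoint and the sample image URL are assumptions.

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, FocalNetForImageClassification

# checkpoint name is an assumption; any FocalNet classification checkpoint works the same way
checkpoint = "microsoft/focalnet-tiny"
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = FocalNetForImageClassification.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

print(model.config.id2label[logits.argmax(-1).item()])  # predicted ImageNet class
```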
ERNIE Overview ERNIE is a series of powerful models proposed by Baidu, with particularly strong performance on Chinese tasks, including ERNIE1.0, ERNIE2.0, ERNIE3.0, ERNIE-Gram, ERNIE-health, etc. These models are contributed by nghuyong and the official code can be found in PaddleNLP (in PaddlePaddle). Usage example Take ernie-1.0-base-zh as an example: Python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("nghuyong/ernie-1.0-base-zh") model = AutoModel.from_pretrained("nghuyong/ernie-1.0-base-zh") Model checkpoints

| Model Name          | Language | Description                     |
|:-------------------:|:--------:|:-------------------------------:|
| ernie-1.0-base-zh   | Chinese  | Layer:12, Heads:12, Hidden:768  |
| ernie-2.0-base-en   | English  | Layer:12, Heads:12, Hidden:768  |
| ernie-2.0-large-en  | English  | Layer:24, Heads:16, Hidden:1024 |
| ernie-3.0-base-zh   | Chinese  | Layer:12, Heads:12, Hidden:768  |
| ernie-3.0-medium-zh | Chinese  | Layer:6, Heads:12, Hidden:768   |
| ernie-3.0-mini-zh   | Chinese  | Layer:6, Heads:12, Hidden:384   |
| ernie-3.0-micro-zh  | Chinese  | Layer:4, Heads:12, Hidden:384   |
| ernie-3.0-nano-zh   | Chinese  | Layer:4, Heads:12, Hidden:312   |
| ernie-health-zh     | Chinese  | Layer:12, Heads:12, Hidden:768  |
| ernie-gram-zh       | Chinese  | Layer:12, Heads:12, Hidden:768  |

You can find all the supported models on Hugging Face's model hub: huggingface.co/nghuyong, and model details in Paddle's official repos: PaddleNLP and ERNIE. Resources Text classification task guide Token classification task guide Question answering task guide Causal language modeling task guide Masked language modeling task guide Multiple choice task guide ErnieConfig [[autodoc]] ErnieConfig - all Ernie specific outputs [[autodoc]] models.ernie.modeling_ernie.ErnieForPreTrainingOutput ErnieModel [[autodoc]] ErnieModel - forward ErnieForPreTraining [[autodoc]] ErnieForPreTraining - forward ErnieForCausalLM [[autodoc]] ErnieForCausalLM - forward ErnieForMaskedLM [[autodoc]] ErnieForMaskedLM - forward ErnieForNextSentencePrediction [[autodoc]] ErnieForNextSentencePrediction - forward ErnieForSequenceClassification [[autodoc]] ErnieForSequenceClassification - forward ErnieForMultipleChoice [[autodoc]] ErnieForMultipleChoice - forward ErnieForTokenClassification [[autodoc]] ErnieForTokenClassification - forward ErnieForQuestionAnswering [[autodoc]] ErnieForQuestionAnswering - forward
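To complement the ERNIE usage example above, here is a small sketch that runs the loaded ernie-1.0-base-zh model on an arbitrary Chinese sentence and inspects the resulting hidden states; the sentence itself is just an example.

python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("nghuyong/ernie-1.0-base-zh")
model = AutoModel.from_pretrained("nghuyong/ernie-1.0-base-zh")

# encode an example Chinese sentence ("Baidu is a company.")
inputs = tokenizer("百度是一家公司。", return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# contextual token embeddings, shape (batch_size, sequence_length, 768)
print(outputs.last_hidden_state.shape)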
CLIP Overview The CLIP model was proposed in Learning Transferable Visual Models From Natural Language Supervision by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever. CLIP (Contrastive Language-Image Pre-Training) is a neural network trained on a variety of (image, text) pairs. It can be instructed in natural language to predict the most relevant text snippet, given an image, without directly optimizing for the task, similarly to the zero-shot capabilities of GPT-2 and 3. The abstract from the paper is the following: State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at this https URL. This model was contributed by valhalla. The original code can be found here. Usage tips and example CLIP is a multi-modal vision and language model. It can be used for image-text similarity and for zero-shot image classification. CLIP uses a ViT like transformer to get visual features and a causal language model to get the text features. Both the text and visual features are then projected to a latent space with identical dimension. The dot product between the projected image and text features is then used as a similar score. To feed images to the Transformer encoder, each image is split into a sequence of fixed-size non-overlapping patches, which are then linearly embedded. A [CLS] token is added to serve as representation of an entire image. The authors also add absolute position embeddings, and feed the resulting sequence of vectors to a standard Transformer encoder. The [CLIPImageProcessor] can be used to resize (or rescale) and normalize images for the model. The [CLIPTokenizer] is used to encode the text. The [CLIPProcessor] wraps [CLIPImageProcessor] and [CLIPTokenizer] into a single instance to both encode the text and prepare the images. The following example shows how to get the image-text similarity scores using [CLIPProcessor] and [CLIPModel]. 
thon from PIL import Image import requests from transformers import CLIPProcessor, CLIPModel model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32") processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True) outputs = model(**inputs) logits_per_image = outputs.logits_per_image # this is the image-text similarity score probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with CLIP. Fine tuning CLIP with Remote Sensing (Satellite) images and captions, a blog post about how to fine-tune CLIP with RSICD dataset and comparison of performance changes due to data augmentation. This example script shows how to train a CLIP-like vision-text dual encoder model using a pre-trained vision and text encoder using COCO dataset. A notebook on how to use a pretrained CLIP for inference with beam search for image captioning. 🌎 Image retrieval A notebook on image retrieval using pretrained CLIP and computing MRR(Mean Reciprocal Rank) score. 🌎 A notebook on image retrieval and showing the similarity score. 🌎 A notebook on how to map images and texts to the same vector space using Multilingual CLIP. 🌎 A notebook on how to run CLIP on semantic image search using Unsplash and TMDB datasets. 🌎 Explainability A notebook on how to visualize similarity between input token and image segment. 🌎 If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it. The resource should ideally demonstrate something new instead of duplicating an existing resource. CLIPConfig [[autodoc]] CLIPConfig - from_text_vision_configs CLIPTextConfig [[autodoc]] CLIPTextConfig CLIPVisionConfig [[autodoc]] CLIPVisionConfig CLIPTokenizer [[autodoc]] CLIPTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary CLIPTokenizerFast [[autodoc]] CLIPTokenizerFast CLIPImageProcessor [[autodoc]] CLIPImageProcessor - preprocess CLIPFeatureExtractor [[autodoc]] CLIPFeatureExtractor CLIPProcessor [[autodoc]] CLIPProcessor CLIPModel [[autodoc]] CLIPModel - forward - get_text_features - get_image_features CLIPTextModel [[autodoc]] CLIPTextModel - forward CLIPTextModelWithProjection [[autodoc]] CLIPTextModelWithProjection - forward CLIPVisionModelWithProjection [[autodoc]] CLIPVisionModelWithProjection - forward CLIPVisionModel [[autodoc]] CLIPVisionModel - forward CLIPForImageClassification [[autodoc]] CLIPForImageClassification - forward TFCLIPModel [[autodoc]] TFCLIPModel - call - get_text_features - get_image_features TFCLIPTextModel [[autodoc]] TFCLIPTextModel - call TFCLIPVisionModel [[autodoc]] TFCLIPVisionModel - call FlaxCLIPModel [[autodoc]] FlaxCLIPModel - call - get_text_features - get_image_features FlaxCLIPTextModel [[autodoc]] FlaxCLIPTextModel - call FlaxCLIPTextModelWithProjection [[autodoc]] FlaxCLIPTextModelWithProjection - call FlaxCLIPVisionModel [[autodoc]] FlaxCLIPVisionModel - call
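As a complement to the similarity example above, CLIP can also be used for zero-shot image classification through the pipeline API. The following is a minimal sketch; the checkpoint, image URL, and candidate labels are just examples.

python
import requests
from PIL import Image
from transformers import pipeline

# zero-shot image classification backed by a CLIP checkpoint
classifier = pipeline(task="zero-shot-image-classification", model="openai/clip-vit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# scores each candidate label against the image, highest score first
outputs = classifier(image, candidate_labels=["a photo of a cat", "a photo of a dog"])
print(outputs)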
Informer Overview The Informer model was proposed in Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting by Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, and Wancai Zhang. This method introduces a Probabilistic Attention mechanism to select the "active" queries rather than the "lazy" queries and provides a sparse Transformer thus mitigating the quadratic compute and memory requirements of vanilla attention. The abstract from the paper is the following: Many real-world applications require the prediction of long sequence time-series, such as electricity consumption planning. Long sequence time-series forecasting (LSTF) demands a high prediction capacity of the model, which is the ability to capture precise long-range dependency coupling between output and input efficiently. Recent studies have shown the potential of Transformer to increase the prediction capacity. However, there are several severe issues with Transformer that prevent it from being directly applicable to LSTF, including quadratic time complexity, high memory usage, and inherent limitation of the encoder-decoder architecture. To address these issues, we design an efficient transformer-based model for LSTF, named Informer, with three distinctive characteristics: (i) a ProbSparse self-attention mechanism, which achieves O(L logL) in time complexity and memory usage, and has comparable performance on sequences' dependency alignment. (ii) the self-attention distilling highlights dominating attention by halving cascading layer input, and efficiently handles extreme long input sequences. (iii) the generative style decoder, while conceptually simple, predicts the long time-series sequences at one forward operation rather than a step-by-step way, which drastically improves the inference speed of long-sequence predictions. Extensive experiments on four large-scale datasets demonstrate that Informer significantly outperforms existing methods and provides a new solution to the LSTF problem. This model was contributed by elisim and kashif. The original code can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. Check out the Informer blog-post in HuggingFace blog: Multivariate Probabilistic Time Series Forecasting with Informer InformerConfig [[autodoc]] InformerConfig InformerModel [[autodoc]] InformerModel - forward InformerForPrediction [[autodoc]] InformerForPrediction - forward
HerBERT Overview The HerBERT model was proposed in KLEJ: Comprehensive Benchmark for Polish Language Understanding by Piotr Rybak, Robert Mroczkowski, Janusz Tracz, and Ireneusz Gawlik. It is a BERT-based Language Model trained on Polish Corpora using only MLM objective with dynamic masking of whole words. The abstract from the paper is the following: In recent years, a series of Transformer-based models unlocked major improvements in general natural language understanding (NLU) tasks. Such a fast pace of research would not be possible without general NLU benchmarks, which allow for a fair comparison of the proposed methods. However, such benchmarks are available only for a handful of languages. To alleviate this issue, we introduce a comprehensive multi-task benchmark for the Polish language understanding, accompanied by an online leaderboard. It consists of a diverse set of tasks, adopted from existing datasets for named entity recognition, question-answering, textual entailment, and others. We also introduce a new sentiment analysis task for the e-commerce domain, named Allegro Reviews (AR). To ensure a common evaluation scheme and promote models that generalize to different NLU tasks, the benchmark includes datasets from varying domains and applications. Additionally, we release HerBERT, a Transformer-based model trained specifically for the Polish language, which has the best average performance and obtains the best results for three out of nine tasks. Finally, we provide an extensive evaluation, including several standard baselines and recently proposed, multilingual Transformer-based models. This model was contributed by rmroczkowski. The original code can be found here. Usage example thon from transformers import HerbertTokenizer, RobertaModel tokenizer = HerbertTokenizer.from_pretrained("allegro/herbert-klej-cased-tokenizer-v1") model = RobertaModel.from_pretrained("allegro/herbert-klej-cased-v1") encoded_input = tokenizer.encode("Kto ma lepszą sztukę, ma lepszy rząd – to jasne.", return_tensors="pt") outputs = model(encoded_input) HerBERT can also be loaded using AutoTokenizer and AutoModel: import torch from transformers import AutoModel, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("allegro/herbert-klej-cased-tokenizer-v1") model = AutoModel.from_pretrained("allegro/herbert-klej-cased-v1") Herbert implementation is the same as BERT except for the tokenization method. Refer to BERT documentation for API reference and examples. HerbertTokenizer [[autodoc]] HerbertTokenizer HerbertTokenizerFast [[autodoc]] HerbertTokenizerFast
EnCodec Overview The EnCodec neural codec model was proposed in High Fidelity Neural Audio Compression by Alexandre Défossez, Jade Copet, Gabriel Synnaeve, Yossi Adi. The abstract from the paper is the following: We introduce a state-of-the-art real-time, high-fidelity, audio codec leveraging neural networks. It consists in a streaming encoder-decoder architecture with quantized latent space trained in an end-to-end fashion. We simplify and speed-up the training by using a single multiscale spectrogram adversary that efficiently reduces artifacts and produce high-quality samples. We introduce a novel loss balancer mechanism to stabilize training: the weight of a loss now defines the fraction of the overall gradient it should represent, thus decoupling the choice of this hyper-parameter from the typical scale of the loss. Finally, we study how lightweight Transformer models can be used to further compress the obtained representation by up to 40%, while staying faster than real time. We provide a detailed description of the key design choices of the proposed model including: training objective, architectural changes and a study of various perceptual loss functions. We present an extensive subjective evaluation (MUSHRA tests) together with an ablation study for a range of bandwidths and audio domains, including speech, noisy-reverberant speech, and music. Our approach is superior to the baselines methods across all evaluated settings, considering both 24 kHz monophonic and 48 kHz stereophonic audio. This model was contributed by Matthijs, Patrick Von Platen and Arthur Zucker. The original code can be found here. Usage example Here is a quick example of how to encode and decode an audio using this model: thon from datasets import load_dataset, Audio from transformers import EncodecModel, AutoProcessor librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") model = EncodecModel.from_pretrained("facebook/encodec_24khz") processor = AutoProcessor.from_pretrained("facebook/encodec_24khz") librispeech_dummy = librispeech_dummy.cast_column("audio", Audio(sampling_rate=processor.sampling_rate)) audio_sample = librispeech_dummy[-1]["audio"]["array"] inputs = processor(raw_audio=audio_sample, sampling_rate=processor.sampling_rate, return_tensors="pt") encoder_outputs = model.encode(inputs["input_values"], inputs["padding_mask"]) audio_values = model.decode(encoder_outputs.audio_codes, encoder_outputs.audio_scales, inputs["padding_mask"])[0] or the equivalent with a forward pass audio_values = model(inputs["input_values"], inputs["padding_mask"]).audio_values EncodecConfig [[autodoc]] EncodecConfig EncodecFeatureExtractor [[autodoc]] EncodecFeatureExtractor - call EncodecModel [[autodoc]] EncodecModel - decode - encode - forward
BLIP-2 Overview The BLIP-2 model was proposed in BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models by Junnan Li, Dongxu Li, Silvio Savarese, Steven Hoi. BLIP-2 leverages frozen pre-trained image encoders and large language models (LLMs) by training a lightweight, 12-layer Transformer encoder in between them, achieving state-of-the-art performance on various vision-language tasks. Most notably, BLIP-2 improves upon Flamingo, an 80 billion parameter model, by 8.7% on zero-shot VQAv2 with 54x fewer trainable parameters. The abstract from the paper is the following: The cost of vision-and-language pre-training has become increasingly prohibitive due to end-to-end training of large-scale models. This paper proposes BLIP-2, a generic and efficient pre-training strategy that bootstraps vision-language pre-training from off-the-shelf frozen pre-trained image encoders and frozen large language models. BLIP-2 bridges the modality gap with a lightweight Querying Transformer, which is pre-trained in two stages. The first stage bootstraps vision-language representation learning from a frozen image encoder. The second stage bootstraps vision-to-language generative learning from a frozen language model. BLIP-2 achieves state-of-the-art performance on various vision-language tasks, despite having significantly fewer trainable parameters than existing methods. For example, our model outperforms Flamingo80B by 8.7% on zero-shot VQAv2 with 54x fewer trainable parameters. We also demonstrate the model's emerging capabilities of zero-shot image-to-text generation that can follow natural language instructions. BLIP-2 architecture. Taken from the original paper. This model was contributed by nielsr. The original code can be found here. Usage tips BLIP-2 can be used for conditional text generation given an image and an optional text prompt. At inference time, it's recommended to use the [generate] method. One can use [Blip2Processor] to prepare images for the model, and decode the predicted tokens ID's back to text. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BLIP-2. Demo notebooks for BLIP-2 for image captioning, visual question answering (VQA) and chat-like conversations can be found here. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. Blip2Config [[autodoc]] Blip2Config - from_vision_qformer_text_configs Blip2VisionConfig [[autodoc]] Blip2VisionConfig Blip2QFormerConfig [[autodoc]] Blip2QFormerConfig Blip2Processor [[autodoc]] Blip2Processor Blip2VisionModel [[autodoc]] Blip2VisionModel - forward Blip2QFormerModel [[autodoc]] Blip2QFormerModel - forward Blip2Model [[autodoc]] Blip2Model - forward - get_text_features - get_image_features - get_qformer_features Blip2ForConditionalGeneration [[autodoc]] Blip2ForConditionalGeneration - forward - generate
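To illustrate the BLIP-2 usage tips above, here is a minimal conditional-generation sketch that produces an image caption with [Blip2Processor] and the generate method. The checkpoint name Salesforce/blip2-opt-2.7b and the image URL are examples; loading in half precision on a GPU is advisable for a model of this size.

python
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

# checkpoint name is an example; any BLIP-2 checkpoint on the Hub should work
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")  # ~3.7B parameters

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# prepare the image; optionally pass text="..." to condition generation on a prompt
inputs = processor(images=image, return_tensors="pt")

generated_ids = model.generate(**inputs, max_new_tokens=20)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip())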
BERT Overview The BERT model was proposed in BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova. It's a bidirectional transformer pretrained using a combination of masked language modeling objective and next sentence prediction on a large corpus comprising the Toronto Book Corpus and Wikipedia. The abstract from the paper is the following: We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement). This model was contributed by thomwolf. The original code can be found here. Usage tips BERT is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than the left. BERT was trained with the masked language modeling (MLM) and next sentence prediction (NSP) objectives. It is efficient at predicting masked tokens and at NLU in general, but is not optimal for text generation. Corrupts the inputs by using random masking, more precisely, during pretraining, a given percentage of tokens (usually 15%) is masked by: a special mask token with probability 0.8 a random token different from the one masked with probability 0.1 the same token with probability 0.1 The model must predict the original sentence, but has a second objective: inputs are two sentences A and B (with a separation token in between). With probability 50%, the sentences are consecutive in the corpus, in the remaining 50% they are not related. The model has to predict if the sentences are consecutive or not. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BERT. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. A blog post on BERT Text Classification in a different language. A notebook for Finetuning BERT (and friends) for multi-label text classification. A notebook on how to Finetune BERT for multi-label classification using PyTorch. 🌎 A notebook on how to warm-start an EncoderDecoder model with BERT for summarization. [BertForSequenceClassification] is supported by this example script and notebook. [TFBertForSequenceClassification] is supported by this example script and notebook. [FlaxBertForSequenceClassification] is supported by this example script and notebook. 
Text classification task guide A blog post on how to use Hugging Face Transformers with Keras: Fine-tune a non-English BERT for Named Entity Recognition. A notebook for Finetuning BERT for named-entity recognition using only the first wordpiece of each word in the word label during tokenization. To propagate the label of the word to all wordpieces, see this version of the notebook instead. [BertForTokenClassification] is supported by this example script and notebook. [TFBertForTokenClassification] is supported by this example script and notebook. [FlaxBertForTokenClassification] is supported by this example script. Token classification chapter of the 🤗 Hugging Face Course. Token classification task guide [BertForMaskedLM] is supported by this example script and notebook. [TFBertForMaskedLM] is supported by this example script and notebook. [FlaxBertForMaskedLM] is supported by this example script and notebook. Masked language modeling chapter of the 🤗 Hugging Face Course. Masked language modeling task guide [BertForQuestionAnswering] is supported by this example script and notebook. [TFBertForQuestionAnswering] is supported by this example script and notebook. [FlaxBertForQuestionAnswering] is supported by this example script. Question answering chapter of the 🤗 Hugging Face Course. Question answering task guide Multiple choice - [BertForMultipleChoice] is supported by this example script and notebook. - [TFBertForMultipleChoice] is supported by this example script and notebook. - Multiple choice task guide ⚡️ Inference - A blog post on how to Accelerate BERT inference with Hugging Face Transformers and AWS Inferentia. - A blog post on how to Accelerate BERT inference with DeepSpeed-Inference on GPUs. ⚙️ Pretraining - A blog post on Pre-Training BERT with Hugging Face Transformers and Habana Gaudi. 🚀 Deploy - A blog post on how to Convert Transformers to ONNX with Hugging Face Optimum. - A blog post on how to Setup Deep Learning environment for Hugging Face Transformers with Habana Gaudi on AWS. - A blog post on Autoscaling BERT with Hugging Face Transformers, Amazon SageMaker and Terraform module. - A blog post on Serverless BERT with HuggingFace, AWS Lambda, and Docker. - A blog post on Hugging Face Transformers BERT fine-tuning using Amazon SageMaker and Training Compiler. - A blog post on Task-specific knowledge distillation for BERT using Transformers & Amazon SageMaker. 
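As a quick illustration of the masked language modeling objective described in the BERT usage tips above, here is a minimal fill-mask sketch; the checkpoint name and the input sentence are just examples.

python
from transformers import pipeline

# BERT predicts the token hidden behind [MASK]
unmasker = pipeline(task="fill-mask", model="bert-base-uncased")

for prediction in unmasker("The capital of France is [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))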
BertConfig [[autodoc]] BertConfig - all BertTokenizer [[autodoc]] BertTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary BertTokenizerFast [[autodoc]] BertTokenizerFast TFBertTokenizer [[autodoc]] TFBertTokenizer Bert specific outputs [[autodoc]] models.bert.modeling_bert.BertForPreTrainingOutput [[autodoc]] models.bert.modeling_tf_bert.TFBertForPreTrainingOutput [[autodoc]] models.bert.modeling_flax_bert.FlaxBertForPreTrainingOutput BertModel [[autodoc]] BertModel - forward BertForPreTraining [[autodoc]] BertForPreTraining - forward BertLMHeadModel [[autodoc]] BertLMHeadModel - forward BertForMaskedLM [[autodoc]] BertForMaskedLM - forward BertForNextSentencePrediction [[autodoc]] BertForNextSentencePrediction - forward BertForSequenceClassification [[autodoc]] BertForSequenceClassification - forward BertForMultipleChoice [[autodoc]] BertForMultipleChoice - forward BertForTokenClassification [[autodoc]] BertForTokenClassification - forward BertForQuestionAnswering [[autodoc]] BertForQuestionAnswering - forward TFBertModel [[autodoc]] TFBertModel - call TFBertForPreTraining [[autodoc]] TFBertForPreTraining - call TFBertLMHeadModel [[autodoc]] TFBertLMHeadModel - call TFBertForMaskedLM [[autodoc]] TFBertForMaskedLM - call TFBertForNextSentencePrediction [[autodoc]] TFBertForNextSentencePrediction - call TFBertForSequenceClassification [[autodoc]] TFBertForSequenceClassification - call TFBertForMultipleChoice [[autodoc]] TFBertForMultipleChoice - call TFBertForTokenClassification [[autodoc]] TFBertForTokenClassification - call TFBertForQuestionAnswering [[autodoc]] TFBertForQuestionAnswering - call FlaxBertModel [[autodoc]] FlaxBertModel - call FlaxBertForPreTraining [[autodoc]] FlaxBertForPreTraining - call FlaxBertForCausalLM [[autodoc]] FlaxBertForCausalLM - call FlaxBertForMaskedLM [[autodoc]] FlaxBertForMaskedLM - call FlaxBertForNextSentencePrediction [[autodoc]] FlaxBertForNextSentencePrediction - call FlaxBertForSequenceClassification [[autodoc]] FlaxBertForSequenceClassification - call FlaxBertForMultipleChoice [[autodoc]] FlaxBertForMultipleChoice - call FlaxBertForTokenClassification [[autodoc]] FlaxBertForTokenClassification - call FlaxBertForQuestionAnswering [[autodoc]] FlaxBertForQuestionAnswering - call
UMT5 Overview The UMT5 model was proposed in UniMax: Fairer and More Effective Language Sampling for Large-Scale Multilingual Pretraining by Hyung Won Chung, Xavier Garcia, Adam Roberts, Yi Tay, Orhan Firat, Sharan Narang, Noah Constant. The abstract from the paper is the following: Pretrained multilingual large language models have typically used heuristic temperature-based sampling to balance between different languages. However previous work has not systematically evaluated the efficacy of different pretraining language distributions across model scales. In this paper, we propose a new sampling method, UniMax, that delivers more uniform coverage of head languages while mitigating overfitting on tail languages by explicitly capping the number of repeats over each language's corpus. We perform an extensive series of ablations testing a range of sampling strategies on a suite of multilingual benchmarks, while varying model scale. We find that UniMax outperforms standard temperature-based sampling, and the benefits persist as scale increases. As part of our contribution, we release: (i) an improved and refreshed mC4 multilingual corpus consisting of 29 trillion characters across 107 languages, and (ii) a suite of pretrained umT5 model checkpoints trained with UniMax sampling. Google has released the following variants: google/umt5-small google/umt5-base google/umt5-xl google/umt5-xxl. This model was contributed by agemagician and stefan-it. The original code can be found here. Usage tips UMT5 was only pre-trained on mC4 excluding any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task, unlike the original T5 model. Since umT5 was pre-trained in an unsupervised manner, there's no real advantage to using a task prefix during single-task fine-tuning. If you are doing multi-task fine-tuning, you should use a prefix. Differences with mT5? UmT5 is based on mT5, with a non-shared relative positional bias that is computed for each layer. This means that the model set has_relative_bias for each layer. The conversion script is also different because the model was saved in t5x's latest checkpointing format. Sample usage thon from transformers import AutoModelForSeq2SeqLM, AutoTokenizer model = AutoModelForSeq2SeqLM.from_pretrained("google/umt5-small") tokenizer = AutoTokenizer.from_pretrained("google/umt5-small") inputs = tokenizer( "A walks into a bar and orders a with pinch of .", return_tensors="pt", ) outputs = model.generate(**inputs) print(tokenizer.batch_decode(outputs)) ['nyone who drink a alcohol A A. This I'] Refer to T5's documentation page for more tips, code examples and notebooks. UMT5Config [[autodoc]] UMT5Config UMT5Model [[autodoc]] UMT5Model - forward UMT5ForConditionalGeneration [[autodoc]] UMT5ForConditionalGeneration - forward UMT5EncoderModel [[autodoc]] UMT5EncoderModel - forward UMT5ForSequenceClassification [[autodoc]] UMT5ForSequenceClassification - forward UMT5ForTokenClassification [[autodoc]] UMT5ForTokenClassification - forward UMT5ForQuestionAnswering [[autodoc]] UMT5ForQuestionAnswering - forward
UDOP Overview The UDOP model was proposed in Unifying Vision, Text, and Layout for Universal Document Processing by Zineng Tang, Ziyi Yang, Guoxin Wang, Yuwei Fang, Yang Liu, Chenguang Zhu, Michael Zeng, Cha Zhang, Mohit Bansal. UDOP adopts an encoder-decoder Transformer architecture based on T5 for document AI tasks like document image classification, document parsing and document visual question answering. The abstract from the paper is the following: We propose Universal Document Processing (UDOP), a foundation Document AI model which unifies text, image, and layout modalities together with varied task formats, including document understanding and generation. UDOP leverages the spatial correlation between textual content and document image to model image, text, and layout modalities with one uniform representation. With a novel Vision-Text-Layout Transformer, UDOP unifies pretraining and multi-domain downstream tasks into a prompt-based sequence generation scheme. UDOP is pretrained on both large-scale unlabeled document corpora using innovative self-supervised objectives and diverse labeled data. UDOP also learns to generate document images from text and layout modalities via masked image reconstruction. To the best of our knowledge, this is the first time in the field of document AI that one model simultaneously achieves high-quality neural document editing and content customization. Our method sets the state-of-the-art on 9 Document AI tasks, e.g., document understanding and QA, across diverse data domains like finance reports, academic papers, and websites. UDOP ranks first on the leaderboard of the Document Understanding Benchmark (DUE).* UDOP architecture. Taken from the original paper. Usage tips In addition to input_ids, [UdopForConditionalGeneration] also expects the input bbox, which are the bounding boxes (i.e. 2D-positions) of the input tokens. These can be obtained using an external OCR engine such as Google's Tesseract (there's a Python wrapper available). Each bounding box should be in (x0, y0, x1, y1) format, where (x0, y0) corresponds to the position of the upper left corner in the bounding box, and (x1, y1) represents the position of the lower right corner. Note that one first needs to normalize the bounding boxes to be on a 0-1000 scale. To normalize, you can use the following function: python def normalize_bbox(bbox, width, height): return [ int(1000 * (bbox[0] / width)), int(1000 * (bbox[1] / height)), int(1000 * (bbox[2] / width)), int(1000 * (bbox[3] / height)), ] Here, width and height correspond to the width and height of the original document in which the token occurs. Those can be obtained using the Python Image Library (PIL) library for example, as follows: thon from PIL import Image Document can be a png, jpg, etc. PDFs must be converted to images. image = Image.open(name_of_your_document).convert("RGB") width, height = image.size At inference time, it's recommended to use the generate method to autoregressively generate text given a document image. One can use [UdopProcessor] to prepare images and text for the model. By default, this class uses the Tesseract engine to extract a list of words and boxes (coordinates) from a given document. Its functionality is equivalent to that of [LayoutLMv3Processor], hence it supports passing either apply_ocr=False in case you prefer to use your own OCR engine or apply_ocr=True in case you want the default OCR engine to be used. This model was contributed by nielsr. The original code can be found here. 
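Putting the UDOP usage tips above together, the following sketch runs generative document question answering with [UdopProcessor] and the generate method. The checkpoint name microsoft/udop-large, the example dataset nielsr/funsd-layoutlmv3, and the exact processor call pattern (image, question, OCR words, boxes) are assumptions based on the LayoutLMv3-style processing described above, not a verified recipe.

python
from datasets import load_dataset
from transformers import UdopProcessor, UdopForConditionalGeneration

# apply_ocr=False because the example dataset already provides words and normalized boxes
processor = UdopProcessor.from_pretrained("microsoft/udop-large", apply_ocr=False)
model = UdopForConditionalGeneration.from_pretrained("microsoft/udop-large")

# assumed example dataset with pre-extracted OCR words and bounding boxes
dataset = load_dataset("nielsr/funsd-layoutlmv3", split="train")
example = dataset[0]
image = example["image"]
words = example["tokens"]
boxes = example["bboxes"]

question = "Question answering. What is the date on the form?"

# assumed call pattern: image, task prompt, OCR words, and their boxes
encoding = processor(image, question, words, boxes=boxes, return_tensors="pt")

# autoregressive generation of the answer
predicted_ids = model.generate(**encoding, max_new_tokens=20)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])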
UdopConfig [[autodoc]] UdopConfig UdopTokenizer [[autodoc]] UdopTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary UdopTokenizerFast [[autodoc]] UdopTokenizerFast UdopProcessor [[autodoc]] UdopProcessor - call UdopModel [[autodoc]] UdopModel - forward UdopForConditionalGeneration [[autodoc]] UdopForConditionalGeneration - forward UdopEncoderModel [[autodoc]] UdopEncoderModel - forward
BertJapanese Overview The BERT models trained on Japanese text. There are models with two different tokenization methods: Tokenize with MeCab and WordPiece. This requires some extra dependencies, fugashi which is a wrapper around MeCab. Tokenize into characters. To use MecabTokenizer, you should pip install transformers["ja"] (or pip install -e .["ja"] if you install from source) to install dependencies. See details on cl-tohoku repository. Example of using a model with MeCab and WordPiece tokenization: thon import torch from transformers import AutoModel, AutoTokenizer bertjapanese = AutoModel.from_pretrained("cl-tohoku/bert-base-japanese") tokenizer = AutoTokenizer.from_pretrained("cl-tohoku/bert-base-japanese") Input Japanese Text line = "吾輩は猫である。" inputs = tokenizer(line, return_tensors="pt") print(tokenizer.decode(inputs["input_ids"][0])) [CLS] 吾輩 は 猫 で ある 。 [SEP] outputs = bertjapanese(**inputs) Example of using a model with Character tokenization: thon bertjapanese = AutoModel.from_pretrained("cl-tohoku/bert-base-japanese-char") tokenizer = AutoTokenizer.from_pretrained("cl-tohoku/bert-base-japanese-char") Input Japanese Text line = "吾輩は猫である。" inputs = tokenizer(line, return_tensors="pt") print(tokenizer.decode(inputs["input_ids"][0])) [CLS] 吾 輩 は 猫 で あ る 。 [SEP] outputs = bertjapanese(**inputs) This model was contributed by cl-tohoku. This implementation is the same as BERT, except for tokenization method. Refer to BERT documentation for API reference information. BertJapaneseTokenizer [[autodoc]] BertJapaneseTokenizer
ALIGN Overview The ALIGN model was proposed in Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision by Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yunhsuan Sung, Zhen Li, Tom Duerig. ALIGN is a multi-modal vision and language model. It can be used for image-text similarity and for zero-shot image classification. ALIGN features a dual-encoder architecture with EfficientNet as its vision encoder and BERT as its text encoder, and learns to align visual and text representations with contrastive learning. Unlike previous work, ALIGN leverages a massive noisy dataset and shows that the scale of the corpus can be used to achieve SOTA representations with a simple recipe. The abstract from the paper is the following: Pre-trained representations are becoming crucial for many NLP and perception tasks. While representation learning in NLP has transitioned to training on raw text without human annotations, visual and vision-language representations still rely heavily on curated training datasets that are expensive or require expert knowledge. For vision applications, representations are mostly learned using datasets with explicit class labels such as ImageNet or OpenImages. For vision-language, popular datasets like Conceptual Captions, MSCOCO, or CLIP all involve a non-trivial data collection (and cleaning) process. This costly curation process limits the size of datasets and hence hinders the scaling of trained models. In this paper, we leverage a noisy dataset of over one billion image alt-text pairs, obtained without expensive filtering or post-processing steps in the Conceptual Captions dataset. A simple dual-encoder architecture learns to align visual and language representations of the image and text pairs using a contrastive loss. We show that the scale of our corpus can make up for its noise and leads to state-of-the-art representations even with such a simple learning scheme. Our visual representation achieves strong performance when transferred to classification tasks such as ImageNet and VTAB. The aligned visual and language representations enables zero-shot image classification and also set new state-of-the-art results on Flickr30K and MSCOCO image-text retrieval benchmarks, even when compared with more sophisticated cross-attention models. The representations also enable cross-modality search with complex text and text + image queries. This model was contributed by Alara Dirik. The original code is not released, this implementation is based on the Kakao Brain implementation based on the original paper. Usage example ALIGN uses EfficientNet to get visual features and BERT to get the text features. Both the text and visual features are then projected to a latent space with identical dimension. The dot product between the projected image and text features is then used as a similarity score. [AlignProcessor] wraps [EfficientNetImageProcessor] and [BertTokenizer] into a single instance to both encode the text and preprocess the images. The following example shows how to get the image-text similarity scores using [AlignProcessor] and [AlignModel]. 
thon import requests import torch from PIL import Image from transformers import AlignProcessor, AlignModel processor = AlignProcessor.from_pretrained("kakaobrain/align-base") model = AlignModel.from_pretrained("kakaobrain/align-base") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) candidate_labels = ["an image of a cat", "an image of a dog"] inputs = processor(text=candidate_labels, images=image, return_tensors="pt") with torch.no_grad(): outputs = model(**inputs) this is the image-text similarity score logits_per_image = outputs.logits_per_image we can take the softmax to get the label probabilities probs = logits_per_image.softmax(dim=1) print(probs) Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ALIGN. A blog post on ALIGN and the COYO-700M dataset. A zero-shot image classification demo. Model card of kakaobrain/align-base model. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it. The resource should ideally demonstrate something new instead of duplicating an existing resource. AlignConfig [[autodoc]] AlignConfig - from_text_vision_configs AlignTextConfig [[autodoc]] AlignTextConfig AlignVisionConfig [[autodoc]] AlignVisionConfig AlignProcessor [[autodoc]] AlignProcessor AlignModel [[autodoc]] AlignModel - forward - get_text_features - get_image_features AlignTextModel [[autodoc]] AlignTextModel - forward AlignVisionModel [[autodoc]] AlignVisionModel - forward
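Beyond the similarity scores shown above, [AlignModel] also exposes get_text_features and get_image_features (listed in the API reference above), which can be used to build embeddings for retrieval. The following is a small sketch continuing the example; the exact keyword arguments accepted by the processor for text-only inputs are an assumption.

python
import requests
import torch
from PIL import Image
from transformers import AlignProcessor, AlignModel

processor = AlignProcessor.from_pretrained("kakaobrain/align-base")
model = AlignModel.from_pretrained("kakaobrain/align-base")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

with torch.no_grad():
    # text embeddings from the BERT text encoder
    text_inputs = processor(text=["an image of a cat", "an image of a dog"], padding=True, return_tensors="pt")
    text_features = model.get_text_features(**text_inputs)
    # image embeddings from the EfficientNet vision encoder
    image_inputs = processor(images=image, return_tensors="pt")
    image_features = model.get_image_features(**image_inputs)

# cosine similarity between the image and each text
image_features = image_features / image_features.norm(dim=-1, keepdim=True)
text_features = text_features / text_features.norm(dim=-1, keepdim=True)
print(image_features @ text_features.T)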
DialoGPT Overview DialoGPT was proposed in DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan. It's a GPT2 Model trained on 147M conversation-like exchanges extracted from Reddit. The abstract from the paper is the following: We present a large, tunable neural conversational response generation model, DialoGPT (dialogue generative pre-trained transformer). Trained on 147M conversation-like exchanges extracted from Reddit comment chains over a period spanning from 2005 through 2017, DialoGPT extends the Hugging Face PyTorch transformer to attain a performance close to human both in terms of automatic and human evaluation in single-turn dialogue settings. We show that conversational systems that leverage DialoGPT generate more relevant, contentful and context-consistent responses than strong baseline systems. The pre-trained model and training pipeline are publicly released to facilitate research into neural response generation and the development of more intelligent open-domain dialogue systems. The original code can be found here. Usage tips DialoGPT is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than the left. DialoGPT was trained with a causal language modeling (CLM) objective on conversational data and is therefore powerful at response generation in open-domain dialogue systems. DialoGPT enables the user to create a chat bot in just 10 lines of code as shown on DialoGPT's model card. Training: In order to train or fine-tune DialoGPT, one can use causal language modeling training. To cite the official paper: We follow the OpenAI GPT-2 to model a multiturn dialogue session as a long text and frame the generation task as language modeling. We first concatenate all dialog turns within a dialogue session into a long text x_1, ..., x_N (N is the sequence length), ended by the end-of-text token. For more information please refer to the original paper. DialoGPT's architecture is based on the GPT2 model; refer to GPT2's documentation page for API reference and examples.
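Along the lines of the DialoGPT model card mentioned in the usage tips above, here is a minimal single-turn chat sketch; the checkpoint name microsoft/DialoGPT-medium, the user message, and the generation settings are just examples.

python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

# a dialogue turn is the user text followed by the end-of-text token
input_ids = tokenizer.encode("Does money buy happiness?" + tokenizer.eos_token, return_tensors="pt")

# generate a response; the chat history (here a single turn) is part of the prompt
chat_history_ids = model.generate(input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)

# the response is everything generated after the prompt
response = tokenizer.decode(chat_history_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True)
print(response)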
RegNet Overview The RegNet model was proposed in Designing Network Design Spaces by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollár. The authors design search spaces to perform Neural Architecture Search (NAS). They first start from a high dimensional search space and iteratively reduce the search space by empirically applying constraints based on the best-performing models sampled by the current search space. The abstract from the paper is the following: In this work, we present a new network design paradigm. Our goal is to help advance the understanding of network design and discover design principles that generalize across settings. Instead of focusing on designing individual network instances, we design network design spaces that parametrize populations of networks. The overall process is analogous to classic manual design of networks, but elevated to the design space level. Using our methodology we explore the structure aspect of network design and arrive at a low-dimensional design space consisting of simple, regular networks that we call RegNet. The core insight of the RegNet parametrization is surprisingly simple: widths and depths of good networks can be explained by a quantized linear function. We analyze the RegNet design space and arrive at interesting findings that do not match the current practice of network design. The RegNet design space provides simple and fast networks that work well across a wide range of flop regimes. Under comparable training settings and flops, the RegNet models outperform the popular EfficientNet models while being up to 5x faster on GPUs. This model was contributed by Francesco. The TensorFlow version of the model was contributed by sayakpaul and ariG23498. The original code can be found here. The huge 10B model from Self-supervised Pretraining of Visual Features in the Wild, trained on one billion Instagram images, is available on the hub Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with RegNet. [RegNetForImageClassification] is supported by this example script and notebook. See also: Image classification task guide If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. RegNetConfig [[autodoc]] RegNetConfig RegNetModel [[autodoc]] RegNetModel - forward RegNetForImageClassification [[autodoc]] RegNetForImageClassification - forward TFRegNetModel [[autodoc]] TFRegNetModel - call TFRegNetForImageClassification [[autodoc]] TFRegNetForImageClassification - call FlaxRegNetModel [[autodoc]] FlaxRegNetModel - call FlaxRegNetForImageClassification [[autodoc]] FlaxRegNetForImageClassification - call
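For quick experimentation, RegNet classification checkpoints can also be used through the image-classification pipeline. The following is a minimal sketch; the checkpoint name facebook/regnet-y-040 and the image URL are assumptions.

python
from transformers import pipeline

# image classification backed by a RegNet checkpoint (checkpoint name is an assumption)
classifier = pipeline(task="image-classification", model="facebook/regnet-y-040")

# the pipeline accepts a URL, local path, or PIL image
predictions = classifier("http://images.cocodataset.org/val2017/000000039769.jpg")
for prediction in predictions:
    print(prediction["label"], round(prediction["score"], 3))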
Depth Anything Overview The Depth Anything model was proposed in Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data by Lihe Yang, Bingyi Kang, Zilong Huang, Xiaogang Xu, Jiashi Feng, Hengshuang Zhao. Depth Anything is based on the DPT architecture, trained on ~62 million images, obtaining state-of-the-art results for both relative and absolute depth estimation. The abstract from the paper is the following: This work presents Depth Anything, a highly practical solution for robust monocular depth estimation. Without pursuing novel technical modules, we aim to build a simple yet powerful foundation model dealing with any images under any circumstances. To this end, we scale up the dataset by designing a data engine to collect and automatically annotate large-scale unlabeled data (~62M), which significantly enlarges the data coverage and thus is able to reduce the generalization error. We investigate two simple yet effective strategies that make data scaling-up promising. First, a more challenging optimization target is created by leveraging data augmentation tools. It compels the model to actively seek extra visual knowledge and acquire robust representations. Second, an auxiliary supervision is developed to enforce the model to inherit rich semantic priors from pre-trained encoders. We evaluate its zero-shot capabilities extensively, including six public datasets and randomly captured photos. It demonstrates impressive generalization ability. Further, through fine-tuning it with metric depth information from NYUv2 and KITTI, new SOTAs are set. Our better depth model also results in a better depth-conditioned ControlNet. Depth Anything overview. Taken from the original paper. This model was contributed by nielsr. The original code can be found here. Usage example There are 2 main ways to use Depth Anything: either using the pipeline API, which abstracts away all the complexity for you, or by using the DepthAnythingForDepthEstimation class yourself. 
Pipeline API The pipeline allows to use the model in a few lines of code: thon from transformers import pipeline from PIL import Image import requests load pipe pipe = pipeline(task="depth-estimation", model="LiheYoung/depth-anything-small-hf") load image url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) inference depth = pipe(image)["depth"] Using the model yourself If you want to do the pre- and postprocessing yourself, here's how to do that: thon from transformers import AutoImageProcessor, AutoModelForDepthEstimation import torch import numpy as np from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) image_processor = AutoImageProcessor.from_pretrained("LiheYoung/depth-anything-small-hf") model = AutoModelForDepthEstimation.from_pretrained("LiheYoung/depth-anything-small-hf") prepare image for the model inputs = image_processor(images=image, return_tensors="pt") with torch.no_grad(): outputs = model(**inputs) predicted_depth = outputs.predicted_depth interpolate to original size prediction = torch.nn.functional.interpolate( predicted_depth.unsqueeze(1), size=image.size[::-1], mode="bicubic", align_corners=False, ) visualize the prediction output = prediction.squeeze().cpu().numpy() formatted = (output * 255 / np.max(output)).astype("uint8") depth = Image.fromarray(formatted) Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Depth Anything. Monocular depth estimation task guide A notebook showcasing inference with [DepthAnythingForDepthEstimation] can be found here. 🌎 If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. DepthAnythingConfig [[autodoc]] DepthAnythingConfig DepthAnythingForDepthEstimation [[autodoc]] DepthAnythingForDepthEstimation - forward
YOLOS Overview The YOLOS model was proposed in You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu. YOLOS proposes to just leverage the plain Vision Transformer (ViT) for object detection, inspired by DETR. It turns out that a base-sized encoder-only Transformer can also achieve 42 AP on COCO, similar to DETR and much more complex frameworks such as Faster R-CNN. The abstract from the paper is the following: Can Transformer perform 2D object- and region-level recognition from a pure sequence-to-sequence perspective with minimal knowledge about the 2D spatial structure? To answer this question, we present You Only Look at One Sequence (YOLOS), a series of object detection models based on the vanilla Vision Transformer with the fewest possible modifications, region priors, as well as inductive biases of the target task. We find that YOLOS pre-trained on the mid-sized ImageNet-1k dataset only can already achieve quite competitive performance on the challenging COCO object detection benchmark, e.g., YOLOS-Base directly adopted from BERT-Base architecture can obtain 42.0 box AP on COCO val. We also discuss the impacts as well as limitations of current pre-train schemes and model scaling strategies for Transformer in vision through YOLOS. YOLOS architecture. Taken from the original paper. This model was contributed by nielsr. The original code can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with YOLOS. All example notebooks illustrating inference + fine-tuning [YolosForObjectDetection] on a custom dataset can be found here. See also: Object detection task guide If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. Use [YolosImageProcessor] for preparing images (and optional targets) for the model. Contrary to DETR, YOLOS doesn't require a pixel_mask to be created. YolosConfig [[autodoc]] YolosConfig YolosImageProcessor [[autodoc]] YolosImageProcessor - preprocess - pad - post_process_object_detection YolosFeatureExtractor [[autodoc]] YolosFeatureExtractor - call - pad - post_process_object_detection YolosModel [[autodoc]] YolosModel - forward YolosForObjectDetection [[autodoc]] YolosForObjectDetection - forward
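To make the [YolosImageProcessor] note above concrete, here is a minimal detection sketch using post_process_object_detection; note that, unlike DETR, no pixel_mask is required. The checkpoint name hustvl/yolos-tiny, the image URL, and the 0.9 threshold are just examples.

python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForObjectDetection

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained("hustvl/yolos-tiny")
model = AutoModelForObjectDetection.from_pretrained("hustvl/yolos-tiny")

inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# convert raw outputs to boxes, labels, and scores on the original image size
target_sizes = torch.tensor([image.size[::-1]])
results = image_processor.post_process_object_detection(outputs, threshold=0.9, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), [round(c, 2) for c in box.tolist()])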
Mistral Overview Mistral was introduced in this blog post by Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed. The introduction of the blog post says: Mistral AI team is proud to release Mistral 7B, the most powerful language model for its size to date. Mistral-7B is the first large language model (LLM) released by mistral.ai. Architectural details Mistral-7B is a decoder-only Transformer with the following architectural choices: Sliding Window Attention - Trained with 8k context length and fixed cache size, with a theoretical attention span of 128K tokens GQA (Grouped Query Attention) - allowing faster inference and lower cache size. Byte-fallback BPE tokenizer - ensures that characters are never mapped to out of vocabulary tokens. For more details refer to the release blog post. License Mistral-7B is released under the Apache 2.0 license. Usage tips The Mistral team has released 3 checkpoints: a base model, Mistral-7B-v0.1, which has been pre-trained to predict the next token on internet-scale data. an instruction tuned model, Mistral-7B-Instruct-v0.1, which is the base model optimized for chat purposes using supervised fine-tuning (SFT) and direct preference optimization (DPO). an improved instruction tuned model, Mistral-7B-Instruct-v0.2, which improves upon Mistral-7B-Instruct-v0.1. The base model can be used as follows: thon from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", device_map="auto") tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1") prompt = "My favourite condiment is" model_inputs = tokenizer([prompt], return_tensors="pt").to("cuda") generated_ids = model.generate(**model_inputs, max_new_tokens=100, do_sample=True) tokenizer.batch_decode(generated_ids)[0] "My favourite condiment is to " The instruction tuned model can be used as follows: thon from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2", device_map="auto") tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2") messages = [ {"role": "user", "content": "What is your favourite condiment?"}, {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"}, {"role": "user", "content": "Do you have mayonnaise recipes?"} ] model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda") generated_ids = model.generate(model_inputs, max_new_tokens=100, do_sample=True) tokenizer.batch_decode(generated_ids)[0] "Mayonnaise can be made as follows: ()" As can be seen, the instruction-tuned model requires a chat template to be applied to make sure the inputs are prepared in the right format. Speeding up Mistral by using Flash Attention The code snippets above showcase inference without any optimization tricks. However, one can drastically speed up the model by leveraging Flash Attention, which is a faster implementation of the attention mechanism used inside the model. First, make sure to install the latest version of Flash Attention 2 to include the sliding window attention feature.
pip install -U flash-attn --no-build-isolation Also make sure that your hardware is compatible with Flash Attention 2. Read more about it in the official documentation of the flash attention repository. Also make sure to load your model in half-precision (e.g. torch.float16). To load and run a model using Flash Attention-2, refer to the snippet below: thon import torch from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", torch_dtype=torch.float16, attn_implementation="flash_attention_2", device_map="auto") tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1") prompt = "My favourite condiment is" model_inputs = tokenizer([prompt], return_tensors="pt").to("cuda") generated_ids = model.generate(**model_inputs, max_new_tokens=100, do_sample=True) tokenizer.batch_decode(generated_ids)[0] "My favourite condiment is to ()" Expected speedups Below is an expected speedup diagram that compares pure inference time between the native implementation in transformers using the mistralai/Mistral-7B-v0.1 checkpoint and the Flash Attention 2 version of the model. Sliding window Attention The current implementation supports the sliding window attention mechanism and memory efficient cache management. To enable sliding window attention, just make sure to have a flash-attn version that is compatible with sliding window attention (>=2.3.0). The Flash Attention-2 model also uses a more memory efficient cache slicing mechanism. Following the official implementation of the Mistral model, which uses a rolling cache, we keep the cache size fixed (self.config.sliding_window), support batched generation only for padding_side="left", and use the absolute position of the current token to compute the positional embedding. Shrinking down Mistral using quantization As the Mistral model has 7 billion parameters, that would require about 14GB of GPU RAM in half precision (float16), since each parameter is stored in 2 bytes. However, one can shrink down the size of the model using quantization. If the model is quantized to 4 bits (or half a byte per parameter), that requires only about 3.5GB of RAM. Quantizing a model is as simple as passing a quantization_config to the model. Below, we'll leverage the bitsandbytes quantization (but refer to this page for other quantization methods): thon import torch from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig specify how to quantize the model quantization_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.float16, ) model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2", quantization_config=quantization_config, device_map="auto") tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2") prompt = "My favourite condiment is" messages = [ {"role": "user", "content": "What is your favourite condiment?"}, {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. 
It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"}, {"role": "user", "content": "Do you have mayonnaise recipes?"} ] model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda") generated_ids = model.generate(model_inputs, max_new_tokens=100, do_sample=True) tokenizer.batch_decode(generated_ids)[0] "The expected output" This model was contributed by Younes Belkada and Arthur Zucker . The original code can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Mistral. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. A demo notebook to perform supervised fine-tuning (SFT) of Mistral-7B can be found here. 🌎 A blog post on how to fine-tune LLMs in 2024 using Hugging Face tooling. 🌎 The Alignment Handbook by Hugging Face includes scripts and recipes to perform supervised fine-tuning (SFT) and direct preference optimization with Mistral-7B. This includes scripts for full fine-tuning, QLoRa on a single GPU as well as multi-GPU fine-tuning. Causal language modeling task guide MistralConfig [[autodoc]] MistralConfig MistralModel [[autodoc]] MistralModel - forward MistralForCausalLM [[autodoc]] MistralForCausalLM - forward MistralForSequenceClassification [[autodoc]] MistralForSequenceClassification - forward FlaxMistralModel [[autodoc]] FlaxMistralModel - call FlaxMistralForCausalLM [[autodoc]] FlaxMistralForCausalLM - call
Swin2SR Overview The Swin2SR model was proposed in Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration by Marcos V. Conde, Ui-Jin Choi, Maxime Burchi, Radu Timofte. Swin2SR improves the SwinIR model by incorporating Swin Transformer v2 layers, which mitigate issues such as training instability, resolution gaps between pre-training and fine-tuning, and hunger on data. The abstract from the paper is the following: Compression plays an important role on the efficient transmission and storage of images and videos through band-limited systems such as streaming services, virtual reality or videogames. However, compression unavoidably leads to artifacts and the loss of the original information, which may severely degrade the visual quality. For these reasons, quality enhancement of compressed images has become a popular research topic. While most state-of-the-art image restoration methods are based on convolutional neural networks, other transformers-based methods such as SwinIR, show impressive performance on these tasks. In this paper, we explore the novel Swin Transformer V2, to improve SwinIR for image super-resolution, and in particular, the compressed input scenario. Using this method we can tackle the major issues in training transformer vision models, such as training instability, resolution gaps between pre-training and fine-tuning, and hunger on data. We conduct experiments on three representative tasks: JPEG compression artifacts removal, image super-resolution (classical and lightweight), and compressed image super-resolution. Experimental results demonstrate that our method, Swin2SR, can improve the training convergence and performance of SwinIR, and is a top-5 solution at the "AIM 2022 Challenge on Super-Resolution of Compressed Image and Video". Swin2SR architecture. Taken from the original paper. This model was contributed by nielsr. The original code can be found here. Resources Demo notebooks for Swin2SR can be found here. A demo Space for image super-resolution with Swin2SR can be found here. Swin2SRImageProcessor [[autodoc]] Swin2SRImageProcessor - preprocess Swin2SRConfig [[autodoc]] Swin2SRConfig Swin2SRModel [[autodoc]] Swin2SRModel - forward Swin2SRForImageSuperResolution [[autodoc]] Swin2SRForImageSuperResolution - forward
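As a quick illustration of the API listed above, the sketch below upscales an image with [Swin2SRForImageSuperResolution]. The caidas/swin2SR-classical-sr-x2-64 checkpoint and the sample image URL are assumptions; any Swin2SR checkpoint follows the same flow:

```python
import numpy as np
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Swin2SRForImageSuperResolution

# checkpoint and image URL are illustrative assumptions
processor = AutoImageProcessor.from_pretrained("caidas/swin2SR-classical-sr-x2-64")
model = Swin2SRForImageSuperResolution.from_pretrained("caidas/swin2SR-classical-sr-x2-64")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# the upscaled image lives in outputs.reconstruction, with pixel values in [0, 1]
output = outputs.reconstruction.squeeze().clamp(0, 1).numpy()
output = np.moveaxis(output, 0, -1)  # channels-first -> channels-last
upscaled = Image.fromarray((output * 255.0).round().astype(np.uint8))
upscaled.save("upscaled.png")
```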
FLAVA Overview The FLAVA model was proposed in FLAVA: A Foundational Language And Vision Alignment Model by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela and is accepted at CVPR 2022. The paper aims at creating a single unified foundation model which can work across vision, language as well as vision-and-language multimodal tasks. The abstract from the paper is the following: State-of-the-art vision and vision-and-language models rely on large-scale visio-linguistic pretraining for obtaining good performance on a variety of downstream tasks. Generally, such models are often either cross-modal (contrastive) or multi-modal (with earlier fusion) but not both; and they often only target specific modalities or tasks. A promising direction would be to use a single holistic universal model, as a "foundation", that targets all modalities at once -- a true vision and language foundation model should be good at vision tasks, language tasks, and cross- and multi-modal vision and language tasks. We introduce FLAVA as such a model and demonstrate impressive performance on a wide range of 35 tasks spanning these target modalities. This model was contributed by aps. The original code can be found here. FlavaConfig [[autodoc]] FlavaConfig FlavaTextConfig [[autodoc]] FlavaTextConfig FlavaImageConfig [[autodoc]] FlavaImageConfig FlavaMultimodalConfig [[autodoc]] FlavaMultimodalConfig FlavaImageCodebookConfig [[autodoc]] FlavaImageCodebookConfig FlavaProcessor [[autodoc]] FlavaProcessor FlavaFeatureExtractor [[autodoc]] FlavaFeatureExtractor FlavaImageProcessor [[autodoc]] FlavaImageProcessor - preprocess FlavaForPreTraining [[autodoc]] FlavaForPreTraining - forward FlavaModel [[autodoc]] FlavaModel - forward - get_text_features - get_image_features FlavaImageCodebook [[autodoc]] FlavaImageCodebook - forward - get_codebook_indices - get_codebook_probs FlavaTextModel [[autodoc]] FlavaTextModel - forward FlavaImageModel [[autodoc]] FlavaImageModel - forward FlavaMultimodalModel [[autodoc]] FlavaMultimodalModel - forward
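Since FLAVA exposes both unimodal and multimodal encoders, a short sketch of the feature-extraction entry points may help. The facebook/flava-full checkpoint and the sample image are assumptions for illustration:

```python
import requests
import torch
from PIL import Image
from transformers import FlavaModel, FlavaProcessor

# checkpoint and image URL are illustrative assumptions
processor = FlavaProcessor.from_pretrained("facebook/flava-full")
model = FlavaModel.from_pretrained("facebook/flava-full")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

text_inputs = processor(text=["a photo of two cats"], return_tensors="pt", padding=True)
image_inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    # unimodal hidden states; index [:, 0] for a pooled [CLS]-style representation
    text_embeddings = model.get_text_features(**text_inputs)
    image_embeddings = model.get_image_features(**image_inputs)

print(text_embeddings.shape, image_embeddings.shape)
```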
Dilated Neighborhood Attention Transformer Overview DiNAT was proposed in Dilated Neighborhood Attention Transformer by Ali Hassani and Humphrey Shi. It extends NAT by adding a Dilated Neighborhood Attention pattern to capture global context, and shows significant performance improvements over it. The abstract from the paper is the following: *Transformers are quickly becoming one of the most heavily applied deep learning architectures across modalities, domains, and tasks. In vision, on top of ongoing efforts into plain transformers, hierarchical transformers have also gained significant attention, thanks to their performance and easy integration into existing frameworks. These models typically employ localized attention mechanisms, such as the sliding-window Neighborhood Attention (NA) or Swin Transformer's Shifted Window Self Attention. While effective at reducing self attention's quadratic complexity, local attention weakens two of the most desirable properties of self attention: long range inter-dependency modeling, and global receptive field. In this paper, we introduce Dilated Neighborhood Attention (DiNA), a natural, flexible and efficient extension to NA that can capture more global context and expand receptive fields exponentially at no additional cost. NA's local attention and DiNA's sparse global attention complement each other, and therefore we introduce Dilated Neighborhood Attention Transformer (DiNAT), a new hierarchical vision transformer built upon both. DiNAT variants enjoy significant improvements over strong baselines such as NAT, Swin, and ConvNeXt. Our large model is faster and ahead of its Swin counterpart by 1.5% box AP in COCO object detection, 1.3% mask AP in COCO instance segmentation, and 1.1% mIoU in ADE20K semantic segmentation. Paired with new frameworks, our large variant is the new state of the art panoptic segmentation model on COCO (58.2 PQ) and ADE20K (48.5 PQ), and instance segmentation model on Cityscapes (44.5 AP) and ADE20K (35.4 AP) (no extra data). It also matches the state of the art specialized semantic segmentation models on ADE20K (58.2 mIoU), and ranks second on Cityscapes (84.5 mIoU) (no extra data). * Neighborhood Attention with different dilation values. Taken from the original paper. This model was contributed by Ali Hassani. The original code can be found here. Usage tips DiNAT can be used as a backbone. When output_hidden_states = True, it will output both hidden_states and reshaped_hidden_states. The reshaped_hidden_states have a shape of (batch, num_channels, height, width) rather than (batch_size, height, width, num_channels). Notes: - DiNAT depends on NATTEN's implementation of Neighborhood Attention and Dilated Neighborhood Attention. You can install it with pre-built wheels for Linux by referring to shi-labs.com/natten, or build on your system by running pip install natten. Note that the latter will likely take time to compile. NATTEN does not support Windows devices yet. - Patch size of 4 is only supported at the moment. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DiNAT. [DinatForImageClassification] is supported by this example script and notebook. See also: Image classification task guide If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. 
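To make the backbone usage tip above concrete, here is a small sketch that inspects hidden_states and reshaped_hidden_states. It assumes natten is installed and uses the shi-labs/dinat-mini-in1k-224 checkpoint as an illustrative choice:

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, DinatModel

# requires the natten package (see the installation note above);
# checkpoint and image URL are illustrative assumptions
processor = AutoImageProcessor.from_pretrained("shi-labs/dinat-mini-in1k-224")
model = DinatModel.from_pretrained("shi-labs/dinat-mini-in1k-224")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# hidden_states: (batch, height, width, num_channels)
# reshaped_hidden_states: (batch, num_channels, height, width), convenient for conv-style heads
print(outputs.hidden_states[-1].shape)
print(outputs.reshaped_hidden_states[-1].shape)
```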
DinatConfig [[autodoc]] DinatConfig DinatModel [[autodoc]] DinatModel - forward DinatForImageClassification [[autodoc]] DinatForImageClassification - forward
Wav2Vec2-Conformer Overview The Wav2Vec2-Conformer was added to an updated version of fairseq S2T: Fast Speech-to-Text Modeling with fairseq by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino. The official results of the model can be found in Table 3 and Table 4 of the paper. The Wav2Vec2-Conformer weights were released by the Meta AI team within the Fairseq library. This model was contributed by patrickvonplaten. The original code can be found here. Usage tips Wav2Vec2-Conformer follows the same architecture as Wav2Vec2, but replaces the Attention-block with a Conformer-block as introduced in Conformer: Convolution-augmented Transformer for Speech Recognition. For the same number of layers, Wav2Vec2-Conformer requires more parameters than Wav2Vec2, but also yields an improved word error rate. Wav2Vec2-Conformer uses the same tokenizer and feature extractor as Wav2Vec2. Wav2Vec2-Conformer can use either no relative position embeddings, Transformer-XL-like position embeddings, or rotary position embeddings by setting the correct config.position_embeddings_type. Resources Audio classification task guide Automatic speech recognition task guide Wav2Vec2ConformerConfig [[autodoc]] Wav2Vec2ConformerConfig Wav2Vec2Conformer specific outputs [[autodoc]] models.wav2vec2_conformer.modeling_wav2vec2_conformer.Wav2Vec2ConformerForPreTrainingOutput Wav2Vec2ConformerModel [[autodoc]] Wav2Vec2ConformerModel - forward Wav2Vec2ConformerForCTC [[autodoc]] Wav2Vec2ConformerForCTC - forward Wav2Vec2ConformerForSequenceClassification [[autodoc]] Wav2Vec2ConformerForSequenceClassification - forward Wav2Vec2ConformerForAudioFrameClassification [[autodoc]] Wav2Vec2ConformerForAudioFrameClassification - forward Wav2Vec2ConformerForXVector [[autodoc]] Wav2Vec2ConformerForXVector - forward Wav2Vec2ConformerForPreTraining [[autodoc]] Wav2Vec2ConformerForPreTraining - forward
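Because the tokenizer and feature extractor match Wav2Vec2, ASR inference looks the same as for Wav2Vec2. A minimal sketch, assuming the facebook/wav2vec2-conformer-rope-large-960h-ft checkpoint and a LibriSpeech demo sample:

```python
import torch
from datasets import load_dataset
from transformers import AutoProcessor, Wav2Vec2ConformerForCTC

# checkpoint and dataset are illustrative assumptions
processor = AutoProcessor.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft")
model = Wav2Vec2ConformerForCTC.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft")

# load a 16kHz speech sample
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
audio = dataset[0]["audio"]

inputs = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# greedy CTC decoding with the Wav2Vec2 tokenizer
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```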
MarkupLM Overview The MarkupLM model was proposed in MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding by Junlong Li, Yiheng Xu, Lei Cui, Furu Wei. MarkupLM is BERT, but applied to HTML pages instead of raw text documents. The model incorporates additional embedding layers to improve performance, similar to LayoutLM. The model can be used for tasks like question answering on web pages or information extraction from web pages. It obtains state-of-the-art results on 2 important benchmarks: - WebSRC, a dataset for Web-Based Structural Reading Comprehension (a bit like SQuAD but for web pages) - SWDE, a dataset for information extraction from web pages (basically named-entity recognition on web pages) The abstract from the paper is the following: Multimodal pre-training with text, layout, and image has made significant progress for Visually-rich Document Understanding (VrDU), especially the fixed-layout documents such as scanned document images. While, there are still a large number of digital documents where the layout information is not fixed and needs to be interactively and dynamically rendered for visualization, making existing layout-based pre-training approaches not easy to apply. In this paper, we propose MarkupLM for document understanding tasks with markup languages as the backbone such as HTML/XML-based documents, where text and markup information is jointly pre-trained. Experiment results show that the pre-trained MarkupLM significantly outperforms the existing strong baseline models on several document understanding tasks. The pre-trained model and code will be publicly available. This model was contributed by nielsr. The original code can be found here. Usage tips In addition to input_ids, [~MarkupLMModel.forward] expects 2 additional inputs, namely xpath_tags_seq and xpath_subs_seq. These are the XPATH tags and subscripts respectively for each token in the input sequence. One can use [MarkupLMProcessor] to prepare all data for the model. Refer to the usage guide for more info. MarkupLM architecture. Taken from the original paper. Usage: MarkupLMProcessor The easiest way to prepare data for the model is to use [MarkupLMProcessor], which internally combines a feature extractor ([MarkupLMFeatureExtractor]) and a tokenizer ([MarkupLMTokenizer] or [MarkupLMTokenizerFast]). The feature extractor is used to extract all nodes and xpaths from the HTML strings, which are then provided to the tokenizer, which turns them into the token-level inputs of the model (input_ids etc.). Note that you can still use the feature extractor and tokenizer separately, if you only want to handle one of the two tasks. thon from transformers import MarkupLMFeatureExtractor, MarkupLMTokenizerFast, MarkupLMProcessor feature_extractor = MarkupLMFeatureExtractor() tokenizer = MarkupLMTokenizerFast.from_pretrained("microsoft/markuplm-base") processor = MarkupLMProcessor(feature_extractor, tokenizer) In short, one can provide HTML strings (and possibly additional data) to [MarkupLMProcessor], and it will create the inputs expected by the model. Internally, the processor first uses [MarkupLMFeatureExtractor] to get a list of nodes and corresponding xpaths. The nodes and xpaths are then provided to [MarkupLMTokenizer] or [MarkupLMTokenizerFast], which converts them to token-level input_ids, attention_mask, token_type_ids, xpath_subs_seq, xpath_tags_seq. Optionally, one can provide node labels to the processor, which are turned into token-level labels. 
[MarkupLMFeatureExtractor] uses Beautiful Soup, a Python library for pulling data out of HTML and XML files, under the hood. Note that you can still use your own parsing solution of choice, and provide the nodes and xpaths yourself to [MarkupLMTokenizer] or [MarkupLMTokenizerFast]. In total, there are 5 use cases that are supported by the processor. Below, we list them all. Note that each of these use cases work for both batched and non-batched inputs (we illustrate them for non-batched inputs). Use case 1: web page classification (training, inference) + token classification (inference), parse_html = True This is the simplest case, in which the processor will use the feature extractor to get all nodes and xpaths from the HTML. thon from transformers import MarkupLMProcessor processor = MarkupLMProcessor.from_pretrained("microsoft/markuplm-base") html_string = """ <!DOCTYPE html> Hello world Welcome Here is my website. """ note that you can also add provide all tokenizer parameters here such as padding, truncation encoding = processor(html_string, return_tensors="pt") print(encoding.keys()) dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'xpath_tags_seq', 'xpath_subs_seq']) Use case 2: web page classification (training, inference) + token classification (inference), parse_html=False In case one already has obtained all nodes and xpaths, one doesn't need the feature extractor. In that case, one should provide the nodes and corresponding xpaths themselves to the processor, and make sure to set parse_html to False. thon from transformers import MarkupLMProcessor processor = MarkupLMProcessor.from_pretrained("microsoft/markuplm-base") processor.parse_html = False nodes = ["hello", "world", "how", "are"] xpaths = ["/html/body/div/li[1]/div/span", "/html/body/div/li[1]/div/span", "html/body", "html/body/div"] encoding = processor(nodes=nodes, xpaths=xpaths, return_tensors="pt") print(encoding.keys()) dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'xpath_tags_seq', 'xpath_subs_seq']) Use case 3: token classification (training), parse_html=False For token classification tasks (such as SWDE), one can also provide the corresponding node labels in order to train a model. The processor will then convert these into token-level labels. By default, it will only label the first wordpiece of a word, and label the remaining wordpieces with -100, which is the ignore_index of PyTorch's CrossEntropyLoss. In case you want all wordpieces of a word to be labeled, you can initialize the tokenizer with only_label_first_subword set to False. thon from transformers import MarkupLMProcessor processor = MarkupLMProcessor.from_pretrained("microsoft/markuplm-base") processor.parse_html = False nodes = ["hello", "world", "how", "are"] xpaths = ["/html/body/div/li[1]/div/span", "/html/body/div/li[1]/div/span", "html/body", "html/body/div"] node_labels = [1, 2, 2, 1] encoding = processor(nodes=nodes, xpaths=xpaths, node_labels=node_labels, return_tensors="pt") print(encoding.keys()) dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'xpath_tags_seq', 'xpath_subs_seq', 'labels']) Use case 4: web page question answering (inference), parse_html=True For question answering tasks on web pages, you can provide a question to the processor. By default, the processor will use the feature extractor to get all nodes and xpaths, and create [CLS] question tokens [SEP] word tokens [SEP]. 
thon from transformers import MarkupLMProcessor processor = MarkupLMProcessor.from_pretrained("microsoft/markuplm-base") html_string = """ <!DOCTYPE html> Hello world Welcome My name is Niels. """ question = "What's his name?" encoding = processor(html_string, questions=question, return_tensors="pt") print(encoding.keys()) dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'xpath_tags_seq', 'xpath_subs_seq']) Use case 5: web page question answering (inference), parse_html=False For question answering tasks (such as WebSRC), you can provide a question to the processor. If you have extracted all nodes and xpaths yourself, you can provide them directly to the processor. Make sure to set parse_html to False. thon from transformers import MarkupLMProcessor processor = MarkupLMProcessor.from_pretrained("microsoft/markuplm-base") processor.parse_html = False nodes = ["hello", "world", "how", "are"] xpaths = ["/html/body/div/li[1]/div/span", "/html/body/div/li[1]/div/span", "html/body", "html/body/div"] question = "What's his name?" encoding = processor(nodes=nodes, xpaths=xpaths, questions=question, return_tensors="pt") print(encoding.keys()) dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'xpath_tags_seq', 'xpath_subs_seq']) Resources Demo notebooks Text classification task guide Token classification task guide Question answering task guide MarkupLMConfig [[autodoc]] MarkupLMConfig - all MarkupLMFeatureExtractor [[autodoc]] MarkupLMFeatureExtractor - call MarkupLMTokenizer [[autodoc]] MarkupLMTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary MarkupLMTokenizerFast [[autodoc]] MarkupLMTokenizerFast - all MarkupLMProcessor [[autodoc]] MarkupLMProcessor - call MarkupLMModel [[autodoc]] MarkupLMModel - forward MarkupLMForSequenceClassification [[autodoc]] MarkupLMForSequenceClassification - forward MarkupLMForTokenClassification [[autodoc]] MarkupLMForTokenClassification - forward MarkupLMForQuestionAnswering [[autodoc]] MarkupLMForQuestionAnswering - forward
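To complement the processor-centric examples above, here is a sketch of extractive question answering with [MarkupLMForQuestionAnswering]. The microsoft/markuplm-base-finetuned-websrc checkpoint and the toy HTML string are assumptions for illustration:

```python
import torch
from transformers import AutoProcessor, MarkupLMForQuestionAnswering

# checkpoint and HTML snippet are illustrative assumptions
processor = AutoProcessor.from_pretrained("microsoft/markuplm-base-finetuned-websrc")
model = MarkupLMForQuestionAnswering.from_pretrained("microsoft/markuplm-base-finetuned-websrc")

html_string = "<html><head><title>My page</title></head><body><p>My name is Niels.</p></body></html>"
question = "What's his name?"

encoding = processor(html_string, questions=question, return_tensors="pt")
with torch.no_grad():
    outputs = model(**encoding)

# take the most likely start/end positions and decode the answer span
start_index = outputs.start_logits.argmax(-1).item()
end_index = outputs.end_logits.argmax(-1).item()
answer_tokens = encoding.input_ids[0, start_index : end_index + 1]
print(processor.decode(answer_tokens).strip())
```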
BEiT Overview The BEiT model was proposed in BEiT: BERT Pre-Training of Image Transformers by Hangbo Bao, Li Dong and Furu Wei. Inspired by BERT, BEiT is the first paper that makes self-supervised pre-training of Vision Transformers (ViTs) outperform supervised pre-training. Rather than pre-training the model to predict the class of an image (as done in the original ViT paper), BEiT models are pre-trained to predict visual tokens from the codebook of OpenAI's DALL-E model given masked patches. The abstract from the paper is the following: We introduce a self-supervised vision representation model BEiT, which stands for Bidirectional Encoder representation from Image Transformers. Following BERT developed in the natural language processing area, we propose a masked image modeling task to pretrain vision Transformers. Specifically, each image has two views in our pre-training, i.e, image patches (such as 16x16 pixels), and visual tokens (i.e., discrete tokens). We first "tokenize" the original image into visual tokens. Then we randomly mask some image patches and fed them into the backbone Transformer. The pre-training objective is to recover the original visual tokens based on the corrupted image patches. After pre-training BEiT, we directly fine-tune the model parameters on downstream tasks by appending task layers upon the pretrained encoder. Experimental results on image classification and semantic segmentation show that our model achieves competitive results with previous pre-training methods. For example, base-size BEiT achieves 83.2% top-1 accuracy on ImageNet-1K, significantly outperforming from-scratch DeiT training (81.8%) with the same setup. Moreover, large-size BEiT obtains 86.3% only using ImageNet-1K, even outperforming ViT-L with supervised pre-training on ImageNet-22K (85.2%). This model was contributed by nielsr. The JAX/FLAX version of this model was contributed by kamalkraj. The original code can be found here. Usage tips BEiT models are regular Vision Transformers, but pre-trained in a self-supervised way rather than supervised. They outperform both the original model (ViT) as well as Data-efficient Image Transformers (DeiT) when fine-tuned on ImageNet-1K and CIFAR-100. You can check out demo notebooks regarding inference as well as fine-tuning on custom data here (you can just replace [ViTFeatureExtractor] by [BeitImageProcessor] and [ViTForImageClassification] by [BeitForImageClassification]). There's also a demo notebook available which showcases how to combine DALL-E's image tokenizer with BEiT for performing masked image modeling. You can find it here. As the BEiT models expect each image to be of the same size (resolution), one can use [BeitImageProcessor] to resize (or rescale) and normalize images for the model. Both the patch resolution and image resolution used during pre-training or fine-tuning are reflected in the name of each checkpoint. For example, microsoft/beit-base-patch16-224 refers to a base-sized architecture with patch resolution of 16x16 and fine-tuning resolution of 224x224. All checkpoints can be found on the hub. The available checkpoints are either (1) pre-trained on ImageNet-22k (a collection of 14 million images and 22k classes) only, (2) also fine-tuned on ImageNet-22k or (3) also fine-tuned on ImageNet-1k (also referred to as ILSVRC 2012, a collection of 1.3 million images and 1,000 classes). BEiT uses relative position embeddings, inspired by the T5 model. 
During pre-training, the authors shared the relative position bias among the several self-attention layers. During fine-tuning, each layer's relative position bias is initialized with the shared relative position bias obtained after pre-training. Note that, if one wants to pre-train a model from scratch, one needs to set either the use_relative_position_bias or the use_shared_relative_position_bias attribute of [BeitConfig] to True in order to add position embeddings. BEiT pre-training. Taken from the original paper. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BEiT. [BeitForImageClassification] is supported by this example script and notebook. See also: Image classification task guide Semantic segmentation - Semantic segmentation task guide If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. BEiT specific outputs [[autodoc]] models.beit.modeling_beit.BeitModelOutputWithPooling [[autodoc]] models.beit.modeling_flax_beit.FlaxBeitModelOutputWithPooling BeitConfig [[autodoc]] BeitConfig BeitFeatureExtractor [[autodoc]] BeitFeatureExtractor - call - post_process_semantic_segmentation BeitImageProcessor [[autodoc]] BeitImageProcessor - preprocess - post_process_semantic_segmentation BeitModel [[autodoc]] BeitModel - forward BeitForMaskedImageModeling [[autodoc]] BeitForMaskedImageModeling - forward BeitForImageClassification [[autodoc]] BeitForImageClassification - forward BeitForSemanticSegmentation [[autodoc]] BeitForSemanticSegmentation - forward FlaxBeitModel [[autodoc]] FlaxBeitModel - call FlaxBeitForMaskedImageModeling [[autodoc]] FlaxBeitForMaskedImageModeling - call FlaxBeitForImageClassification [[autodoc]] FlaxBeitForImageClassification - call
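As a quick reference for the usage tips above, the sketch below classifies an image with the microsoft/beit-base-patch16-224 checkpoint mentioned there; the sample image is an assumption:

```python
import requests
import torch
from PIL import Image
from transformers import BeitForImageClassification, BeitImageProcessor

# image URL is an illustrative assumption; the checkpoint is the one named in the tips above
image_processor = BeitImageProcessor.from_pretrained("microsoft/beit-base-patch16-224")
model = BeitForImageClassification.from_pretrained("microsoft/beit-base-patch16-224")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# the checkpoint was fine-tuned for classification, so argmax over the logits gives the label
print(model.config.id2label[logits.argmax(-1).item()])
```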
RoCBert Overview The RoCBert model was proposed in RoCBert: Robust Chinese Bert with Multimodal Contrastive Pretraining by HuiSu, WeiweiShi, XiaoyuShen, XiaoZhou, TuoJi, JiaruiFang, JieZhou. It's a pretrained Chinese language model that is robust under various forms of adversarial attacks. The abstract from the paper is the following: Large-scale pretrained language models have achieved SOTA results on NLP tasks. However, they have been shown vulnerable to adversarial attacks especially for logographic languages like Chinese. In this work, we propose ROCBERT: a pretrained Chinese Bert that is robust to various forms of adversarial attacks like word perturbation, synonyms, typos, etc. It is pretrained with the contrastive learning objective which maximizes the label consistency under different synthesized adversarial examples. The model takes as input multimodal information including the semantic, phonetic and visual features. We show all these features are important to the model robustness since the attack can be performed in all the three forms. Across 5 Chinese NLU tasks, ROCBERT outperforms strong baselines under three blackbox adversarial algorithms without sacrificing the performance on clean testset. It also performs the best in the toxic content detection task under human-made attacks. This model was contributed by weiweishi. Resources Text classification task guide Token classification task guide Question answering task guide Causal language modeling task guide Masked language modeling task guide Multiple choice task guide RoCBertConfig [[autodoc]] RoCBertConfig - all RoCBertTokenizer [[autodoc]] RoCBertTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary RoCBertModel [[autodoc]] RoCBertModel - forward RoCBertForPreTraining [[autodoc]] RoCBertForPreTraining - forward RoCBertForCausalLM [[autodoc]] RoCBertForCausalLM - forward RoCBertForMaskedLM [[autodoc]] RoCBertForMaskedLM - forward RoCBertForSequenceClassification [[autodoc]] transformers.RoCBertForSequenceClassification - forward RoCBertForMultipleChoice [[autodoc]] transformers.RoCBertForMultipleChoice - forward RoCBertForTokenClassification [[autodoc]] transformers.RoCBertForTokenClassification - forward RoCBertForQuestionAnswering [[autodoc]] RoCBertForQuestionAnswering - forward
SwiftFormer Overview The SwiftFormer model was proposed in SwiftFormer: Efficient Additive Attention for Transformer-based Real-time Mobile Vision Applications by Abdelrahman Shaker, Muhammad Maaz, Hanoona Rasheed, Salman Khan, Ming-Hsuan Yang, Fahad Shahbaz Khan. The SwiftFormer paper introduces a novel efficient additive attention mechanism that effectively replaces the quadratic matrix multiplication operations in the self-attention computation with linear element-wise multiplications. A series of models called 'SwiftFormer' is built based on this, which achieves state-of-the-art performance in terms of both accuracy and mobile inference speed. Even their small variant achieves 78.5% top-1 ImageNet1K accuracy with only 0.8 ms latency on iPhone 14, which is more accurate and 2× faster compared to MobileViT-v2. The abstract from the paper is the following: Self-attention has become a defacto choice for capturing global context in various vision applications. However, its quadratic computational complexity with respect to image resolution limits its use in real-time applications, especially for deployment on resource-constrained mobile devices. Although hybrid approaches have been proposed to combine the advantages of convolutions and self-attention for a better speed-accuracy trade-off, the expensive matrix multiplication operations in self-attention remain a bottleneck. In this work, we introduce a novel efficient additive attention mechanism that effectively replaces the quadratic matrix multiplication operations with linear element-wise multiplications. Our design shows that the key-value interaction can be replaced with a linear layer without sacrificing any accuracy. Unlike previous state-of-the-art methods, our efficient formulation of self-attention enables its usage at all stages of the network. Using our proposed efficient additive attention, we build a series of models called "SwiftFormer" which achieves state-of-the-art performance in terms of both accuracy and mobile inference speed. Our small variant achieves 78.5% top-1 ImageNet-1K accuracy with only 0.8 ms latency on iPhone 14, which is more accurate and 2x faster compared to MobileViT-v2. This model was contributed by shehan97. The original code can be found here. SwiftFormerConfig [[autodoc]] SwiftFormerConfig SwiftFormerModel [[autodoc]] SwiftFormerModel - forward SwiftFormerForImageClassification [[autodoc]] SwiftFormerForImageClassification - forward
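A minimal classification sketch for the classes listed above. The MBZUAI/swiftformer-xs checkpoint name and the sample image are assumptions; substitute whichever SwiftFormer variant you want to evaluate:

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, SwiftFormerForImageClassification

# checkpoint and image URL are illustrative assumptions
processor = AutoImageProcessor.from_pretrained("MBZUAI/swiftformer-xs")
model = SwiftFormerForImageClassification.from_pretrained("MBZUAI/swiftformer-xs")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

print(model.config.id2label[logits.argmax(-1).item()])
```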
SeamlessM4T-v2 Overview The SeamlessM4T-v2 model was proposed in Seamless: Multilingual Expressive and Streaming Speech Translation by the Seamless Communication team from Meta AI. SeamlessM4T-v2 is a collection of models designed to provide high quality translation, allowing people from different linguistic communities to communicate effortlessly through speech and text. It is an improvement on the previous version. For more details on the differences between v1 and v2, refer to section Difference with SeamlessM4T-v1. SeamlessM4T-v2 enables multiple tasks without relying on separate models: Speech-to-speech translation (S2ST) Speech-to-text translation (S2TT) Text-to-speech translation (T2ST) Text-to-text translation (T2TT) Automatic speech recognition (ASR) [SeamlessM4Tv2Model] can perform all the above tasks, but each task also has its own dedicated sub-model. The abstract from the paper is the following: Recent advancements in automatic speech translation have dramatically expanded language coverage, improved multimodal capabilities, and enabled a wide range of tasks and functionalities. That said, large-scale automatic speech translation systems today lack key features that help machine-mediated communication feel seamless when compared to human-to-human dialogue. In this work, we introduce a family of models that enable end-to-end expressive and multilingual translations in a streaming fashion. First, we contribute an improved version of the massively multilingual and multimodal SeamlessM4T model—SeamlessM4T v2. This newer model, incorporating an updated UnitY2 framework, was trained on more low-resource language data. The expanded version of SeamlessAlign adds 114,800 hours of automatically aligned data for a total of 76 languages. SeamlessM4T v2 provides the foundation on which our two newest models, SeamlessExpressive and SeamlessStreaming, are initiated. SeamlessExpressive enables translation that preserves vocal styles and prosody. Compared to previous efforts in expressive speech research, our work addresses certain underexplored aspects of prosody, such as speech rate and pauses, while also preserving the style of one’s voice. As for SeamlessStreaming, our model leverages the Efficient Monotonic Multihead Attention (EMMA) mechanism to generate low-latency target translations without waiting for complete source utterances. As the first of its kind, SeamlessStreaming enables simultaneous speech-to-speech/text translation for multiple source and target languages. To understand the performance of these models, we combined novel and modified versions of existing automatic metrics to evaluate prosody, latency, and robustness. For human evaluations, we adapted existing protocols tailored for measuring the most relevant attributes in the preservation of meaning, naturalness, and expressivity. To ensure that our models can be used safely and responsibly, we implemented the first known red-teaming effort for multimodal machine translation, a system for the detection and mitigation of added toxicity, a systematic evaluation of gender bias, and an inaudible localized watermarking mechanism designed to dampen the impact of deepfakes. Consequently, we bring major components from SeamlessExpressive and SeamlessStreaming together to form Seamless, the first publicly available system that unlocks expressive cross-lingual communication in real-time. 
In sum, Seamless gives us a pivotal look at the technical foundation needed to turn the Universal Speech Translator from a science fiction concept into a real-world technology. Finally, contributions in this work—including models, code, and a watermark detector—are publicly released and accessible at the link below. Usage In the following example, we'll load an Arabic audio sample and an English text sample and convert them into Russian speech and French text. First, load the processor and a checkpoint of the model: thon from transformers import AutoProcessor, SeamlessM4Tv2Model processor = AutoProcessor.from_pretrained("facebook/seamless-m4t-v2-large") model = SeamlessM4Tv2Model.from_pretrained("facebook/seamless-m4t-v2-large") You can seamlessly use this model on text or on audio, to generated either translated text or translated audio. Here is how to use the processor to process text and audio: thon let's load an audio sample from an Arabic speech corpus from datasets import load_dataset dataset = load_dataset("arabic_speech_corpus", split="test", streaming=True) audio_sample = next(iter(dataset))["audio"] now, process it audio_inputs = processor(audios=audio_sample["array"], return_tensors="pt") now, process some English text as well text_inputs = processor(text = "Hello, my dog is cute", src_lang="eng", return_tensors="pt") Speech [SeamlessM4Tv2Model] can seamlessly generate text or speech with few or no changes. Let's target Russian voice translation: thon audio_array_from_text = model.generate(text_inputs, tgt_lang="rus")[0].cpu().numpy().squeeze() audio_array_from_audio = model.generate(audio_inputs, tgt_lang="rus")[0].cpu().numpy().squeeze() With basically the same code, I've translated English text and Arabic speech to Russian speech samples. Text Similarly, you can generate translated text from audio files or from text with the same model. You only have to pass generate_speech=False to [SeamlessM4Tv2Model.generate]. This time, let's translate to French. thon from audio output_tokens = model.generate(**audio_inputs, tgt_lang="fra", generate_speech=False) translated_text_from_audio = processor.decode(output_tokens[0].tolist()[0], skip_special_tokens=True) from text output_tokens = model.generate(**text_inputs, tgt_lang="fra", generate_speech=False) translated_text_from_text = processor.decode(output_tokens[0].tolist()[0], skip_special_tokens=True) Tips 1. Use dedicated models [SeamlessM4Tv2Model] is transformers top level model to generate speech and text, but you can also use dedicated models that perform the task without additional components, thus reducing the memory footprint. For example, you can replace the audio-to-audio generation snippet with the model dedicated to the S2ST task, the rest is exactly the same code: thon from transformers import SeamlessM4Tv2ForSpeechToSpeech model = SeamlessM4Tv2ForSpeechToSpeech.from_pretrained("facebook/seamless-m4t-v2-large") Or you can replace the text-to-text generation snippet with the model dedicated to the T2TT task, you only have to remove generate_speech=False. thon from transformers import SeamlessM4Tv2ForTextToText model = SeamlessM4Tv2ForTextToText.from_pretrained("facebook/seamless-m4t-v2-large") Feel free to try out [SeamlessM4Tv2ForSpeechToText] and [SeamlessM4Tv2ForTextToSpeech] as well. 2. Change the speaker identity You have the possibility to change the speaker used for speech synthesis with the speaker_id argument. Some speaker_id works better than other for some languages! 3. 
Change the generation strategy You can use different generation strategies for text generation, e.g .generate(input_ids=input_ids, text_num_beams=4, text_do_sample=True) which will perform multinomial beam-search decoding on the text model. Note that speech generation only supports greedy - by default - or multinomial sampling, which can be used with e.g. .generate(, speech_do_sample=True, speech_temperature=0.6). 4. Generate speech and text at the same time Use return_intermediate_token_ids=True with [SeamlessM4Tv2Model] to return both speech and text ! Model architecture SeamlessM4T-v2 features a versatile architecture that smoothly handles the sequential generation of text and speech. This setup comprises two sequence-to-sequence (seq2seq) models. The first model translates the input modality into translated text, while the second model generates speech tokens, known as "unit tokens," from the translated text. Each modality has its own dedicated encoder with a unique architecture. Additionally, for speech output, a vocoder inspired by the HiFi-GAN architecture is placed on top of the second seq2seq model. Difference with SeamlessM4T-v1 The architecture of this new version differs from the first in a few aspects: Improvements on the second-pass model The second seq2seq model, named text-to-unit model, is now non-auto regressive, meaning that it computes units in a single forward pass. This achievement is made possible by: - the use of character-level embeddings, meaning that each character of the predicted translated text has its own embeddings, which are then used to predict the unit tokens. - the use of an intermediate duration predictor, that predicts speech duration at the character-level on the predicted translated text. - the use of a new text-to-unit decoder mixing convolutions and self-attention to handle longer context. Difference in the speech encoder The speech encoder, which is used during the first-pass generation process to predict the translated text, differs mainly from the previous speech encoder through these mechanisms: - the use of chunked attention mask to prevent attention across chunks, ensuring that each position attends only to positions within its own chunk and a fixed number of previous chunks. - the use of relative position embeddings which only considers distance between sequence elements rather than absolute positions. Please refer to Self-Attentionwith Relative Position Representations (Shaw et al.) for more details. - the use of a causal depth-wise convolution instead of a non-causal one. Generation process Here's how the generation process works: Input text or speech is processed through its specific encoder. A decoder creates text tokens in the desired language. If speech generation is required, the second seq2seq model, generates unit tokens in an non auto-regressive way. These unit tokens are then passed through the final vocoder to produce the actual speech. This model was contributed by ylacombe. The original code can be found here. SeamlessM4Tv2Model [[autodoc]] SeamlessM4Tv2Model - generate SeamlessM4Tv2ForTextToSpeech [[autodoc]] SeamlessM4Tv2ForTextToSpeech - generate SeamlessM4Tv2ForSpeechToSpeech [[autodoc]] SeamlessM4Tv2ForSpeechToSpeech - generate SeamlessM4Tv2ForTextToText [[autodoc]] transformers.SeamlessM4Tv2ForTextToText - forward - generate SeamlessM4Tv2ForSpeechToText [[autodoc]] transformers.SeamlessM4Tv2ForSpeechToText - forward - generate SeamlessM4Tv2Config [[autodoc]] SeamlessM4Tv2Config
ViTMSN Overview The ViTMSN model was proposed in Masked Siamese Networks for Label-Efficient Learning by Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Michael Rabbat, Nicolas Ballas. The paper presents a joint-embedding architecture to match the prototypes of masked patches with that of the unmasked patches. With this setup, their method yields excellent performance in the low-shot and extreme low-shot regimes. The abstract from the paper is the following: We propose Masked Siamese Networks (MSN), a self-supervised learning framework for learning image representations. Our approach matches the representation of an image view containing randomly masked patches to the representation of the original unmasked image. This self-supervised pre-training strategy is particularly scalable when applied to Vision Transformers since only the unmasked patches are processed by the network. As a result, MSNs improve the scalability of joint-embedding architectures, while producing representations of a high semantic level that perform competitively on low-shot image classification. For instance, on ImageNet-1K, with only 5,000 annotated images, our base MSN model achieves 72.4% top-1 accuracy, and with 1% of ImageNet-1K labels, we achieve 75.7% top-1 accuracy, setting a new state-of-the-art for self-supervised learning on this benchmark. MSN architecture. Taken from the original paper. This model was contributed by sayakpaul. The original code can be found here. Usage tips MSN (masked siamese networks) is a method for self-supervised pre-training of Vision Transformers (ViTs). The pre-training objective is to match the prototypes assigned to the unmasked views of the images to that of the masked views of the same images. The authors have only released pre-trained weights of the backbone (ImageNet-1k pre-training). So, to use that on your own image classification dataset, use the [ViTMSNForImageClassification] class which is initialized from [ViTMSNModel]. Follow this notebook for a detailed tutorial on fine-tuning. MSN is particularly useful in the low-shot and extreme low-shot regimes. Notably, it achieves 75.7% top-1 accuracy with only 1% of ImageNet-1K labels when fine-tuned. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ViT MSN. [ViTMSNForImageClassification] is supported by this example script and notebook. See also: Image classification task guide If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. ViTMSNConfig [[autodoc]] ViTMSNConfig ViTMSNModel [[autodoc]] ViTMSNModel - forward ViTMSNForImageClassification [[autodoc]] ViTMSNForImageClassification - forward
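Since the released weights cover the backbone only, a common first step is to extract features with [ViTMSNModel] before attaching a classification head. A minimal sketch, assuming the facebook/vit-msn-small checkpoint and a COCO sample image:

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, ViTMSNModel

# checkpoint and image URL are illustrative assumptions
image_processor = AutoImageProcessor.from_pretrained("facebook/vit-msn-small")
model = ViTMSNModel.from_pretrained("facebook/vit-msn-small")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# (batch_size, num_patches + 1, hidden_size); the first token is a [CLS]-style image representation
print(outputs.last_hidden_state.shape)
```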
OneFormer Overview The OneFormer model was proposed in OneFormer: One Transformer to Rule Universal Image Segmentation by Jitesh Jain, Jiachen Li, MangTik Chiu, Ali Hassani, Nikita Orlov, Humphrey Shi. OneFormer is a universal image segmentation framework that can be trained on a single panoptic dataset to perform semantic, instance, and panoptic segmentation tasks. OneFormer uses a task token to condition the model on the task in focus, making the architecture task-guided for training, and task-dynamic for inference. The abstract from the paper is the following: Universal Image Segmentation is not a new concept. Past attempts to unify image segmentation in the last decades include scene parsing, panoptic segmentation, and, more recently, new panoptic architectures. However, such panoptic architectures do not truly unify image segmentation because they need to be trained individually on the semantic, instance, or panoptic segmentation to achieve the best performance. Ideally, a truly universal framework should be trained only once and achieve SOTA performance across all three image segmentation tasks. To that end, we propose OneFormer, a universal image segmentation framework that unifies segmentation with a multi-task train-once design. We first propose a task-conditioned joint training strategy that enables training on ground truths of each domain (semantic, instance, and panoptic segmentation) within a single multi-task training process. Secondly, we introduce a task token to condition our model on the task at hand, making our model task-dynamic to support multi-task training and inference. Thirdly, we propose using a query-text contrastive loss during training to establish better inter-task and inter-class distinctions. Notably, our single OneFormer model outperforms specialized Mask2Former models across all three segmentation tasks on ADE20k, CityScapes, and COCO, despite the latter being trained on each of the three tasks individually with three times the resources. With new ConvNeXt and DiNAT backbones, we observe even more performance improvement. We believe OneFormer is a significant step towards making image segmentation more universal and accessible. The figure below illustrates the architecture of OneFormer. Taken from the original paper. This model was contributed by Jitesh Jain. The original code can be found here. Usage tips OneFormer requires two inputs during inference: image and task token. During training, OneFormer only uses panoptic annotations. If you want to train the model in a distributed environment across multiple nodes, then one should update the get_num_masks function inside in the OneFormerLoss class of modeling_oneformer.py. When training on multiple nodes, this should be set to the average number of target masks across all nodes, as can be seen in the original implementation here. One can use [OneFormerProcessor] to prepare input images and task inputs for the model and optional targets for the model. [OneformerProcessor] wraps [OneFormerImageProcessor] and [CLIPTokenizer] into a single instance to both prepare the images and encode the task inputs. To get the final segmentation, depending on the task, you can call [~OneFormerProcessor.post_process_semantic_segmentation] or [~OneFormerImageProcessor.post_process_instance_segmentation] or [~OneFormerImageProcessor.post_process_panoptic_segmentation]. 
All three tasks can be solved using [OneFormerForUniversalSegmentation] output, panoptic segmentation accepts an optional label_ids_to_fuse argument to fuse instances of the target object/s (e.g. sky) together. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with OneFormer. Demo notebooks regarding inference + fine-tuning on custom data can be found here. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it. The resource should ideally demonstrate something new instead of duplicating an existing resource. OneFormer specific outputs [[autodoc]] models.oneformer.modeling_oneformer.OneFormerModelOutput [[autodoc]] models.oneformer.modeling_oneformer.OneFormerForUniversalSegmentationOutput OneFormerConfig [[autodoc]] OneFormerConfig OneFormerImageProcessor [[autodoc]] OneFormerImageProcessor - preprocess - encode_inputs - post_process_semantic_segmentation - post_process_instance_segmentation - post_process_panoptic_segmentation OneFormerProcessor [[autodoc]] OneFormerProcessor OneFormerModel [[autodoc]] OneFormerModel - forward OneFormerForUniversalSegmentation [[autodoc]] OneFormerForUniversalSegmentation - forward
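To tie the usage tips above together, here is a semantic segmentation sketch that conditions the model with a task token and post-processes the output. The shi-labs/oneformer_ade20k_swin_tiny checkpoint and the sample image are assumptions:

```python
import requests
import torch
from PIL import Image
from transformers import OneFormerForUniversalSegmentation, OneFormerProcessor

# checkpoint and image URL are illustrative assumptions
processor = OneFormerProcessor.from_pretrained("shi-labs/oneformer_ade20k_swin_tiny")
model = OneFormerForUniversalSegmentation.from_pretrained("shi-labs/oneformer_ade20k_swin_tiny")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# the task token ("semantic", "instance" or "panoptic") makes the model task-dynamic at inference
inputs = processor(images=image, task_inputs=["semantic"], return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# post-process to a (height, width) map holding one class id per pixel
semantic_map = processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
print(semantic_map.shape)
```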
SEW Overview SEW (Squeezed and Efficient Wav2Vec) was proposed in Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. The abstract from the paper is the following: This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes. This model was contributed by anton-l. Usage tips SEW is a speech model that accepts a float array corresponding to the raw waveform of the speech signal. SEWForCTC is fine-tuned using connectionist temporal classification (CTC) so the model output has to be decoded using [Wav2Vec2CTCTokenizer]. Resources Audio classification task guide Automatic speech recognition task guide SEWConfig [[autodoc]] SEWConfig SEWModel [[autodoc]] SEWModel - forward SEWForCTC [[autodoc]] SEWForCTC - forward SEWForSequenceClassification [[autodoc]] SEWForSequenceClassification - forward
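For completeness, a CTC transcription sketch mirroring the Wav2Vec2 workflow described in the tips above. The asapp/sew-tiny-100k-ft-ls100h checkpoint name and the LibriSpeech demo split are assumptions; substitute any CTC fine-tuned SEW checkpoint:

```python
import torch
from datasets import load_dataset
from transformers import AutoProcessor, SEWForCTC

# checkpoint and dataset are illustrative assumptions
processor = AutoProcessor.from_pretrained("asapp/sew-tiny-100k-ft-ls100h")
model = SEWForCTC.from_pretrained("asapp/sew-tiny-100k-ft-ls100h")

dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
audio = dataset[0]["audio"]

# the model expects the raw 16kHz waveform as a float array
inputs = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# greedy CTC decoding with [Wav2Vec2CTCTokenizer]
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```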
AltCLIP Overview The AltCLIP model was proposed in AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities by Zhongzhi Chen, Guang Liu, Bo-Wen Zhang, Fulong Ye, Qinghong Yang, Ledell Wu. AltCLIP (Altering the Language Encoder in CLIP) is a neural network trained on a variety of image-text and text-text pairs. By switching CLIP's text encoder with a pretrained multilingual text encoder XLM-R, we could obtain very close performances with CLIP on almost all tasks, and extended original CLIP's capabilities such as multilingual understanding. The abstract from the paper is the following: In this work, we present a conceptually simple and effective method to train a strong bilingual multimodal representation model. Starting from the pretrained multimodal representation model CLIP released by OpenAI, we switched its text encoder with a pretrained multilingual text encoder XLM-R, and aligned both languages and image representations by a two-stage training schema consisting of teacher learning and contrastive learning. We validate our method through evaluations of a wide range of tasks. We set new state-of-the-art performances on a bunch of tasks including ImageNet-CN, Flicker30k- CN, and COCO-CN. Further, we obtain very close performances with CLIP on almost all tasks, suggesting that one can simply alter the text encoder in CLIP for extended capabilities such as multilingual understanding. This model was contributed by jongjyh. Usage tips and example The usage of AltCLIP is very similar to the CLIP. the difference between CLIP is the text encoder. Note that we use bidirectional attention instead of casual attention and we take the [CLS] token in XLM-R to represent text embedding. AltCLIP is a multi-modal vision and language model. It can be used for image-text similarity and for zero-shot image classification. AltCLIP uses a ViT like transformer to get visual features and a bidirectional language model to get the text features. Both the text and visual features are then projected to a latent space with identical dimension. The dot product between the projected image and text features is then used as a similar score. To feed images to the Transformer encoder, each image is split into a sequence of fixed-size non-overlapping patches, which are then linearly embedded. A [CLS] token is added to serve as representation of an entire image. The authors also add absolute position embeddings, and feed the resulting sequence of vectors to a standard Transformer encoder. The [CLIPImageProcessor] can be used to resize (or rescale) and normalize images for the model. The [AltCLIPProcessor] wraps a [CLIPImageProcessor] and a [XLMRobertaTokenizer] into a single instance to both encode the text and prepare the images. The following example shows how to get the image-text similarity scores using [AltCLIPProcessor] and [AltCLIPModel]. 
thon from PIL import Image import requests from transformers import AltCLIPModel, AltCLIPProcessor model = AltCLIPModel.from_pretrained("BAAI/AltCLIP") processor = AltCLIPProcessor.from_pretrained("BAAI/AltCLIP") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True) outputs = model(**inputs) logits_per_image = outputs.logits_per_image # this is the image-text similarity score probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities This model is based on CLIPModel, use it like you would use the original CLIP. AltCLIPConfig [[autodoc]] AltCLIPConfig - from_text_vision_configs AltCLIPTextConfig [[autodoc]] AltCLIPTextConfig AltCLIPVisionConfig [[autodoc]] AltCLIPVisionConfig AltCLIPProcessor [[autodoc]] AltCLIPProcessor AltCLIPModel [[autodoc]] AltCLIPModel - forward - get_text_features - get_image_features AltCLIPTextModel [[autodoc]] AltCLIPTextModel - forward AltCLIPVisionModel [[autodoc]] AltCLIPVisionModel - forward
Encoder Decoder Models Overview The [EncoderDecoderModel] can be used to initialize a sequence-to-sequence model with any pretrained autoencoding model as the encoder and any pretrained autoregressive model as the decoder. The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation tasks was shown in Leveraging Pre-trained Checkpoints for Sequence Generation Tasks by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. After such an [EncoderDecoderModel] has been trained/fine-tuned, it can be saved/loaded just like any other models (see the examples for more information). An application of this architecture could be to leverage two pretrained [BertModel] as the encoder and decoder for a summarization model as was shown in: Text Summarization with Pretrained Encoders by Yang Liu and Mirella Lapata. Randomly initializing EncoderDecoderModel from model configurations. [EncoderDecoderModel] can be randomly initialized from an encoder and a decoder config. In the following example, we show how to do this using the default [BertModel] configuration for the encoder and the default [BertForCausalLM] configuration for the decoder. thon from transformers import BertConfig, EncoderDecoderConfig, EncoderDecoderModel config_encoder = BertConfig() config_decoder = BertConfig() config = EncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder) model = EncoderDecoderModel(config=config) Initialising EncoderDecoderModel from a pretrained encoder and a pretrained decoder. [EncoderDecoderModel] can be initialized from a pretrained encoder checkpoint and a pretrained decoder checkpoint. Note that any pretrained auto-encoding model, e.g. BERT, can serve as the encoder and both pretrained auto-encoding models, e.g. BERT, pretrained causal language models, e.g. GPT2, as well as the pretrained decoder part of sequence-to-sequence models, e.g. decoder of BART, can be used as the decoder. Depending on which architecture you choose as the decoder, the cross-attention layers might be randomly initialized. Initializing [EncoderDecoderModel] from a pretrained encoder and decoder checkpoint requires the model to be fine-tuned on a downstream task, as has been shown in the Warm-starting-encoder-decoder blog post. To do so, the EncoderDecoderModel class provides a [EncoderDecoderModel.from_encoder_decoder_pretrained] method. thon from transformers import EncoderDecoderModel, BertTokenizer tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-uncased") model = EncoderDecoderModel.from_encoder_decoder_pretrained("google-bert/bert-base-uncased", "google-bert/bert-base-uncased") Loading an existing EncoderDecoderModel checkpoint and perform inference. To load fine-tuned checkpoints of the EncoderDecoderModel class, [EncoderDecoderModel] provides the from_pretrained() method just like any other model architecture in Transformers. To perform inference, one uses the [generate] method, which allows to autoregressively generate text. This method supports various forms of decoding, such as greedy, beam search and multinomial sampling. 
thon from transformers import AutoTokenizer, EncoderDecoderModel load a fine-tuned seq2seq model and corresponding tokenizer model = EncoderDecoderModel.from_pretrained("patrickvonplaten/bert2bert_cnn_daily_mail") tokenizer = AutoTokenizer.from_pretrained("patrickvonplaten/bert2bert_cnn_daily_mail") let's perform inference on a long piece of text ARTICLE_TO_SUMMARIZE = ( "PG&E stated it scheduled the blackouts in response to forecasts for high winds " "amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were " "scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow." ) input_ids = tokenizer(ARTICLE_TO_SUMMARIZE, return_tensors="pt").input_ids autoregressively generate summary (uses greedy decoding by default) generated_ids = model.generate(input_ids) generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] print(generated_text) nearly 800 thousand customers were affected by the shutoffs. the aim is to reduce the risk of wildfires. nearly 800, 000 customers were expected to be affected by high winds amid dry conditions. pg & e said it scheduled the blackouts to last through at least midday tomorrow. Loading a PyTorch checkpoint into TFEncoderDecoderModel. [TFEncoderDecoderModel.from_pretrained] currently doesn't support initializing the model from a pytorch checkpoint. Passing from_pt=True to this method will throw an exception. If there are only pytorch checkpoints for a particular encoder-decoder model, a workaround is: thon a workaround to load from pytorch checkpoint from transformers import EncoderDecoderModel, TFEncoderDecoderModel _model = EncoderDecoderModel.from_pretrained("patrickvonplaten/bert2bert-cnn_dailymail-fp16") _model.encoder.save_pretrained("./encoder") _model.decoder.save_pretrained("./decoder") model = TFEncoderDecoderModel.from_encoder_decoder_pretrained( "./encoder", "./decoder", encoder_from_pt=True, decoder_from_pt=True ) This is only for copying some specific attributes of this particular model. model.config = _model.config Training Once the model is created, it can be fine-tuned similar to BART, T5 or any other encoder-decoder model. As you can see, only 2 inputs are required for the model in order to compute a loss: input_ids (which are the input_ids of the encoded input sequence) and labels (which are the input_ids of the encoded target sequence). thon from transformers import BertTokenizer, EncoderDecoderModel tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-uncased") model = EncoderDecoderModel.from_encoder_decoder_pretrained("google-bert/bert-base-uncased", "google-bert/bert-base-uncased") model.config.decoder_start_token_id = tokenizer.cls_token_id model.config.pad_token_id = tokenizer.pad_token_id input_ids = tokenizer( "The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side.During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. 
Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft).Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct.", return_tensors="pt", ).input_ids labels = tokenizer( "the eiffel tower surpassed the washington monument to become the tallest structure in the world. it was the first structure to reach a height of 300 metres in paris in 1930. it is now taller than the chrysler building by 5. 2 metres ( 17 ft ) and is the second tallest free - standing structure in paris.", return_tensors="pt", ).input_ids the forward function automatically creates the correct decoder_input_ids loss = model(input_ids=input_ids, labels=labels).loss Detailed colab for training. This model was contributed by thomwolf. This model's TensorFlow and Flax versions were contributed by ydshieh. EncoderDecoderConfig [[autodoc]] EncoderDecoderConfig EncoderDecoderModel [[autodoc]] EncoderDecoderModel - forward - from_encoder_decoder_pretrained TFEncoderDecoderModel [[autodoc]] TFEncoderDecoderModel - call - from_encoder_decoder_pretrained FlaxEncoderDecoderModel [[autodoc]] FlaxEncoderDecoderModel - call - from_encoder_decoder_pretrained
SigLIP Overview The SigLIP model was proposed in Sigmoid Loss for Language Image Pre-Training by Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, Lucas Beyer. SigLIP proposes to replace the loss function used in CLIP by a simple pairwise sigmoid loss. This results in better performance in terms of zero-shot classification accuracy on ImageNet. The abstract from the paper is the following: We propose a simple pairwise Sigmoid loss for Language-Image Pre-training (SigLIP). Unlike standard contrastive learning with softmax normalization, the sigmoid loss operates solely on image-text pairs and does not require a global view of the pairwise similarities for normalization. The sigmoid loss simultaneously allows further scaling up the batch size, while also performing better at smaller batch sizes. Combined with Locked-image Tuning, with only four TPUv4 chips, we train a SigLiT model that achieves 84.5% ImageNet zero-shot accuracy in two days. The disentanglement of the batch size from the loss further allows us to study the impact of examples vs pairs and negative to positive ratio. Finally, we push the batch size to the extreme, up to one million, and find that the benefits of growing batch size quickly diminish, with a more reasonable batch size of 32k being sufficient. Usage tips Usage of SigLIP is similar to CLIP. The main difference is the training loss, which does not require a global view of all the pairwise similarities of images and texts within a batch. One needs to apply the sigmoid activation function to the logits, rather than the softmax. Training is not yet supported. If you want to fine-tune SigLIP or train from scratch, refer to the loss function from OpenCLIP, which leverages various torch.distributed utilities. When using the standalone [SiglipTokenizer] or [SiglipProcessor], make sure to pass padding="max_length" as that's how the model was trained. SigLIP evaluation results compared to CLIP. Taken from the original paper. This model was contributed by nielsr. The original code can be found here. Usage example There are 2 main ways to use SigLIP: either using the pipeline API, which abstracts away all the complexity for you, or by using the SiglipModel class yourself. 
Pipeline API The pipeline allows to use the model in a few lines of code: thon from transformers import pipeline from PIL import Image import requests load pipe image_classifier = pipeline(task="zero-shot-image-classification", model="google/siglip-base-patch16-224") load image url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) inference outputs = image_classifier(image, candidate_labels=["2 cats", "a plane", "a remote"]) outputs = [{"score": round(output["score"], 4), "label": output["label"] } for output in outputs] print(outputs) [{'score': 0.1979, 'label': '2 cats'}, {'score': 0.0, 'label': 'a remote'}, {'score': 0.0, 'label': 'a plane'}] Using the model yourself If you want to do the pre- and postprocessing yourself, here's how to do that: thon from PIL import Image import requests from transformers import AutoProcessor, AutoModel import torch model = AutoModel.from_pretrained("google/siglip-base-patch16-224") processor = AutoProcessor.from_pretrained("google/siglip-base-patch16-224") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) texts = ["a photo of 2 cats", "a photo of 2 dogs"] important: we pass padding=max_length since the model was trained with this inputs = processor(text=texts, images=image, padding="max_length", return_tensors="pt") with torch.no_grad(): outputs = model(**inputs) logits_per_image = outputs.logits_per_image probs = torch.sigmoid(logits_per_image) # these are the probabilities print(f"{probs[0][0]:.1%} that image 0 is '{texts[0]}'") 31.9% that image 0 is 'a photo of 2 cats' Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with SigLIP. Zero-shot image classification task guide Demo notebooks for SigLIP can be found here. 🌎 If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. SiglipConfig [[autodoc]] SiglipConfig - from_text_vision_configs SiglipTextConfig [[autodoc]] SiglipTextConfig SiglipVisionConfig [[autodoc]] SiglipVisionConfig SiglipTokenizer [[autodoc]] SiglipTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary SiglipImageProcessor [[autodoc]] SiglipImageProcessor - preprocess SiglipProcessor [[autodoc]] SiglipProcessor SiglipModel [[autodoc]] SiglipModel - forward - get_text_features - get_image_features SiglipTextModel [[autodoc]] SiglipTextModel - forward SiglipVisionModel [[autodoc]] SiglipVisionModel - forward SiglipForImageClassification [[autodoc]] SiglipForImageClassification - forward
PLBart Overview The PLBART model was proposed in Unified Pre-training for Program Understanding and Generation by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang. This is a BART-like model which can be used to perform code-summarization, code-generation, and code-translation tasks. The pre-trained model plbart-base has been trained using multilingual denoising task on Java, Python and English. According to the abstract Code summarization and generation empower conversion between programming language (PL) and natural language (NL), while code translation avails the migration of legacy code from one PL to another. This paper introduces PLBART, a sequence-to-sequence model capable of performing a broad spectrum of program and language understanding and generation tasks. PLBART is pre-trained on an extensive collection of Java and Python functions and associated NL text via denoising autoencoding. Experiments on code summarization in the English language, code generation, and code translation in seven programming languages show that PLBART outperforms or rivals state-of-the-art models. Moreover, experiments on discriminative tasks, e.g., program repair, clone detection, and vulnerable code detection, demonstrate PLBART's effectiveness in program understanding. Furthermore, analysis reveals that PLBART learns program syntax, style (e.g., identifier naming convention), logical flow (e.g., if block inside an else block is equivalent to else if block) that are crucial to program semantics and thus excels even with limited annotations. This model was contributed by gchhablani. The Authors' code can be found here. Usage examples PLBart is a multilingual encoder-decoder (sequence-to-sequence) model primarily intended for code-to-text, text-to-code, code-to-code tasks. As the model is multilingual it expects the sequences in a different format. A special language id token is added in both the source and target text. The source text format is X [eos, src_lang_code] where X is the source text. The target text format is [tgt_lang_code] X [eos]. bos is never used. However, for fine-tuning, in some cases no language token is provided in cases where a single language is used. Please refer to the paper to learn more about this. In cases where the language code is needed, the regular [~PLBartTokenizer.__call__] will encode source text format when you pass texts as the first argument or with the keyword argument text, and will encode target text format if it's passed with the text_target keyword argument. Supervised training thon from transformers import PLBartForConditionalGeneration, PLBartTokenizer tokenizer = PLBartTokenizer.from_pretrained("uclanlp/plbart-base", src_lang="en_XX", tgt_lang="python") example_python_phrase = "def maximum(a,b,c):NEW_LINE_INDENTreturn max([a,b,c])" expected_translation_english = "Returns the maximum value of a b c." inputs = tokenizer(example_python_phrase, text_target=expected_translation_english, return_tensors="pt") model(**inputs) Generation While generating the target text set the decoder_start_token_id to the target language id. The following example shows how to translate Python to English using the uclanlp/plbart-python-en_XX model. 
thon from transformers import PLBartForConditionalGeneration, PLBartTokenizer tokenizer = PLBartTokenizer.from_pretrained("uclanlp/plbart-python-en_XX", src_lang="python", tgt_lang="en_XX") example_python_phrase = "def maximum(a,b,c):NEW_LINE_INDENTreturn max([a,b,c])" inputs = tokenizer(example_python_phrase, return_tensors="pt") model = PLBartForConditionalGeneration.from_pretrained("uclanlp/plbart-python-en_XX") translated_tokens = model.generate(**inputs, decoder_start_token_id=tokenizer.lang_code_to_id["en_XX"]) tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0] "Returns the maximum value of a b c." Resources Text classification task guide Causal language modeling task guide Translation task guide Summarization task guide PLBartConfig [[autodoc]] PLBartConfig PLBartTokenizer [[autodoc]] PLBartTokenizer - build_inputs_with_special_tokens PLBartModel [[autodoc]] PLBartModel - forward PLBartForConditionalGeneration [[autodoc]] PLBartForConditionalGeneration - forward PLBartForSequenceClassification [[autodoc]] PLBartForSequenceClassification - forward PLBartForCausalLM [[autodoc]] PLBartForCausalLM - forward
T5 Overview The T5 model was presented in Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. The abstract from the paper is the following: Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pretraining objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code. All checkpoints can be found on the hub. This model was contributed by thomwolf. The original code can be found here. Usage tips T5 is an encoder-decoder model pre-trained on a multi-task mixture of unsupervised and supervised tasks and for which each task is converted into a text-to-text format. T5 works well on a variety of tasks out-of-the-box by prepending a different prefix to the input corresponding to each task, e.g., for translation: translate English to German: , for summarization: summarize: . The pretraining includes both supervised and self-supervised training. Supervised training is conducted on downstream tasks provided by the GLUE and SuperGLUE benchmarks (converting them into text-to-text tasks as explained above). Self-supervised training uses corrupted tokens, by randomly removing 15% of the tokens and replacing them with individual sentinel tokens (if several consecutive tokens are marked for removal, the whole group is replaced with a single sentinel token). The input of the encoder is the corrupted sentence, the input of the decoder is the original sentence and the target is then the dropped out tokens delimited by their sentinel tokens. T5 uses relative scalar embeddings. Encoder input padding can be done on the left and on the right. See the training, inference and resources sections below for all details regarding usage. T5 comes in different sizes: google-t5/t5-small google-t5/t5-base google-t5/t5-large google-t5/t5-3b google-t5/t5-11b. Based on the original T5 model, Google has released some follow-up works: T5v1.1: T5v1.1 is an improved version of T5 with some architectural tweaks, and is pre-trained on C4 only without mixing in the supervised tasks. Refer to the documentation of T5v1.1 which can be found here. mT5: mT5 is a multilingual T5 model. It is pre-trained on the mC4 corpus, which includes 101 languages. Refer to the documentation of mT5 which can be found here. byT5: byT5 is a T5 model pre-trained on byte sequences rather than SentencePiece subword token sequences. Refer to the documentation of byT5 which can be found here. UL2: UL2 is a T5 like model pretrained on various denoising objectives Flan-T5: Flan is a pretraining methods that is based on prompting. 
Flan-T5 models are T5 models trained on the Flan collection of datasets which include: taskmaster2, djaym7/wiki_dialog, deepmind/code_contests, lambada, gsm8k, aqua_rat, esnli, quasc and qed. Flan-UL2: the UL2 model finetuned using the "Flan" prompt tuning and dataset collection. UMT5: UMT5 is a multilingual T5 model trained on an improved and refreshed mC4 multilingual corpus, 29 trillion characters across 107 languages, using a new sampling method, UniMax. Refer to the documentation of mT5 which can be found here.

Training

T5 is an encoder-decoder model and converts all NLP problems into a text-to-text format. It is trained using teacher forcing. This means that for training, we always need an input sequence and a corresponding target sequence. The input sequence is fed to the model using input_ids. The target sequence is shifted to the right, i.e., prepended by a start-sequence token and fed to the decoder using the decoder_input_ids. In teacher-forcing style, the target sequence is then appended by the EOS token and corresponds to the labels. The PAD token is hereby used as the start-sequence token. T5 can be trained / fine-tuned both in a supervised and an unsupervised fashion. One can use [T5ForConditionalGeneration] (or the TensorFlow/Flax variant), which includes the language modeling head on top of the decoder.

Unsupervised denoising training

In this setup, spans of the input sequence are masked by so-called sentinel tokens (a.k.a. unique mask tokens) and the output sequence is formed as a concatenation of the same sentinel tokens and the real masked tokens. Each sentinel token represents a unique mask token for this sentence and should start with <extra_id_0>, <extra_id_1>, up to <extra_id_99>. By default, 100 sentinel tokens are available in [T5Tokenizer]. For instance, the sentence "The cute dog walks in the park" with the masks put on "cute dog" and "the" should be processed as follows:

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("google-t5/t5-small")
model = T5ForConditionalGeneration.from_pretrained("google-t5/t5-small")

input_ids = tokenizer("The <extra_id_0> walks in <extra_id_1> park", return_tensors="pt").input_ids
labels = tokenizer("<extra_id_0> cute dog <extra_id_1> the <extra_id_2>", return_tensors="pt").input_ids

# the forward function automatically creates the correct decoder_input_ids
loss = model(input_ids=input_ids, labels=labels).loss
loss.item()
# 3.7837
```

If you're interested in pre-training T5 on a new corpus, check out the run_t5_mlm_flax.py script in the Examples directory.

Supervised training

In this setup, the input sequence and output sequence are a standard sequence-to-sequence input-output mapping. Suppose that we want to fine-tune the model for translation, for example, and we have a training example: the input sequence "The house is wonderful."
and output sequence "Das Haus ist wunderbar.", then they should be prepared for the model as follows: thon from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("google-t5/t5-small") model = T5ForConditionalGeneration.from_pretrained("google-t5/t5-small") input_ids = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt").input_ids labels = tokenizer("Das Haus ist wunderbar.", return_tensors="pt").input_ids the forward function automatically creates the correct decoder_input_ids loss = model(input_ids=input_ids, labels=labels).loss loss.item() 0.2542 As you can see, only 2 inputs are required for the model in order to compute a loss: input_ids (which are the input_ids of the encoded input sequence) and labels (which are the input_ids of the encoded target sequence). The model will automatically create the decoder_input_ids based on the labels, by shifting them one position to the right and prepending the config.decoder_start_token_id, which for T5 is equal to 0 (i.e. the id of the pad token). Also note the task prefix: we prepend the input sequence with 'translate English to German: ' before encoding it. This will help in improving the performance, as this task prefix was used during T5's pre-training. However, the example above only shows a single training example. In practice, one trains deep learning models in batches. This entails that we must pad/truncate examples to the same length. For encoder-decoder models, one typically defines a max_source_length and max_target_length, which determine the maximum length of the input and output sequences respectively (otherwise they are truncated). These should be carefully set depending on the task. In addition, we must make sure that padding token id's of the labels are not taken into account by the loss function. In PyTorch and Tensorflow, this can be done by replacing them with -100, which is the ignore_index of the CrossEntropyLoss. In Flax, one can use the decoder_attention_mask to ignore padded tokens from the loss (see the Flax summarization script for details). We also pass attention_mask as additional input to the model, which makes sure that padding tokens of the inputs are ignored. The code example below illustrates all of this. 
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
import torch

tokenizer = T5Tokenizer.from_pretrained("google-t5/t5-small")
model = T5ForConditionalGeneration.from_pretrained("google-t5/t5-small")

# the following 2 hyperparameters are task-specific
max_source_length = 512
max_target_length = 128

# Suppose we have the following 2 training examples:
input_sequence_1 = "Welcome to NYC"
output_sequence_1 = "Bienvenue à NYC"
input_sequence_2 = "HuggingFace is a company"
output_sequence_2 = "HuggingFace est une entreprise"

# encode the inputs
task_prefix = "translate English to French: "
input_sequences = [input_sequence_1, input_sequence_2]

encoding = tokenizer(
    [task_prefix + sequence for sequence in input_sequences],
    padding="longest",
    max_length=max_source_length,
    truncation=True,
    return_tensors="pt",
)
input_ids, attention_mask = encoding.input_ids, encoding.attention_mask

# encode the targets
target_encoding = tokenizer(
    [output_sequence_1, output_sequence_2],
    padding="longest",
    max_length=max_target_length,
    truncation=True,
    return_tensors="pt",
)
labels = target_encoding.input_ids

# replace padding token ids of the labels by -100 so they are ignored by the loss
labels[labels == tokenizer.pad_token_id] = -100

# forward pass
loss = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels).loss
loss.item()
# 0.188
```

Additional training tips: T5 models need a slightly higher learning rate than the default one set in the Trainer when using the AdamW optimizer. Typically, 1e-4 and 3e-4 work well for most problems (classification, summarization, translation, question answering, question generation). Note that T5 was pre-trained using the AdaFactor optimizer. According to this forum post, task prefixes matter when (1) doing multi-task training (2) your task is similar or related to one of the supervised tasks used in T5's pre-training mixture (see Appendix D of the paper for the task prefixes used). If training on TPU, it is recommended to pad all examples of the dataset to the same length or to make use of pad_to_multiple_of to have a small number of predefined bucket sizes to fit all examples in. Dynamically padding batches to the longest example is not recommended on TPU, as it triggers a recompilation for every batch shape that is encountered during training, thus significantly slowing down training.

Inference

At inference time, it is recommended to use [~generation.GenerationMixin.generate]. This method takes care of encoding the input and feeding the encoded hidden states via cross-attention layers to the decoder and auto-regressively generates the decoder output. Check out this blog post to know all the details about generating text with Transformers. There's also this blog post which explains how generation works in general in encoder-decoder models.

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("google-t5/t5-small")
model = T5ForConditionalGeneration.from_pretrained("google-t5/t5-small")

input_ids = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt").input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# Das Haus ist wunderbar.
```

Note that T5 uses the pad_token_id as the decoder_start_token_id, so when doing generation without using [~generation.GenerationMixin.generate], make sure you start it with the pad_token_id.
The example above only shows a single example. You can also do batched inference, like so: thon from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("google-t5/t5-small") model = T5ForConditionalGeneration.from_pretrained("google-t5/t5-small") task_prefix = "translate English to German: " use different length sentences to test batching sentences = ["The house is wonderful.", "I like to work in NYC."] inputs = tokenizer([task_prefix + sentence for sentence in sentences], return_tensors="pt", padding=True) output_sequences = model.generate( input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"], do_sample=False, # disable sampling to test if batching affects output ) print(tokenizer.batch_decode(output_sequences, skip_special_tokens=True)) ['Das Haus ist wunderbar.', 'Ich arbeite gerne in NYC.'] Because T5 has been trained with the span-mask denoising objective, it can be used to predict the sentinel (masked-out) tokens during inference. The predicted tokens will then be placed between the sentinel tokens. thon from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("google-t5/t5-small") model = T5ForConditionalGeneration.from_pretrained("google-t5/t5-small") input_ids = tokenizer("The walks in park", return_tensors="pt").input_ids sequence_ids = model.generate(input_ids) sequences = tokenizer.batch_decode(sequence_ids) sequences [' park offers the park.'] Performance If you'd like a faster training and inference performance, install NVIDIA APEX for NVIDIA GPUs, or ROCm APEX for AMD GPUs and then the model will automatically use apex.normalization.FusedRMSNorm instead of T5LayerNorm. The former uses an optimized fused kernel which is several times faster than the latter. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with T5. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. A notebook for how to finetune T5 for classification and multiple choice. A notebook for how to finetune T5 for sentiment span extraction. 🌎 A notebook for how to finetune T5 for named entity recognition. 🌎 A notebook for Finetuning CodeT5 for generating docstrings from Ruby code. A notebook to Finetune T5-base-dutch to perform Dutch abstractive summarization on a TPU. A notebook for how to finetune T5 for summarization in PyTorch and track experiments with WandB. 🌎 A blog post on Distributed Training: Train BART/T5 for Summarization using 🤗 Transformers and Amazon SageMaker. [T5ForConditionalGeneration] is supported by this example script and notebook. [TFT5ForConditionalGeneration] is supported by this example script and notebook. [FlaxT5ForConditionalGeneration] is supported by this example script. Summarization chapter of the 🤗 Hugging Face course. Summarization task guide [FlaxT5ForConditionalGeneration] is supported by this example script for training T5 with a span-masked language model objective. The script also shows how to train a T5 tokenizer. [FlaxT5ForConditionalGeneration] is also supported by this notebook. [T5ForConditionalGeneration] is supported by this example script and notebook. [TFT5ForConditionalGeneration] is supported by this example script and notebook. 
Translation task guide A notebook on how to finetune T5 for question answering with TensorFlow 2. 🌎 A notebook on how to finetune T5 for question answering on a TPU. 🚀 Deploy - A blog post on how to deploy T5 11B for inference for less than $500. T5Config [[autodoc]] T5Config T5Tokenizer [[autodoc]] T5Tokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary T5TokenizerFast [[autodoc]] T5TokenizerFast T5Model [[autodoc]] T5Model - forward T5ForConditionalGeneration [[autodoc]] T5ForConditionalGeneration - forward T5EncoderModel [[autodoc]] T5EncoderModel - forward T5ForSequenceClassification [[autodoc]] T5ForSequenceClassification - forward T5ForTokenClassification [[autodoc]] T5ForTokenClassification - forward T5ForQuestionAnswering [[autodoc]] T5ForQuestionAnswering - forward TFT5Model [[autodoc]] TFT5Model - call TFT5ForConditionalGeneration [[autodoc]] TFT5ForConditionalGeneration - call TFT5EncoderModel [[autodoc]] TFT5EncoderModel - call FlaxT5Model [[autodoc]] FlaxT5Model - call - encode - decode FlaxT5ForConditionalGeneration [[autodoc]] FlaxT5ForConditionalGeneration - call - encode - decode FlaxT5EncoderModel [[autodoc]] FlaxT5EncoderModel - call
OpenAI GPT2 Overview OpenAI GPT-2 model was proposed in Language Models are Unsupervised Multitask Learners by Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei and Ilya Sutskever from OpenAI. It's a causal (unidirectional) transformer pretrained using language modeling on a very large corpus of ~40 GB of text data. The abstract from the paper is the following: GPT-2 is a large transformer-based language model with 1.5 billion parameters, trained on a dataset[1] of 8 million web pages. GPT-2 is trained with a simple objective: predict the next word, given all of the previous words within some text. The diversity of the dataset causes this simple goal to contain naturally occurring demonstrations of many tasks across diverse domains. GPT-2 is a direct scale-up of GPT, with more than 10X the parameters and trained on more than 10X the amount of data. Write With Transformer is a webapp created and hosted by Hugging Face showcasing the generative capabilities of several models. GPT-2 is one of them and is available in five different sizes: small, medium, large, xl and a distilled version of the small checkpoint: distilgpt-2. This model was contributed by thomwolf. The original code can be found here. Usage tips GPT-2 is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than the left. GPT-2 was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next token in a sequence. Leveraging this feature allows GPT-2 to generate syntactically coherent text as it can be observed in the run_generation.py example script. The model can take the past_key_values (for PyTorch) or past (for TF) as input, which is the previously computed key/value attention pairs. Using this (past_key_values or past) value prevents the model from re-computing pre-computed values in the context of text generation. For PyTorch, see past_key_values argument of the [GPT2Model.forward] method, or for TF the past argument of the [TFGPT2Model.call] method for more information on its usage. Enabling the scale_attn_by_inverse_layer_idx and reorder_and_upcast_attn flags will apply the training stability improvements from Mistral (for PyTorch only). Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with GPT2. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. A blog on how to Finetune a non-English GPT-2 Model with Hugging Face. A blog on How to generate text: using different decoding methods for language generation with Transformers with GPT-2. A blog on Training CodeParrot 🦜 from Scratch, a large GPT-2 model. A blog on Faster Text Generation with TensorFlow and XLA with GPT-2. A blog on How to train a Language Model with Megatron-LM with a GPT-2 model. A notebook on how to finetune GPT2 to generate lyrics in the style of your favorite artist. 🌎 A notebook on how to finetune GPT2 to generate tweets in the style of your favorite Twitter user. 🌎 Causal language modeling chapter of the 🤗 Hugging Face Course. [GPT2LMHeadModel] is supported by this causal language modeling example script, text generation example script, and notebook. [TFGPT2LMHeadModel] is supported by this causal language modeling example script and notebook. 
[FlaxGPT2LMHeadModel] is supported by this causal language modeling example script and notebook. Text classification task guide Token classification task guide Causal language modeling task guide GPT2Config [[autodoc]] GPT2Config GPT2Tokenizer [[autodoc]] GPT2Tokenizer - save_vocabulary GPT2TokenizerFast [[autodoc]] GPT2TokenizerFast GPT2 specific outputs [[autodoc]] models.gpt2.modeling_gpt2.GPT2DoubleHeadsModelOutput [[autodoc]] models.gpt2.modeling_tf_gpt2.TFGPT2DoubleHeadsModelOutput GPT2Model [[autodoc]] GPT2Model - forward GPT2LMHeadModel [[autodoc]] GPT2LMHeadModel - forward GPT2DoubleHeadsModel [[autodoc]] GPT2DoubleHeadsModel - forward GPT2ForQuestionAnswering [[autodoc]] GPT2ForQuestionAnswering - forward GPT2ForSequenceClassification [[autodoc]] GPT2ForSequenceClassification - forward GPT2ForTokenClassification [[autodoc]] GPT2ForTokenClassification - forward TFGPT2Model [[autodoc]] TFGPT2Model - call TFGPT2LMHeadModel [[autodoc]] TFGPT2LMHeadModel - call TFGPT2DoubleHeadsModel [[autodoc]] TFGPT2DoubleHeadsModel - call TFGPT2ForSequenceClassification [[autodoc]] TFGPT2ForSequenceClassification - call TFSequenceClassifierOutputWithPast [[autodoc]] modeling_tf_outputs.TFSequenceClassifierOutputWithPast TFGPT2Tokenizer [[autodoc]] TFGPT2Tokenizer FlaxGPT2Model [[autodoc]] FlaxGPT2Model - call FlaxGPT2LMHeadModel [[autodoc]] FlaxGPT2LMHeadModel - call
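To make the past_key_values tip in the usage section above concrete, here is a minimal sketch of reusing the cache across two forward passes. It assumes the openai-community/gpt2 checkpoint; any GPT-2 checkpoint should behave the same way.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("openai-community/gpt2")
model = GPT2LMHeadModel.from_pretrained("openai-community/gpt2")

inputs = tokenizer("Hugging Face is based in", return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs, use_cache=True)

# cached key/value attention pairs from the first forward pass
past_key_values = outputs.past_key_values
next_token = outputs.logits[:, -1:].argmax(dim=-1)

# the second forward pass only needs the newly predicted token plus the cache,
# so the earlier positions are not re-computed
with torch.no_grad():
    outputs = model(input_ids=next_token, past_key_values=past_key_values, use_cache=True)
```

In practice, [~generation.GenerationMixin.generate] handles this caching for you; passing past_key_values manually is mainly useful for custom decoding loops.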
BROS Overview The BROS model was proposed in BROS: A Pre-trained Language Model Focusing on Text and Layout for Better Key Information Extraction from Documents by Teakgyu Hong, Donghyun Kim, Mingi Ji, Wonseok Hwang, Daehyun Nam, Sungrae Park. BROS stands for BERT Relying On Spatiality. It is an encoder-only Transformer model that takes a sequence of tokens and their bounding boxes as inputs and outputs a sequence of hidden states. BROS encodes relative spatial information instead of using absolute spatial information. It is pre-trained with two objectives: a token-masked language modeling objective (TMLM) used in BERT, and a novel area-masked language modeling objective (AMLM). In TMLM, tokens are randomly masked, and the model predicts the masked tokens using spatial information and other unmasked tokens. AMLM is a 2D version of TMLM. It randomly masks text tokens and predicts with the same information as TMLM, but it masks text blocks (areas). BrosForTokenClassification has a simple linear layer on top of BrosModel. It predicts the label of each token. BrosSpadeEEForTokenClassification has an initial_token_classifier and a subsequent_token_classifier on top of BrosModel. initial_token_classifier is used to predict the first token of each entity, and subsequent_token_classifier is used to predict the subsequent tokens within an entity. BrosSpadeELForTokenClassification has an entity_linker on top of BrosModel. entity_linker is used to predict the relation between two entities. BrosForTokenClassification and BrosSpadeEEForTokenClassification essentially perform the same job. However, BrosForTokenClassification assumes input tokens are perfectly serialized (which is a very challenging task since they exist in a 2D space), while BrosSpadeEEForTokenClassification allows for more flexibility in handling serialization errors as it predicts the next connected token from a given token. BrosSpadeELForTokenClassification performs the intra-entity linking task. It predicts the relation from one token (of one entity) to another token (of another entity) if these two entities share some relation. BROS achieves comparable or better results on Key Information Extraction (KIE) benchmarks such as FUNSD, SROIE, CORD and SciTSR, without relying on explicit visual features. The abstract from the paper is the following: Key information extraction (KIE) from document images requires understanding the contextual and spatial semantics of texts in two-dimensional (2D) space. Many recent studies try to solve the task by developing pre-trained language models focusing on combining visual features from document images with texts and their layout. On the other hand, this paper tackles the problem by going back to the basic: effective combination of text and layout. Specifically, we propose a pre-trained language model, named BROS (BERT Relying On Spatiality), that encodes relative positions of texts in 2D space and learns from unlabeled documents with area-masking strategy. With this optimized training scheme for understanding texts in 2D space, BROS shows comparable or better performance compared to previous methods on four KIE benchmarks (FUNSD, SROIE, CORD, and SciTSR) without relying on visual features. This paper also reveals two real-world challenges in KIE tasks - (1) minimizing the error from incorrect text ordering and (2) efficient learning from fewer downstream examples - and demonstrates the superiority of BROS over previous methods. This model was contributed by jinho8345. The original code can be found here.
Usage tips and examples

[~transformers.BrosModel.forward] requires input_ids and bbox (bounding box). Each bounding box should be in (x0, y0, x1, y1) format (top-left corner, bottom-right corner). Obtaining bounding boxes depends on an external OCR system. The x coordinates should be normalized by the document image width, and the y coordinates should be normalized by the document image height.

```python
def expand_and_normalize_bbox(bboxes, doc_width, doc_height):
    # here, bboxes is a numpy array of shape (num_boxes, 4) in (x0, y0, x1, y1) format

    # normalize bbox -> 0 ~ 1
    bboxes[:, [0, 2]] = bboxes[:, [0, 2]] / doc_width
    bboxes[:, [1, 3]] = bboxes[:, [1, 3]] / doc_height
```

[~transformers.BrosForTokenClassification.forward], [~transformers.BrosSpadeEEForTokenClassification.forward] and [~transformers.BrosSpadeELForTokenClassification.forward] require not only input_ids and bbox but also box_first_token_mask for loss calculation. It is a mask used to filter out the non-first tokens of each box. You can obtain this mask by saving the start token indices of the bounding boxes when creating input_ids from words. You can create box_first_token_mask with the following code:

```python
import itertools
from typing import List

import numpy as np


def make_box_first_token_mask(bboxes, words, tokenizer, max_seq_length=512):
    box_first_token_mask = np.zeros(max_seq_length, dtype=np.bool_)

    # encode (tokenize) each word from words (List[str])
    input_ids_list: List[List[int]] = [tokenizer.encode(e, add_special_tokens=False) for e in words]

    # get the length of each box
    tokens_length_list: List[int] = [len(l) for l in input_ids_list]

    box_end_token_indices = np.array(list(itertools.accumulate(tokens_length_list)))
    box_start_token_indices = box_end_token_indices - np.array(tokens_length_list)

    # filter out the indices that are out of max_seq_length
    box_end_token_indices = box_end_token_indices[box_end_token_indices < max_seq_length - 1]
    if len(box_start_token_indices) > len(box_end_token_indices):
        box_start_token_indices = box_start_token_indices[: len(box_end_token_indices)]

    # set box_start_token_indices to True
    box_first_token_mask[box_start_token_indices] = True

    return box_first_token_mask
```

Resources Demo scripts can be found here. BrosConfig [[autodoc]] BrosConfig BrosProcessor [[autodoc]] BrosProcessor - call BrosModel [[autodoc]] BrosModel - forward BrosForTokenClassification [[autodoc]] BrosForTokenClassification - forward BrosSpadeEEForTokenClassification [[autodoc]] BrosSpadeEEForTokenClassification - forward BrosSpadeELForTokenClassification [[autodoc]] BrosSpadeELForTokenClassification - forward
RoBERTa Overview The RoBERTa model was proposed in RoBERTa: A Robustly Optimized BERT Pretraining Approach by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov. It is based on Google's BERT model released in 2018. It builds on BERT and modifies key hyperparameters, removing the next-sentence pretraining objective and training with much larger mini-batches and learning rates. The abstract from the paper is the following: Language model pretraining has led to significant performance gains but careful comparison between different approaches is challenging. Training is computationally expensive, often done on private datasets of different sizes, and, as we will show, hyperparameter choices have significant impact on the final results. We present a replication study of BERT pretraining (Devlin et al., 2019) that carefully measures the impact of many key hyperparameters and training data size. We find that BERT was significantly undertrained, and can match or exceed the performance of every model published after it. Our best model achieves state-of-the-art results on GLUE, RACE and SQuAD. These results highlight the importance of previously overlooked design choices, and raise questions about the source of recently reported improvements. We release our models and code. This model was contributed by julien-c. The original code can be found here. Usage tips This implementation is the same as [BertModel] with a tiny embeddings tweak as well as a setup for RoBERTa pretrained models. RoBERTa has the same architecture as BERT, but uses a byte-level BPE as a tokenizer (same as GPT-2) and uses a different pretraining scheme. RoBERTa doesn't have token_type_ids, so you don't need to indicate which token belongs to which segment. Just separate your segments with the separation token tokenizer.sep_token (or </s>); a minimal sketch of encoding a sentence pair is included at the end of this section, after the API reference. Same as BERT but with better pretraining tricks: dynamic masking (tokens are masked differently at each epoch, whereas BERT masks them once and for all); sentences are packed together to reach 512 tokens (so the sentences are in an order that may span several documents); training with larger batches; byte-level BPE with bytes as subunits rather than characters (to handle unicode characters). CamemBERT is a wrapper around RoBERTa. Refer to this page for usage examples. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with RoBERTa. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. A blog on Getting Started with Sentiment Analysis on Twitter using RoBERTa and the Inference API. A blog on Opinion Classification with Kili and Hugging Face AutoTrain using RoBERTa. A notebook on how to finetune RoBERTa for sentiment analysis. 🌎 [RobertaForSequenceClassification] is supported by this example script and notebook. [TFRobertaForSequenceClassification] is supported by this example script and notebook. [FlaxRobertaForSequenceClassification] is supported by this example script and notebook. Text classification task guide [RobertaForTokenClassification] is supported by this example script and notebook. [TFRobertaForTokenClassification] is supported by this example script and notebook. [FlaxRobertaForTokenClassification] is supported by this example script. Token classification chapter of the 🤗 Hugging Face Course.
Token classification task guide A blog on How to train a new language model from scratch using Transformers and Tokenizers with RoBERTa. [RobertaForMaskedLM] is supported by this example script and notebook. [TFRobertaForMaskedLM] is supported by this example script and notebook. [FlaxRobertaForMaskedLM] is supported by this example script and notebook. Masked language modeling chapter of the 🤗 Hugging Face Course. Masked language modeling task guide A blog on Accelerated Inference with Optimum and Transformers Pipelines with RoBERTa for question answering. [RobertaForQuestionAnswering] is supported by this example script and notebook. [TFRobertaForQuestionAnswering] is supported by this example script and notebook. [FlaxRobertaForQuestionAnswering] is supported by this example script. Question answering chapter of the 🤗 Hugging Face Course. Question answering task guide Multiple choice - [RobertaForMultipleChoice] is supported by this example script and notebook. - [TFRobertaForMultipleChoice] is supported by this example script and notebook. - Multiple choice task guide RobertaConfig [[autodoc]] RobertaConfig RobertaTokenizer [[autodoc]] RobertaTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary RobertaTokenizerFast [[autodoc]] RobertaTokenizerFast - build_inputs_with_special_tokens RobertaModel [[autodoc]] RobertaModel - forward RobertaForCausalLM [[autodoc]] RobertaForCausalLM - forward RobertaForMaskedLM [[autodoc]] RobertaForMaskedLM - forward RobertaForSequenceClassification [[autodoc]] RobertaForSequenceClassification - forward RobertaForMultipleChoice [[autodoc]] RobertaForMultipleChoice - forward RobertaForTokenClassification [[autodoc]] RobertaForTokenClassification - forward RobertaForQuestionAnswering [[autodoc]] RobertaForQuestionAnswering - forward TFRobertaModel [[autodoc]] TFRobertaModel - call TFRobertaForCausalLM [[autodoc]] TFRobertaForCausalLM - call TFRobertaForMaskedLM [[autodoc]] TFRobertaForMaskedLM - call TFRobertaForSequenceClassification [[autodoc]] TFRobertaForSequenceClassification - call TFRobertaForMultipleChoice [[autodoc]] TFRobertaForMultipleChoice - call TFRobertaForTokenClassification [[autodoc]] TFRobertaForTokenClassification - call TFRobertaForQuestionAnswering [[autodoc]] TFRobertaForQuestionAnswering - call FlaxRobertaModel [[autodoc]] FlaxRobertaModel - call FlaxRobertaForCausalLM [[autodoc]] FlaxRobertaForCausalLM - call FlaxRobertaForMaskedLM [[autodoc]] FlaxRobertaForMaskedLM - call FlaxRobertaForSequenceClassification [[autodoc]] FlaxRobertaForSequenceClassification - call FlaxRobertaForMultipleChoice [[autodoc]] FlaxRobertaForMultipleChoice - call FlaxRobertaForTokenClassification [[autodoc]] FlaxRobertaForTokenClassification - call FlaxRobertaForQuestionAnswering [[autodoc]] FlaxRobertaForQuestionAnswering - call
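As referenced in the usage tips, the sketch below shows how a sentence pair is encoded without token_type_ids; the tokenizer inserts the separation tokens itself. It assumes the FacebookAI/roberta-base checkpoint; any RoBERTa checkpoint should work the same way.

```python
from transformers import RobertaTokenizer, RobertaModel

tokenizer = RobertaTokenizer.from_pretrained("FacebookAI/roberta-base")
model = RobertaModel.from_pretrained("FacebookAI/roberta-base")

# encode a pair of segments; no token_type_ids are needed,
# the tokenizer separates the segments with the </s> token on its own
inputs = tokenizer("The first segment.", "The second segment.", return_tensors="pt")
print(tokenizer.decode(inputs["input_ids"][0]))
# roughly: <s> The first segment. </s> </s> The second segment. </s>

outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state
```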
Swin Transformer V2 Overview The Swin Transformer V2 model was proposed in Swin Transformer V2: Scaling Up Capacity and Resolution by Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo. The abstract from the paper is the following: Large-scale NLP models have been shown to significantly improve the performance on language tasks with no signs of saturation. They also demonstrate amazing few-shot capabilities like that of human beings. This paper aims to explore large-scale models in computer vision. We tackle three major issues in training and application of large vision models, including training instability, resolution gaps between pre-training and fine-tuning, and hunger on labelled data. Three main techniques are proposed: 1) a residual-post-norm method combined with cosine attention to improve training stability; 2) A log-spaced continuous position bias method to effectively transfer models pre-trained using low-resolution images to downstream tasks with high-resolution inputs; 3) A self-supervised pre-training method, SimMIM, to reduce the needs of vast labeled images. Through these techniques, this paper successfully trained a 3 billion-parameter Swin Transformer V2 model, which is the largest dense vision model to date, and makes it capable of training with images of up to 1,536×1,536 resolution. It set new performance records on 4 representative vision tasks, including ImageNet-V2 image classification, COCO object detection, ADE20K semantic segmentation, and Kinetics-400 video action classification. Also note our training is much more efficient than that in Google's billion-level visual models, which consumes 40 times less labelled data and 40 times less training time. This model was contributed by nandwalritik. The original code can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Swin Transformer v2. [Swinv2ForImageClassification] is supported by this example script and notebook. See also: Image classification task guide Besides that: [Swinv2ForMaskedImageModeling] is supported by this example script. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. Swinv2Config [[autodoc]] Swinv2Config Swinv2Model [[autodoc]] Swinv2Model - forward Swinv2ForMaskedImageModeling [[autodoc]] Swinv2ForMaskedImageModeling - forward Swinv2ForImageClassification [[autodoc]] transformers.Swinv2ForImageClassification - forward
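As a quick illustration of image classification with Swin Transformer V2, here is a minimal inference sketch. It assumes the microsoft/swinv2-tiny-patch4-window8-256 checkpoint fine-tuned on ImageNet-1k; any Swin V2 classification checkpoint works the same way.

```python
from PIL import Image
import requests
import torch
from transformers import AutoImageProcessor, Swinv2ForImageClassification

processor = AutoImageProcessor.from_pretrained("microsoft/swinv2-tiny-patch4-window8-256")
model = Swinv2ForImageClassification.from_pretrained("microsoft/swinv2-tiny-patch4-window8-256")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# the predicted class is the argmax over the ImageNet-1k labels
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```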
LED Overview The LED model was proposed in Longformer: The Long-Document Transformer by Iz Beltagy, Matthew E. Peters, Arman Cohan. The abstract from the paper is the following: Transformer-based models are unable to process long sequences due to their self-attention operation, which scales quadratically with the sequence length. To address this limitation, we introduce the Longformer with an attention mechanism that scales linearly with sequence length, making it easy to process documents of thousands of tokens or longer. Longformer's attention mechanism is a drop-in replacement for the standard self-attention and combines a local windowed attention with a task motivated global attention. Following prior work on long-sequence transformers, we evaluate Longformer on character-level language modeling and achieve state-of-the-art results on text8 and enwik8. In contrast to most prior work, we also pretrain Longformer and finetune it on a variety of downstream tasks. Our pretrained Longformer consistently outperforms RoBERTa on long document tasks and sets new state-of-the-art results on WikiHop and TriviaQA. We finally introduce the Longformer-Encoder-Decoder (LED), a Longformer variant for supporting long document generative sequence-to-sequence tasks, and demonstrate its effectiveness on the arXiv summarization dataset. Usage tips [LEDForConditionalGeneration] is an extension of [BartForConditionalGeneration] exchanging the traditional self-attention layer with Longformer's chunked self-attention layer. [LEDTokenizer] is an alias of [BartTokenizer]. LED works very well on long-range sequence-to-sequence tasks where the input_ids largely exceed a length of 1024 tokens. LED pads the input_ids to be a multiple of config.attention_window if required. Therefore a small speed-up is gained when [LEDTokenizer] is used with the pad_to_multiple_of argument. LED makes use of global attention by means of the global_attention_mask (see [LongformerModel]). For summarization, it is advised to put global attention only on the first <s> token. For question answering, it is advised to put global attention on all tokens of the question. A minimal summarization sketch is included at the end of this section, after the API reference. To fine-tune LED on all 16384 input positions, gradient checkpointing can be enabled in case training leads to out-of-memory (OOM) errors. This can be done by executing model.gradient_checkpointing_enable(). Moreover, the use_cache=False flag can be used to disable the caching mechanism to save memory. LED is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than the left. This model was contributed by patrickvonplaten. Resources A notebook showing how to evaluate LED. A notebook showing how to fine-tune LED.
Text classification task guide Question answering task guide Translation task guide Summarization task guide LEDConfig [[autodoc]] LEDConfig LEDTokenizer [[autodoc]] LEDTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary LEDTokenizerFast [[autodoc]] LEDTokenizerFast LED specific outputs [[autodoc]] models.led.modeling_led.LEDEncoderBaseModelOutput [[autodoc]] models.led.modeling_led.LEDSeq2SeqModelOutput [[autodoc]] models.led.modeling_led.LEDSeq2SeqLMOutput [[autodoc]] models.led.modeling_led.LEDSeq2SeqSequenceClassifierOutput [[autodoc]] models.led.modeling_led.LEDSeq2SeqQuestionAnsweringModelOutput [[autodoc]] models.led.modeling_tf_led.TFLEDEncoderBaseModelOutput [[autodoc]] models.led.modeling_tf_led.TFLEDSeq2SeqModelOutput [[autodoc]] models.led.modeling_tf_led.TFLEDSeq2SeqLMOutput LEDModel [[autodoc]] LEDModel - forward LEDForConditionalGeneration [[autodoc]] LEDForConditionalGeneration - forward LEDForSequenceClassification [[autodoc]] LEDForSequenceClassification - forward LEDForQuestionAnswering [[autodoc]] LEDForQuestionAnswering - forward TFLEDModel [[autodoc]] TFLEDModel - call TFLEDForConditionalGeneration [[autodoc]] TFLEDForConditionalGeneration - call
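As referenced in the usage tips, the following is a minimal summarization sketch that places global attention on the first <s> token before generating. It assumes the allenai/led-base-16384 checkpoint; the input string is a placeholder for a long document.

```python
import torch
from transformers import LEDTokenizer, LEDForConditionalGeneration

tokenizer = LEDTokenizer.from_pretrained("allenai/led-base-16384")
model = LEDForConditionalGeneration.from_pretrained("allenai/led-base-16384")

long_article = "Replace this placeholder with a long document of several thousand tokens."
inputs = tokenizer(long_article, return_tensors="pt")

# put global attention on the first token (<s>), as advised for summarization
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

summary_ids = model.generate(
    inputs["input_ids"],
    global_attention_mask=global_attention_mask,
    max_length=64,
)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True))
```

For question answering, the same pattern applies, except that the global attention positions would cover all tokens of the question.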
UPerNet Overview The UPerNet model was proposed in Unified Perceptual Parsing for Scene Understanding by Tete Xiao, Yingcheng Liu, Bolei Zhou, Yuning Jiang, Jian Sun. UPerNet is a general framework to effectively segment a wide range of concepts from images, leveraging any vision backbone like ConvNeXt or Swin. The abstract from the paper is the following: Humans recognize the visual world at multiple levels: we effortlessly categorize scenes and detect objects inside, while also identifying the textures and surfaces of the objects along with their different compositional parts. In this paper, we study a new task called Unified Perceptual Parsing, which requires the machine vision systems to recognize as many visual concepts as possible from a given image. A multi-task framework called UPerNet and a training strategy are developed to learn from heterogeneous image annotations. We benchmark our framework on Unified Perceptual Parsing and show that it is able to effectively segment a wide range of concepts from images. The trained networks are further applied to discover visual knowledge in natural scenes. UPerNet framework. Taken from the original paper. This model was contributed by nielsr. The original code is based on OpenMMLab's mmsegmentation here. Usage examples UPerNet is a general framework for semantic segmentation. It can be used with any vision backbone, like so: from transformers import SwinConfig, UperNetConfig, UperNetForSemanticSegmentation backbone_config = SwinConfig(out_features=["stage1", "stage2", "stage3", "stage4"]) config = UperNetConfig(backbone_config=backbone_config) model = UperNetForSemanticSegmentation(config) To use another vision backbone, like ConvNeXt, simply instantiate the model with the appropriate backbone: from transformers import ConvNextConfig, UperNetConfig, UperNetForSemanticSegmentation backbone_config = ConvNextConfig(out_features=["stage1", "stage2", "stage3", "stage4"]) config = UperNetConfig(backbone_config=backbone_config) model = UperNetForSemanticSegmentation(config) Note that this will randomly initialize all the weights of the model. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with UPerNet. Demo notebooks for UPerNet can be found here. [UperNetForSemanticSegmentation] is supported by this example script and notebook. See also: Semantic segmentation task guide If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. UperNetConfig [[autodoc]] UperNetConfig UperNetForSemanticSegmentation [[autodoc]] UperNetForSemanticSegmentation - forward
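The UPerNet snippets above build randomly initialized models. As a hedged sketch of running inference with pretrained weights instead, the following assumes a ConvNeXt-backed UPerNet checkpoint from the Hub (the exact checkpoint name is an assumption; any UPerNet checkpoint should work the same way):

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, UperNetForSemanticSegmentation

# assumed checkpoint name; replace with whichever UPerNet checkpoint you use
checkpoint = "openmmlab/upernet-convnext-tiny"
image_processor = AutoImageProcessor.from_pretrained(checkpoint)
model = UperNetForSemanticSegmentation.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.logits holds per-pixel class scores of shape (batch_size, num_labels, height, width);
# taking the argmax over the class dimension gives a segmentation map
predicted_map = outputs.logits.argmax(dim=1)
```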
Blenderbot Overview The Blender chatbot model was proposed in Recipes for building an open-domain chatbot by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston on 30 Apr 2020. The abstract of the paper is the following: Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, and displaying knowledge, empathy and personality appropriately, while maintaining a consistent persona. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models. This model was contributed by sshleifer. The authors' code can be found here. Usage tips and example Blenderbot is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than the left. An example: from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration mname = "facebook/blenderbot-400M-distill" model = BlenderbotForConditionalGeneration.from_pretrained(mname) tokenizer = BlenderbotTokenizer.from_pretrained(mname) UTTERANCE = "My friends are cool but they eat too many carbs." inputs = tokenizer([UTTERANCE], return_tensors="pt") reply_ids = model.generate(**inputs) print(tokenizer.batch_decode(reply_ids)) [" That's unfortunate. Are they trying to lose weight or are they just trying to be healthier?"] Implementation Notes Blenderbot uses a standard seq2seq transformer-based architecture. Available checkpoints can be found in the model hub. This is the default Blenderbot model class. However, some smaller checkpoints, such as facebook/blenderbot_small_90M, have a different architecture and consequently should be used with BlenderbotSmall (see the sketch below).
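As a short sketch of the implementation note above, the smaller 90M checkpoint would be loaded with the BlenderbotSmall classes instead; the exact Hub identifier used below is an assumption and may differ from the name quoted in the text:

```python
from transformers import BlenderbotSmallTokenizer, BlenderbotSmallForConditionalGeneration

# assumed Hub identifier for the 90M checkpoint
mname = "facebook/blenderbot_small-90M"
model = BlenderbotSmallForConditionalGeneration.from_pretrained(mname)
tokenizer = BlenderbotSmallTokenizer.from_pretrained(mname)

UTTERANCE = "My friends are cool but they eat too many carbs."
inputs = tokenizer([UTTERANCE], return_tensors="pt")
reply_ids = model.generate(**inputs)
print(tokenizer.batch_decode(reply_ids, skip_special_tokens=True))
```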
Resources Causal language modeling task guide Translation task guide Summarization task guide BlenderbotConfig [[autodoc]] BlenderbotConfig BlenderbotTokenizer [[autodoc]] BlenderbotTokenizer - build_inputs_with_special_tokens BlenderbotTokenizerFast [[autodoc]] BlenderbotTokenizerFast - build_inputs_with_special_tokens BlenderbotModel See [~transformers.BartModel] for arguments to forward and generate [[autodoc]] BlenderbotModel - forward BlenderbotForConditionalGeneration See [~transformers.BartForConditionalGeneration] for arguments to forward and generate [[autodoc]] BlenderbotForConditionalGeneration - forward BlenderbotForCausalLM [[autodoc]] BlenderbotForCausalLM - forward TFBlenderbotModel [[autodoc]] TFBlenderbotModel - call TFBlenderbotForConditionalGeneration [[autodoc]] TFBlenderbotForConditionalGeneration - call FlaxBlenderbotModel [[autodoc]] FlaxBlenderbotModel - call - encode - decode FlaxBlenderbotForConditionalGeneration [[autodoc]] FlaxBlenderbotForConditionalGeneration - call - encode - decode
Splinter Overview The Splinter model was proposed in Few-Shot Question Answering by Pretraining Span Selection by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy. Splinter is an encoder-only transformer (similar to BERT) pretrained using the recurring span selection task on a large corpus comprising Wikipedia and the Toronto Book Corpus. The abstract from the paper is the following: In several question answering benchmarks, pretrained models have reached human parity through fine-tuning on an order of 100,000 annotated questions and answers. We explore the more realistic few-shot setting, where only a few hundred training examples are available, and observe that standard models perform poorly, highlighting the discrepancy between current pretraining objectives and question answering. We propose a new pretraining scheme tailored for question answering: recurring span selection. Given a passage with multiple sets of recurring spans, we mask in each set all recurring spans but one, and ask the model to select the correct span in the passage for each masked span. Masked spans are replaced with a special token, viewed as a question representation, that is later used during fine-tuning to select the answer span. The resulting model obtains surprisingly good results on multiple benchmarks (e.g., 72.7 F1 on SQuAD with only 128 training examples), while maintaining competitive performance in the high-resource setting. This model was contributed by yuvalkirstain and oriram. The original code can be found here. Usage tips Splinter was trained to predict answer spans conditioned on a special [QUESTION] token. These tokens are contextualized into question representations, which are then used to predict the answers. This layer is called QASS, and it is the default behaviour in the [SplinterForQuestionAnswering] class. Therefore: Use [SplinterTokenizer] (rather than [BertTokenizer]), as it already contains this special token. Also, its default behavior is to use this token when two sequences are given (for example, in the run_qa.py script). If you plan on using Splinter outside run_qa.py, keep the [QUESTION] token in mind - it might be important for the success of your model, especially in a few-shot setting. A minimal question answering example is shown at the end of this section. Please note there are two different checkpoints for each size of Splinter. Both are basically the same, except that one also has the pretrained weights of the QASS layer (tau/splinter-base-qass and tau/splinter-large-qass) and one doesn't (tau/splinter-base and tau/splinter-large). This is done to support randomly initializing this layer at fine-tuning, as it is shown to yield better results for some cases in the paper. Resources Question answering task guide SplinterConfig [[autodoc]] SplinterConfig SplinterTokenizer [[autodoc]] SplinterTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary SplinterTokenizerFast [[autodoc]] SplinterTokenizerFast SplinterModel [[autodoc]] SplinterModel - forward SplinterForQuestionAnswering [[autodoc]] SplinterForQuestionAnswering - forward SplinterForPreTraining [[autodoc]] SplinterForPreTraining - forward
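The minimal question answering example mentioned above is sketched here. It uses the tau/splinter-base-qass checkpoint named in the tips and a hypothetical question/context pair; it is meant to show the extractive-QA pattern rather than a reference implementation:

```python
import torch
from transformers import SplinterTokenizer, SplinterForQuestionAnswering

tokenizer = SplinterTokenizer.from_pretrained("tau/splinter-base-qass")
model = SplinterForQuestionAnswering.from_pretrained("tau/splinter-base-qass")

question = "Who wrote the novel?"
context = "The novel was written by Jane Austen in 1813."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# pick the most likely start/end positions and decode the corresponding span
start = outputs.start_logits.argmax(dim=-1).item()
end = outputs.end_logits.argmax(dim=-1).item()
answer = tokenizer.decode(inputs["input_ids"][0, start : end + 1])
print(answer)
```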
SEW-D Overview SEW-D (Squeezed and Efficient Wav2Vec with Disentangled attention) was proposed in Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi. The abstract from the paper is the following: This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes. This model was contributed by anton-l. Usage tips SEW-D is a speech model that accepts a float array corresponding to the raw waveform of the speech signal. SEWDForCTC is fine-tuned using connectionist temporal classification (CTC) so the model output has to be decoded using [Wav2Vec2CTCTokenizer]. Resources Audio classification task guide Automatic speech recognition task guide SEWDConfig [[autodoc]] SEWDConfig SEWDModel [[autodoc]] SEWDModel - forward SEWDForCTC [[autodoc]] SEWDForCTC - forward SEWDForSequenceClassification [[autodoc]] SEWDForSequenceClassification - forward
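To make the CTC decoding tip above concrete, here is a hedged transcription sketch. The checkpoint name is assumed to be a CTC-finetuned SEW-D model; any such checkpoint can be substituted:

```python
import torch
from datasets import load_dataset
from transformers import AutoProcessor, SEWDForCTC

# assumed CTC-finetuned SEW-D checkpoint
checkpoint = "asapp/sew-d-tiny-100k-ft-ls100h"
processor = AutoProcessor.from_pretrained(checkpoint)
model = SEWDForCTC.from_pretrained(checkpoint)

# load a single 16 kHz example from a small LibriSpeech demo split
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
audio = dataset[0]["audio"]

inputs = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# greedy CTC decoding of the most likely token at each frame
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```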
TAPAS Overview The TAPAS model was proposed in TAPAS: Weakly Supervised Table Parsing via Pre-training by Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos. It's a BERT-based model specifically designed (and pre-trained) for answering questions about tabular data. Compared to BERT, TAPAS uses relative position embeddings and has 7 token types that encode tabular structure. TAPAS is pre-trained on the masked language modeling (MLM) objective on a large dataset comprising millions of tables from English Wikipedia and corresponding texts. For question answering, TAPAS has 2 heads on top: a cell selection head and an aggregation head, for (optionally) performing aggregations (such as counting or summing) among selected cells. TAPAS has been fine-tuned on several datasets: - SQA (Sequential Question Answering by Microsoft) - WTQ (Wiki Table Questions by Stanford University) - WikiSQL (by Salesforce). It achieves state-of-the-art on both SQA and WTQ, while having comparable performance to SOTA on WikiSQL, with a much simpler architecture. The abstract from the paper is the following: Answering natural language questions over tables is usually seen as a semantic parsing task. To alleviate the collection cost of full logical forms, one popular approach focuses on weak supervision consisting of denotations instead of logical forms. However, training semantic parsers from weak supervision poses difficulties, and in addition, the generated logical forms are only used as an intermediate step prior to retrieving the denotation. In this paper, we present TAPAS, an approach to question answering over tables without generating logical forms. TAPAS trains from weak supervision, and predicts the denotation by selecting table cells and optionally applying a corresponding aggregation operator to such selection. TAPAS extends BERT's architecture to encode tables as input, initializes from an effective joint pre-training of text segments and tables crawled from Wikipedia, and is trained end-to-end. We experiment with three different semantic parsing datasets, and find that TAPAS outperforms or rivals semantic parsing models by improving state-of-the-art accuracy on SQA from 55.1 to 67.2 and performing on par with the state-of-the-art on WIKISQL and WIKITQ, but with a simpler model architecture. We additionally find that transfer learning, which is trivial in our setting, from WIKISQL to WIKITQ, yields 48.7 accuracy, 4.2 points above the state-of-the-art. In addition, the authors have further pre-trained TAPAS to recognize table entailment, by creating a balanced dataset of millions of automatically created training examples which are learned in an intermediate step prior to fine-tuning. The authors of TAPAS call this further pre-training intermediate pre-training (since TAPAS is first pre-trained on MLM, and then on another dataset). They found that intermediate pre-training further improves performance on SQA, achieving a new state-of-the-art as well as state-of-the-art on TabFact, a large-scale dataset with 16k Wikipedia tables for table entailment (a binary classification task). For more details, see their follow-up paper: Understanding tables with intermediate pre-training by Julian Martin Eisenschlos, Syrine Krichene and Thomas Müller. TAPAS architecture. Taken from the original blog post. This model was contributed by nielsr. The Tensorflow version of this model was contributed by kamalkraj. The original code can be found here. 
Usage tips TAPAS is a model that uses relative position embeddings by default (restarting the position embeddings at every cell of the table). Note that this is something that was added after the publication of the original TAPAS paper. According to the authors, this usually results in a slightly better performance, and allows you to encode longer sequences without running out of embeddings. This is reflected in the reset_position_index_per_cell parameter of [TapasConfig], which is set to True by default. The default versions of the models available on the hub all use relative position embeddings. You can still use the ones with absolute position embeddings by passing in an additional argument revision="no_reset" when calling the from_pretrained() method. Note that it's usually advised to pad the inputs on the right rather than the left. TAPAS is based on BERT, so TAPAS-base for example corresponds to a BERT-base architecture. Of course, TAPAS-large will result in the best performance (the results reported in the paper are from TAPAS-large). Results of the various sized models are shown on the original GitHub repository. TAPAS has checkpoints fine-tuned on SQA, which are capable of answering questions related to a table in a conversational set-up. This means that you can ask follow-up questions such as "what is his age?" related to the previous question. Note that the forward pass of TAPAS is a bit different in case of a conversational set-up: in that case, you have to feed every table-question pair one by one to the model, such that the prev_labels token type ids can be overwritten by the predicted labels of the model to the previous question. See "Usage" section for more info. TAPAS is similar to BERT and therefore relies on the masked language modeling (MLM) objective. It is therefore efficient at predicting masked tokens and at NLU in general, but is not optimal for text generation. Models trained with a causal language modeling (CLM) objective are better in that regard. Note that TAPAS can be used as an encoder in the EncoderDecoderModel framework, to combine it with an autoregressive text decoder such as GPT-2. Usage: fine-tuning Here we explain how you can fine-tune [TapasForQuestionAnswering] on your own dataset. STEP 1: Choose one of the 3 ways in which you can use TAPAS - or experiment Basically, there are 3 different ways in which one can fine-tune [TapasForQuestionAnswering], corresponding to the different datasets on which Tapas was fine-tuned: SQA: if you're interested in asking follow-up questions related to a table, in a conversational set-up. For example if you first ask "what's the name of the first actor?" then you can ask a follow-up question such as "how old is he?". Here, questions do not involve any aggregation (all questions are cell selection questions). WTQ: if you're not interested in asking questions in a conversational set-up, but rather just asking questions related to a table, which might involve aggregation, such as counting a number of rows, summing up cell values or averaging cell values. You can then for example ask "what's the total number of goals Cristiano Ronaldo made in his career?". This case is also called weak supervision, since the model itself must learn the appropriate aggregation operator (SUM/COUNT/AVERAGE/NONE) given only the answer to the question as supervision. WikiSQL-supervised: this dataset is based on WikiSQL with the model being given the ground truth aggregation operator during training. This is also called strong supervision. 
Here, learning the appropriate aggregation operator is much easier. To summarize: | Task | Example dataset | Description | |-------------------------------------|---------------------|---------------------------------------------------------------------------------------------------------| | Conversational | SQA | Conversational, only cell selection questions | | Weak supervision for aggregation | WTQ | Questions might involve aggregation, and the model must learn this given only the answer as supervision | | Strong supervision for aggregation | WikiSQL-supervised | Questions might involve aggregation, and the model must learn this given the gold aggregation operator | Initializing a model with a pre-trained base and randomly initialized classification heads from the hub can be done as shown below. from transformers import TapasConfig, TapasForQuestionAnswering for example, the base sized model with default SQA configuration model = TapasForQuestionAnswering.from_pretrained("google/tapas-base") or, the base sized model with WTQ configuration config = TapasConfig.from_pretrained("google/tapas-base-finetuned-wtq") model = TapasForQuestionAnswering.from_pretrained("google/tapas-base", config=config) or, the base sized model with WikiSQL configuration config = TapasConfig("google-base-finetuned-wikisql-supervised") model = TapasForQuestionAnswering.from_pretrained("google/tapas-base", config=config) Of course, you don't necessarily have to follow one of these three ways in which TAPAS was fine-tuned. You can also experiment by defining any hyperparameters you want when initializing [TapasConfig], and then create a [TapasForQuestionAnswering] based on that configuration. For example, if you have a dataset that has both conversational questions and questions that might involve aggregation, then you can do it this way. Here's an example: from transformers import TapasConfig, TapasForQuestionAnswering you can initialize the classification heads any way you want (see docs of TapasConfig) config = TapasConfig(num_aggregation_labels=3, average_logits_per_cell=True) initializing the pre-trained base sized model with our custom classification heads model = TapasForQuestionAnswering.from_pretrained("google/tapas-base", config=config) Initializing a model with a pre-trained base and randomly initialized classification heads from the hub can be done as shown below. Be sure to have installed the tensorflow_probability dependency: from transformers import TapasConfig, TFTapasForQuestionAnswering for example, the base sized model with default SQA configuration model = TFTapasForQuestionAnswering.from_pretrained("google/tapas-base") or, the base sized model with WTQ configuration config = TapasConfig.from_pretrained("google/tapas-base-finetuned-wtq") model = TFTapasForQuestionAnswering.from_pretrained("google/tapas-base", config=config) or, the base sized model with WikiSQL configuration config = TapasConfig("google-base-finetuned-wikisql-supervised") model = TFTapasForQuestionAnswering.from_pretrained("google/tapas-base", config=config) Of course, you don't necessarily have to follow one of these three ways in which TAPAS was fine-tuned. You can also experiment by defining any hyperparameters you want when initializing [TapasConfig], and then create a [TFTapasForQuestionAnswering] based on that configuration. For example, if you have a dataset that has both conversational questions and questions that might involve aggregation, then you can do it this way. 
Here's an example: from transformers import TapasConfig, TFTapasForQuestionAnswering you can initialize the classification heads any way you want (see docs of TapasConfig) config = TapasConfig(num_aggregation_labels=3, average_logits_per_cell=True) initializing the pre-trained base sized model with our custom classification heads model = TFTapasForQuestionAnswering.from_pretrained("google/tapas-base", config=config) What you can also do is start from an already fine-tuned checkpoint. A note here is that the already fine-tuned checkpoint on WTQ has some issues due to the L2-loss which is somewhat brittle. See here for more info. For a list of all pre-trained and fine-tuned TAPAS checkpoints available on HuggingFace's hub, see here. STEP 2: Prepare your data in the SQA format Second, no matter what you picked above, you should prepare your dataset in the SQA format. This format is a TSV/CSV file with the following columns: id: optional, id of the table-question pair, for bookkeeping purposes. annotator: optional, id of the person who annotated the table-question pair, for bookkeeping purposes. position: integer indicating if the question is the first, second, third, related to the table. Only required in case of conversational setup (SQA). You don't need this column in case you're going for WTQ/WikiSQL-supervised. question: string table_file: string, name of a csv file containing the tabular data answer_coordinates: list of one or more tuples (each tuple being a cell coordinate, i.e. row, column pair that is part of the answer) answer_text: list of one or more strings (each string being a cell value that is part of the answer) aggregation_label: index of the aggregation operator. Only required in case of strong supervision for aggregation (the WikiSQL-supervised case) float_answer: the float answer to the question, if there is one (np.nan if there isn't). Only required in case of weak supervision for aggregation (such as WTQ and WikiSQL) The tables themselves should be present in a folder, each table being a separate csv file. Note that the authors of the TAPAS algorithm used conversion scripts with some automated logic to convert the other datasets (WTQ, WikiSQL) into the SQA format. The author explains this here. A conversion of this script that works with HuggingFace's implementation can be found here. Interestingly, these conversion scripts are not perfect (the answer_coordinates and float_answer fields are populated based on the answer_text), meaning that WTQ and WikiSQL results could actually be improved. STEP 3: Convert your data into tensors using TapasTokenizer Third, given that you've prepared your data in this TSV/CSV format (and corresponding CSV files containing the tabular data), you can then use [TapasTokenizer] to convert table-question pairs into input_ids, attention_mask, token_type_ids and so on. 
Again, based on which of the three cases you picked above, [TapasForQuestionAnswering] requires different inputs to be fine-tuned: | Task | Required inputs | |------------------------------------|---------------------------------------------------------------------------------------------------------------------| | Conversational | input_ids, attention_mask, token_type_ids, labels | | Weak supervision for aggregation | input_ids, attention_mask, token_type_ids, labels, numeric_values, numeric_values_scale, float_answer | | Strong supervision for aggregation | input ids, attention mask, token type ids, labels, aggregation_labels | [TapasTokenizer] creates the labels, numeric_values and numeric_values_scale based on the answer_coordinates and answer_text columns of the TSV file. The float_answer and aggregation_labels are already in the TSV file of step 2. Here's an example: from transformers import TapasTokenizer import pandas as pd model_name = "google/tapas-base" tokenizer = TapasTokenizer.from_pretrained(model_name) data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Number of movies": ["87", "53", "69"]} queries = [ "What is the name of the first actor?", "How many movies has George Clooney played in?", "What is the total number of movies?", ] answer_coordinates = [[(0, 0)], [(2, 1)], [(0, 1), (1, 1), (2, 1)]] answer_text = [["Brad Pitt"], ["69"], ["209"]] table = pd.DataFrame.from_dict(data) inputs = tokenizer( table=table, queries=queries, answer_coordinates=answer_coordinates, answer_text=answer_text, padding="max_length", return_tensors="pt", ) inputs {'input_ids': tensor([[ ]]), 'attention_mask': tensor([[]]), 'token_type_ids': tensor([[[]]]), 'numeric_values': tensor([[ ]]), 'numeric_values_scale: tensor([[ ]]), labels: tensor([[ ]])} Note that [TapasTokenizer] expects the data of the table to be text-only. You can use .astype(str) on a dataframe to turn it into text-only data. Of course, this only shows how to encode a single training example. It is advised to create a dataloader to iterate over batches: import torch import pandas as pd tsv_path = "your_path_to_the_tsv_file" table_csv_path = "your_path_to_a_directory_containing_all_csv_files" class TableDataset(torch.utils.data.Dataset): def init(self, data, tokenizer): self.data = data self.tokenizer = tokenizer def getitem(self, idx): item = data.iloc[idx] table = pd.read_csv(table_csv_path + item.table_file).astype( str ) # be sure to make your table data text only encoding = self.tokenizer( table=table, queries=item.question, answer_coordinates=item.answer_coordinates, answer_text=item.answer_text, truncation=True, padding="max_length", return_tensors="pt", ) # remove the batch dimension which the tokenizer adds by default encoding = {key: val.squeeze(0) for key, val in encoding.items()} # add the float_answer which is also required (weak supervision for aggregation case) encoding["float_answer"] = torch.tensor(item.float_answer) return encoding def len(self): return len(self.data) data = pd.read_csv(tsv_path, sep="\t") train_dataset = TableDataset(data, tokenizer) train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=32) `` </pt> <tf> Third, given that you've prepared your data in this TSV/CSV format (and corresponding CSV files containing the tabular data), you can then use [TapasTokenizer] to convert table-question pairs intoinput_ids,attention_mask,token_type_idsand so on. 
Again, based on which of the three cases you picked above, [TFTapasForQuestionAnswering`] requires different inputs to be fine-tuned: | Task | Required inputs | |------------------------------------|---------------------------------------------------------------------------------------------------------------------| | Conversational | input_ids, attention_mask, token_type_ids, labels | | Weak supervision for aggregation | input_ids, attention_mask, token_type_ids, labels, numeric_values, numeric_values_scale, float_answer | | Strong supervision for aggregation | input ids, attention mask, token type ids, labels, aggregation_labels | [TapasTokenizer] creates the labels, numeric_values and numeric_values_scale based on the answer_coordinates and answer_text columns of the TSV file. The float_answer and aggregation_labels are already in the TSV file of step 2. Here's an example: from transformers import TapasTokenizer import pandas as pd model_name = "google/tapas-base" tokenizer = TapasTokenizer.from_pretrained(model_name) data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Number of movies": ["87", "53", "69"]} queries = [ "What is the name of the first actor?", "How many movies has George Clooney played in?", "What is the total number of movies?", ] answer_coordinates = [[(0, 0)], [(2, 1)], [(0, 1), (1, 1), (2, 1)]] answer_text = [["Brad Pitt"], ["69"], ["209"]] table = pd.DataFrame.from_dict(data) inputs = tokenizer( table=table, queries=queries, answer_coordinates=answer_coordinates, answer_text=answer_text, padding="max_length", return_tensors="tf", ) inputs {'input_ids': tensor([[ ]]), 'attention_mask': tensor([[]]), 'token_type_ids': tensor([[[]]]), 'numeric_values': tensor([[ ]]), 'numeric_values_scale: tensor([[ ]]), labels: tensor([[ ]])} Note that [TapasTokenizer] expects the data of the table to be text-only. You can use .astype(str) on a dataframe to turn it into text-only data. Of course, this only shows how to encode a single training example. 
It is advised to create a dataloader to iterate over batches: import tensorflow as tf import pandas as pd tsv_path = "your_path_to_the_tsv_file" table_csv_path = "your_path_to_a_directory_containing_all_csv_files" class TableDataset: def init(self, data, tokenizer): self.data = data self.tokenizer = tokenizer def iter(self): for idx in range(self.len()): item = self.data.iloc[idx] table = pd.read_csv(table_csv_path + item.table_file).astype( str ) # be sure to make your table data text only encoding = self.tokenizer( table=table, queries=item.question, answer_coordinates=item.answer_coordinates, answer_text=item.answer_text, truncation=True, padding="max_length", return_tensors="tf", ) # remove the batch dimension which the tokenizer adds by default encoding = {key: tf.squeeze(val, 0) for key, val in encoding.items()} # add the float_answer which is also required (weak supervision for aggregation case) encoding["float_answer"] = tf.convert_to_tensor(item.float_answer, dtype=tf.float32) yield encoding["input_ids"], encoding["attention_mask"], encoding["numeric_values"], encoding[ "numeric_values_scale" ], encoding["token_type_ids"], encoding["labels"], encoding["float_answer"] def len(self): return len(self.data) data = pd.read_csv(tsv_path, sep="\t") train_dataset = TableDataset(data, tokenizer) output_signature = ( tf.TensorSpec(shape=(512,), dtype=tf.int32), tf.TensorSpec(shape=(512,), dtype=tf.int32), tf.TensorSpec(shape=(512,), dtype=tf.float32), tf.TensorSpec(shape=(512,), dtype=tf.float32), tf.TensorSpec(shape=(512, 7), dtype=tf.int32), tf.TensorSpec(shape=(512,), dtype=tf.int32), tf.TensorSpec(shape=(512,), dtype=tf.float32), ) train_dataloader = tf.data.Dataset.from_generator(train_dataset, output_signature=output_signature).batch(32) Note that here, we encode each table-question pair independently. This is fine as long as your dataset is not conversational. In case your dataset involves conversational questions (such as in SQA), then you should first group together the queries, answer_coordinates and answer_text per table (in the order of their position index) and batch encode each table with its questions. This will make sure that the prev_labels token types (see docs of [TapasTokenizer]) are set correctly. See this notebook for more info. See this notebook for more info regarding using the TensorFlow model. 
**STEP 4: Train (fine-tune) the model You can then fine-tune [TapasForQuestionAnswering] as follows (shown here for the weak supervision for aggregation case): from transformers import TapasConfig, TapasForQuestionAnswering, AdamW this is the default WTQ configuration config = TapasConfig( num_aggregation_labels=4, use_answer_as_supervision=True, answer_loss_cutoff=0.664694, cell_selection_preference=0.207951, huber_loss_delta=0.121194, init_cell_selection_weights_to_zero=True, select_one_column=True, allow_empty_column_selection=False, temperature=0.0352513, ) model = TapasForQuestionAnswering.from_pretrained("google/tapas-base", config=config) optimizer = AdamW(model.parameters(), lr=5e-5) model.train() for epoch in range(2): # loop over the dataset multiple times for batch in train_dataloader: # get the inputs; input_ids = batch["input_ids"] attention_mask = batch["attention_mask"] token_type_ids = batch["token_type_ids"] labels = batch["labels"] numeric_values = batch["numeric_values"] numeric_values_scale = batch["numeric_values_scale"] float_answer = batch["float_answer"] # zero the parameter gradients optimizer.zero_grad() # forward + backward + optimize outputs = model( input_ids=input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, labels=labels, numeric_values=numeric_values, numeric_values_scale=numeric_values_scale, float_answer=float_answer, ) loss = outputs.loss loss.backward() optimizer.step() `` </pt> <tf> You can then fine-tune [TFTapasForQuestionAnswering`] as follows (shown here for the weak supervision for aggregation case): import tensorflow as tf from transformers import TapasConfig, TFTapasForQuestionAnswering this is the default WTQ configuration config = TapasConfig( num_aggregation_labels=4, use_answer_as_supervision=True, answer_loss_cutoff=0.664694, cell_selection_preference=0.207951, huber_loss_delta=0.121194, init_cell_selection_weights_to_zero=True, select_one_column=True, allow_empty_column_selection=False, temperature=0.0352513, ) model = TFTapasForQuestionAnswering.from_pretrained("google/tapas-base", config=config) optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5) for epoch in range(2): # loop over the dataset multiple times for batch in train_dataloader: # get the inputs; input_ids = batch[0] attention_mask = batch[1] token_type_ids = batch[4] labels = batch[-1] numeric_values = batch[2] numeric_values_scale = batch[3] float_answer = batch[6] # forward + backward + optimize with tf.GradientTape() as tape: outputs = model( input_ids=input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, labels=labels, numeric_values=numeric_values, numeric_values_scale=numeric_values_scale, float_answer=float_answer, ) grads = tape.gradient(outputs.loss, model.trainable_weights) optimizer.apply_gradients(zip(grads, model.trainable_weights)) Usage: inference Here we explain how you can use [TapasForQuestionAnswering] or [TFTapasForQuestionAnswering] for inference (i.e. making predictions on new data). For inference, only input_ids, attention_mask and token_type_ids (which you can obtain using [TapasTokenizer]) have to be provided to the model to obtain the logits. Next, you can use the handy [~models.tapas.tokenization_tapas.convert_logits_to_predictions] method to convert these into predicted coordinates and optional aggregation indices. However, note that inference is different depending on whether or not the setup is conversational. 
In a non-conversational set-up, inference can be done in parallel on all table-question pairs of a batch. Here's an example of that: from transformers import TapasTokenizer, TapasForQuestionAnswering import pandas as pd model_name = "google/tapas-base-finetuned-wtq" model = TapasForQuestionAnswering.from_pretrained(model_name) tokenizer = TapasTokenizer.from_pretrained(model_name) data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Number of movies": ["87", "53", "69"]} queries = [ "What is the name of the first actor?", "How many movies has George Clooney played in?", "What is the total number of movies?", ] table = pd.DataFrame.from_dict(data) inputs = tokenizer(table=table, queries=queries, padding="max_length", return_tensors="pt") outputs = model(**inputs) predicted_answer_coordinates, predicted_aggregation_indices = tokenizer.convert_logits_to_predictions( inputs, outputs.logits.detach(), outputs.logits_aggregation.detach() ) let's print out the results: id2aggregation = {0: "NONE", 1: "SUM", 2: "AVERAGE", 3: "COUNT"} aggregation_predictions_string = [id2aggregation[x] for x in predicted_aggregation_indices] answers = [] for coordinates in predicted_answer_coordinates: if len(coordinates) == 1: # only a single cell: answers.append(table.iat[coordinates[0]]) else: # multiple cells cell_values = [] for coordinate in coordinates: cell_values.append(table.iat[coordinate]) answers.append(", ".join(cell_values)) display(table) print("") for query, answer, predicted_agg in zip(queries, answers, aggregation_predictions_string): print(query) if predicted_agg == "NONE": print("Predicted answer: " + answer) else: print("Predicted answer: " + predicted_agg + " > " + answer) What is the name of the first actor? Predicted answer: Brad Pitt How many movies has George Clooney played in? Predicted answer: COUNT > 69 What is the total number of movies? Predicted answer: SUM > 87, 53, 69 `` </pt> <tf> Here we explain how you can use [TFTapasForQuestionAnswering] for inference (i.e. making predictions on new data). For inference, onlyinput_ids,attention_maskandtoken_type_ids(which you can obtain using [TapasTokenizer]) have to be provided to the model to obtain the logits. Next, you can use the handy [~models.tapas.tokenization_tapas.convert_logits_to_predictions`] method to convert these into predicted coordinates and optional aggregation indices. However, note that inference is different depending on whether or not the setup is conversational. In a non-conversational set-up, inference can be done in parallel on all table-question pairs of a batch. 
Here's an example of that: from transformers import TapasTokenizer, TFTapasForQuestionAnswering import pandas as pd model_name = "google/tapas-base-finetuned-wtq" model = TFTapasForQuestionAnswering.from_pretrained(model_name) tokenizer = TapasTokenizer.from_pretrained(model_name) data = {"Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], "Number of movies": ["87", "53", "69"]} queries = [ "What is the name of the first actor?", "How many movies has George Clooney played in?", "What is the total number of movies?", ] table = pd.DataFrame.from_dict(data) inputs = tokenizer(table=table, queries=queries, padding="max_length", return_tensors="tf") outputs = model(**inputs) predicted_answer_coordinates, predicted_aggregation_indices = tokenizer.convert_logits_to_predictions( inputs, outputs.logits, outputs.logits_aggregation ) let's print out the results: id2aggregation = {0: "NONE", 1: "SUM", 2: "AVERAGE", 3: "COUNT"} aggregation_predictions_string = [id2aggregation[x] for x in predicted_aggregation_indices] answers = [] for coordinates in predicted_answer_coordinates: if len(coordinates) == 1: # only a single cell: answers.append(table.iat[coordinates[0]]) else: # multiple cells cell_values = [] for coordinate in coordinates: cell_values.append(table.iat[coordinate]) answers.append(", ".join(cell_values)) display(table) print("") for query, answer, predicted_agg in zip(queries, answers, aggregation_predictions_string): print(query) if predicted_agg == "NONE": print("Predicted answer: " + answer) else: print("Predicted answer: " + predicted_agg + " > " + answer) What is the name of the first actor? Predicted answer: Brad Pitt How many movies has George Clooney played in? Predicted answer: COUNT > 69 What is the total number of movies? Predicted answer: SUM > 87, 53, 69 In case of a conversational set-up, then each table-question pair must be provided sequentially to the model, such that the prev_labels token types can be overwritten by the predicted labels of the previous table-question pair. Again, more info can be found in this notebook (for PyTorch) and this notebook (for TensorFlow). Resources Text classification task guide Masked language modeling task guide TAPAS specific outputs [[autodoc]] models.tapas.modeling_tapas.TableQuestionAnsweringOutput TapasConfig [[autodoc]] TapasConfig TapasTokenizer [[autodoc]] TapasTokenizer - call - convert_logits_to_predictions - save_vocabulary TapasModel [[autodoc]] TapasModel - forward TapasForMaskedLM [[autodoc]] TapasForMaskedLM - forward TapasForSequenceClassification [[autodoc]] TapasForSequenceClassification - forward TapasForQuestionAnswering [[autodoc]] TapasForQuestionAnswering - forward TFTapasModel [[autodoc]] TFTapasModel - call TFTapasForMaskedLM [[autodoc]] TFTapasForMaskedLM - call TFTapasForSequenceClassification [[autodoc]] TFTapasForSequenceClassification - call TFTapasForQuestionAnswering [[autodoc]] TFTapasForQuestionAnswering - call
LXMERT Overview The LXMERT model was proposed in LXMERT: Learning Cross-Modality Encoder Representations from Transformers by Hao Tan & Mohit Bansal. It is a series of bidirectional transformer encoders (one for the vision modality, one for the language modality, and then one to fuse both modalities) pretrained using a combination of masked language modeling, visual-language text alignment, ROI-feature regression, masked visual-attribute modeling, masked visual-object modeling, and visual-question answering objectives. The pretraining consists of multiple multi-modal datasets: MSCOCO, Visual-Genome + Visual-Genome Question Answering, VQA 2.0, and GQA. The abstract from the paper is the following: Vision-and-language reasoning requires an understanding of visual concepts, language semantics, and, most importantly, the alignment and relationships between these two modalities. We thus propose the LXMERT (Learning Cross-Modality Encoder Representations from Transformers) framework to learn these vision-and-language connections. In LXMERT, we build a large-scale Transformer model that consists of three encoders: an object relationship encoder, a language encoder, and a cross-modality encoder. Next, to endow our model with the capability of connecting vision and language semantics, we pre-train the model with large amounts of image-and-sentence pairs, via five diverse representative pretraining tasks: masked language modeling, masked object prediction (feature regression and label classification), cross-modality matching, and image question answering. These tasks help in learning both intra-modality and cross-modality relationships. After fine-tuning from our pretrained parameters, our model achieves the state-of-the-art results on two visual question answering datasets (i.e., VQA and GQA). We also show the generalizability of our pretrained cross-modality model by adapting it to a challenging visual-reasoning task, NLVR, and improve the previous best result by 22% absolute (54% to 76%). Lastly, we demonstrate detailed ablation studies to prove that both our novel model components and pretraining strategies significantly contribute to our strong results; and also present several attention visualizations for the different encoders This model was contributed by eltoto1219. The original code can be found here. Usage tips Bounding boxes are not necessary to be used in the visual feature embeddings, any kind of visual-spacial features will work. Both the language hidden states and the visual hidden states that LXMERT outputs are passed through the cross-modality layer, so they contain information from both modalities. To access a modality that only attends to itself, select the vision/language hidden states from the first input in the tuple. The bidirectional cross-modality encoder attention only returns attention values when the language modality is used as the input and the vision modality is used as the context vector. Further, while the cross-modality encoder contains self-attention for each respective modality and cross-attention, only the cross attention is returned and both self attention outputs are disregarded. 
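As a rough sketch of the inputs discussed above, the snippet below feeds LxmertModel random visual features purely to illustrate the expected shapes under the default configuration (2048-dimensional ROI features and 4-dimensional normalized box coordinates); in practice these come from an object detector such as Faster R-CNN:

```python
import torch
from transformers import LxmertTokenizer, LxmertModel

tokenizer = LxmertTokenizer.from_pretrained("unc-nlp/lxmert-base-uncased")
model = LxmertModel.from_pretrained("unc-nlp/lxmert-base-uncased")

text_inputs = tokenizer("What color is the cat?", return_tensors="pt")

# random tensors stand in for detector outputs, only to show the expected shapes
num_boxes, feat_dim = 36, 2048
visual_feats = torch.randn(1, num_boxes, feat_dim)  # ROI features
visual_pos = torch.rand(1, num_boxes, 4)            # normalized bounding box coordinates

with torch.no_grad():
    outputs = model(**text_inputs, visual_feats=visual_feats, visual_pos=visual_pos)

language_output = outputs.language_output  # language stream after cross-modality fusion
vision_output = outputs.vision_output      # vision stream after cross-modality fusion
pooled_output = outputs.pooled_output
```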
Resources Question answering task guide LxmertConfig [[autodoc]] LxmertConfig LxmertTokenizer [[autodoc]] LxmertTokenizer LxmertTokenizerFast [[autodoc]] LxmertTokenizerFast Lxmert specific outputs [[autodoc]] models.lxmert.modeling_lxmert.LxmertModelOutput [[autodoc]] models.lxmert.modeling_lxmert.LxmertForPreTrainingOutput [[autodoc]] models.lxmert.modeling_lxmert.LxmertForQuestionAnsweringOutput [[autodoc]] models.lxmert.modeling_tf_lxmert.TFLxmertModelOutput [[autodoc]] models.lxmert.modeling_tf_lxmert.TFLxmertForPreTrainingOutput LxmertModel [[autodoc]] LxmertModel - forward LxmertForPreTraining [[autodoc]] LxmertForPreTraining - forward LxmertForQuestionAnswering [[autodoc]] LxmertForQuestionAnswering - forward TFLxmertModel [[autodoc]] TFLxmertModel - call TFLxmertForPreTraining [[autodoc]] TFLxmertForPreTraining - call
DPT Overview The DPT model was proposed in Vision Transformers for Dense Prediction by René Ranftl, Alexey Bochkovskiy, Vladlen Koltun. DPT is a model that leverages the Vision Transformer (ViT) as backbone for dense prediction tasks like semantic segmentation and depth estimation. The abstract from the paper is the following: We introduce dense vision transformers, an architecture that leverages vision transformers in place of convolutional networks as a backbone for dense prediction tasks. We assemble tokens from various stages of the vision transformer into image-like representations at various resolutions and progressively combine them into full-resolution predictions using a convolutional decoder. The transformer backbone processes representations at a constant and relatively high resolution and has a global receptive field at every stage. These properties allow the dense vision transformer to provide finer-grained and more globally coherent predictions when compared to fully-convolutional networks. Our experiments show that this architecture yields substantial improvements on dense prediction tasks, especially when a large amount of training data is available. For monocular depth estimation, we observe an improvement of up to 28% in relative performance when compared to a state-of-the-art fully-convolutional network. When applied to semantic segmentation, dense vision transformers set a new state of the art on ADE20K with 49.02% mIoU. We further show that the architecture can be fine-tuned on smaller datasets such as NYUv2, KITTI, and Pascal Context where it also sets the new state of the art. DPT architecture. Taken from the original paper. This model was contributed by nielsr. The original code can be found here. Usage tips DPT is compatible with the [AutoBackbone] class. This allows using the DPT framework with various computer vision backbones available in the library, such as [VitDetBackbone] or [Dinov2Backbone]. One can create it as follows: from transformers import Dinov2Config, DPTConfig, DPTForDepthEstimation initialize with a Transformer-based backbone such as DINOv2; in that case, we also specify reshape_hidden_states=False to get feature maps of shape (batch_size, num_channels, height, width) backbone_config = Dinov2Config.from_pretrained("facebook/dinov2-base", out_features=["stage1", "stage2", "stage3", "stage4"], reshape_hidden_states=False) config = DPTConfig(backbone_config=backbone_config) model = DPTForDepthEstimation(config=config) A depth estimation inference example with a pretrained checkpoint is shown at the end of this section. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DPT. Demo notebooks for [DPTForDepthEstimation] can be found here. Semantic segmentation task guide Monocular depth estimation task guide If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. DPTConfig [[autodoc]] DPTConfig DPTFeatureExtractor [[autodoc]] DPTFeatureExtractor - call - post_process_semantic_segmentation DPTImageProcessor [[autodoc]] DPTImageProcessor - preprocess - post_process_semantic_segmentation DPTModel [[autodoc]] DPTModel - forward DPTForDepthEstimation [[autodoc]] DPTForDepthEstimation - forward DPTForSemanticSegmentation [[autodoc]] DPTForSemanticSegmentation - forward
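The depth estimation example referenced above is sketched here, assuming the Intel/dpt-large checkpoint (any DPT depth-estimation checkpoint from the Hub can be substituted):

```python
import requests
import torch
from PIL import Image
from transformers import DPTImageProcessor, DPTForDepthEstimation

image_processor = DPTImageProcessor.from_pretrained("Intel/dpt-large")
model = DPTForDepthEstimation.from_pretrained("Intel/dpt-large")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# predicted_depth has shape (batch_size, height, width); resize it back to the original image size
prediction = torch.nn.functional.interpolate(
    outputs.predicted_depth.unsqueeze(1),
    size=image.size[::-1],
    mode="bicubic",
    align_corners=False,
).squeeze()
```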
CANINE Overview The CANINE model was proposed in CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting. It's among the first papers that trains a Transformer without using an explicit tokenization step (such as Byte Pair Encoding (BPE), WordPiece or SentencePiece). Instead, the model is trained directly at a Unicode character-level. Training at a character-level inevitably comes with a longer sequence length, which CANINE solves with an efficient downsampling strategy, before applying a deep Transformer encoder. The abstract from the paper is the following: Pipelined NLP systems have largely been superseded by end-to-end neural modeling, yet nearly all commonly-used models still require an explicit tokenization step. While recent tokenization approaches based on data-derived subword lexicons are less brittle than manually engineered tokenizers, these techniques are not equally suited to all languages, and the use of any fixed vocabulary may limit a model's ability to adapt. In this paper, we present CANINE, a neural encoder that operates directly on character sequences, without explicit tokenization or vocabulary, and a pre-training strategy that operates either directly on characters or optionally uses subwords as a soft inductive bias. To use its finer-grained input effectively and efficiently, CANINE combines downsampling, which reduces the input sequence length, with a deep transformer stack, which encodes context. CANINE outperforms a comparable mBERT model by 2.8 F1 on TyDi QA, a challenging multilingual benchmark, despite having 28% fewer model parameters. This model was contributed by nielsr. The original code can be found here. Usage tips CANINE uses no less than 3 Transformer encoders internally: 2 "shallow" encoders (which only consist of a single layer) and 1 "deep" encoder (which is a regular BERT encoder). First, a "shallow" encoder is used to contextualize the character embeddings, using local attention. Next, after downsampling, a "deep" encoder is applied. Finally, after upsampling, a "shallow" encoder is used to create the final character embeddings. Details regarding up- and downsampling can be found in the paper. CANINE uses a max sequence length of 2048 characters by default. One can use [CanineTokenizer] to prepare text for the model. Classification can be done by placing a linear layer on top of the final hidden state of the special [CLS] token (which has a predefined Unicode code point). For token classification tasks however, the downsampled sequence of tokens needs to be upsampled again to match the length of the original character sequence (which is 2048). The details for this can be found in the paper. Model checkpoints: google/canine-c: Pre-trained with autoregressive character loss, 12-layer, 768-hidden, 12-heads, 121M parameters (size ~500 MB). google/canine-s: Pre-trained with subword loss, 12-layer, 768-hidden, 12-heads, 121M parameters (size ~500 MB). 
Usage example CANINE works on raw characters, so it can be used without a tokenizer: from transformers import CanineModel import torch model = CanineModel.from_pretrained("google/canine-c") # model pre-trained with autoregressive character loss text = "hello world" # use Python's built-in ord() function to turn each character into its Unicode code point id input_ids = torch.tensor([[ord(char) for char in text]]) outputs = model(input_ids) # forward pass pooled_output = outputs.pooler_output sequence_output = outputs.last_hidden_state For batched inference and training, it is however recommended to make use of the tokenizer (to pad/truncate all sequences to the same length): from transformers import CanineTokenizer, CanineModel model = CanineModel.from_pretrained("google/canine-c") tokenizer = CanineTokenizer.from_pretrained("google/canine-c") inputs = ["Life is like a box of chocolates.", "You never know what you gonna get."] encoding = tokenizer(inputs, padding="longest", truncation=True, return_tensors="pt") outputs = model(**encoding) # forward pass pooled_output = outputs.pooler_output sequence_output = outputs.last_hidden_state Resources Text classification task guide Token classification task guide Question answering task guide Multiple choice task guide CanineConfig [[autodoc]] CanineConfig CanineTokenizer [[autodoc]] CanineTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences CANINE specific outputs [[autodoc]] models.canine.modeling_canine.CanineModelOutputWithPooling CanineModel [[autodoc]] CanineModel - forward CanineForSequenceClassification [[autodoc]] CanineForSequenceClassification - forward CanineForMultipleChoice [[autodoc]] CanineForMultipleChoice - forward CanineForTokenClassification [[autodoc]] CanineForTokenClassification - forward CanineForQuestionAnswering [[autodoc]] CanineForQuestionAnswering - forward
VipLlava Overview The VipLlava model was proposed in Making Large Multimodal Models Understand Arbitrary Visual Prompts by Mu Cai, Haotian Liu, Siva Karthik Mustikovela, Gregory P. Meyer, Yuning Chai, Dennis Park, Yong Jae Lee. VipLlava enhances the training protocol of Llava by marking images and interact with the model using natural cues like a "red bounding box" or "pointed arrow" during training. The abstract from the paper is the following: While existing large vision-language multimodal models focus on whole image understanding, there is a prominent gap in achieving region-specific comprehension. Current approaches that use textual coordinates or spatial encodings often fail to provide a user-friendly interface for visual prompting. To address this challenge, we introduce a novel multimodal model capable of decoding arbitrary visual prompts. This allows users to intuitively mark images and interact with the model using natural cues like a "red bounding box" or "pointed arrow". Our simple design directly overlays visual markers onto the RGB image, eliminating the need for complex region encodings, yet achieves state-of-the-art performance on region-understanding tasks like Visual7W, PointQA, and Visual Commonsense Reasoning benchmark. Furthermore, we present ViP-Bench, a comprehensive benchmark to assess the capability of models in understanding visual prompts across multiple dimensions, enabling future research in this domain. Code, data, and model are publicly available. Tips: The architecture is similar than llava architecture except that the multi-modal projector takes a set of concatenated vision hidden states and has an additional layernorm layer on that module. We advise users to use padding_side="left" when computing batched generation as it leads to more accurate results. Simply make sure to call processor.tokenizer.padding_side = "left" before generating. Note the model has not been explicitly trained to process multiple images in the same prompt, although this is technically possible, you may experience inaccurate results. For better results, we recommend users to prompt the model with the correct prompt format: A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.###Human: <image>\n<prompt>###Assistant: For multiple turns conversation: A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.###Human: <image>\n<prompt1>###Assistant: <answer1>###Human: <prompt2>###Assistant: The original code can be found here. This model was contributed by Younes Belkada VipLlavaConfig [[autodoc]] VipLlavaConfig VipLlavaForConditionalGeneration [[autodoc]] VipLlavaForConditionalGeneration - forward
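To illustrate the prompt format described above, here is a hedged generation sketch. The checkpoint identifier is an assumption (use whichever VipLlava checkpoint you have access to), and the prompt string follows the single-turn template quoted in the tips:

```python
import requests
from PIL import Image
from transformers import AutoProcessor, VipLlavaForConditionalGeneration

# assumed checkpoint identifier
checkpoint = "llava-hf/vip-llava-7b-hf"
processor = AutoProcessor.from_pretrained(checkpoint)
model = VipLlavaForConditionalGeneration.from_pretrained(checkpoint)

# batched generation works best with left padding, as noted above
processor.tokenizer.padding_side = "left"

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

question = "What is shown in this image?"
prompt = (
    "A chat between a curious human and an artificial intelligence assistant. The assistant gives "
    f"helpful, detailed, and polite answers to the human's questions.###Human: <image>\n{question}###Assistant:"
)

inputs = processor(text=prompt, images=image, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=50)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```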
NLLB Updated tokenizer behavior DISCLAIMER: The default behaviour for the tokenizer was fixed, and thus changed, in April 2023. The previous version added [self.eos_token_id, self.cur_lang_code] at the end of the token sequence for both target and source tokenization. This is wrong, as the NLLB paper mentions (page 48, 6.1.1, Model Architecture): Note that we prefix the source sequence with the source language, as opposed to the target language as previously done in several works (Arivazhagan et al., 2019; Johnson et al., 2017). This is primarily because we prioritize optimizing zero-shot performance of our model on any pair of 200 languages at a minor cost to supervised performance. Previous behaviour: from transformers import NllbTokenizer tokenizer = NllbTokenizer.from_pretrained("facebook/nllb-200-distilled-600M") tokenizer("How was your day?").input_ids [13374, 1398, 4260, 4039, 248130, 2, 256047] # 2: '</s>', 256047: 'eng_Latn' New behaviour: from transformers import NllbTokenizer tokenizer = NllbTokenizer.from_pretrained("facebook/nllb-200-distilled-600M") tokenizer("How was your day?").input_ids [256047, 13374, 1398, 4260, 4039, 248130, 2] Enabling the old behaviour can be done as follows: from transformers import NllbTokenizer tokenizer = NllbTokenizer.from_pretrained("facebook/nllb-200-distilled-600M", legacy_behaviour=True) For more details, feel free to check the linked PR and Issue. Overview The NLLB model was presented in No Language Left Behind: Scaling Human-Centered Machine Translation by Marta R. Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews, Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, and Jeff Wang. The abstract of the paper is the following: Driven by the goal of eradicating language barriers on a global scale, machine translation has solidified itself as a key focus of artificial intelligence research today. However, such efforts have coalesced around a small subset of languages, leaving behind the vast majority of mostly low-resource languages. What does it take to break the 200 language barrier while ensuring safe, high quality results, all while keeping ethical considerations in mind? In No Language Left Behind, we took on this challenge by first contextualizing the need for low-resource language translation support through exploratory interviews with native speakers. Then, we created datasets and models aimed at narrowing the performance gap between low and high-resource languages. More specifically, we developed a conditional compute model based on Sparsely Gated Mixture of Experts that is trained on data obtained with novel and effective data mining techniques tailored for low-resource languages. We propose multiple architectural and training improvements to counteract overfitting while training on thousands of tasks. Critically, we evaluated the performance of over 40,000 different translation directions using a human-translated benchmark, Flores-200, and combined human evaluation with a novel toxicity benchmark covering all languages in Flores-200 to assess translation safety.
Our model achieves an improvement of 44% BLEU relative to the previous state-of-the-art, laying important groundwork towards realizing a universal translation system. This implementation contains the dense models available on release. The sparse model NLLB-MoE (Mixture of Expert) is now available! More details here This model was contributed by Lysandre. The authors' code can be found here. Generating with NLLB While generating the target text set the forced_bos_token_id to the target language id. The following example shows how to translate English to French using the facebook/nllb-200-distilled-600M model. Note that we're using the BCP-47 code for French fra_Latn. See here for the list of all BCP-47 in the Flores 200 dataset. thon from transformers import AutoModelForSeq2SeqLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M") model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M") article = "UN Chief says there is no military solution in Syria" inputs = tokenizer(article, return_tensors="pt") translated_tokens = model.generate( **inputs, forced_bos_token_id=tokenizer.lang_code_to_id["fra_Latn"], max_length=30 ) tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0] Le chef de l'ONU dit qu'il n'y a pas de solution militaire en Syrie Generating from any other language than English English (eng_Latn) is set as the default language from which to translate. In order to specify that you'd like to translate from a different language, you should specify the BCP-47 code in the src_lang keyword argument of the tokenizer initialization. See example below for a translation from romanian to german: from transformers import AutoModelForSeq2SeqLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained( "facebook/nllb-200-distilled-600M", token=True, src_lang="ron_Latn" ) model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M", token=True) article = "Şeful ONU spune că nu există o soluţie militară în Siria" inputs = tokenizer(article, return_tensors="pt") translated_tokens = model.generate( **inputs, forced_bos_token_id=tokenizer.lang_code_to_id["deu_Latn"], max_length=30 ) tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0] UN-Chef sagt, es gibt keine militärische Lösung in Syrien Resources Translation task guide Summarization task guide NllbTokenizer [[autodoc]] NllbTokenizer - build_inputs_with_special_tokens NllbTokenizerFast [[autodoc]] NllbTokenizerFast
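If you prefer a higher-level interface, the translation pipeline also accepts src_lang and tgt_lang for NLLB checkpoints. A minimal sketch, reusing the language codes and example sentence from above:

```python
from transformers import pipeline

translator = pipeline(
    "translation",
    model="facebook/nllb-200-distilled-600M",
    src_lang="eng_Latn",
    tgt_lang="fra_Latn",
    max_length=30,
)
# Returns a list of dicts with a "translation_text" key.
print(translator("UN Chief says there is no military solution in Syria")[0]["translation_text"])
```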
DeiT Overview The DeiT model was proposed in Training data-efficient image transformers & distillation through attention by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou. The Vision Transformer (ViT) introduced in Dosovitskiy et al., 2020 has shown that one can match or even outperform existing convolutional neural networks using a Transformer encoder (BERT-like). However, the ViT models introduced in that paper required training on expensive infrastructure for multiple weeks, using external data. DeiT (data-efficient image transformers) are more efficiently trained transformers for image classification, requiring far less data and far less computing resources compared to the original ViT models. The abstract from the paper is the following: Recently, neural networks purely based on attention were shown to address image understanding tasks such as image classification. However, these visual transformers are pre-trained with hundreds of millions of images using an expensive infrastructure, thereby limiting their adoption. In this work, we produce a competitive convolution-free transformer by training on Imagenet only. We train them on a single computer in less than 3 days. Our reference vision transformer (86M parameters) achieves top-1 accuracy of 83.1% (single-crop evaluation) on ImageNet with no external data. More importantly, we introduce a teacher-student strategy specific to transformers. It relies on a distillation token ensuring that the student learns from the teacher through attention. We show the interest of this token-based distillation, especially when using a convnet as a teacher. This leads us to report results competitive with convnets for both Imagenet (where we obtain up to 85.2% accuracy) and when transferring to other tasks. We share our code and models. This model was contributed by nielsr. The TensorFlow version of this model was added by amyeroberts. Usage tips Compared to ViT, DeiT models use a so-called distillation token to effectively learn from a teacher (which, in the DeiT paper, is a ResNet like-model). The distillation token is learned through backpropagation, by interacting with the class ([CLS]) and patch tokens through the self-attention layers. There are 2 ways to fine-tune distilled models, either (1) in a classic way, by only placing a prediction head on top of the final hidden state of the class token and not using the distillation signal, or (2) by placing both a prediction head on top of the class token and on top of the distillation token. In that case, the [CLS] prediction head is trained using regular cross-entropy between the prediction of the head and the ground-truth label, while the distillation prediction head is trained using hard distillation (cross-entropy between the prediction of the distillation head and the label predicted by the teacher). At inference time, one takes the average prediction between both heads as final prediction. (2) is also called "fine-tuning with distillation", because one relies on a teacher that has already been fine-tuned on the downstream dataset. In terms of models, (1) corresponds to [DeiTForImageClassification] and (2) corresponds to [DeiTForImageClassificationWithTeacher]. Note that the authors also did try soft distillation for (2) (in which case the distillation prediction head is trained using KL divergence to match the softmax output of the teacher), but hard distillation gave the best results. 
All released checkpoints were pre-trained and fine-tuned on ImageNet-1k only. No external data was used. This is in contrast with the original ViT model, which used external data like the JFT-300M dataset/Imagenet-21k for pre-training. The authors of DeiT also released more efficiently trained ViT models, which you can directly plug into [ViTModel] or [ViTForImageClassification]. Techniques like data augmentation, optimization, and regularization were used in order to simulate training on a much larger dataset (while only using ImageNet-1k for pre-training). There are 4 variants available (in 3 different sizes): facebook/deit-tiny-patch16-224, facebook/deit-small-patch16-224, facebook/deit-base-patch16-224 and facebook/deit-base-patch16-384. Note that one should use [DeiTImageProcessor] in order to prepare images for the model. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DeiT. [DeiTForImageClassification] is supported by this example script and notebook. See also: Image classification task guide Besides that: [DeiTForMaskedImageModeling] is supported by this example script. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. DeiTConfig [[autodoc]] DeiTConfig DeiTFeatureExtractor [[autodoc]] DeiTFeatureExtractor - call DeiTImageProcessor [[autodoc]] DeiTImageProcessor - preprocess DeiTModel [[autodoc]] DeiTModel - forward DeiTForMaskedImageModeling [[autodoc]] DeiTForMaskedImageModeling - forward DeiTForImageClassification [[autodoc]] DeiTForImageClassification - forward DeiTForImageClassificationWithTeacher [[autodoc]] DeiTForImageClassificationWithTeacher - forward TFDeiTModel [[autodoc]] TFDeiTModel - call TFDeiTForMaskedImageModeling [[autodoc]] TFDeiTForMaskedImageModeling - call TFDeiTForImageClassification [[autodoc]] TFDeiTForImageClassification - call TFDeiTForImageClassificationWithTeacher [[autodoc]] TFDeiTForImageClassificationWithTeacher - call
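To make the image-preparation step concrete, here is a minimal inference sketch with [DeiTImageProcessor] and [DeiTForImageClassificationWithTeacher]. The distilled checkpoint facebook/deit-base-distilled-patch16-224 and the example image URL are not listed above and are used here only for illustration.

```python
import requests
import torch
from PIL import Image
from transformers import DeiTImageProcessor, DeiTForImageClassificationWithTeacher

image_processor = DeiTImageProcessor.from_pretrained("facebook/deit-base-distilled-patch16-224")
model = DeiTForImageClassificationWithTeacher.from_pretrained("facebook/deit-base-distilled-patch16-224")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # placeholder example image
image = Image.open(requests.get(url, stream=True).raw)

# DeiTImageProcessor resizes and normalizes the image into pixel_values.
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # averaged class + distillation head logits

predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```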
PEGASUS-X Overview The PEGASUS-X model was proposed in Investigating Efficiently Extending Transformers for Long Input Summarization by Jason Phang, Yao Zhao and Peter J. Liu. PEGASUS-X (PEGASUS eXtended) extends the PEGASUS models for long input summarization through additional long input pretraining and using staggered block-local attention with global tokens in the encoder. The abstract from the paper is the following: While large pretrained Transformer models have proven highly capable at tackling natural language tasks, handling long sequence inputs continues to be a significant challenge. One such task is long input summarization, where inputs are longer than the maximum input context of most pretrained models. Through an extensive set of experiments, we investigate what model architectural changes and pretraining paradigms can most efficiently adapt a pretrained Transformer for long input summarization. We find that a staggered, block-local Transformer with global encoder tokens strikes a good balance of performance and efficiency, and that an additional pretraining phase on long sequences meaningfully improves downstream summarization performance. Based on our findings, we introduce PEGASUS-X, an extension of the PEGASUS model with additional long input pretraining to handle inputs of up to 16K tokens. PEGASUS-X achieves strong performance on long input summarization tasks comparable with much larger models while adding few additional parameters and not requiring model parallelism to train. This model was contributed by zphang. The original code can be found here. Documentation resources Translation task guide Summarization task guide PEGASUS-X uses the same tokenizer as PEGASUS. PegasusXConfig [[autodoc]] PegasusXConfig PegasusXModel [[autodoc]] PegasusXModel - forward PegasusXForConditionalGeneration [[autodoc]] PegasusXForConditionalGeneration - forward
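A minimal long-input summarization sketch is given below. The checkpoint name google/pegasus-x-base is an assumption (it is a pretrained, not summarization-fine-tuned, checkpoint, so output quality is illustrative only); if the checkpoint does not ship tokenizer files, load a standard PEGASUS tokenizer instead, since PEGASUS-X uses the same tokenizer as PEGASUS.

```python
from transformers import AutoTokenizer, PegasusXForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/pegasus-x-base")
model = PegasusXForConditionalGeneration.from_pretrained("google/pegasus-x-base")

# PEGASUS-X is designed for inputs far longer than standard PEGASUS (up to 16K tokens).
long_document = "PEGASUS-X extends PEGASUS with staggered block-local attention and global tokens. " * 200
inputs = tokenizer(long_document, return_tensors="pt", truncation=True, max_length=4096)

summary_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```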
Wav2Vec2-BERT Overview The Wav2Vec2-BERT model was proposed in Seamless: Multilingual Expressive and Streaming Speech Translation by the Seamless Communication team from Meta AI. This model was pre-trained on 4.5M hours of unlabeled audio data covering more than 143 languages. It requires finetuning to be used for downstream tasks such as Automatic Speech Recognition (ASR), or Audio Classification. The official results of the model can be found in Section 3.2.1 of the paper. The abstract from the paper is the following: Recent advancements in automatic speech translation have dramatically expanded language coverage, improved multimodal capabilities, and enabled a wide range of tasks and functionalities. That said, large-scale automatic speech translation systems today lack key features that help machine-mediated communication feel seamless when compared to human-to-human dialogue. In this work, we introduce a family of models that enable end-to-end expressive and multilingual translations in a streaming fashion. First, we contribute an improved version of the massively multilingual and multimodal SeamlessM4T model—SeamlessM4T v2. This newer model, incorporating an updated UnitY2 framework, was trained on more low-resource language data. The expanded version of SeamlessAlign adds 114,800 hours of automatically aligned data for a total of 76 languages. SeamlessM4T v2 provides the foundation on which our two newest models, SeamlessExpressive and SeamlessStreaming, are initiated. SeamlessExpressive enables translation that preserves vocal styles and prosody. Compared to previous efforts in expressive speech research, our work addresses certain underexplored aspects of prosody, such as speech rate and pauses, while also preserving the style of one’s voice. As for SeamlessStreaming, our model leverages the Efficient Monotonic Multihead Attention (EMMA) mechanism to generate low-latency target translations without waiting for complete source utterances. As the first of its kind, SeamlessStreaming enables simultaneous speech-to-speech/text translation for multiple source and target languages. To understand the performance of these models, we combined novel and modified versions of existing automatic metrics to evaluate prosody, latency, and robustness. For human evaluations, we adapted existing protocols tailored for measuring the most relevant attributes in the preservation of meaning, naturalness, and expressivity. To ensure that our models can be used safely and responsibly, we implemented the first known red-teaming effort for multimodal machine translation, a system for the detection and mitigation of added toxicity, a systematic evaluation of gender bias, and an inaudible localized watermarking mechanism designed to dampen the impact of deepfakes. Consequently, we bring major components from SeamlessExpressive and SeamlessStreaming together to form Seamless, the first publicly available system that unlocks expressive cross-lingual communication in real-time. In sum, Seamless gives us a pivotal look at the technical foundation needed to turn the Universal Speech Translator from a science fiction concept into a real-world technology. Finally, contributions in this work—including models, code, and a watermark detector—are publicly released and accessible at the link below. This model was contributed by ylacombe. The original code can be found here. 
Usage tips Wav2Vec2-BERT follows the same architecture as Wav2Vec2-Conformer, but employs a causal depthwise convolutional layer and uses as input a mel-spectrogram representation of the audio instead of the raw waveform. Wav2Vec2-BERT can use either no relative position embeddings, Shaw-like position embeddings, Transformer-XL-like position embeddings, or rotary position embeddings by setting the correct config.position_embeddings_type. Wav2Vec2-BERT also introduces a Conformer-based adapter network instead of a simple convolutional network. Resources [Wav2Vec2BertForCTC] is supported by this example script. You can also adapt these notebooks on how to finetune a speech recognition model in English, and how to finetune a speech recognition model in any language. [Wav2Vec2BertForSequenceClassification] can be used by adapting this example script. See also: Audio classification task guide Wav2Vec2BertConfig [[autodoc]] Wav2Vec2BertConfig Wav2Vec2BertProcessor [[autodoc]] Wav2Vec2BertProcessor - call - pad - from_pretrained - save_pretrained - batch_decode - decode Wav2Vec2BertModel [[autodoc]] Wav2Vec2BertModel - forward Wav2Vec2BertForCTC [[autodoc]] Wav2Vec2BertForCTC - forward Wav2Vec2BertForSequenceClassification [[autodoc]] Wav2Vec2BertForSequenceClassification - forward Wav2Vec2BertForAudioFrameClassification [[autodoc]] Wav2Vec2BertForAudioFrameClassification - forward Wav2Vec2BertForXVector [[autodoc]] Wav2Vec2BertForXVector - forward
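Since the model consumes log-mel spectrogram features rather than raw waveforms, here is a minimal sketch of extracting hidden states. The checkpoint name facebook/w2v-bert-2.0 comes from the Seamless release and is an assumption here, and the random waveform merely stands in for real 16 kHz audio.

```python
import torch
from transformers import AutoFeatureExtractor, Wav2Vec2BertModel

feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/w2v-bert-2.0")
model = Wav2Vec2BertModel.from_pretrained("facebook/w2v-bert-2.0")

# One second of (random) 16 kHz mono audio standing in for a real recording.
waveform = torch.randn(16_000).numpy()

# The feature extractor turns the waveform into mel-spectrogram input_features.
inputs = feature_extractor(waveform, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state
print(hidden_states.shape)  # (batch_size, num_frames, hidden_size)
```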
RoFormer Overview The RoFormer model was proposed in RoFormer: Enhanced Transformer with Rotary Position Embedding by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu. The abstract from the paper is the following: Position encoding in transformer architecture provides supervision for dependency modeling between elements at different positions in the sequence. We investigate various methods to encode positional information in transformer-based language models and propose a novel implementation named Rotary Position Embedding(RoPE). The proposed RoPE encodes absolute positional information with rotation matrix and naturally incorporates explicit relative position dependency in self-attention formulation. Notably, RoPE comes with valuable properties such as flexibility of being expand to any sequence lengths, decaying inter-token dependency with increasing relative distances, and capability of equipping the linear self-attention with relative position encoding. As a result, the enhanced transformer with rotary position embedding, or RoFormer, achieves superior performance in tasks with long texts. We release the theoretical analysis along with some preliminary experiment results on Chinese data. The undergoing experiment for English benchmark will soon be updated. This model was contributed by junnyu. The original code can be found here. Usage tips RoFormer is a BERT-like autoencoding model with rotary position embeddings. Rotary position embeddings have shown improved performance on classification tasks with long texts. Resources Text classification task guide Token classification task guide Question answering task guide Causal language modeling task guide Masked language modeling task guide Multiple choice task guide RoFormerConfig [[autodoc]] RoFormerConfig RoFormerTokenizer [[autodoc]] RoFormerTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary RoFormerTokenizerFast [[autodoc]] RoFormerTokenizerFast - build_inputs_with_special_tokens RoFormerModel [[autodoc]] RoFormerModel - forward RoFormerForCausalLM [[autodoc]] RoFormerForCausalLM - forward RoFormerForMaskedLM [[autodoc]] RoFormerForMaskedLM - forward RoFormerForSequenceClassification [[autodoc]] RoFormerForSequenceClassification - forward RoFormerForMultipleChoice [[autodoc]] RoFormerForMultipleChoice - forward RoFormerForTokenClassification [[autodoc]] RoFormerForTokenClassification - forward RoFormerForQuestionAnswering [[autodoc]] RoFormerForQuestionAnswering - forward TFRoFormerModel [[autodoc]] TFRoFormerModel - call TFRoFormerForMaskedLM [[autodoc]] TFRoFormerForMaskedLM - call TFRoFormerForCausalLM [[autodoc]] TFRoFormerForCausalLM - call TFRoFormerForSequenceClassification [[autodoc]] TFRoFormerForSequenceClassification - call TFRoFormerForMultipleChoice [[autodoc]] TFRoFormerForMultipleChoice - call TFRoFormerForTokenClassification [[autodoc]] TFRoFormerForTokenClassification - call TFRoFormerForQuestionAnswering [[autodoc]] TFRoFormerForQuestionAnswering - call FlaxRoFormerModel [[autodoc]] FlaxRoFormerModel - call FlaxRoFormerForMaskedLM [[autodoc]] FlaxRoFormerForMaskedLM - call FlaxRoFormerForSequenceClassification [[autodoc]] FlaxRoFormerForSequenceClassification - call FlaxRoFormerForMultipleChoice [[autodoc]] FlaxRoFormerForMultipleChoice - call FlaxRoFormerForTokenClassification [[autodoc]] FlaxRoFormerForTokenClassification - call FlaxRoFormerForQuestionAnswering [[autodoc]] FlaxRoFormerForQuestionAnswering - call
OWL-ViT Overview The OWL-ViT (short for Vision Transformer for Open-World Localization) was proposed in Simple Open-Vocabulary Object Detection with Vision Transformers by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby. OWL-ViT is an open-vocabulary object detection network trained on a variety of (image, text) pairs. It can be used to query an image with one or multiple text queries to search for and detect target objects described in text. The abstract from the paper is the following: Combining simple architectures with large-scale pre-training has led to massive improvements in image classification. For object detection, pre-training and scaling approaches are less well established, especially in the long-tailed and open-vocabulary setting, where training data is relatively scarce. In this paper, we propose a strong recipe for transferring image-text models to open-vocabulary object detection. We use a standard Vision Transformer architecture with minimal modifications, contrastive image-text pre-training, and end-to-end detection fine-tuning. Our analysis of the scaling properties of this setup shows that increasing image-level pre-training and model size yield consistent improvements on the downstream detection task. We provide the adaptation strategies and regularizations needed to attain very strong performance on zero-shot text-conditioned and one-shot image-conditioned object detection. Code and models are available on GitHub. OWL-ViT architecture. Taken from the original paper. This model was contributed by adirik. The original code can be found here. Usage tips OWL-ViT is a zero-shot text-conditioned object detection model. OWL-ViT uses CLIP as its multi-modal backbone, with a ViT-like Transformer to get visual features and a causal language model to get the text features. To use CLIP for detection, OWL-ViT removes the final token pooling layer of the vision model and attaches a lightweight classification and box head to each transformer output token. Open-vocabulary classification is enabled by replacing the fixed classification layer weights with the class-name embeddings obtained from the text model. The authors first train CLIP from scratch and fine-tune it end-to-end with the classification and box heads on standard detection datasets using a bipartite matching loss. One or multiple text queries per image can be used to perform zero-shot text-conditioned object detection. [OwlViTImageProcessor] can be used to resize (or rescale) and normalize images for the model and [CLIPTokenizer] is used to encode the text. [OwlViTProcessor] wraps [OwlViTImageProcessor] and [CLIPTokenizer] into a single instance to both encode the text and prepare the images. The following example shows how to perform object detection using [OwlViTProcessor] and [OwlViTForObjectDetection]. 
thon import requests from PIL import Image import torch from transformers import OwlViTProcessor, OwlViTForObjectDetection processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32") model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) texts = [["a photo of a cat", "a photo of a dog"]] inputs = processor(text=texts, images=image, return_tensors="pt") outputs = model(**inputs) Target image sizes (height, width) to rescale box predictions [batch_size, 2] target_sizes = torch.Tensor([image.size[::-1]]) Convert outputs (bounding boxes and class logits) to Pascal VOC format (xmin, ymin, xmax, ymax) results = processor.post_process_object_detection(outputs=outputs, target_sizes=target_sizes, threshold=0.1) i = 0 # Retrieve predictions for the first image for the corresponding text queries text = texts[i] boxes, scores, labels = results[i]["boxes"], results[i]["scores"], results[i]["labels"] for box, score, label in zip(boxes, scores, labels): box = [round(i, 2) for i in box.tolist()] print(f"Detected {text[label]} with confidence {round(score.item(), 3)} at location {box}") Detected a photo of a cat with confidence 0.707 at location [324.97, 20.44, 640.58, 373.29] Detected a photo of a cat with confidence 0.717 at location [1.46, 55.26, 315.55, 472.17] Resources A demo notebook on using OWL-ViT for zero- and one-shot (image-guided) object detection can be found here. OwlViTConfig [[autodoc]] OwlViTConfig - from_text_vision_configs OwlViTTextConfig [[autodoc]] OwlViTTextConfig OwlViTVisionConfig [[autodoc]] OwlViTVisionConfig OwlViTImageProcessor [[autodoc]] OwlViTImageProcessor - preprocess - post_process_object_detection - post_process_image_guided_detection OwlViTFeatureExtractor [[autodoc]] OwlViTFeatureExtractor - call - post_process - post_process_image_guided_detection OwlViTProcessor [[autodoc]] OwlViTProcessor OwlViTModel [[autodoc]] OwlViTModel - forward - get_text_features - get_image_features OwlViTTextModel [[autodoc]] OwlViTTextModel - forward OwlViTVisionModel [[autodoc]] OwlViTVisionModel - forward OwlViTForObjectDetection [[autodoc]] OwlViTForObjectDetection - forward - image_guided_detection
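For one-shot, image-guided detection (querying with an example image instead of text), [OwlViTForObjectDetection] exposes image_guided_detection together with [OwlViTProcessor.post_process_image_guided_detection]. A minimal sketch follows; the image URLs are placeholders, and google/owlvit-base-patch16 is one of the released checkpoints.

```python
import requests
import torch
from PIL import Image
from transformers import OwlViTProcessor, OwlViTForObjectDetection

processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch16")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch16")

# Target image to search in, and a query image containing the object of interest (placeholder URLs).
image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
query_image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000524280.jpg", stream=True).raw)

inputs = processor(images=image, query_images=query_image, return_tensors="pt")
with torch.no_grad():
    outputs = model.image_guided_detection(**inputs)

target_sizes = torch.Tensor([image.size[::-1]])
results = processor.post_process_image_guided_detection(outputs=outputs, target_sizes=target_sizes)
for box, score in zip(results[0]["boxes"], results[0]["scores"]):
    box = [round(v, 2) for v in box.tolist()]
    print(f"Detected a match with confidence {round(score.item(), 3)} at location {box}")
```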
RWKV Overview The RWKV model was proposed in this repo. It suggests a tweak to the traditional Transformer attention to make it linear. This way, the model can be used as a recurrent network: passing inputs for timestamp 0 and timestamp 1 together is the same as passing inputs at timestamp 0, then inputs at timestamp 1 along with the state of timestamp 0 (see example below). This can be more efficient than a regular Transformer and can deal with sentences of any length (even if the model uses a fixed context length for training). This model was contributed by sgugger. The original code can be found here. Usage example import torch from transformers import AutoTokenizer, RwkvConfig, RwkvModel model = RwkvModel.from_pretrained("sgugger/rwkv-430M-pile") tokenizer = AutoTokenizer.from_pretrained("sgugger/rwkv-430M-pile") inputs = tokenizer("This is an example.", return_tensors="pt") Feed everything to the model outputs = model(inputs["input_ids"]) output_whole = outputs.last_hidden_state outputs = model(inputs["input_ids"][:, :2]) output_one = outputs.last_hidden_state Using the state computed on the first inputs, we will get the same output outputs = model(inputs["input_ids"][:, 2:], state=outputs.state) output_two = outputs.last_hidden_state torch.allclose(torch.cat([output_one, output_two], dim=1), output_whole, atol=1e-5) If you want to make sure the model stops generating when '\n\n' is detected, we recommend using the following stopping criteria: thon from transformers import StoppingCriteria class RwkvStoppingCriteria(StoppingCriteria): def __init__(self, eos_sequence=[187, 187], eos_token_id=537): self.eos_sequence = eos_sequence self.eos_token_id = eos_token_id def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool: last_2_ids = input_ids[:, -2:].tolist() return self.eos_sequence in last_2_ids output = model.generate(inputs["input_ids"], max_new_tokens=64, stopping_criteria=[RwkvStoppingCriteria()]) RwkvConfig [[autodoc]] RwkvConfig RwkvModel [[autodoc]] RwkvModel - forward RwkvLMHeadModel [[autodoc]] RwkvForCausalLM - forward Rwkv attention and the recurrent formulas In a traditional auto-regressive Transformer, attention is written as $$O = \hbox{softmax}(QK^{T} / \sqrt{d}) V$$ where \(Q\), \(K\) and \(V\) are matrices of shape seq_len x hidden_size named query, key and value (they are actually bigger matrices with a batch dimension and an attention head dimension, but we're only interested in the last two, which is where the matrix product is taken, so for the sake of simplicity we only consider those two). The product \(QK^{T}\) then has shape seq_len x seq_len and we can take the matrix product with \(V\) to get the output \(O\) of the same shape as the others. Replacing the softmax by its value gives: $$O_{i} = \frac{\sum_{j=1}^{i} e^{Q_{i} K_{j}^{T} / \sqrt{d}} V_{j}}{\sum_{j=1}^{i} e^{Q_{i} K_{j}^{T} / \sqrt{d}}}$$ Note that the entries in \(QK^{T}\) corresponding to \(j > i\) are masked (the sum stops at \(j = i\)) because the attention is not allowed to look at future tokens (only past ones). In comparison, the RWKV attention is given by $$O_{i} = \sigma(R_{i}) \frac{\sum_{j=1}^{i} e^{W_{i-j} + K_{j}} V_{j}}{\sum_{j=1}^{i} e^{W_{i-j} + K_{j}}}$$ where \(R\) is a new matrix called receptance by the author, \(K\) and \(V\) are still the key and value (\(\sigma\) here is the sigmoid function).
\(W\) is a new vector that represents the position of the token and is given by $$W_{0} = u \hbox{ and } W_{k} = (k-1)w \hbox{ for } k \geq 1$$ with \(u\) and \(w\) learnable parameters called in the code time_first and time_decay respectively. The numerator and denominator can both be expressed recursively. Naming them \(N_{i}\) and \(D_{i}\) we have: $$N_{i} = e^{u + K_{i}} V_{i} + \hat{N}_{i} \hbox{ where } \hat{N}_{i} = e^{K_{i-1}} V_{i-1} + e^{w + K_{i-2}} V_{i-2} + \cdots + e^{(i-2)w + K_{1}} V_{1}$$ so \(\hat{N}_{i}\) (called numerator_state in the code) satisfies $$\hat{N}_{0} = 0 \hbox{ and } \hat{N}_{j+1} = e^{K_{j}} V_{j} + e^{w} \hat{N}_{j}$$ and $$D_{i} = e^{u + K_{i}} + \hat{D}_{i} \hbox{ where } \hat{D}_{i} = e^{K_{i-1}} + e^{w + K_{i-2}} + \cdots + e^{(i-2)w + K_{1}}$$ so \(\hat{D}_{i}\) (called denominator_state in the code) satisfies $$\hat{D}_{0} = 0 \hbox{ and } \hat{D}_{j+1} = e^{K_{j}} + e^{w} \hat{D}_{j}$$ The actual recurrent formulas used are a tiny bit more complex, as for numerical stability we don't want to compute exponentials of big numbers. Usually the softmax is not computed as is, but the exponential of the maximum term is divided out of the numerator and denominator: $$\frac{e^{x_{i}}}{\sum_{j=1}^{n} e^{x_{j}}} = \frac{e^{x_{i} - M}}{\sum_{j=1}^{n} e^{x_{j} - M}}$$ with \(M\) the maximum of all \(x_{j}\). So here, on top of saving the numerator state (\(\hat{N}\)) and the denominator state (\(\hat{D}\)), we also keep track of the maximum of all terms encountered in the exponentials. We therefore use $$\tilde{N}_{i} = e^{-M_{i}} \hat{N}_{i} \hbox{ and } \tilde{D}_{i} = e^{-M_{i}} \hat{D}_{i}$$ defined by the following recurrent formulas: $$\tilde{N}_{0} = 0 \hbox{ and } \tilde{N}_{j+1} = e^{K_{j} - q} V_{j} + e^{w + M_{j} - q} \tilde{N}_{j} \hbox{ where } q = \max(K_{j}, w + M_{j})$$ and $$\tilde{D}_{0} = 0 \hbox{ and } \tilde{D}_{j+1} = e^{K_{j} - q} + e^{w + M_{j} - q} \tilde{D}_{j} \hbox{ where } q = \max(K_{j}, w + M_{j})$$ and \(M_{j+1} = q\). With those, we can then compute (up to a common rescaling of numerator and denominator, which leaves \(O_{i}\) unchanged) $$N_{i} = e^{u + K_{i} - q} V_{i} + e^{M_{i} - q} \tilde{N}_{i} \hbox{ where } q = \max(u + K_{i}, M_{i})$$ and $$D_{i} = e^{u + K_{i} - q} + e^{M_{i} - q} \tilde{D}_{i} \hbox{ where } q = \max(u + K_{i}, M_{i})$$ which finally gives us $$O_{i} = \sigma(R_{i}) \frac{N_{i}}{D_{i}}$$
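To make the recurrence above concrete, here is a minimal, non-vectorized PyTorch sketch of the numerically stable formulas. It is an illustration only, not the kernel used by the model: the function name rwkv_linear_attention and the plain w/u parametrization are assumptions (in the actual implementation the stored time_decay parameter is transformed before being used as the decay).

```python
import torch

def rwkv_linear_attention(r, k, v, w, u):
    """Step-by-step evaluation of the stable RWKV recurrence.

    r, k, v: tensors of shape (seq_len, hidden_size)
    w (time_decay) and u (time_first): tensors of shape (hidden_size,)
    Returns a tensor of shape (seq_len, hidden_size).
    """
    seq_len, hidden_size = k.shape
    output = torch.zeros(seq_len, hidden_size)
    # Recurrent state: rescaled numerator/denominator and the running max of the exponents.
    num_state = torch.zeros(hidden_size)
    den_state = torch.zeros(hidden_size)
    max_state = torch.full((hidden_size,), -1e38)
    for i in range(seq_len):
        # Output at step i uses u (time_first) for the current token:
        # N_i = e^{u+K_i-q} V_i + e^{M_i-q} N~_i, same for D_i, with q = max(u+K_i, M_i).
        q = torch.maximum(u + k[i], max_state)
        e1 = torch.exp(max_state - q)
        e2 = torch.exp(u + k[i] - q)
        output[i] = torch.sigmoid(r[i]) * (e1 * num_state + e2 * v[i]) / (e1 * den_state + e2)
        # State update for the next step uses w (time_decay):
        # N~_{j+1} = e^{K_j-q} V_j + e^{w+M_j-q} N~_j, with q = max(K_j, w+M_j) and M_{j+1} = q.
        q = torch.maximum(w + max_state, k[i])
        e1 = torch.exp(w + max_state - q)
        e2 = torch.exp(k[i] - q)
        num_state = e1 * num_state + e2 * v[i]
        den_state = e1 * den_state + e2
        max_state = q
    return output

# Tiny usage example with random projections; the decay is typically negative.
r, k, v = (torch.randn(5, 8) for _ in range(3))
w, u = -torch.rand(8), torch.randn(8)
out = rwkv_linear_attention(r, k, v, w, u)
print(out.shape)  # (5, 8)
```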
ByT5 Overview The ByT5 model was presented in ByT5: Towards a token-free future with pre-trained byte-to-byte models by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel. The abstract from the paper is the following: Most widely-used pre-trained language models operate on sequences of tokens corresponding to word or subword units. Encoding text as a sequence of tokens requires a tokenizer, which is typically created as an independent artifact from the model. Token-free models that instead operate directly on raw text (bytes or characters) have many benefits: they can process text in any language out of the box, they are more robust to noise, and they minimize technical debt by removing complex and error-prone text preprocessing pipelines. Since byte or character sequences are longer than token sequences, past work on token-free models has often introduced new model architectures designed to amortize the cost of operating directly on raw text. In this paper, we show that a standard Transformer architecture can be used with minimal modifications to process byte sequences. We carefully characterize the trade-offs in terms of parameter count, training FLOPs, and inference speed, and show that byte-level models are competitive with their token-level counterparts. We also demonstrate that byte-level models are significantly more robust to noise and perform better on tasks that are sensitive to spelling and pronunciation. As part of our contribution, we release a new set of pre-trained byte-level Transformer models based on the T5 architecture, as well as all code and data used in our experiments. This model was contributed by patrickvonplaten. The original code can be found here. ByT5's architecture is based on the T5v1.1 model, refer to T5v1.1's documentation page for the API reference. They only differ in how inputs should be prepared for the model, see the code examples below. Since ByT5 was pre-trained unsupervisedly, there's no real advantage to using a task prefix during single-task fine-tuning. If you are doing multi-task fine-tuning, you should use a prefix. Usage example ByT5 works on raw UTF-8 bytes, so it can be used without a tokenizer: thon from transformers import T5ForConditionalGeneration import torch model = T5ForConditionalGeneration.from_pretrained("google/byt5-small") num_special_tokens = 3 Model has 3 special tokens which take up the input ids 0,1,2 of ByT5. => Need to shift utf-8 character encodings by 3 before passing ids to model. 
input_ids = torch.tensor([list("Life is like a box of chocolates.".encode("utf-8"))]) + num_special_tokens labels = torch.tensor([list("La vie est comme une boîte de chocolat.".encode("utf-8"))]) + num_special_tokens loss = model(input_ids, labels=labels).loss loss.item() 2.66 For batched inference and training it is however recommended to make use of the tokenizer: thon from transformers import T5ForConditionalGeneration, AutoTokenizer model = T5ForConditionalGeneration.from_pretrained("google/byt5-small") tokenizer = AutoTokenizer.from_pretrained("google/byt5-small") model_inputs = tokenizer( ["Life is like a box of chocolates.", "Today is Monday."], padding="longest", return_tensors="pt" ) labels_dict = tokenizer( ["La vie est comme une boîte de chocolat.", "Aujourd'hui c'est lundi."], padding="longest", return_tensors="pt" ) labels = labels_dict.input_ids loss = model(**model_inputs, labels=labels).loss loss.item() 17.9 Similar to T5, ByT5 was trained on the span-mask denoising task. However, since the model works directly on characters, the pretraining task is a bit different. Let's corrupt some characters of the input sentence "The dog chases a ball in the park." and ask ByT5 to predict them for us. thon from transformers import AutoTokenizer, AutoModelForSeq2SeqLM import torch tokenizer = AutoTokenizer.from_pretrained("google/byt5-base") model = AutoModelForSeq2SeqLM.from_pretrained("google/byt5-base") input_ids_prompt = "The dog chases a ball in the park." input_ids = tokenizer(input_ids_prompt).input_ids Note that we cannot add "{extra_id_}" to the string directly as the Byte tokenizer would incorrectly merge the tokens For ByT5, we need to work directly on the character level Contrary to T5, ByT5 does not use sentinel tokens for masking, but instead uses final utf character ids. UTF-8 is represented by 8 bits and ByT5 has 3 special tokens. => There are 2**8+2 = 259 input ids and mask tokens count down from index 258. => mask to "The dog [258]a ball [257]park." input_ids = torch.tensor([input_ids[:8] + [258] + input_ids[14:21] + [257] + input_ids[28:]]) input_ids tensor([[ 87, 107, 104, 35, 103, 114, 106, 35, 258, 35, 100, 35, 101, 100, 111, 111, 257, 35, 115, 100, 117, 110, 49, 1]]) ByT5 produces only one char at a time so we need to produce many more output characters here -> set max_length=100. output_ids = model.generate(input_ids, max_length=100)[0].tolist() output_ids [0, 258, 108, 118, 35, 119, 107, 104, 35, 114, 113, 104, 35, 122, 107, 114, 35, 103, 114, 104, 118, 257, 35, 108, 113, 35, 119, 107, 104, 35, 103, 108, 118, 102, 114, 256, 108, 113, 35, 119, 107, 104, 35, 115, 100, 117, 110, 49, 35, 87, 107, 104, 35, 103, 114, 106, 35, 108, 118, 35, 119, 107, 104, 35, 114, 113, 104, 35, 122, 107, 114, 35, 103, 114, 104, 118, 35, 100, 35, 101, 100, 111, 111, 35, 108, 113, 255, 35, 108, 113, 35, 119, 107, 104, 35, 115, 100, 117, 110, 49] ^- Note how 258 descends to 257, 256, 255 Now we need to split on the sentinel tokens, let's write a short loop for this output_ids_list = [] start_token = 0 sentinel_token = 258 while sentinel_token in output_ids: split_idx = output_ids.index(sentinel_token) output_ids_list.append(output_ids[start_token:split_idx]) start_token = split_idx sentinel_token -= 1 output_ids_list.append(output_ids[start_token:]) output_string = tokenizer.batch_decode(output_ids_list) output_string ['', 'is the one who does', ' in the disco', 'in the park. 
The dog is the one who does a ball in', ' in the park.'] ByT5Tokenizer [[autodoc]] ByT5Tokenizer See [ByT5Tokenizer] for all details.
FLAN-T5 Overview FLAN-T5 was released in the paper Scaling Instruction-Finetuned Language Models - it is an enhanced version of T5 that has been finetuned in a mixture of tasks. One can directly use FLAN-T5 weights without finetuning the model: thon from transformers import AutoModelForSeq2SeqLM, AutoTokenizer model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small") tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small") inputs = tokenizer("A step by step recipe to make bolognese pasta:", return_tensors="pt") outputs = model.generate(**inputs) print(tokenizer.batch_decode(outputs, skip_special_tokens=True)) ['Pour a cup of bolognese into a large bowl and add the pasta'] FLAN-T5 includes the same improvements as T5 version 1.1 (see here for the full details of the model's improvements.) Google has released the following variants: google/flan-t5-small google/flan-t5-base google/flan-t5-large google/flan-t5-xl google/flan-t5-xxl. The original checkpoints can be found here. Refer to T5's documentation page for all API reference, code examples and notebooks. For more details regarding training and evaluation of the FLAN-T5, refer to the model card.
CTRL Overview CTRL model was proposed in CTRL: A Conditional Transformer Language Model for Controllable Generation by Nitish Shirish Keskar, Bryan McCann, Lav R. Varshney, Caiming Xiong and Richard Socher. It's a causal (unidirectional) transformer pre-trained using language modeling on a very large corpus of ~140 GB of text data with the first token reserved as a control code (such as Links, Books, Wikipedia etc.). The abstract from the paper is the following: Large-scale language models show promising text generation capabilities, but users cannot easily control particular aspects of the generated text. We release CTRL, a 1.63 billion-parameter conditional transformer language model, trained to condition on control codes that govern style, content, and task-specific behavior. Control codes were derived from structure that naturally co-occurs with raw text, preserving the advantages of unsupervised learning while providing more explicit control over text generation. These codes also allow CTRL to predict which parts of the training data are most likely given a sequence. This provides a potential method for analyzing large amounts of data via model-based source attribution. This model was contributed by keskarnitishr. The original code can be found here. Usage tips CTRL makes use of control codes to generate text: it requires generations to be started by certain words, sentences or links to generate coherent text. Refer to the original implementation for more information. CTRL is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than the left. CTRL was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next token in a sequence. Leveraging this feature allows CTRL to generate syntactically coherent text as it can be observed in the run_generation.py example script. The PyTorch models can take the past_key_values as input, which is the previously computed key/value attention pairs. TensorFlow models accepts past as input. Using the past_key_values value prevents the model from re-computing pre-computed values in the context of text generation. See the forward method for more information on the usage of this argument. Resources Text classification task guide Causal language modeling task guide CTRLConfig [[autodoc]] CTRLConfig CTRLTokenizer [[autodoc]] CTRLTokenizer - save_vocabulary CTRLModel [[autodoc]] CTRLModel - forward CTRLLMHeadModel [[autodoc]] CTRLLMHeadModel - forward CTRLForSequenceClassification [[autodoc]] CTRLForSequenceClassification - forward TFCTRLModel [[autodoc]] TFCTRLModel - call TFCTRLLMHeadModel [[autodoc]] TFCTRLLMHeadModel - call TFCTRLForSequenceClassification [[autodoc]] TFCTRLForSequenceClassification - call
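To make the control-code usage concrete, here is a minimal generation sketch. The Salesforce/ctrl checkpoint is assumed here (it is large, roughly 1.6B parameters), and "Links" is one of the control codes mentioned above.

```python
from transformers import CTRLTokenizer, CTRLLMHeadModel

tokenizer = CTRLTokenizer.from_pretrained("Salesforce/ctrl")
model = CTRLLMHeadModel.from_pretrained("Salesforce/ctrl")

# The first token acts as a control code ("Links", "Books", "Wikipedia", ...).
prompt = "Links In a shocking finding, scientists discovered"
inputs = tokenizer(prompt, return_tensors="pt")

# past_key_values are handled internally by generate() so previous steps are not recomputed.
output_ids = model.generate(**inputs, max_new_tokens=40, repetition_penalty=1.2, do_sample=False)
print(tokenizer.decode(output_ids[0]))
```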
GIT Overview The GIT model was proposed in GIT: A Generative Image-to-text Transformer for Vision and Language by Jianfeng Wang, Zhengyuan Yang, Xiaowei Hu, Linjie Li, Kevin Lin, Zhe Gan, Zicheng Liu, Ce Liu, Lijuan Wang. GIT is a decoder-only Transformer that leverages CLIP's vision encoder to condition the model on vision inputs besides text. The model obtains state-of-the-art results on image captioning and visual question answering benchmarks. The abstract from the paper is the following: In this paper, we design and train a Generative Image-to-text Transformer, GIT, to unify vision-language tasks such as image/video captioning and question answering. While generative models provide a consistent network architecture between pre-training and fine-tuning, existing work typically contains complex structures (uni/multi-modal encoder/decoder) and depends on external modules such as object detectors/taggers and optical character recognition (OCR). In GIT, we simplify the architecture as one image encoder and one text decoder under a single language modeling task. We also scale up the pre-training data and the model size to boost the model performance. Without bells and whistles, our GIT establishes new state of the arts on 12 challenging benchmarks with a large margin. For instance, our model surpasses the human performance for the first time on TextCaps (138.2 vs. 125.5 in CIDEr). Furthermore, we present a new scheme of generation-based image classification and scene text recognition, achieving decent performance on standard benchmarks. GIT architecture. Taken from the original paper. This model was contributed by nielsr. The original code can be found here. Usage tips GIT is implemented in a very similar way to GPT-2, the only difference being that the model is also conditioned on pixel_values. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with GIT. Demo notebooks regarding inference + fine-tuning GIT on custom data can be found here. See also: Causal language modeling task guide If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it. The resource should ideally demonstrate something new instead of duplicating an existing resource. GitVisionConfig [[autodoc]] GitVisionConfig GitVisionModel [[autodoc]] GitVisionModel - forward GitConfig [[autodoc]] GitConfig - all GitProcessor [[autodoc]] GitProcessor - call GitModel [[autodoc]] GitModel - forward GitForCausalLM [[autodoc]] GitForCausalLM - forward
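A minimal image-captioning sketch with [GitProcessor] (loaded via AutoProcessor) and [GitForCausalLM] follows; the microsoft/git-base-coco checkpoint and the image URL are chosen for illustration.

```python
import requests
from PIL import Image
from transformers import AutoProcessor, GitForCausalLM

processor = AutoProcessor.from_pretrained("microsoft/git-base-coco")
model = GitForCausalLM.from_pretrained("microsoft/git-base-coco")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # placeholder example image
image = Image.open(requests.get(url, stream=True).raw)

# The model is conditioned on pixel_values; generation then proceeds like GPT-2.
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values=pixel_values, max_length=50)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```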
CLAP Overview The CLAP model was proposed in Large Scale Contrastive Language-Audio pretraining with feature fusion and keyword-to-caption augmentation by Yusong Wu, Ke Chen, Tianyu Zhang, Yuchen Hui, Taylor Berg-Kirkpatrick, Shlomo Dubnov. CLAP (Contrastive Language-Audio Pretraining) is a neural network trained on a variety of (audio, text) pairs. It can be instructed to predict the most relevant text snippet for a given audio clip, without directly optimizing for the task. The CLAP model uses a Swin Transformer to get audio features from a log-Mel spectrogram input, and a RoBERTa model to get text features. Both the text and audio features are then projected to a latent space with identical dimension. The dot product between the projected audio and text features is then used as a similarity score. The abstract from the paper is the following: Contrastive learning has shown remarkable success in the field of multimodal representation learning. In this paper, we propose a pipeline of contrastive language-audio pretraining to develop an audio representation by combining audio data with natural language descriptions. To accomplish this target, we first release LAION-Audio-630K, a large collection of 633,526 audio-text pairs from different data sources. Second, we construct a contrastive language-audio pretraining model by considering different audio encoders and text encoders. We incorporate the feature fusion mechanism and keyword-to-caption augmentation into the model design to further enable the model to process audio inputs of variable lengths and enhance the performance. Third, we perform comprehensive experiments to evaluate our model across three tasks: text-to-audio retrieval, zero-shot audio classification, and supervised audio classification. The results demonstrate that our model achieves superior performance in text-to-audio retrieval task. In audio classification tasks, the model achieves state-of-the-art performance in the zeroshot setting and is able to obtain performance comparable to models' results in the non-zero-shot setting. This model was contributed by Younes Belkada and Arthur Zucker. The original code can be found here. A zero-shot audio classification sketch is shown at the end of this section. ClapConfig [[autodoc]] ClapConfig - from_text_audio_configs ClapTextConfig [[autodoc]] ClapTextConfig ClapAudioConfig [[autodoc]] ClapAudioConfig ClapFeatureExtractor [[autodoc]] ClapFeatureExtractor ClapProcessor [[autodoc]] ClapProcessor ClapModel [[autodoc]] ClapModel - forward - get_text_features - get_audio_features ClapTextModel [[autodoc]] ClapTextModel - forward ClapTextModelWithProjection [[autodoc]] ClapTextModelWithProjection - forward ClapAudioModel [[autodoc]] ClapAudioModel - forward ClapAudioModelWithProjection [[autodoc]] ClapAudioModelWithProjection - forward
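As a usage sketch for zero-shot audio classification with [ClapModel], referenced above: the laion/clap-htsat-unfused checkpoint name is an assumption, and the random waveform merely stands in for a real 48 kHz recording.

```python
import numpy as np
import torch
from transformers import ClapModel, ClapProcessor

model = ClapModel.from_pretrained("laion/clap-htsat-unfused")
processor = ClapProcessor.from_pretrained("laion/clap-htsat-unfused")

# One second of (random) audio standing in for a real recording; the feature extractor expects 48 kHz.
audio = np.random.randn(48_000)
candidate_labels = ["the sound of a dog barking", "the sound of rain", "a person speaking"]

inputs = processor(text=candidate_labels, audios=audio, sampling_rate=48_000, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_audio holds the audio-text similarity scores; softmax turns them into pseudo-probabilities.
probs = outputs.logits_per_audio.softmax(dim=-1)
for label, p in zip(candidate_labels, probs[0]):
    print(f"{label}: {p.item():.3f}")
```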
CLVP Overview The CLVP (Contrastive Language-Voice Pretrained Transformer) model was proposed in Better speech synthesis through scaling by James Betker. The abstract from the paper is the following: In recent years, the field of image generation has been revolutionized by the application of autoregressive transformers and DDPMs. These approaches model the process of image generation as a step-wise probabilistic processes and leverage large amounts of compute and data to learn the image distribution. This methodology of improving performance need not be confined to images. This paper describes a way to apply advances in the image generative domain to speech synthesis. The result is TorToise - an expressive, multi-voice text-to-speech system. This model was contributed by Susnato Dhar. The original code can be found here. Usage tips CLVP is an integral part of the Tortoise TTS model. CLVP can be used to compare different generated speech candidates with the provided text, and the best speech tokens are forwarded to the diffusion model. The use of the [ClvpModelForConditionalGeneration.generate()] method is strongly recommended for tortoise usage. Note that the CLVP model expects the audio to be sampled at 22.05 kHz contrary to other audio models which expects 16 kHz. Brief Explanation: The [ClvpTokenizer] tokenizes the text input, and the [ClvpFeatureExtractor] extracts the log mel-spectrogram from the desired audio. [ClvpConditioningEncoder] takes those text tokens and audio representations and converts them into embeddings conditioned on the text and audio. The [ClvpForCausalLM] uses those embeddings to generate multiple speech candidates. Each speech candidate is passed through the speech encoder ([ClvpEncoder]) which converts them into a vector representation, and the text encoder ([ClvpEncoder]) converts the text tokens into the same latent space. At the end, we compare each speech vector with the text vector to see which speech vector is most similar to the text vector. [ClvpModelForConditionalGeneration.generate()] compresses all of the logic described above into a single method. Example : thon import datasets from transformers import ClvpProcessor, ClvpModelForConditionalGeneration Define the Text and Load the Audio (We are taking an audio example from HuggingFace Hub using datasets library). text = "This is an example text." ds = datasets.load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") ds = ds.cast_column("audio", datasets.Audio(sampling_rate=22050)) sample = ds[0]["audio"] Define processor and model. processor = ClvpProcessor.from_pretrained("susnato/clvp_dev") model = ClvpModelForConditionalGeneration.from_pretrained("susnato/clvp_dev") Generate processor output and model output. 
processor_output = processor(raw_speech=sample["array"], sampling_rate=sample["sampling_rate"], text=text, return_tensors="pt") generated_output = model.generate(**processor_output) ClvpConfig [[autodoc]] ClvpConfig - from_sub_model_configs ClvpEncoderConfig [[autodoc]] ClvpEncoderConfig ClvpDecoderConfig [[autodoc]] ClvpDecoderConfig ClvpTokenizer [[autodoc]] ClvpTokenizer - save_vocabulary ClvpFeatureExtractor [[autodoc]] ClvpFeatureExtractor - call ClvpProcessor [[autodoc]] ClvpProcessor - call - decode - batch_decode ClvpModelForConditionalGeneration [[autodoc]] ClvpModelForConditionalGeneration - forward - generate - get_text_features - get_speech_features ClvpForCausalLM [[autodoc]] ClvpForCausalLM ClvpModel [[autodoc]] ClvpModel ClvpEncoder [[autodoc]] ClvpEncoder ClvpDecoder [[autodoc]] ClvpDecoder
DETR Overview The DETR model was proposed in End-to-End Object Detection with Transformers by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov and Sergey Zagoruyko. DETR consists of a convolutional backbone followed by an encoder-decoder Transformer which can be trained end-to-end for object detection. It greatly simplifies a lot of the complexity of models like Faster-R-CNN and Mask-R-CNN, which use things like region proposals, non-maximum suppression procedure and anchor generation. Moreover, DETR can also be naturally extended to perform panoptic segmentation, by simply adding a mask head on top of the decoder outputs. The abstract from the paper is the following: We present a new method that views object detection as a direct set prediction problem. Our approach streamlines the detection pipeline, effectively removing the need for many hand-designed components like a non-maximum suppression procedure or anchor generation that explicitly encode our prior knowledge about the task. The main ingredients of the new framework, called DEtection TRansformer or DETR, are a set-based global loss that forces unique predictions via bipartite matching, and a transformer encoder-decoder architecture. Given a fixed small set of learned object queries, DETR reasons about the relations of the objects and the global image context to directly output the final set of predictions in parallel. The new model is conceptually simple and does not require a specialized library, unlike many other modern detectors. DETR demonstrates accuracy and run-time performance on par with the well-established and highly-optimized Faster RCNN baseline on the challenging COCO object detection dataset. Moreover, DETR can be easily generalized to produce panoptic segmentation in a unified manner. We show that it significantly outperforms competitive baselines. This model was contributed by nielsr. The original code can be found here. How DETR works Here's a TLDR explaining how [~transformers.DetrForObjectDetection] works: First, an image is sent through a pre-trained convolutional backbone (in the paper, the authors use ResNet-50/ResNet-101). Let's assume we also add a batch dimension. This means that the input to the backbone is a tensor of shape (batch_size, 3, height, width), assuming the image has 3 color channels (RGB). The CNN backbone outputs a new lower-resolution feature map, typically of shape (batch_size, 2048, height/32, width/32). This is then projected to match the hidden dimension of the Transformer of DETR, which is 256 by default, using a nn.Conv2D layer. So now, we have a tensor of shape (batch_size, 256, height/32, width/32). Next, the feature map is flattened and transposed to obtain a tensor of shape (batch_size, seq_len, d_model) = (batch_size, width/32*height/32, 256). So a difference with NLP models is that the sequence length is actually longer than usual, but with a smaller d_model (which in NLP is typically 768 or higher). Next, this is sent through the encoder, outputting encoder_hidden_states of the same shape (you can consider these as image features). Next, so-called object queries are sent through the decoder. This is a tensor of shape (batch_size, num_queries, d_model), with num_queries typically set to 100 and initialized with zeros. These input embeddings are learnt positional encodings that the authors refer to as object queries, and similarly to the encoder, they are added to the input of each attention layer. 
Each object query will look for a particular object in the image. The decoder updates these embeddings through multiple self-attention and encoder-decoder attention layers to output decoder_hidden_states of the same shape: (batch_size, num_queries, d_model). Next, two heads are added on top for object detection: a linear layer for classifying each object query into one of the objects or "no object", and a MLP to predict bounding boxes for each query. The model is trained using a bipartite matching loss: so what we actually do is compare the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to find an optimal one-to-one mapping of each of the N queries to each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model. DETR can be naturally extended to perform panoptic segmentation (which unifies semantic segmentation and instance segmentation). [~transformers.DetrForSegmentation] adds a segmentation mask head on top of [~transformers.DetrForObjectDetection]. The mask head can be trained either jointly, or in a two steps process, where one first trains a [~transformers.DetrForObjectDetection] model to detect bounding boxes around both "things" (instances) and "stuff" (background things like trees, roads, sky), then freeze all the weights and train only the mask head for 25 epochs. Experimentally, these two approaches give similar results. Note that predicting boxes is required for the training to be possible, since the Hungarian matching is computed using distances between boxes. Usage tips DETR uses so-called object queries to detect objects in an image. The number of queries determines the maximum number of objects that can be detected in a single image, and is set to 100 by default (see parameter num_queries of [~transformers.DetrConfig]). Note that it's good to have some slack (in COCO, the authors used 100, while the maximum number of objects in a COCO image is ~70). The decoder of DETR updates the query embeddings in parallel. This is different from language models like GPT-2, which use autoregressive decoding instead of parallel. Hence, no causal attention mask is used. DETR adds position embeddings to the hidden states at each self-attention and cross-attention layer before projecting to queries and keys. For the position embeddings of the image, one can choose between fixed sinusoidal or learned absolute position embeddings. By default, the parameter position_embedding_type of [~transformers.DetrConfig] is set to "sine". During training, the authors of DETR did find it helpful to use auxiliary losses in the decoder, especially to help the model output the correct number of objects of each class. If you set the parameter auxiliary_loss of [~transformers.DetrConfig] to True, then prediction feedforward neural networks and Hungarian losses are added after each decoder layer (with the FFNs sharing parameters). If you want to train the model in a distributed environment across multiple nodes, then one should update the num_boxes variable in the DetrLoss class of modeling_detr.py. 
When training on multiple nodes, this should be set to the average number of target boxes across all nodes, as can be seen in the original implementation here. [~transformers.DetrForObjectDetection] and [~transformers.DetrForSegmentation] can be initialized with any convolutional backbone available in the timm library. Initializing with a MobileNet backbone for example can be done by setting the backbone attribute of [~transformers.DetrConfig] to "tf_mobilenetv3_small_075", and then initializing the model with that config. DETR resizes the input images such that the shortest side is at least a certain amount of pixels while the longest is at most 1333 pixels. At training time, scale augmentation is used such that the shortest side is randomly set to at least 480 and at most 800 pixels. At inference time, the shortest side is set to 800. One can use [~transformers.DetrImageProcessor] to prepare images (and optional annotations in COCO format) for the model. Due to this resizing, images in a batch can have different sizes. DETR solves this by padding images up to the largest size in a batch, and by creating a pixel mask that indicates which pixels are real/which are padding. Alternatively, one can also define a custom collate_fn in order to batch images together, using [~transformers.DetrImageProcessor.pad_and_create_pixel_mask]. The size of the images will determine the amount of memory being used, and will thus determine the batch_size. It is advised to use a batch size of 2 per GPU. See this Github thread for more info. There are three ways to instantiate a DETR model (depending on what you prefer): Option 1: Instantiate DETR with pre-trained weights for entire model from transformers import DetrForObjectDetection model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50") Option 2: Instantiate DETR with randomly initialized weights for Transformer, but pre-trained weights for backbone from transformers import DetrConfig, DetrForObjectDetection config = DetrConfig() model = DetrForObjectDetection(config) Option 3: Instantiate DETR with randomly initialized weights for backbone + Transformerpy config = DetrConfig(use_pretrained_backbone=False) model = DetrForObjectDetection(config) As a summary, consider the following table: | Task | Object detection | Instance segmentation | Panoptic segmentation | |------|------------------|-----------------------|-----------------------| | Description | Predicting bounding boxes and class labels around objects in an image | Predicting masks around objects (i.e. instances) in an image | Predicting masks around both objects (i.e. instances) as well as "stuff" (i.e. background things like trees and roads) in an image | | Model | [~transformers.DetrForObjectDetection] | [~transformers.DetrForSegmentation] | [~transformers.DetrForSegmentation] | | Example dataset | COCO detection | COCO detection, COCO panoptic | COCO panoptic | | | Format of annotations to provide to [~transformers.DetrImageProcessor] | {'image_id': int, 'annotations': List[Dict]} each Dict being a COCO object annotation | {'image_id': int, 'annotations': List[Dict]} (in case of COCO detection) or {'file_name': str, 'image_id': int, 'segments_info': List[Dict]} (in case of COCO panoptic) | {'file_name': str, 'image_id': int, 'segments_info': List[Dict]} and masks_path (path to directory containing PNG files of the masks) | | Postprocessing (i.e. 
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DETR.
All example notebooks illustrating fine-tuning [DetrForObjectDetection] and [DetrForSegmentation] on a custom dataset can be found here.
See also: Object detection task guide
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
DetrConfig [[autodoc]] DetrConfig DetrImageProcessor [[autodoc]] DetrImageProcessor - preprocess - post_process_object_detection - post_process_semantic_segmentation - post_process_instance_segmentation - post_process_panoptic_segmentation DetrFeatureExtractor [[autodoc]] DetrFeatureExtractor - call - post_process_object_detection - post_process_semantic_segmentation - post_process_instance_segmentation - post_process_panoptic_segmentation DETR specific outputs [[autodoc]] models.detr.modeling_detr.DetrModelOutput [[autodoc]] models.detr.modeling_detr.DetrObjectDetectionOutput [[autodoc]] models.detr.modeling_detr.DetrSegmentationOutput DetrModel [[autodoc]] DetrModel - forward DetrForObjectDetection [[autodoc]] DetrForObjectDetection - forward DetrForSegmentation [[autodoc]] DetrForSegmentation - forward
XLNet Overview The XLNet model was proposed in XLNet: Generalized Autoregressive Pretraining for Language Understanding by Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le. XLNet is an extension of the Transformer-XL model pre-trained using an autoregressive method to learn bidirectional contexts by maximizing the expected likelihood over all permutations of the input sequence factorization order. The abstract from the paper is the following: With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling. However, relying on corrupting the input with masks, BERT neglects dependency between the masked positions and suffers from a pretrain-finetune discrepancy. In light of these pros and cons, we propose XLNet, a generalized autoregressive pretraining method that (1) enables learning bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order and (2) overcomes the limitations of BERT thanks to its autoregressive formulation. Furthermore, XLNet integrates ideas from Transformer-XL, the state-of-the-art autoregressive model, into pretraining. Empirically, under comparable experiment settings, XLNet outperforms BERT on 20 tasks, often by a large margin, including question answering, natural language inference, sentiment analysis, and document ranking. This model was contributed by thomwolf. The original code can be found here.
Usage tips
The specific attention pattern can be controlled at training and test time using the perm_mask input.
Due to the difficulty of training a fully autoregressive model over various factorization orders, XLNet is pretrained using only a subset of the output tokens as targets, which are selected with the target_mapping input.
To use XLNet for sequential decoding (i.e. not in a fully bidirectional setting), use the perm_mask and target_mapping inputs to control the attention span and outputs (see examples in examples/pytorch/text-generation/run_generation.py).
XLNet is one of the few models that has no sequence length limit.
XLNet is not a traditional autoregressive model but uses a training strategy that builds on that idea. It permutes the tokens in the sentence, then allows the model to use the last n tokens to predict the token n+1. Since this is all done with a mask, the sentence is actually fed to the model in the right order, but instead of masking the first n tokens for n+1, XLNet uses a mask that hides the previous tokens in some given permutation of 1, …, sequence length.
XLNet also uses the same recurrence mechanism as Transformer-XL to build long-term dependencies.
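To make the perm_mask and target_mapping inputs concrete, here is a minimal sketch that predicts only the last token of a toy sentence while letting every other position attend bidirectionally (the checkpoint name and the sentence are just examples):
import torch
from transformers import XLNetLMHeadModel, XLNetTokenizer

tokenizer = XLNetTokenizer.from_pretrained("xlnet/xlnet-base-cased")
model = XLNetLMHeadModel.from_pretrained("xlnet/xlnet-base-cased")

input_ids = tokenizer.encode("Hello, my dog is very cute", add_special_tokens=False, return_tensors="pt")
seq_len = input_ids.shape[1]

# hide the last token from every position, so it has to be predicted
perm_mask = torch.zeros((1, seq_len, seq_len), dtype=torch.float)
perm_mask[:, :, -1] = 1.0

# ask the model to produce a prediction only for that last position
target_mapping = torch.zeros((1, 1, seq_len), dtype=torch.float)
target_mapping[0, 0, -1] = 1.0

outputs = model(input_ids, perm_mask=perm_mask, target_mapping=target_mapping)
next_token_logits = outputs.logits  # shape (1, 1, vocab_size): one prediction per row of target_mapping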
Resources Text classification task guide Token classification task guide Question answering task guide Causal language modeling task guide Multiple choice task guide XLNetConfig [[autodoc]] XLNetConfig XLNetTokenizer [[autodoc]] XLNetTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary XLNetTokenizerFast [[autodoc]] XLNetTokenizerFast XLNet specific outputs [[autodoc]] models.xlnet.modeling_xlnet.XLNetModelOutput [[autodoc]] models.xlnet.modeling_xlnet.XLNetLMHeadModelOutput [[autodoc]] models.xlnet.modeling_xlnet.XLNetForSequenceClassificationOutput [[autodoc]] models.xlnet.modeling_xlnet.XLNetForMultipleChoiceOutput [[autodoc]] models.xlnet.modeling_xlnet.XLNetForTokenClassificationOutput [[autodoc]] models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringSimpleOutput [[autodoc]] models.xlnet.modeling_xlnet.XLNetForQuestionAnsweringOutput [[autodoc]] models.xlnet.modeling_tf_xlnet.TFXLNetModelOutput [[autodoc]] models.xlnet.modeling_tf_xlnet.TFXLNetLMHeadModelOutput [[autodoc]] models.xlnet.modeling_tf_xlnet.TFXLNetForSequenceClassificationOutput [[autodoc]] models.xlnet.modeling_tf_xlnet.TFXLNetForMultipleChoiceOutput [[autodoc]] models.xlnet.modeling_tf_xlnet.TFXLNetForTokenClassificationOutput [[autodoc]] models.xlnet.modeling_tf_xlnet.TFXLNetForQuestionAnsweringSimpleOutput XLNetModel [[autodoc]] XLNetModel - forward XLNetLMHeadModel [[autodoc]] XLNetLMHeadModel - forward XLNetForSequenceClassification [[autodoc]] XLNetForSequenceClassification - forward XLNetForMultipleChoice [[autodoc]] XLNetForMultipleChoice - forward XLNetForTokenClassification [[autodoc]] XLNetForTokenClassification - forward XLNetForQuestionAnsweringSimple [[autodoc]] XLNetForQuestionAnsweringSimple - forward XLNetForQuestionAnswering [[autodoc]] XLNetForQuestionAnswering - forward TFXLNetModel [[autodoc]] TFXLNetModel - call TFXLNetLMHeadModel [[autodoc]] TFXLNetLMHeadModel - call TFXLNetForSequenceClassification [[autodoc]] TFXLNetForSequenceClassification - call TFXLNetForMultipleChoice [[autodoc]] TFXLNetForMultipleChoice - call TFXLNetForTokenClassification [[autodoc]] TFXLNetForTokenClassification - call TFXLNetForQuestionAnsweringSimple [[autodoc]] TFXLNetForQuestionAnsweringSimple - call
ESM Overview This page provides code and pre-trained weights for Transformer protein language models from Meta AI's Fundamental AI Research Team, providing the state-of-the-art ESMFold and ESM-2, and the previously released ESM-1b and ESM-1v. Transformer protein language models were introduced in the paper Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences by Alexander Rives, Joshua Meier, Tom Sercu, Siddharth Goyal, Zeming Lin, Jason Liu, Demi Guo, Myle Ott, C. Lawrence Zitnick, Jerry Ma, and Rob Fergus. The first version of this paper was preprinted in 2019. ESM-2 outperforms all tested single-sequence protein language models across a range of structure prediction tasks, and enables atomic resolution structure prediction. It was released with the paper Language models of protein sequences at the scale of evolution enable accurate structure prediction by Zeming Lin, Halil Akin, Roshan Rao, Brian Hie, Zhongkai Zhu, Wenting Lu, Allan dos Santos Costa, Maryam Fazel-Zarandi, Tom Sercu, Sal Candido and Alexander Rives. Also introduced in this paper was ESMFold. It uses an ESM-2 stem with a head that can predict folded protein structures with state-of-the-art accuracy. Unlike AlphaFold2, it relies on the token embeddings from the large pre-trained protein language model stem and does not perform a multiple sequence alignment (MSA) step at inference time, which means that ESMFold checkpoints are fully "standalone" - they do not require a database of known protein sequences and structures with associated external query tools to make predictions, and are much faster as a result. The abstract from "Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences" is In the field of artificial intelligence, a combination of scale in data and model capacity enabled by unsupervised learning has led to major advances in representation learning and statistical generation. In the life sciences, the anticipated growth of sequencing promises unprecedented data on natural sequence diversity. Protein language modeling at the scale of evolution is a logical step toward predictive and generative artificial intelligence for biology. To this end, we use unsupervised learning to train a deep contextual language model on 86 billion amino acids across 250 million protein sequences spanning evolutionary diversity. The resulting model contains information about biological properties in its representations. The representations are learned from sequence data alone. The learned representation space has a multiscale organization reflecting structure from the level of biochemical properties of amino acids to remote homology of proteins. Information about secondary and tertiary structure is encoded in the representations and can be identified by linear projections. Representation learning produces features that generalize across a range of applications, enabling state-of-the-art supervised prediction of mutational effect and secondary structure and improving state-of-the-art features for long-range contact prediction. The abstract from "Language models of protein sequences at the scale of evolution enable accurate structure prediction" is Large language models have recently been shown to develop emergent capabilities with scale, going beyond simple pattern matching to perform higher level reasoning and generate lifelike images and text. 
While language models trained on protein sequences have been studied at a smaller scale, little is known about what they learn about biology as they are scaled up. In this work we train models up to 15 billion parameters, the largest language models of proteins to be evaluated to date. We find that as models are scaled they learn information enabling the prediction of the three-dimensional structure of a protein at the resolution of individual atoms. We present ESMFold for high accuracy end-to-end atomic level structure prediction directly from the individual sequence of a protein. ESMFold has similar accuracy to AlphaFold2 and RoseTTAFold for sequences with low perplexity that are well understood by the language model. ESMFold inference is an order of magnitude faster than AlphaFold2, enabling exploration of the structural space of metagenomic proteins in practical timescales. The original code can be found here and was developed by the Fundamental AI Research team at Meta AI. ESM-1b, ESM-1v and ESM-2 were contributed to huggingface by jasonliu and Matt. ESMFold was contributed to huggingface by Matt and Sylvain, with a big thank you to Nikita Smetanin, Roshan Rao and Tom Sercu for their help throughout the process! Usage tips ESM models are trained with a masked language modeling (MLM) objective. The HuggingFace port of ESMFold uses portions of the openfold library. The openfold library is licensed under the Apache License 2.0. Resources Text classification task guide Token classification task guide Masked language modeling task guide EsmConfig [[autodoc]] EsmConfig - all EsmTokenizer [[autodoc]] EsmTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary EsmModel [[autodoc]] EsmModel - forward EsmForMaskedLM [[autodoc]] EsmForMaskedLM - forward EsmForSequenceClassification [[autodoc]] EsmForSequenceClassification - forward EsmForTokenClassification [[autodoc]] EsmForTokenClassification - forward EsmForProteinFolding [[autodoc]] EsmForProteinFolding - forward TFEsmModel [[autodoc]] TFEsmModel - call TFEsmForMaskedLM [[autodoc]] TFEsmForMaskedLM - call TFEsmForSequenceClassification [[autodoc]] TFEsmForSequenceClassification - call TFEsmForTokenClassification [[autodoc]] TFEsmForTokenClassification - call
Pyramid Vision Transformer (PVT) Overview The PVT model was proposed in Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions by Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao. The PVT is a type of vision transformer that utilizes a pyramid structure to make it an effective backbone for dense prediction tasks. Specifically it allows for more fine-grained inputs (4 x 4 pixels per patch) to be used, while simultaneously shrinking the sequence length of the Transformer as it deepens - reducing the computational cost. Additionally, a spatial-reduction attention (SRA) layer is used to further reduce the resource consumption when learning high-resolution features. The abstract from the paper is the following: Although convolutional neural networks (CNNs) have achieved great success in computer vision, this work investigates a simpler, convolution-free backbone network useful for many dense prediction tasks. Unlike the recently proposed Vision Transformer (ViT) that was designed for image classification specifically, we introduce the Pyramid Vision Transformer (PVT), which overcomes the difficulties of porting Transformer to various dense prediction tasks. PVT has several merits compared to current state of the arts. Different from ViT that typically yields low resolution outputs and incurs high computational and memory costs, PVT not only can be trained on dense partitions of an image to achieve high output resolution, which is important for dense prediction, but also uses a progressive shrinking pyramid to reduce the computations of large feature maps. PVT inherits the advantages of both CNN and Transformer, making it a unified backbone for various vision tasks without convolutions, where it can be used as a direct replacement for CNN backbones. We validate PVT through extensive experiments, showing that it boosts the performance of many downstream tasks, including object detection, instance and semantic segmentation. For example, with a comparable number of parameters, PVT+RetinaNet achieves 40.4 AP on the COCO dataset, surpassing ResNet50+RetinNet (36.3 AP) by 4.1 absolute AP (see Figure 2). We hope that PVT could serve as an alternative and useful backbone for pixel-level predictions and facilitate future research. This model was contributed by Xrenya. The original code can be found here. PVTv1 on ImageNet-1K | Model variant |Size |Acc@1|Params (M)| |--------------------|:-------:|:-------:|:------------:| | PVT-Tiny | 224 | 75.1 | 13.2 | | PVT-Small | 224 | 79.8 | 24.5 | | PVT-Medium | 224 | 81.2 | 44.2 | | PVT-Large | 224 | 81.7 | 61.4 | PvtConfig [[autodoc]] PvtConfig PvtImageProcessor [[autodoc]] PvtImageProcessor - preprocess PvtForImageClassification [[autodoc]] PvtForImageClassification - forward PvtModel [[autodoc]] PvtModel - forward
Reformer Overview The Reformer model was proposed in the paper Reformer: The Efficient Transformer by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya. The abstract from the paper is the following: Large Transformer models routinely achieve state-of-the-art results on a number of tasks but training these models can be prohibitively costly, especially on long sequences. We introduce two techniques to improve the efficiency of Transformers. For one, we replace dot-product attention by one that uses locality-sensitive hashing, changing its complexity from O(L^2) to O(Llog(L)), where L is the length of the sequence. Furthermore, we use reversible residual layers instead of the standard residuals, which allows storing activations only once in the training process instead of N times, where N is the number of layers. The resulting model, the Reformer, performs on par with Transformer models while being much more memory-efficient and much faster on long sequences. This model was contributed by patrickvonplaten. The Authors' code can be found here.
Usage tips
Reformer does not work with torch.nn.DataParallel due to a bug in PyTorch, see issue #36035.
Use Axial position encoding (see below for more details). It’s a mechanism to avoid having a huge positional encoding matrix (when the sequence length is very big) by factorizing it into smaller matrices.
Replace traditional attention by LSH (locality-sensitive hashing) attention (see below for more details). It’s a technique to avoid computing the full query-key product in the attention layers.
Avoid storing the intermediate results of each layer by using reversible transformer layers to obtain them during the backward pass (subtracting the residuals from the input of the next layer gives them back) or recomputing them for results inside a given layer (less efficient than storing them but saves memory).
Compute the feedforward operations by chunks and not on the whole batch.
Axial Positional Encodings
Axial Positional Encodings were first implemented in Google's trax library and developed by the authors of this model's paper. In models that process very long input sequences, the conventional position id encodings store an embedding vector of size \(d\), being the config.hidden_size, for every position \(i \in 1, \ldots, n_s\), with \(n_s\) being config.max_embedding_size. This means that having a sequence length of \(n_s = 2^{19} \approx 0.5M\) and a config.hidden_size of \(d = 2^{10} \approx 1000\) would result in a position encoding matrix: $$X_{i,j}, \text{ with } i \in \left[1,\ldots, d\right] \text{ and } j \in \left[1,\ldots, n_s\right]$$ which alone has over 500M parameters to store. Axial positional encodings factorize \(X_{i,j}\) into two matrices: $$X^{1}_{i,j}, \text{ with } i \in \left[1,\ldots, d^1\right] \text{ and } j \in \left[1,\ldots, n_s^1\right]$$ and $$X^{2}_{i,j}, \text{ with } i \in \left[1,\ldots, d^2\right] \text{ and } j \in \left[1,\ldots, n_s^2\right]$$ with: $$d = d^1 + d^2 \text{ and } n_s = n_s^1 \times n_s^2 .$$ Therefore the following holds: $$X_{i,j} = \begin{cases} X^{1}_{i, k}, & \text{if } i < d^1 \text{ with } k = j \mod n_s^1 \\ X^{2}_{i - d^1, l}, & \text{if } i \ge d^1 \text{ with } l = \lfloor\frac{j}{n_s^1}\rfloor \end{cases}$$ Intuitively, this means that a position embedding vector \(x_j \in \mathbb{R}^{d}\) is now the composition of two factorized embedding vectors: \(x^1_{k, l} + x^2_{l, k}\), whereas the config.max_embedding_size dimension \(j\) is factorized into \(k \text{ and } l\). This design ensures that each position embedding vector \(x_j\) is unique.
Using the above example again, axial position encoding with \(d^1 = 2^9, d^2 = 2^9, n_s^1 = 2^9, n_s^2 = 2^{10}\) can drastically reduce the number of parameters from 500 000 000 to \(2^{18} + 2^{19} \approx 780 000\) parameters, i.e. more than 99% less memory usage. In practice, the parameter config.axial_pos_embds_dim is set to a tuple \((d^1, d^2)\) whose sum has to be equal to config.hidden_size, and config.axial_pos_shape is set to a tuple \((n_s^1, n_s^2)\) whose product has to be equal to config.max_embedding_size, which during training has to be equal to the sequence length of the input_ids.
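As a minimal sketch of how these two constraints fit together, the values below simply mirror the library defaults and are meant purely as an illustration:
from transformers import ReformerConfig, ReformerModel

# hidden_size must equal the sum of axial_pos_embds_dim,
# and the product of axial_pos_shape must equal the (padded) training sequence length
config = ReformerConfig(
    axial_pos_embds=True,
    hidden_size=256,
    axial_pos_embds_dim=(64, 192),  # 64 + 192 == hidden_size
    axial_pos_shape=(64, 64),       # 64 * 64 == 4096 == sequence length used during training
    max_position_embeddings=4096,
)
model = ReformerModel(config)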
LSH Self Attention
In Locality sensitive hashing (LSH) self attention the key and query projection weights are tied. Therefore, the key query embedding vectors are also tied. LSH self attention uses the locality sensitive hashing mechanism proposed in Practical and Optimal LSH for Angular Distance to assign each of the tied key query embedding vectors to one of config.num_buckets possible buckets. The premise is that the more "similar" key query embedding vectors (in terms of cosine similarity) are to each other, the more likely they are assigned to the same bucket.
The accuracy of the LSH mechanism can be improved by increasing config.num_hashes or directly the argument num_hashes of the forward function so that the output of the LSH self attention better approximates the output of the "normal" full self attention. The buckets are then sorted and chunked into query key embedding vector chunks each of length config.lsh_chunk_length. For each chunk, the query embedding vectors attend to their key vectors (which are tied to themselves) and to the key embedding vectors of config.lsh_num_chunks_before previous neighboring chunks and config.lsh_num_chunks_after following neighboring chunks. For more information, see the original Paper or this great blog post.
Note that config.num_buckets can also be factorized into a list \((n_{\text{buckets}}^1, n_{\text{buckets}}^2)\). This way instead of assigning the query key embedding vectors to one of \((1,\ldots, n_{\text{buckets}})\) they are assigned to one of \((1-1,\ldots, n_{\text{buckets}}^1-1, \ldots, 1-n_{\text{buckets}}^2, \ldots, n_{\text{buckets}}^1-n_{\text{buckets}}^2)\). This is crucial for very long sequences to save memory.
When training a model from scratch, it is recommended to leave config.num_buckets=None, so that depending on the sequence length a good value for num_buckets is calculated on the fly. This value will then automatically be saved in the config and should be reused for inference.
Using LSH self attention, the memory and time complexity of the query-key matmul operation can be reduced from \(\mathcal{O}(n_s \times n_s)\) to \(\mathcal{O}(n_s \times \log(n_s))\), which usually represents the memory and time bottleneck in a transformer model, with \(n_s\) being the sequence length.
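As an illustration of the num_hashes argument, the following minimal sketch (the checkpoint and the number of hash rounds are arbitrary example choices) trades extra compute for a closer approximation of full attention at inference time:
import torch
from transformers import ReformerModel, ReformerTokenizer

tokenizer = ReformerTokenizer.from_pretrained("google/reformer-crime-and-punishment")
model = ReformerModel.from_pretrained("google/reformer-crime-and-punishment")

inputs = tokenizer("The quick brown fox jumps over the lazy dog.", return_tensors="pt")
with torch.no_grad():
    # more hash rounds -> LSH attention approximates full attention more closely, at a higher cost
    outputs = model(**inputs, num_hashes=8)
last_hidden_state = outputs.last_hidden_state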
Local Self Attention
Local self attention is essentially a "normal" self attention layer with key, query and value projections, but is chunked so that in each chunk of length config.local_chunk_length the query embedding vectors only attend to the key embedding vectors in their chunk and to the key embedding vectors of config.local_num_chunks_before previous neighboring chunks and config.local_num_chunks_after following neighboring chunks.
Using Local self attention, the memory and time complexity of the query-key matmul operation can be reduced from \(\mathcal{O}(n_s \times n_s)\) to \(\mathcal{O}(n_s \times \log(n_s))\), which usually represents the memory and time bottleneck in a transformer model, with \(n_s\) being the sequence length.
Training
During training, we must ensure that the sequence length is set to a value that is divisible by the least common multiple of config.lsh_chunk_length and config.local_chunk_length, and that the parameters of the Axial Positional Encodings are correctly set as described above. Reformer is very memory efficient, so the model can easily be trained on sequences as long as 64000 tokens. For training, the [ReformerModelWithLMHead] should be used as follows:
input_ids = tokenizer.encode("This is a sentence from the training data", return_tensors="pt")
loss = model(input_ids, labels=input_ids)[0]
Resources Text classification task guide Question answering task guide Causal language modeling task guide Masked language modeling task guide ReformerConfig [[autodoc]] ReformerConfig ReformerTokenizer [[autodoc]] ReformerTokenizer - save_vocabulary ReformerTokenizerFast [[autodoc]] ReformerTokenizerFast ReformerModel [[autodoc]] ReformerModel - forward ReformerModelWithLMHead [[autodoc]] ReformerModelWithLMHead - forward ReformerForMaskedLM [[autodoc]] ReformerForMaskedLM - forward ReformerForSequenceClassification [[autodoc]] ReformerForSequenceClassification - forward ReformerForQuestionAnswering [[autodoc]] ReformerForQuestionAnswering - forward
CamemBERT Overview The CamemBERT model was proposed in CamemBERT: a Tasty French Language Model by Louis Martin, Benjamin Muller, Pedro Javier Ortiz Suárez, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah, and Benoît Sagot. It is based on Facebook's RoBERTa model released in 2019. It is a model trained on 138GB of French text. The abstract from the paper is the following: Pretrained language models are now ubiquitous in Natural Language Processing. Despite their success, most available models have either been trained on English data or on the concatenation of data in multiple languages. This makes practical use of such models --in all languages except English-- very limited. Aiming to address this issue for French, we release CamemBERT, a French version of the Bi-directional Encoders for Transformers (BERT). We measure the performance of CamemBERT compared to multilingual models in multiple downstream tasks, namely part-of-speech tagging, dependency parsing, named-entity recognition, and natural language inference. CamemBERT improves the state of the art for most of the tasks considered. We release the pretrained model for CamemBERT hoping to foster research and downstream applications for French NLP. This model was contributed by the ALMAnaCH team (Inria). The original code can be found here. This implementation is the same as RoBERTa. Refer to the documentation of RoBERTa for usage examples as well as the information relative to the inputs and outputs. Resources Text classification task guide Token classification task guide Question answering task guide Causal language modeling task guide Masked language modeling task guide Multiple choice task guide CamembertConfig [[autodoc]] CamembertConfig CamembertTokenizer [[autodoc]] CamembertTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary CamembertTokenizerFast [[autodoc]] CamembertTokenizerFast CamembertModel [[autodoc]] CamembertModel CamembertForCausalLM [[autodoc]] CamembertForCausalLM CamembertForMaskedLM [[autodoc]] CamembertForMaskedLM CamembertForSequenceClassification [[autodoc]] CamembertForSequenceClassification CamembertForMultipleChoice [[autodoc]] CamembertForMultipleChoice CamembertForTokenClassification [[autodoc]] CamembertForTokenClassification CamembertForQuestionAnswering [[autodoc]] CamembertForQuestionAnswering TFCamembertModel [[autodoc]] TFCamembertModel TFCamembertForCausalLM [[autodoc]] TFCamembertForCausalLM TFCamembertForMaskedLM [[autodoc]] TFCamembertForMaskedLM TFCamembertForSequenceClassification [[autodoc]] TFCamembertForSequenceClassification TFCamembertForMultipleChoice [[autodoc]] TFCamembertForMultipleChoice TFCamembertForTokenClassification [[autodoc]] TFCamembertForTokenClassification TFCamembertForQuestionAnswering [[autodoc]] TFCamembertForQuestionAnswering
GPT-NeoX-Japanese Overview We introduce GPT-NeoX-Japanese, which is an autoregressive language model for Japanese, trained on top of https://github.com/EleutherAI/gpt-neox. Japanese is a unique language with its large vocabulary and a combination of hiragana, katakana, and kanji writing scripts. To address this distinct structure of the Japanese language, we use a special sub-word tokenizer. We are very grateful to tanreinama for open-sourcing this incredibly helpful tokenizer. Following the recommendations from Google's research on PaLM, we have removed bias parameters from transformer blocks, achieving better model performance. Please refer this article in detail. Development of the model was led by Shinya Otani, Takayoshi Makabe, Anuj Arora, and Kyo Hattori from ABEJA, Inc.. For more information on this model-building activity, please refer here (ja). Usage example The generate() method can be used to generate text using GPT NeoX Japanese model. thon from transformers import GPTNeoXJapaneseForCausalLM, GPTNeoXJapaneseTokenizer model = GPTNeoXJapaneseForCausalLM.from_pretrained("abeja/gpt-neox-japanese-2.7b") tokenizer = GPTNeoXJapaneseTokenizer.from_pretrained("abeja/gpt-neox-japanese-2.7b") prompt = "人とAIが協調するためには、" input_ids = tokenizer(prompt, return_tensors="pt").input_ids gen_tokens = model.generate( input_ids, do_sample=True, temperature=0.9, max_length=100, ) gen_text = tokenizer.batch_decode(gen_tokens, skip_special_tokens=True)[0] print(gen_text) 人とAIが協調するためには、AIと人が共存し、AIを正しく理解する必要があります。 Resources Causal language modeling task guide GPTNeoXJapaneseConfig [[autodoc]] GPTNeoXJapaneseConfig GPTNeoXJapaneseTokenizer [[autodoc]] GPTNeoXJapaneseTokenizer GPTNeoXJapaneseModel [[autodoc]] GPTNeoXJapaneseModel - forward GPTNeoXJapaneseForCausalLM [[autodoc]] GPTNeoXJapaneseForCausalLM - forward
MRA Overview The MRA model was proposed in Multi Resolution Analysis (MRA) for Approximate Self-Attention by Zhanpeng Zeng, Sourav Pal, Jeffery Kline, Glenn M Fung, and Vikas Singh. The abstract from the paper is the following: Transformers have emerged as a preferred model for many tasks in natural language processing and vision. Recent efforts on training and deploying Transformers more efficiently have identified many strategies to approximate the self-attention matrix, a key module in a Transformer architecture. Effective ideas include various prespecified sparsity patterns, low-rank basis expansions and combinations thereof. In this paper, we revisit classical Multiresolution Analysis (MRA) concepts such as Wavelets, whose potential value in this setting remains underexplored thus far. We show that simple approximations based on empirical feedback and design choices informed by modern hardware and implementation challenges, eventually yield a MRA-based approach for self-attention with an excellent performance profile across most criteria of interest. We undertake an extensive set of experiments and demonstrate that this multi-resolution scheme outperforms most efficient self-attention proposals and is favorable for both short and long sequences. Code is available at https://github.com/mlpen/mra-attention. This model was contributed by novice03. The original code can be found here. MraConfig [[autodoc]] MraConfig MraModel [[autodoc]] MraModel - forward MraForMaskedLM [[autodoc]] MraForMaskedLM - forward MraForSequenceClassification [[autodoc]] MraForSequenceClassification - forward MraForMultipleChoice [[autodoc]] MraForMultipleChoice - forward MraForTokenClassification [[autodoc]] MraForTokenClassification - forward MraForQuestionAnswering [[autodoc]] MraForQuestionAnswering - forward
Pop2Piano Overview The Pop2Piano model was proposed in Pop2Piano : Pop Audio-based Piano Cover Generation by Jongho Choi and Kyogu Lee. Piano covers of pop music are widely enjoyed, but generating them from music is not a trivial task. It requires great expertise with playing piano as well as knowing different characteristics and melodies of a song. With Pop2Piano you can directly generate a cover from a song's audio waveform. It is the first model to directly generate a piano cover from pop audio without melody and chord extraction modules. Pop2Piano is an encoder-decoder Transformer model based on T5. The input audio is transformed to its waveform and passed to the encoder, which transforms it to a latent representation. The decoder uses these latent representations to generate token ids in an autoregressive way. Each token id corresponds to one of four different token types: time, velocity, note and 'special'. The token ids are then decoded to their equivalent MIDI file. The abstract from the paper is the following: Piano covers of pop music are enjoyed by many people. However, the task of automatically generating piano covers of pop music is still understudied. This is partly due to the lack of synchronized {Pop, Piano Cover} data pairs, which made it challenging to apply the latest data-intensive deep learning-based methods. To leverage the power of the data-driven approach, we make a large amount of paired and synchronized {Pop, Piano Cover} data using an automated pipeline. In this paper, we present Pop2Piano, a Transformer network that generates piano covers given waveforms of pop music. To the best of our knowledge, this is the first model to generate a piano cover directly from pop audio without using melody and chord extraction modules. We show that Pop2Piano, trained with our dataset, is capable of producing plausible piano covers. This model was contributed by Susnato Dhar. The original code can be found here. Usage tips To use Pop2Piano, you will need to install the 🤗 Transformers library, as well as the following third party modules: pip install pretty-midi==0.2.9 essentia==2.1b6.dev1034 librosa scipy Please note that you may need to restart your runtime after installation. Pop2Piano is an Encoder-Decoder based model like T5. Pop2Piano can be used to generate midi-audio files for a given audio sequence. Choosing different composers in Pop2PianoForConditionalGeneration.generate() can lead to variety of different results. Setting the sampling rate to 44.1 kHz when loading the audio file can give good performance. Though Pop2Piano was mainly trained on Korean Pop music, it also does pretty well on other Western Pop or Hip Hop songs. 
Examples Example using HuggingFace Dataset: thon from datasets import load_dataset from transformers import Pop2PianoForConditionalGeneration, Pop2PianoProcessor model = Pop2PianoForConditionalGeneration.from_pretrained("sweetcocoa/pop2piano") processor = Pop2PianoProcessor.from_pretrained("sweetcocoa/pop2piano") ds = load_dataset("sweetcocoa/pop2piano_ci", split="test") inputs = processor( audio=ds["audio"][0]["array"], sampling_rate=ds["audio"][0]["sampling_rate"], return_tensors="pt" ) model_output = model.generate(input_features=inputs["input_features"], composer="composer1") tokenizer_output = processor.batch_decode( token_ids=model_output, feature_extractor_output=inputs )["pretty_midi_objects"][0] tokenizer_output.write("./Outputs/midi_output.mid") Example using your own audio file: thon import librosa from transformers import Pop2PianoForConditionalGeneration, Pop2PianoProcessor audio, sr = librosa.load("", sr=44100) # feel free to change the sr to a suitable value. model = Pop2PianoForConditionalGeneration.from_pretrained("sweetcocoa/pop2piano") processor = Pop2PianoProcessor.from_pretrained("sweetcocoa/pop2piano") inputs = processor(audio=audio, sampling_rate=sr, return_tensors="pt") model_output = model.generate(input_features=inputs["input_features"], composer="composer1") tokenizer_output = processor.batch_decode( token_ids=model_output, feature_extractor_output=inputs )["pretty_midi_objects"][0] tokenizer_output.write("./Outputs/midi_output.mid") Example of processing multiple audio files in batch: thon import librosa from transformers import Pop2PianoForConditionalGeneration, Pop2PianoProcessor feel free to change the sr to a suitable value. audio1, sr1 = librosa.load("", sr=44100) audio2, sr2 = librosa.load("", sr=44100) model = Pop2PianoForConditionalGeneration.from_pretrained("sweetcocoa/pop2piano") processor = Pop2PianoProcessor.from_pretrained("sweetcocoa/pop2piano") inputs = processor(audio=[audio1, audio2], sampling_rate=[sr1, sr2], return_attention_mask=True, return_tensors="pt") Since we now generating in batch(2 audios) we must pass the attention_mask model_output = model.generate( input_features=inputs["input_features"], attention_mask=inputs["attention_mask"], composer="composer1", ) tokenizer_output = processor.batch_decode( token_ids=model_output, feature_extractor_output=inputs )["pretty_midi_objects"] Since we now have 2 generated MIDI files tokenizer_output[0].write("./Outputs/midi_output1.mid") tokenizer_output[1].write("./Outputs/midi_output2.mid") Example of processing multiple audio files in batch (Using Pop2PianoFeatureExtractor and Pop2PianoTokenizer): thon import librosa from transformers import Pop2PianoForConditionalGeneration, Pop2PianoFeatureExtractor, Pop2PianoTokenizer feel free to change the sr to a suitable value. 
audio1, sr1 = librosa.load("", sr=44100) audio2, sr2 = librosa.load("", sr=44100) model = Pop2PianoForConditionalGeneration.from_pretrained("sweetcocoa/pop2piano") feature_extractor = Pop2PianoFeatureExtractor.from_pretrained("sweetcocoa/pop2piano") tokenizer = Pop2PianoTokenizer.from_pretrained("sweetcocoa/pop2piano") inputs = feature_extractor( audio=[audio1, audio2], sampling_rate=[sr1, sr2], return_attention_mask=True, return_tensors="pt", ) Since we now generating in batch(2 audios) we must pass the attention_mask model_output = model.generate( input_features=inputs["input_features"], attention_mask=inputs["attention_mask"], composer="composer1", ) tokenizer_output = tokenizer.batch_decode( token_ids=model_output, feature_extractor_output=inputs )["pretty_midi_objects"] Since we now have 2 generated MIDI files tokenizer_output[0].write("./Outputs/midi_output1.mid") tokenizer_output[1].write("./Outputs/midi_output2.mid") Pop2PianoConfig [[autodoc]] Pop2PianoConfig Pop2PianoFeatureExtractor [[autodoc]] Pop2PianoFeatureExtractor - call Pop2PianoForConditionalGeneration [[autodoc]] Pop2PianoForConditionalGeneration - forward - generate Pop2PianoTokenizer [[autodoc]] Pop2PianoTokenizer - call Pop2PianoProcessor [[autodoc]] Pop2PianoProcessor - call
ConvNeXt V2 Overview The ConvNeXt V2 model was proposed in ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders by Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, Saining Xie. ConvNeXt V2 is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, and a successor of ConvNeXT. The abstract from the paper is the following: Driven by improved architectures and better representation learning frameworks, the field of visual recognition has enjoyed rapid modernization and performance boost in the early 2020s. For example, modern ConvNets, represented by ConvNeXt, have demonstrated strong performance in various scenarios. While these models were originally designed for supervised learning with ImageNet labels, they can also potentially benefit from self-supervised learning techniques such as masked autoencoders (MAE). However, we found that simply combining these two approaches leads to subpar performance. In this paper, we propose a fully convolutional masked autoencoder framework and a new Global Response Normalization (GRN) layer that can be added to the ConvNeXt architecture to enhance inter-channel feature competition. This co-design of self-supervised learning techniques and architectural improvement results in a new model family called ConvNeXt V2, which significantly improves the performance of pure ConvNets on various recognition benchmarks, including ImageNet classification, COCO detection, and ADE20K segmentation. We also provide pre-trained ConvNeXt V2 models of various sizes, ranging from an efficient 3.7M-parameter Atto model with 76.7% top-1 accuracy on ImageNet, to a 650M Huge model that achieves a state-of-the-art 88.9% accuracy using only public training data. ConvNeXt V2 architecture. Taken from the original paper. This model was contributed by adirik. The original code can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ConvNeXt V2. [ConvNextV2ForImageClassification] is supported by this example script and notebook. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. ConvNextV2Config [[autodoc]] ConvNextV2Config ConvNextV2Model [[autodoc]] ConvNextV2Model - forward ConvNextV2ForImageClassification [[autodoc]] ConvNextV2ForImageClassification - forward TFConvNextV2Model [[autodoc]] TFConvNextV2Model - call TFConvNextV2ForImageClassification [[autodoc]] TFConvNextV2ForImageClassification - call
Donut Overview The Donut model was proposed in OCR-free Document Understanding Transformer by Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park. Donut consists of an image Transformer encoder and an autoregressive text Transformer decoder to perform document understanding tasks such as document image classification, form understanding and visual question answering. The abstract from the paper is the following: Understanding document images (e.g., invoices) is a core but challenging task since it requires complex functions such as reading text and a holistic understanding of the document. Current Visual Document Understanding (VDU) methods outsource the task of reading text to off-the-shelf Optical Character Recognition (OCR) engines and focus on the understanding task with the OCR outputs. Although such OCR-based approaches have shown promising performance, they suffer from 1) high computational costs for using OCR; 2) inflexibility of OCR models on languages or types of document; 3) OCR error propagation to the subsequent process. To address these issues, in this paper, we introduce a novel OCR-free VDU model named Donut, which stands for Document understanding transformer. As the first step in OCR-free VDU research, we propose a simple architecture (i.e., Transformer) with a pre-training objective (i.e., cross-entropy loss). Donut is conceptually simple yet effective. Through extensive experiments and analyses, we show a simple OCR-free VDU model, Donut, achieves state-of-the-art performances on various VDU tasks in terms of both speed and accuracy. In addition, we offer a synthetic data generator that helps the model pre-training to be flexible in various languages and domains. Donut high-level overview. Taken from the original paper. This model was contributed by nielsr. The original code can be found here. Usage tips The quickest way to get started with Donut is by checking the tutorial notebooks, which show how to use the model at inference time as well as fine-tuning on custom data. Donut is always used within the VisionEncoderDecoder framework. Inference examples Donut's [VisionEncoderDecoder] model accepts images as input and makes use of [~generation.GenerationMixin.generate] to autoregressively generate text given the input image. The [DonutImageProcessor] class is responsible for preprocessing the input image and [XLMRobertaTokenizer/XLMRobertaTokenizerFast] decodes the generated target tokens to the target string. The [DonutProcessor] wraps [DonutImageProcessor] and [XLMRobertaTokenizer/XLMRobertaTokenizerFast] into a single instance to both extract the input features and decode the predicted token ids. 
Step-by-step Document Image Classification import re from transformers import DonutProcessor, VisionEncoderDecoderModel from datasets import load_dataset import torch processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-rvlcdip") model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-rvlcdip") device = "cuda" if torch.cuda.is_available() else "cpu" model.to(device) # doctest: +IGNORE_RESULT load document image dataset = load_dataset("hf-internal-testing/example-documents", split="test") image = dataset[1]["image"] prepare decoder inputs task_prompt = "" decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt").input_ids pixel_values = processor(image, return_tensors="pt").pixel_values outputs = model.generate( pixel_values.to(device), decoder_input_ids=decoder_input_ids.to(device), max_length=model.decoder.config.max_position_embeddings, pad_token_id=processor.tokenizer.pad_token_id, eos_token_id=processor.tokenizer.eos_token_id, use_cache=True, bad_words_ids=[[processor.tokenizer.unk_token_id]], return_dict_in_generate=True, ) sequence = processor.batch_decode(outputs.sequences)[0] sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "") sequence = re.sub(r"<.*?>", "", sequence, count=1).strip() # remove first task start token print(processor.token2json(sequence)) {'class': 'advertisement'} Step-by-step Document Parsing import re from transformers import DonutProcessor, VisionEncoderDecoderModel from datasets import load_dataset import torch processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-cord-v2") model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-cord-v2") device = "cuda" if torch.cuda.is_available() else "cpu" model.to(device) # doctest: +IGNORE_RESULT load document image dataset = load_dataset("hf-internal-testing/example-documents", split="test") image = dataset[2]["image"] prepare decoder inputs task_prompt = "" decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt").input_ids pixel_values = processor(image, return_tensors="pt").pixel_values outputs = model.generate( pixel_values.to(device), decoder_input_ids=decoder_input_ids.to(device), max_length=model.decoder.config.max_position_embeddings, pad_token_id=processor.tokenizer.pad_token_id, eos_token_id=processor.tokenizer.eos_token_id, use_cache=True, bad_words_ids=[[processor.tokenizer.unk_token_id]], return_dict_in_generate=True, ) sequence = processor.batch_decode(outputs.sequences)[0] sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "") sequence = re.sub(r"<.*?>", "", sequence, count=1).strip() # remove first task start token print(processor.token2json(sequence)) {'menu': {'nm': 'CINNAMON SUGAR', 'unitprice': '17,000', 'cnt': '1 x', 'price': '17,000'}, 'sub_total': {'subtotal_price': '17,000'}, 'total': {'total_price': '17,000', 'cashprice': '20,000', 'changeprice': '3,000'}} Step-by-step Document Visual Question Answering (DocVQA) import re from transformers import DonutProcessor, VisionEncoderDecoderModel from datasets import load_dataset import torch processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa") model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa") device = "cuda" if torch.cuda.is_available() else "cpu" model.to(device) # doctest: 
+IGNORE_RESULT load document image from the DocVQA dataset dataset = load_dataset("hf-internal-testing/example-documents", split="test") image = dataset[0]["image"] prepare decoder inputs task_prompt = "{user_input}" question = "When is the coffee break?" prompt = task_prompt.replace("{user_input}", question) decoder_input_ids = processor.tokenizer(prompt, add_special_tokens=False, return_tensors="pt").input_ids pixel_values = processor(image, return_tensors="pt").pixel_values outputs = model.generate( pixel_values.to(device), decoder_input_ids=decoder_input_ids.to(device), max_length=model.decoder.config.max_position_embeddings, pad_token_id=processor.tokenizer.pad_token_id, eos_token_id=processor.tokenizer.eos_token_id, use_cache=True, bad_words_ids=[[processor.tokenizer.unk_token_id]], return_dict_in_generate=True, ) sequence = processor.batch_decode(outputs.sequences)[0] sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "") sequence = re.sub(r"<.*?>", "", sequence, count=1).strip() # remove first task start token print(processor.token2json(sequence)) {'question': 'When is the coffee break?', 'answer': '11-14 to 11:39 a.m.'} See the model hub to look for Donut checkpoints. Training We refer to the tutorial notebooks. DonutSwinConfig [[autodoc]] DonutSwinConfig DonutImageProcessor [[autodoc]] DonutImageProcessor - preprocess DonutFeatureExtractor [[autodoc]] DonutFeatureExtractor - call DonutProcessor [[autodoc]] DonutProcessor - call - from_pretrained - save_pretrained - batch_decode - decode DonutSwinModel [[autodoc]] DonutSwinModel - forward
mLUKE Overview The mLUKE model was proposed in mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka. It's a multilingual extension of the LUKE model trained on the basis of XLM-RoBERTa. It is based on XLM-RoBERTa and adds entity embeddings, which helps improve performance on various downstream tasks involving reasoning about entities such as named entity recognition, extractive question answering, relation classification, cloze-style knowledge completion. The abstract from the paper is the following: Recent studies have shown that multilingual pretrained language models can be effectively improved with cross-lingual alignment information from Wikipedia entities. However, existing methods only exploit entity information in pretraining and do not explicitly use entities in downstream tasks. In this study, we explore the effectiveness of leveraging entity representations for downstream cross-lingual tasks. We train a multilingual language model with 24 languages with entity representations and show the model consistently outperforms word-based pretrained models in various cross-lingual transfer tasks. We also analyze the model and the key insight is that incorporating entity representations into the input allows us to extract more language-agnostic features. We also evaluate the model with a multilingual cloze prompt task with the mLAMA dataset. We show that entity-based prompt elicits correct factual knowledge more likely than using only word representations. This model was contributed by ryo0634. The original code can be found here. Usage tips One can directly plug in the weights of mLUKE into a LUKE model, like so: thon from transformers import LukeModel model = LukeModel.from_pretrained("studio-ousia/mluke-base") Note that mLUKE has its own tokenizer, [MLukeTokenizer]. You can initialize it as follows: thon from transformers import MLukeTokenizer tokenizer = MLukeTokenizer.from_pretrained("studio-ousia/mluke-base") As mLUKE's architecture is equivalent to that of LUKE, one can refer to LUKE's documentation page for all tips, code examples and notebooks. MLukeTokenizer [[autodoc]] MLukeTokenizer - call - save_vocabulary
QDQBERT Overview The QDQBERT model can be referenced in Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius. The abstract from the paper is the following: Quantization techniques can reduce the size of Deep Neural Networks and improve inference latency and throughput by taking advantage of high throughput integer instructions. In this paper we review the mathematical aspects of quantization parameters and evaluate their choices on a wide range of neural network models for different application domains, including vision, speech, and language. We focus on quantization techniques that are amenable to acceleration by processors with high-throughput integer math pipelines. We also present a workflow for 8-bit quantization that is able to maintain accuracy within 1% of the floating-point baseline on all networks studied, including models that are more difficult to quantize, such as MobileNets and BERT-large. This model was contributed by shangz. Usage tips QDQBERT model adds fake quantization operations (pair of QuantizeLinear/DequantizeLinear ops) to (i) linear layer inputs and weights, (ii) matmul inputs, (iii) residual add inputs, in BERT model. QDQBERT requires the dependency of Pytorch Quantization Toolkit. To install pip install pytorch-quantization --extra-index-url https://pypi.ngc.nvidia.com QDQBERT model can be loaded from any checkpoint of HuggingFace BERT model (for example google-bert/bert-base-uncased), and perform Quantization Aware Training/Post Training Quantization. A complete example of using QDQBERT model to perform Quatization Aware Training and Post Training Quantization for SQUAD task can be found at transformers/examples/research_projects/quantization-qdqbert/. Set default quantizers QDQBERT model adds fake quantization operations (pair of QuantizeLinear/DequantizeLinear ops) to BERT by TensorQuantizer in Pytorch Quantization Toolkit. TensorQuantizer is the module for quantizing tensors, with QuantDescriptor defining how the tensor should be quantized. Refer to Pytorch Quantization Toolkit userguide for more details. Before creating QDQBERT model, one has to set the default QuantDescriptor defining default tensor quantizers. Example: thon import pytorch_quantization.nn as quant_nn from pytorch_quantization.tensor_quant import QuantDescriptor The default tensor quantizer is set to use Max calibration method input_desc = QuantDescriptor(num_bits=8, calib_method="max") The default tensor quantizer is set to be per-channel quantization for weights weight_desc = QuantDescriptor(num_bits=8, axis=((0,))) quant_nn.QuantLinear.set_default_quant_desc_input(input_desc) quant_nn.QuantLinear.set_default_quant_desc_weight(weight_desc) Calibration Calibration is the terminology of passing data samples to the quantizer and deciding the best scaling factors for tensors. 
After setting up the tensor quantizers, one can use the following example to calibrate the model: thon Find the TensorQuantizer and enable calibration for name, module in model.named_modules(): if name.endswith("_input_quantizer"): module.enable_calib() module.disable_quant() # Use full precision data to calibrate Feeding data samples model(x) Finalize calibration for name, module in model.named_modules(): if name.endswith("_input_quantizer"): module.load_calib_amax() module.enable_quant() If running on GPU, it needs to call .cuda() again because new tensors will be created by calibration process model.cuda() Keep running the quantized model Export to ONNX The goal of exporting to ONNX is to deploy inference by TensorRT. Fake quantization will be broken into a pair of QuantizeLinear/DequantizeLinear ONNX ops. After setting static member of TensorQuantizer to use Pytorch’s own fake quantization functions, fake quantized model can be exported to ONNX, follow the instructions in torch.onnx. Example: thon from pytorch_quantization.nn import TensorQuantizer TensorQuantizer.use_fb_fake_quant = True Load the calibrated model ONNX export torch.onnx.export() Resources Text classification task guide Token classification task guide Question answering task guide Causal language modeling task guide Masked language modeling task guide Multiple choice task guide QDQBertConfig [[autodoc]] QDQBertConfig QDQBertModel [[autodoc]] QDQBertModel - forward QDQBertLMHeadModel [[autodoc]] QDQBertLMHeadModel - forward QDQBertForMaskedLM [[autodoc]] QDQBertForMaskedLM - forward QDQBertForSequenceClassification [[autodoc]] QDQBertForSequenceClassification - forward QDQBertForNextSentencePrediction [[autodoc]] QDQBertForNextSentencePrediction - forward QDQBertForMultipleChoice [[autodoc]] QDQBertForMultipleChoice - forward QDQBertForTokenClassification [[autodoc]] QDQBertForTokenClassification - forward QDQBertForQuestionAnswering [[autodoc]] QDQBertForQuestionAnswering - forward
BertGeneration Overview The BertGeneration model is a BERT model that can be leveraged for sequence-to-sequence tasks using [EncoderDecoderModel] as proposed in Leveraging Pre-trained Checkpoints for Sequence Generation Tasks by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. The abstract from the paper is the following: Unsupervised pretraining of large neural models has recently revolutionized Natural Language Processing. By warm-starting from the publicly released checkpoints, NLP practitioners have pushed the state-of-the-art on multiple benchmarks while saving significant amounts of compute time. So far the focus has been mainly on the Natural Language Understanding tasks. In this paper, we demonstrate the efficacy of pre-trained checkpoints for Sequence Generation. We developed a Transformer-based sequence-to-sequence model that is compatible with publicly available pre-trained BERT, GPT-2 and RoBERTa checkpoints and conducted an extensive empirical study on the utility of initializing our model, both encoder and decoder, with these checkpoints. Our models result in new state-of-the-art results on Machine Translation, Text Summarization, Sentence Splitting, and Sentence Fusion. This model was contributed by patrickvonplaten. The original code can be found here. Usage examples and tips The model can be used in combination with the [EncoderDecoderModel] to leverage two pretrained BERT checkpoints for subsequent fine-tuning: thon leverage checkpoints for Bert2Bert model use BERT's cls token as BOS token and sep token as EOS token encoder = BertGenerationEncoder.from_pretrained("google-bert/bert-large-uncased", bos_token_id=101, eos_token_id=102) add cross attention layers and use BERT's cls token as BOS token and sep token as EOS token decoder = BertGenerationDecoder.from_pretrained( "google-bert/bert-large-uncased", add_cross_attention=True, is_decoder=True, bos_token_id=101, eos_token_id=102 ) bert2bert = EncoderDecoderModel(encoder=encoder, decoder=decoder) create tokenizer tokenizer = BertTokenizer.from_pretrained("google-bert/bert-large-uncased") input_ids = tokenizer( "This is a long article to summarize", add_special_tokens=False, return_tensors="pt" ).input_ids labels = tokenizer("This is a short summary", return_tensors="pt").input_ids train loss = bert2bert(input_ids=input_ids, decoder_input_ids=labels, labels=labels).loss loss.backward() Pretrained [EncoderDecoderModel] are also directly available in the model hub, e.g.: thon instantiate sentence fusion model sentence_fuser = EncoderDecoderModel.from_pretrained("google/roberta2roberta_L-24_discofuse") tokenizer = AutoTokenizer.from_pretrained("google/roberta2roberta_L-24_discofuse") input_ids = tokenizer( "This is the first sentence. This is the second sentence.", add_special_tokens=False, return_tensors="pt" ).input_ids outputs = sentence_fuser.generate(input_ids) print(tokenizer.decode(outputs[0])) Tips: [BertGenerationEncoder] and [BertGenerationDecoder] should be used in combination with [EncoderDecoder]. For summarization, sentence splitting, sentence fusion and translation, no special tokens are required for the input. Therefore, no EOS token should be added to the end of the input. BertGenerationConfig [[autodoc]] BertGenerationConfig BertGenerationTokenizer [[autodoc]] BertGenerationTokenizer - save_vocabulary BertGenerationEncoder [[autodoc]] BertGenerationEncoder - forward BertGenerationDecoder [[autodoc]] BertGenerationDecoder - forward
Transformer XL This model is in maintenance mode only, so we won't accept any new PRs changing its code. This model was deprecated due to security issues linked to pickle.load. We recommend switching to more recent models for improved security. In case you would still like to use TransfoXL in your experiments, we recommend using the Hub checkpoint with a specific revision to ensure you are downloading safe files from the Hub. You will need to set the environment variable TRUST_REMOTE_CODE to True in order to allow the usage of pickle.load():

```python
import os
from transformers import TransfoXLTokenizer, TransfoXLLMHeadModel

os.environ["TRUST_REMOTE_CODE"] = "True"

checkpoint = "transfo-xl/transfo-xl-wt103"
revision = "40a186da79458c9f9de846edfaea79c412137f97"

tokenizer = TransfoXLTokenizer.from_pretrained(checkpoint, revision=revision)
model = TransfoXLLMHeadModel.from_pretrained(checkpoint, revision=revision)
```

If you run into any issues running this model, please reinstall the last version that supported this model: v4.35.0. You can do so by running the following command: pip install -U transformers==4.35.0. Overview The Transformer-XL model was proposed in Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context by Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov. It's a causal (uni-directional) transformer with relative positioning (sinusoïdal) embeddings which can reuse previously computed hidden-states to attend to longer context (memory). This model also uses adaptive softmax inputs and outputs (tied). The abstract from the paper is the following: Transformers have a potential of learning longer-term dependency, but are limited by a fixed-length context in the setting of language modeling. We propose a novel neural architecture Transformer-XL that enables learning dependency beyond a fixed length without disrupting temporal coherence. It consists of a segment-level recurrence mechanism and a novel positional encoding scheme. Our method not only enables capturing longer-term dependency, but also resolves the context fragmentation problem. As a result, Transformer-XL learns dependency that is 80% longer than RNNs and 450% longer than vanilla Transformers, achieves better performance on both short and long sequences, and is up to 1,800+ times faster than vanilla Transformers during evaluation. Notably, we improve the state-of-the-art results of bpc/perplexity to 0.99 on enwiki8, 1.08 on text8, 18.3 on WikiText-103, 21.8 on One Billion Word, and 54.5 on Penn Treebank (without finetuning). When trained only on WikiText-103, Transformer-XL manages to generate reasonably coherent, novel text articles with thousands of tokens. This model was contributed by thomwolf. The original code can be found here. Usage tips Transformer-XL uses relative sinusoidal positional embeddings. Padding can be done on the left or on the right. The original implementation trains on SQuAD with padding on the left, therefore the padding defaults are set to left. Transformer-XL is one of the few models that has no sequence length limit. Same as a regular GPT model, but introduces a recurrence mechanism for two consecutive segments (similar to regular RNNs with two consecutive inputs). In this context, a segment is a number of consecutive tokens (for instance 512) that may span across multiple documents, and segments are fed in order to the model.
Basically, the hidden states of the previous segment are concatenated to the current input to compute the attention scores. This allows the model to pay attention to information that was in the previous segment as well as the current one. By stacking multiple attention layers, the receptive field can be increased to multiple previous segments. This changes the positional embeddings to positional relative embeddings (as the regular positional embeddings would give the same results in the current input and the current hidden state at a given position) and needs to make some adjustments in the way attention scores are computed. TransformerXL does not work with torch.nn.DataParallel due to a bug in PyTorch, see issue #36035 Resources Text classification task guide Causal language modeling task guide TransfoXLConfig [[autodoc]] TransfoXLConfig TransfoXLTokenizer [[autodoc]] TransfoXLTokenizer - save_vocabulary TransfoXL specific outputs [[autodoc]] models.deprecated.transfo_xl.modeling_transfo_xl.TransfoXLModelOutput [[autodoc]] models.deprecated.transfo_xl.modeling_transfo_xl.TransfoXLLMHeadModelOutput [[autodoc]] models.deprecated.transfo_xl.modeling_tf_transfo_xl.TFTransfoXLModelOutput [[autodoc]] models.deprecated.transfo_xl.modeling_tf_transfo_xl.TFTransfoXLLMHeadModelOutput TransfoXLModel [[autodoc]] TransfoXLModel - forward TransfoXLLMHeadModel [[autodoc]] TransfoXLLMHeadModel - forward TransfoXLForSequenceClassification [[autodoc]] TransfoXLForSequenceClassification - forward TFTransfoXLModel [[autodoc]] TFTransfoXLModel - call TFTransfoXLLMHeadModel [[autodoc]] TFTransfoXLLMHeadModel - call TFTransfoXLForSequenceClassification [[autodoc]] TFTransfoXLForSequenceClassification - call Internal Layers [[autodoc]] AdaptiveEmbedding [[autodoc]] TFAdaptiveEmbedding
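To make the segment-level recurrence described in the usage tips above concrete, the sketch below feeds two consecutive segments and carries the cached hidden states (mems) from one forward pass to the next. It reuses the pinned-revision, TRUST_REMOTE_CODE setup shown at the top of this section; the segment texts are illustrative.

```python
import os
import torch
from transformers import TransfoXLTokenizer, TransfoXLLMHeadModel

# Same safety setup as above: pin a known revision and opt in to pickle.load
os.environ["TRUST_REMOTE_CODE"] = "True"
checkpoint = "transfo-xl/transfo-xl-wt103"
revision = "40a186da79458c9f9de846edfaea79c412137f97"

tokenizer = TransfoXLTokenizer.from_pretrained(checkpoint, revision=revision)
model = TransfoXLLMHeadModel.from_pretrained(checkpoint, revision=revision)
model.eval()

# Two illustrative consecutive segments of one document
segments = ["The first segment of a long document .", "The second segment that follows it ."]

mems = None
for text in segments:
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(input_ids=inputs["input_ids"], mems=mems)
    # `mems` caches the hidden states of this segment so the next forward pass
    # can attend to them as extended context
    mems = outputs.mems
```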
DETA Overview The DETA model was proposed in NMS Strikes Back by Jeffrey Ouyang-Zhang, Jang Hyun Cho, Xingyi Zhou, Philipp Krähenbühl. DETA (short for Detection Transformers with Assignment) improves Deformable DETR by replacing the one-to-one bipartite Hungarian matching loss with one-to-many label assignments used in traditional detectors with non-maximum suppression (NMS). This leads to significant gains of up to 2.5 mAP. The abstract from the paper is the following: Detection Transformer (DETR) directly transforms queries to unique objects by using one-to-one bipartite matching during training and enables end-to-end object detection. Recently, these models have surpassed traditional detectors on COCO with undeniable elegance. However, they differ from traditional detectors in multiple designs, including model architecture and training schedules, and thus the effectiveness of one-to-one matching is not fully understood. In this work, we conduct a strict comparison between the one-to-one Hungarian matching in DETRs and the one-to-many label assignments in traditional detectors with non-maximum supervision (NMS). Surprisingly, we observe one-to-many assignments with NMS consistently outperform standard one-to-one matching under the same setting, with a significant gain of up to 2.5 mAP. Our detector that trains Deformable-DETR with traditional IoU-based label assignment achieved 50.2 COCO mAP within 12 epochs (1x schedule) with ResNet50 backbone, outperforming all existing traditional or transformer-based detectors in this setting. On multiple datasets, schedules, and architectures, we consistently show bipartite matching is unnecessary for performant detection transformers. Furthermore, we attribute the success of detection transformers to their expressive transformer architecture. DETA overview. Taken from the original paper. This model was contributed by nielsr. The original code can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DETA. Demo notebooks for DETA can be found here. See also: Object detection task guide If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. DetaConfig [[autodoc]] DetaConfig DetaImageProcessor [[autodoc]] DetaImageProcessor - preprocess - post_process_object_detection DetaModel [[autodoc]] DetaModel - forward DetaForObjectDetection [[autodoc]] DetaForObjectDetection - forward
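To illustrate the inference flow implied by the image processor and post-processing API listed above, here is a minimal sketch. The checkpoint name, test image, and 0.5 confidence threshold are illustrative choices, not requirements of the model.

```python
import requests
import torch
from PIL import Image
from transformers import DetaForObjectDetection, DetaImageProcessor

# Illustrative checkpoint and test image
checkpoint = "jozhang97/deta-swin-large"
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = DetaImageProcessor.from_pretrained(checkpoint)
model = DetaForObjectDetection.from_pretrained(checkpoint)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw outputs to (xmin, ymin, xmax, ymax) boxes, keeping detections above 0.5 confidence
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, threshold=0.5, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 2), [round(c, 1) for c in box.tolist()])
```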
Starcoder2 Overview StarCoder2 is a family of open LLMs for code and comes in 3 different sizes with 3B, 7B and 15B parameters. The flagship StarCoder2-15B model is trained on over 4 trillion tokens and 600+ programming languages from The Stack v2. All models use Grouped Query Attention, a context window of 16,384 tokens with a sliding window attention of 4,096 tokens, and were trained using the Fill-in-the-Middle objective. The models have been released with the paper StarCoder 2 and The Stack v2: The Next Generation by Anton Lozhkov, Raymond Li, Loubna Ben Allal, Federico Cassano, Joel Lamy-Poirier, Nouamane Tazi, Ao Tang, Dmytro Pykhtar, Jiawei Liu, Yuxiang Wei, Tianyang Liu, Max Tian, Denis Kocetkov, Arthur Zucker, Younes Belkada, Zijian Wang, Qian Liu, Dmitry Abulkhanov, Indraneil Paul, Zhuang Li, Wen-Ding Li, Megan Risdal, Jia Li, Jian Zhu, Terry Yue Zhuo, Evgenii Zheltonozhskii, Nii Osae Osae Dade, Wenhao Yu, Lucas Krauß, Naman Jain, Yixuan Su, Xuanli He, Manan Dey, Edoardo Abati, Yekun Chai, Niklas Muennighoff, Xiangru Tang, Muhtasham Oblokulov, Christopher Akiki, Marc Marone, Chenghao Mou, Mayank Mishra, Alex Gu, Binyuan Hui, Tri Dao, Armel Zebaze, Olivier Dehaene, Nicolas Patry, Canwen Xu, Julian McAuley, Han Hu, Torsten Scholak, Sebastien Paquet, Jennifer Robinson, Carolyn Jane Anderson, Nicolas Chapados, Mostofa Patwary, Nima Tajbakhsh, Yacine Jernite, Carlos Muñoz Ferrandis, Lingming Zhang, Sean Hughes, Thomas Wolf, Arjun Guha, Leandro von Werra, and Harm de Vries. The abstract of the paper is the following: The BigCode project, an open-scientific collaboration focused on the responsible development of Large Language Models for Code (Code LLMs), introduces StarCoder2. In partnership with Software Heritage (SWH), we build The Stack v2 on top of the digital commons of their source code archive. Alongside the SWH repositories spanning 619 programming languages, we carefully select other high-quality data sources, such as GitHub pull requests, Kaggle notebooks, and code documentation. This results in a training set that is 4x larger than the first StarCoder dataset. We train StarCoder2 models with 3B, 7B, and 15B parameters on 3.3 to 4.3 trillion tokens and thoroughly evaluate them on a comprehensive set of Code LLM benchmarks. We find that our small model, StarCoder2-3B, outperforms other Code LLMs of similar size on most benchmarks, and also outperforms StarCoderBase-15B. Our large model, StarCoder2- 15B, significantly outperforms other models of comparable size. In addition, it matches or outperforms CodeLlama-34B, a model more than twice its size. Although DeepSeekCoder- 33B is the best-performing model at code completion for high-resource languages, we find that StarCoder2-15B outperforms it on math and code reasoning benchmarks, as well as several low-resource languages. We make the model weights available under an OpenRAIL license and ensure full transparency regarding the training data by releasing the SoftWare Heritage persistent IDentifiers (SWHIDs) of the source code data. License The models are licensed under the BigCode OpenRAIL-M v1 license agreement. Usage tips The StarCoder2 models can be found in the HuggingFace hub. You can find some examples for inference and fine-tuning in StarCoder2's GitHub repo. 
These ready-to-use checkpoints can be downloaded and used via the HuggingFace Hub:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("bigcode/starcoder2-7b", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("bigcode/starcoder2-7b")

prompt = "def print_hello_world():"

# device_map="auto" already places the model, so move the inputs to the model's device
model_inputs = tokenizer([prompt], return_tensors="pt").to(model.device)

generated_ids = model.generate(**model_inputs, max_new_tokens=10, do_sample=False)
tokenizer.batch_decode(generated_ids)[0]
# "def print_hello_world():\n\treturn 'Hello World!'"
```

Starcoder2Config [[autodoc]] Starcoder2Config Starcoder2Model [[autodoc]] Starcoder2Model - forward Starcoder2ForCausalLM [[autodoc]] Starcoder2ForCausalLM - forward Starcoder2ForSequenceClassification [[autodoc]] Starcoder2ForSequenceClassification - forward
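For GPUs with limited memory, the same checkpoint can also be loaded with on-the-fly 4-bit quantization through bitsandbytes. This is a minimal sketch, assuming a CUDA GPU and the bitsandbytes package are available; the compute dtype and generation settings are illustrative choices.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit weights with bfloat16 compute (illustrative settings)
quantization_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

model = AutoModelForCausalLM.from_pretrained(
    "bigcode/starcoder2-7b", quantization_config=quantization_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("bigcode/starcoder2-7b")

inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```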
ELECTRA Overview The ELECTRA model was proposed in the paper ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. ELECTRA is a new pretraining approach which trains two transformer models: the generator and the discriminator. The generator's role is to replace tokens in a sequence, and is therefore trained as a masked language model. The discriminator, which is the model we're interested in, tries to identify which tokens were replaced by the generator in the sequence. The abstract from the paper is the following: Masked language modeling (MLM) pretraining methods such as BERT corrupt the input by replacing some tokens with [MASK] and then train a model to reconstruct the original tokens. While they produce good results when transferred to downstream NLP tasks, they generally require large amounts of compute to be effective. As an alternative, we propose a more sample-efficient pretraining task called replaced token detection. Instead of masking the input, our approach corrupts it by replacing some tokens with plausible alternatives sampled from a small generator network. Then, instead of training a model that predicts the original identities of the corrupted tokens, we train a discriminative model that predicts whether each token in the corrupted input was replaced by a generator sample or not. Thorough experiments demonstrate this new pretraining task is more efficient than MLM because the task is defined over all input tokens rather than just the small subset that was masked out. As a result, the contextual representations learned by our approach substantially outperform the ones learned by BERT given the same model size, data, and compute. The gains are particularly strong for small models; for example, we train a model on one GPU for 4 days that outperforms GPT (trained using 30x more compute) on the GLUE natural language understanding benchmark. Our approach also works well at scale, where it performs comparably to RoBERTa and XLNet while using less than 1/4 of their compute and outperforms them when using the same amount of compute. This model was contributed by lysandre. The original code can be found here. Usage tips ELECTRA is the pretraining approach, therefore there is nearly no changes done to the underlying model: BERT. The only change is the separation of the embedding size and the hidden size: the embedding size is generally smaller, while the hidden size is larger. An additional projection layer (linear) is used to project the embeddings from their embedding size to the hidden size. In the case where the embedding size is the same as the hidden size, no projection layer is used. ELECTRA is a transformer model pretrained with the use of another (small) masked language model. The inputs are corrupted by that language model, which takes an input text that is randomly masked and outputs a text in which ELECTRA has to predict which token is an original and which one has been replaced. Like for GAN training, the small language model is trained for a few steps (but with the original texts as objective, not to fool the ELECTRA model like in a traditional GAN setting) then the ELECTRA model is trained for a few steps. The ELECTRA checkpoints saved using Google Research's implementation contain both the generator and discriminator. The conversion script requires the user to name which model to export into the correct architecture. Once converted to the HuggingFace format, these checkpoints may be loaded into all available ELECTRA models, however. 
This means that the discriminator may be loaded in the [ElectraForMaskedLM] model, and the generator may be loaded in the [ElectraForPreTraining] model (the classification head will be randomly initialized as it doesn't exist in the generator). Resources Text classification task guide Token classification task guide Question answering task guide Causal language modeling task guide Masked language modeling task guide Multiple choice task guide ElectraConfig [[autodoc]] ElectraConfig ElectraTokenizer [[autodoc]] ElectraTokenizer ElectraTokenizerFast [[autodoc]] ElectraTokenizerFast Electra specific outputs [[autodoc]] models.electra.modeling_electra.ElectraForPreTrainingOutput [[autodoc]] models.electra.modeling_tf_electra.TFElectraForPreTrainingOutput ElectraModel [[autodoc]] ElectraModel - forward ElectraForPreTraining [[autodoc]] ElectraForPreTraining - forward ElectraForCausalLM [[autodoc]] ElectraForCausalLM - forward ElectraForMaskedLM [[autodoc]] ElectraForMaskedLM - forward ElectraForSequenceClassification [[autodoc]] ElectraForSequenceClassification - forward ElectraForMultipleChoice [[autodoc]] ElectraForMultipleChoice - forward ElectraForTokenClassification [[autodoc]] ElectraForTokenClassification - forward ElectraForQuestionAnswering [[autodoc]] ElectraForQuestionAnswering - forward TFElectraModel [[autodoc]] TFElectraModel - call TFElectraForPreTraining [[autodoc]] TFElectraForPreTraining - call TFElectraForMaskedLM [[autodoc]] TFElectraForMaskedLM - call TFElectraForSequenceClassification [[autodoc]] TFElectraForSequenceClassification - call TFElectraForMultipleChoice [[autodoc]] TFElectraForMultipleChoice - call TFElectraForTokenClassification [[autodoc]] TFElectraForTokenClassification - call TFElectraForQuestionAnswering [[autodoc]] TFElectraForQuestionAnswering - call FlaxElectraModel [[autodoc]] FlaxElectraModel - call FlaxElectraForPreTraining [[autodoc]] FlaxElectraForPreTraining - call FlaxElectraForCausalLM [[autodoc]] FlaxElectraForCausalLM - call FlaxElectraForMaskedLM [[autodoc]] FlaxElectraForMaskedLM - call FlaxElectraForSequenceClassification [[autodoc]] FlaxElectraForSequenceClassification - call FlaxElectraForMultipleChoice [[autodoc]] FlaxElectraForMultipleChoice - call FlaxElectraForTokenClassification [[autodoc]] FlaxElectraForTokenClassification - call FlaxElectraForQuestionAnswering [[autodoc]] FlaxElectraForQuestionAnswering - call
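To make the replaced-token-detection objective described in this section concrete, the following sketch runs the small discriminator checkpoint over a sentence and flags, token by token, which ones it believes were replaced. The example sentence and the simple sign-of-logit decision rule are illustrative.

```python
import torch
from transformers import AutoTokenizer, ElectraForPreTraining

tokenizer = AutoTokenizer.from_pretrained("google/electra-small-discriminator")
model = ElectraForPreTraining.from_pretrained("google/electra-small-discriminator")

# "fake" stands in for a token a generator might have substituted for "jumps"
sentence = "The quick brown fox fake over the lazy dog"
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0]

# A positive logit means the discriminator predicts the token was replaced
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, score in zip(tokens, logits):
    print(f"{token}: {'replaced' if score > 0 else 'original'}")
```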
RoBERTa-PreLayerNorm Overview The RoBERTa-PreLayerNorm model was proposed in fairseq: A Fast, Extensible Toolkit for Sequence Modeling by Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, Michael Auli. It is identical to using the --encoder-normalize-before flag in fairseq. The abstract from the paper is the following: fairseq is an open-source sequence modeling toolkit that allows researchers and developers to train custom models for translation, summarization, language modeling, and other text generation tasks. The toolkit is based on PyTorch and supports distributed training across multiple GPUs and machines. We also support fast mixed-precision training and inference on modern GPUs. This model was contributed by andreasmaden. The original code can be found here. Usage tips The implementation is the same as Roberta except instead of using Add and Norm it does Norm and Add. Add and Norm refers to the Addition and LayerNormalization as described in Attention Is All You Need. This is identical to using the --encoder-normalize-before flag in fairseq. Resources Text classification task guide Token classification task guide Question answering task guide Causal language modeling task guide Masked language modeling task guide Multiple choice task guide RobertaPreLayerNormConfig [[autodoc]] RobertaPreLayerNormConfig RobertaPreLayerNormModel [[autodoc]] RobertaPreLayerNormModel - forward RobertaPreLayerNormForCausalLM [[autodoc]] RobertaPreLayerNormForCausalLM - forward RobertaPreLayerNormForMaskedLM [[autodoc]] RobertaPreLayerNormForMaskedLM - forward RobertaPreLayerNormForSequenceClassification [[autodoc]] RobertaPreLayerNormForSequenceClassification - forward RobertaPreLayerNormForMultipleChoice [[autodoc]] RobertaPreLayerNormForMultipleChoice - forward RobertaPreLayerNormForTokenClassification [[autodoc]] RobertaPreLayerNormForTokenClassification - forward RobertaPreLayerNormForQuestionAnswering [[autodoc]] RobertaPreLayerNormForQuestionAnswering - forward TFRobertaPreLayerNormModel [[autodoc]] TFRobertaPreLayerNormModel - call TFRobertaPreLayerNormForCausalLM [[autodoc]] TFRobertaPreLayerNormForCausalLM - call TFRobertaPreLayerNormForMaskedLM [[autodoc]] TFRobertaPreLayerNormForMaskedLM - call TFRobertaPreLayerNormForSequenceClassification [[autodoc]] TFRobertaPreLayerNormForSequenceClassification - call TFRobertaPreLayerNormForMultipleChoice [[autodoc]] TFRobertaPreLayerNormForMultipleChoice - call TFRobertaPreLayerNormForTokenClassification [[autodoc]] TFRobertaPreLayerNormForTokenClassification - call TFRobertaPreLayerNormForQuestionAnswering [[autodoc]] TFRobertaPreLayerNormForQuestionAnswering - call FlaxRobertaPreLayerNormModel [[autodoc]] FlaxRobertaPreLayerNormModel - call FlaxRobertaPreLayerNormForCausalLM [[autodoc]] FlaxRobertaPreLayerNormForCausalLM - call FlaxRobertaPreLayerNormForMaskedLM [[autodoc]] FlaxRobertaPreLayerNormForMaskedLM - call FlaxRobertaPreLayerNormForSequenceClassification [[autodoc]] FlaxRobertaPreLayerNormForSequenceClassification - call FlaxRobertaPreLayerNormForMultipleChoice [[autodoc]] FlaxRobertaPreLayerNormForMultipleChoice - call FlaxRobertaPreLayerNormForTokenClassification [[autodoc]] FlaxRobertaPreLayerNormForTokenClassification - call FlaxRobertaPreLayerNormForQuestionAnswering [[autodoc]] FlaxRobertaPreLayerNormForQuestionAnswering - call
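The ordering difference mentioned in the usage tips is easiest to see side by side. The sketch below is purely illustrative of the two residual-block orderings; it is not the actual RobertaPreLayerNorm implementation.

```python
import torch
from torch import nn


def post_layernorm_block(x: torch.Tensor, sublayer: nn.Module, norm: nn.LayerNorm) -> torch.Tensor:
    """RoBERTa / original Transformer ordering: Add, then Norm."""
    return norm(x + sublayer(x))


def pre_layernorm_block(x: torch.Tensor, sublayer: nn.Module, norm: nn.LayerNorm) -> torch.Tensor:
    """RoBERTa-PreLayerNorm (--encoder-normalize-before) ordering: Norm, then Add."""
    return x + sublayer(norm(x))


# Tiny usage example with a toy feed-forward sublayer
hidden = torch.randn(1, 4, 16)
ffn = nn.Sequential(nn.Linear(16, 64), nn.GELU(), nn.Linear(64, 16))
norm = nn.LayerNorm(16)
print(post_layernorm_block(hidden, ffn, norm).shape, pre_layernorm_block(hidden, ffn, norm).shape)
```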
MobileViTV2 Overview The MobileViTV2 model was proposed in Separable Self-attention for Mobile Vision Transformers by Sachin Mehta and Mohammad Rastegari. MobileViTV2 is the second version of MobileViT, constructed by replacing the multi-headed self-attention in MobileViT with separable self-attention. The abstract from the paper is the following: Mobile vision transformers (MobileViT) can achieve state-of-the-art performance across several mobile vision tasks, including classification and detection. Though these models have fewer parameters, they have high latency as compared to convolutional neural network-based models. The main efficiency bottleneck in MobileViT is the multi-headed self-attention (MHA) in transformers, which requires O(k2) time complexity with respect to the number of tokens (or patches) k. Moreover, MHA requires costly operations (e.g., batch-wise matrix multiplication) for computing self-attention, impacting latency on resource-constrained devices. This paper introduces a separable self-attention method with linear complexity, i.e. O(k). A simple yet effective characteristic of the proposed method is that it uses element-wise operations for computing self-attention, making it a good choice for resource-constrained devices. The improved model, MobileViTV2, is state-of-the-art on several mobile vision tasks, including ImageNet object classification and MS-COCO object detection. With about three million parameters, MobileViTV2 achieves a top-1 accuracy of 75.6% on the ImageNet dataset, outperforming MobileViT by about 1% while running 3.2× faster on a mobile device. This model was contributed by shehan97. The original code can be found here. Usage tips MobileViTV2 is more like a CNN than a Transformer model. It does not work on sequence data but on batches of images. Unlike ViT, there are no embeddings. The backbone model outputs a feature map. One can use [MobileViTImageProcessor] to prepare images for the model. Note that if you do your own preprocessing, the pretrained checkpoints expect images to be in BGR pixel order (not RGB). The available image classification checkpoints are pre-trained on ImageNet-1k (also referred to as ILSVRC 2012, a collection of 1.3 million images and 1,000 classes). The segmentation model uses a DeepLabV3 head. The available semantic segmentation checkpoints are pre-trained on PASCAL VOC. MobileViTV2Config [[autodoc]] MobileViTV2Config MobileViTV2Model [[autodoc]] MobileViTV2Model - forward MobileViTV2ForImageClassification [[autodoc]] MobileViTV2ForImageClassification - forward MobileViTV2ForSemanticSegmentation [[autodoc]] MobileViTV2ForSemanticSegmentation - forward
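A minimal image-classification sketch using [MobileViTImageProcessor] as suggested in the usage tips above. The checkpoint name and test image URL are assumptions; any MobileViTV2 image-classification checkpoint from the Hub can be substituted.

```python
import requests
import torch
from PIL import Image
from transformers import MobileViTImageProcessor, MobileViTV2ForImageClassification

# Illustrative checkpoint and test image
checkpoint = "apple/mobilevitv2-1.0-imagenet1k-256"
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = MobileViTImageProcessor.from_pretrained(checkpoint)
model = MobileViTV2ForImageClassification.from_pretrained(checkpoint)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])
```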
MarianMT Overview A framework for translation models, using the same models as BART. Translations should be similar to, but not identical to, the output in the test set linked to in each model card. This model was contributed by sshleifer. Implementation Notes Each model is about 298 MB on disk, and there are more than 1,000 models. The list of supported language pairs can be found here. Models were originally trained by Jörg Tiedemann using the Marian C++ library, which supports fast training and translation. All models are transformer encoder-decoders with 6 layers in each component. Each model's performance is documented in a model card. The 80 opus models that require BPE preprocessing are not supported. The modeling code is the same as [BartForConditionalGeneration] with a few minor modifications: static (sinusoid) positional embeddings (MarianConfig.static_position_embeddings=True) no layernorm_embedding (MarianConfig.normalize_embedding=False) the model starts generating with pad_token_id (which has 0 as a token_embedding) as the prefix (Bart uses <s/>). Code to bulk convert models can be found in convert_marian_to_pytorch.py. Naming All model names use the following format: Helsinki-NLP/opus-mt-{src}-{tgt} The language codes used to name models are inconsistent. Two digit codes can usually be found here, while three digit codes require googling "language code {code}". Codes formatted like es_AR are usually code_{region}. That one is Spanish from Argentina. The models were converted in two stages. The first 1000 models use ISO-639-2 codes to identify languages, the second group uses a combination of ISO-639-5 codes and ISO-639-2 codes. Examples Since Marian models are smaller than many other translation models available in the library, they can be useful for fine-tuning experiments and integration tests. Fine-tune on GPU Multilingual Models All model names use the following format: Helsinki-NLP/opus-mt-{src}-{tgt}: If a model can output multiple languages, you should specify a language code by prepending the desired output language to the src_text. You can see a model's supported language codes in its model card, under target constituents, like in opus-mt-en-roa. Note that if a model is only multilingual on the source side, like Helsinki-NLP/opus-mt-roa-en, no language codes are required.
New multi-lingual models from the Tatoeba-Challenge repo require 3 character language codes:

```python
from transformers import MarianMTModel, MarianTokenizer

src_text = [
    ">>fra<< this is a sentence in english that we want to translate to french",
    ">>por<< This should go to portuguese",
    ">>esp<< And this to Spanish",
]

model_name = "Helsinki-NLP/opus-mt-en-roa"
tokenizer = MarianTokenizer.from_pretrained(model_name)
print(tokenizer.supported_language_codes)
# ['>>zlm_Latn<<', '>>mfe<<', '>>hat<<', '>>pap<<', '>>ast<<', '>>cat<<', '>>ind<<', '>>glg<<', '>>wln<<', '>>spa<<', '>>fra<<', '>>ron<<', '>>por<<', '>>ita<<', '>>oci<<', '>>arg<<', '>>min<<']

model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
[tokenizer.decode(t, skip_special_tokens=True) for t in translated]
# ["c'est une phrase en anglais que nous voulons traduire en français", 'Isto deve ir para o português.', 'Y esto al español']
```

Here is the code to see all available pretrained models on the hub:

```python
from huggingface_hub import list_models

model_list = list_models()
org = "Helsinki-NLP"
model_ids = [x.modelId for x in model_list if x.modelId.startswith(org)]
suffix = [x.split("/")[1] for x in model_ids]
old_style_multi_models = [f"{org}/{s}" for s in suffix if s != s.lower()]
```

Old Style Multi-Lingual Models These are the old style multi-lingual models ported from the OPUS-MT-Train repo:

```python
['Helsinki-NLP/opus-mt-NORTH_EU-NORTH_EU', 'Helsinki-NLP/opus-mt-ROMANCE-en', 'Helsinki-NLP/opus-mt-SCANDINAVIA-SCANDINAVIA', 'Helsinki-NLP/opus-mt-de-ZH', 'Helsinki-NLP/opus-mt-en-CELTIC', 'Helsinki-NLP/opus-mt-en-ROMANCE', 'Helsinki-NLP/opus-mt-es-NORWAY', 'Helsinki-NLP/opus-mt-fi-NORWAY', 'Helsinki-NLP/opus-mt-fi-ZH', 'Helsinki-NLP/opus-mt-fi_nb_no_nn_ru_sv_en-SAMI', 'Helsinki-NLP/opus-mt-sv-NORWAY', 'Helsinki-NLP/opus-mt-sv-ZH']
```

and the members of each language group:

```python
GROUP_MEMBERS = {
    'ZH': ['cmn', 'cn', 'yue', 'ze_zh', 'zh_cn', 'zh_CN', 'zh_HK', 'zh_tw', 'zh_TW', 'zh_yue', 'zhs', 'zht', 'zh'],
    'ROMANCE': ['fr', 'fr_BE', 'fr_CA', 'fr_FR', 'wa', 'frp', 'oc', 'ca', 'rm', 'lld', 'fur', 'lij', 'lmo', 'es', 'es_AR', 'es_CL', 'es_CO', 'es_CR', 'es_DO', 'es_EC', 'es_ES', 'es_GT', 'es_HN', 'es_MX', 'es_NI', 'es_PA', 'es_PE', 'es_PR', 'es_SV', 'es_UY', 'es_VE', 'pt', 'pt_br', 'pt_BR', 'pt_PT', 'gl', 'lad', 'an', 'mwl', 'it', 'it_IT', 'co', 'nap', 'scn', 'vec', 'sc', 'ro', 'la'],
    'NORTH_EU': ['de', 'nl', 'fy', 'af', 'da', 'fo', 'is', 'no', 'nb', 'nn', 'sv'],
    'SCANDINAVIA': ['da', 'fo', 'is', 'no', 'nb', 'nn', 'sv'],
    'SAMI': ['se', 'sma', 'smj', 'smn', 'sms'],
    'NORWAY': ['nb_NO', 'nb', 'nn_NO', 'nn', 'nog', 'no_nb', 'no'],
    'CELTIC': ['ga', 'cy', 'br', 'gd', 'kw', 'gv']
}
```

Example of translating english to many romance languages, using old-style 2 character language codes:

```python
from transformers import MarianMTModel, MarianTokenizer

src_text = [
    ">>fr<< this is a sentence in english that we want to translate to french",
    ">>pt<< This should go to portuguese",
    ">>es<< And this to Spanish",
]

model_name = "Helsinki-NLP/opus-mt-en-ROMANCE"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
tgt_text = [tokenizer.decode(t, skip_special_tokens=True) for t in translated]
# ["c'est une phrase en anglais que nous voulons traduire en français", 'Isto deve ir para o português.', 'Y esto al español']
```

Resources Translation task guide
Summarization task guide Causal language modeling task guide MarianConfig [[autodoc]] MarianConfig MarianTokenizer [[autodoc]] MarianTokenizer - build_inputs_with_special_tokens MarianModel [[autodoc]] MarianModel - forward MarianMTModel [[autodoc]] MarianMTModel - forward MarianForCausalLM [[autodoc]] MarianForCausalLM - forward TFMarianModel [[autodoc]] TFMarianModel - call TFMarianMTModel [[autodoc]] TFMarianMTModel - call FlaxMarianModel [[autodoc]] FlaxMarianModel - call FlaxMarianMTModel [[autodoc]] FlaxMarianMTModel - call