How to train a new language model from scratch using Transformers and Tokenizers
julien-c
February 14, 2020
how-to-train
guide, nlp
https://huggingface.co/blog/how-to-train
# How to train a new language model from scratch using Transformers and Tokenizers <a target="_blank" href="https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/01_how_to_train.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"> </a> Over the past few months, we made several improvements to our [`transformers`](https://github.com/huggingface/transformers) and [`tokenizers`](https://github.com/huggingface/tokenizers) libraries, with the goal of making it easier than ever to **train a new language model from scratch**. In this post we’ll demo how to train a “small” model (84 M parameters = 6 layers, 768 hidden size, 12 attention heads) – that’s the same number of layers & heads as DistilBERT – on **Esperanto**. We’ll then fine-tune the model on a downstream task of part-of-speech tagging. Esperanto is a *constructed language* with a goal of being easy to learn. We pick it for this demo for several reasons: - it is a relatively low-resource language (even though it’s spoken by ~2 million people) so this demo is less boring than training one more English model 😁 - its grammar is highly regular (e.g. all common nouns end in -o, all adjectives in -a) so we should get interesting linguistic results even on a small dataset. - finally, the overarching goal at the foundation of the language is to bring people closer (fostering world peace and international understanding) which one could argue is aligned with the goal of the NLP community 💚 > N.B. You won’t need to understand Esperanto to understand this post, but if you do want to learn it, [Duolingo](https://www.duolingo.com/enroll/eo/en/Learn-Esperanto) has a nice course with 280k active learners. Our model is going to be called… wait for it… **EsperBERTo** 😂 <img src="/blog/assets/01_how-to-train/eo.svg" alt="Esperanto flag" style="margin: auto; display: block; width: 260px;"> ## 1. Find a dataset First, let us find a corpus of text in Esperanto. Here we’ll use the Esperanto portion of the [OSCAR corpus](https://traces1.inria.fr/oscar/) from INRIA. OSCAR is a huge multilingual corpus obtained by language classification and filtering of [Common Crawl](https://commoncrawl.org/) dumps of the Web. <img src="/blog/assets/01_how-to-train/oscar.png" style="margin: auto; display: block; width: 260px;"> The Esperanto portion of the dataset is only 299M, so we’ll concatenate with the Esperanto sub-corpus of the [Leipzig Corpora Collection](https://wortschatz.uni-leipzig.de/en/download), which is comprised of text from diverse sources like news, literature, and wikipedia. The final training corpus has a size of 3 GB, which is still small – for your model, you will get better results the more data you can get to pretrain on. ## 2. Train a tokenizer We choose to train a byte-level Byte-pair encoding tokenizer (the same as GPT-2), with the same special tokens as RoBERTa. Let’s arbitrarily pick its size to be 52,000. We recommend training a byte-level BPE (rather than let’s say, a WordPiece tokenizer like BERT) because it will start building its vocabulary from an alphabet of single bytes, so all words will be decomposable into tokens (no more `<unk>` tokens!). ```python #! 
pip install tokenizers from pathlib import Path from tokenizers import ByteLevelBPETokenizer paths = [str(x) for x in Path("./eo_data/").glob("**/*.txt")] # Initialize a tokenizer tokenizer = ByteLevelBPETokenizer() # Customize training tokenizer.train(files=paths, vocab_size=52_000, min_frequency=2, special_tokens=[ "<s>", "<pad>", "</s>", "<unk>", "<mask>", ]) # Save files to disk tokenizer.save_model(".", "esperberto") ``` And here’s a slightly accelerated capture of the output: ![tokenizers](assets/01_how-to-train/tokenizers-fast.gif) <small>On our dataset, training took about ~5 minutes.</small> 🔥🔥 Wow, that was fast! ⚡️🔥 We now have both a `vocab.json`, which is a list of the most frequent tokens ranked by frequency, and a `merges.txt` list of merges. ```json { "<s>": 0, "<pad>": 1, "</s>": 2, "<unk>": 3, "<mask>": 4, "!": 5, "\"": 6, "#": 7, "$": 8, "%": 9, "&": 10, "'": 11, "(": 12, ")": 13, # ... } # merges.txt l a Ġ k o n Ġ la t a Ġ e Ġ d Ġ p # ... ``` What is great is that our tokenizer is optimized for Esperanto. Compared to a generic tokenizer trained for English, more native words are represented by a single, unsplit token. Diacritics, i.e. accented characters used in Esperanto – `ĉ`, `ĝ`, `ĥ`, `ĵ`, `ŝ`, and `ŭ` – are encoded natively. We also represent sequences in a more efficient manner. Here on this corpus, the average length of encoded sequences is ~30% smaller than when using the pretrained GPT-2 tokenizer. Here’s how you can use it in `tokenizers`, including handling the RoBERTa special tokens – of course, you’ll also be able to use it directly from `transformers`. ```python from tokenizers.implementations import ByteLevelBPETokenizer from tokenizers.processors import BertProcessing tokenizer = ByteLevelBPETokenizer( "./models/EsperBERTo-small/vocab.json", "./models/EsperBERTo-small/merges.txt", ) tokenizer._tokenizer.post_processor = BertProcessing( ("</s>", tokenizer.token_to_id("</s>")), ("<s>", tokenizer.token_to_id("<s>")), ) tokenizer.enable_truncation(max_length=512) print( tokenizer.encode("Mi estas Julien.") ) # Encoding(num_tokens=7, ...) # tokens: ['<s>', 'Mi', 'Ġestas', 'ĠJuli', 'en', '.', '</s>'] ``` ## 3. Train a language model from scratch **Update:** The associated Colab notebook uses our new [`Trainer`](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py) directly, instead of through a script. Feel free to pick the approach you like best. We will now train our language model using the [`run_language_modeling.py`](https://github.com/huggingface/transformers/blob/main/examples/legacy/run_language_modeling.py) script from `transformers` (newly renamed from `run_lm_finetuning.py` as it now supports training from scratch more seamlessly). Just remember to leave `--model_name_or_path` to `None` to train from scratch vs. from an existing model or checkpoint. > We’ll train a RoBERTa-like model, which is BERT-like with a couple of changes (check the [documentation](https://huggingface.co/transformers/model_doc/roberta.html) for more details). As the model is BERT-like, we’ll train it on a task of *Masked language modeling*, i.e. predicting how to fill arbitrary tokens that we randomly mask in the dataset. This is taken care of by the example script.
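For intuition, here is a minimal sketch of what that masking step looks like using `DataCollatorForLanguageModeling` from recent versions of `transformers`. The tokenizer path is assumed from the snippets above, and since the masked positions are chosen at random your output will differ:

```python
# Sketch of the masked-LM masking step (the example script handles this for you).
# Assumes vocab.json and merges.txt live under ./models/EsperBERTo-small as above.
from transformers import RobertaTokenizerFast, DataCollatorForLanguageModeling

tokenizer = RobertaTokenizerFast.from_pretrained("./models/EsperBERTo-small", max_len=512)

# Selects 15% of the tokens for the MLM objective and builds the matching labels
data_collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

batch = data_collator([tokenizer("Mi estas Julien.")["input_ids"]])
print(batch["input_ids"])  # some tokens are replaced by <mask>
print(batch["labels"])     # -100 everywhere except at the masked positions
```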
We just need to do two things: - implement a simple subclass of `Dataset` that loads data from our text files (depending on your use case, you might not even need to write your own subclass of `Dataset` if one of the provided examples, `TextDataset` or `LineByLineTextDataset`, works – but there are lots of custom tweaks that you might want to add based on what your corpus looks like); - choose and experiment with different sets of hyperparameters. Here’s a simple version of our EsperantoDataset. ```python import torch from pathlib import Path from torch.utils.data import Dataset from tokenizers.implementations import ByteLevelBPETokenizer from tokenizers.processors import BertProcessing class EsperantoDataset(Dataset): def __init__(self, evaluate: bool = False): tokenizer = ByteLevelBPETokenizer( "./models/EsperBERTo-small/vocab.json", "./models/EsperBERTo-small/merges.txt", ) tokenizer._tokenizer.post_processor = BertProcessing( ("</s>", tokenizer.token_to_id("</s>")), ("<s>", tokenizer.token_to_id("<s>")), ) tokenizer.enable_truncation(max_length=512) # or use the RobertaTokenizer from `transformers` directly. self.examples = [] src_files = Path("./data/").glob("*-eval.txt") if evaluate else Path("./data/").glob("*-train.txt") for src_file in src_files: print("🔥", src_file) lines = src_file.read_text(encoding="utf-8").splitlines() self.examples += [x.ids for x in tokenizer.encode_batch(lines)] def __len__(self): return len(self.examples) def __getitem__(self, i): # We’ll pad at the batch level. return torch.tensor(self.examples[i]) ``` If your dataset is very large, you can opt to load and tokenize examples on the fly, rather than as a preprocessing step. Here is one specific set of **hyper-parameters and arguments** we pass to the script: ``` --output_dir ./models/EsperBERTo-small-v1 --model_type roberta --mlm --config_name ./models/EsperBERTo-small --tokenizer_name ./models/EsperBERTo-small --do_train --do_eval --learning_rate 1e-4 --num_train_epochs 5 --save_total_limit 2 --save_steps 2000 --per_gpu_train_batch_size 16 --evaluate_during_training --seed 42 ``` As usual, pick the largest batch size you can fit on your GPU(s). **🔥🔥🔥 Let’s start training!! 🔥🔥🔥** Here you can check our Tensorboard for [one particular set of hyper-parameters](https://tensorboard.dev/experiment/8AjtzdgPR1qG6bDIe1eKfw/#scalars): [![tb](assets/01_how-to-train/tensorboard.png)](https://tensorboard.dev/experiment/8AjtzdgPR1qG6bDIe1eKfw/#scalars) > Our example scripts log into the Tensorboard format by default, under `runs/`. Then to view your board just run `tensorboard dev upload --logdir runs` – this will set up [tensorboard.dev](https://tensorboard.dev/), a Google-managed hosted version that lets you share your ML experiment with anyone. ## 4. Check that the LM actually trained Aside from looking at the training and eval losses going down, the easiest way to check whether our language model is learning anything interesting is via the `FillMaskPipeline`. Pipelines are simple wrappers around tokenizers and models, and the 'fill-mask' one will let you input a sequence containing a masked token (here, `<mask>`) and return a list of the most probable filled sequences, with their probabilities. ```python from transformers import pipeline fill_mask = pipeline( "fill-mask", model="./models/EsperBERTo-small", tokenizer="./models/EsperBERTo-small" ) # The sun <mask>.
# => result = fill_mask("La suno <mask>.") # {'score': 0.2526160776615143, 'sequence': '<s> La suno brilis.</s>', 'token': 10820} # {'score': 0.0999930202960968, 'sequence': '<s> La suno lumis.</s>', 'token': 23833} # {'score': 0.04382849484682083, 'sequence': '<s> La suno brilas.</s>', 'token': 15006} # {'score': 0.026011141017079353, 'sequence': '<s> La suno falas.</s>', 'token': 7392} # {'score': 0.016859788447618484, 'sequence': '<s> La suno pasis.</s>', 'token': 4552} ``` Ok, simple syntax/grammar works. Let’s try a slightly more interesting prompt: ```python fill_mask("Jen la komenco de bela <mask>.") # This is the beginning of a beautiful <mask>. # => # { # 'score':0.06502299010753632 # 'sequence':'<s> Jen la komenco de bela vivo.</s>' # 'token':1099 # } # { # 'score':0.0421181358397007 # 'sequence':'<s> Jen la komenco de bela vespero.</s>' # 'token':5100 # } # { # 'score':0.024884626269340515 # 'sequence':'<s> Jen la komenco de bela laboro.</s>' # 'token':1570 # } # { # 'score':0.02324388362467289 # 'sequence':'<s> Jen la komenco de bela tago.</s>' # 'token':1688 # } # { # 'score':0.020378097891807556 # 'sequence':'<s> Jen la komenco de bela festo.</s>' # 'token':4580 # } ``` > “**Jen la komenco de bela tago**”, indeed! With more complex prompts, you can probe whether your language model captured more semantic knowledge or even some sort of (statistical) common sense reasoning. ## 5. Fine-tune your LM on a downstream task We now can fine-tune our new Esperanto language model on a downstream task of **Part-of-speech tagging.** As mentioned before, Esperanto is a highly regular language where word endings typically condition the grammatical part of speech. Using a dataset of annotated Esperanto POS tags formatted in the CoNLL-2003 format (see example below), we can use the [`run_ner.py`](https://github.com/huggingface/transformers/blob/main/examples/pytorch/token-classification/run_ner.py) script from `transformers`. > POS tagging is a token classification task just as NER so we can just use the exact same script. ![conll](assets/01_how-to-train/conll-2003.png) Again, here’s the hosted **[Tensorboard](https://tensorboard.dev/experiment/lOZn2wOWQo6ixpwtWyyDfQ/#scalars)** for this fine-tuning. We train for 3 epochs using a batch size of 64 per GPU. Training and eval losses converge to small residual values as the task is rather easy (the language is regular) – it’s still fun to be able to train it end-to-end 😃. This time, let’s use a `TokenClassificationPipeline`: ```python from transformers import TokenClassificationPipeline, pipeline MODEL_PATH = "./models/EsperBERTo-small-pos/" nlp = pipeline( "ner", model=MODEL_PATH, tokenizer=MODEL_PATH, ) # or instantiate a TokenClassificationPipeline directly. nlp("Mi estas viro kej estas tago varma.") # {'entity': 'PRON', 'score': 0.9979867339134216, 'word': ' Mi'} # {'entity': 'VERB', 'score': 0.9683094620704651, 'word': ' estas'} # {'entity': 'VERB', 'score': 0.9797462821006775, 'word': ' estas'} # {'entity': 'NOUN', 'score': 0.8509314060211182, 'word': ' tago'} # {'entity': 'ADJ', 'score': 0.9996201395988464, 'word': ' varma'} ``` **Looks like it worked! 🔥** <small>For a more challenging dataset for NER, <a href="https://github.com/stefan-it">@stefan-it</a> recommended that we could train on the silver standard dataset from WikiANN</small> ## 6. 
Share your model 🎉 Finally, when you have a nice model, please think about sharing it with the community: - upload your model using the CLI: `transformers-cli upload` - write a README.md model card and add it to the repository under `model_cards/`. Your model card should ideally include: - a model description, - training params (dataset, preprocessing, hyperparameters), - evaluation results, - intended uses & limitations - whatever else is helpful! 🤓 ### **TADA!** ➡️ Your model has a page on https://huggingface.co/models and everyone can load it using `AutoModel.from_pretrained("username/model_name")`. [![tb](assets/01_how-to-train/model_page.png)](https://huggingface.co/julien-c/EsperBERTo-small) If you want to take a look at models in different languages, check https://huggingface.co/models [![all models](https://huggingface.co/front/thumbnails/models.png)](https://huggingface.co/models) ## Thank you! ![](assets/01_how-to-train/EsperBERTo-thumbnail-v2.png)
How to generate text: using different decoding methods for language generation with Transformers
patrickvonplaten
March, 2020
how-to-generate
guide, nlp
https://huggingface.co/blog/how-to-generate
# How to generate text: using different decoding methods for language generation with Transformers <a target="_blank" href="https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/02_how_to_generate.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> **Note**: Edited on July 2023 with up-to-date references and examples. ## Introduction In recent years, there has been an increasing interest in open-ended language generation thanks to the rise of large transformer-based language models trained on millions of webpages, including OpenAI's [ChatGPT](https://openai.com/blog/chatgpt) and Meta's [LLaMA](https://ai.meta.com/blog/large-language-model-llama-meta-ai/). The results on conditioned open-ended language generation are impressive, having shown to [generalize to new tasks](https://ai.googleblog.com/2021/10/introducing-flan-more-generalizable.html), [handle code](https://huggingface.co/blog/starcoder), or [take non-text data as input](https://openai.com/research/whisper). Besides the improved transformer architecture and massive unsupervised training data, **better decoding methods** have also played an important role. This blog post gives a brief overview of different decoding strategies and more importantly shows how *you* can implement them with very little effort using the popular `transformers` library\! All of the following functionalities can be used for **auto-regressive** language generation ([here](http://jalammar.github.io/illustrated-gpt2/) a refresher). In short, *auto-regressive* language generation is based on the assumption that the probability distribution of a word sequence can be decomposed into the product of conditional next word distributions: $$ P(w_{1:T} | W_0 ) = \prod_{t=1}^T P(w_{t} | w_{1: t-1}, W_0) \text{ ,with } w_{1: 0} = \emptyset, $$ and \\(W_0\\) being the initial *context* word sequence. The length \\(T\\) of the word sequence is usually determined *on-the-fly* and corresponds to the timestep \\(t=T\\) the EOS token is generated from \\(P(w_{t} | w_{1: t-1}, W_{0})\\). We will give a tour of the currently most prominent decoding methods, mainly *Greedy search*, *Beam search*, and *Sampling*. Let's quickly install transformers and load the model. We will use GPT2 in PyTorch for demonstration, but the API is 1-to-1 the same for TensorFlow and JAX. ``` python !pip install -q transformers ``` ``` python from transformers import AutoModelForCausalLM, AutoTokenizer import torch torch_device = "cuda" if torch.cuda.is_available() else "cpu" tokenizer = AutoTokenizer.from_pretrained("gpt2") # add the EOS token as PAD token to avoid warnings model = AutoModelForCausalLM.from_pretrained("gpt2", pad_token_id=tokenizer.eos_token_id).to(torch_device) ``` ## Greedy Search Greedy search is the simplest decoding method. It selects the word with the highest probability as its next word: \\(w_t = argmax_{w}P(w | w_{1:t-1})\\) at each timestep \\(t\\). The following sketch shows greedy search. <img src="/blog/assets/02_how-to-generate/greedy_search.png" alt="greedy search" style="margin: auto; display: block;"> Starting from the word \\(\text{"The"},\\) the algorithm greedily chooses the next word of highest probability \\(\text{"nice"}\\) and so on, so that the final generated word sequence is \\((\text{"The"}, \text{"nice"}, \text{"woman"})\\) having an overall probability of \\(0.5 \times 0.4 = 0.2\\) . 
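Before calling `model.generate`, it can help to spell out what a single greedy step looks like. Here is a minimal sketch reusing the `model`, `tokenizer` and `torch_device` defined above; with default arguments, `generate` repeats a step like this until the maximum length or the EOS token is reached.

``` python
# one greedy decoding step, written out by hand (same model/tokenizer as above)
input_ids = tokenizer('I enjoy walking with my cute dog', return_tensors='pt').input_ids.to(torch_device)

with torch.no_grad():
    next_token_logits = model(input_ids).logits[:, -1, :]   # scores for the next word

next_token = torch.argmax(next_token_logits, dim=-1)        # pick the highest-probability word
print(tokenizer.decode(next_token))
```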
In the following we will generate word sequences using GPT2 on the context \\((\text{"I"}, \text{"enjoy"}, \text{"walking"}, \text{"with"}, \text{"my"}, \text{"cute"}, \text{"dog"})\\). Let's see how greedy search can be used in `transformers`: ``` python # encode context the generation is conditioned on model_inputs = tokenizer('I enjoy walking with my cute dog', return_tensors='pt').to(torch_device) # generate 40 new tokens greedy_output = model.generate(**model_inputs, max_new_tokens=40) print("Output:\n" + 100 * '-') print(tokenizer.decode(greedy_output[0], skip_special_tokens=True)) ``` ``` Output: ---------------------------------------------------------------------------------------------------- I enjoy walking with my cute dog, but I'm not sure if I'll ever be able to walk with my dog. I'm not sure if I'll ever be able to walk with my dog. I'm not sure ``` Alright\! We have generated our first short text with GPT2 😊. The generated words following the context are reasonable, but the model quickly starts repeating itself\! This is a very common problem in language generation in general and seems to be even more so in greedy and beam search - check out [Vijayakumar et al., 2016](https://arxiv.org/abs/1610.02424) and [Shao et al., 2017](https://arxiv.org/abs/1701.03185). The major drawback of greedy search though is that it misses high probability words hidden behind a low probability word as can be seen in our sketch above: The word \\(\text{"has"}\\) with its high conditional probability of \\(0.9\\) is hidden behind the word \\(\text{"dog"}\\), which has only the second-highest conditional probability, so that greedy search misses the word sequence \\(\text{"The"}, \text{"dog"}, \text{"has"}\\) . Thankfully, we have beam search to alleviate this problem\! ## Beam search Beam search reduces the risk of missing hidden high probability word sequences by keeping the most likely `num_beams` of hypotheses at each time step and eventually choosing the hypothesis that has the overall highest probability. Let's illustrate with `num_beams=2`: <img src="/blog/assets/02_how-to-generate/beam_search.png" alt="beam search" style="margin: auto; display: block;"> At time step 1, besides the most likely hypothesis \\((\text{"The"}, \text{"nice"})\\), beam search also keeps track of the second most likely one \\((\text{"The"}, \text{"dog"})\\). At time step 2, beam search finds that the word sequence \\((\text{"The"}, \text{"dog"}, \text{"has"})\\), has with \\(0.36\\) a higher probability than \\((\text{"The"}, \text{"nice"}, \text{"woman"})\\), which has \\(0.2\\) . Great, it has found the most likely word sequence in our toy example\! Beam search will always find an output sequence with higher probability than greedy search, but is not guaranteed to find the most likely output. Let's see how beam search can be used in `transformers`. We set `num_beams > 1` and `early_stopping=True` so that generation is finished when all beam hypotheses reached the EOS token. ``` python # activate beam search and early_stopping beam_output = model.generate( **model_inputs, max_new_tokens=40, num_beams=5, early_stopping=True ) print("Output:\n" + 100 * '-') print(tokenizer.decode(beam_output[0], skip_special_tokens=True)) ``` ``` Output: ---------------------------------------------------------------------------------------------------- I enjoy walking with my cute dog, but I'm not sure if I'll ever be able to walk with him again. I'm not sure if I'll ever be able to walk with him again. 
I'm not sure ``` While the result is arguably more fluent, the output still includes repetitions of the same word sequences. One of the available remedies is to introduce *n-grams* (*a.k.a* word sequences of n words) penalties as introduced by [Paulus et al. (2017)](https://arxiv.org/abs/1705.04304) and [Klein et al. (2017)](https://arxiv.org/abs/1701.02810). The most common *n-grams* penalty makes sure that no *n-gram* appears twice by manually setting the probability of next words that could create an already seen *n-gram* to 0. Let's try it out by setting `no_repeat_ngram_size=2` so that no *2-gram* appears twice: ``` python # set no_repeat_ngram_size to 2 beam_output = model.generate( **model_inputs, max_new_tokens=40, num_beams=5, no_repeat_ngram_size=2, early_stopping=True ) print("Output:\n" + 100 * '-') print(tokenizer.decode(beam_output[0], skip_special_tokens=True)) ``` ``` Output: ---------------------------------------------------------------------------------------------------- I enjoy walking with my cute dog, but I'm not sure if I'll ever be able to walk with him again. I've been thinking about this for a while now, and I think it's time for me to ``` Nice, that looks much better\! We can see that the repetition does not appear anymore. Nevertheless, *n-gram* penalties have to be used with care. An article generated about the city *New York* should not use a *2-gram* penalty or otherwise, the name of the city would only appear once in the whole text\! Another important feature of beam search is that we can compare the top beams after generation and choose the generated beam that fits our purpose best. In `transformers`, we simply set the parameter `num_return_sequences` to the number of highest scoring beams that should be returned. Make sure though that `num_return_sequences <= num_beams`\! ``` python # set num_return_sequences > 1 beam_outputs = model.generate( **model_inputs, max_new_tokens=40, num_beams=5, no_repeat_ngram_size=2, num_return_sequences=5, early_stopping=True ) # now we have 5 output sequences print("Output:\n" + 100 * '-') for i, beam_output in enumerate(beam_outputs): print("{}: {}".format(i, tokenizer.decode(beam_output, skip_special_tokens=True))) ``` ``` Output: ---------------------------------------------------------------------------------------------------- 0: I enjoy walking with my cute dog, but I'm not sure if I'll ever be able to walk with him again. I've been thinking about this for a while now, and I think it's time for me to 1: I enjoy walking with my cute dog, but I'm not sure if I'll ever be able to walk with her again. I've been thinking about this for a while now, and I think it's time for me to 2: I enjoy walking with my cute dog, but I'm not sure if I'll ever be able to walk with him again. I've been thinking about this for a while now, and I think it's a good idea to 3: I enjoy walking with my cute dog, but I'm not sure if I'll ever be able to walk with him again. I've been thinking about this for a while now, and I think it's time to take a 4: I enjoy walking with my cute dog, but I'm not sure if I'll ever be able to walk with him again. I've been thinking about this for a while now, and I think it's a good idea. ``` As can be seen, the five beam hypotheses are only marginally different from each other - which should not be too surprising when using only 5 beams.
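If you want to pick among the beams programmatically rather than by eye, `generate` can also return the final beam scores. A short sketch with the same setup as above; the flags `return_dict_in_generate=True` and `output_scores=True` are what expose `sequences_scores`:

``` python
# return the beam scores alongside the generated sequences
beam_outputs = model.generate(
    **model_inputs,
    max_new_tokens=40,
    num_beams=5,
    no_repeat_ngram_size=2,
    num_return_sequences=5,
    early_stopping=True,
    return_dict_in_generate=True,
    output_scores=True,
)

# higher (less negative) score = more likely beam
for seq, score in zip(beam_outputs.sequences, beam_outputs.sequences_scores):
    print(f"{score:.2f}: {tokenizer.decode(seq, skip_special_tokens=True)}")
```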
In open-ended generation, a couple of reasons have been brought forward why beam search might not be the best possible option: - Beam search can work very well in tasks where the length of the desired generation is more or less predictable as in machine translation or summarization - see [Murray et al. (2018)](https://arxiv.org/abs/1808.10006) and [Yang et al. (2018)](https://arxiv.org/abs/1808.09582). But this is not the case for open-ended generation where the desired output length can vary greatly, e.g. dialog and story generation. - We have seen that beam search heavily suffers from repetitive generation. This is especially hard to control with *n-gram*- or other penalties in story generation since finding a good trade-off between inhibiting repetition and repeating cycles of identical *n-grams* requires a lot of finetuning. - As argued in [Ari Holtzman et al. (2019)](https://arxiv.org/abs/1904.09751), high quality human language does not follow a distribution of high probability next words. In other words, as humans, we want generated text to surprise us and not to be boring/predictable. The authors show this nicely by plotting the probability, a model would give to human text vs. what beam search does. ![alt text](https://blog.fastforwardlabs.com/images/2019/05/Screen_Shot_2019_05_08_at_3_06_36_PM-1557342561886.png) So let's stop being boring and introduce some randomness 🤪. ## Sampling In its most basic form, sampling means randomly picking the next word \\(w_t\\) according to its conditional probability distribution: $$ w_t \sim P(w|w_{1:t-1}) $$ Taking the example from above, the following graphic visualizes language generation when sampling. <img src="/blog/assets/02_how-to-generate/sampling_search.png" alt="sampling search" style="margin: auto; display: block;"> It becomes obvious that language generation using sampling is not *deterministic* anymore. The word \\((\text{"car"})\\) is sampled from the conditioned probability distribution \\(P(w | \text{"The"})\\), followed by sampling \\((\text{"drives"})\\) from \\(P(w | \text{"The"}, \text{"car"})\\) . In `transformers`, we set `do_sample=True` and deactivate *Top-K* sampling (more on this later) via `top_k=0`. In the following, we will fix the random seed for illustration purposes. Feel free to change the `set_seed` argument to obtain different results, or to remove it for non-determinism. ``` python # set seed to reproduce results. Feel free to change the seed though to get different results from transformers import set_seed set_seed(42) # activate sampling and deactivate top_k by setting top_k sampling to 0 sample_output = model.generate( **model_inputs, max_new_tokens=40, do_sample=True, top_k=0 ) print("Output:\n" + 100 * '-') print(tokenizer.decode(sample_output[0], skip_special_tokens=True)) ``` ``` Output: ---------------------------------------------------------------------------------------------------- I enjoy walking with my cute dog for the rest of the day, but this had me staying in an unusual room and not going on nights out with friends (which will always be wondered for a mere minute or so at this point). ``` Interesting\! The text seems alright - but when taking a closer look, it is not very coherent and doesn't sound like it was written by a human. That is the big problem when sampling word sequences: The models often generate incoherent gibberish, *cf.* [Ari Holtzman et al. (2019)](https://arxiv.org/abs/1904.09751). 
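For intuition, the sampling step \\(w_t \sim P(w|w_{1:t-1})\\) can also be written out by hand. A minimal sketch, reusing `model_inputs`, `model` and `tokenizer` from above:

``` python
# a single ancestral sampling step, written out by hand
with torch.no_grad():
    next_token_logits = model(model_inputs["input_ids"]).logits[:, -1, :]

probs = torch.softmax(next_token_logits, dim=-1)        # full next-word distribution
next_token = torch.multinomial(probs, num_samples=1)    # draw one word at random
print(tokenizer.decode(next_token[0]))
```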
A trick is to make the distribution \\(P(w|w_{1:t-1})\\) sharper (increasing the likelihood of high probability words and decreasing the likelihood of low probability words) by lowering the so-called `temperature` of the [softmax](https://en.wikipedia.org/wiki/Softmax_function#Smooth_arg_max). An illustration of applying temperature to our example from above could look as follows. <img src="/blog/assets/02_how-to-generate/sampling_search_with_temp.png" alt="sampling temp search" style="margin: auto; display: block;"> The conditional next word distribution of step \\(t=1\\) becomes much sharper leaving almost no chance for word \\((\text{"car"})\\) to be selected. Let's see how we can cool down the distribution in the library by setting `temperature=0.6`: ``` python # set seed to reproduce results. Feel free to change the seed though to get different results set_seed(42) # use temperature to decrease the sensitivity to low probability candidates sample_output = model.generate( **model_inputs, max_new_tokens=40, do_sample=True, top_k=0, temperature=0.6, ) print("Output:\n" + 100 * '-') print(tokenizer.decode(sample_output[0], skip_special_tokens=True)) ``` ``` Output: ---------------------------------------------------------------------------------------------------- I enjoy walking with my cute dog, but I don't like to chew on it. I like to eat it and not chew on it. I like to be able to walk with my dog." So how did you decide ``` OK. There are less weird n-grams and the output is a bit more coherent now\! While applying temperature can make a distribution less random, in its limit, when setting `temperature` \\(\to 0\\), temperature scaled sampling becomes equal to greedy decoding and will suffer from the same problems as before. ### Top-K Sampling [Fan et. al (2018)](https://arxiv.org/pdf/1805.04833.pdf) introduced a simple, but very powerful sampling scheme, called ***Top-K*** sampling. In *Top-K* sampling, the *K* most likely next words are filtered and the probability mass is redistributed among only those *K* next words. GPT2 adopted this sampling scheme, which was one of the reasons for its success in story generation. We extend the range of words used for both sampling steps in the example above from 3 words to 10 words to better illustrate *Top-K* sampling. <img src="/blog/assets/02_how-to-generate/top_k_sampling.png" alt="Top K sampling" style="margin: auto; display: block;"> Having set \\(K = 6\\), in both sampling steps we limit our sampling pool to 6 words. While the 6 most likely words, defined as \\(V_{\text{top-K}}\\) encompass only *ca.* two-thirds of the whole probability mass in the first step, it includes almost all of the probability mass in the second step. Nevertheless, we see that it successfully eliminates the rather weird candidates \\((\text{``not"}, \text{``the"}, \text{``small"}, \text{``told"})\\) in the second sampling step. Let's see how *Top-K* can be used in the library by setting `top_k=50`: ``` python # set seed to reproduce results. Feel free to change the seed though to get different results set_seed(42) # set top_k to 50 sample_output = model.generate( **model_inputs, max_new_tokens=40, do_sample=True, top_k=50 ) print("Output:\n" + 100 * '-') print(tokenizer.decode(sample_output[0], skip_special_tokens=True)) ``` ``` Output: ---------------------------------------------------------------------------------------------------- I enjoy walking with my cute dog for the rest of the day, but this time it was hard for me to figure out what to do with it. 
(One reason I asked this for a few months back is that I had a ``` Not bad at all\! The text is arguably the most *human-sounding* text so far. One concern though with *Top-K* sampling is that it does not dynamically adapt the number of words that are filtered from the next word probability distribution \\(P(w|w_{1:t-1})\\). This can be problematic as some words might be sampled from a very sharp distribution (distribution on the right in the graph above), whereas others from a much more flat distribution (distribution on the left in the graph above). In step \\(t=1\\), *Top-K* eliminates the possibility to sample \\((\text{"people"}, \text{"big"}, \text{"house"}, \text{"cat"})\\), which seem like reasonable candidates. On the other hand, in step \\(t=2\\) the method includes the arguably ill-fitted words \\((\text{"down"}, \text{"a"})\\) in the sample pool of words. Thus, limiting the sample pool to a fixed size *K* could lead the model to produce gibberish for sharp distributions and limit the model's creativity for flat distributions. This intuition led [Ari Holtzman et al. (2019)](https://arxiv.org/abs/1904.09751) to create ***Top-p***- or ***nucleus***-sampling. ### Top-p (nucleus) sampling Instead of sampling only from the most likely *K* words, *Top-p* sampling chooses from the smallest possible set of words whose cumulative probability exceeds the probability *p*. The probability mass is then redistributed among this set of words. This way, the size of the set of words (*a.k.a* the number of words in the set) can dynamically increase and decrease according to the next word's probability distribution. Ok, that was very wordy, let's visualize. <img src="/blog/assets/02_how-to-generate/top_p_sampling.png" alt="Top p sampling" style="margin: auto; display: block;"> Having set \\(p=0.92\\), *Top-p* sampling picks the *minimum* number of words to exceed together \\(p=92\%\\) of the probability mass, defined as \\(V_{\text{top-p}}\\). In the first example, this included the 9 most likely words, whereas it only has to pick the top 3 words in the second example to exceed 92%. Quite simple actually\! It can be seen that it keeps a wide range of words where the next word is arguably less predictable, *e.g.* \\(P(w | \text{"The"})\\), and only a few words when the next word seems more predictable, *e.g.* \\(P(w | \text{"The"}, \text{"car"})\\). Alright, time to check it out in `transformers`\! We activate *Top-p* sampling by setting `0 < top_p < 1`: ``` python # set seed to reproduce results. Feel free to change the seed though to get different results set_seed(42) # set top_p to 0.92 and deactivate top_k sampling sample_output = model.generate( **model_inputs, max_new_tokens=40, do_sample=True, top_p=0.92, top_k=0 ) print("Output:\n" + 100 * '-') print(tokenizer.decode(sample_output[0], skip_special_tokens=True)) ``` ``` Output: ---------------------------------------------------------------------------------------------------- I enjoy walking with my cute dog for the rest of the day, but this had me staying in an unusual room and not going on nights out with friends (which will always be my yearning for such a spacious screen on my desk ``` Great, that sounds like it could have been written by a human. Well, maybe not quite yet. While in theory, *Top-p* seems more elegant than *Top-K*, both methods work well in practice. *Top-p* can also be used in combination with *Top-K*, which can avoid very low ranked words while allowing for some dynamic selection.
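To make the "smallest set of words whose cumulative probability exceeds *p*" idea concrete, here is a toy sketch of the nucleus filtering step on a hand-made distribution (the probabilities are made up purely for illustration):

``` python
import torch

# made-up next-word probabilities, already a valid distribution
probs = torch.tensor([0.30, 0.22, 0.18, 0.12, 0.08, 0.05, 0.03, 0.02])
p = 0.92

sorted_probs, sorted_idx = torch.sort(probs, descending=True)
cumulative = torch.cumsum(sorted_probs, dim=-1)

# drop tokens once the cumulative mass already exceeds p,
# but keep the token that pushes the mass over the threshold
remove = cumulative > p
remove[1:] = remove[:-1].clone()
remove[0] = False

kept_idx = sorted_idx[~remove]
kept_probs = sorted_probs[~remove] / sorted_probs[~remove].sum()  # redistribute the mass
print(kept_idx)    # indices forming the nucleus
print(kept_probs)  # renormalized probabilities to sample from
```

With these numbers, the nucleus keeps the six most likely words (cumulative mass 0.95), the smallest set exceeding \\(p=0.92\\).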
Finally, to get multiple independently sampled outputs, we can *again* set the parameter `num_return_sequences > 1`: ``` python # set seed to reproduce results. Feel free to change the seed though to get different results set_seed(42) # set top_k = 50 and set top_p = 0.95 and num_return_sequences = 3 sample_outputs = model.generate( **model_inputs, max_new_tokens=40, do_sample=True, top_k=50, top_p=0.95, num_return_sequences=3, ) print("Output:\n" + 100 * '-') for i, sample_output in enumerate(sample_outputs): print("{}: {}".format(i, tokenizer.decode(sample_output, skip_special_tokens=True))) ``` ``` Output: ---------------------------------------------------------------------------------------------------- 0: I enjoy walking with my cute dog for the rest of the day, but this time it was hard for me to figure out what to do with it. When I finally looked at this for a few moments, I immediately thought, " 1: I enjoy walking with my cute dog. The only time I felt like walking was when I was working, so it was awesome for me. I didn't want to walk for days. I am really curious how she can walk with me 2: I enjoy walking with my cute dog (Chama-I-I-I-I-I), and I really enjoy running. I play in a little game I play with my brother in which I take pictures of our houses. ``` Cool, now you should have all the tools to let your model write your stories with `transformers`! ## Conclusion As *ad-hoc* decoding methods, *top-p* and *top-K* sampling seem to produce more fluent text than traditional *greedy* - and *beam* search on open-ended language generation. There is evidence that the apparent flaws of *greedy* and *beam* search - mainly generating repetitive word sequences - are caused by the model (especially the way the model is trained), rather than the decoding method, *cf.* [Welleck et al. (2019)](https://arxiv.org/pdf/1908.04319.pdf). Also, as demonstrated in [Welleck et al. (2020)](https://arxiv.org/abs/2002.02492), it looks as if *top-K* and *top-p* sampling also suffer from generating repetitive word sequences. In [Welleck et al. (2019)](https://arxiv.org/pdf/1908.04319.pdf), the authors show that according to human evaluations, *beam* search can generate more fluent text than *Top-p* sampling, when adapting the model's training objective. Open-ended language generation is a rapidly evolving field of research and as is often the case there is no one-size-fits-all method here, so one has to see what works best in one's specific use case. Fortunately, *you* can try out all the different decoding methods in `transformers` 🤗 -- you can have an overview of the available methods [here](https://huggingface.co/docs/transformers/generation_strategies#decoding-strategies). Thanks to everybody who has contributed to the blog post: Alexander Rush, Julien Chaumond, Thomas Wolf, Victor Sanh, Sam Shleifer, Clément Delangue, Yacine Jernite, Oliver Åstrand and John de Wasseige. ## Appendix `generate` has evolved into a highly composable method, with flags to manipulate the resulting text in many directions that were not covered in this blog post.
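For example, recent versions of `transformers` let you bundle decoding options into a reusable `GenerationConfig` object instead of passing them as individual keyword arguments; a short sketch using the sampling settings from above:

``` python
from transformers import GenerationConfig

# bundle the decoding options used above into one reusable object
generation_config = GenerationConfig(
    do_sample=True,
    top_k=50,
    top_p=0.95,
    max_new_tokens=40,
    pad_token_id=tokenizer.eos_token_id,
)

sample_output = model.generate(**model_inputs, generation_config=generation_config)
print(tokenizer.decode(sample_output[0], skip_special_tokens=True))
```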
Here are a few helpful pages to guide you: - [How to parameterize `generate`](https://huggingface.co/docs/transformers/generation_strategies#default-text-generation-configuration) - [How to stream the output](https://huggingface.co/docs/transformers/generation_strategies#streaming) - [Full list of decoding options](https://huggingface.co/docs/transformers/en/main_classes/text_generation#transformers.GenerationConfig) - [`generate` API reference](https://huggingface.co/docs/transformers/en/main_classes/text_generation#transformers.GenerationMixin.generate) - [LLM score leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) If you find that navigating our docs is challenging and you can't easily find what you're looking for, drop us a message in [this GitHub issue](https://github.com/huggingface/transformers/issues/24575). Your feedback is critical to set our future direction! 🤗
The Reformer - Pushing the limits of language modeling
patrickvonplaten
July 3, 2020
reformer
research, nlp
https://huggingface.co/blog/reformer
# The Reformer - Pushing the limits of language modeling <a href="https://colab.research.google.com/github/patrickvonplaten/blog/blob/main/notebooks/03_reformer.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ## How the Reformer uses less than 8GB of RAM to train on sequences of half a million tokens The Reformer model as introduced by [Kitaev, Kaiser et al. (2020)](https://arxiv.org/pdf/2001.04451.pdf) is one of the most memory-efficient transformer models for long sequence modeling as of today. Recently, long sequence modeling has experienced a surge of interest as can be seen by the many submissions from this year alone - [Beltagy et al. (2020)](https://arxiv.org/abs/2004.05150), [Roy et al. (2020)](https://arxiv.org/abs/2003.05997), [Tay et al.](https://arxiv.org/abs/2002.11296), [Wang et al.](https://arxiv.org/abs/2006.04768) to name a few. The motivation behind long sequence modeling is that many tasks in NLP, *e.g.* summarization, question answering, require the model to process longer input sequences than models, such as BERT, are able to handle. In tasks that require the model to process a large input sequence, long sequence models do not have to cut the input sequence to avoid memory overflow and thus have been shown to outperform standard "BERT"-like models *cf.* [Beltagy et al. (2020)](https://arxiv.org/abs/2004.05150). The Reformer pushes the limit of long sequence modeling by its ability to process up to half a million tokens at once as shown in this [demo](https://github.com/patrickvonplaten/notebooks/blob/master/PyTorch_Reformer.ipynb). As a comparison, a conventional `bert-base-uncased` model limits the input length to only 512 tokens. In Reformer, each part of the standard transformer architecture is re-engineered to optimize for minimal memory requirement without a significant drop in performance. The memory improvements can be attributed to **4** features which the Reformer authors introduced to the transformer world: 1. **Reformer Self-Attention Layer** - *How to efficiently implement self-attention without being restricted to a local context?* 2. **Chunked Feed Forward Layers** - *How to get a better time-memory trade-off for large feed forward layers?* 3. **Reversible Residual Layers** - *How to drastically reduce memory consumption in training by a smart residual architecture?* 4. **Axial Positional Encodings** - *How to make positional encodings usable for extremely large input sequences?* The goal of this blog post is to give the reader an **in-depth** understanding of each of the four Reformer features mentioned above. While the explanations are focused on the Reformer, the reader should get a better intuition under which circumstances each of the four features can be effective for other transformer models as well. The four sections are only loosely connected, so they can very well be read individually. Reformer is part of the 🤗Transformers library. For all users of the Reformer, it is advised to go through this very detailed blog post to better understand how the model works and how to correctly set its configuration. All equations are accompanied by their equivalent name for the Reformer config, *e.g.* `config.<param_name>`, so that the reader can quickly relate to the official docs and configuration file. **Note**: *Axial Positional Encodings* are not explained in the official Reformer paper, but are extensively used in the official codebase.
This blog post gives the first in-depth explanation of Axial Positional Encodings. ## 1. Reformer Self-Attention Layer Reformer uses two kinds of special self-attention layers: *local* self-attention layers and Locality Sensitive Hashing (*LSH*) self-attention layers. To better introduce these new self-attention layers, we will briefly recap conventional self-attention as introduced in [Vaswani et al. 2017](https://arxiv.org/abs/1706.03762). This blog post uses the same notation and coloring as the popular blog post [The illustrated transformer](http://jalammar.github.io/illustrated-transformer/), so the reader is strongly advised to read this blog first. **Important**: While Reformer was originally introduced for causal self-attention, it can very well be used for bi-directional self-attention as well. In this post, Reformer's self-attention is presented for *bidirectional* self-attention. ### Recap Global Self-Attention The core of every Transformer model is the **self-attention** layer. To recap the conventional self-attention layer, which we refer to here as the **global self-attention** layer, let us assume we apply a transformer layer on the embedding vector sequence \\(\mathbf{X} = \mathbf{x}_1, \ldots, \mathbf{x}_n\\) where each vector \\(\mathbf{x}_{i}\\) is of size `config.hidden_size`, *i.e.* \\(d_h\\). In short, a global self-attention layer projects \\(\mathbf{X}\\) to the query, key and value matrices \\(\mathbf{Q}, \mathbf{K}, \mathbf{V}\\) and computes the output \\(\mathbf{Z}\\) using the *softmax* operation as follows: \\(\mathbf{Z} = \text{SelfAttn}(\mathbf{X}) = \text{softmax}(\mathbf{Q}\mathbf{K}^T) \mathbf{V}\\) with \\(\mathbf{Z}\\) being of dimension \\(d_h \times n\\) (leaving out the key normalization factor and self-attention weights \\(\mathbf{W}^{O}\\) for simplicity). For more detail on the complete transformer operation, see [the illustrated transformer](http://jalammar.github.io/illustrated-transformer/). Visually, we can illustrate this operation as follows for \\(n=16, d_h=3\\): ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/conventional_attention.png) Note that for all visualizations `batch_size` and `config.num_attention_heads` is assumed to be 1. Some vectors, *e.g.* \\(\mathbf{x_3}\\) and its corresponding output vector \\(\mathbf{z_3}\\) are marked so that *LSH self-attention* can later be better explained. The presented logic can effortlessly be extended for multi-head self-attention (`config.num_attention_{h}eads` > 1). The reader is advised to read [the illustrated transformer](http://jalammar.github.io/illustrated-transformer/) as a reference for multi-head self-attention. Important to remember is that for each output vector \\(\mathbf{z}_{i}\\), the whole input sequence \\(\mathbf{X}\\) is processed. The tensor of the inner dot-product \\(\mathbf{Q}\mathbf{K}^T\\) has an asymptotic memory complexity of \\(\mathcal{O}(n^2)\\) which usually represents the memory bottleneck in a transformer model. This is also the reason why `bert-base-cased` has a `config.max_position_embedding_size` of only 512. ### Local Self-Attention **Local self-attention** is the obvious solution to reducing the \\(\mathcal{O}(n^2)\\) memory bottleneck, allowing us to model longer sequences with a reduced computational cost. 
In local self-attention the input \\( \mathbf{X} = \mathbf{X}_{1:n} = \mathbf{x}_{1}, \ldots, \mathbf{x}_{n} \\) is cut into \\(n_{c}\\) chunks: \\( \mathbf{X} = \left[\mathbf{X}_{1:l_{c}}, \ldots, \mathbf{X}_{(n_{c} - 1) * l_{c} : n_{c} * l_{c}}\right] \\) each of length `config.local_attn_chunk_length`, *i.e.* \\(l_{c}\\), and subsequently global self-attention is applied on each chunk separately. Let's take our input sequence for \\(n=16, d_h=3\\) again for visualization: ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/input.png) Assuming \\(l_{c} = 4, n_{c} = 4\\), chunked attention can be illustrated as follows: ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/chunked_attention_1.png) As can be seen, the attention operation is applied for each chunk \\(\mathbf{X}_{1:4}, \mathbf{X}_{5:8}, \mathbf{X}_{9:12}, \mathbf{X}_{13:16}\\) individually. The first drawback of this architecture becomes obvious: Some input vectors have no access to their immediate context, *e.g.* \\(\mathbf{x}_9\\) has no access to \\(\mathbf{x}_{8}\\) and vice-versa in our example. This is problematic because these tokens are not able to learn word representations that take their immediate context into account. A simple remedy is to augment each chunk with `config.local_num_chunks_before`, *i.e.* \\(n_{p}\\), chunks and `config.local_num_chunks_after`, *i.e.* \\(n_{a}\\), so that every input vector has at least access to \\(n_{p}\\) previous input vectors and \\(n_{a}\\) following input vectors. This can also be understood as chunking with overlap, where \\(n_{p}\\) and \\(n_{a}\\) define the amount of overlap each chunk has with all previous chunks and following chunks. We denote this extended local self-attention as follows: $$\mathbf{Z}^{\text{loc}} = \left[\mathbf{Z}_{1:l_{c}}^{\text{loc}}, \ldots, \mathbf{Z}_{(n_{c} - 1) * l_{c} : n_{c} * l_{c}}^{\text{loc}}\right], $$ with $$\mathbf{Z}_{l_{c} * (i - 1) + 1 : l_{c} * i}^{\text{loc}} = \text{SelfAttn}(\mathbf{X}_{l_{c} * (i - 1 - n_{p}) + 1: l_{c} * (i + n_{a})})\left[n_{p} * l_{c}: -n_{a} * l_{c}\right], \forall i \in \{1, \ldots, n_{c} \}$$ Okay, this formula looks quite complicated. Let's make it easier. In Reformer's self-attention layers \\(n_{a}\\) is usually set to 0 and \\(n_{p}\\) is set to 1, so let's write down the formula again for \\(i = 1\\): $$\mathbf{Z}_{1:l_{c}}^{\text{loc}} = \text{SelfAttn}(\mathbf{X}_{-l_{c} + 1: l_{c}})\left[l_{c}:\right]$$ We notice that we have a circular relationship so that the first segment can attend the last segment as well. Let's illustrate this slightly enhanced local attention again. First, we apply self-attention within each windowed segment and keep only the central output segment. ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/local_attention_2.png) Finally, the relevant output is concatenated to \\(\mathbf{Z}^{\text{loc}}\\) and looks as follows. ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/local_attention_3.png) Note that local self-attention is implemented in an efficient way so that no output is computed and subsequently "thrown-out", as shown here for illustration purposes by the red cross.
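To make the chunking above concrete, here is a small toy sketch of local self-attention with \\(n_{p}=1\\) and \\(n_{a}=0\\). It uses random projections instead of trained weights and leaves out the normalization factor, as in the formulas above; it is an illustration, not the actual Reformer implementation.

```python
import torch

# toy local self-attention with one preceding chunk of overlap (n_p = 1, n_a = 0)
n, d_h, l_c = 16, 3, 4                       # sequence length, hidden size, chunk length
n_c = n // l_c                               # number of chunks
X = torch.randn(n, d_h)
W_q, W_k, W_v = (torch.randn(d_h, d_h) for _ in range(3))
Q, K, V = X @ W_q, X @ W_k, X @ W_v

outputs = []
for i in range(n_c):
    # each chunk attends to itself and to the previous chunk (circular for the first chunk)
    idx = [j % n for j in range((i - 1) * l_c, (i + 1) * l_c)]
    scores = torch.softmax(Q[idx] @ K[idx].T, dim=-1)
    z = scores @ V[idx]
    outputs.append(z[l_c:])                  # keep only the output of the current chunk

Z_loc = torch.cat(outputs)                   # shape (n, d_h), analogous to Z^loc above
print(Z_loc.shape)                           # torch.Size([16, 3])
```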
It's important to note here that extending the input vectors for each chunked self-attention function allows *each* single output vector \\( \mathbf{z}_{i} \\) of this self-attention function to learn better vector representations. E.g. each of the output vectors \\( \mathbf{z}_{5}^{\text{loc}}, \mathbf{z}_{6}^{\text{loc}}, \mathbf{z}_{7}^{\text{loc}}, \mathbf{z}_{8}^{\text{loc}} \\) can take into account all of the input vectors \\( \mathbf{X}_{1:8} \\) to learn better representations. The gain in memory consumption is quite obvious: The \\( \mathcal{O}(n^2) \\) memory complexity is broken down for each segment individually so that the total asymptotic memory consumption is reduced to \\( \mathcal{O}(n_{c} * l_{c}^2) = \mathcal{O}(n * l_{c}) \\). This enhanced local self-attention is better than the vanilla local self-attention architecture but still has a major drawback in that every input vector can only attend to a local context of predefined size. For NLP tasks that do not require the transformer model to learn long-range dependencies between the input vectors, which include arguably *e.g.* speech recognition, named entity recognition and causal language modeling of short sentences, this might not be a big issue. Many NLP tasks do require the model to learn long-range dependencies, so that local self-attention could lead to significant performance degradation, *e.g.* * *Question-answering*: the model has to learn the relationship between the question tokens and relevant answer tokens which will most likely not be in the same local range * *Multiple-Choice*: the model has to compare multiple answer token segments to each other which are usually separated by a significant length * *Summarization*: the model has to learn the relationship between a long sequence of context tokens and a shorter sequence of summary tokens, whereas the relevant relationships between context and summary can most likely not be captured by local self-attention * etc... Local self-attention on its own is most likely not sufficient for the transformer model to learn the relevant relationships of input vectors (tokens) to each other. Therefore, Reformer additionally employs an efficient self-attention layer that approximates global self-attention, called *LSH self-attention*. ### LSH Self-Attention Alright, now that we have understood how local self-attention works, we can take a stab at the probably most innovative piece of Reformer: **Locality sensitive hashing (LSH) Self-Attention**. The premise of LSH self-attention is to be more or less as efficient as local self-attention while approximating global self-attention. LSH self-attention relies on the LSH algorithm as presented in [Andoni et al (2015)](https://arxiv.org/abs/1509.02897), hence its name. The idea behind LSH self-attention is based on the insight that if \\(n\\) is large, the softmax applied on the \\(\mathbf{Q}\mathbf{K}^T\\) attention dot-product weights only very few value vectors with values significantly larger than 0 for each query vector. Let's explain this in more detail. Let \\(\mathbf{k}_{i} \in \mathbf{K} = \left[\mathbf{k}_1, \ldots, \mathbf{k}_n \right]^T\\) and \\(\mathbf{q}_{i} \in \mathbf{Q} = \left[\mathbf{q}_1, \ldots, \mathbf{q}_n\right]^T\\) be the key and query vectors. For each \\(\mathbf{q}_{i}\\), the computation \\(\text{softmax}(\mathbf{q}_{i}^T \mathbf{K}^T)\\) can be approximated by using only those key vectors of \\(\mathbf{k}_{j}\\) that have a high cosine similarity with \\(\mathbf{q}_{i}\\). 
This owes to the fact that the softmax function puts exponentially more weight on larger input values. So far so good, the next problem is to efficiently find the vectors that have a high cosine similarity with \\(\mathbf{q}_{i}\\) for all \\(i\\). First, the authors of Reformer notice that sharing the query and key projections: \\(\mathbf{Q} = \mathbf{K}\\) does not impact the performance of a transformer model \\({}^1\\). Now, instead of having to find the key vectors of high cosine similarity for each query vector \\(q_i\\), only the cosine similarity of query vectors to each other has to be found. This is important because there is a transitive property to the query-query vector dot product approximation: If \\(\mathbf{q}_{i}\\) has a high cosine similarity to the query vectors \\(\mathbf{q}_{j}\\) and \\(\mathbf{q}_{k}\\), then \\(\mathbf{q}_{j}\\) also has a high cosine similarity to \\(\mathbf{q}_{k}\\). Therefore, the query vectors can be clustered into buckets, such that all query vectors that belong to the same bucket have a high cosine similarity to each other. Let's define \\(C_{m}\\) as the *mth* set of position indices, such that their corresponding query vectors are in the same bucket: \\(C_{m} = \{ i | \text{ s.t. } \mathbf{q}_{i} \in \text{mth cluster}\}\\) and `config.num_buckets`, *i.e.* \\(n_{b}\\), as the number of buckets. For each set of indices \\(C_{m}\\), the softmax function on the corresponding bucket of query vectors \\(\text{softmax}(\mathbf{Q}_{i \in C_{m}} \mathbf{Q}^T_{i \in C_{m}})\\) approximates the softmax function of global self-attention with shared query and key projections \\(\text{softmax}(\mathbf{q}_{i}^T \mathbf{Q}^T)\\) for all position indices \\(i\\) in \\(C_{m}\\). Second, the authors make use of the **LSH** algorithm to cluster the query vectors into a predefined number of buckets \\(n_{b}\\). The LSH algorithm is an ideal choice here because it is very efficient and is an approximation of the nearest neighbor algorithm for cosine similarity. Explaining the LSH scheme is out-of-scope for this notebook, so let's just keep in mind that for each vector \\(\mathbf{q}_{i}\\) the LSH algorithm attributes its position index \\(i\\) to one of \\(n_{b}\\) predefined buckets, *i.e.* \\(\text{LSH}(\mathbf{q}_{i}) = m\\) with \\(i \in \{1, \ldots, n\}\\) and \\(m \in \{1, \ldots, n_{b}\}\\). Visually, we can illustrate this as follows for our original example: ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/lsh_hashing.png) Third, it can be noted that having clustered all query vectors in \\(n_{b}\\) buckets, the corresponding set of indices \\(C_{m}\\) can be used to permute the input vectors \\(\mathbf{x}_1, \ldots, \mathbf{x}_n\\) accordingly \\({}^2\\) so that shared query-key self-attention can be applied piecewise similar to local attention. Let's clarify with our example input vectors \\(\mathbf{X} = \mathbf{x}_1, ..., \mathbf{x}_{16}\\) and assume `config.num_buckets=4` and `config.lsh_chunk_length = 4`. Looking at the graphic above we can see that we have assigned each query vector \\( \mathbf{q}_1, \ldots, \mathbf{q}_{16} \\) to one of the clusters \\( \mathcal{C}_{1}, \mathcal{C}_{2}, \mathcal{C}_{3}, \mathcal{C}_{4} \\) . 
If we now sort the corresponding input vectors \\( \mathbf{x}_1, \ldots, \mathbf{x}_{16} \\) accordingly, we get the following permuted input \\( \mathbf{X'} \\): ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/lsh_perm.png) The self-attention mechanism should be applied for each cluster individually so that for each cluster \\( \mathcal{C}_m \\) the corresponding output is calculated as follows: \\( \mathbf{Z}^{\text{LSH}}_{i \in \mathcal{C}_m} = \text{SelfAttn}_{\mathbf{Q}=\mathbf{K}}(\mathbf{X}_{i \in \mathcal{C}_m}) \\). Let's illustrate this again for our example. ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/lsh_cluster_attn.png) As can be seen, the self-attention function operates on different sizes of matrices, which is suboptimal for efficient batching in GPU and TPU. To overcome this problem, the permuted input can be chunked the same way it is done for local attention so that each chunk is of size `config.lsh_chunk_length`. By chunking the permuted input, a bucket might be split into two different chunks. To remedy this problem, in LSH self-attention each chunk attends to its previous chunk `config.lsh_num_chunks_before=1` in addition to itself, the same way local self-attention does (`config.lsh_num_chunks_after` is usually set to 0). This way, we can be assured that all vectors in a bucket attend to each other with a high probability \\({}^3\\). All in all for all chunks \\( k \in \{1, \ldots, n_{c}\} \\), LSH self-attention can be noted down as follows: $$ \mathbf{Z'}_{l_{c} * k + 1:l_{c} * (k + 1)}^{\text{LSH}} = \text{SelfAttn}_{\mathbf{Q} = \mathbf{K}}(\mathbf{X'}_{l_{c} * k + 1): l_{c} * (k + 1)})\left[l_{c}:\right] $$ with \\(\mathbf{X'}\\) and \\( \mathbf{Z'} \\) being the input and output vectors permuted according to the LSH algorithm. Enough complicated formulas, let's illustrate LSH self-attention. The permuted vectors \\(\mathbf{X'}\\) as shown above are chunked and shared query key self-attention is applied to each chunk. ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/lsh_attention_2.png) Finally, the output \\(\mathbf{Z'}^{\text{LSH}}\\) is reordered to its original permutation. ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/lsh_attention_3.png) One important feature to mention here as well is that the accuracy of LSH self-attention can be improved by running LSH self-attention `config.num_hashes`, e.g. \\(n_{h} \\) times in parallel, each with a different random LSH hash. By setting `config.num_hashes > 1`, for each output position \\( i \\), multiple output vectors \\( \mathbf{z}^{\text{LSH}, 1}_{i}, \ldots, \mathbf{z}^{\text{LSH}, n_{h}}_{i} \\) are computed and subsequently merged: \\( \mathbf{z}^{\text{LSH}}_{i} = \sum_k^{n_{h}} \mathbf{Z}^{\text{LSH}, k}_{i} * \text{weight}^k_i \\). The \\( \text{weight}^k_i \\) represents the importance of the output vectors \\( \mathbf{z}^{\text{LSH}, k}_{i} \\) of hashing round \\( k \\) in comparison to the other hashing rounds, and is exponentially proportional to the normalization term of their softmax computation. 
The intuition behind this is that if the corresponding query vector \\( \mathbf{q}_{i}^{k} \\) has a high cosine similarity with all other query vectors in its respective chunk, then the softmax normalization term of this chunk tends to be high, so that the corresponding output vector \\( \mathbf{z}^{\text{LSH}, k}_{i} \\) should be a better approximation to global attention and thus receive more weight than output vectors of hashing rounds with a lower softmax normalization term. For more detail see Appendix A of the [paper](https://arxiv.org/pdf/2001.04451.pdf). For our example, multi-round LSH self-attention can be illustrated as follows.

![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/lsh_attention_4.png)

Great. That's it. Now we know how LSH self-attention works in Reformer.

Regarding the memory complexity, we now have two terms that compete with each other to be the memory bottleneck: the dot-product \\( \mathcal{O}(n_{h} * n_{c} * l_{c}^2) = \mathcal{O}(n * n_{h} * l_{c}) \\) and the required memory for LSH bucketing \\( \mathcal{O}(n * n_{h} * \frac{n_{b}}{2}) \\), with \\( l_{c} \\) being the chunk length. Because for large \\( n \\), the number of buckets \\( \frac{n_{b}}{2} \\) grows much faster than the chunk length \\( l_{c} \\), the user can again factorize the number of buckets `config.num_buckets` as explained [here](https://huggingface.co/transformers/model_doc/reformer.html#lsh-self-attention).

Let's recap quickly what we have gone through above:

1. We want to approximate global attention using the knowledge that the softmax operation only puts significant weights on very few key vectors.
2. If key vectors are equal to query vectors this means that *for each* query vector \\( \mathbf{q}_{i} \\), the softmax only puts significant weight on other query vectors that are similar in terms of cosine similarity.
3. This relationship works in both directions, meaning if \\( \mathbf{q}_{j} \\) is similar to \\( \mathbf{q}_{i} \\), then \\( \mathbf{q}_{i} \\) is also similar to \\( \mathbf{q}_{j} \\), so that we can do a global clustering before applying self-attention on a permuted input.
4. We apply local self-attention on the permuted input and re-order the output to its original permutation.

---

\\( {}^{1} \\) The authors run some preliminary experiments confirming that shared query key self-attention performs more or less as well as standard self-attention.

\\( {}^{2} \\) To be more exact, the query vectors within a bucket are sorted according to their original order. This means if, *e.g.*, the vectors \\( \mathbf{q}_1, \mathbf{q}_3, \mathbf{q}_7 \\) are all hashed to bucket 2, the order of the vectors in bucket 2 would still be \\( \mathbf{q}_1 \\), followed by \\( \mathbf{q}_3 \\) and \\( \mathbf{q}_7 \\).

\\( {}^3 \\) On a side note, it should be mentioned that the authors put a mask on the query vector \\( \mathbf{q}_{i} \\) to prevent the vector from attending to itself. Because the cosine similarity of a vector to itself will always be as high or higher than the cosine similarity to other vectors, the query vectors in shared query key self-attention are strongly discouraged from attending to themselves.

### Benchmark

Benchmark tools were recently added to Transformers - see [here](https://github.com/huggingface/transformers/blob/master/notebooks/05-benchmark.ipynb) for a more detailed explanation.
To show how much memory can be saved using "local" + "LSH" self-attention, the Reformer model `google/reformer-enwik8` is benchmarked for different `local_attn_chunk_length` and `lsh_attn_chunk_length`. The default configuration and usage of the `google/reformer-enwik8` model can be checked in more detail [here](https://huggingface.co/google/reformer-enwik8).

Let's first do some necessary imports and installs.

```
#@title Installs and Imports
# pip installs
!pip -qq install git+https://github.com/huggingface/transformers.git
!pip install -qq py3nvml

from transformers import ReformerConfig, PyTorchBenchmark, PyTorchBenchmarkArguments
```

First, let's benchmark the memory usage of the Reformer model using *global* self-attention. This can be achieved by setting `lsh_attn_chunk_length` = `local_attn_chunk_length` = 16386 so that for all input sequences shorter than or equal to 16386 tokens, the model automatically switches to global self-attention.

```
config = ReformerConfig.from_pretrained("google/reformer-enwik8", lsh_attn_chunk_length=16386, local_attn_chunk_length=16386, lsh_num_chunks_before=0, local_num_chunks_before=0)
benchmark_args = PyTorchBenchmarkArguments(sequence_lengths=[2048, 4096, 8192, 16386], batch_sizes=[1], models=["Reformer"], no_speed=True, no_env_print=True)
benchmark = PyTorchBenchmark(configs=[config], args=benchmark_args)
result = benchmark.run()
```

    1 / 1
    Doesn't fit on GPU. CUDA out of memory. Tried to allocate 2.00 GiB (GPU 0; 11.17 GiB total capacity; 8.87 GiB already allocated; 1.92 GiB free; 8.88 GiB reserved in total by PyTorch)

    ====================      INFERENCE - MEMORY - RESULT       ====================
    --------------------------------------------------------------------------------
              Model Name             Batch Size     Seq Length    Memory in MB
    --------------------------------------------------------------------------------
               Reformer                  1              2048            1465
               Reformer                  1              4096            2757
               Reformer                  1              8192            7893
               Reformer                  1             16386            N/A
    --------------------------------------------------------------------------------

The longer the input sequence, the more visible the quadratic relationship \\( \mathcal{O}(n^2) \\) between input sequence length and peak memory usage becomes. As can be seen, in practice it would require a much longer input sequence to clearly observe that doubling the input sequence quadruples the peak memory usage.

For this `google/reformer-enwik8` model using global attention, a sequence length of over 16K already results in a memory overflow.

Now, let's activate *local* and *LSH* self-attention by using the model's default parameters.

```
config = ReformerConfig.from_pretrained("google/reformer-enwik8")
benchmark_args = PyTorchBenchmarkArguments(sequence_lengths=[2048, 4096, 8192, 16384, 32768, 65436], batch_sizes=[1], models=["Reformer"], no_speed=True, no_env_print=True)
benchmark = PyTorchBenchmark(configs=[config], args=benchmark_args)
result = benchmark.run()
```

    1 / 1
    Doesn't fit on GPU. CUDA out of memory. Tried to allocate 2.00 GiB (GPU 0; 11.17 GiB total capacity; 7.85 GiB already allocated; 1.74 GiB free; 9.06 GiB reserved in total by PyTorch)
    Doesn't fit on GPU. CUDA out of memory.
    Tried to allocate 4.00 GiB (GPU 0; 11.17 GiB total capacity; 6.56 GiB already allocated; 3.99 GiB free; 6.81 GiB reserved in total by PyTorch)

    ====================      INFERENCE - MEMORY - RESULT       ====================
    --------------------------------------------------------------------------------
              Model Name             Batch Size     Seq Length    Memory in MB
    --------------------------------------------------------------------------------
               Reformer                  1              2048            1785
               Reformer                  1              4096            2621
               Reformer                  1              8192            4281
               Reformer                  1             16384            7607
               Reformer                  1             32768            N/A
               Reformer                  1             65436            N/A
    --------------------------------------------------------------------------------

As expected, using local and LSH self-attention is much more memory efficient for longer input sequences, so that on the 11GB RAM GPU used in this notebook the model only runs out of memory for input sequences longer than 16K tokens.

## 2. Chunked Feed Forward Layers

Transformer-based models often employ very large feed forward layers after the self-attention layer, which are applied to each position in parallel. This layer can therefore take up a significant amount of the overall memory and sometimes even represent the memory bottleneck of a model. First introduced in the Reformer paper, feed forward chunking is a technique that allows one to trade increased time consumption for reduced memory consumption.

### Chunked Feed Forward Layer in Reformer

In Reformer, the _LSH_- or _local_ self-attention layer is usually followed by a residual connection, which then defines the first part in a *transformer block*. For more detail on this please refer to this [blog](http://jalammar.github.io/illustrated-transformer/).

The output of the first part of the *transformer block*, called the *normed self-attention* output, can be written as \\( \mathbf{\overline{Z}} = \mathbf{Z} + \mathbf{X} \\), with \\( \mathbf{Z} \\) being either \\( \mathbf{Z}^{\text{LSH}} \\) or \\( \mathbf{Z}^\text{loc} \\) in Reformer.

For our example input \\( \mathbf{x}_1, \ldots, \mathbf{x}_{16} \\), we illustrate the normed self-attention output as follows.

![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/layer_normed_output.png)

Now, the second part of a *transformer block* usually consists of two feed forward layers \\( ^{1} \\): \\( \text{Linear}_{\text{int}}(\ldots) \\), which processes \\( \mathbf{\overline{Z}} \\) to an intermediate output \\( \mathbf{Y}_{\text{int}} \\), and \\( \text{Linear}_{\text{out}}(\ldots) \\), which processes the intermediate output to the final output \\( \mathbf{Y}_{\text{out}} \\). The two feed forward layers can be defined by

$$\mathbf{Y}_{\text{out}} = \text{Linear}_{\text{out}}(\mathbf{Y}_\text{int}) = \text{Linear}_{\text{out}}(\text{Linear}_{\text{int}}(\mathbf{\overline{Z}})).$$

It is important to remember at this point that mathematically the output of a feed forward layer at position \\( i \\), *i.e.* \\( \mathbf{y}_{\text{out}, i} \\), only depends on the input at this position \\( \mathbf{\overline{z}}_{i} \\). In contrast to the self-attention layer, every output \\( \mathbf{y}_{\text{out}, i} \\) is therefore completely independent of all inputs \\( \mathbf{\overline{z}}_{j \ne i} \\) of different positions.

Let's illustrate the feed forward layers for \\( \mathbf{\overline{z}}_1, \ldots, \mathbf{\overline{z}}_{16} \\).

![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/feed_forward.png)

As can be seen from the illustration, all input vectors \\( \mathbf{\overline{z}}_{i} \\) are processed by the same feed forward layer in parallel.
It becomes interesting when one takes a look at the output dimensions of the feed forward layers. In Reformer, the output dimension of \\( \text{Linear}_{\text{int}} \\) is defined as `config.feed_forward_size`, *i.e.* \\( d_{f} \\), and the output dimension of \\( \text{Linear}_{\text{out}} \\) is defined as `config.hidden_size`, *i.e.* \\( d_{h} \\).

The Reformer authors observed that in a transformer model the intermediate dimension \\( d_{f} \\) usually tends to be much larger than the output dimension \\(^{2}\\) \\( d_{h} \\). This means that the tensor \\( \mathbf{Y}_\text{int} \\) of dimension \\( d_{f} \times n \\) allocates a significant amount of the total memory and can even become the memory bottleneck.

To get a better feeling for the differences in dimensions let's picture the matrices \\( \mathbf{Y}_\text{int} \\) and \\( \mathbf{Y}_\text{out} \\) for our example.

![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/feed_forward_matrix.png)

It is becoming quite obvious that the tensor \\( \mathbf{Y}_\text{int} \\) holds much more memory ( \\( \frac{d_{f}}{d_{h}} \\) times as much, to be exact) than \\( \mathbf{Y}_{\text{out}} \\). But is it even necessary to compute the full intermediate matrix \\( \mathbf{Y}_\text{int} \\)? Not really, because only the output matrix \\( \mathbf{Y}_\text{out} \\) is relevant. To trade reduced memory consumption for increased computation time, one can thus chunk the linear layers' computation and only process one chunk at a time. Defining `config.chunk_size_feed_forward` as \\( c_{f} \\), chunked linear layers are defined as \\( \mathbf{Y}_{\text{out}} = \left[\mathbf{Y}_{\text{out}, 1: c_{f}}, \ldots, \mathbf{Y}_{\text{out}, (n - c_{f} + 1): n}\right] \\) with \\( \mathbf{Y}_{\text{out}, (i - 1) * c_{f} + 1 : i * c_{f}} = \text{Linear}_{\text{out}}(\text{Linear}_{\text{int}}(\mathbf{\overline{Z}}_{(i - 1) * c_{f} + 1 : i * c_{f}})) \\). In practice, it just means that the output is incrementally computed and concatenated to avoid having to store the whole intermediate tensor \\( \mathbf{Y}_{\text{int}} \\) in memory.

Assuming \\( c_{f}=1 \\) for our example, we can illustrate the incremental computation of the output for position \\( i=9 \\) as follows.

![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/chunked_feed_forward.png)

By processing the inputs in chunks of size 1, the only tensors that have to be stored in memory at the same time are \\( \mathbf{Y}_\text{out} \\) of a maximum size of \\( 16 \times d_{h} \\), \\( \mathbf{y}_{\text{int}, i} \\) of size \\( d_{f} \\) and the input \\( \mathbf{\overline{Z}} \\) of size \\( 16 \times d_{h} \\), with \\( d_{h} \\) being `config.hidden_size` \\(^{3}\\).

Finally, it is important to remember that *chunked linear layers* yield a mathematically equivalent output to conventional linear layers and can therefore be applied to all transformer linear layers. Making use of `config.chunk_size_feed_forward` therefore allows a better trade-off between memory and speed in certain use cases.

---

\\( {}^1 \\) For a simpler explanation, the layer norm layer which is normally applied to \\( \mathbf{\overline{Z}} \\) before being processed by the feed forward layers is omitted for now.

\\( {}^2 \\) In `bert-base-uncased`, *e.g.*, the intermediate dimension \\( d_{f} \\) is, at 3072, four times larger than the output dimension \\( d_{h} \\).
\\( {}^3 \\) As a reminder, `config.num_attention_heads` is assumed to be 1 for the sake of clarity and illustration in this notebook, so that the output of the self-attention layers can be assumed to be of size `config.hidden_size`.

More information on chunked linear / feed forward layers can also be found [here](https://huggingface.co/transformers/glossary.html#feed-forward-chunking) in the 🤗Transformers docs.

### Benchmark

Let's test how much memory can be saved by using chunked feed forward layers.

```
#@title Installs and Imports
# pip installs
!pip -qq install git+https://github.com/huggingface/transformers.git
!pip install -qq py3nvml

from transformers import ReformerConfig, PyTorchBenchmark, PyTorchBenchmarkArguments
```

First, let's compare the default `google/reformer-enwik8` model without chunked feed forward layers to the one with chunked feed forward layers.

```
config_no_chunk = ReformerConfig.from_pretrained("google/reformer-enwik8")  # no chunk
config_chunk = ReformerConfig.from_pretrained("google/reformer-enwik8", chunk_size_feed_forward=1)  # feed forward chunk
benchmark_args = PyTorchBenchmarkArguments(sequence_lengths=[1024, 2048, 4096], batch_sizes=[8], models=["Reformer-No-Chunk", "Reformer-Chunk"], no_speed=True, no_env_print=True)
benchmark = PyTorchBenchmark(configs=[config_no_chunk, config_chunk], args=benchmark_args)
result = benchmark.run()
```

    1 / 2
    Doesn't fit on GPU. CUDA out of memory. Tried to allocate 2.00 GiB (GPU 0; 11.17 GiB total capacity; 7.85 GiB already allocated; 1.74 GiB free; 9.06 GiB reserved in total by PyTorch)
    2 / 2
    Doesn't fit on GPU. CUDA out of memory. Tried to allocate 2.00 GiB (GPU 0; 11.17 GiB total capacity; 7.85 GiB already allocated; 1.24 GiB free; 9.56 GiB reserved in total by PyTorch)

    ====================      INFERENCE - MEMORY - RESULT       ====================
    --------------------------------------------------------------------------------
              Model Name             Batch Size     Seq Length    Memory in MB
    --------------------------------------------------------------------------------
          Reformer-No-Chunk              8              1024            4281
          Reformer-No-Chunk              8              2048            7607
          Reformer-No-Chunk              8              4096            N/A
            Reformer-Chunk               8              1024            4309
            Reformer-Chunk               8              2048            7669
            Reformer-Chunk               8              4096            N/A
    --------------------------------------------------------------------------------

Interestingly, chunked feed forward layers do not seem to help here at all. The reason is that `config.feed_forward_size` is not sufficiently large to make a real difference. Only at the longer sequence length of 4096 can a slight decrease in memory usage be seen.

Let's see what happens to the peak memory usage if we increase the size of the feed forward layer by a factor of 4 and reduce the number of attention heads also by a factor of 4 so that the feed forward layer becomes the memory bottleneck.
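Before running that comparison, it is worth convincing ourselves that chunking really is a pure memory/time trade-off and does not change the result. The following is a minimal sketch with made-up sizes (`d_h`, `d_f`, `c_f`); it is not the implementation behind `config.chunk_size_feed_forward`, it only illustrates why the output is identical while the intermediate tensor only ever exists one chunk at a time.

```python
# Minimal sketch of feed forward chunking with made-up sizes (not the Reformer implementation).
import torch
import torch.nn as nn

torch.manual_seed(0)

n, d_h, d_f, c_f = 16, 4, 16, 1    # sequence length, hidden size, feed forward size, chunk size

linear_int = nn.Linear(d_h, d_f)   # Linear_int: d_h -> d_f
linear_out = nn.Linear(d_f, d_h)   # Linear_out: d_f -> d_h

z_bar = torch.randn(n, d_h)        # normed self-attention output

# Unchunked: materializes the full n x d_f intermediate tensor Y_int.
y_out_full = linear_out(linear_int(z_bar))

# Chunked: only a c_f x d_f intermediate tensor exists at any point in time.
y_out_chunked = torch.cat(
    [linear_out(linear_int(z_bar[i : i + c_f])) for i in range(0, n, c_f)], dim=0
)

print(torch.allclose(y_out_full, y_out_chunked, atol=1e-6))  # True
```

With that sanity check out of the way, let's run the benchmark with the enlarged feed forward layer.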
```
config_no_chunk = ReformerConfig.from_pretrained("google/reformer-enwik8", chunk_size_feed_forward=0, num_attention_heads=2, feed_forward_size=16384)  # no chunk
config_chunk = ReformerConfig.from_pretrained("google/reformer-enwik8", chunk_size_feed_forward=1, num_attention_heads=2, feed_forward_size=16384)  # feed forward chunk
benchmark_args = PyTorchBenchmarkArguments(sequence_lengths=[1024, 2048, 4096], batch_sizes=[8], models=["Reformer-No-Chunk", "Reformer-Chunk"], no_speed=True, no_env_print=True)
benchmark = PyTorchBenchmark(configs=[config_no_chunk, config_chunk], args=benchmark_args)
result = benchmark.run()
```

    1 / 2
    2 / 2

    ====================      INFERENCE - MEMORY - RESULT       ====================
    --------------------------------------------------------------------------------
              Model Name             Batch Size     Seq Length    Memory in MB
    --------------------------------------------------------------------------------
          Reformer-No-Chunk              8              1024            3743
          Reformer-No-Chunk              8              2048            5539
          Reformer-No-Chunk              8              4096            9087
            Reformer-Chunk               8              1024            2973
            Reformer-Chunk               8              2048            3999
            Reformer-Chunk               8              4096            6011
    --------------------------------------------------------------------------------

Now a clear decrease in peak memory usage can be seen for longer input sequences. As a conclusion, it should be noted that chunked feed forward layers only make sense for models with relatively few attention heads and large feed forward layers.

## 3. Reversible Residual Layers

Reversible residual layers were first introduced in [N. Gomez et al](https://arxiv.org/abs/1707.04585) and used to reduce memory consumption when training the popular *ResNet* model. Mathematically, reversible residual layers are slightly different to "real" residual layers but do not require the activations to be saved during the forward pass, which can drastically reduce memory consumption for training.

### Reversible Residual Layers in Reformer

Let's start by investigating why training a model requires much more memory than running it in inference.

When running a model in inference, the required memory equals more or less the memory it takes to compute the **single** largest tensor in the model. On the other hand, when training a model, the required memory equals more or less the **sum** of all differentiable tensors.

This is not surprising when considering how auto differentiation works in deep learning frameworks. These lecture [slides](https://www.cs.toronto.edu/~rgrosse/courses/csc321_2018/slides/lec10.pdf) by Roger Grosse of the University of Toronto are great to better understand auto differentiation.

In a nutshell, in order to calculate the gradient of a differentiable function (*e.g.* a layer), auto differentiation requires the gradient of the function's output and the function's input and output tensors. While the gradients are dynamically computed and subsequently discarded, the input and output tensors (*a.k.a* activations) of a function are stored during the forward pass.

Alright, let's apply this to a transformer model. A transformer model includes a stack of multiple so-called transformer layers. Each additional transformer layer forces the model to store more activations during the forward pass and thus increases the required memory for training. Let's take a more detailed look. A transformer layer essentially consists of two residual layers. The first residual layer represents the *self-attention* mechanism as explained in section 1) and the second residual layer represents the *linear* or feed-forward layers as explained in section 2).
Using the same notation as before, the input of a transformer layer *i.e.* \\( \mathbf{X} \\) is first normalized \\( ^{1} \\) and subsequently processed by the self-attention layer to get the output \\( \mathbf{Z} = \text{SelfAttn}(\text{LayerNorm}(\mathbf{X})) \\). We will abbreviate these two layers with \\( G \\) so that \\( \mathbf{Z} = G(\mathbf{X}) \\). Next, the residual \\( \mathbf{Z} \\) is added to the input \\( \mathbf{\overline{Z}} = \mathbf{Z} + \mathbf{X} \\) and the sum is fed into the second residual layer - the two linear layers. \\( \mathbf{\overline{Z}} \\) is processed by a second normalization layer, followed by the two linear layers to get \\( \mathbf{Y} = \text{Linear}(\text{LayerNorm}(\mathbf{Z} + \mathbf{X})) \\). We will abbreviate the second normalization layer and the two linear layers with \\( F \\) yielding \\( \mathbf{Y} = F(\mathbf{\overline{Z}}) \\). Finally, the residual \\( \mathbf{Y} \\) is added to \\( \mathbf{\overline{Z}} \\) to give the output of the transformer layer \\( \mathbf{\overline{Y}} = \mathbf{Y} + \mathbf{\overline{Z}} \\). Let's illustrate a complete transformer layer using the example of \\( \mathbf{x}_1, \ldots, \mathbf{x}_{16} \\). ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/normal_trans_resnet.png) To calculate the gradient of *e.g.* the self-attention block \\( G \\), three tensors have to be known beforehand: the gradient \\( \partial \mathbf{Z} \\), the output \\( \mathbf{Z} \\), and the input \\( \mathbf{X} \\). While \\( \partial \mathbf{Z} \\) can be calculated on-the-fly and discarded afterward, the values for \\( \mathbf{Z} \\) and \\( \mathbf{X} \\) have to be calculated and stored during the forward pass since it is not possible to recalculate them easily on-the-fly during backpropagation. Therefore, during the forward pass, large tensor outputs, such as the query-key dot product matrix \\( \mathbf{Q}\mathbf{K}^T \\) or the intermediate output of the linear layers \\( \mathbf{Y}^{\text{int}} \\), have to be stored in memory \\( ^{2} \\). Here, reversible residual layers come to our help. The idea is relatively straight-forward. The residual block is designed in a way so that instead of having to store the input and output tensor of a function, both can easily be recalculated during the backward pass so that no tensor has to be stored in memory during the forward pass. This is achieved by using two input streams \\( \mathbf{X}^{(1)}, \mathbf{X}^{(2)} \\), and two output streams \\( \mathbf{\overline{Y}}^{(1)}, \mathbf{\overline{Y}}^{(2)} \\). The first residual \\( \mathbf{Z} \\) is computed by the first output stream \\( \mathbf{Z} = G(\mathbf{X}^{(1)}) \\) and subsequently added to the input of the second input stream, so that \\( \mathbf{\overline{Z}} = \mathbf{Z} + \mathbf{X}^{(2)} \\). Similarly, the residual \\( \mathbf{Y} = F(\mathbf{\overline{Z}}) \\) is added to the first input stream again, so that the two output streams are defined by \\( \mathbf{Y}^{(1)} = \mathbf{Y} + \mathbf{X}^{(1)} \\) and \\( \mathbf{Y}^{(2)} = \mathbf{X}^{(2)} + \mathbf{Z} = \mathbf{\overline{Z}} \\). The reversible transformer layer can be visualized for \\( \mathbf{x}_1, \ldots, \mathbf{x}_{16} \\) as follows. 
![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/rev_trans_resnet.png)

As can be seen, the outputs \\( \mathbf{\overline{Y}}^{(1)}, \mathbf{\overline{Y}}^{(2)} \\) are calculated in a very similar way to \\( \mathbf{\overline{Y}} \\) of the non-reversible layer, but they are mathematically different. The authors of Reformer observe in some initial experiments that the performance of a reversible transformer model matches the performance of a standard transformer model. The first visible difference to the standard transformer layer is that there are two input streams and two output streams \\( ^{3} \\), which at first slightly increases the required memory for the forward pass. The two-stream architecture is crucial though for not having to save any activations during the forward pass. Let's explain. For backpropagation, the reversible transformer layer has to calculate the gradients \\( \partial G \\) and \\( \partial F \\). In addition to the gradients \\( \partial \mathbf{Y} \\) and \\( \partial \mathbf{Z} \\), which can be calculated on-the-fly, the tensor values \\( \mathbf{Y} \\) and \\( \mathbf{\overline{Z}} \\) have to be known for \\( \partial F \\), and the tensor values \\( \mathbf{Z} \\) and \\( \mathbf{X}^{(1)} \\) for \\( \partial G \\), to make auto-differentiation work.

If we assume to know \\( \mathbf{\overline{Y}}^{(1)}, \mathbf{\overline{Y}}^{(2)} \\), it can easily be seen from the graph that one can calculate \\( \mathbf{X}^{(1)}, \mathbf{X}^{(2)} \\) as follows. Since \\( \mathbf{\overline{Y}}^{(2)} = \mathbf{\overline{Z}} \\), we have \\( \mathbf{X}^{(1)} = \mathbf{\overline{Y}}^{(1)} - F(\mathbf{\overline{Y}}^{(2)}) \\). Great, now that \\( \mathbf{X}^{(1)} \\) is known, \\( \mathbf{X}^{(2)} \\) can be computed by \\( \mathbf{X}^{(2)} = \mathbf{\overline{Y}}^{(2)} - G(\mathbf{X}^{(1)}) \\). Alright now, \\( \mathbf{Z} \\) and \\( \mathbf{Y} \\) are trivial to compute via \\( \mathbf{Y} = \mathbf{\overline{Y}}^{(1)} - \mathbf{X}^{(1)} \\) and \\( \mathbf{Z} = \mathbf{\overline{Y}}^{(2)} - \mathbf{X}^{(2)} \\). So as a conclusion, if only the outputs \\( \mathbf{\overline{Y}}^{(1)}, \mathbf{\overline{Y}}^{(2)} \\) of the **last** reversible transformer layer are stored during the forward pass, all other relevant activations can be derived by making use of \\( G \\) and \\( F \\) during the backward pass and passing \\( \mathbf{X}^{(1)} \\) and \\( \mathbf{X}^{(2)} \\) down to the previous layer. The overhead of two forward passes of \\( G \\) and \\( F \\) per reversible transformer layer during backpropagation is traded against not having to store any activations during the forward pass. Not a bad deal!

**Note**: Recently, major deep learning frameworks have released functionality that allows one to store only certain activations and recompute larger ones during the backward propagation (TensorFlow [here](https://www.tensorflow.org/api_docs/python/tf/recompute_grad) and PyTorch [here](https://pytorch.org/docs/stable/checkpoint.html)). For standard reversible layers, this still means that at least one activation has to be stored for each transformer layer, but by defining which activations can dynamically be recomputed a lot of memory can be saved.
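To make the reconstruction argument above concrete, here is a small sketch of a reversible residual block in PyTorch. The functions \\( F \\) and \\( G \\) are stand-ins (simple layer norm + linear layers) and the shapes are made up; this only mirrors the equations above, it is not the actual Reformer implementation.

```python
# Sketch of a reversible residual block: the inputs can be recomputed from the outputs.
import torch
import torch.nn as nn

torch.manual_seed(0)
d = 8
G = nn.Sequential(nn.LayerNorm(d), nn.Linear(d, d))   # stands in for LayerNorm + SelfAttn
F = nn.Sequential(nn.LayerNorm(d), nn.Linear(d, d))   # stands in for LayerNorm + feed forward

def forward(x1, x2):
    z = G(x1)
    y2 = x2 + z          # = Z_bar
    y = F(y2)
    y1 = x1 + y
    return y1, y2

def reconstruct_inputs(y1, y2):
    # During the backward pass, the inputs are recomputed from the outputs alone.
    with torch.no_grad():
        x1 = y1 - F(y2)  # X^(1) = Y_bar^(1) - F(Y_bar^(2))
        x2 = y2 - G(x1)  # X^(2) = Y_bar^(2) - G(X^(1))
    return x1, x2

x1 = torch.randn(16, d)
x2 = x1.clone()          # in the first reversible layer both streams are initialized with X

y1, y2 = forward(x1, x2)
x1_rec, x2_rec = reconstruct_inputs(y1, y2)
print(torch.allclose(x1, x1_rec, atol=1e-5), torch.allclose(x2, x2_rec, atol=1e-5))  # True True
```

Running the snippet confirms that the inputs can be recovered from the outputs up to floating point precision, which is exactly what allows the backward pass to recompute all activations on the fly.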
---

\\( ^{1} \\) In the previous two sections, we have omitted the layer norm layers preceding both the self-attention layer and the linear layers. The reader should know that \\( \mathbf{X} \\) and \\( \mathbf{\overline{Z}} \\) are both processed by layer normalization before being fed into self-attention and the linear layers respectively.

\\( ^{2} \\) While in the design the dimension of \\( \mathbf{Q}\mathbf{K}^T \\) is written as \\( n \times n \\), in a *LSH self-attention* or *local self-attention* layer the dimension would only be \\( n \times l_{c} \times n_{h} \\) or \\( n \times l_{c} \\) respectively, with \\( l_{c} \\) being the chunk length and \\( n_{h} \\) the number of hashes.

\\( ^{3} \\) In the first reversible transformer layer \\( \mathbf{X}^{(2)} \\) is set to be equal to \\( \mathbf{X}^{(1)} \\).

### Benchmark

In order to measure the effect of reversible residual layers, we will compare the memory consumption of BERT with Reformer in training for an increasing number of layers.

```
#@title Installs and Imports
# pip installs
!pip -qq install git+https://github.com/huggingface/transformers.git
!pip install -qq py3nvml

from transformers import ReformerConfig, BertConfig, PyTorchBenchmark, PyTorchBenchmarkArguments
```

Let's measure the required memory for the standard `bert-base-uncased` BERT model by increasing the number of layers from 4 to 12.

```
config_4_layers_bert = BertConfig.from_pretrained("bert-base-uncased", num_hidden_layers=4)
config_8_layers_bert = BertConfig.from_pretrained("bert-base-uncased", num_hidden_layers=8)
config_12_layers_bert = BertConfig.from_pretrained("bert-base-uncased", num_hidden_layers=12)
benchmark_args = PyTorchBenchmarkArguments(sequence_lengths=[512], batch_sizes=[8], models=["Bert-4-Layers", "Bert-8-Layers", "Bert-12-Layers"], training=True, no_inference=True, no_speed=True, no_env_print=True)
benchmark = PyTorchBenchmark(configs=[config_4_layers_bert, config_8_layers_bert, config_12_layers_bert], args=benchmark_args)
result = benchmark.run()
```

    1 / 3
    2 / 3
    3 / 3

    ====================        TRAIN - MEMORY - RESULTS        ====================
    --------------------------------------------------------------------------------
              Model Name             Batch Size     Seq Length    Memory in MB
    --------------------------------------------------------------------------------
            Bert-4-Layers                 8              512             4103
            Bert-8-Layers                 8              512             5759
            Bert-12-Layers                8              512             7415
    --------------------------------------------------------------------------------

It can be seen that adding a single layer of BERT linearly increases the required memory by more than 400MB.
``` config_4_layers_reformer = ReformerConfig.from_pretrained("google/reformer-enwik8", num_hidden_layers=4, num_hashes=1) config_8_layers_reformer = ReformerConfig.from_pretrained("google/reformer-enwik8", num_hidden_layers=8, num_hashes=1) config_12_layers_reformer = ReformerConfig.from_pretrained("google/reformer-enwik8", num_hidden_layers=12, num_hashes=1) benchmark_args = PyTorchBenchmarkArguments(sequence_lengths=[512], batch_sizes=[8], models=["Reformer-4-Layers", "Reformer-8-Layers", "Reformer-12-Layers"], training=True, no_inference=True, no_speed=True, no_env_print=True) benchmark = PyTorchBenchmark(configs=[config_4_layers_reformer, config_8_layers_reformer, config_12_layers_reformer], args=benchmark_args) result = benchmark.run() ``` 1 / 3 2 / 3 3 / 3 ==================== TRAIN - MEMORY - RESULTS ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Memory in MB -------------------------------------------------------------------------------- Reformer-4-Layers 8 512 4607 Reformer-8-Layers 8 512 4987 Reformer-12-Layers 8 512 5367 -------------------------------------------------------------------------------- For Reformer, on the other hand, adding a layer adds significantly less memory in practice. Adding a single layer increases the required memory on average by less than 100MB so that a much larger 12-Layer `reformer-enwik8` model requires less memory than a 12-Layer `bert-base-uncased` model. ## 4. Axial Positional Encodings Reformer makes it possible to process huge input sequences. However, for such long input sequences standard positional encoding weight matrices alone would use more than 1GB to store its weights. To prevent such large positional encoding matrices, the official Reformer code introduced *Axial Position Encodings*. **Important:** *Axial Position Encodings were not explained in the official paper, but can be well understood from looking into the code and talking to the authors* ### Axial Positional Encodings in Reformer Transformers need positional encodings to account for the order of words in the input because self-attention layers have *no notion of order*. Positional encodings are usually defined by a simple look-up matrix \\( \mathbf{E} = \left[\mathbf{e}_1, \ldots, \mathbf{e}_{n_\text{max}}\right] \\) The positional encoding vector \\( \mathbf{e}_{i} \\) is then simply added to the *ith* input vector \\( \mathbf{x}_{i} + \mathbf{e}_{i} \\) so that the model can distinguish if an input vector (*a.k.a* token) is at position \\( i \\) or \\( j \\). For every input position, the model needs to be able to look up the corresponding positional encoding vector so that the dimension of \\( \mathbf{E} \\) is defined by the maximum length of input vectors the model can process `config.max_position_embeddings`, *i.e.* \\( n_\text{max} \\), and the `config.hidden_size`, *i.e.* \\( d_{h} \\) of the input vectors. Assuming \\( d_{h}=4 \\) and \\( n_\text{max}=49 \\), such a positional encoding matrix can be visualized as follows: ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/positional_encodings_default.png) Here, we showcase only the positional encodings \\( \mathbf{e}_{1} \\), \\( \mathbf{e}_{2} \\), and \\( \mathbf{e}_{49} \\) each of dimension, *a.k.a* height 4. 
Let's imagine, we want to train a Reformer model on sequences of a length of up to 0.5M tokens and an input vector `config.hidden_size` of 1024 (see notebook [here](https://github.com/patrickvonplaten/notebooks/blob/master/PyTorch_Reformer.ipynb)). The corresponding positional embeddings have a size of \\( 0.5M \times 1024 \sim 512M \\) parameters, which corresponds to a size of 2GB. Such positional encodings would use an unnecessarily large amount of memory both when loading the model in memory and when saving the model on a hard drive. The Reformer authors managed to drastically shrink the positional encodings in size by cutting the `config.hidden_size` dimension in two and smartly factorizing the \\( n_\text{max} \\) dimension. In Transformer, the user can decide into which shape \\( n_\text{max} \\) can be factorized into by setting `config.axial_pos_shape` to an appropriate list of two values \\( n_\text{max}^1 \\) and \\( n_\text{max}^2 \\) so that \\( n_\text{max}^1 \times n_\text{max}^2 = n_\text{max} \\). By setting `config.axial_pos_embds_dim` to an appropriate list of two values \\( d_{h}^{1} \\) and \\( d_{h}^2 \\) so that \\( d_{h}^1 + d_{h}^2 = d_{h} \\), the user can decide how the hidden size dimension should be cut. Now, let's visualize and explain more intuitively. One can think of factorizing \\( n_{\text{max}} \\) as folding the dimension into a third axis, which is shown in the following for the factorization `config.axial_pos_shape = [7, 7]`: ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/3d_positional_encoding.png) Each of the three standing rectangular prisms corresponds to one of the encoding vectors \\( \mathbf{e}_{1}, \mathbf{e}_{2}, \mathbf{e}_{49} \\), but we can see that the 49 encoding vectors are divided into 7 rows of 7 vectors each. Now the idea is to use only one row of 7 encoding vectors and expand those vectors to the other 6 rows, essentially reusing their values. Because it is discouraged to have the same values for different encoding vectors, each vector of dimension (*a.k.a* height) `config.hidden_size=4` is cut into the lower encoding vector \\( \mathbf{e}_\text{down} \\) of size \\( 1 \\) and \\( \mathbf{e}_\text{up} \\) of size \\( 3 \\), so that the lower part can be expanded along the row dimension and the upper part can be expanded along the column dimension. Let's visualize for more clarity. ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/3d_positional_encoding_cut.png) We can see that we have cut the embedding vectors into \\( \mathbf{e}_\text{down} \\) (*in blue*) and \\( \mathbf{e}_\text{up} \\) (*in yellow*). Now for the "sub"-vectors \\( \mathbf{E}_\text{down} = \left[\mathbf{e}_{\text{down},1}, \ldots, \mathbf{e}_{\text{down},49}\right] \\) only the first row, *a.k.a.* the width in the graphic, of \\( 7 \\) is kept and expanded along the column dimension, *a.k.a.* the depth of the graphic. Inversely, for the "sub"-vectors \\( \mathbf{E}_\text{up} = \left[\mathbf{e}_{\text{up},1}, \ldots, \mathbf{e}_{\text{up},49}\right] \\) only the first column of \\( 7 \\) is kept and expanded along the row dimension. 
The resulting embedding vectors \\( \mathbf{e'}_{i} \\) then correspond to $$\mathbf{e'}_{i} = \left[ \left[\mathbf{e}_{\text{down, } i \% n_\text{max}^1}\right]^T, \left[\mathbf{e}_{\text{up, } \left \lfloor{\frac{i}{{n}^2_{\text{max}}}}\right \rfloor} \right]^T \right]^T $$ whereas \\( n_\text{max}^1 = 7 \\) and \\( n_\text{max}^2 = 7 \\) in our example. These new encodings \\( \mathbf{E'} = \left[\mathbf{e'}_{1}, \ldots, \mathbf{e'}_{n_\text{max}}\right] \\) are called **Axial Position Encodings**. In the following, these axial position encodings are illustrated in more detail for our example. ![alt text](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/reformer_benchmark/axial_pos_encoding.png) Now it should be more understandable how the final positional encoding vectors \\( \mathbf{E'} \\) are calculated only from \\( \mathbf{E}_{\text{down}} \\) of dimension \\( d_{h}^1 \times n_{\text{max}^1} \\) and \\( \mathbf{E}_{\text{up}} \\) of dimension \\( d_{h}^2 \times n_{\text{max}}^2 \\). The crucial aspect to see here is that Axial Positional Encodings make sure that none of the vectors \\( \left[\mathbf{e'}_1, \ldots, \mathbf{e'}_{n_{\text{max}}}\right] \\) are equal to each other by design and that the overall size of the encoding matrix is reduced from \\( n_{\text{max}} \times d_{h} \\) to \\( n_{\text{max}}^1 \times d_{h}^1 + n_\text{max}^2 \times d_{h}^2 \\). By allowing each axial positional encoding vector to be different by design the model is given much more flexibility to learn efficient positional representations if axial positional encodings are learned by the model. To demonstrate the drastic reduction in size, let's assume we would have set `config.axial_pos_shape = [1024, 512]` and `config.axial_pos_embds_dim = [512, 512]` for a Reformer model that can process inputs up to a length of 0.5M tokens. The resulting axial positional encoding matrix would have had a size of only \\( 1024 \times 512 + 512 \times 512 \sim 800K \\) parameters which corresponds to roughly 3MB. This is a drastic reduction from the 2GB a standard positional encoding matrix would require in this case. For a more condensed and math-heavy explanation please refer to the 🤗Transformers docs [here](https://huggingface.co/transformers/model_doc/reformer.html#axial-positional-encodings). ### Benchmark Lastly, let's also compare the peak memory consumption of conventional positional embeddings to *axial positional embeddings*. ``` #@title Installs and Imports # pip installs !pip -qq install git+https://github.com/huggingface/transformers.git !pip install -qq py3nvml from transformers import ReformerConfig, PyTorchBenchmark, PyTorchBenchmarkArguments, ReformerModel ``` Positional embeddings depend only on two configuration parameters: The maximum allowed length of input sequences `config.max_position_embeddings` and `config.hidden_size`. Let's use a model that pushes the maximum allowed length of input sequences to half a million tokens, called `google/reformer-crime-and-punishment`, to see the effect of using axial positional embeddings. To begin with, we will compare the shape of axial position encodings with standard positional encodings and the number of parameters in the model. 
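Before comparing the two variants in code, the construction of \\( \mathbf{E'} \\) from the two small factors can be sketched with plain broadcasting. The snippet below uses the toy shapes from the example above (a \\( 7 \times 7 \\) grid and a hidden size of 4 split into 1 + 3); it only illustrates the construction and is not the actual `AxialPositionEmbeddings` implementation in 🤗Transformers.

```python
# Toy sketch of assembling axial position encodings from two small factors by broadcasting.
import torch

n_max_1, n_max_2 = 7, 7        # axial_pos_shape: n_max = 7 * 7 = 49 positions
d_h_1, d_h_2 = 1, 3            # axial_pos_embds_dim: d_h = 1 + 3 = 4

e_down = torch.randn(n_max_1, d_h_1)   # "lower" vectors, reused across rows
e_up = torch.randn(n_max_2, d_h_2)     # "upper" vectors, reused across columns

# Broadcast both factors to the full (n_max_2, n_max_1) grid, concatenate along the hidden
# dimension, then flatten the grid back into a sequence of n_max position encodings.
down_full = e_down.unsqueeze(0).expand(n_max_2, n_max_1, d_h_1)
up_full = e_up.unsqueeze(1).expand(n_max_2, n_max_1, d_h_2)
e_prime = torch.cat([down_full, up_full], dim=-1).reshape(n_max_1 * n_max_2, d_h_1 + d_h_2)

print(e_prime.shape)                                          # torch.Size([49, 4])
print(e_down.numel() + e_up.numel(), "vs", e_prime.numel())   # 28 vs 196 stored values
```

With that picture in mind, let's do the comparison on the actual model.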
``` config_no_pos_axial_embeds = ReformerConfig.from_pretrained("google/reformer-crime-and-punishment", axial_pos_embds=False) # disable axial positional embeddings config_pos_axial_embeds = ReformerConfig.from_pretrained("google/reformer-crime-and-punishment", axial_pos_embds=True, axial_pos_embds_dim=(64, 192), axial_pos_shape=(512, 1024)) # enable axial positional embeddings print("Default Positional Encodings") print(20 * '-') model = ReformerModel(config_no_pos_axial_embeds) print(f"Positional embeddings shape: {model.embeddings.position_embeddings}") print(f"Num parameters of model: {model.num_parameters()}") print(20 * '-' + '\n\n') print("Axial Positional Encodings") print(20 * '-') model = ReformerModel(config_pos_axial_embeds) print(f"Positional embeddings shape: {model.embeddings.position_embeddings}") print(f"Num parameters of model: {model.num_parameters()}") print(20 * '-' + '\n\n') ``` HBox(children=(FloatProgress(value=0.0, description='Downloading', max=1151.0, style=ProgressStyle(description… Default Positional Encodings -------------------- Positional embeddings shape: PositionEmbeddings( (embedding): Embedding(524288, 256) ) Num parameters of model: 136572416 -------------------- Axial Positional Encodings -------------------- Positional embeddings shape: AxialPositionEmbeddings( (weights): ParameterList( (0): Parameter containing: [torch.FloatTensor of size 512x1x64] (1): Parameter containing: [torch.FloatTensor of size 1x1024x192] ) ) Num parameters of model: 2584064 -------------------- Having read the theory, the shape of the axial positional encoding weights should not be a surprise to the reader. Regarding the results, it can be seen that for models being capable of processing such long input sequences, it is not practical to use default positional encodings. In the case of `google/reformer-crime-and-punishment`, standard positional encodings alone contain more than 100M parameters. Axial positional encodings reduce this number to just over 200K. Lastly, let's also compare the required memory at inference time. ``` benchmark_args = PyTorchBenchmarkArguments(sequence_lengths=[512], batch_sizes=[8], models=["Reformer-No-Axial-Pos-Embeddings", "Reformer-Axial-Pos-Embeddings"], no_speed=True, no_env_print=True) benchmark = PyTorchBenchmark(configs=[config_no_pos_axial_embeds, config_pos_axial_embeds], args=benchmark_args) result = benchmark.run() ``` 1 / 2 2 / 2 ==================== INFERENCE - MEMORY - RESULT ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Memory in MB -------------------------------------------------------------------------------- Reformer-No-Axial-Pos-Embeddin 8 512 959 Reformer-Axial-Pos-Embeddings 8 512 447 -------------------------------------------------------------------------------- It can be seen that using axial positional embeddings reduces the memory requirement to approximately half in the case of `google/reformer-crime-and-punishment`.
Block Sparse Matrices for Smaller and Faster Language Models
madlag
Sep 10, 2020
pytorch_block_sparse
research, nlp
https://huggingface.co/blog/pytorch_block_sparse
# Block Sparse Matrices for Smaller and Faster Language Models

## Saving space and time, one zero at a time

In previous [blog](https://medium.com/huggingface/is-the-future-of-neural-networks-sparse-an-introduction-1-n-d03923ecbd70) [posts](https://medium.com/huggingface/sparse-neural-networks-2-n-gpu-performance-b8bc9ce950fc) we introduced sparse matrices and what they could do to improve neural networks.

The basic assumption is that full dense layers are often overkill and can be pruned without a significant loss in precision. In some cases sparse linear layers can even *improve precision and/or generalization*.

The main issue is that currently available code that supports sparse algebra computation is severely lacking efficiency. We are also [still waiting](https://openai.com/blog/openai-pytorch/) for official PyTorch support.

That's why we ran out of patience and took some time this summer to address this "lacuna". Today, we are excited to **release the extension [pytorch_block_sparse](https://github.com/huggingface/pytorch_block_sparse)**.

By itself, or even better combined with other methods like [distillation](https://medium.com/huggingface/distilbert-8cf3380435b5) and [quantization](https://medium.com/microsoftazure/faster-and-smaller-quantized-nlp-with-hugging-face-and-onnx-runtime-ec5525473bb7), this library enables **networks** which are both **smaller and faster**, something Hugging Face considers crucial to let anybody use neural networks in production at **low cost**, and to **improve the experience** for the end user.

## Usage

The provided `BlockSparseLinear` module is a drop-in replacement for `torch.nn.Linear`, and it is trivial to use it in your models:

```python
# from torch.nn import Linear
from pytorch_block_sparse import BlockSparseLinear

...

# self.fc = nn.Linear(1024, 256)
self.fc = BlockSparseLinear(1024, 256, density=0.1)
```

The extension also provides a `BlockSparseModelPatcher` that allows you to modify an existing model "on the fly", which is shown in this [example notebook](https://github.com/huggingface/pytorch_block_sparse/blob/master/doc/notebooks/ModelSparsification.ipynb). Such a model can then be trained as usual, without any change in your model source code.

## NVIDIA CUTLASS

This extension is based on the [cutlass tilesparse](https://github.com/YulhwaKim/cutlass_tilesparse) proof of concept by [Yulhwa Kim](https://github.com/YulhwaKim).

It is using **C++ CUDA templates** for block-sparse matrix multiplication based on **[CUTLASS](https://developer.nvidia.com/blog/cutlass-linear-algebra-cuda/)**.

CUTLASS is a collection of CUDA C++ templates for implementing high-performance CUDA kernels. With CUTLASS, approaching cuBLAS performance on custom kernels is possible without resorting to assembly language code.

The latest versions include all the **Ampere Tensor Core primitives**, providing **10x or more speedups** with a limited loss of precision. Next versions of pytorch_block_sparse will make use of these primitives, as block sparsity is 100% compatible with Tensor Cores requirements.

## Performance

At the current stage of the library, the performance of sparse matrices is roughly two times slower than their cuBLAS-optimized dense counterparts, and we are confident that we can improve this in the future.

This is a huge improvement on PyTorch sparse matrices: their current implementation is an order of magnitude slower than the dense one.
But the more important point is that the performance gain of using sparse matrices grows with the sparsity, so a **75% sparse matrix** is roughly **2x** faster than the dense equivalent.

The memory savings are even more significant: for **75% sparsity**, memory consumption is reduced by **4x**, as you would expect.

## Future work

Being able to efficiently train block-sparse linear layers was just the first step. The sparsity pattern is currently fixed at initialization, and of course optimizing it during learning will yield large improvements.

So in future versions, you can expect tools to measure the "usefulness" of parameters to be able to **optimize the sparsity pattern**. The **NVIDIA Ampere 50% sparse pattern** within blocks will probably yield another significant performance gain, just as upgrading to more recent versions of CUTLASS does.

So, stay tuned for more sparsity goodness in the near future!
Transformer-based Encoder-Decoder Models
patrickvonplaten
October 10, 2020
encoder-decoder
research, nlp
https://huggingface.co/blog/encoder-decoder
# Transformers-based Encoder-Decoder Models <a target="_blank" href="https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Encoder_Decoder_Model.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> # **Transformer-based Encoder-Decoder Models** ```bash !pip install transformers==4.2.1 !pip install sentencepiece==0.1.95 ``` The *transformer-based* encoder-decoder model was introduced by Vaswani et al. in the famous [Attention is all you need paper](https://arxiv.org/abs/1706.03762) and is today the *de-facto* standard encoder-decoder architecture in natural language processing (NLP). Recently, there has been a lot of research on different *pre-training* objectives for transformer-based encoder-decoder models, *e.g.* T5, Bart, Pegasus, ProphetNet, Marge, *etc*\..., but the model architecture has stayed largely the same. The goal of the blog post is to give an **in-detail** explanation of **how** the transformer-based encoder-decoder architecture models *sequence-to-sequence* problems. We will focus on the mathematical model defined by the architecture and how the model can be used in inference. Along the way, we will give some background on sequence-to-sequence models in NLP and break down the *transformer-based* encoder-decoder architecture into its **encoder** and **decoder** parts. We provide many illustrations and establish the link between the theory of *transformer-based* encoder-decoder models and their practical usage in 🤗Transformers for inference. Note that this blog post does *not* explain how such models can be trained - this will be the topic of a future blog post. Transformer-based encoder-decoder models are the result of years of research on _representation learning_ and _model architectures_. This notebook provides a short summary of the history of neural encoder-decoder models. For more context, the reader is advised to read this awesome [blog post](https://ruder.io/a-review-of-the-recent-history-of-nlp/) by Sebastion Ruder. Additionally, a basic understanding of the _self-attention architecture_ is recommended. The following blog post by Jay Alammar serves as a good refresher on the original Transformer model [here](http://jalammar.github.io/illustrated-transformer/). At the time of writing this notebook, 🤗Transformers comprises the encoder-decoder models *T5*, *Bart*, *MarianMT*, and *Pegasus*, which are summarized in the docs under [model summaries](https://huggingface.co/transformers/model_summary.html#sequence-to-sequence-models). The notebook is divided into four parts: - **Background** - *A short history of neural encoder-decoder models is given with a focus on RNN-based models.* - **Encoder-Decoder** - *The transformer-based encoder-decoder model is presented and it is explained how the model is used for inference.* - **Encoder** - *The encoder part of the model is explained in detail.* - **Decoder** - *The decoder part of the model is explained in detail.* Each part builds upon the previous part, but can also be read on its own. ## **Background** Tasks in natural language generation (NLG), a subfield of NLP, are best expressed as sequence-to-sequence problems. Such tasks can be defined as finding a model that maps a sequence of input words to a sequence of target words. Some classic examples are *summarization* and *translation*. In the following, we assume that each word is encoded into a vector representation. 
\\(n\\) input words can then be represented as a sequence of \\(n\\) input vectors: $$\mathbf{X}_{1:n} = \{\mathbf{x}_1, \ldots, \mathbf{x}_n\}.$$ Consequently, sequence-to-sequence problems can be solved by finding a mapping \\(f\\) from an input sequence of \\(n\\) vectors \\(\mathbf{X}_{1:n}\\) to a sequence of \\(m\\) target vectors \\(\mathbf{Y}_{1:m}\\), whereas the number of target vectors \\(m\\) is unknown apriori and depends on the input sequence: $$ f: \mathbf{X}_{1:n} \to \mathbf{Y}_{1:m}. $$ [Sutskever et al. (2014)](https://arxiv.org/abs/1409.3215) noted that deep neural networks (DNN)s, \"*despite their flexibility and power can only define a mapping whose inputs and targets can be sensibly encoded with vectors of fixed dimensionality.*\" \\({}^1\\) Using a DNN model \\({}^2\\) to solve sequence-to-sequence problems would therefore mean that the number of target vectors \\(m\\) has to be known *apriori* and would have to be independent of the input \\(\mathbf{X}_{1:n}\\). This is suboptimal because, for tasks in NLG, the number of target words usually depends on the input \\(\mathbf{X}_{1:n}\\) and not just on the input length \\(n\\). *E.g.*, an article of 1000 words can be summarized to both 200 words and 100 words depending on its content. In 2014, [Cho et al.](https://arxiv.org/pdf/1406.1078.pdf) and [Sutskever et al.](https://arxiv.org/abs/1409.3215) proposed to use an encoder-decoder model purely based on recurrent neural networks (RNNs) for *sequence-to-sequence* tasks. In contrast to DNNS, RNNs are capable of modeling a mapping to a variable number of target vectors. Let\'s dive a bit deeper into the functioning of RNN-based encoder-decoder models. During inference, the encoder RNN encodes an input sequence \\(\mathbf{X}_{1:n}\\) by successively updating its *hidden state* \\({}^3\\). After having processed the last input vector \\(\mathbf{x}_n\\), the encoder\'s hidden state defines the input encoding \\(\mathbf{c}\\). Thus, the encoder defines the mapping: $$ f_{\theta_{enc}}: \mathbf{X}_{1:n} \to \mathbf{c}. $$ Then, the decoder\'s hidden state is initialized with the input encoding and during inference, the decoder RNN is used to auto-regressively generate the target sequence. Let\'s explain. Mathematically, the decoder defines the probability distribution of a target sequence \\(\mathbf{Y}_{1:m}\\) given the hidden state \\(\mathbf{c}\\): $$ p_{\theta_{dec}}(\mathbf{Y}_{1:m} |\mathbf{c}). $$ By Bayes\' rule the distribution can be decomposed into conditional distributions of single target vectors as follows: $$ p_{\theta_{dec}}(\mathbf{Y}_{1:m} |\mathbf{c}) = \prod_{i=1}^{m} p_{\theta_{\text{dec}}}(\mathbf{y}_i | \mathbf{Y}_{0: i-1}, \mathbf{c}). $$ Thus, if the architecture can model the conditional distribution of the next target vector, given all previous target vectors: $$ p_{\theta_{\text{dec}}}(\mathbf{y}_i | \mathbf{Y}_{0: i-1}, \mathbf{c}), \forall i \in \{1, \ldots, m\},$$ then it can model the distribution of any target vector sequence given the hidden state \\(\mathbf{c}\\) by simply multiplying all conditional probabilities. So how does the RNN-based decoder architecture model \\(p_{\theta_{\text{dec}}}(\mathbf{y}_i | \mathbf{Y}_{0: i-1}, \mathbf{c})\\)? 
In computational terms, the model sequentially maps the previous inner hidden state \\(\mathbf{c}_{i-1}\\) and the previous target vector \\(\mathbf{y}_{i-1}\\) to the current inner hidden state \\(\mathbf{c}_i\\) and a *logit vector* \\(\mathbf{l}_i\\) (shown in dark red below): $$ f_{\theta_{\text{dec}}}(\mathbf{y}_{i-1}, \mathbf{c}_{i-1}) \to \mathbf{l}_i, \mathbf{c}_i.$$ \\(\mathbf{c}_0\\) is thereby defined as \\(\mathbf{c}\\) being the output hidden state of the RNN-based encoder. Subsequently, the *softmax* operation is used to transform the logit vector \\(\mathbf{l}_i\\) to a conditional probablity distribution of the next target vector: $$ p(\mathbf{y}_i | \mathbf{l}_i) = \textbf{Softmax}(\mathbf{l}_i), \text{ with } \mathbf{l}_i = f_{\theta_{\text{dec}}}(\mathbf{y}_{i-1}, \mathbf{c}_{\text{prev}}). $$ For more detail on the logit vector and the resulting probability distribution, please see footnote \\({}^4\\). From the above equation, we can see that the distribution of the current target vector \\(\mathbf{y}_i\\) is directly conditioned on the previous target vector \\(\mathbf{y}_{i-1}\\) and the previous hidden state \\(\mathbf{c}_{i-1}\\). Because the previous hidden state \\(\mathbf{c}_{i-1}\\) depends on all previous target vectors \\(\mathbf{y}_0, \ldots, \mathbf{y}_{i-2}\\), it can be stated that the RNN-based decoder *implicitly* (*e.g.* *indirectly*) models the conditional distribution \\(p_{\theta_{\text{dec}}}(\mathbf{y}_i | \mathbf{Y}_{0: i-1}, \mathbf{c})\\). The space of possible target vector sequences \\(\mathbf{Y}_{1:m}\\) is prohibitively large so that at inference, one has to rely on decoding methods \\({}^5\\) that efficiently sample high probability target vector sequences from \\(p_{\theta_{dec}}(\mathbf{Y}_{1:m} |\mathbf{c})\\). Given such a decoding method, during inference, the next input vector \\(\mathbf{y}_i\\) can then be sampled from \\(p_{\theta_{\text{dec}}}(\mathbf{y}_i | \mathbf{Y}_{0: i-1}, \mathbf{c})\\) and is consequently appended to the input sequence so that the decoder RNN then models \\(p_{\theta_{\text{dec}}}(\mathbf{y}_{i+1} | \mathbf{Y}_{0: i}, \mathbf{c})\\) to sample the next input vector \\(\mathbf{y}_{i+1}\\) and so on in an *auto-regressive* fashion. An important feature of RNN-based encoder-decoder models is the definition of *special* vectors, such as the \\(\text{EOS}\\) and \\(\text{BOS}\\) vector. The \\(\text{EOS}\\) vector often represents the final input vector \\(\mathbf{x}_n\\) to \"cue\" the encoder that the input sequence has ended and also defines the end of the target sequence. As soon as the \\(\text{EOS}\\) is sampled from a logit vector, the generation is complete. The \\(\text{BOS}\\) vector represents the input vector \\(\mathbf{y}_0\\) fed to the decoder RNN at the very first decoding step. To output the first logit \\(\mathbf{l}_1\\), an input is required and since no input has been generated at the first step a special \\(\text{BOS}\\) input vector is fed to the decoder RNN. Ok - quite complicated! Let\'s illustrate and walk through an example. ![](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/encoder_decoder/rnn_seq2seq.png) The unfolded RNN encoder is colored in green and the unfolded RNN decoder is colored in red. 
The English sentence \"I want to buy a car\", represented by \\(\mathbf{x}_1 = \text{I}\\), \\(\mathbf{x}_2 = \text{want}\\), \\(\mathbf{x}_3 = \text{to}\\), \\(\mathbf{x}_4 = \text{buy}\\), \\(\mathbf{x}_5 = \text{a}\\), \\(\mathbf{x}_6 = \text{car}\\) and \\(\mathbf{x}_7 = \text{EOS}\\) is translated into German: \"Ich will ein Auto kaufen\" defined as \\(\mathbf{y}_0 = \text{BOS}\\), \\(\mathbf{y}_1 = \text{Ich}\\), \\(\mathbf{y}_2 = \text{will}\\), \\(\mathbf{y}_3 = \text{ein}\\), \\(\mathbf{y}_4 = \text{Auto}, \mathbf{y}_5 = \text{kaufen}\\) and \\(\mathbf{y}_6=\text{EOS}\\). To begin with, the input vector \\(\mathbf{x}_1 = \text{I}\\) is processed by the encoder RNN and updates its hidden state. Note that because we are only interested in the final encoder\'s hidden state \\(\mathbf{c}\\), we can disregard the RNN encoder\'s target vector. The encoder RNN then processes the rest of the input sentence \\(\text{want}\\), \\(\text{to}\\), \\(\text{buy}\\), \\(\text{a}\\), \\(\text{car}\\), \\(\text{EOS}\\) in the same fashion, updating its hidden state at each step until the vector \\(\mathbf{x}_7={EOS}\\) is reached \\({}^6\\). In the illustration above the horizontal arrow connecting the unfolded encoder RNN represents the sequential updates of the hidden state. The final hidden state of the encoder RNN, represented by \\(\mathbf{c}\\) then completely defines the *encoding* of the input sequence and is used as the initial hidden state of the decoder RNN. This can be seen as *conditioning* the decoder RNN on the encoded input. To generate the first target vector, the decoder is fed the \\(\text{BOS}\\) vector, illustrated as \\(\mathbf{y}_0\\) in the design above. The target vector of the RNN is then further mapped to the logit vector \\(\mathbf{l}_1\\) by means of the *LM Head* feed-forward layer to define the conditional distribution of the first target vector as explained above: $$ p_{\theta_{dec}}(\mathbf{y} | \text{BOS}, \mathbf{c}). $$ The word \\(\text{Ich}\\) is sampled (shown by the grey arrow, connecting \\(\mathbf{l}_1\\) and \\(\mathbf{y}_1\\)) and consequently the second target vector can be sampled: $$ \text{will} \sim p_{\theta_{dec}}(\mathbf{y} | \text{BOS}, \text{Ich}, \mathbf{c}). $$ And so on until at step \\(i=6\\), the \\(\text{EOS}\\) vector is sampled from \\(\mathbf{l}_6\\) and the decoding is finished. The resulting target sequence amounts to \\(\mathbf{Y}_{1:6} = \{\mathbf{y}_1, \ldots, \mathbf{y}_6\}\\), which is \"Ich will ein Auto kaufen\" in our example above. To sum it up, an RNN-based encoder-decoder model, represented by \\(f_{\theta_{\text{enc}}}\\) and \\( p_{\theta_{\text{dec}}} \\) defines the distribution \\(p(\mathbf{Y}_{1:m} | \mathbf{X}_{1:n})\\) by factorization: $$ p_{\theta_{\text{enc}}, \theta_{\text{dec}}}(\mathbf{Y}_{1:m} | \mathbf{X}_{1:n}) = \prod_{i=1}^{m} p_{\theta_{\text{enc}}, \theta_{\text{dec}}}(\mathbf{y}_i | \mathbf{Y}_{0: i-1}, \mathbf{X}_{1:n}) = \prod_{i=1}^{m} p_{\theta_{\text{dec}}}(\mathbf{y}_i | \mathbf{Y}_{0: i-1}, \mathbf{c}), \text{ with } \mathbf{c}=f_{\theta_{enc}}(X). $$ During inference, efficient decoding methods can auto-regressively generate the target sequence \\(\mathbf{Y}_{1:m}\\). The RNN-based encoder-decoder model took the NLG community by storm. 
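The decoding loop that was just walked through can also be written down compactly in code. The snippet below is a minimal, untrained sketch with made-up vocabulary ids and dimensions, so the generated ids are meaningless; the point is only to show the control flow of RNN-based encoder-decoder inference with greedy sampling.

```python
# Minimal sketch of RNN-based encoder-decoder inference (untrained, toy vocabulary).
import torch
import torch.nn as nn

vocab_size, hidden_size, bos_id, eos_id = 100, 32, 0, 1

embed = nn.Embedding(vocab_size, hidden_size)
encoder_rnn = nn.GRU(hidden_size, hidden_size, batch_first=True)
decoder_rnn = nn.GRU(hidden_size, hidden_size, batch_first=True)
lm_head = nn.Linear(hidden_size, vocab_size)    # maps the decoder output to logit vectors l_i

input_ids = torch.tensor([[5, 17, 23, 42, 7, 99, eos_id]])   # "I want to buy a car EOS"

# Encoder: the final hidden state c defines the encoding of the input sequence.
_, c = encoder_rnn(embed(input_ids))

# Decoder: start from BOS, condition on c, and greedily sample until EOS.
y_i, hidden, generated = torch.tensor([[bos_id]]), c, []
for _ in range(20):                                  # safety limit on the target length
    output, hidden = decoder_rnn(embed(y_i), hidden)
    logits = lm_head(output[:, -1])                  # l_i
    y_i = logits.argmax(dim=-1, keepdim=True)        # greedy choice from p(y_i | Y_{0:i-1}, c)
    generated.append(y_i.item())
    if y_i.item() == eos_id:                         # EOS ends the generation
        break

print(generated)
```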
In 2016, Google announced to fully replace its heavily feature engineered translation service by a single RNN-based encoder-decoder model (see [here](https://www.oreilly.com/radar/what-machine-learning-means-for-software-development/#:~:text=Machine%20learning%20is%20already%20making,of%20code%20in%20Google%20Translate.)). Nevertheless, RNN-based encoder-decoder models have two pitfalls. First, RNNs suffer from the vanishing gradient problem, making it very difficult to capture long-range dependencies, *cf.* [Hochreiter et al. (2001)](https://www.bioinf.jku.at/publications/older/ch7.pdf). Second, the inherent recurrent architecture of RNNs prevents efficient parallelization when encoding, *cf.* [Vaswani et al. (2017)](https://arxiv.org/abs/1706.03762). ------------------------------------------------------------------------ \\({}^1\\) The original quote from the paper is \"*Despite their flexibility and power, DNNs can only be applied to problems whose inputs and targets can be sensibly encoded with vectors of fixed dimensionality*\", which is slightly adapted here. \\({}^2\\) The same holds essentially true for convolutional neural networks (CNNs). While an input sequence of variable length can be fed into a CNN, the dimensionality of the target will always be dependent on the input dimensionality or fixed to a specific value. \\({}^3\\) At the first step, the hidden state is initialized as a zero vector and fed to the RNN together with the first input vector \\(\mathbf{x}_1\\). \\({}^4\\) A neural network can define a probability distribution over all words, *i.e.* \\(p(\mathbf{y} | \mathbf{c}, \mathbf{Y}_{0: i-1})\\) as follows. First, the network defines a mapping from the inputs \\(\mathbf{c}, \mathbf{Y}_{0: i-1}\\) to an embedded vector representation \\(\mathbf{y'}\\), which corresponds to the RNN target vector. The embedded vector representation \\(\mathbf{y'}\\) is then passed to the \"language model head\" layer, which means that it is multiplied by the *word embedding matrix*, *i.e.* \\(\mathbf{Y}^{\text{vocab}}\\), so that a score between \\(\mathbf{y'}\\) and each encoded vector \\(\mathbf{y} \in \mathbf{Y}^{\text{vocab}}\\) is computed. The resulting vector is called the logit vector \\( \mathbf{l} = \mathbf{Y}^{\text{vocab}} \mathbf{y'} \\) and can be mapped to a probability distribution over all words by applying a softmax operation: \\(p(\mathbf{y} | \mathbf{c}) = \text{Softmax}(\mathbf{Y}^{\text{vocab}} \mathbf{y'}) = \text{Softmax}(\mathbf{l})\\). \\({}^5\\) Beam-search decoding is an example of such a decoding method. Different decoding methods are out of scope for this notebook. The reader is advised to refer to this [interactive notebook](https://huggingface.co/blog/how-to-generate) on decoding methods. \\({}^6\\) [Sutskever et al. (2014)](https://arxiv.org/abs/1409.3215) reverses the order of the input so that in the above example the input vectors would correspond to \\(\mathbf{x}_1 = \text{car}\\), \\(\mathbf{x}_2 = \text{a}\\), \\(\mathbf{x}_3 = \text{buy}\\), \\(\mathbf{x}_4 = \text{to}\\), \\(\mathbf{x}_5 = \text{want}\\), \\(\mathbf{x}_6 = \text{I}\\) and \\(\mathbf{x}_7 = \text{EOS}\\). The motivation is to allow for a shorter connection between corresponding word pairs such as \\(\mathbf{x}_6 = \text{I}\\) and \\(\mathbf{y}_1 = \text{Ich}\\). The research group emphasizes that the reversal of the input sequence was a key reason for their model\'s improved performance on machine translation. ## **Encoder-Decoder** In 2017, Vaswani et al. 
introduced the **Transformer** and thereby gave birth to *transformer-based* encoder-decoder models. Analogous to RNN-based encoder-decoder models, transformer-based encoder-decoder models consist of an encoder and a decoder which are both stacks of *residual attention blocks*. The key innovation of transformer-based encoder-decoder models is that such residual attention blocks can process an input sequence \\(\mathbf{X}_{1:n}\\) of variable length \\(n\\) without exhibiting a recurrent structure. Not relying on a recurrent structure allows transformer-based encoder-decoders to be highly parallelizable, which makes the model orders of magnitude more computationally efficient than RNN-based encoder-decoder models on modern hardware. As a reminder, to solve a *sequence-to-sequence* problem, we need to find a mapping of an input sequence \\(\mathbf{X}_{1:n}\\) to an output sequence \\(\mathbf{Y}_{1:m}\\) of variable length \\(m\\). Let\'s see how transformer-based encoder-decoder models are used to find such a mapping. Similar to RNN-based encoder-decoder models, the transformer-based encoder-decoder models define a conditional distribution of target vectors \\(\mathbf{Y}_{1:n}\\) given an input sequence \\(\mathbf{X}_{1:n}\\): $$ p_{\theta_{\text{enc}}, \theta_{\text{dec}}}(\mathbf{Y}_{1:m} | \mathbf{X}_{1:n}). $$ The transformer-based encoder part encodes the input sequence \\(\mathbf{X}_{1:n}\\) to a *sequence* of *hidden states* \\(\mathbf{\overline{X}}_{1:n}\\), thus defining the mapping: $$ f_{\theta_{\text{enc}}}: \mathbf{X}_{1:n} \to \mathbf{\overline{X}}_{1:n}. $$ The transformer-based decoder part then models the conditional probability distribution of the target vector sequence \\(\mathbf{Y}_{1:n}\\) given the sequence of encoded hidden states \\(\mathbf{\overline{X}}_{1:n}\\): $$ p_{\theta_{dec}}(\mathbf{Y}_{1:n} | \mathbf{\overline{X}}_{1:n}).$$ By Bayes\' rule, this distribution can be factorized to a product of conditional probability distribution of the target vector \\(\mathbf{y}_i\\) given the encoded hidden states \\(\mathbf{\overline{X}}_{1:n}\\) and all previous target vectors \\(\mathbf{Y}_{0:i-1}\\): $$ p_{\theta_{dec}}(\mathbf{Y}_{1:n} | \mathbf{\overline{X}}_{1:n}) = \prod_{i=1}^{n} p_{\theta_{\text{dec}}}(\mathbf{y}_i | \mathbf{Y}_{0: i-1}, \mathbf{\overline{X}}_{1:n}). $$ The transformer-based decoder hereby maps the sequence of encoded hidden states \\(\mathbf{\overline{X}}_{1:n}\\) and all previous target vectors \\(\mathbf{Y}_{0:i-1}\\) to the *logit* vector \\(\mathbf{l}_i\\). The logit vector \\(\mathbf{l}_i\\) is then processed by the *softmax* operation to define the conditional distribution \\(p_{\theta_{\text{dec}}}(\mathbf{y}_i | \mathbf{Y}_{0: i-1}, \mathbf{\overline{X}}_{1:n})\\), just as it is done for RNN-based decoders. However, in contrast to RNN-based decoders, the distribution of the target vector \\(\mathbf{y}_i\\) is *explicitly* (or directly) conditioned on all previous target vectors \\(\mathbf{y}_0, \ldots, \mathbf{y}_{i-1}\\) as we will see later in more detail. The 0th target vector \\(\mathbf{y}_0\\) is hereby represented by a special \"begin-of-sentence\" \\(\text{BOS}\\) vector. Having defined the conditional distribution \\(p_{\theta_{\text{dec}}}(\mathbf{y}_i | \mathbf{Y}_{0: i-1}, \mathbf{\overline{X}}_{1:n})\\), we can now *auto-regressively* generate the output and thus define a mapping of an input sequence \\(\mathbf{X}_{1:n}\\) to an output sequence \\(\mathbf{Y}_{1:m}\\) at inference. 
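Concretely, this factorization means that the log-probability of a target sequence is just the sum of per-step log-probabilities. The following is a minimal sketch of such a scoring computation; it uses the pretrained `Helsinki-NLP/opus-mt-en-de` translation model as an example (the model is introduced properly below) and scores the model's own generated output, so no separate target tokenization is needed.

```python
import torch
from transformers import MarianMTModel, MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-en-de")

input_ids = tokenizer("I want to buy a car", return_tensors="pt").input_ids

# generate a target sequence Y_{1:m}; the returned ids start with the decoder start token
# (playing the role of y_0 = BOS) and end with EOS
output_ids = model.generate(input_ids)

# one forward pass yields the logit vectors l_1, ..., l_m for all positions in parallel
logits = model(input_ids=input_ids, decoder_input_ids=output_ids[:, :-1], return_dict=True).logits

# log p(Y_{1:m} | X_{1:n}) = sum_i log p(y_i | Y_{0:i-1}, X_{1:n})
log_probs = torch.log_softmax(logits, dim=-1)
target_ids = output_ids[:, 1:]
per_token_log_probs = log_probs.gather(-1, target_ids.unsqueeze(-1)).squeeze(-1)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
print("log p(Y | X) =", per_token_log_probs.sum().item())
```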
Let\'s visualize the complete process of *auto-regressive* generation of *transformer-based* encoder-decoder models. ![texte du lien](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/encoder_decoder/EncoderDecoder.png) The transformer-based encoder is colored in green and the transformer-based decoder is colored in red. As in the previous section, we show how the English sentence \"I want to buy a car\", represented by \\(\mathbf{x}_1 = \text{I}\\), \\(\mathbf{x}_2 = \text{want}\\), \\(\mathbf{x}_3 = \text{to}\\), \\(\mathbf{x}_4 = \text{buy}\\), \\(\mathbf{x}_5 = \text{a}\\), \\(\mathbf{x}_6 = \text{car}\\), and \\(\mathbf{x}_7 = \text{EOS}\\) is translated into German: \"Ich will ein Auto kaufen\" defined as \\(\mathbf{y}_0 = \text{BOS}\\), \\(\mathbf{y}_1 = \text{Ich}\\), \\(\mathbf{y}_2 = \text{will}\\), \\(\mathbf{y}_3 = \text{ein}\\), \\(\mathbf{y}_4 = \text{Auto}, \mathbf{y}_5 = \text{kaufen}\\), and \\(\mathbf{y}_6=\text{EOS}\\). To begin with, the encoder processes the complete input sequence \\(\mathbf{X}_{1:7}\\) = \"I want to buy a car\" (represented by the light green vectors) to a contextualized encoded sequence \\(\mathbf{\overline{X}}_{1:7}\\). *E.g.* \\(\mathbf{\overline{x}}_4\\) defines an encoding that depends not only on the input \\(\mathbf{x}_4\\) = \"buy\", but also on all other words \"I\", \"want\", \"to\", \"a\", \"car\" and \"EOS\", *i.e.* the context. Next, the input encoding \\(\mathbf{\overline{X}}_{1:7}\\) together with the BOS vector, *i.e.* \\(\mathbf{y}_0\\), is fed to the decoder. The decoder processes the inputs \\(\mathbf{\overline{X}}_{1:7}\\) and \\(\mathbf{y}_0\\) to the first logit \\(\mathbf{l}_1\\) (shown in darker red) to define the conditional distribution of the first target vector \\(\mathbf{y}_1\\): $$ p_{\theta_{enc, dec}}(\mathbf{y} | \mathbf{y}_0, \mathbf{X}_{1:7}) = p_{\theta_{enc, dec}}(\mathbf{y} | \text{BOS}, \text{I want to buy a car EOS}) = p_{\theta_{dec}}(\mathbf{y} | \text{BOS}, \mathbf{\overline{X}}_{1:7}). $$ Next, the first target vector \\(\mathbf{y}_1\\) = \\(\text{Ich}\\) is sampled from the distribution (represented by the grey arrows) and can now be fed to the decoder again. The decoder now processes both \\(\mathbf{y}_0\\) = \"BOS\" and \\(\mathbf{y}_1\\) = \"Ich\" to define the conditional distribution of the second target vector \\(\mathbf{y}_2\\): $$ p_{\theta_{dec}}(\mathbf{y} | \text{BOS Ich}, \mathbf{\overline{X}}_{1:7}). $$ We can sample again and produce the target vector \\(\mathbf{y}_2\\) = \"will\". We continue in auto-regressive fashion until at step 6 the EOS vector is sampled from the conditional distribution: $$ \text{EOS} \sim p_{\theta_{dec}}(\mathbf{y} | \text{BOS Ich will ein Auto kaufen}, \mathbf{\overline{X}}_{1:7}). $$ And so on in auto-regressive fashion. It is important to understand that the encoder is only used in the first forward pass to map \\(\mathbf{X}_{1:n}\\) to \\(\mathbf{\overline{X}}_{1:n}\\). As of the second forward pass, the decoder can directly make use of the previously calculated encoding \\(\mathbf{\overline{X}}_{1:n}\\). For clarity, let\'s illustrate the first and the second forward pass for our example above. ![texte du lien](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/encoder_decoder/EncoderDecoder_step_by_step.png) As can be seen, only in step \\(i=1\\) do we have to encode \"I want to buy a car EOS\" to \\(\mathbf{\overline{X}}_{1:7}\\). 
At step \\(i=2\\), the contextualized encodings of "I want to buy a car EOS" are simply reused by the decoder. In 🤗Transformers, this auto-regressive generation is done under-the-hood when calling the `.generate()` method. Let's use one of our translation models to see this in action.

```python
from transformers import MarianMTModel, MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-en-de")

# create ids of encoded input vectors
input_ids = tokenizer("I want to buy a car", return_tensors="pt").input_ids

# translate example
output_ids = model.generate(input_ids)[0]

# decode and print
print(tokenizer.decode(output_ids))
```

_Output:_

```
<pad> Ich will ein Auto kaufen
```

Calling `.generate()` does many things under-the-hood. First, it passes the `input_ids` to the encoder. Second, it passes a pre-defined token, which is the \\(\text{<pad>}\\) symbol in the case of `MarianMTModel`, along with the encoded `input_ids` to the decoder. Third, it applies the beam search decoding mechanism to auto-regressively sample the next output word from the *last* decoder output \\({}^1\\). For more detail on how beam search decoding works, one is advised to read [this](https://huggingface.co/blog/how-to-generate) blog post. In the Appendix, we have included a code snippet that shows how a simple generation method can be implemented "from scratch". To fully understand how *auto-regressive* generation works under-the-hood, it is highly recommended to read the Appendix. To sum it up:

- The transformer-based encoder defines a mapping from the input sequence \\(\mathbf{X}_{1:n}\\) to a contextualized encoding sequence \\(\mathbf{\overline{X}}_{1:n}\\).
- The transformer-based decoder defines the conditional distribution \\(p_{\theta_{\text{dec}}}(\mathbf{y}_i | \mathbf{Y}_{0: i-1}, \mathbf{\overline{X}}_{1:n})\\).
- Given an appropriate decoding mechanism, the output sequence \\(\mathbf{Y}_{1:m}\\) can auto-regressively be sampled from \\(p_{\theta_{\text{dec}}}(\mathbf{y}_i | \mathbf{Y}_{0: i-1}, \mathbf{\overline{X}}_{1:n}), \forall i \in \{1, \ldots, m\}\\).

Great, now that we have gotten a general overview of how *transformer-based* encoder-decoder models work, we can dive deeper into both the encoder and decoder part of the model. More specifically, we will see exactly how the encoder makes use of the self-attention layer to yield a sequence of context-dependent vector encodings and how self-attention layers allow for efficient parallelization. Then, we will explain in detail how the self-attention layer works in the decoder model and how the decoder is conditioned on the encoder's output with *cross-attention* layers to define the conditional distribution \\(p_{\theta_{\text{dec}}}(\mathbf{y}_i | \mathbf{Y}_{0: i-1}, \mathbf{\overline{X}}_{1:n})\\). Along the way, it will become obvious how transformer-based encoder-decoder models solve the long-range dependencies problem of RNN-based encoder-decoder models.

------------------------------------------------------------------------

\\({}^1\\) In the case of `"Helsinki-NLP/opus-mt-en-de"`, the decoding parameters can be accessed [here](https://s3.amazonaws.com/models.huggingface.co/bert/Helsinki-NLP/opus-mt-en-de/config.json), where we can see that the model applies beam search with `num_beams=6`.
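As a quick sanity check of the footnote above, the decoding parameters can also be read off the loaded model's configuration. Continuing the snippet above (the attribute names follow the standard 🤗Transformers configuration; the exact values depend on the downloaded checkpoint):

```python
# the beam search settings used by `.generate()` are stored in the model config
print("num_beams:", model.config.num_beams)
print("max_length:", model.config.max_length)
```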
## **Encoder** As mentioned in the previous section, the *transformer-based* encoder maps the input sequence to a contextualized encoding sequence: $$ f_{\theta_{\text{enc}}}: \mathbf{X}_{1:n} \to \mathbf{\overline{X}}_{1:n}. $$ Taking a closer look at the architecture, the transformer-based encoder is a stack of residual _encoder blocks_. Each encoder block consists of a **bi-directional** self-attention layer, followed by two feed-forward layers. For simplicity, we disregard the normalization layers in this notebook. Also, we will not further discuss the role of the two feed-forward layers, but simply see it as a final vector-to-vector mapping required in each encoder block \\({}^1\\). The bi-directional self-attention layer puts each input vector \\(\mathbf{x'}_j, \forall j \in \{1, \ldots, n\}\\) into relation with all input vectors \\(\mathbf{x'}_1, \ldots, \mathbf{x'}_n\\) and by doing so transforms the input vector \\(\mathbf{x'}_j\\) to a more \"refined\" contextual representation of itself, defined as \\(\mathbf{x''}_j\\). Thereby, the first encoder block transforms each input vector of the input sequence \\(\mathbf{X}_{1:n}\\) (shown in light green below) from a *context-independent* vector representation to a *context-dependent* vector representation, and the following encoder blocks further refine this contextual representation until the last encoder block outputs the final contextual encoding \\(\mathbf{\overline{X}}_{1:n}\\) (shown in darker green below). Let\'s visualize how the encoder processes the input sequence \"I want to buy a car EOS\" to a contextualized encoding sequence. Similar to RNN-based encoders, transformer-based encoders also add a special \"end-of-sequence\" input vector to the input sequence to hint to the model that the input vector sequence is finished \\({}^2\\). ![texte du lien](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/encoder_decoder/Encoder_block.png) Our exemplary *transformer-based* encoder is composed of three encoder blocks, whereas the second encoder block is shown in more detail in the red box on the right for the first three input vectors \\(\mathbf{x}_1, \mathbf{x}_2 and \mathbf{x}_3\\). The bi-directional self-attention mechanism is illustrated by the fully-connected graph in the lower part of the red box and the two feed-forward layers are shown in the upper part of the red box. As stated before, we will focus only on the bi-directional self-attention mechanism. As can be seen each output vector of the self-attention layer \\(\mathbf{x''}_i, \forall i \in \{1, \ldots, 7\}\\) depends *directly* on *all* input vectors \\(\mathbf{x'}_1, \ldots, \mathbf{x'}_7\\). This means, *e.g.* that the input vector representation of the word \"want\", *i.e.* \\(\mathbf{x'}_2\\), is put into direct relation with the word \"buy\", *i.e.* \\(\mathbf{x'}_4\\), but also with the word \"I\",*i.e.* \\(\mathbf{x'}_1\\). The output vector representation of \"want\", *i.e.* \\(\mathbf{x''}_2\\), thus represents a more refined contextual representation for the word \"want\". Let\'s take a deeper look at how bi-directional self-attention works. 
Each input vector \\(\mathbf{x'}_i\\) of an input sequence \\(\mathbf{X'}_{1:n}\\) of an encoder block is projected to a key vector \\(\mathbf{k}_i\\), value vector \\(\mathbf{v}_i\\) and query vector \\(\mathbf{q}_i\\) (shown in orange, blue, and purple respectively below) through three trainable weight matrices \\(\mathbf{W}_q, \mathbf{W}_v, \mathbf{W}_k\\): $$ \mathbf{q}_i = \mathbf{W}_q \mathbf{x'}_i,$$ $$ \mathbf{v}_i = \mathbf{W}_v \mathbf{x'}_i,$$ $$ \mathbf{k}_i = \mathbf{W}_k \mathbf{x'}_i, $$ $$ \forall i \in \{1, \ldots n \}.$$ Note that the **same** weight matrices are applied to each input vector \\(\mathbf{x}_i, \forall i \in \{1, \ldots, n\}\\). After projecting each input vector \\(\mathbf{x}_i\\) to a query, key, and value vector, each query vector \\(\mathbf{q}_j, \forall j \in \{1, \ldots, n\}\\) is compared to all key vectors \\(\mathbf{k}_1, \ldots, \mathbf{k}_n\\). The more similar one of the key vectors \\(\mathbf{k}_1, \ldots, \mathbf{k}_n\\) is to the query vector \\(\mathbf{q}_j\\), the more important the corresponding value vector is for the output vector \\(\mathbf{x''}_j\\). More specifically, an output vector \\(\mathbf{x''}_j\\) is defined as the weighted sum of all value vectors \\(\mathbf{v}_1, \ldots, \mathbf{v}_n\\) plus the input vector \\(\mathbf{x'}_j\\). Thereby, the weights are proportional to the cosine similarity between \\(\mathbf{q}_j\\) and the respective key vectors \\(\mathbf{k}_1, \ldots, \mathbf{k}_n\\), which is mathematically expressed by \\(\textbf{Softmax}(\mathbf{K}_{1:n}^\intercal \mathbf{q}_j)\\) as illustrated in the equation below. For a complete description of the self-attention layer, the reader is advised to take a look at [this](http://jalammar.github.io/illustrated-transformer/) blog post or the original [paper](https://arxiv.org/abs/1706.03762). Alright, this sounds quite complicated. Let's illustrate the bi-directional self-attention layer for one of the query vectors of our example above. For simplicity, it is assumed that our exemplary *transformer-based* encoder uses only a single attention head `config.num_heads = 1` and that no normalization is applied. ![](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/encoder_decoder/encoder_detail.png) On the left, the previously illustrated second encoder block is shown again and on the right, a detailed visualization of the bi-directional self-attention mechanism is given for the second input vector \\(\mathbf{x'}_2\\) that corresponds to the input word "want". First, all input vectors \\(\mathbf{x'}_1, \ldots, \mathbf{x'}_7\\) are projected to their respective query vectors \\(\mathbf{q}_1, \ldots, \mathbf{q}_7\\) (only the first three query vectors are shown in purple above), value vectors \\(\mathbf{v}_1, \ldots, \mathbf{v}_7\\) (shown in blue), and key vectors \\(\mathbf{k}_1, \ldots, \mathbf{k}_7\\) (shown in orange). The query vector \\(\mathbf{q}_2\\) is then multiplied by the transpose of all key vectors, *i.e.* \\(\mathbf{K}_{1:7}^{\intercal}\\), followed by the softmax operation to yield the _self-attention weights_. The self-attention weights are finally multiplied by the respective value vectors and the input vector \\(\mathbf{x'}_2\\) is added to output the "refined" representation of the word "want", *i.e.* \\(\mathbf{x''}_2\\) (shown in dark green on the right). The whole equation is illustrated in the upper part of the box on the right.
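The following toy sketch implements exactly this computation for a single attention head, with random weights and inputs and without any scaling or normalization (all simplifying assumptions made only for illustration):

```python
import torch

torch.manual_seed(0)
n, d = 7, 4                      # 7 input vectors ("I want to buy a car EOS"), toy hidden size 4
X_prime = torch.randn(d, n)      # columns are the input vectors x'_1, ..., x'_7
W_q, W_k, W_v = torch.randn(d, d), torch.randn(d, d), torch.randn(d, d)

Q, K, V = W_q @ X_prime, W_k @ X_prime, W_v @ X_prime   # q_i = W_q x'_i, k_i = W_k x'_i, v_i = W_v x'_i

# refined representation of "want": x''_2 = V softmax(K^T q_2) + x'_2
q_2 = Q[:, 1]                                   # second query vector (0-indexed)
attn_weights = torch.softmax(K.T @ q_2, dim=0)  # compare q_2 to all key vectors k_1, ..., k_7
x_refined_2 = V @ attn_weights + X_prime[:, 1]

# the same computation for all positions at once: a couple of matrix multiplications and a softmax
X_refined = V @ torch.softmax(K.T @ Q, dim=0) + X_prime
print(torch.allclose(X_refined[:, 1], x_refined_2))  # True: position 2 matches the per-query computation
```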
The multiplication of \\(\mathbf{K}_{1:7}^{\intercal}\\) and \\(\mathbf{q}_2\\) thereby makes it possible to compare the vector representation of "want" to all other input vector representations "I", "to", "buy", "a", "car", "EOS" so that the self-attention weights mirror the importance of each of the other input vector representations \\(\mathbf{x'}_j \text{, with } j \ne 2\\) for the refined representation \\(\mathbf{x''}_2\\) of the word "want". To further understand the implications of the bi-directional self-attention layer, let's assume the following sentence is processed: "*The house is beautiful and well located in the middle of the city where it is easily accessible by public transport*". The word "it" refers to "house", which is 12 "positions away". In transformer-based encoders, the bi-directional self-attention layer performs a single mathematical operation to put the input vector of "house" into relation with the input vector of "it" (compare to the first illustration of this section). In contrast, in an RNN-based encoder, a word that is 12 "positions away" would require at least 12 mathematical operations, meaning that an RNN-based encoder requires a linear number of mathematical operations. This makes it much harder for an RNN-based encoder to model long-range contextual representations. Also, it becomes clear that a transformer-based encoder is much less prone to lose important information than an RNN-based encoder-decoder model because the sequence length of the encoding is kept the same, *i.e.* \\(\textbf{len}(\mathbf{X}_{1:n}) = \textbf{len}(\mathbf{\overline{X}}_{1:n}) = n\\), while an RNN compresses the length from \\(\textbf{len}(\mathbf{X}_{1:n}) = n\\) to just \\(\textbf{len}(\mathbf{c}) = 1\\), which makes it very difficult for RNNs to effectively encode long-range dependencies between input words. In addition to making long-range dependencies more easily learnable, we can see that the Transformer architecture is able to process text in parallel. Mathematically, this can easily be shown by writing the self-attention formula as a product of query, key, and value matrices: $$\mathbf{X''}_{1:n} = \mathbf{V}_{1:n} \text{Softmax}(\mathbf{Q}_{1:n}^\intercal \mathbf{K}_{1:n}) + \mathbf{X'}_{1:n}. $$ The output \\(\mathbf{X''}_{1:n} = \mathbf{x''}_1, \ldots, \mathbf{x''}_n\\) is computed via a series of matrix multiplications and a softmax operation, which can be parallelized effectively. Note that in an RNN-based encoder model, the computation of the hidden state \\(\mathbf{c}\\) has to be done sequentially: compute the hidden state of the first input vector \\(\mathbf{x}_1\\), then compute the hidden state of the second input vector that depends on the hidden state of the first hidden vector, etc. The sequential nature of RNNs prevents effective parallelization and makes them much more inefficient compared to transformer-based encoder models on modern GPU hardware. Great, now we should have a better understanding of a) how transformer-based encoder models effectively model long-range contextual representations and b) how they efficiently process long sequences of input vectors. Now, let's code up a short example of the encoder part of our `MarianMT` encoder-decoder models to verify that the explained theory holds in practice.

------------------------------------------------------------------------

\\({}^1\\) An in-detail explanation of the role the feed-forward layers play in transformer-based models is out-of-scope for this notebook.
It is argued in [Yun et al. (2019)](https://arxiv.org/pdf/1912.10077.pdf) that feed-forward layers are crucial to map each contextual vector \\(\mathbf{x'}_i\\) individually to the desired output space, which the _self-attention_ layer does not manage to do on its own. It should be noted here that each output token \\(\mathbf{x'}\\) is processed by the same feed-forward layer. For more detail, the reader is advised to read the paper. \\({}^2\\) However, the EOS input vector does not *have* to be appended to the input sequence; appending it has simply been shown to improve performance in many cases. In contrast, the _0th_ \\(\text{BOS}\\) target vector of the transformer-based decoder is required as a starting input vector to predict a first target vector.

```python
from transformers import MarianMTModel, MarianTokenizer
import torch

tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-en-de")

embeddings = model.get_input_embeddings()

# create ids of encoded input vectors
input_ids = tokenizer("I want to buy a car", return_tensors="pt").input_ids

# pass input_ids to encoder
encoder_hidden_states = model.base_model.encoder(input_ids, return_dict=True).last_hidden_state

# change the input slightly and pass to encoder
input_ids_perturbed = tokenizer("I want to buy a house", return_tensors="pt").input_ids
encoder_hidden_states_perturbed = model.base_model.encoder(input_ids_perturbed, return_dict=True).last_hidden_state

# compare shape and encoding of first vector
print(f"Length of input embeddings {embeddings(input_ids).shape[1]}. Length of encoder_hidden_states {encoder_hidden_states.shape[1]}")

# compare values of word embedding of "I" for input_ids and perturbed input_ids
print("Is encoding for `I` equal to its perturbed version?: ", torch.allclose(encoder_hidden_states[0, 0], encoder_hidden_states_perturbed[0, 0], atol=1e-3))
```

_Outputs:_

```
Length of input embeddings 7. Length of encoder_hidden_states 7
Is encoding for `I` equal to its perturbed version?: False
```

We compare the length of the input word embeddings, *i.e.* `embeddings(input_ids)` corresponding to \\(\mathbf{X}_{1:n}\\), with the length of the `encoder_hidden_states`, corresponding to \\(\mathbf{\overline{X}}_{1:n}\\). Also, we have forwarded the word sequence "I want to buy a car" and a slightly perturbed version "I want to buy a house" through the encoder to check if the first output encoding, corresponding to "I", differs when only the last word is changed in the input sequence. As expected, the output length of the input word embeddings and encoder output encodings, *i.e.* \\(\textbf{len}(\mathbf{X}_{1:n})\\) and \\(\textbf{len}(\mathbf{\overline{X}}_{1:n})\\), is equal. Second, it can be noted that the values of the encoded output vector of \\(\mathbf{\overline{x}}_1 = \text{"I"}\\) are different when the last word is changed from "car" to "house". This however should not come as a surprise if one has understood bi-directional self-attention. On a side-note, _autoencoding_ models, such as BERT, have the exact same architecture as _transformer-based_ encoder models. _Autoencoding_ models leverage this architecture for massive self-supervised pre-training on open-domain text data so that they can map any word sequence to a deep bi-directional representation. In [Devlin et al.
(2018)](https://arxiv.org/abs/1810.04805), the authors show that a pre-trained BERT model with a single task-specific classification layer on top can achieve SOTA results on eleven NLP tasks. All *autoencoding* models of 🤗Transformers can be found [here](https://huggingface.co/transformers/model_summary.html#autoencoding-models).

## **Decoder**

As mentioned in the *Encoder-Decoder* section, the *transformer-based* decoder defines the conditional probability distribution of a target sequence given the contextualized encoding sequence: $$ p_{\theta_{dec}}(\mathbf{Y}_{1: m} | \mathbf{\overline{X}}_{1:n}), $$ which by Bayes' rule can be decomposed into a product of conditional distributions of the next target vector given the contextualized encoding sequence and all previous target vectors: $$ p_{\theta_{dec}}(\mathbf{Y}_{1:m} | \mathbf{\overline{X}}_{1:n}) = \prod_{i=1}^{m} p_{\theta_{dec}}(\mathbf{y}_i | \mathbf{Y}_{0: i-1}, \mathbf{\overline{X}}_{1:n}). $$ Let's first understand how the transformer-based decoder defines a probability distribution. The transformer-based decoder is a stack of *decoder blocks* followed by a dense layer, the "LM head". The stack of decoder blocks maps the contextualized encoding sequence \\(\mathbf{\overline{X}}_{1:n}\\) and a target vector sequence prepended by the \\(\text{BOS}\\) vector and cut to the last target vector, *i.e.* \\(\mathbf{Y}_{0:i-1}\\), to an encoded sequence of target vectors \\(\mathbf{\overline{Y}}_{0: i-1}\\). Then, the "LM head" maps the encoded sequence of target vectors \\(\mathbf{\overline{Y}}_{0: i-1}\\) to a sequence of logit vectors \\(\mathbf{L}_{1:n} = \mathbf{l}_1, \ldots, \mathbf{l}_n\\), whereas the dimensionality of each logit vector \\(\mathbf{l}_i\\) corresponds to the size of the vocabulary. This way, for each \\(i \in \{1, \ldots, n\}\\) a probability distribution over the whole vocabulary can be obtained by applying a softmax operation on \\(\mathbf{l}_i\\). These distributions define the conditional distribution: $$p_{\theta_{dec}}(\mathbf{y}_i | \mathbf{Y}_{0: i-1}, \mathbf{\overline{X}}_{1:n}), \forall i \in \{1, \ldots, n\},$$ respectively. The "LM head" is often tied to the transpose of the word embedding matrix, *i.e.* \\(\mathbf{W}_{\text{emb}}^{\intercal} = \left[\mathbf{y}^1, \ldots, \mathbf{y}^{\text{vocab}}\right]^{\intercal}\\) \\({}^1\\). Intuitively this means that for all \\(i \in \{0, \ldots, n - 1\}\\) the "LM Head" layer compares the encoded output vector \\(\mathbf{\overline{y}}_i\\) to all word embeddings in the vocabulary \\(\mathbf{y}^1, \ldots, \mathbf{y}^{\text{vocab}}\\) so that the logit vector \\(\mathbf{l}_{i+1}\\) represents the similarity scores between the encoded output vector and each word embedding. The softmax operation simply transforms the similarity scores to a probability distribution. For each \\(i \in \{1, \ldots, n\}\\), the following equations hold: $$ p_{\theta_{dec}}(\mathbf{y} | \mathbf{\overline{X}}_{1:n}, \mathbf{Y}_{0:i-1})$$ $$ = \text{Softmax}(f_{\theta_{\text{dec}}}(\mathbf{\overline{X}}_{1:n}, \mathbf{Y}_{0:i-1}))$$ $$ = \text{Softmax}(\mathbf{W}_{\text{emb}}^{\intercal} \mathbf{\overline{y}}_{i-1})$$ $$ = \text{Softmax}(\mathbf{l}_i).
$$ Putting it all together, in order to model the conditional distribution of a target vector sequence \\(\mathbf{Y}_{1: m}\\), the target vectors \\(\mathbf{Y}_{1:m-1}\\) prepended by the special \\(\text{BOS}\\) vector, *i.e.* \\(\mathbf{y}_0\\), are first mapped together with the contextualized encoding sequence \\(\mathbf{\overline{X}}_{1:n}\\) to the logit vector sequence \\(\mathbf{L}_{1:m}\\). Consequently, each logit target vector \\(\mathbf{l}_i\\) is transformed into a conditional probability distribution of the target vector \\(\mathbf{y}_i\\) using the softmax operation. Finally, the conditional probabilities of all target vectors \\(\mathbf{y}_1, \ldots, \mathbf{y}_m\\) are multiplied together to yield the conditional probability of the complete target vector sequence: $$ p_{\theta_{dec}}(\mathbf{Y}_{1:m} | \mathbf{\overline{X}}_{1:n}) = \prod_{i=1}^{m} p_{\theta_{dec}}(\mathbf{y}_i | \mathbf{Y}_{0: i-1}, \mathbf{\overline{X}}_{1:n}).$$ In contrast to transformer-based encoders, in transformer-based decoders, the encoded output vector \\(\mathbf{\overline{y}}_i\\) should be a good representation of the *next* target vector \\(\mathbf{y}_{i+1}\\) and not of the input vector itself. Additionally, the encoded output vector \\(\mathbf{\overline{y}}_i\\) should be conditioned on the whole contextualized encoding sequence \\(\mathbf{\overline{X}}_{1:n}\\). To meet these requirements each decoder block consists of a **uni-directional** self-attention layer, followed by a **cross-attention** layer and two feed-forward layers \\({}^2\\). The uni-directional self-attention layer puts each of its input vectors \\(\mathbf{y'}_j\\) only into relation with all previous input vectors \\(\mathbf{y'}_i, \text{ with } i \le j\\) for all \\(j \in \{1, \ldots, n\}\\) to model the probability distribution of the next target vectors. The cross-attention layer puts each of its input vectors \\(\mathbf{y''}_j\\) into relation with all contextualized encoding vectors \\(\mathbf{\overline{X}}_{1:n}\\) to condition the probability distribution of the next target vectors on the input of the encoder as well. Alright, let's visualize the *transformer-based* decoder for our English to German translation example. ![](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/encoder_decoder/encoder_decoder_detail.png) We can see that the decoder maps the input \\(\mathbf{Y}_{0:5}\\) "BOS", "Ich", "will", "ein", "Auto", "kaufen" (shown in light red) together with the contextualized sequence of "I", "want", "to", "buy", "a", "car", "EOS", *i.e.* \\(\mathbf{\overline{X}}_{1:7}\\) (shown in dark green) to the logit vectors \\(\mathbf{L}_{1:6}\\) (shown in dark red). Applying a softmax operation on each \\(\mathbf{l}_1, \mathbf{l}_2, \ldots, \mathbf{l}_6\\) can thus define the conditional probability distributions: $$ p_{\theta_{dec}}(\mathbf{y} | \text{BOS}, \mathbf{\overline{X}}_{1:7}), $$ $$ p_{\theta_{dec}}(\mathbf{y} | \text{BOS Ich}, \mathbf{\overline{X}}_{1:7}), $$ $$ \ldots, $$ $$ p_{\theta_{dec}}(\mathbf{y} | \text{BOS Ich will ein Auto kaufen}, \mathbf{\overline{X}}_{1:7}). $$ The overall conditional probability of: $$ p_{\theta_{dec}}(\text{Ich will ein Auto kaufen EOS} | \mathbf{\overline{X}}_{1:n})$$ can therefore be computed as the following product: $$ p_{\theta_{dec}}(\text{Ich} | \text{BOS}, \mathbf{\overline{X}}_{1:7}) \times \ldots \times p_{\theta_{dec}}(\text{EOS} | \text{BOS Ich will ein Auto kaufen}, \mathbf{\overline{X}}_{1:7}).
$$ The red box on the right shows a decoder block for the first three target vectors \\(\mathbf{y}_0, \mathbf{y}_1, \mathbf{y}_2\\). In the lower part, the uni-directional self-attention mechanism is illustrated and in the middle, the cross-attention mechanism is illustrated. Let\'s first focus on uni-directional self-attention. As in bi-directional self-attention, in uni-directional self-attention, the query vectors \\(\mathbf{q}_0, \ldots, \mathbf{q}_{m-1}\\) (shown in purple below), key vectors \\(\mathbf{k}_0, \ldots, \mathbf{k}_{m-1}\\) (shown in orange below), and value vectors \\(\mathbf{v}_0, \ldots, \mathbf{v}_{m-1}\\) (shown in blue below) are projected from their respective input vectors \\(\mathbf{y'}_0, \ldots, \mathbf{y'}_{m-1}\\) (shown in light red below). However, in uni-directional self-attention, each query vector \\(\mathbf{q}_i\\) is compared *only* to its respective key vector and all previous ones, namely \\(\mathbf{k}_0, \ldots, \mathbf{k}_i\\) to yield the respective *attention weights*. This prevents an output vector \\(\mathbf{y''}_j\\) (shown in dark red below) to include any information about the following input vector \\(\mathbf{y}_i, \text{ with } i > j\\) for all \\(j \in \{0, \ldots, m - 1 \}\\). As is the case in bi-directional self-attention, the attention weights are then multiplied by their respective value vectors and summed together. We can summarize uni-directional self-attention as follows: $$\mathbf{y''}_i = \mathbf{V}_{0: i} \textbf{Softmax}(\mathbf{K}_{0: i}^\intercal \mathbf{q}_i) + \mathbf{y'}_i. $$ Note that the index range of the key and value vectors is \\(0:i\\) instead of \\(0: m-1\\) which would be the range of the key vectors in bi-directional self-attention. Let\'s illustrate uni-directional self-attention for the input vector \\(\mathbf{y'}_1\\) for our example above. ![](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/encoder_decoder/causal_attn.png) As can be seen \\(\mathbf{y''}_1\\) only depends on \\(\mathbf{y'}_0\\) and \\(\mathbf{y'}_1\\). Therefore, we put the vector representation of the word \"Ich\", *i.e.* \\(\mathbf{y'}_1\\) only into relation with itself and the \"BOS\" target vector, *i.e.* \\(\mathbf{y'}_0\\), but **not** with the vector representation of the word \"will\", *i.e.* \\(\mathbf{y'}_2\\). So why is it important that we use uni-directional self-attention in the decoder instead of bi-directional self-attention? As stated above, a transformer-based decoder defines a mapping from a sequence of input vector \\(\mathbf{Y}_{0: m-1}\\) to the logits corresponding to the **next** decoder input vectors, namely \\(\mathbf{L}_{1:m}\\). In our example, this means, *e.g.* that the input vector \\(\mathbf{y}_1\\) = \"Ich\" is mapped to the logit vector \\(\mathbf{l}_2\\), which is then used to predict the input vector \\(\mathbf{y}_2\\). Thus, if \\(\mathbf{y'}_1\\) would have access to the following input vectors \\(\mathbf{Y'}_{2:5}\\), the decoder would simply copy the vector representation of \"will\", *i.e.* \\(\mathbf{y'}_2\\), to be its output \\(\mathbf{y''}_1\\). This would be forwarded to the last layer so that the encoded output vector \\(\mathbf{\overline{y}}_1\\) would essentially just correspond to the vector representation \\(\mathbf{y}_2\\). 
This is obviously disadvantageous as the transformer-based decoder would never learn to predict the next word given all previous words, but just copy the target vector \\(\mathbf{y}_i\\) through the network to \\(\mathbf{\overline{y}}_{i-1}\\) for all \\(i \in \{1, \ldots, m \}\\). In order to define a conditional distribution of the next target vector, the distribution cannot be conditioned on the next target vector itself. It does not make much sense to predict \\(\mathbf{y}_i\\) from \\(p(\mathbf{y} | \mathbf{Y}_{0:i}, \mathbf{\overline{X}})\\) because the distribution is conditioned on the target vector it is supposed to model. The uni-directional self-attention architecture, therefore, allows us to define a *causal* probability distribution, which is necessary to effectively model a conditional distribution of the next target vector. Great! Now we can move to the layer that connects the encoder and decoder - the *cross-attention* mechanism! The cross-attention layer takes two vector sequences as inputs: the outputs of the uni-directional self-attention layer, *i.e.* \\(\mathbf{Y''}_{0: m-1}\\) and the contextualized encoding vectors \\(\mathbf{\overline{X}}_{1:n}\\). As in the self-attention layer, the query vectors \\(\mathbf{q}_0, \ldots, \mathbf{q}_{m-1}\\) are projections of the output vectors of the previous layer, *i.e.* \\(\mathbf{Y''}_{0: m-1}\\). However, the key and value vectors \\(\mathbf{k}_0, \ldots, \mathbf{k}_{m-1}\\) and \\(\mathbf{v}_0, \ldots, \mathbf{v}_{m-1}\\) are projections of the contextualized encoding vectors \\(\mathbf{\overline{X}}_{1:n}\\). Having defined key, value, and query vectors, a query vector \\(\mathbf{q}_i\\) is then compared to *all* key vectors and the corresponding score is used to weight the respective value vectors, just as is the case for *bi-directional* self-attention to give the output vector \\(\mathbf{y'''}_i\\) for all \\(i \in {0, \ldots, m-1}\\). Cross-attention can be summarized as follows: $$ \mathbf{y'''}_i = \mathbf{V}_{1:n} \textbf{Softmax}(\mathbf{K}_{1: n}^\intercal \mathbf{q}_i) + \mathbf{y''}_i. $$ Note that the index range of the key and value vectors is \\(1:n\\) corresponding to the number of contextualized encoding vectors. Let\'s visualize the cross-attention mechanism for the input vector \\(\mathbf{y''}_1\\) for our example above. ![](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/encoder_decoder/cross_attention.png) We can see that the query vector \\(\mathbf{q}_1\\) (shown in purple) is derived from \\(\mathbf{y''}_1\\)(shown in red) and therefore relies on a vector representation of the word \"Ich\". The query vector \\(\mathbf{q}_1\\) is then compared to the key vectors \\(\mathbf{k}_1, \ldots, \mathbf{k}_7\\) (shown in yellow) corresponding to the contextual encoding representation of all encoder input vectors \\(\mathbf{X}_{1:n}\\) = \"I want to buy a car EOS\". This puts the vector representation of \"Ich\" into direct relation with all encoder input vectors. Finally, the attention weights are multiplied by the value vectors \\(\mathbf{v}_1, \ldots, \mathbf{v}_7\\) (shown in turquoise) to yield in addition to the input vector \\(\mathbf{y''}_1\\) the output vector \\(\mathbf{y'''}_1\\) (shown in dark red). So intuitively, what happens here exactly? Each output vector \\(\mathbf{y'''}_i\\) is a weighted sum of all value projections of the encoder inputs \\(\mathbf{v}_{1}, \ldots, \mathbf{v}_7\\) plus the input vector itself \\(\mathbf{y''}_i\\) (*c.f.* illustrated formula above). 
The key mechanism to understand is the following: the more similar the query projection of the *input decoder vector* \\(\mathbf{q}_i\\) is to the key projection of the *encoder input vector* \\(\mathbf{k}_j\\), the more important the value projection of the encoder input vector \\(\mathbf{v}_j\\) becomes. In loose terms, this means that the more "related" a decoder input representation is to an encoder input representation, the more the encoder input representation influences the decoder output representation. Cool! Now we can see how this architecture nicely conditions each output vector \\(\mathbf{y'''}_i\\) on the interaction between the encoder input vectors \\(\mathbf{\overline{X}}_{1:n}\\) and the input vector \\(\mathbf{y''}_i\\). Another important observation at this point is that the architecture is completely independent of the number \\(n\\) of contextualized encoding vectors \\(\mathbf{\overline{X}}_{1:n}\\) on which the output vector \\(\mathbf{y'''}_i\\) is conditioned. All projection matrices \\(\mathbf{W}^{\text{cross}}_{k}\\) and \\(\mathbf{W}^{\text{cross}}_{v}\\) to derive the key vectors \\(\mathbf{k}_1, \ldots, \mathbf{k}_n\\) and the value vectors \\(\mathbf{v}_1, \ldots, \mathbf{v}_n\\) respectively are shared across all positions \\(1, \ldots, n\\) and all value vectors \\( \mathbf{v}_1, \ldots, \mathbf{v}_n \\) are summed together to a single weighted averaged vector. Now it also becomes obvious why the transformer-based decoder does not suffer from the long-range dependency problem that the RNN-based decoder suffers from. Because each decoder logit vector is *directly* dependent on every single encoded output vector, the number of mathematical operations to compare the first encoded output vector and the last decoder logit vector amounts essentially to just one. To conclude, the uni-directional self-attention layer is responsible for conditioning each output vector on all previous decoder input vectors and the current input vector, and the cross-attention layer is responsible for further conditioning each output vector on all encoded input vectors. To verify our theoretical understanding, let's continue our code example from the encoder section above.

------------------------------------------------------------------------

\\({}^1\\) The word embedding matrix \\(\mathbf{W}_{\text{emb}}\\) gives each input word a unique *context-independent* vector representation. This matrix is often fixed as the "LM Head" layer. However, the "LM Head" layer can very well consist of a completely independent "encoded vector-to-logit" weight mapping. \\({}^2\\) Again, an in-detail explanation of the role the feed-forward layers play in transformer-based models is out-of-scope for this notebook. It is argued in [Yun et al. (2019)](https://arxiv.org/pdf/1912.10077.pdf) that feed-forward layers are crucial to map each contextual vector \\(\mathbf{x'}_i\\) individually to the desired output space, which the *self-attention* layer does not manage to do on its own. It should be noted here that each output token \\(\mathbf{x'}\\) is processed by the same feed-forward layer. For more detail, the reader is advised to read the paper.
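Before continuing with the library code below, here is a toy sketch of the two attention variants of the decoder block — uni-directional self-attention over the decoder inputs and cross-attention against the encoder outputs — again with a single head, random weights and inputs, and no scaling or normalization (all simplifying assumptions made only for illustration):

```python
import torch

torch.manual_seed(0)
m, n, d = 6, 7, 4                 # 6 decoder inputs y'_0, ..., y'_5, 7 encoder outputs, toy hidden size 4
Y_prime = torch.randn(d, m)       # columns are the decoder inputs y'_0, ..., y'_{m-1}
X_bar = torch.randn(d, n)         # columns are the contextualized encodings x̄_1, ..., x̄_n
W_q, W_k, W_v = torch.randn(d, d), torch.randn(d, d), torch.randn(d, d)
W_q_x, W_k_x, W_v_x = torch.randn(d, d), torch.randn(d, d), torch.randn(d, d)  # cross-attention projections

# uni-directional self-attention: y''_i = V_{0:i} softmax(K_{0:i}^T q_i) + y'_i
Q, K, V = W_q @ Y_prime, W_k @ Y_prime, W_v @ Y_prime
Y_refined = torch.zeros_like(Y_prime)
for i in range(m):
    attn = torch.softmax(K[:, : i + 1].T @ Q[:, i], dim=0)   # only the keys k_0, ..., k_i are visible
    Y_refined[:, i] = V[:, : i + 1] @ attn + Y_prime[:, i]

# cross-attention: y'''_i = V_{1:n} softmax(K_{1:n}^T q_i) + y''_i, with keys and values projected from X̄
Q_c, K_c, V_c = W_q_x @ Y_refined, W_k_x @ X_bar, W_v_x @ X_bar
Y_out = V_c @ torch.softmax(K_c.T @ Q_c, dim=0) + Y_refined

print(Y_out.shape)  # torch.Size([4, 6]): one output vector y'''_i per decoder position
```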
```python
from transformers import MarianMTModel, MarianTokenizer
import torch

tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-en-de")

embeddings = model.get_input_embeddings()

# create token ids for encoder input
input_ids = tokenizer("I want to buy a car", return_tensors="pt").input_ids

# pass input token ids to encoder
encoder_output_vectors = model.base_model.encoder(input_ids, return_dict=True).last_hidden_state

# create token ids for decoder input
decoder_input_ids = tokenizer("<pad> Ich will ein", return_tensors="pt", add_special_tokens=False).input_ids

# pass decoder input ids and encoded input vectors to decoder
decoder_output_vectors = model.base_model.decoder(decoder_input_ids, encoder_hidden_states=encoder_output_vectors).last_hidden_state

# derive embeddings by multiplying decoder outputs with embedding weights
lm_logits = torch.nn.functional.linear(decoder_output_vectors, embeddings.weight, bias=model.final_logits_bias)

# change the decoder input slightly
decoder_input_ids_perturbed = tokenizer("<pad> Ich will das", return_tensors="pt", add_special_tokens=False).input_ids
decoder_output_vectors_perturbed = model.base_model.decoder(decoder_input_ids_perturbed, encoder_hidden_states=encoder_output_vectors).last_hidden_state
lm_logits_perturbed = torch.nn.functional.linear(decoder_output_vectors_perturbed, embeddings.weight, bias=model.final_logits_bias)

# compare shape and encoding of first vector
print(f"Shape of decoder input vectors {embeddings(decoder_input_ids).shape}. Shape of decoder logits {lm_logits.shape}")

# compare values of word embedding of "I" for input_ids and perturbed input_ids
print("Is encoding for `Ich` equal to its perturbed version?: ", torch.allclose(lm_logits[0, 0], lm_logits_perturbed[0, 0], atol=1e-3))
```

_Output:_

```
Shape of decoder input vectors torch.Size([1, 5, 512]). Shape of decoder logits torch.Size([1, 5, 58101])
Is encoding for `Ich` equal to its perturbed version?: True
```

We compare the output shape of the decoder input word embeddings, *i.e.* `embeddings(decoder_input_ids)` (corresponds to \\(\mathbf{Y}_{0: 4}\\); here `<pad>` corresponds to BOS and "Ich will ein" is tokenized to 4 tokens), with the dimensionality of the `lm_logits` (corresponds to \\(\mathbf{L}_{1:5}\\)). Also, we have passed the word sequence "`<pad>` Ich will ein" and a slightly perturbed version "`<pad>` Ich will das" together with the `encoder_output_vectors` through the decoder to check if the first `lm_logit`, corresponding to "Ich", differs when only the last word is changed in the input sequence ("ein" -> "das"). As expected, the output shapes of the decoder input word embeddings and lm\_logits, *i.e.* the dimensionality of \\(\mathbf{Y}_{0: 4}\\) and \\(\mathbf{L}_{1:5}\\), are different in the last dimension. While the sequence length is the same (=5), the dimensionality of a decoder input word embedding corresponds to `model.config.hidden_size`, whereas the dimensionality of a `lm_logit` corresponds to the vocabulary size `model.config.vocab_size`, as explained above. Second, it can be noted that the values of the first logit vector \\(\mathbf{l}_1\\), corresponding to "Ich", are the same when the last word is changed from "ein" to "das". This however should not come as a surprise if one has understood uni-directional self-attention.
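Conversely, the logit at the *last* position should change when the last decoder input token is changed, since uni-directional self-attention still lets every position attend to itself and to all previous positions. Continuing the snippet above, the following comparison would therefore be expected to print `False` (this check is an addition here and not part of the original output above):

```python
# the last position attends to the changed token ("ein" vs. "das"), so its logit should differ
print("Is the last logit equal to its perturbed version?: ", torch.allclose(lm_logits[0, -1], lm_logits_perturbed[0, -1], atol=1e-3))
```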
On a final side-note, _auto-regressive_ models, such as GPT2, have the same architecture as _transformer-based_ decoder models **if** one removes the cross-attention layer because stand-alone auto-regressive models are not conditioned on any encoder outputs. So auto-regressive models are essentially the same as *auto-encoding* models but replace bi-directional attention with uni-directional attention. These models can also be pre-trained on massive open-domain text data to show impressive performance on natural language generation (NLG) tasks. In [Radford et al. (2019)](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf), the authors show that a pre-trained GPT2 model can achieve SOTA or close to SOTA results on a variety of NLG tasks without much fine-tuning. All *auto-regressive* models of 🤗Transformers can be found [here](https://huggingface.co/transformers/model_summary.html#autoregressive-models). Alright, that's it! Now, you should have gotten a good understanding of *transformer-based* encoder-decoder models and how to use them with the 🤗Transformers library. Thanks a lot to Victor Sanh, Sasha Rush, Sam Shleifer, Oliver Åstrand, Ted Moskovitz and Kristian Kyvik for giving valuable feedback.

## **Appendix**

As mentioned above, the following code snippet shows how one can program a simple generation method for *transformer-based* encoder-decoder models. Here, we implement a simple *greedy* decoding method using `torch.argmax` to sample the target vector.

```python
from transformers import MarianMTModel, MarianTokenizer
import torch

tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-en-de")

# create ids of encoded input vectors
input_ids = tokenizer("I want to buy a car", return_tensors="pt").input_ids

# create BOS token
decoder_input_ids = tokenizer("<pad>", add_special_tokens=False, return_tensors="pt").input_ids

assert decoder_input_ids[0, 0].item() == model.config.decoder_start_token_id, "`decoder_input_ids` should correspond to `model.config.decoder_start_token_id`"

# STEP 1
# pass input_ids to encoder and to decoder and pass BOS token to decoder to retrieve first logit
outputs = model(input_ids, decoder_input_ids=decoder_input_ids, return_dict=True)

# get encoded sequence
encoded_sequence = (outputs.encoder_last_hidden_state,)
# get logits
lm_logits = outputs.logits

# sample last token with highest prob
next_decoder_input_ids = torch.argmax(lm_logits[:, -1:], axis=-1)

# concat
decoder_input_ids = torch.cat([decoder_input_ids, next_decoder_input_ids], axis=-1)

# STEP 2
# reuse encoded_inputs and pass BOS + "Ich" to decoder to second logit
lm_logits = model(None, encoder_outputs=encoded_sequence, decoder_input_ids=decoder_input_ids, return_dict=True).logits

# sample last token with highest prob again
next_decoder_input_ids = torch.argmax(lm_logits[:, -1:], axis=-1)

# concat again
decoder_input_ids = torch.cat([decoder_input_ids, next_decoder_input_ids], axis=-1)

# STEP 3
lm_logits = model(None, encoder_outputs=encoded_sequence, decoder_input_ids=decoder_input_ids, return_dict=True).logits
next_decoder_input_ids = torch.argmax(lm_logits[:, -1:], axis=-1)
decoder_input_ids = torch.cat([decoder_input_ids, next_decoder_input_ids], axis=-1)

# let's see what we have generated so far!
print(f"Generated so far: {tokenizer.decode(decoder_input_ids[0], skip_special_tokens=True)}")

# This can be written in a loop as well.
```

_Outputs:_

```
Generated so far: Ich will ein
```

In this code example, we show exactly what was described earlier. We pass an input "I want to buy a car" together with the \\(\text{BOS}\\) token to the encoder-decoder model and sample from the first logit \\(\mathbf{l}_1\\) (*i.e.* the first `lm_logits` line). Hereby, our sampling strategy is simple: greedily choose the next decoder input vector that has the highest probability. In an auto-regressive fashion, we then pass the sampled decoder input vector together with the previous inputs to the encoder-decoder model and sample again. We repeat this a third time. As a result, the model has generated the words "Ich will ein". The result is spot-on - this is the beginning of the correct translation of the input. In practice, more complicated decoding methods are used to sample the `lm_logits`, most of which are covered in [this](https://huggingface.co/blog/how-to-generate) blog post.
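As the last comment in the appendix snippet hints, the three manual steps can be folded into a loop. A minimal sketch of such a greedy loop is given below; the fixed step limit and the stop-at-EOS check are arbitrary choices made here for illustration:

```python
import torch
from transformers import MarianMTModel, MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-en-de")

input_ids = tokenizer("I want to buy a car", return_tensors="pt").input_ids
decoder_input_ids = tokenizer("<pad>", add_special_tokens=False, return_tensors="pt").input_ids

# encode once, then greedily sample one token per step in a loop
encoded_sequence = (model.get_encoder()(input_ids, return_dict=True).last_hidden_state,)
for _ in range(20):
    lm_logits = model(None, encoder_outputs=encoded_sequence, decoder_input_ids=decoder_input_ids, return_dict=True).logits
    next_decoder_input_ids = torch.argmax(lm_logits[:, -1:], axis=-1)
    decoder_input_ids = torch.cat([decoder_input_ids, next_decoder_input_ids], axis=-1)
    if next_decoder_input_ids.item() == model.config.eos_token_id:
        break

print(tokenizer.decode(decoder_input_ids[0], skip_special_tokens=True))
```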
Hyperparameter Search with Transformers and Ray Tune
ray-project
November 2, 2020
ray-tune
open-source-collab, nlp
https://huggingface.co/blog/ray-tune
# Hyperparameter Search with Transformers and Ray Tune ##### A guest blog post by Richard Liaw from the Anyscale team With cutting edge research implementations, thousands of trained models easily accessible, the Hugging Face [transformers](https://github.com/huggingface/transformers) library has become critical to the success and growth of natural language processing today. For any machine learning model to achieve good performance, users often need to implement some form of parameter tuning. Yet, nearly everyone ([1](https://medium.com/@prakashakshay90/fine-tuning-bert-model-using-pytorch-f34148d58a37), [2](https://mccormickml.com/2019/07/22/BERT-fine-tuning/#advantages-of-fine-tuning)) either ends up disregarding hyperparameter tuning or opting to do a simplistic grid search with a small search space. However, simple experiments are able to show the benefit of using an advanced tuning technique. Below is [a recent experiment run on a BERT](https://medium.com/distributed-computing-with-ray/hyperparameter-optimization-for-transformers-a-guide-c4e32c6c989b) model from [Hugging Face transformers](https://github.com/huggingface/transformers) on the [RTE dataset](https://aclweb.org/aclwiki/Textual_Entailment_Resource_Pool). Genetic optimization techniques like [PBT](https://docs.ray.io/en/latest/tune/api_docs/schedulers.html#population-based-training-tune-schedulers-populationbasedtraining) can provide large performance improvements compared to standard hyperparameter optimization techniques. <table> <tr> <td><strong>Algorithm</strong> </td> <td><strong>Best Val Acc.</strong> </td> <td><strong>Best Test Acc.</strong> </td> <td><strong>Total GPU min</strong> </td> <td><strong>Total $ cost</strong> </td> </tr> <tr> <td>Grid Search </td> <td>74% </td> <td>65.4% </td> <td>45 min </td> <td>$2.30 </td> </tr> <tr> <td>Bayesian Optimization +Early Stop </td> <td>77% </td> <td>66.9% </td> <td>104 min </td> <td>$5.30 </td> </tr> <tr> <td>Population-based Training </td> <td>78% </td> <td>70.5% </td> <td>48 min </td> <td>$2.45 </td> </tr> </table> If you’re leveraging [Transformers](https://github.com/huggingface/transformers), you’ll want to have a way to easily access powerful hyperparameter tuning solutions without giving up the customizability of the Transformers framework. ![alt_text](/blog/assets/06_ray_tune/ray-hf.jpg "image_tooltip") In the Transformers 3.1 release, [Hugging Face Transformers](https://github.com/huggingface/transformers) and [Ray Tune](https://docs.ray.io/en/latest/tune/index.html) teamed up to provide a simple yet powerful integration. [Ray Tune](https://docs.ray.io/en/latest/tune/index.html) is a popular Python library for hyperparameter tuning that provides many state-of-the-art algorithms out of the box, along with integrations with the best-of-class tooling, such as [Weights and Biases](https://wandb.ai/) and tensorboard. To demonstrate this new [Hugging Face](https://github.com/huggingface/transformers) + [Ray Tune](https://docs.ray.io/en/latest/tune/index.html) integration, we leverage the [Hugging Face Datasets library](https://github.com/huggingface/datasets) to fine tune BERT on [MRPC](https://www.microsoft.com/en-us/download/details.aspx?id=52398). To run this example, please first run: **`pip install "ray[tune]" transformers datasets scipy sklearn torch`** Simply plug in one of Ray’s standard tuning algorithms by just adding a few lines of code. 
```python from datasets import load_dataset, load_metric from transformers import (AutoModelForSequenceClassification, AutoTokenizer, Trainer, TrainingArguments) tokenizer = AutoTokenizer.from_pretrained('distilbert-base-uncased') dataset = load_dataset('glue', 'mrpc') metric = load_metric('glue', 'mrpc') def encode(examples): outputs = tokenizer( examples['sentence1'], examples['sentence2'], truncation=True) return outputs encoded_dataset = dataset.map(encode, batched=True) def model_init(): return AutoModelForSequenceClassification.from_pretrained( 'distilbert-base-uncased', return_dict=True) def compute_metrics(eval_pred): predictions, labels = eval_pred predictions = predictions.argmax(axis=-1) return metric.compute(predictions=predictions, references=labels) # Evaluate during training and a bit more often # than the default to be able to prune bad trials early. # Disabling tqdm is a matter of preference. training_args = TrainingArguments( "test", evaluation_strategy="steps", eval_steps=500, disable_tqdm=True) trainer = Trainer( args=training_args, tokenizer=tokenizer, train_dataset=encoded_dataset["train"], eval_dataset=encoded_dataset["validation"], model_init=model_init, compute_metrics=compute_metrics, ) # Default objective is the sum of all metrics # when metrics are provided, so we have to maximize it. trainer.hyperparameter_search( direction="maximize", backend="ray", n_trials=10 # number of trials ) ``` By default, each trial will utilize 1 CPU, and optionally 1 GPU if available. You can leverage multiple [GPUs for a parallel hyperparameter search](https://docs.ray.io/en/latest/tune/user-guide.html#resources-parallelism-gpus-distributed) by passing in a `resources_per_trial` argument. You can also easily swap different parameter tuning algorithms such as [HyperBand](https://docs.ray.io/en/latest/tune/api_docs/schedulers.html#asha-tune-schedulers-ashascheduler), [Bayesian Optimization](https://docs.ray.io/en/latest/tune/api_docs/suggestion.html), [Population-Based Training](https://docs.ray.io/en/latest/tune/api_docs/schedulers.html#population-based-training-tune-schedulers-populationbasedtraining): To run this example, first run: **`pip install hyperopt`** ```python from ray.tune.suggest.hyperopt import HyperOptSearch from ray.tune.schedulers import ASHAScheduler trainer = Trainer( args=training_args, tokenizer=tokenizer, train_dataset=encoded_dataset["train"], eval_dataset=encoded_dataset["validation"], model_init=model_init, compute_metrics=compute_metrics, ) best_trial = trainer.hyperparameter_search( direction="maximize", backend="ray", # Choose among many libraries: # https://docs.ray.io/en/latest/tune/api_docs/suggestion.html search_alg=HyperOptSearch(metric="objective", mode="max"), # Choose among schedulers: # https://docs.ray.io/en/latest/tune/api_docs/schedulers.html scheduler=ASHAScheduler(metric="objective", mode="max")) ``` It also works with [Weights and Biases](https://wandb.ai/) out of the box! 
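For example, a search that gives each trial its own GPU might look like the sketch below. Here it is assumed that the extra keyword arguments are forwarded to Ray Tune, and the exact resource numbers are placeholders; the Weights and Biases integration mentioned above is shown right after this sketch.

```python
# hypothetical sketch: allocate 2 CPUs and 1 GPU per trial so that trials run in parallel across GPUs
best_trial = trainer.hyperparameter_search(
    direction="maximize",
    backend="ray",
    n_trials=10,
    resources_per_trial={"cpu": 2, "gpu": 1},  # assumed to be passed through to Ray Tune
)
```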
![alt_text](/blog/assets/06_ray_tune/ray-wandb.png "image_tooltip") ### Try it out today: * `pip install -U ray` * `pip install -U transformers datasets` * Check out the [Hugging Face documentation](https://huggingface.co/transformers/) and [Discussion thread](https://discuss.huggingface.co/t/using-hyperparameter-search-in-trainer/785/10) * [End-to-end example of using Hugging Face hyperparameter search for text classification](https://github.com/huggingface/notebooks/blob/master/examples/text_classification.ipynb) If you liked this blog post, be sure to check out: * [Transformers + GLUE + Ray Tune example](https://docs.ray.io/en/latest/tune/examples/index.html#hugging-face-huggingface-transformers) * Our [Weights and Biases report](https://wandb.ai/amogkam/transformers/reports/Hyperparameter-Optimization-for-Huggingface-Transformers--VmlldzoyMTc2ODI) on Hyperparameter Optimization for Transformers * The [simplest way to serve your NLP model](https://medium.com/distributed-computing-with-ray/the-simplest-way-to-serve-your-nlp-model-in-production-with-pure-python-d42b6a97ad55) from scratch
Porting fairseq wmt19 translation system to transformers
stas
November 3, 2020
porting-fsmt
open-source-collab, nlp
https://huggingface.co/blog/porting-fsmt
# Porting fairseq wmt19 translation system to transformers

##### A guest blog post by Stas Bekman

This article is an attempt to document how the [fairseq wmt19 translation system](https://github.com/pytorch/fairseq/tree/master/examples/wmt19) was ported to [`transformers`](https://github.com/huggingface/transformers/).

I was looking for some interesting project to work on and [Sam Shleifer](https://github.com/sshleifer) suggested I work on [porting a high quality translator](https://github.com/huggingface/transformers/issues/5419).

I read the short paper: [Facebook FAIR's WMT19 News Translation Task Submission](https://arxiv.org/abs/1907.06616) that describes the original system and decided to give it a try.

Initially, I had no idea how to approach this complex project and Sam helped me to [break it down into smaller tasks](https://github.com/huggingface/transformers/issues/5419), which was a great help.

I chose to work with the pre-trained `en-ru`/`ru-en` models during porting as I speak both languages. It'd have been much more difficult to work with `de-en`/`en-de` pairs as I don't speak German, and being able to evaluate the translation quality by just reading and making sense of the outputs at the advanced stages of the porting process saved me a lot of time.

Also, as I did the initial porting with the `en-ru`/`ru-en` models, I was totally unaware that the `de-en`/`en-de` models used a merged vocabulary, whereas the `en-ru`/`ru-en` models used 2 separate vocabularies of different sizes. So once I had done the more complicated work of supporting 2 separate vocabularies, it was trivial to get the merged vocabulary to work.

## Let's cheat

The first step was to cheat, of course. Why make a big effort when one can make a little one? So I wrote a [short notebook](https://github.com/stas00/porting/tree/master/transformers/fairseq-wmt19/nbs/cheat.ipynb) that in a few lines of code provided a proxy to `fairseq` and emulated the `transformers` API.

If nothing but basic translation had been required, this would have been enough. But, of course, we wanted the full porting, so after this small victory, I moved on to much harder things.

## Preparations

For the sake of this article let's assume that we work under `~/porting`, and therefore let's create this directory:

```
mkdir ~/porting
cd ~/porting
```

We need to install a few things for this work:

```
# install fairseq
git clone https://github.com/pytorch/fairseq
cd fairseq
pip install -e .
# install mosesdecoder under fairseq
git clone https://github.com/moses-smt/mosesdecoder
# install fastBPE under fairseq
git clone https://github.com/glample/fastBPE.git
cd fastBPE; g++ -std=c++11 -pthread -O3 fastBPE/main.cc -IfastBPE -o fast; cd -
cd -
# install transformers
git clone https://github.com/huggingface/transformers/
pip install -e .[dev]
```

## Files

As a quick overview, the following files needed to be created and written:

* [`src/transformers/configuration_fsmt.py`](https://github.com/huggingface/transformers/blob/129fdae04033fe4adfe013b734deaec6ec34ae2e/src/transformers/configuration_fsmt.py) - a short configuration class.
* [`src/transformers/convert_fsmt_original_pytorch_checkpoint_to_pytorch.py`](https://github.com/huggingface/transformers/blob/129fdae04033fe4adfe013b734deaec6ec34ae2e/src/transformers/convert_fsmt_original_pytorch_checkpoint_to_pytorch.py) - a complex conversion script.
* [`src/transformers/modeling_fsmt.py`](https://github.com/huggingface/transformers/blob/129fdae04033fe4adfe013b734deaec6ec34ae2e/src/transformers/modeling_fsmt.py) - this is where the model architecture is implemented. * [`src/transformers/tokenization_fsmt.py`](https://github.com/huggingface/transformers/blob/129fdae04033fe4adfe013b734deaec6ec34ae2e/src/transformers/tokenization_fsmt.py) - a tokenizer code. * [`tests/test_modeling_fsmt.py`](https://github.com/huggingface/transformers/blob/129fdae04033fe4adfe013b734deaec6ec34ae2e/tests/test_modeling_fsmt.py) - model tests. * [`tests/test_tokenization_fsmt.py`](https://github.com/huggingface/transformers/blob/129fdae04033fe4adfe013b734deaec6ec34ae2e/tests/test_tokenization_fsmt.py) - tokenizer tests. * [`docs/source/model_doc/fsmt.rst`](https://github.com/huggingface/transformers/blob/129fdae04033fe4adfe013b734deaec6ec34ae2e/docs/source/model_doc/fsmt.rst) - a doc file. There are other files that needed to be modified as well, we will talk about those towards the end. ## Conversion One of the most important parts of the porting process is to create a script that will take all the available source data provided by the original developer of the model, which includes a checkpoint with pre-trained weights, model and training configuration, dictionaries and tokenizer support files, and convert them into a new set of model files supported by `transformers`. You will find the final conversion script here: [src/transformers/convert_fsmt_original_pytorch_checkpoint_to_pytorch.py](https://github.com/huggingface/transformers/blob/129fdae04033fe4adfe013b734deaec6ec34ae2e/src/transformers/convert_fsmt_original_pytorch_checkpoint_to_pytorch.py) I started this process by copying one of the existing conversion scripts `src/transformers/convert_bart_original_pytorch_checkpoint_to_pytorch.py`, gutted most of it out and then gradually added parts to it as I was progressing in the porting process. During the development I was testing all my code against a local copy of the converted model files, and only at the very end when everything was ready I uploaded the files to 🤗 s3 and then continued testing against the online version. ## fairseq model and its support files Let's first look at what data we get with the `fairseq` pre-trained model. We are going to use the convenient `torch.hub` API, which makes it very easy to deploy models submitted to [that hub](https://pytorch.org/hub/): ``` import torch torch.hub.load('pytorch/fairseq', 'transformer.wmt19.en-ru', checkpoint_file='model4.pt', tokenizer='moses', bpe='fastbpe') ``` This code downloads the pre-trained model and its support files. I found this information at the page corresponding to [fairseq](https://pytorch.org/hub/pytorch_fairseq_translation/) on the pytorch hub. To see what's inside the downloaded files, we have to first hunt down the right folder under `~/.cache`. ``` ls -1 ~/.cache/torch/hub/pytorch_fairseq/ ``` shows: ``` 15bca559d0277eb5c17149cc7e808459c6e307e5dfbb296d0cf1cfe89bb665d7.ded47c1b3054e7b2d78c0b86297f36a170b7d2e7980d8c29003634eb58d973d9 15bca559d0277eb5c17149cc7e808459c6e307e5dfbb296d0cf1cfe89bb665d7.ded47c1b3054e7b2d78c0b86297f36a170b7d2e7980d8c29003634eb58d973d9.json ``` You may have more than one entry there if you have been using the `hub` for other models. 
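As a quick sanity check that the download works end to end, one can translate a sentence right away. This is my own snippet, not part of the original workflow, and it assumes the `translate()` convenience method that the PyTorch hub page for fairseq translation demonstrates:

```python
import torch

# load the same en-ru model as above via torch.hub
en2ru = torch.hub.load(
    "pytorch/fairseq",
    "transformer.wmt19.en-ru",
    checkpoint_file="model4.pt",
    tokenizer="moses",
    bpe="fastbpe",
)
print(en2ru.translate("Machine learning is great!"))  # prints a Russian translation
```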
Let's make a symlink so that we can easily refer to that obscure cache folder name down the road: ``` ln -s /code/data/cache/torch/hub/pytorch_fairseq/15bca559d0277eb5c17149cc7e808459c6e307e5dfbb296d0cf1cfe89bb665d7.ded47c1b3054e7b2d78c0b86297f36a170b7d2e7980d8c29003634eb58d973d9 \ ~/porting/pytorch_fairseq_model ``` Note: the path could be different when you try it yourself, since the hash value of the model could change. You will find the right one in `~/.cache/torch/hub/pytorch_fairseq/` If we look inside that folder: ``` ls -l ~/porting/pytorch_fairseq_model/ total 13646584 -rw-rw-r-- 1 stas stas 532048 Sep 8 21:29 bpecodes -rw-rw-r-- 1 stas stas 351706 Sep 8 21:29 dict.en.txt -rw-rw-r-- 1 stas stas 515506 Sep 8 21:29 dict.ru.txt -rw-rw-r-- 1 stas stas 3493170533 Sep 8 21:28 model1.pt -rw-rw-r-- 1 stas stas 3493170532 Sep 8 21:28 model2.pt -rw-rw-r-- 1 stas stas 3493170374 Sep 8 21:28 model3.pt -rw-rw-r-- 1 stas stas 3493170386 Sep 8 21:29 model4.pt ``` we have: 1. `model*.pt` - 4 checkpoints (pytorch `state_dict` with all the pre-trained weights, and various other things) 2. `dict.*.txt` - source and target dictionaries 3. `bpecodes` - special map file used by the tokenizer We are going to investigate each of these files in the following sections. ## How translation systems work Here is a very brief introduction to how computers translate text nowadays. Computers can't read text, but can only handle numbers. So when working with text we have to map one or more letters into numbers, and hand those to a computer program. When the program completes it too returns numbers, which we need to convert back into text. Let's start with two sentences in Russian and English and assign a unique number to each word: ``` я люблю следовательно я существую 10 11 12 10 13 I love therefore I am 20 21 22 20 23 ``` The numbers starting with 10 map Russian words to unique numbers. The numbers starting with 20 do the same for English words. If you don't speak Russian, you can still see that the word `я` (means 'I') repeats twice in the sentence and it gets the same number 10 associated with it. Same goes for `I` (20), which also repeats twice. A translation system works in the following stages: ``` 1. [я люблю следовательно я существую] # tokenize sentence into words 2. [10 11 12 10 13] # look up words in the input dictionary and convert to ids 3. [black box] # machine learning system magic 4. [20 21 22 20 23] # look up numbers in the output dictionary and convert to text 5. [I love therefore I am] # detokenize the tokens back into a sentence ``` If we combine the first two and the last two steps we get 3 stages: 1. **Encode input**: break input text into tokens, create a dictionary (vocab) of these tokens and remap each token into a unique id in that dictionary. 2. **Generate translation**: take input numbers, run them through a pre-trained machine learning model which predicts the best translation, and return output numbers. 3. **Decode output**: take output numbers, look them up in the target language dictionary, convert them back to text, and finally merge the converted tokens into the translated sentence. The second stage may return one or several possible translations. In the case of the latter the caller then can choose the most suitable outcome. In this article I will refer to [the beam search algorithm](https://en.wikipedia.org/wiki/Beam_search), which is one of the ways multiple possible results are searched for. And the size of the beam refers to how many results are returned. 
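Jumping ahead for a moment, purely for illustration: in `transformers` these knobs are exposed through the generic `generate()` API. The sketch below is mine and uses the final ported model, which at this point in the story doesn't exist yet; the Russian sentence is the toy example from above.

```python
from transformers import FSMTForConditionalGeneration, FSMTTokenizer

mname = "facebook/wmt19-ru-en"  # the end product of this article
tokenizer = FSMTTokenizer.from_pretrained(mname)
model = FSMTForConditionalGeneration.from_pretrained(mname)

inputs = tokenizer("я люблю следовательно я существую", return_tensors="pt")
# num_beams = how many candidate translations the search explores;
# num_return_sequences = how many of them the caller gets back.
outputs = model.generate(**inputs, num_beams=5, num_return_sequences=3)
for out in outputs:
    print(tokenizer.decode(out, skip_special_tokens=True))
```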
If only one result is requested, the model will choose the one with the highest probability. If multiple results are requested, it will return those results sorted by their probabilities.

Note that this same idea applies to the majority of NLP tasks, and not just translation.

## Tokenization

Early systems tokenized sentences into words and punctuation marks. But since many languages have hundreds of thousands of words, it is very taxing to work with huge vocabularies, as it dramatically increases the compute resource requirements and the length of time to complete the task.

As of 2020 there are quite a few different tokenizing methods, but most of the recent ones are based on sub-word tokenization - that is, instead of breaking the input text down into words, these modern tokenizers break it down into word segments and letters, using some kind of training to obtain an optimal tokenization.

Let's see how this approach helps to reduce memory and computation requirements. If we have an input vocabulary of 6 common words: go, going, speak, speaking, sleep, sleeping - with word-level tokenization we end up with 6 tokens. However, if we break these down into: go, go-ing, speak, speak-ing, etc., then we have only 4 tokens in our vocabulary: go, speak, sleep, ing. This simple change made a 33% improvement! Of course, sub-word tokenizers don't use grammar rules; they are trained on massive text inputs to find such splits. In this example I used a simple grammar rule as it's easy to understand.

Another important advantage of this approach is in dealing with input words that aren't in our vocabulary. For example, let's say our system encounters the word `grokking` (*), which can't be found in its vocabulary. If we split it into `grokk` + `ing`, then the machine learning model might not know what to do with the first part of the word, but it gets a useful insight that `ing` indicates a continuous tense, so it'll be able to produce a better translation. In such a situation the tokenizer will split the unknown segments into segments it knows, in the worst case reducing them to individual letters.

* footnote: to grok was coined in 1961 by Robert A. Heinlein in "Stranger in a Strange Land": to understand (something) intuitively or by empathy.

There are many other nuances to why the modern tokenization approach is far superior to simple word tokenization, which won't be covered in the scope of this article. Most of these systems are much more complex in how they do the tokenization than the simple example of splitting `ing` endings just demonstrated, but the principle is similar.

## Tokenizer porting

The first step was to port the encoder part of the tokenizer, where text is converted to ids. The decoder part won't be needed until the very end.

### fairseq's tokenizer workings

Let's understand how `fairseq`'s tokenizer works.

`fairseq` (*) uses the [Byte Pair Encoding](https://en.wikipedia.org/wiki/Byte_pair_encoding) (BPE) algorithm for tokenization.

* footnote: from here on when I refer to `fairseq`, I refer [to this specific model implementation](https://github.com/pytorch/fairseq/tree/master/examples/wmt19) - the `fairseq` project itself has dozens of different implementations of different models.
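Before looking at `fairseq`'s actual BPE, here is a tiny illustrative sketch of my own (not from the original workflow) that makes the word-level vs. sub-word vocabulary comparison above concrete, using the same go/going/speak toy example:

```python
# Toy illustration only: a real tokenizer learns its splits from data;
# the hard-coded "-ing" rule here is just for readability.
words = ["go", "going", "speak", "speaking", "sleep", "sleeping"]

word_level_vocab = set(words)

def toy_subword_split(word):
    # pretend the trained tokenizer discovered the "ing" suffix
    return [word[:-3], "ing"] if word.endswith("ing") else [word]

subword_vocab = {piece for w in words for piece in toy_subword_split(w)}

print(sorted(word_level_vocab))  # ['go', 'going', 'sleep', 'sleeping', 'speak', 'speaking']
print(sorted(subword_vocab))     # ['go', 'ing', 'sleep', 'speak']
print(len(word_level_vocab), "->", len(subword_vocab))  # 6 -> 4
```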
Let's see what BPE does:

```
import torch
sentence = "Machine Learning is great"
checkpoint_file='model4.pt'
model = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.en-ru', checkpoint_file=checkpoint_file, tokenizer='moses', bpe='fastbpe')

# encode step by step
tokens = model.tokenize(sentence)
print("tokenize ", tokens)

bpe = model.apply_bpe(tokens)
print("apply_bpe: ", bpe)

bin = model.binarize(bpe)
print("binarize: ", len(bin), bin)

# compare to model.encode - should give us the same output
expected = model.encode(sentence)
print("encode: ", len(expected), expected)
```

gives us:

```
('tokenize ', 'Machine Learning is great')
('apply_bpe: ', 'Mach@@ ine Lear@@ ning is great')
('binarize: ', 7, tensor([10217,  1419,     3,  2515,    21,  1054,     2]))
('encode: ', 7, tensor([10217,  1419,     3,  2515,    21,  1054,     2]))
```

You can see that `model.encode` does `tokenize+apply_bpe+binarize` - as we get the same output.

The steps were:

1. `tokenize`: normally it'd escape apostrophes and do other pre-processing; in this example it just returned the input sentence without any changes
2. `apply_bpe`: BPE splits the input into words and sub-words according to its `bpecodes` file supplied by the tokenizer - we get 6 BPE chunks
3. `binarize`: this simply remaps the BPE chunks from the previous step into their corresponding ids in the vocabulary (which is also downloaded with the model)

You can refer to [this notebook](https://github.com/stas00/porting/tree/master/transformers/fairseq-wmt19/nbs/tokenizer.ipynb) to see more details.

This is a good time to look inside the `bpecodes` file. Here is the top of the file:

```
$ head -15 ~/porting/pytorch_fairseq_model/bpecodes
e n</w> 1423551864
e r 1300703664
e r</w> 1142368899
i n 1130674201
c h 933581741
a n 845658658
t h 811639783
e n 780050874
u n 661783167
s t 592856434
e i 579569900
a r 494774817
a l 444331573
o r 439176406
th e</w> 432025210
[...]
```

The top entries of this file are pairs of very frequent short 1-letter sequences. As we will see in a moment, the bottom includes the most common multi-letter sub-words and even full long words.

A special token `</w>` indicates the end of the word. So in several lines quoted above we find:

```
e n</w> 1423551864
e r</w> 1142368899
th e</w> 432025210
```

If the second column doesn't include `</w>`, it means that this segment is found in the middle of the word and not at the end of it.

The last column declares the number of times this BPE code has been encountered during training. The `bpecodes` file is sorted by this column - so the most common BPE codes are on top.

By looking at the counts we now know that when this tokenizer was trained it encountered 1,423,551,864 words ending in `en`, 1,142,368,899 words ending in `er` and 432,025,210 words ending in `the`. For the latter it most likely means the actual word `the`, but it would also include words like `lathe`, `loathe`, `tithe`, etc.

These huge numbers also indicate to us that this tokenizer was trained on an enormous amount of text!

If we look at the bottom of the same file:

```
$ tail -10 ~/porting/pytorch_fairseq_model/bpecodes
4 x 109019
F ische</w> 109018
sal aries</w> 109012
e kt 108978
ver gewal 108978
Sten cils</w> 108977
Freiwilli ge</w> 108969
doub les</w> 108965
po ckets</w> 108953
Gö tz</w> 108943
```

we see complex combinations of sub-words which are still pretty frequent, e.g. `sal aries` appeared 109,012 times! So it got its own dedicated entry in the `bpecodes` map file.

How does `apply_bpe` do its work?
By looking up the various combinations of letters in the `bpecodes` map file and when finding the longest fitting entry it uses that. Going back to our example, we saw that it split `Machine` into: `Mach@@` + `ine` - let's check: ``` $ grep -i ^mach ~/porting/pytorch_fairseq_model/bpecodes mach ine</w> 463985 Mach t 376252 Mach ines</w> 374223 mach ines</w> 214050 Mach th 119438 ``` You can see that it has `mach ine</w>`. We don't see `Mach ine` in there - so it must be handling lower cased look ups when normal case is not matching. Now let's check: `Lear@@` + `ning` ``` $ grep -i ^lear ~/porting/pytorch_fairseq_model/bpecodes lear n</w> 675290 lear ned</w> 505087 lear ning</w> 417623 ``` We find `lear ning</w>` is there (again the case is not the same). Thinking more about it, the case probably doesn't matter for tokenization, as long as there is a unique entry for `Mach`/`Lear` and `mach`/`lear` in the dictionary where it's very critical to have each case covered. Hopefully, you can now see how this works. One confusing thing is that if you remember the `apply_bpe` output was: ``` ('apply_bpe: ', 6, ['Mach@@', 'ine', 'Lear@@', 'ning', 'is', 'great']) ``` Instead of marking endings of the words with `</w>`, it leaves those as is, but, instead, marks words that were not the endings with `@@`. This is probably so, because `fastBPE` implementation is used by `fairseq` and that's how it does things. I had to change this to fit the `transformers` implementation, which doesn't use `fastBPE`. One last thing to check is the remapping of the BPE codes to vocabulary ids. To repeat, we had: ``` ('apply_bpe: ', 'Mach@@ ine Lear@@ ning is great') ('binarize: ', 7, tensor([10217, 1419, 3, 2515, 21, 1054, 2])) ``` `2` - the last token id is a `eos` (end of stream) token. It's used to indicate to the model the end of input. And then `Mach@@` gets remapped to `10217`, and `ine` to `1419`. Let's check that the dictionary file is in agreement: ``` $ grep ^Mach@@ ~/porting/pytorch_fairseq_model/dict.en.txt Mach@@ 6410 $ grep "^ine " ~/porting/pytorch_fairseq_model/dict.en.txt ine 88376 ``` Wait a second - those aren't the ids that we got after `binarize`, which should be `10217` and `1419` correspondingly. It took some investigating to find out that the vocab file ids aren't the ids used by the model and that internally it remaps them to new ids once the vocab file is loaded. Luckily, I didn't need to figure out how exactly it was done. Instead, I just used `fairseq.data.dictionary.Dictionary.load` to load the dictionary (*), which performed all the re-mappings, - and I then saved the final dictionary. I found out about that `Dictionary` class by stepping through `fairseq` code with debugger. * footnote: the more I work on porting models and datasets, the more I realize that putting the original code to work for me, rather than trying to replicate it, is a huge time saver and most importantly that code has already been tested - it's too easy to miss something and down the road discover big problems! After all, at the end, none of this conversion code will matter, since only the data it generated will be used by `transformers` and its end users. 
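As an aside, here is a minimal toy sketch of the classic BPE merge loop that a `bpecodes` file drives. This is my own simplified code, not fairseq's or fastBPE's (the helper names `load_merge_ranks` and `toy_apply_bpe` are made up), and it assumes the standard BPE convention of applying merges in order of their rank in the file; the real implementation is much more optimized.

```python
def load_merge_ranks(bpecodes_path, limit=None):
    """Read `bpecodes` into {(left, right): rank}; earlier lines = higher priority."""
    ranks = {}
    with open(bpecodes_path, encoding="utf-8") as f:
        for rank, line in enumerate(f):
            if limit is not None and rank >= limit:
                break
            left, right = line.split()[:2]  # the 3rd column is the frequency count
            ranks[(left, right)] = rank
    return ranks

def toy_apply_bpe(word, merge_ranks):
    """Split one word into BPE sub-words, marking the last symbol with </w>."""
    symbols = list(word[:-1]) + [word[-1] + "</w>"]
    while len(symbols) > 1:
        # pick the adjacent pair with the best (lowest) rank, if any pair is known
        candidates = [
            (merge_ranks[(a, b)], i)
            for i, (a, b) in enumerate(zip(symbols, symbols[1:]))
            if (a, b) in merge_ranks
        ]
        if not candidates:
            break
        _, i = min(candidates)
        symbols = symbols[:i] + [symbols[i] + symbols[i + 1]] + symbols[i + 2:]
    return symbols
```

Real words would first go through the `moses` tokenizer, and fastBPE handles many corner cases this toy skips, but it shows why the ranked `bpecodes` entries plus the `</w>` end-of-word marker are all the tokenizer needs.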
Here is the relevant part of the conversion script:

```
import json
import os
import re

from fairseq.data.dictionary import Dictionary

def rewrite_dict_keys(d):
    # (1) remove word breaking symbol
    # (2) add word ending symbol where the word is not broken up,
    # e.g.: d = {'le@@': 5, 'tt@@': 6, 'er': 7} => {'le': 5, 'tt': 6, 'er</w>': 7}
    d2 = dict((re.sub(r"@@$", "", k), v) if k.endswith("@@") else (re.sub(r"$", "</w>", k), v) for k, v in d.items())
    keep_keys = "<s> <pad> </s> <unk>".split()
    # restore the special tokens
    for k in keep_keys:
        del d2[f"{k}</w>"]
        d2[k] = d[k]  # restore
    return d2

src_dict_file = os.path.join(fsmt_folder_path, f"dict.{src_lang}.txt")
src_dict = Dictionary.load(src_dict_file)
src_vocab = rewrite_dict_keys(src_dict.indices)
src_vocab_size = len(src_vocab)
src_vocab_file = os.path.join(pytorch_dump_folder_path, "vocab-src.json")
print(f"Generating {src_vocab_file}")
with open(src_vocab_file, "w", encoding="utf-8") as f:
    f.write(json.dumps(src_vocab, ensure_ascii=False, indent=json_indent))

# we did the same for the target dict - omitted quoting it here
# and we also had to save `bpecodes`, it's called `merges.txt` in the transformers land
```

After running the conversion script, let's check the converted dictionary:

```
$ grep '"Mach"' /code/huggingface/transformers-fair-wmt/data/wmt19-en-ru/vocab-src.json
"Mach": 10217,
$ grep '"ine</w>":' /code/huggingface/transformers-fair-wmt/data/wmt19-en-ru/vocab-src.json
"ine</w>": 1419,
```

We have the correct ids in the `transformers` version of the vocab file.

As you can see I also had to re-write the vocabularies to match the `transformers` BPE implementation. We have to change:

```
['Mach@@', 'ine', 'Lear@@', 'ning', 'is', 'great']
```

to:

```
['Mach', 'ine</w>', 'Lear', 'ning</w>', 'is</w>', 'great</w>']
```

Instead of marking the chunks that are non-final segments of a word (with `@@`), we mark the segments that end a word (with `</w>`). One can easily go from one style of encoding to the other and back.

This successfully completed the porting of the first part of the model files. You can see the final version of the code [here](https://github.com/huggingface/transformers/blob/129fdae04033fe4adfe013b734deaec6ec34ae2e/src/transformers/convert_fsmt_original_pytorch_checkpoint_to_pytorch.py#L128).

If you're curious to look deeper there are more tinkering bits in [this notebook](https://github.com/stas00/porting/tree/master/transformers/fairseq-wmt19/nbs/tokenizer-dev.ipynb).

### Porting tokenizer's encoder to transformers

`transformers` can't rely on [`fastBPE`](https://github.com/glample/fastBPE) since the latter requires a C-compiler, but luckily someone had already implemented a python version of the same in [`tokenization_xlm.py`](https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_xlm.py).

So I just copied it to `src/transformers/tokenization_fsmt.py` and renamed the class names:

```
cp tokenization_xlm.py tokenization_fsmt.py
perl -pi -e 's|XLM|FSMT|ig; s|xlm|fsmt|g;' tokenization_fsmt.py
```

and with very few changes I had a working encoder part of the tokenizer. There was a lot of code that didn't apply to the languages I needed to support, so I removed that code.

Since I needed 2 different vocabularies instead of one, I had to change the code - here in the tokenizer and everywhere else - to support both.
So for example I had to override the super-class' methods:

```
    def get_vocab(self) -> Dict[str, int]:
        return self.get_src_vocab()

    @property
    def vocab_size(self) -> int:
        return self.src_vocab_size
```

Since `fairseq` didn't use `bos` (beginning of stream) tokens, I also had to change the code to not include those (*):

```
-            return bos + token_ids_0 + sep
-        return bos + token_ids_0 + sep + token_ids_1 + sep
+            return token_ids_0 + sep
+        return token_ids_0 + sep + token_ids_1 + sep
```

* footnote: this is the output of `diff(1)` which shows the difference between two chunks of code - lines starting with `-` show what was removed, and with `+` what was added.

`fairseq` was also escaping characters and performing an aggressive dash splitting, so I had to also change:

```
-        [...].tokenize(text, return_str=False, escape=False)
+        [...].tokenize(text, return_str=False, escape=True, aggressive_dash_splits=True)
```

If you're following along, and would like to see all the changes I did to the original `tokenization_xlm.py`, you can do:

```
cp tokenization_xlm.py tokenization_orig.py
perl -pi -e 's|XLM|FSMT|g; s|xlm|fsmt|g;' tokenization_orig.py
diff -u tokenization_orig.py tokenization_fsmt.py | less
```

Just make sure you're checking out the repository [around the time fsmt was released](https://github.com/huggingface/transformers/tree/129fdae04033fe4adfe013b734deaec6ec34ae2e), since the 2 files could have diverged since then.

The final stage was to run through a bunch of inputs and to ensure that the ported tokenizer produced the same ids as the original. You can see this is done in [this notebook](https://github.com/stas00/porting/tree/master/transformers/fairseq-wmt19/nbs/tokenizer.ipynb), which I was running repeatedly while trying to figure out how to make the outputs match.

This is how most of the porting process went: I'd take a small feature, run it the `fairseq` way, get the outputs, do the same with the `transformers` code and try to make the outputs match - fiddling with the code until it did - then try a different kind of input, make sure it produced the same outputs, and so on, until all inputs produced outputs that matched.

## Porting the core translation functionality

Having had a relatively quick success with porting the tokenizer (obviously, thanks to most of the code being there already), the next stage was much more complex. This is the `generate()` function which takes input ids, runs them through the model and returns output ids.

I had to break it down into multiple sub-tasks. I had to:

1. port the model weights.
2. make `generate()` work for a single beam (i.e. return just one result).
3. and then for multiple beams (i.e. return multiple results).

I first researched which of the existing architectures was the closest to my needs. It was BART that fit the closest, so I went ahead and did:

```
cp modeling_bart.py modeling_fsmt.py
perl -pi -e 's|Bart|FSMT|ig; s|bart|fsmt|g;' modeling_fsmt.py
```

This was my starting point, which I then needed to tweak to work with the model weights provided by `fairseq`.

### Porting weights and configuration

The first thing I did was to look at what was inside the publicly shared checkpoint. [This notebook](https://github.com/stas00/porting/tree/master/transformers/fairseq-wmt19/nbs/config.ipynb) shows what I did there.

I discovered that there were 4 checkpoints in there. I had no idea what to do about it, so I started with the simpler job of using just the first checkpoint.
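As a quick aside, here is a minimal way to peek at what such a checkpoint contains. This is my own exploratory snippet (not part of the conversion script), reusing the symlinked path set up earlier; the exact key names are an assumption based on what the article mentions (`args`, `model`, `last_optimizer_state`).

```python
import os
import torch

# fairseq checkpoints are regular torch pickles: a dict with the model weights
# plus training metadata. Load on CPU so no GPU is needed for poking around.
path = os.path.expanduser("~/porting/pytorch_fairseq_model/model1.pt")
ckpt = torch.load(path, map_location="cpu")

print(list(ckpt.keys()))  # e.g. ['args', 'model', ..., 'last_optimizer_state']
print(sum(v.numel() for v in ckpt["model"].values()))  # total number of weights
```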
Later I discovered that `fairseq` used all 4 checkpoints in an ensemble to get the best predictions, and that `transformers` currently doesn't support that feature. When the porting was completed and I was able to measure the performance scores, I found out that the `model4.pt` checkpoint provided the best score. But during the porting performance didn't matter much. Since I was using only one checkpoint it was crucial that when I was comparing outputs, I had `fairseq` also use just one and the same checkpoint. To accomplish that I used a slightly different `fairseq` API: ``` from fairseq import hub_utils #checkpoint_file = 'model1.pt:model2.pt:model3.pt:model4.pt' checkpoint_file = 'model1.pt' model_name_or_path = 'transformer.wmt19.ru-en' data_name_or_path = '.' cls = fairseq.model_parallel.models.transformer.ModelParallelTransformerModel models = cls.hub_models() kwargs = {'bpe': 'fastbpe', 'tokenizer': 'moses'} ru2en = hub_utils.from_pretrained( model_name_or_path, checkpoint_file, data_name_or_path, archive_map=models, **kwargs ) ``` First I looked at the model: ``` print(ru2en["models"][0]) ``` ``` TransformerModel( (encoder): TransformerEncoder( (dropout_module): FairseqDropout() (embed_tokens): Embedding(31232, 1024, padding_idx=1) (embed_positions): SinusoidalPositionalEmbedding() (layers): ModuleList( (0): TransformerEncoderLayer( (self_attn): MultiheadAttention( (dropout_module): FairseqDropout() (k_proj): Linear(in_features=1024, out_features=1024, bias=True) (v_proj): Linear(in_features=1024, out_features=1024, bias=True) (q_proj): Linear(in_features=1024, out_features=1024, bias=True) (out_proj): Linear(in_features=1024, out_features=1024, bias=True) ) [...] # the full output is in the notebook ``` which looked very similar to BART's architecture, with some slight differences in a few layers - some were added, others removed. So this was great news as I didn't have to re-invent the wheel, but to only tweak a well-working design. Note that in the code sample above I'm not using `torch.load()` to load `state_dict`. This is what I initially did and the result was most puzzling - I was missing `self_attn.(k|q|v)_proj` weights and instead had a single `self_attn.in_proj`. When I tried loading the model using `fairseq` API, it fixed things up - apparently that model was old and was using an old architecture that had one set of weights for `k/q/v` and the newer architecture has them separate. When `fairseq` loads this old model, it rewrites the weights to match the modern architecture. I also used [this notebook](https://github.com/stas00/porting/tree/master/transformers/fairseq-wmt19/nbs/visualize-models.ipynb) to compare the `state_dict`s visually. In that notebook you will also see that `fairseq` fetches a 2.2GB-worth of data in `last_optimizer_state`, which we can safely ignore, and have a 3 times leaner final model size. In the conversion script I also had to remove some `state_dict` keys, which I wasn't going to use, e.g. `model.encoder.version`, `model.model` and a few others. Next we look at the configuration args: ``` args = dict(vars(ru2en["args"])) pprint(args) ``` ``` 'activation_dropout': 0.0, 'activation_fn': 'relu', 'adam_betas': '(0.9, 0.98)', 'adam_eps': 1e-08, 'adaptive_input': False, 'adaptive_softmax_cutoff': None, 'adaptive_softmax_dropout': 0, 'arch': 'transformer_wmt_en_de_big', 'attention_dropout': 0.1, 'bpe': 'fastbpe', [... full output is in the notebook ...] ``` ok, we will copy those to configure the model. 
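Going back for a moment to the `in_proj` surprise above: the rewrite from the old packed layout to separate projections amounts to splitting one matrix into three. The sketch below is hypothetical illustration, not fairseq's actual upgrade code; in particular, the `[q; k; v]` ordering is how PyTorch's own `MultiheadAttention` packs `in_proj_weight`, and I'm assuming the old fairseq checkpoints follow the same convention.

```python
import torch

dim = 1024
# old-style: a single packed matrix of shape (3*dim, dim)
in_proj_weight = torch.randn(3 * dim, dim)

# modern layout: three separate (dim, dim) projection matrices
q_proj, k_proj, v_proj = in_proj_weight.chunk(3, dim=0)
assert q_proj.shape == (dim, dim)
```

With that detour done, back to the configuration args.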
I had to rename some of the argument names, wherever `transformers` used different names for the corresponding configuration setting. So the re-mapping of configuration looks as following: ``` model_conf = { "architectures": ["FSMTForConditionalGeneration"], "model_type": "fsmt", "activation_dropout": args["activation_dropout"], "activation_function": "relu", "attention_dropout": args["attention_dropout"], "d_model": args["decoder_embed_dim"], "dropout": args["dropout"], "init_std": 0.02, "max_position_embeddings": args["max_source_positions"], "num_hidden_layers": args["encoder_layers"], "src_vocab_size": src_vocab_size, "tgt_vocab_size": tgt_vocab_size, "langs": [src_lang, tgt_lang], [...] "bos_token_id": 0, "pad_token_id": 1, "eos_token_id": 2, "is_encoder_decoder": True, "scale_embedding": not args["no_scale_embedding"], "tie_word_embeddings": args["share_all_embeddings"], } ``` All that remains is to save the configuration into `config.json` and create a new `state_dict` dump into `pytorch.dump`: ``` print(f"Generating {fsmt_tokenizer_config_file}") with open(fsmt_tokenizer_config_file, "w", encoding="utf-8") as f: f.write(json.dumps(tokenizer_conf, ensure_ascii=False, indent=json_indent)) [...] print(f"Generating {pytorch_weights_dump_path}") torch.save(model_state_dict, pytorch_weights_dump_path) ``` We have the configuration and the model's `state_dict` ported - yay! You will find the final conversion code [here](https://github.com/huggingface/transformers/blob/129fdae04033fe4adfe013b734deaec6ec34ae2e/src/transformers/convert_fsmt_original_pytorch_checkpoint_to_pytorch.py#L162). ### Porting the architecture code Now that we have the model weights and the model configuration ported, we *just* need to adjust the code copied from `modeling_bart.py` to match `fairseq`'s functionality. The first step was to take a sentence, encode it and then feed to the `generate` function - for `fairseq` and for `transformers`. After a few very failing attempts to get somewhere (*) - I quickly realized that with the current level of complexity using `print` as debugging method will get me nowhere, and neither will the basic `pdb` debugger. In order to be efficient and to be able to watch multiple variables and have watches that are code-evaluations I needed a serious visual debugger. I spent a day trying all kinds of python debuggers and only when I tried `pycharm` I realized that it was the tool that I needed. It was my first time using `pycharm`, but I quickly figured out how to use it, as it was quite intuitive. * footnote: the model was generating 'nononono' in Russian - that was fair and hilarious! Over time I found a great feature in `pycharm` that allowed me to group breakpoints by functionality and I could turn whole groups on and off depending on what I was debugging. For example, here I have beam-search related break-points off and decoder ones on: ![break point group](./assets/07_porting_fsmt/pycharm-break-point-groups.png) Now that I have used this debugger to port FSMT, I know that it would have taken me many times over to use pdb to do the same - I may have even given it up. 
I started with 2 scripts: * [fseq-translate](https://github.com/stas00/porting/tree/master/transformers/fairseq-wmt19/scripts/fseq-translate.py) * [fsmt-translate](https://github.com/stas00/porting/tree/master/transformers/fairseq-wmt19/scripts/fsmt-translate.py) (without the `decode` part first) running both side by side, stepping through with debugger on each side and comparing values of relevant variables - until I found the first divergence. I then studied the code, made adjustments inside `modeling_fsmt.py`, restarted the debugger, quickly jumped to the point of divergence and re-checked the outputs. This cycle has been repeated multiple times until the outputs matched. The first things I had to change was to remove a few layers that weren't used by `fairseq` and then add some new layers it was using instead. And then the rest was primarily figuring out when to switch to `src_vocab_size` and when to `tgt_vocab_size` - since in the core modules it's just `vocab_size`, which weren't accounting for a possible model that has 2 dictionaries. Finally, I discovered that a few hyperparameter configurations weren't the same, and so those were changed too. I first did this process for the simpler no-beam search, and once the outputs were 100% matching I repeated it with the more complicated beam search. Here, for example, I discovered that `fairseq` was using the equivalent of `early_stopping=True`, whereas `transformers` has it as `False` by default. When early stopping is enabled it stops looking for new candidates as soon as there are as many candidates as the beam size, whereas when it's disabled, the algorithm stops searching only when it can't find higher probability candidates than what it already has. The `fairseq` paper mentions that a huge beam size of 50 was used, which compensates for using early stopping. ## Tokenizer decoder porting Once I had the ported `generate` function produce pretty similar results to `fairseq`'s `generate` I next needed to complete the last stage of decoding the outputs into the human readable text. This allowed me to use my eyes for a quick comparison and the quality of translation - something I couldn't do with output ids. Similar to the encoding process, this one was done in reverse. The steps were: 1. convert output ids into text strings 2. remove BPE encodings 3. detokenize - handle escaped characters, etc. After doing some more debugging here, I had to change the way BPE was dealt with from the original approach in `tokenization_xlm.py` and also run the outputs through the `moses` detokenizer. ``` def convert_tokens_to_string(self, tokens): """ Converts a sequence of tokens (string) in a single string. """ - out_string = "".join(tokens).replace("</w>", " ").strip() - return out_string + # remove BPE + tokens = [t.replace(" ", "").replace("</w>", " ") for t in tokens] + tokens = "".join(tokens).split() + # detokenize + text = self.moses_detokenize(tokens, self.tgt_lang) + return text ``` And all was good. 
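To make the decode direction concrete, here is a toy version of going from `</w>`-marked BPE tokens back to words. It is my own snippet and deliberately ignores the character escaping and `moses` detokenization that the real code performs; it mirrors the simple `"".join(...).replace("</w>", " ")` idea shown in the diff above.

```python
def toy_bpe_to_words(tokens):
    """Merge sub-word tokens back into words, using </w> as the word boundary."""
    text = "".join(tokens)               # 'Machine</w>Learning</w>is</w>great</w>'
    return text.replace("</w>", " ").strip().split()

print(toy_bpe_to_words(["Mach", "ine</w>", "Lear", "ning</w>", "is</w>", "great</w>"]))
# ['Machine', 'Learning', 'is', 'great']
```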
## Uploading models to s3

Once the conversion script did a complete job of porting all the required files to `transformers`, I uploaded the models to my 🤗 s3 account:

```
cd data
transformers-cli upload -y wmt19-ru-en
transformers-cli upload -y wmt19-en-ru
transformers-cli upload -y wmt19-de-en
transformers-cli upload -y wmt19-en-de
cd -
```

For the duration of testing I was using my 🤗 s3 account and once my PR with the complete changes was ready to be merged I asked in the PR to move the models to the `facebook` organization account, since these models belong there.

Several times I had to update just the config files, and I didn't want to re-upload the large models, so I wrote this little script that produces the right upload commands, which otherwise were too long to type and as a result were error-prone:

```
perl -le 'for $f (@ARGV) { print qq[transformers-cli upload -y $_/$f --filename $_/$f] \
for map { "wmt19-$_" } ("en-ru", "ru-en", "de-en", "en-de")}' \
vocab-src.json vocab-tgt.json tokenizer_config.json config.json
# add/remove files as needed
```

So if, for example, I only needed to update all the `config.json` files, the script above gave me a convenient copy-n-paste:

```
transformers-cli upload -y wmt19-en-ru/config.json --filename wmt19-en-ru/config.json
transformers-cli upload -y wmt19-ru-en/config.json --filename wmt19-ru-en/config.json
transformers-cli upload -y wmt19-de-en/config.json --filename wmt19-de-en/config.json
transformers-cli upload -y wmt19-en-de/config.json --filename wmt19-en-de/config.json
```

Once the upload was completed, these models could be accessed as (*):

```
tokenizer = FSMTTokenizer.from_pretrained("stas/wmt19-en-ru")
```

* footnote: `stas` is my username at https://huggingface.co.

Before I made this upload I had to use the local path to the folder with the model files, e.g.:

```
tokenizer = FSMTTokenizer.from_pretrained("/code/huggingface/transformers-fair-wmt/data/wmt19-en-ru")
```

Important: If you update the model files and re-upload them, you must be aware that due to CDN caching the uploaded model may be unavailable for up to 24 hours after the upload - i.e. the old cached model will be delivered. So the only way to start using the new model sooner is by either:

1. downloading it to a local path and using that path as an argument that gets passed to `from_pretrained()`.
2. or using: `from_pretrained(..., use_cdn=False)` everywhere for the next 24h - it's not enough to do it once.

## AutoConfig, AutoTokenizer, etc.

One other change I needed to make was to plug the newly ported model into the automated model `transformers` system. This is used primarily on the [models website](https://huggingface.co/models) to load the model configuration, tokenizer and the main class without providing any specific class names. For example, in the case of `FSMT` one can do:

```
from transformers import AutoTokenizer, AutoModelWithLMHead
mname = "facebook/wmt19-en-ru"
tokenizer = AutoTokenizer.from_pretrained(mname)
model = AutoModelWithLMHead.from_pretrained(mname)
```

There are 3 `*auto*` files that have mappings to enable that:

```
-rw-rw-r-- 1 stas stas 16K Sep 23 13:53 src/transformers/configuration_auto.py
-rw-rw-r-- 1 stas stas 65K Sep 23 13:53 src/transformers/modeling_auto.py
-rw-rw-r-- 1 stas stas 13K Sep 23 13:53 src/transformers/tokenization_auto.py
```

Then there are the pipelines, which completely hide all the NLP complexities from the end user and provide a very simple API to just pick a model and use it for the task at hand.
For example, here is how one could perform a summarization task using `pipeline`:

```
summarizer = pipeline("summarization", model="t5-base", tokenizer="t5-base")
summary = summarizer("Some long document here", min_length=5, max_length=20)
print(summary)
```

The translation pipelines are a work in progress as of this writing; watch [this document](https://huggingface.co/transformers/main_classes/pipelines.html) for updates on when translation will be supported (currently only a few specific models/languages are supported).

Finally, there is `src/transformers/__init__.py` to edit so that one could do:

```
from transformers import FSMTTokenizer, FSMTForConditionalGeneration
```

instead of:

```
from transformers.tokenization_fsmt import FSMTTokenizer
from transformers.modeling_fsmt import FSMTForConditionalGeneration
```

but either way works.

To find all the places I needed to plug FSMT in, I mimicked `BartConfig`, `BartForConditionalGeneration` and `BartTokenizer`. I just `grep`ped which files had them and inserted corresponding entries for `FSMTConfig`, `FSMTForConditionalGeneration` and `FSMTTokenizer`.

```
$ egrep -l "(BartConfig|BartForConditionalGeneration|BartTokenizer)" src/transformers/*.py \
 | egrep -v "(marian|bart|pegasus|rag|fsmt)"
src/transformers/configuration_auto.py
src/transformers/generation_utils.py
src/transformers/__init__.py
src/transformers/modeling_auto.py
src/transformers/pipelines.py
src/transformers/tokenization_auto.py
```

In the `grep` search I excluded the model-specific files that naturally contain those classes.

## Manual testing

Until now I was primarily using my own scripts to do the testing.

Once I had the translator working, I converted the reversed `ru-en` model and then wrote two paraphrase scripts:

* [fseq-paraphrase](https://github.com/stas00/porting/tree/master/transformers/fairseq-wmt19/scripts/fseq-paraphrase.py)
* [fsmt-paraphrase](https://github.com/stas00/porting/tree/master/transformers/fairseq-wmt19/scripts/fsmt-paraphrase.py)

which took a sentence in the source language, translated it to another language and then translated the result back to the original language. This process usually results in a paraphrased outcome, due to differences in how different languages express similar things.

With the help of these scripts I found some more problems with the detokenizer, stepped through with the debugger and made the fsmt script produce the same results as the `fairseq` version.

At this stage no-beam search was producing mostly identical results, but there was still some divergence in the beam search. In order to identify the special cases, I wrote a [fsmt-port-validate.py](https://github.com/stas00/porting/tree/master/transformers/fairseq-wmt19/scripts/fsmt-port-validate.py) script that took `sacrebleu` test data as inputs, ran it through both the `fairseq` and `transformers` translation paths, and reported only the mismatches. It quickly identified a few remaining problems, and by observing the patterns I was able to fix those issues as well.

## Porting other models

I next proceeded to port the `en-de` and `de-en` models.

I was surprised to discover that these weren't built in the same way. Each of these had a merged dictionary, so for a moment I felt frustration, since I thought I'd now have to do another huge change to support that. But I didn't need to make any changes after all - the merged dictionary fit in without modification. I just used 2 identical dictionaries - one as a source and a copy of it as a target.
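As an illustration, in the spirit of the paraphrase scripts described earlier, exercising the newly ported pair end to end looks roughly like the following. This is a condensed sketch of my own, not one of the actual scripts:

```python
from transformers import FSMTForConditionalGeneration, FSMTTokenizer

def translate(text, mname):
    tokenizer = FSMTTokenizer.from_pretrained(mname)
    model = FSMTForConditionalGeneration.from_pretrained(mname)
    batch = tokenizer(text, return_tensors="pt")
    out = model.generate(**batch, num_beams=5)
    return tokenizer.decode(out[0], skip_special_tokens=True)

text = "Every morning I drink a cup of strong coffee."
german = translate(text, "facebook/wmt19-en-de")
round_trip = translate(german, "facebook/wmt19-de-en")
print(round_trip)  # usually a slightly reworded version of the original
```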
I wrote another script to test all ported models' basic functionality: [fsmt-test-all.py](https://github.com/stas00/porting/tree/master/transformers/fairseq-wmt19/scripts/fsmt-test-all.py).

## Test Coverage

The next step was very important - I needed to prepare extensive tests for the ported model.

In the `transformers` test suite most tests that deal with large models are marked as `@slow` and those don't get to run normally on CI (Continuous Integration), as they are, well, slow. So I also needed to create a tiny model that has the same structure as a normal pre-trained model, but is very small and can have random weights. This tiny model can then be used to test the ported functionality. It just can't be used for quality testing, since it has just a few weights and thus can't really be trained to do anything practical. [fsmt-make-tiny-model.py](https://github.com/stas00/porting/tree/master/transformers/fairseq-wmt19/scripts/fsmt-make-tiny-model.py) creates such a tiny model. The generated model with all of its dictionary and config files was just 3MB in size. I uploaded it to `s3` using `transformers-cli upload` and now I was able to use it in the test suite.

Just like with the code, I started by copying `tests/test_modeling_bart.py` and converting it to use `FSMT`, and then tweaking it to work with the new model.

I then converted a few of the scripts I used for manual testing into unit tests - that was easy.

`transformers` has a huge set of common tests that each model runs through - I had to do some more tweaks to make these tests work for `FSMT` (primarily to adjust for the 2 dictionary setup) and I had to override a few tests that couldn't be run due to the uniqueness of this model, in order to skip them. You can see the results [here](https://github.com/huggingface/transformers/blob/129fdae04033fe4adfe013b734deaec6ec34ae2e/tests/test_tokenization_fsmt.py).

I added one more test that performs a light BLEU evaluation - I used just 8 text inputs for each of the 4 models and measured BLEU scores on those. Here is the [test](https://github.com/huggingface/transformers/blob/129fdae04033fe4adfe013b734deaec6ec34ae2e/examples/seq2seq/test_fsmt_bleu_score.py) and the [script that generated data](https://github.com/huggingface/transformers/blob/129fdae04033fe4adfe013b734deaec6ec34ae2e/examples/seq2seq/test_data/fsmt/build-eval-data.py).

## SinusoidalPositionalEmbedding

`fairseq` used a slightly different implementation of `SinusoidalPositionalEmbedding` than the one used by `transformers`. Initially I copied the `fairseq` implementation. But when trying to get the test suite to work I couldn't get the `torchscript` tests to pass. `SinusoidalPositionalEmbedding` was written so that it wouldn't be part of the `state_dict` and wouldn't get saved with the model weights - all the weights generated by this class are deterministic and are not trained. `fairseq` used a trick to make this work transparently by not making its weights a parameter or a buffer, and then during `forward` switching the weights to the correct device. `torchscript` wasn't taking this well, as it wanted all the weights to be on the correct device before the first `forward` call.

I had to rewrite the implementation to convert it to a normal `nn.Embedding` subclass and then add functionality to not save these weights during `save_pretrained()` and for `from_pretrained()` to not complain if it can't find those weights during the `state_dict` loading.
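For readers who haven't seen one, here is a minimal sketch of the general idea - not the exact FSMT code: an `nn.Embedding` whose weights are generated deterministically from sin/cos waves and never trained (the sketch assumes an even embedding dimension).

```python
import math
import torch
from torch import nn

class ToySinusoidalPositionalEmbedding(nn.Embedding):
    """Positions are encoded with fixed sin/cos waves; nothing here is learned."""

    def __init__(self, num_positions: int, embedding_dim: int):
        super().__init__(num_positions, embedding_dim)
        self._fill_with_sinusoids()

    def _fill_with_sinusoids(self):
        n_pos, dim = self.weight.shape  # assumes an even embedding_dim
        position = torch.arange(n_pos, dtype=torch.float).unsqueeze(1)
        div_term = torch.exp(
            torch.arange(0, dim, 2, dtype=torch.float) * (-math.log(10000.0) / dim)
        )
        self.weight.requires_grad = False  # deterministic, never trained
        self.weight[:, 0::2] = torch.sin(position * div_term)
        self.weight[:, 1::2] = torch.cos(position * div_term)

# usage: pos_emb = ToySinusoidalPositionalEmbedding(512, 1024); pos_emb(torch.arange(10))
```

The remaining piece described above - keeping these weights out of the `state_dict` during `save_pretrained()` and tolerating their absence in `from_pretrained()` - is specific to the `transformers` plumbing and is omitted here.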
## Evaluation

I knew that the ported model was doing quite well based on my manual testing with a large body of text, but I didn't know how well the ported model performed compared to the original. So it was time to evaluate.

For the task of translation the [BLEU score](https://en.wikipedia.org/wiki/BLEU) is used as an evaluation metric. `transformers` has a script [run_eval.py](https://github.com/huggingface/transformers/blob/129fdae04033fe4adfe013b734deaec6ec34ae2e/examples/seq2seq/run_eval.py) to perform the evaluation.

Here is an evaluation for the `ru-en` pair:

```
export PAIR=ru-en
export MODEL=facebook/wmt19-$PAIR
export DATA_DIR=data/$PAIR
export SAVE_DIR=data/$PAIR
export BS=64
export NUM_BEAMS=5
export LENGTH_PENALTY=1.1
mkdir -p $DATA_DIR
sacrebleu -t wmt19 -l $PAIR --echo src > $DATA_DIR/val.source
sacrebleu -t wmt19 -l $PAIR --echo ref > $DATA_DIR/val.target
PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval.py $MODEL \
  $DATA_DIR/val.source $SAVE_DIR/test_translations.txt --reference_path $DATA_DIR/val.target \
  --score_path $SAVE_DIR/test_bleu.json --bs $BS --task translation --num_beams $NUM_BEAMS \
  --length_penalty $LENGTH_PENALTY --info $MODEL --dump-args
```

which took a few minutes to run and returned:

```
{'bleu': 39.0498, 'n_obs': 2000, 'runtime': 184, 'seconds_per_sample': 0.092, 'num_beams': 5, 'length_penalty': 1.1, 'info': 'ru-en'}
```

You can see that the BLEU score was `39.0498` and that it was evaluated using 2000 test inputs, provided by `sacrebleu` using the `wmt19` dataset.

Remember, I couldn't use the model ensemble, so I next needed to find the best performing checkpoint. For that purpose I wrote a script [fsmt-bleu-eval-each-chkpt.sh](https://github.com/stas00/porting/tree/master/transformers/fairseq-wmt19/scripts/fsmt-bleu-eval-each-chkpt.sh) which converted each checkpoint, ran the eval script and reported the best one. As a result I knew that `model4.pt` was delivering the best performance out of the 4 available checkpoints.

I wasn't getting the same BLEU scores as the ones reported in the original paper, so I next needed to make sure that we were comparing the same data using the same tools. Through asking in the `fairseq` issue tracker I was given the code that was used by the `fairseq` developers to get their BLEU scores - you will find it [here](https://github.com/stas00/porting/tree/master/transformers/fairseq-wmt19/scripts/fseq-reproduce-bleu.sh). But, alas, their method used a re-ranking approach which wasn't disclosed. Moreover, they evaluated outputs before detokenization rather than the real output, which apparently scores better. Bottom line - we weren't scoring in the same way (*).

* footnote: the paper [A Call for Clarity in Reporting BLEU Scores](https://arxiv.org/abs/1804.08771) invites developers to start using the same method for calculating the metrics (tldr: use `sacrebleu`).

Currently, this ported model is slightly behind the original on the BLEU scores, because model ensembling is not used, but it's impossible to tell the exact difference until the same measuring method is used.

## Porting new models

After uploading the 4 `fairseq` models [here](https://huggingface.co/models?filter=facebook&tag=fsmt) it was then suggested to port 3 `wmt16` and 2 `wmt19` AllenAI models ([Jungo Kasai, et al](https://github.com/jungokasai/deep-shallow/)). The porting was a breeze, as I only had to figure out how to put all the source files together, since they were spread out through several unrelated archives.
Once this was done the conversion worked without a hitch.

The only issue I discovered after porting was that I was getting a lower BLEU score than the original. Jungo Kasai, the creator of these models, was very helpful in pointing out that a custom hyper-parameter `length_penalty=0.6` was used, and once I plugged that in I got much better results.

This discovery led me to write a new script: [run_eval_search.py](https://github.com/huggingface/transformers/blob/129fdae04033fe4adfe013b734deaec6ec34ae2e/examples/seq2seq/run_eval_search.py), which can be used to search various hyper-parameters that would lead to the best BLEU scores. Here is an example of its usage:

```
# search space
export PAIR=ru-en
export DATA_DIR=data/$PAIR
export SAVE_DIR=data/$PAIR
export BS=32
mkdir -p $DATA_DIR
sacrebleu -t wmt19 -l $PAIR --echo src > $DATA_DIR/val.source
sacrebleu -t wmt19 -l $PAIR --echo ref > $DATA_DIR/val.target
PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval_search.py stas/wmt19-$PAIR \
  $DATA_DIR/val.source $SAVE_DIR/test_translations.txt --reference_path $DATA_DIR/val.target \
  --score_path $SAVE_DIR/test_bleu.json --bs $BS --task translation \
  --search="num_beams=5:8:11:15 length_penalty=0.6:0.7:0.8:0.9:1.0:1.1 early_stopping=true:false"
```

Here it searches through all the possible combinations of `num_beams`, `length_penalty` and `early_stopping`.

Once finished executing it reports:

```
bleu  | num_beams | length_penalty | early_stopping
----- | --------- | -------------- | --------------
39.20 |        15 |            1.1 |              0
39.13 |        11 |            1.1 |              0
39.05 |         5 |            1.1 |              0
39.05 |         8 |            1.1 |              0
39.03 |        15 |            1.0 |              0
39.00 |        11 |            1.0 |              0
38.93 |         8 |            1.0 |              0
38.92 |        15 |            1.1 |              1
[...]
```

You can see that in the case of `transformers` `early_stopping=False` performs better (`fairseq` uses the `early_stopping=True` equivalent).

So for the 5 new models I used this script to find the best default parameters and I used those when converting the models. Users can still override these parameters when invoking `generate()`, but why not provide the best defaults?

You will find the 5 ported AllenAI models [here](https://huggingface.co/models?filter=allenai&tag=fsmt).

## More scripts

As each ported group of models has its own nuances, I made dedicated scripts for each one of them, so that it would be easy to re-build things in the future or to create new scripts to convert new models. You will find all the conversion, evaluation, and other scripts [here](https://github.com/huggingface/transformers/blob/129fdae04033fe4adfe013b734deaec6ec34ae2e/scripts/fsmt/).

### Model cards

One other important thing is that it's not enough to port a model and make it available to others. One needs to provide information on how to use it, nuances about hyper-parameters, sources of datasets, evaluation metrics, etc. This is all done by creating a model card, which is just a `README.md` file that starts with some metadata used by [the models website](https://huggingface.co/models), followed by all the useful information that can be shared.

For example, let's take [the `facebook/wmt19-en-ru` model card](https://github.com/huggingface/transformers/tree/129fdae04033fe4adfe013b734deaec6ec34ae2e/model_cards/facebook/wmt19-en-ru/README.md). Here is its top:

```
---
language:
- en
- ru
thumbnail:
tags:
- translation
- wmt19
- facebook
license: apache-2.0
datasets:
- wmt19
metrics:
- bleu
---

# FSMT

## Model description

This is a ported version of [...]
```

As you can see we define the languages, tags, license, datasets, and metrics. There is a full guide for writing these at [Model sharing and uploading](https://huggingface.co/transformers/model_sharing.html#add-a-model-card). The rest is the markdown document describing the model and its nuances. You can also try out the models directly from the model pages thanks to the Inference widgets. For example, for English-to-Russian translation: https://huggingface.co/facebook/wmt19-en-ru?text=My+name+is+Diego+and+I+live+in+Moscow.

![inference widget](./assets/07_porting_fsmt/inference_api.png)

## Documentation

Finally, the documentation needed to be added.

Luckily, most of the documentation is autogenerated from the docstrings in the module files.

As before, I copied `docs/source/model_doc/bart.rst` and adapted it to `FSMT`. When it was ready I linked to it by adding an `fsmt` entry inside `docs/source/index.rst`.

I used:

```
make docs
```

to test that the newly added document was building correctly. The file I needed to check after running that target was `docs/_build/html/model_doc/fsmt.html` - I just loaded it in my browser and verified that it rendered correctly.

Here is the final source document [docs/source/model_doc/fsmt.rst](https://github.com/huggingface/transformers/blob/129fdae04033fe4adfe013b734deaec6ec34ae2e/docs/source/model_doc/fsmt.rst) and its [rendered version](https://huggingface.co/transformers/model_doc/fsmt.html).

## It's PR time

Once I felt my work was quite complete, I was ready to submit my PR.

Since this work involved many git commits, I wanted to make a clean PR, so I used the following technique to squash all the commits into one in a new branch. This kept all the initial commits in place in case I wanted to access any of them later.

The branch I was developing on was called `fair-wmt`, and the new branch that I was going to submit the PR from I named `fair-wmt-clean`, so here is what I did:

```
git checkout master
git checkout -b fair-wmt-clean
git merge --squash fair-wmt
git commit -m "Ready for PR"
git push origin fair-wmt-clean
```

Then I went to github and submitted this [PR](https://github.com/huggingface/transformers/pull/6940) based on the `fair-wmt-clean` branch.

It took two weeks of several cycles of feedback, followed by modifications, and more such cycles. Eventually it was all satisfactory and the PR got merged.

While this process was going on, I was finding issues here and there, adding new tests, improving the documentation, etc., so it was time well spent.

I subsequently filed a few more PRs with changes after I improved and reworked a few features, adding various build scripts, model cards, etc.

Since the models I ported belonged to the `facebook` and `allenai` organizations, I had to ask Sam to move those model files from my account on `s3` to the corresponding organizations.

## Closing thoughts

- While I couldn't port the model ensemble as `transformers` doesn't support it, on the plus side the download size of the final `facebook/wmt19-*` models is 1.1GB and not 13GB as in the original. For some reason the original includes the optimizer state saved in the model - so it adds almost 9GB (4x2.2GB) of dead weight for those who just want to download the model to use it as is to translate text.

- While the job of porting looked very challenging at the beginning, as I knew the internals of neither `transformers` nor `fairseq`, looking back it wasn't that difficult after all.
This was primarily due to having most of the components already available to me in the various parts of `transformers` - I *just* needed to find the parts I needed, mostly borrowing heavily from other models, and then tweak them to do what I needed. This was true for both the code and the tests. Let's rephrase that - porting was difficult - but it'd have been much more difficult if I had to write it all from scratch. And finding the right parts wasn't easy. ## Appreciations - Having [Sam Shleifer](https://github.com/sshleifer) mentor me through this process was of enormous help to me, both thanks to his technical support and just as importantly for inspiring and encouraging me when I was getting stuck. - The PR merging process took a good couple of weeks before it was accepted. During this stage, besides Sam, [Lysandre Debut](https://github.com/LysandreJik) and [Sylvain Gugger](https://github.com/sgugger) contributed a lot through their insights and suggestions, which I integrated into the codebase. - I'm grateful to everybody who has contributed to the `transformers` codebase, which paved the way for my work. ## Notes ### Autoprint all in Jupyter Notebook My Jupyter notebook is configured to automatically print all expressions, so I don't have to explicitly `print()` them. The default behavior is to print only the last expression of each cell. So if you read the outputs in my notebooks they may not be the same as if you were to run them yourself, unless you have the same setup. You can enable the print-all feature in your Jupyter notebook setup by adding the following to `~/.ipython/profile_default/ipython_config.py` (create it if you don't have one): ``` c = get_config() # Run all nodes interactively c.InteractiveShell.ast_node_interactivity = "all" # restore to the original behavior # c.InteractiveShell.ast_node_interactivity = "last_expr" ``` and restarting your Jupyter notebook server. ### Links to the github versions of files In order to ensure that all links work even if you read this article long after it was written, the links were made to a specific SHA version of the code and not necessarily the latest version. This is so that if files were renamed or removed you will still find the code this article is referring to. If you want to ensure you're looking at the latest version of the code, replace the hash code in the links with `master`. For example, a link: ``` https://github.com/huggingface/transformers/blob/129fdae04033fe4adfe013b734deaec6ec34ae2e/src/transformers/modeling_fsmt.py ``` becomes: ``` https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_fsmt.py ``` Thank you for reading!
Leveraging Pre-trained Language Model Checkpoints for Encoder-Decoder Models
patrickvonplaten
November 09, 2020
warm-starting-encoder-decoder
guide, nlp
https://huggingface.co/blog/warm-starting-encoder-decoder
# Leveraging Pre-trained Language Model Checkpoints for Encoder-Decoder Models <a target="_blank" href="https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Leveraging_Pre_trained_Checkpoints_for_Encoder_Decoder_Models.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> Transformer-based encoder-decoder models were proposed in [Vaswani et al. (2017)](https://arxiv.org/pdf/1706.03762.pdf) and have recently experienced a surge of interest, *e.g.* [Lewis et al. (2019)](https://arxiv.org/abs/1910.13461), [Raffel et al. (2019)](https://arxiv.org/abs/1910.10683), [Zhang et al. (2020)](https://arxiv.org/abs/1912.08777), [Zaheer et al. (2020)](https://arxiv.org/abs/2007.14062), [Yan et al. (2020)](https://arxiv.org/pdf/2001.04063.pdf). Similar to BERT and GPT2, massive pre-trained encoder-decoder models have shown to significantly boost performance on a variety of *sequence-to-sequence* tasks [Lewis et al. (2019)](https://arxiv.org/abs/1910.13461), [Raffel et al. (2019)](https://arxiv.org/abs/1910.10683). However, due to the enormous computational cost attached to pre-training encoder-decoder models, the development of such models is mainly limited to large companies and institutes. In [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks (2020)](https://arxiv.org/pdf/1907.12461.pdf), Sascha Rothe, Shashi Narayan and Aliaksei Severyn initialize encoder-decoder model with pre-trained *encoder and/or decoder-only* checkpoints (*e.g.* BERT, GPT2) to skip the costly pre-training. The authors show that such *warm-started* encoder-decoder models yield competitive results to large pre-trained encoder-decoder models, such as [*T5*](https://arxiv.org/abs/1910.10683), and [*Pegasus*](https://arxiv.org/abs/1912.08777) on multiple *sequence-to-sequence* tasks at a fraction of the training cost. In this notebook, we will explain in detail how encoder-decoder models can be warm-started, give practical tips based on [Rothe et al. (2020)](https://arxiv.org/pdf/1907.12461.pdf), and finally go over a complete code example showing how to warm-start encoder-decoder models with 🤗Transformers. This notebook is divided into 4 parts: - **Introduction** - *Short summary of pre-trained language models in NLP and the need for warm-starting encoder-decoder models.* - **Warm-starting encoder-decoder models (Theory)** - *Illustrative explanation on how encoder-decoder models are warm-started?* - **Warm-starting encoder-decoder models (Analysis)** - *Summary of [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks (2020)](https://arxiv.org/pdf/1907.12461.pdf) - What model combinations are effective to warm-start encoder-decoder models; How does it differ from task to task?* - **Warm-starting encoder-decoder models with 🤗Transformers (Practice)** - *Complete code example showcasing in-detail how to use the* `EncoderDecoderModel` *framework to warm-start transformer-based encoder-decoder models.* It is highly recommended (probably even necessary) to have read [this blog post](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Encoder_Decoder_Model.ipynb) about transformer-based encoder-decoder models. Let\'s start by giving some back-ground on warm-starting encoder-decoder models. ## **Introduction** Recently, pre-trained language models \\({}^1\\) have revolutionized the field of natural language processing (NLP). 
The first pre-trained language models were based on recurrent neural networks (RNN) as proposed in [Dai et al. (2015)](https://arxiv.org/pdf/1511.01432.pdf). *Dai et al.* showed that pre-training an RNN-based model on unlabelled data and subsequently fine-tuning \\({}^2\\) it on a specific task yields better results than training a randomly initialized model directly on such a task. However, it was only in 2018 that pre-trained language models became widely accepted in NLP. [ELMO by Peters et al.](https://arxiv.org/abs/1802.05365) and [ULMFit by Howard et al.](https://arxiv.org/pdf/1801.06146.pdf) were the first pre-trained language models to significantly improve the state-of-the-art on an array of natural language understanding (NLU) tasks. Just a couple of months later, OpenAI and Google published *transformer-based* pre-trained language models, called [GPT by Radford et al.](https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf) and [BERT by Devlin et al.](https://arxiv.org/abs/1810.04805) respectively. The improved efficiency of *transformer-based* language models over RNNs allowed GPT2 and BERT to be pre-trained on massive amounts of unlabeled text data. Once pre-trained, BERT and GPT were shown to require very little fine-tuning to shatter state-of-the-art results on more than a dozen NLU tasks \\({}^3\\). The capability of pre-trained language models to effectively transfer *task-agnostic* knowledge to *task-specific* knowledge turned out to be a great catalyst for NLU. Whereas engineers and researchers previously had to train a language model from scratch, now publicly available checkpoints of large pre-trained language models can be fine-tuned at a fraction of the cost and time. This can save millions in industry and allows for faster prototyping and better benchmarks in research. Pre-trained language models have established a new level of performance on NLU tasks and more and more research has been built upon leveraging such pre-trained language models for improved NLU systems. However, standalone BERT and GPT models have been less successful for *sequence-to-sequence* tasks, *e.g.* *text-summarization*, *machine translation*, *sentence-rephrasing*, etc. Sequence-to-sequence tasks are defined as a mapping from an input sequence \\(\mathbf{X}_{1:n}\\) to an output sequence \\(\mathbf{Y}_{1:m}\\) of *a-priori* unknown output length \\(m\\). Hence, a sequence-to-sequence model should define the conditional probability distribution of the output sequence \\(\mathbf{Y}_{1:m}\\) conditioned on the input sequence \\(\mathbf{X}_{1:n}\\): $$ p_{\theta_{\text{model}}}(\mathbf{Y}_{1:m} | \mathbf{X}_{1:n}). $$ Without loss of generality, an input word sequence of \\(n\\) words is hereby represented by the vector sequence \\(\mathbf{X}_{1:n} = \mathbf{x}_1, \ldots, \mathbf{x}_n\\) and an output sequence of \\(m\\) words as \\(\mathbf{Y}_{1:m} = \mathbf{y}_1, \ldots, \mathbf{y}_m\\). Let\'s see how BERT and GPT2 could be fit to model sequence-to-sequence tasks. ### **BERT** BERT is an *encoder-only* model, which maps an input sequence \\(\mathbf{X}_{1:n}\\) to a *contextualized* encoded sequence \\(\mathbf{\overline{X}}_{1:n}\\): $$ f_{\theta_{\text{BERT}}}: \mathbf{X}_{1:n} \to \mathbf{\overline{X}}_{1:n}. $$ BERT\'s contextualized encoded sequence \\(\mathbf{\overline{X}}_{1:n}\\) can then further be processed by a classification layer for NLU classification tasks, such as *sentiment analysis*, *natural language inference*, etc. 
To do so, the classification layer, *i.e.* typically a pooling layer followed by a feed-forward layer, is added as a final layer on top of BERT to map the contextualized encoded sequence \\(\mathbf{\overline{X}}_{1:n}\\) to a class \\(c\\): $$ f_{\theta_{\text{p,c}}}: \mathbf{\overline{X}}_{1:n} \to c. $$ It has been shown that adding a pooling and classification layer, defined as \\(\theta_{\text{p,c}}\\), on top of a pre-trained BERT model \\(\theta_{\text{BERT}}\\) and subsequently fine-tuning the complete model \\(\{\theta_{\text{p,c}}, \theta_{\text{BERT}}\}\\) can yield state-of-the-art performances on a variety of NLU tasks, *cf.* [BERT by Devlin et al.](https://arxiv.org/abs/1810.04805). Let\'s visualize BERT. ![texte du lien](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/bert.png) The BERT model is shown in grey. The model stacks multiple *BERT blocks*, each of which is composed of *bi-directional* self-attention layers (shown in the lower part of the red box) and two feed-forward layers (shown in the upper part of the red box). Each BERT block makes use of **bi-directional** self-attention to process an input sequence \\(\mathbf{x'}_1, \ldots, \mathbf{x'}_n\\) (shown in light grey) to a more \"refined\" contextualized output sequence \\(\mathbf{x''}_1, \ldots, \mathbf{x''}_n\\) (shown in slightly darker grey) \\({}^4\\). The contextualized output sequence of the final BERT block, *i.e.* \\(\mathbf{\overline{X}}_{1:n}\\), can then be mapped to a single output class \\(c\\) by adding a *task-specific* classification layer (shown in orange) as explained above. *Encoder-only* models can only map an input sequence to an output sequence of *a priori* known output length. In other words, the output length is always identical to the input length, which makes it disadvantageous and impractical to use encoder-only models for sequence-to-sequence tasks. As for all *encoder-only* models, BERT\'s architecture corresponds exactly to the architecture of the encoder part of *transformer-based* encoder-decoder models as shown in the \"Encoder\" section in the [Encoder-Decoder notebook](https://colab.research.google.com/drive/19wkOLQIjBBXQ-j3WWTEiud6nGBEw4MdF?usp=sharing). ### **GPT2** GPT2 is a *decoder-only* model, which makes use of *uni-directional* (*i.e.* \"causal\") self-attention to define a mapping from an input sequence \\(\mathbf{Y}_{0: m - 1}\\) \\({}^1\\) to a \"next-word\" logit vector sequence \\(\mathbf{L}_{1:m}\\): $$ f_{\theta_{\text{GPT2}}}: \mathbf{Y}_{0: m - 1} \to \mathbf{L}_{1:m}. $$ By processing the logit vectors \\(\mathbf{L}_{1:m}\\) with the *softmax* operation, the model can define the probability distribution of the word sequence \\(\mathbf{Y}_{1:m}\\). To be exact, the probability distribution of the word sequence \\(\mathbf{Y}_{1:m}\\) can be factorized into \\(m\\) conditional \"next word\" distributions: $$ p_{\theta_{\text{GPT2}}}(\mathbf{Y}_{1:m}) = \prod_{i=1}^{m} p_{\theta_{\text{GPT2}}}(\mathbf{y}_i | \mathbf{Y}_{0:i-1}). $$ \\(p_{\theta_{\text{GPT2}}}(\mathbf{y}_i | \mathbf{Y}_{0:i-1})\\) hereby denotes the probability distribution of the next word \\(\mathbf{y}_i\\) given all previous words \\(\mathbf{y}_0, \ldots, \mathbf{y}_{i-1}\\) \\({}^3\\) and is defined as the softmax operation applied to the logit vector \\(\mathbf{l}_i\\). To summarize, the following equations hold true. 
$$ p_{\theta_{\text{gpt2}}}(\mathbf{y}_i | \mathbf{Y}_{0:i-1}) = \textbf{Softmax}(\mathbf{l}_i) = \textbf{Softmax}(f_{\theta_{\text{GPT2}}}(\mathbf{Y}_{0: i - 1})).$$ For more detail, please refer to the [decoder](https://huggingface.co/blog/encoder-decoder#decoder) section of the encoder-decoder blog post. Let\'s visualize GPT2 now as well. ![texte du lien](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/gpt2.png) Analogous to BERT, GPT2 is composed of a stack of *GPT2 blocks*. In contrast to the BERT block, the GPT2 block makes use of **uni-directional** self-attention to process some input vectors \\(\mathbf{y'}_0, \ldots, \mathbf{y'}_{m-1}\\) (shown in light blue on the bottom right) to an output vector sequence \\(\mathbf{y''}_0, \ldots, \mathbf{y''}_{m-1}\\) (shown in darker blue on the top right). In addition to the GPT2 block stack, the model also has a linear layer, called *LM Head*, which maps the output vectors of the final GPT2 block to the logit vectors \\(\mathbf{l}_1, \ldots, \mathbf{l}_m\\). As mentioned earlier, a logit vector \\(\mathbf{l}_i\\) can then be used to sample a new input vector \\(\mathbf{y}_i\\) \\({}^5\\). GPT2 is mainly used for *open-domain* text generation. First, an input prompt \\(\mathbf{Y}_{0:i-1}\\) is fed to the model to yield the conditional distribution \\(p_{\theta_{\text{gpt2}}}(\mathbf{y} | \mathbf{Y}_{0:i-1})\\). Then the next word \\(\mathbf{y}_i\\) is sampled from the distribution (represented by the grey arrows in the graph above) and consequently appended to the input. In an auto-regressive fashion, the word \\(\mathbf{y}_{i+1}\\) can then be sampled from \\(p_{\theta_{\text{gpt2}}}(\mathbf{y} | \mathbf{Y}_{0:i})\\) and so on. GPT2 is therefore well-suited for *language generation*, but less so for *conditional* generation. By setting the input prompt \\(\mathbf{Y}_{0: i-1}\\) equal to the sequence input \\(\mathbf{X}_{1:n}\\), GPT2 can very well be used for conditional generation. However, the model architecture has a fundamental drawback compared to the encoder-decoder architecture as explained in [Raffel et al. (2019)](https://arxiv.org/abs/1910.10683) on page 17. In short, uni-directional self-attention forces the model\'s representation of the sequence input \\(\mathbf{X}_{1:n}\\) to be unnecessarily limited since \\(\mathbf{x}_i\\) cannot depend on \\(\mathbf{x}_{i+1}, \forall i \in \{1,\ldots, n\}\\). ### **Encoder-Decoder** Because *encoder-only* models require the output length to be known *a priori*, they seem unfit for sequence-to-sequence tasks. *Decoder-only* models can function well for sequence-to-sequence tasks, but also have certain architectural limitations as explained above. The current predominant approach to tackling *sequence-to-sequence* tasks is *transformer-based* **encoder-decoder** models - often also called *seq2seq transformer* models. Encoder-decoder models were introduced in [Vaswani et al. (2017)](https://arxiv.org/abs/1706.03762) and since then have been shown to perform better on *sequence-to-sequence* tasks than stand-alone language models (*i.e.* decoder-only models), *e.g.* [Raffel et al. (2020)](https://arxiv.org/pdf/1910.10683.pdf). In essence, an encoder-decoder model is the combination of a *stand-alone* encoder, such as BERT, and a *stand-alone* decoder model, such as GPT2. For more details on the exact architecture of transformer-based encoder-decoder models, please refer to [this blog post](https://huggingface.co/blog/encoder-decoder). 
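To make this composition concrete before turning to the warm-starting question, here is a minimal sketch (assuming 🤗Transformers is installed) of how a stand-alone encoder checkpoint and a stand-alone decoder checkpoint can be combined into a single seq2seq model; the sections below explain exactly which of the resulting weights come from the checkpoints and which are new.

```python
from transformers import EncoderDecoderModel

# Combine a stand-alone encoder checkpoint (BERT) with a stand-alone
# decoder checkpoint (GPT2) into one encoder-decoder model. The decoder's
# cross-attention layers exist in neither checkpoint and are therefore
# randomly initialized - such a model first has to be fine-tuned on a
# sequence-to-sequence task before it is useful.
bert2gpt2 = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "gpt2"
)
```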
Now, we know that freely available checkpoints of large pre-trained *stand-alone* encoder and decoder models, such as *BERT* and *GPT*, can boost performance and reduce training cost for many NLU tasks. We also know that encoder-decoder models are essentially the combination of *stand-alone* encoder and decoder models. This naturally brings up the question of how one can leverage stand-alone model checkpoints for encoder-decoder models and which model combinations are most performant on certain *sequence-to-sequence* tasks. In 2020, Sascha Rothe, Shashi Narayan, and Aliaksei Severyn investigated exactly this question in their paper [**Leveraging Pre-trained Checkpoints for Sequence Generation Tasks**](https://arxiv.org/abs/1907.12461). The paper offers a great analysis of different encoder-decoder model combinations and fine-tuning techniques, which we will study in more detail later. Composing an encoder-decoder model of pre-trained stand-alone model checkpoints is defined as *warm-starting* the encoder-decoder model. The following sections show how warm-starting an encoder-decoder model works in theory, how one can put the theory into practice with 🤗Transformers, and also give practical tips for better performance. ------------------------------------------------------------------------ \\({}^1\\) A *pre-trained language model* is defined as a neural network: - that has been trained on *unlabeled* text data, *i.e.* in a task-agnostic, unsupervised fashion, and - that processes a sequence of input words into a *context-dependent* embedding. *E.g.* the *continuous bag-of-words* and *skip-gram* model from [Mikolov et al. (2013)](https://arxiv.org/abs/1301.3781) is not considered a pre-trained language model because the embeddings are context-agnostic. \\({}^2\\) *Fine-tuning* is defined as the *task-specific* training of a model that has been initialized with the weights of a pre-trained language model. \\({}^3\\) The input vector \\(\mathbf{y}_0\\) corresponds hereby to the \\(\text{BOS}\\) embedding vector required to predict the very first output word \\(\mathbf{y}_1\\). \\({}^4\\) Without loss of generality, we exclude the normalization layers to not clutter the equations and illustrations. \\({}^5\\) For more detail on why uni-directional self-attention is used for \"decoder-only\" models, such as GPT2, and how sampling works exactly, please refer to the [decoder](https://huggingface.co/blog/encoder-decoder#decoder) section of the encoder-decoder blog post. ## **Warm-starting encoder-decoder models (Theory)** Having read the introduction, we are now familiar with *encoder-only* and *decoder-only* models. We have noticed that the encoder-decoder model architecture is essentially a composition of a *stand-alone* encoder model and a *stand-alone* decoder model, which led us to the question of how one can *warm-start* encoder-decoder models from *stand-alone* model checkpoints. There are multiple possibilities to warm-start an encoder-decoder model. One can 1. initialize both the encoder and decoder part from an *encoder-only* model checkpoint, *e.g.* BERT, 2. initialize the encoder part from an *encoder-only* model checkpoint, *e.g.* BERT, and the decoder part from a *decoder-only* checkpoint, *e.g.* GPT2, 3. initialize only the encoder part with an *encoder-only* model checkpoint, or 4. initialize only the decoder part with a *decoder-only* model checkpoint. In the following, we will put the focus on possibilities 1. and 2. Possibilities 3. and 4. 
are trivial after having understood the first two. ### **Recap Encoder-Decoder Model** First, let\'s do a quick recap of the encoder-decoder architecture. ![texte du lien](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/encoder_decoder_reap.png) The encoder (shown in green) is a stack of *encoder blocks*. Each encoder block is composed of a *bi-directional self-attention* layer and two feed-forward layers \\({}^1\\). The decoder (shown in orange) is a stack of *decoder blocks*, followed by a dense layer, called *LM Head*. Each decoder block is composed of a *uni-directional self-attention* layer, a *cross-attention* layer, and two feed-forward layers. The encoder maps the input sequence \\(\mathbf{X}_{1:n}\\) to a contextualized encoded sequence \\(\mathbf{\overline{X}}_{1:n}\\) in the exact same way BERT does. The decoder then maps the contextualized encoded sequence \\(\mathbf{\overline{X}}_{1:n}\\) and a target sequence \\(\mathbf{Y}_{0:m-1}\\) to the logit vectors \\(\mathbf{L}_{1:m}\\). Analogous to GPT2, the logits are then used to define the distribution of the target sequence \\(\mathbf{Y}_{1:m}\\) conditioned on the input sequence \\(\mathbf{X}_{1:n}\\) by means of a *softmax* operation. To put it into mathematical terms, first, the conditional distribution is factorized into \\(m\\) conditional distributions of the next word \\(\mathbf{y}_i\\) by the chain rule. $$ p_{\theta_{\text{enc, dec}}}(\mathbf{Y}_{1:m} | \mathbf{X}_{1:n}) = p_{\theta_{\text{dec}}}(\mathbf{Y}_{1:m} | \mathbf{\overline{X}}_{1:n}) = \prod_{i=1}^m p_{\theta_{\text{dec}}}(\mathbf{y}_i | \mathbf{Y}_{0: i -1}, \mathbf{\overline{X}}_{1:n}), \text{ with } \mathbf{\overline{X}}_{1:n} = f_{\theta_{\text{enc}}}(\mathbf{X}_{1:n}). $$ Each \"next-word\" conditional distribution is thereby defined by the *softmax* of the logit vector as follows. $$ p_{\theta_{\text{dec}}}(\mathbf{y}_i | \mathbf{Y}_{0: i -1}, \mathbf{\overline{X}}_{1:n}) = \textbf{Softmax}(\mathbf{l}_i). $$ For more detail, please refer to the [Encoder-Decoder notebook](https://colab.research.google.com/drive/19wkOLQIjBBXQ-j3WWTEiud6nGBEw4MdF?usp=sharing). ### **Warm-starting Encoder-Decoder with BERT** Let\'s now illustrate how a pre-trained BERT model can be used to warm-start the encoder-decoder model. BERT\'s pre-trained weight parameters are used to initialize both the encoder\'s and the decoder\'s weight parameters. To do so, BERT\'s architecture is compared to the encoder\'s architecture and all layers of the encoder that also exist in BERT will be initialized with the pre-trained weight parameters of the respective layers. All layers of the encoder that do not exist in BERT will simply have their weight parameters randomly initialized. Let\'s visualize. ![texte du lien](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/encoder_decoder/leverage_encoder.png) We can see that the encoder architecture corresponds 1-to-1 to BERT\'s architecture. The weight parameters of the *bi-directional self-attention layer* and the two *feed-forward layers* of **all** encoder blocks are initialized with the weight parameters of the respective BERT blocks. 
This is illustrated exemplarily for the second encoder block (red boxes at the bottom), whose weight parameters \\(\theta_{\text{enc}}^{\text{self-attn}, 2}\\) and \\(\theta_{\text{enc}}^{\text{feed-forward}, 2}\\) are set to BERT\'s weight parameters \\(\theta_{\text{BERT}}^{\text{self-attn}, 2}\\) and \\(\theta_{\text{BERT}}^{\text{feed-forward}, 2}\\), respectively, at initialization. Before fine-tuning, the encoder therefore behaves exactly like a pre-trained BERT model. Assuming the input sequence \\(\mathbf{x}_1, \ldots, \mathbf{x}_n\\) (shown in green) passed to the encoder is equal to the input sequence \\(\mathbf{x}_1^{\text{BERT}}, \ldots, \mathbf{x}_n^{\text{BERT}}\\) (shown in grey) passed to BERT, this means that the respective output vector sequences \\(\mathbf{\overline{x}}_1, \ldots, \mathbf{\overline{x}}_n\\) (shown in darker green) and \\(\mathbf{\overline{x}}_1^{\text{BERT}}, \ldots, \mathbf{\overline{x}}_n^{\text{BERT}}\\) (shown in darker grey) also have to be equal. Next, let\'s illustrate how the decoder is warm-started. ![texte du lien](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/encoder_decoder/leverage_decoder.png) The architecture of the decoder is different from BERT\'s architecture in three ways. 1. First, the decoder has to be conditioned on the contextualized encoded sequence \\(\mathbf{\overline{X}}_{1:n}\\) by means of cross-attention layers. Consequently, randomly initialized cross-attention layers are added between the self-attention layer and the two feed-forward layers in each BERT block. This is represented exemplarily for the second block by \\(+\theta_{\text{dec}}^{\text{cross-attention, 2}}\\) and illustrated by the newly added fully connected graph in red in the lower red box on the right. This necessarily changes the behavior of each modified BERT block so that an input vector, *e.g.* \\(\mathbf{y'}_0\\), now yields a random output vector \\(\mathbf{y''}_0\\) (highlighted by the red border around the output vector \\(\mathbf{y''}_0\\)). 2. Second, BERT\'s *bi-directional* self-attention layers have to be changed to *uni-directional* self-attention layers to comply with auto-regressive generation. Because both the bi-directional and the uni-directional self-attention layer are based on the same *key*, *query* and *value* projection weights, the decoder\'s self-attention layer weights can be initialized with BERT\'s self-attention layer weights. *E.g.* the query, key and value weight parameters of the decoder\'s uni-directional self-attention layer are initialized with those of BERT\'s bi-directional self-attention layer \\(\theta_{\text{BERT}}^{\text{self-attn}, 2} = \{\mathbf{W}_{\text{BERT}, k}^{\text{self-attn}, 2}, \mathbf{W}_{\text{BERT}, v}^{\text{self-attn}, 2}, \mathbf{W}_{\text{BERT}, q}^{\text{self-attn}, 2} \} \to \theta_{\text{dec}}^{\text{self-attn}, 2} = \{\mathbf{W}_{\text{dec}, k}^{\text{self-attn}, 2}, \mathbf{W}_{\text{dec}, v}^{\text{self-attn}, 2}, \mathbf{W}_{\text{dec}, q}^{\text{self-attn}, 2} \}. \\) However, in *uni-directional* self-attention each token only attends to all previous tokens, so that the decoder\'s self-attention layers yield different output vectors than BERT\'s self-attention layers even though they share the same weights. Compare, *e.g.*, the decoder\'s causally connected graph in the right box versus BERT\'s fully connected graph in the left box. 3. 
Third, the decoder outputs a sequence of logit vectors \\(\mathbf{L}_{1:m}\\) in order to define the conditional probability distribution \\(p_{\theta_{\text{dec}}}(\mathbf{Y}_{1:m} | \mathbf{\overline{X}}_{1:n})\\). As a result, an *LM Head* layer is added on top of the last decoder block. The weight parameters of the *LM Head* layer usually correspond to the weight parameters of the word embedding \\(\mathbf{W}_{\text{emb}}\\) and thus are not randomly initialized. This is illustrated in the top by the initialization \\(\theta_{\text{BERT}}^{\text{word-emb}} \to \theta_{\text{dec}}^{\text{lm-head}}\\). To conclude, when warm-starting the decoder from a pre-trained BERT model, only the cross-attention layer weights are randomly initialized. All other weights, including those of the self-attention layer and the LM Head, are initialized with BERT\'s pre-trained weight parameters. Having warm-started the encoder-decoder model, the weights are then fine-tuned on a *sequence-to-sequence* downstream task, such as summarization. ### **Warm-starting Encoder-Decoder with BERT and GPT2** Instead of warm-starting both the encoder and decoder with a BERT checkpoint, we can leverage the BERT checkpoint for the encoder and a GPT2 checkpoint for the decoder. At first glance, a decoder-only GPT2 checkpoint seems to be better-suited to warm-start the decoder because it has already been trained on causal language modeling and uses *uni-directional* self-attention layers. Let\'s illustrate how a GPT2 checkpoint can be used to warm-start the decoder. ![texte du lien](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/encoder_decoder/leverage_decoder_gpt2.png) We can see that the decoder is more similar to GPT2 than it is to BERT. The weight parameters of the decoder\'s *LM Head* can directly be initialized with GPT2\'s *LM Head* weight parameters, *e.g.* \\(\theta_{\text{GPT2}}^{\text{lm-head}} \to \theta_{\text{dec}}^{\text{lm-head}}\\). In addition, the blocks of the decoder and GPT2 both make use of *uni-directional* self-attention so that the output vectors of the decoder\'s self-attention layer are equivalent to GPT2\'s output vectors assuming the input vectors are the same, *e.g.* \\(\mathbf{y'}_0^{\text{GPT2}} = \mathbf{y'}_0\\). In contrast to the BERT-initialized decoder, the GPT2-initialized decoder, therefore, keeps the causally connected graph of the self-attention layer as can be seen in the red boxes on the bottom. Nevertheless, the GPT2-initialized decoder also has to be conditioned on \\(\mathbf{\overline{X}}_{1:n}\\). Analogous to the BERT-initialized decoder, randomly initialized weight parameters for the cross-attention layer are therefore added to each decoder block. This is illustrated *e.g.* for the second decoder block by \\(+\theta_{\text{dec}}^{\text{cross-attention, 2}}\\). Even though GPT2 resembles the decoder part of an encoder-decoder model more than BERT, a GPT2-initialized decoder will also yield random logit vectors \\(\mathbf{L}_{1:m}\\) without fine-tuning due to randomly initialized cross-attention layers in every decoder block. It would be interesting to investigate whether a GPT2-initialized decoder yields better results or can be fine-tuned more efficiently. ### **Encoder-Decoder Weight Sharing** In [Raffel et al. 
(2020)](https://arxiv.org/pdf/1910.10683.pdf), the authors show that a randomly-initialized encoder-decoder model that shares the encoder\'s weights with the decoder, and therefore reduces the memory footprint by half, performs only slightly worse than its \"non-shared\" version. Sharing the encoder\'s weights with the decoder means that all layers of the decoder that are found at the same position in the encoder share the same weight parameters, *i.e.* the same node in the network\'s computation graph. *E.g.* the query, key, and value projection matrices of the self-attention layer in the third encoder block, defined as \\(\mathbf{W}^{\text{self-attn}, 3}_{\text{Enc}, k}\\), \\(\mathbf{W}^{\text{self-attn}, 3}_{\text{Enc}, v}\\), \\(\mathbf{W}^{\text{self-attn}, 3}_{\text{Enc}, q}\\) are identical to the respective query, key, and value projection matrices of the self-attention layer in the third decoder block \\({}^2\\): $$ \mathbf{W}^{\text{self-attn}, 3}_{k} = \mathbf{W}^{\text{self-attn}, 3}_{\text{enc}, k} \equiv \mathbf{W}^{\text{self-attn}, 3}_{\text{dec}, k}, $$ $$ \mathbf{W}^{\text{self-attn}, 3}_{q} = \mathbf{W}^{\text{self-attn}, 3}_{\text{enc}, q} \equiv \mathbf{W}^{\text{self-attn}, 3}_{\text{dec}, q}, $$ $$ \mathbf{W}^{\text{self-attn}, 3}_{v} = \mathbf{W}^{\text{self-attn}, 3}_{\text{enc}, v} \equiv \mathbf{W}^{\text{self-attn}, 3}_{\text{dec}, v}. $$ As a result, the shared projection weights \\(\mathbf{W}^{\text{self-attn}, 3}_{k}, \mathbf{W}^{\text{self-attn}, 3}_{v}, \mathbf{W}^{\text{self-attn}, 3}_{q}\\) are updated twice for each backward propagation pass - once when the gradient is backpropagated through the third decoder block and once when the gradient is backpropagated through the third encoder block. In the same way, we can warm-start an encoder-decoder model by sharing the encoder weights with the decoder. Being able to share the weights between the encoder and decoder requires the decoder architecture (excluding the cross-attention weights) to be identical to the encoder architecture. Therefore, *encoder-decoder weight sharing* is only relevant if the encoder-decoder model is warm-started from a single *encoder-only* pre-trained checkpoint. Great! That was the theory about warm-starting encoder-decoder models. Let\'s now look at some results. ------------------------------------------------------------------------ \\({}^1\\) Without loss of generality, we exclude the normalization layers to not clutter the equations and illustrations. \\({}^2\\) For more detail on how self-attention layers function, please refer to [this section](https://huggingface.co/blog/encoder-decoder#encoder) of the transformer-based encoder-decoder model blog post for the encoder part (and [this section](https://huggingface.co/blog/encoder-decoder#decoder) for the decoder part respectively). ## **Warm-starting encoder-decoder models (Analysis)** In this section, we will summarize the findings on warm-starting encoder-decoder models as presented in [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, and Aliaksei Severyn. The authors compared the performance of warm-started encoder-decoder models to randomly initialized encoder-decoder models on multiple *sequence-to-sequence* tasks, notably *summarization*, *translation*, *sentence splitting*, and *sentence fusion*. 
To be more precise, the publicly available pre-trained checkpoints of **BERT**, **RoBERTa**, and **GPT2** were leveraged in different variations to warm-start an encoder-decoder model. *E.g.* a BERT-initialised encoder was paired with a BERT-initialized decoder yielding a BERT2BERT model *or* a RoBERTa-initialized encoder was paired with a GPT2-initialized decoder to yield a *RoBERTa2GPT2* model. Additionally, the effect of sharing the encoder and decoder weights (as explained in the previous section) was investigated for RoBERTa, *i.e.* **RoBERTaShare**, and for BERT, *i.e.* **BERTShare**. Randomly or partly randomly initialized encoder-decoder models were used as a baseline, such as a fully randomly initialized encoder-decoder model, coined **Rnd2Rnd** or a BERT-initialized decoder paired with a randomly initialized encoder, defined as **Rnd2BERT**. The following table shows a complete list of all investigated model variants including the number of randomly initialized weights, *i.e.* \"random\", and the number of weights initialized from the respective pre-trained checkpoints, *i.e.* \"leveraged\". All models are based on a 12-layer architecture with 768-dim hidden size embeddings, corresponding to the `bert-base-cased`, `bert-base-uncased`, `roberta-base`, and `gpt2` checkpoints in the 🤗Transformers model hub. |Model |random |leveraged |total |-------------- |:------- |---------- |------- |Rnd2Rnd |221M |0 |221M |Rnd2BERT |112M |109M |221M |BERT2Rnd |112M |109M |221M |Rnd2GPT2 |114M |125M |238M |BERT2BERT |26M |195M |221M |BERTShare |26M |109M |135M |RoBERTaShare |26M |126M |152M |BERT2GPT2 |26M |234M |260M |RoBERTa2GPT2 |26M |250M |276M The model *Rnd2Rnd*, which is based on the BERT2BERT architecture, contains 221M weight parameters - all of which are randomly initialized. The other two \"BERT-based\" baselines *Rnd2BERT* and *BERT2Rnd* have roughly half of their weights, *i.e.* 112M parameters, randomly initialized. The other 109M weight parameters are leveraged from the pre-trained `bert-base-uncased` checkpoint for the encoder- or decoder part respectively. The models *BERT2BERT*, *BERT2GPT2*, and *RoBERTa2GPT2* have all of their encoder weight parameters leveraged (from `bert-base-uncased`, `roberta-base` respectively) and most of the decoder weight parameter weights as well (from `gpt2`, `bert-base-uncased` respectively). 26M decoder weight parameters, which correspond to the 12 cross-attention layers, are thereby randomly initialized. RoBERTa2GPT2 and BERT2GPT2 are compared to the *Rnd2GPT2* baseline. Also, it should be noted that the shared model variants *BERTShare* and *RoBERTaShare* have significantly fewer parameters because all encoder weight parameters are shared with the respective decoder weight parameters. ### **Experiments** The above models were trained and evaluated on four sequence-to-sequence tasks of increasing complexity: sentence-level fusion, sentence-level splitting, translation, and abstractive summarization. The following table shows which datasets were used for each task. |Seq2Seq Task |Datasets |Paper |🤗datasets | |-------------------------- |-----------------------------------------------------------------------|----------------------------------------------------------------------- |----------------------------------------------------------------------------------------- | |Sentence Fusion |DiscoFuse |[Geva et al. 
(2019)](https://arxiv.org/abs/1902.10526) |[link](https://huggingface.co/nlp/viewer/?dataset=discofuse&config=discofuse-wikipedia) | |Sentence Splitting |WikiSplit |[Botha et al. (2018)](https://arxiv.org/abs/1808.09468) |\-| |Translation |WMT14 EN =\> DE |[Bojar et al. (2014)](http://www.aclweb.org/anthology/W/W14/W14-3302) |[link](https://huggingface.co/nlp/viewer/?dataset=wmt14&config=de-en)| |WMT14 DE =\> EN |[Bojar et al. (2014)](http://www.aclweb.org/anthology/W/W14/W14-3302) | |[link](https://huggingface.co/nlp/viewer/?dataset=wmt14&config=de-en) | |Abstractive Summarizaion |CNN/Dailymail | [Hermann et al. (2015)](http://arxiv.org/abs/1704.04368) |[link](https://huggingface.co/nlp/viewer/?dataset=cnn_dailymail&config=3.0.0)| |BBC XSum |[Narayan et al. (2018a)](https://arxiv.org/abs/1808.08745) | |[link](https://huggingface.co/nlp/viewer/?dataset=xsum) | |Gigaword |[Napoles et al. (2012)](http://dx.doi.org/10.18653/v1/D15-1044) | |[link](https://huggingface.co/nlp/viewer/?dataset=gigaword) | Depending on the task, a slightly different training regime was used. *E.g.* according to the size of the dataset and the specific task, the number of training steps ranges from 200K to 500K, the batch size is set to either 128 or 256, the input length ranges from 128 to 512 and the output length varies between 32 to 128. It shall be emphasized however that within each task, all models were trained and evaluated using the same hyperparameters to ensure a fair comparison. For more information on the task-specific hyperparameter settings, the reader is advised to see the *Experiments* section in the [paper](https://arxiv.org/pdf/1907.12461.pdf). We will now give a condensed overview of the results for each task. ### Sentence Fusion and -Splitting (DiscoFuse, WikiSplit) **Sentence Fusion** is the task of combining multiple sentences into a single coherent sentence. *E.g.* the two sentences: *As a run-blocker, Zeitler moves relatively well.* *Zeitler too often struggles at the point of contact in space.* should be connected with a fitting *linking word*, such as: *As a run-blocker, Zeitler moves relatively well. **However**, **he** too often struggles at the point of contact in space.* As can be seen the linking word \"however\" provides a coherent transition from the first sentence to the second one. A model that is capable of generating such a linking word has arguably learned to infer that the two sentences above contrast to each other. The inverse task is called **Sentence splitting** and consists of splitting a single complex sentence into multiple simpler ones that together retain the same meaning. Sentence splitting is considered as an important task in text simplification, *cf.* to [Botha et al. (2018)](https://arxiv.org/pdf/1808.09468.pdf). As an example, the sentence: *Street Rod is the first in a series of two games released for the PC and Commodore 64 in 1989* can be simplified into *Street Rod is the first in a series of two games **.** **It** was released for the PC and Commodore 64 in 1989* It can be seen that the long sentence tries to convey two important pieces of information. One is that the game was the first of two games being released for the PC, and the second being the year in which it was released. Sentence splitting, therefore, requires the model to understand which part of the sentence should be divided into two sentences, making the task more difficult than sentence fusion. A common metric to evaluate the performance of models on sentence fusion resp. 
-splitting tasks is *SARI* [(Wu et al. (2016)](https://www.aclweb.org/anthology/Q16-1029/), which is broadly based on the F1-score of label and model output. Let\'s see how the models perform on sentence fusion and -splitting. |Model | 100% DiscoFuse (SARI) |10% DiscoFuse (SARI) |100% WikiSplit (SARI) |---------------------- |----------------------- |---------------------- |----------------------- |Rnd2Rnd | 86.9 | 81.5 | 61.7 |Rnd2BERT | 87.6 | 82.1 | 61.8 |BERT2Rnd | 89.3 | 86.1 | 63.1 |Rnd2GPT2 | 86.5 | 81.4 | 61.3 |BERT2BERT | 89.3 | 86.1 | 63.2 |BERTShare | 89.2 | 86.0 | **63.5** |RoBERTaShare | 89.7 | 86.0 | 63.4 |BERT2GPT2 | 88.4 | 84.1 | 62.4 |RoBERTa2GPT2 | **89.9** | **87.1** | 63.2 |\-\-- | \-\-- | \-\-- | \-\-- |RoBERTaShare (large) | **90.3** | **87.7** | **63.8** The first two columns show the performance of the encoder-decoder models on the DiscoFuse evaluation data. The first column states the results of encoder-decoder models trained on all (100%) of the training data, while the second column shows the results of the models trained only on 10% of the training data. We observe that warm-started models perform significantly better than the randomly initialized baseline models *Rnd2Rnd*, *Rnd2Bert*, and *Rnd2GPT2*. A warm-started *RoBERTa2GPT2* model trained only on 10% of the training data is on par with an *Rnd2Rnd* model trained on 100% of the training data. Interestingly, the *Bert2Rnd* baseline performs equally well as a fully warm-started *Bert2Bert* model, which indicates that warm-starting the encoder-part is more effective than warm-starting the decoder-part. The best results are obtained by *RoBERTa2GPT2*, followed by *RobertaShare*. Sharing encoder and decoder weight parameters does seem to slightly increase the model\'s performance. On the more difficult sentence splitting task, a similar pattern emerges. Warm-started encoder-decoder models significantly outperform encoder-decoder models whose encoder is randomly initialized and encoder-decoder models with shared weight parameters yield better results than those with uncoupled weight parameters. On sentence splitting the *BertShare* models yields the best performance closely followed by *RobertaShare*. In addition to the 12-layer model variants, the authors also trained and evaluated a 24-layer *RobertaShare (large)* model which outperforms all 12-layer models significantly. ### Machine Translation (WMT14) Next, the authors evaluated warm-started encoder-decoder models on the probably most common benchmark in machine translation (MT) - the *En* \\(\to\\) *De* and *De* \\(\to\\) *En* WMT14 dataset. In this notebook, we present the results on the *newstest2014* eval dataset. Because the benchmark requires the model to understand both an English and a German vocabulary the BERT-initialized encoder-decoder models were warm-started from the multilingual pre-trained checkpoint `bert-base-multilingual-cased`. Because there is no publicly available multilingual RoBERTa checkpoint, RoBERTa-initialized encoder-decoder models were excluded for MT. GPT2-initialized models were initialized from the `gpt2` pre-trained checkpoint as in the previous experiment. The translation results are reported using the BLUE-4 score metric \\({}^1\\). 
|Model |En \\(\to\\) De (BLEU-4) |De \\(\to\\) En (BLEU-4) |--------------------------- |---------------------- |---------------------- |Rnd2Rnd | 26.0 | 29.1 |Rnd2BERT | 27.2 | 30.4 |BERT2Rnd | **30.1** | **32.7** |Rnd2GPT2 | 19.6 | 23.2 |BERT2BERT | **30.1** | **32.7** |BERTShare | 29.6 | 32.6 |BERT2GPT2 | 23.2 | 31.4 |\-\-- | \-\-- | \-\-- |BERT2Rnd (large, custom) | **31.7** | **34.2** |BERTShare (large, custom) | 30.5 | 33.8 Again, we observe a significant performance boost by warm-starting the encoder-part, with *BERT2Rnd* and *BERT2BERT* yielding the best results on both the *En* \\(\to\\) *De* and *De* \\(\to\\) *En* tasks. *GPT2* initialized models perform significantly worse even than the *Rnd2Rnd* baseline on *En* \\(\to\\) *De*. Taking into consideration that the `gpt2` checkpoint was trained only on English text, it is not very surprising that *BERT2GPT2* and *Rnd2GPT2* models have difficulties generating German translations. This hypothesis is supported by the competitive results (*e.g.* 31.4 vs. 32.7) of *BERT2GPT2* on the *De* \\(\to\\) *En* task for which GPT2\'s vocabulary fits the English output format. Contrary to the results obtained on sentence fusion and sentence splitting, sharing encoder and decoder weight parameters does not yield a performance boost in MT. Possible reasons for this as stated by the authors include - *the encoder-decoder model capacity is an important factor in MT, and* - *the encoder and decoder have to deal with different grammar and vocabulary* Since the *bert-base-multilingual-cased* checkpoint was trained on more than 100 languages, its vocabulary is probably undesirably large for *En* \\(\to\\) *De* and *De* \\(\to\\) *En* MT. Thus, the authors pre-trained a large BERT encoder-only checkpoint on the English and German subset of the Wikipedia dump and subsequently used it to warm-start a *BERT2Rnd* and *BERTShare* encoder-decoder model. Thanks to the improved vocabulary, another significant performance boost is observed, with *BERT2Rnd (large, custom)* significantly outperforming all other models. ### Summarization (CNN/Dailymail, BBC XSum, Gigaword) Finally, the encoder-decoder models were evaluated on the arguably most challenging sequence-to-sequence task - *summarization*. The authors picked three summarization datasets with different characteristics for evaluation: Gigaword (*headline generation*), BBC XSum (*extreme summarization*), and CNN/Dailymayl (*abstractive summarization*). The Gigaword dataset contains sentence-level abstractive summarizations, requiring the model to learn sentence-level understanding, abstraction, and eventually paraphrasing. A typical data sample in Gigaword, such as \"*venezuelan president hugo chavez said thursday he has ordered a probe into a suspected coup plot allegedly involving active and retired military officers .*\", would have a corresponding headline as its label, *e.g.*: \"*chavez orders probe into suspected coup plot*\". The BBC XSum dataset consists of much longer *article-like* text inputs with the labels being mostly single sentence summarizations. This dataset requires the model not only to learn document-level inference but also a high level of abstractive paraphrasing. Some data samples of the BBC XSUM datasets are shown [here](https://huggingface.co/nlp/viewer/?dataset=xsum). For the CNN/Dailmail dataset, documents, which are of similar length than those in the BBC XSum dataset, have to be summarized to bullet-point story highlights. 
The labels therefore often consist of multiple sentences. Besides document-level understanding, the CNN/Dailymail dataset requires models to be good at copying the most salient information. Some examples can be viewed [here](https://huggingface.co/nlp/viewer/?dataset=cnn_dailymail). The models are evaluated using the [Rouge metric](https://www.aclweb.org/anthology/N03-1020/), whereas the Rouge-2 scores are shown below. Alright, let\'s take a look at the results. |Model |CNN/Dailymail (Rouge-2) |BBC XSum (Rouge-2) |Gigaword (Rouge-2) |---------------------- |------------------------- |-------------------- |-------------------- |Rnd2Rnd | 14.00 | 10.23 | 18.71 |Rnd2BERT | 15.55 | 11.52 | 18.91 |BERT2Rnd | 17.76 | 15.83 | 19.26 |Rnd2GPT2 | 8.81 | 8.77 | 18.39 |BERT2BERT | 17.84 | 15.24 | 19.68 |BERTShare | 18.10 | 16.12 | **19.81** |RoBERTaShare | **18.95** | **17.50** | 19.70 |BERT2GPT2 | 4.96 | 8.37 | 18.23 |RoBERTa2GPT2 | 14.72 | 5.20 | 19.21 |\-\-- | \-\-- | \-\-- | \-\-- |RoBERTaShare (large) | 18.91 | **18.79** | 19.78 We observe again that warm-starting the encoder-part gives a significant improvement over models with randomly-initialized encoders, which is especially visible for document-level abstraction tasks, *i.e.* CNN/Dailymail and BBC XSum. This shows that tasks requiring a high level of abstraction benefit more from a pre-trained encoder part than those requiring only sentence-level abstraction. Except for Gigaword GPT2-based encoder-decoder models seem to be unfit for summarization. Furthermore, the shared encoder-decoder models are the best performing models for summarization. *RoBERTaShare* and *BERTShare* are the best performing models on all datasets whereas the margin is especially significant on the BBC XSum dataset on which *RoBERTaShare (large)* outperforms *BERT2BERT* and *BERT2Rnd* by *ca.* 3 Rouge-2 points and *Rnd2Rnd* by more than 8 Rouge-2 points. As brought forward by the authors, \"*this is probably because the BBC summary sentences follow a distribution that is similar to that of the sentences in the document, whereas this is not necessarily the case for the Gigaword headlines and the CNN/DailyMail bullet-point highlights*\". Intuitively this means that in BBC XSum, the input sentences processed by the encoder are very similar in structure to the single sentence summary processed by the decoder, *i.e.* same length, similar choice of words, similar syntax. ### **Conclusion** Alright, let\'s draw a conclusion and try to derive some practical tips. - We have observed on all tasks that a warm-started encoder-part gives a significant performance boost compared to encoder-decoder models having a randomly initialized encoder. On the other hand, warm-starting the decoder seems to be less important, with *BERT2BERT* being on par with *BERT2Rnd* on most tasks. An intuitive reason would be that since a BERT- or RoBERTa-initialized encoder part has none of its weight parameters randomly initialized, the encoder can fully exploit the acquired knowledge of BERT\'s or RoBERTa\'s pre-trained checkpoints, respectively. In contrast, the warm-started decoder always has parts of its weight parameters randomly initialized which possibly makes it much harder to effectively leverage the knowledge acquired by the checkpoint used to initialize the decoder. - Next, we noticed that it is often beneficial to share encoder and decoder weights, especially if the target distribution is similar to the input distribution (*e.g.* BBC XSum). 
However, for datasets whose target data distribution differs more significantly from the input data distribution and for which model capacity \\({}^2\\) is known to play an important role, *e.g.* WMT14, encoder-decoder weight sharing seems to be disadvantageous. - Finally, we have seen that it is very important that the vocabulary of the pre-trained \"stand-alone\" checkpoints fits the vocabulary required to solve the sequence-to-sequence task. *E.g.* a warm-started BERT2GPT2 encoder-decoder will perform poorly on *En* \\(\to\\) *De* MT because GPT2 was pre-trained on English whereas the target language is German. The overall poor performance of the *BERT2GPT2*, *Rnd2GPT2*, and *RoBERTa2GPT2* models compared to *BERT2BERT*, *BERTShared*, and *RoBERTaShared* suggests that it is more effective to have a shared vocabulary. Also, it shows that initializing the decoder part with a pre-trained GPT2 checkpoint is *not* more effective than initializing it with a pre-trained BERT checkpoint despite GPT2 being more similar to the decoder in its architecture. For each of the above tasks, the most performant models were ported to 🤗Transformers and can be accessed here: - *RoBERTaShared (large)* - *Wikisplit*: [google/roberta2roberta\_L-24\_wikisplit](https://huggingface.co/google/roberta2roberta_L-24_wikisplit). - *RoBERTaShared (large)* - *Discofuse*: [google/roberta2roberta\_L-24\_discofuse](https://huggingface.co/google/roberta2roberta_L-24_discofuse). - *BERT2BERT (large)* - *WMT en \\(\to\\) de*: [google/bert2bert\_L-24\_wmt\_en\_de](https://huggingface.co/google/bert2bert_L-24_wmt_en_de). - *BERT2BERT (large)* - *WMT de \\(\to\\) en*: [google/bert2bert\_L-24\_wmt\_de\_en](https://huggingface.co/google/bert2bert_L-24_wmt_de_en). - *RoBERTaShared (large)* - *CNN/Dailymail*: [google/roberta2roberta\_L-24\_cnn\_daily\_mail](https://huggingface.co/google/roberta2roberta_L-24_cnn_daily_mail). - *RoBERTaShared (large)* - *BBC XSum*: [google/roberta2roberta\_L-24\_bbc](https://huggingface.co/google/roberta2roberta_L-24_bbc). - *RoBERTaShared (large)* - *Gigaword*: [google/roberta2roberta\_L-24\_gigaword](https://huggingface.co/google/roberta2roberta_L-24_gigaword). ------------------------------------------------------------------------ \\({}^1\\) To retrieve BLEU-4 scores, a script from the Tensorflow Official Transformer implementation <https://github.com/tensorflow/models/tree/master/official/nlp/transformer> was used. Note that, differently from the `tensor2tensor/utils/get_ende_bleu.sh` script used by Vaswani et al. (2017), this script does not split noun compounds, but utf-8 quotes were normalized to ascii quotes after having noted that the pre-processed training set contains only ascii quotes. \\({}^2\\) Model capacity is an informal definition of how good the model is at modeling complex patterns. It is also sometimes defined as *the ability of a model to learn from more and more data*. Model capacity is broadly measured by the number of trainable parameters - the more parameters, the higher the model capacity. # **Warm-starting encoder-decoder models with 🤗Transformers (Practice)** We have explained the theory of warm-starting encoder-decoder models, analyzed empirical results on multiple datasets, and have derived practical conclusions. Let\'s now walk through a complete code example showcasing how a **BERT2BERT** model can be warm-started and consequently fine-tuned on the *CNN/Dailymail* summarization task. We will be leveraging the 🤗datasets and 🤗Transformers libraries. 
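Before looking at the data, here is a minimal sketch of what the warm-starting step itself will look like - both the plain *BERT2BERT* variant used in this example and a shared-weight *BERTShare*-style variant; the `tie_encoder_decoder` flag is assumed to be forwarded to the model config so that encoder and decoder weights get tied.

```python
from transformers import EncoderDecoderModel

# Plain BERT2BERT: encoder and decoder are both warm-started from the same
# public BERT checkpoint; only the cross-attention layers are new and
# randomly initialized.
bert2bert = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased"
)

# Shared-weight variant in the spirit of "BERTShare" (assumption: the
# `tie_encoder_decoder` flag is passed through to the config); tying the
# encoder and decoder weights roughly halves the number of parameters.
bert_share = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased", tie_encoder_decoder=True
)
```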
In addition, the following list provides a condensed version of this and other notebooks on warm-starting other combinations of encoder-decoder models. - for **BERT2BERT** on *CNN/Dailymail* (a condensed version of this notebook), click [here](https://colab.research.google.com/drive/1Ekd5pUeCX7VOrMx94_czTkwNtLN32Uyu?usp=sharing). - for **RoBERTaShare** on *BBC XSum*, click [here](https://colab.research.google.com/drive/1vHZHXOCFqOXIvdsF8j4WBRaAOAjAroTi?usp=sharing). - for **BERT2Rnd** on *WMT14 En \\(\to\\) De*, click [here](). - for **RoBERTa2GPT2** on *DiscoFuse*, click [here](). ***Note***: This notebook only uses a few training, validation, and test data samples for demonstration purposes. To fine-tune an encoder-decoder model on the full training data, the user should change the training and data preprocessing parameters accordingly as highlighted by the comments. ### **Data Preprocessing** In this section, we show how the data can be pre-processed for training. More importantly, we try to give the reader some insight into the process of deciding how to preprocess the data. We will need datasets and transformers to be installed. ```python !pip install datasets==1.0.2 !pip install transformers==4.2.1 ``` Let's start by downloading the *CNN/Dailymail* dataset. ```python import datasets train_data = datasets.load_dataset("cnn_dailymail", "3.0.0", split="train") ``` Alright, let\'s get a first impression of the dataset. Alternatively, the dataset can also be visualized using the awesome [datasets viewer](https://huggingface.co/nlp/viewer/?dataset=cnn_dailymail&config=3.0.0) online. ```python train_data.info.description ``` Our input is called *article* and our labels are called *highlights*. Let\'s now print out the first example of the training data to get a feeling for the data. ```python import pandas as pd from IPython.display import display, HTML from datasets import ClassLabel df = pd.DataFrame(train_data[:1]) del df["id"] for column, typ in train_data.features.items(): if isinstance(typ, ClassLabel): df[column] = df[column].transform(lambda i: typ.names[i]) display(HTML(df.to_html())) ``` ```python OUTPUT: ------- Article: """It's official: U.S. President Barack Obama wants lawmakers to weigh in on whether to use military force in Syria. Obama sent a letter to the heads of the House and Senate on Saturday night, hours after announcing that he believes military action against Syrian targets is the right step to take over the alleged use of chemical weapons. The proposed legislation from Obama asks Congress to approve the use of military force "to deter, disrupt, prevent and degrade the potential for future uses of chemical weapons or other weapons of mass destruction." It's a step that is set to turn an international crisis into a fierce domestic political battle. There are key questions looming over the debate: What did U.N. weapons inspectors find in Syria? What happens if Congress votes no? And how will the Syrian government react? In a televised address from the White House Rose Garden earlier Saturday, the president said he would take his case to Congress, not because he has to -- but because he wants to. "While I believe I have the authority to carry out this military action without specific congressional authorization, I know that the country will be stronger if we take this course, and our actions will be even more effective," he said. "We should have this debate, because the issues are too big for business as usual." 
Obama said top congressional leaders had agreed to schedule a debate when the body returns to Washington on September 9. The Senate Foreign Relations Committee will hold a hearing over the matter on Tuesday, Sen. Robert Menendez said. Transcript: Read Obama's full remarks . Syrian crisis: Latest developments . U.N. inspectors leave Syria . Obama's remarks came shortly after U.N. inspectors left Syria, carrying evidence that will determine whether chemical weapons were used in an attack early last week in a Damascus suburb. "The aim of the game here, the mandate, is very clear -- and that is to ascertain whether chemical weapons were used -- and not by whom," U.N. spokesman Martin Nesirky told reporters on Saturday. But who used the weapons in the reported toxic gas attack in a Damascus suburb on August 21 has been a key point of global debate over the Syrian crisis. Top U.S. officials have said there's no doubt that the Syrian government was behind it, while Syrian officials have denied responsibility and blamed jihadists fighting with the rebels. British and U.S. intelligence reports say the attack involved chemical weapons, but U.N. officials have stressed the importance of waiting for an official report from inspectors. The inspectors will share their findings with U.N. Secretary-General Ban Ki-moon Ban, who has said he wants to wait until the U.N. team's final report is completed before presenting it to the U.N. Security Council. The Organization for the Prohibition of Chemical Weapons, which nine of the inspectors belong to, said Saturday that it could take up to three weeks to analyze the evidence they collected. "It needs time to be able to analyze the information and the samples," Nesirky said. He noted that Ban has repeatedly said there is no alternative to a political solution to the crisis in Syria, and that "a military solution is not an option." Bergen: Syria is a problem from hell for the U.S. Obama: 'This menace must be confronted' Obama's senior advisers have debated the next steps to take, and the president's comments Saturday came amid mounting political pressure over the situation in Syria. Some U.S. lawmakers have called for immediate action while others warn of stepping into what could become a quagmire. Some global leaders have expressed support, but the British Parliament's vote against military action earlier this week was a blow to Obama's hopes of getting strong backing from key NATO allies. On Saturday, Obama proposed what he said would be a limited military action against Syrian President Bashar al-Assad. Any military attack would not be open-ended or include U.S. ground forces, he said. Syria's alleged use of chemical weapons earlier this month "is an assault on human dignity," the president said. A failure to respond with force, Obama argued, "could lead to escalating use of chemical weapons or their proliferation to terrorist groups who would do our people harm. In a world with many dangers, this menace must be confronted." Syria missile strike: What would happen next? Map: U.S. and allied assets around Syria . Obama decision came Friday night . On Friday night, the president made a last-minute decision to consult lawmakers. What will happen if they vote no? It's unclear. A senior administration official told CNN that Obama has the authority to act without Congress -- even if Congress rejects his request for authorization to use force. Obama on Saturday continued to shore up support for a strike on the al-Assad government. 
He spoke by phone with French President Francois Hollande before his Rose Garden speech. "The two leaders agreed that the international community must deliver a resolute message to the Assad regime -- and others who would consider using chemical weapons -- that these crimes are unacceptable and those who violate this international norm will be held accountable by the world," the White House said. Meanwhile, as uncertainty loomed over how Congress would weigh in, U.S. military officials said they remained at the ready. 5 key assertions: U.S. intelligence report on Syria . Syria: Who wants what after chemical weapons horror . Reactions mixed to Obama's speech . A spokesman for the Syrian National Coalition said that the opposition group was disappointed by Obama's announcement. "Our fear now is that the lack of action could embolden the regime and they repeat his attacks in a more serious way," said spokesman Louay Safi. "So we are quite concerned." Some members of Congress applauded Obama's decision. House Speaker John Boehner, Majority Leader Eric Cantor, Majority Whip Kevin McCarthy and Conference Chair Cathy McMorris Rodgers issued a statement Saturday praising the president. "Under the Constitution, the responsibility to declare war lies with Congress," the Republican lawmakers said. "We are glad the president is seeking authorization for any military action in Syria in response to serious, substantive questions being raised." More than 160 legislators, including 63 of Obama's fellow Democrats, had signed letters calling for either a vote or at least a "full debate" before any U.S. action. British Prime Minister David Cameron, whose own attempt to get lawmakers in his country to support military action in Syria failed earlier this week, responded to Obama's speech in a Twitter post Saturday. "I understand and support Barack Obama's position on Syria," Cameron said. An influential lawmaker in Russia -- which has stood by Syria and criticized the United States -- had his own theory. "The main reason Obama is turning to the Congress: the military operation did not get enough support either in the world, among allies of the US or in the United States itself," Alexei Pushkov, chairman of the international-affairs committee of the Russian State Duma, said in a Twitter post. In the United States, scattered groups of anti-war protesters around the country took to the streets Saturday. "Like many other Americans...we're just tired of the United States getting involved and invading and bombing other countries," said Robin Rosecrans, who was among hundreds at a Los Angeles demonstration. What do Syria's neighbors think? Why Russia, China, Iran stand by Assad . Syria's government unfazed . After Obama's speech, a military and political analyst on Syrian state TV said Obama is "embarrassed" that Russia opposes military action against Syria, is "crying for help" for someone to come to his rescue and is facing two defeats -- on the political and military levels. Syria's prime minister appeared unfazed by the saber-rattling. "The Syrian Army's status is on maximum readiness and fingers are on the trigger to confront all challenges," Wael Nader al-Halqi said during a meeting with a delegation of Syrian expatriates from Italy, according to a banner on Syria State TV that was broadcast prior to Obama's address. An anchor on Syrian state television said Obama "appeared to be preparing for an aggression on Syria based on repeated lies." 
A top Syrian diplomat told the state television network that Obama was facing pressure to take military action from Israel, Turkey, some Arabs and right-wing extremists in the United States. "I think he has done well by doing what Cameron did in terms of taking the issue to Parliament," said Bashar Jaafari, Syria's ambassador to the United Nations. Both Obama and Cameron, he said, "climbed to the top of the tree and don't know how to get down." The Syrian government has denied that it used chemical weapons in the August 21 attack, saying that jihadists fighting with the rebels used them in an effort to turn global sentiments against it. British intelligence had put the number of people killed in the attack at more than 350. On Saturday, Obama said "all told, well over 1,000 people were murdered." U.S. Secretary of State John Kerry on Friday cited a death toll of 1,429, more than 400 of them children. No explanation was offered for the discrepancy. Iran: U.S. military action in Syria would spark 'disaster' Opinion: Why strikes in Syria are a bad idea .""" Summary: """Syrian official: Obama climbed to the top of the tree, "doesn't know how to get down"\nObama sends a letter to the heads of the House and Senate .\nObama to seek congressional approval on military action against Syria .\nAim is to determine whether CW were used, not by whom, says U.N. spokesman""" ``` The input data seems to consist of short news articles. Interestingly, the labels appear to be bullet-point-like summaries. At this point, one should probably take a look at a couple of other examples to get a better feeling for the data. One should also notice here that the text is *case-sensitive*. This means that we have to be careful if we want to use *case-insensitive* models. As *CNN/Dailymail* is a summarization dataset, the model will be evaluated using the *ROUGE* metric. Checking the description of *ROUGE* in 🤗datasets, *cf.* [here](https://huggingface.co/metrics/rouge), we can see that the metric is *case-insensitive*, meaning that *upper case* letters will be normalized to *lower case* letters during evaluation. Thus, we can safely leverage *uncased* checkpoints, such as `bert-base-uncased`. Cool! Next, let\'s get a sense of the length of input data and labels. As models compute length in *token-length*, we will make use of the `bert-base-uncased` tokenizer to compute the article and summary length. First, we load the tokenizer. ```python from transformers import BertTokenizerFast tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased") ``` Next, we make use of `.map()` to compute the length of the article and its summary. Since we know that the maximum length that `bert-base-uncased` can process amounts to 512, we are also interested in the percentage of input samples being longer than the maximum length. Similarly, we compute the percentage of summaries that are longer than 64, and 128 respectively. We can define the `.map()` function as follows. ```python # map article and summary len to dict as well as if sample is longer than 512 tokens def map_to_length(x): x["article_len"] = len(tokenizer(x["article"]).input_ids) x["article_longer_512"] = int(x["article_len"] > 512) x["summary_len"] = len(tokenizer(x["highlights"]).input_ids) x["summary_longer_64"] = int(x["summary_len"] > 64) x["summary_longer_128"] = int(x["summary_len"] > 128) return x ``` It should be sufficient to look at the first 10000 samples. We can speed up the mapping by using multiple processes with `num_proc=4`. 
```python
sample_size = 10000

data_stats = train_data.select(range(sample_size)).map(map_to_length, num_proc=4)
```

Having computed the length for the first 10000 samples, we should now average them together. For this, we can make use of the `.map()` function with `batched=True` and `batch_size=-1` to have access to all 10000 samples within the `.map()` function.

```python
def compute_and_print_stats(x):
  if len(x["article_len"]) == sample_size:
    print(
        "Article Mean: {}, %-Articles > 512:{}, Summary Mean:{}, %-Summary > 64:{}, %-Summary > 128:{}".format(
            sum(x["article_len"]) / sample_size,
            sum(x["article_longer_512"]) / sample_size,
            sum(x["summary_len"]) / sample_size,
            sum(x["summary_longer_64"]) / sample_size,
            sum(x["summary_longer_128"]) / sample_size,
        )
    )

output = data_stats.map(
  compute_and_print_stats,
  batched=True,
  batch_size=-1,
)
```

```python
OUTPUT:
-------
Article Mean: 847.6216, %-Articles > 512:0.7355, Summary Mean:57.7742, %-Summary > 64:0.3185, %-Summary > 128:0.0
```

We can see that on average an article contains 848 tokens, with *ca.* 3/4 of the articles being longer than the model's `max_length` of 512. The summary is on average 57 tokens long. Over 30% of our 10000-sample summaries are longer than 64 tokens, but none are longer than 128 tokens.

`bert-base-uncased` is limited to 512 tokens, which means we would have to cut possibly important information from the article. Because most of the important information is often found at the beginning of articles and because we want to be computationally efficient, we decide to stick to `bert-base-uncased` with a `max_length` of 512 in this notebook. This choice is not optimal but has been shown to yield [good results](https://arxiv.org/abs/1907.12461) on CNN/Dailymail. Alternatively, one could leverage long-range sequence models, such as [Longformer](https://huggingface.co/allenai/longformer-large-4096), as the encoder.

Regarding the summary length, we can see that a length of 128 already includes all of the summary labels. 128 is easily within the limits of `bert-base-uncased`, so we decide to limit the generation to 128.

Again, we will make use of the `.map()` function - this time to transform each training batch into a batch of model inputs. `"article"` and `"highlights"` are tokenized and prepared as the Encoder's `"input_ids"` and Decoder's `"decoder_input_ids"` respectively. `"labels"` are shifted automatically to the left for language modeling training. Lastly, it is very important to remember to ignore the loss of the padded labels. In 🤗Transformers this can be done by setting the label to -100. Great, let's write down our mapping function then.

```python
encoder_max_length=512
decoder_max_length=128

def process_data_to_model_inputs(batch):
  # tokenize the inputs and labels
  inputs = tokenizer(batch["article"], padding="max_length", truncation=True, max_length=encoder_max_length)
  outputs = tokenizer(batch["highlights"], padding="max_length", truncation=True, max_length=decoder_max_length)

  batch["input_ids"] = inputs.input_ids
  batch["attention_mask"] = inputs.attention_mask
  batch["labels"] = outputs.input_ids.copy()

  # because BERT automatically shifts the labels, the labels correspond exactly to `decoder_input_ids`.
  # We have to make sure that the PAD token is ignored
  batch["labels"] = [[-100 if token == tokenizer.pad_token_id else token for token in labels] for labels in batch["labels"]]

  return batch
```

In this notebook, we train and evaluate the model just on a few training examples for demonstration and set the `batch_size` to 4 to prevent out-of-memory issues.

The following line reduces the training data to only the first `32` examples. The cell can be commented out or not run for a full training run. Good results were obtained with a `batch_size` of 16.

```python
train_data = train_data.select(range(32))
```

Alright, let's prepare the training data.

```python
# batch_size = 16
batch_size=4

train_data = train_data.map(
    process_data_to_model_inputs,
    batched=True,
    batch_size=batch_size,
    remove_columns=["article", "highlights", "id"]
)
```

Taking a look at the processed training dataset, we can see that the column names `article`, `highlights`, and `id` have been replaced by the arguments expected by the `EncoderDecoderModel`.

```python
train_data
```

```python
OUTPUT:
-------
Dataset(features: {'attention_mask': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'decoder_attention_mask': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'decoder_input_ids': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'input_ids': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'labels': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None)}, num_rows: 32)
```

So far, the data was manipulated using Python's `List` format. Let's convert the data to PyTorch Tensors to be trained on GPU.

```python
train_data.set_format(
    type="torch", columns=["input_ids", "attention_mask", "labels"],
)
```

Awesome, the data processing of the training data is finished. Analogously, we can do the same for the validation data.

First, we load 10% of the validation dataset:

```python
val_data = datasets.load_dataset("cnn_dailymail", "3.0.0", split="validation[:10%]")
```

For demonstration purposes, the validation data is then reduced to just 8 samples,

```python
val_data = val_data.select(range(8))
```

the mapping function is applied,

```python
val_data = val_data.map(
    process_data_to_model_inputs,
    batched=True,
    batch_size=batch_size,
    remove_columns=["article", "highlights", "id"]
)
```

and, finally, the validation data is also converted to PyTorch tensors.

```python
val_data.set_format(
    type="torch", columns=["input_ids", "attention_mask", "labels"],
)
```

Great! Now we can move to warm-starting the `EncoderDecoderModel`.

### **Warm-starting the Encoder-Decoder Model**

This section explains how an Encoder-Decoder model can be warm-started using the `bert-base-uncased` checkpoint. Let's start by importing the `EncoderDecoderModel`. For more detailed information about the `EncoderDecoderModel` class, the reader is advised to take a look at the [documentation](https://huggingface.co/transformers/model_doc/encoderdecoder.html).

```python
from transformers import EncoderDecoderModel
```

In contrast to other model classes in 🤗Transformers, the `EncoderDecoderModel` class has two methods to load pre-trained weights, namely:

1. the "standard" `.from_pretrained(...)` method, which is derived from the general `PretrainedModel.from_pretrained(...)` method and thus corresponds exactly to that of other model classes.
The function expects a single model identifier, *e.g.* `.from_pretrained("google/bert2bert_L-24_wmt_de_en")`, and will load a single `.pt` checkpoint file into the `EncoderDecoderModel` class.

2. a special `.from_encoder_decoder_pretrained(...)` method, which can be used to warm-start an encoder-decoder model from two model identifiers - one for the encoder and one for the decoder. The first model identifier is thereby used to load the *encoder*, via `AutoModel.from_pretrained(...)` (see doc [here](https://huggingface.co/transformers/master/model_doc/auto.html?highlight=automodel#automodel)), and the second model identifier is used to load the *decoder* via `AutoModelForCausalLM` (see doc [here](https://huggingface.co/transformers/master/model_doc/auto.html#automodelforcausallm)).

Alright, let's warm-start our *BERT2BERT* model. As mentioned earlier, we will warm-start both the encoder and decoder with the `"bert-base-uncased"` checkpoint.

```python
bert2bert = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased")
```

```python
OUTPUT:
-------
"""Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertLMHeadModel: ['cls.seq_relationship.weight', 'cls.seq_relationship.bias']
- This IS expected if you are initializing BertLMHeadModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).
- This IS NOT expected if you are initializing BertLMHeadModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of BertLMHeadModel were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['bert.encoder.layer.0.crossattention.self.query.weight', 'bert.encoder.layer.0.crossattention.self.query.bias', 'bert.encoder.layer.0.crossattention.self.key.weight', 'bert.encoder.layer.0.crossattention.self.key.bias', 'bert.encoder.layer.0.crossattention.self.value.weight', 'bert.encoder.layer.0.crossattention.self.value.bias', 'bert.encoder.layer.0.crossattention.output.dense.weight', 'bert.encoder.layer.0.crossattention.output.dense.bias', 'bert.encoder.layer.0.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.0.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.1.crossattention.self.query.weight', 'bert.encoder.layer.1.crossattention.self.query.bias', 'bert.encoder.layer.1.crossattention.self.key.weight', 'bert.encoder.layer.1.crossattention.self.key.bias', 'bert.encoder.layer.1.crossattention.self.value.weight', 'bert.encoder.layer.1.crossattention.self.value.bias', 'bert.encoder.layer.1.crossattention.output.dense.weight', 'bert.encoder.layer.1.crossattention.output.dense.bias', 'bert.encoder.layer.1.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.1.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.2.crossattention.self.query.weight', 'bert.encoder.layer.2.crossattention.self.query.bias', 'bert.encoder.layer.2.crossattention.self.key.weight', 'bert.encoder.layer.2.crossattention.self.key.bias', 'bert.encoder.layer.2.crossattention.self.value.weight', 'bert.encoder.layer.2.crossattention.self.value.bias', 'bert.encoder.layer.2.crossattention.output.dense.weight', 'bert.encoder.layer.2.crossattention.output.dense.bias', 'bert.encoder.layer.2.crossattention.output.LayerNorm.weight',
'bert.encoder.layer.2.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.3.crossattention.self.query.weight', 'bert.encoder.layer.3.crossattention.self.query.bias', 'bert.encoder.layer.3.crossattention.self.key.weight', 'bert.encoder.layer.3.crossattention.self.key.bias', 'bert.encoder.layer.3.crossattention.self.value.weight', 'bert.encoder.layer.3.crossattention.self.value.bias', 'bert.encoder.layer.3.crossattention.output.dense.weight', 'bert.encoder.layer.3.crossattention.output.dense.bias', 'bert.encoder.layer.3.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.3.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.4.crossattention.self.query.weight', 'bert.encoder.layer.4.crossattention.self.query.bias', 'bert.encoder.layer.4.crossattention.self.key.weight', 'bert.encoder.layer.4.crossattention.self.key.bias', 'bert.encoder.layer.4.crossattention.self.value.weight', 'bert.encoder.layer.4.crossattention.self.value.bias', 'bert.encoder.layer.4.crossattention.output.dense.weight', 'bert.encoder.layer.4.crossattention.output.dense.bias', 'bert.encoder.layer.4.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.4.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.5.crossattention.self.query.weight', 'bert.encoder.layer.5.crossattention.self.query.bias', 'bert.encoder.layer.5.crossattention.self.key.weight', 'bert.encoder.layer.5.crossattention.self.key.bias', 'bert.encoder.layer.5.crossattention.self.value.weight', 'bert.encoder.layer.5.crossattention.self.value.bias', 'bert.encoder.layer.5.crossattention.output.dense.weight', 'bert.encoder.layer.5.crossattention.output.dense.bias', 'bert.encoder.layer.5.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.5.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.6.crossattention.self.query.weight', 'bert.encoder.layer.6.crossattention.self.query.bias', 'bert.encoder.layer.6.crossattention.self.key.weight', 'bert.encoder.layer.6.crossattention.self.key.bias', 'bert.encoder.layer.6.crossattention.self.value.weight', 'bert.encoder.layer.6.crossattention.self.value.bias', 'bert.encoder.layer.6.crossattention.output.dense.weight', 'bert.encoder.layer.6.crossattention.output.dense.bias', 'bert.encoder.layer.6.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.6.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.7.crossattention.self.query.weight', 'bert.encoder.layer.7.crossattention.self.query.bias', 'bert.encoder.layer.7.crossattention.self.key.weight', 'bert.encoder.layer.7.crossattention.self.key.bias', 'bert.encoder.layer.7.crossattention.self.value.weight', 'bert.encoder.layer.7.crossattention.self.value.bias', 'bert.encoder.layer.7.crossattention.output.dense.weight', 'bert.encoder.layer.7.crossattention.output.dense.bias', 'bert.encoder.layer.7.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.7.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.8.crossattention.self.query.weight', 'bert.encoder.layer.8.crossattention.self.query.bias', 'bert.encoder.layer.8.crossattention.self.key.weight', 'bert.encoder.layer.8.crossattention.self.key.bias', 'bert.encoder.layer.8.crossattention.self.value.weight', 'bert.encoder.layer.8.crossattention.self.value.bias', 'bert.encoder.layer.8.crossattention.output.dense.weight', 'bert.encoder.layer.8.crossattention.output.dense.bias', 'bert.encoder.layer.8.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.8.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.9.crossattention.self.query.weight', 
'bert.encoder.layer.9.crossattention.self.query.bias', 'bert.encoder.layer.9.crossattention.self.key.weight', 'bert.encoder.layer.9.crossattention.self.key.bias', 'bert.encoder.layer.9.crossattention.self.value.weight', 'bert.encoder.layer.9.crossattention.self.value.bias', 'bert.encoder.layer.9.crossattention.output.dense.weight', 'bert.encoder.layer.9.crossattention.output.dense.bias', 'bert.encoder.layer.9.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.9.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.10.crossattention.self.query.weight', 'bert.encoder.layer.10.crossattention.self.query.bias', 'bert.encoder.layer.10.crossattention.self.key.weight', 'bert.encoder.layer.10.crossattention.self.key.bias', 'bert.encoder.layer.10.crossattention.self.value.weight', 'bert.encoder.layer.10.crossattention.self.value.bias', 'bert.encoder.layer.10.crossattention.output.dense.weight', 'bert.encoder.layer.10.crossattention.output.dense.bias', 'bert.encoder.layer.10.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.10.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.11.crossattention.self.query.weight', 'bert.encoder.layer.11.crossattention.self.query.bias', 'bert.encoder.layer.11.crossattention.self.key.weight', 'bert.encoder.layer.11.crossattention.self.key.bias', 'bert.encoder.layer.11.crossattention.self.value.weight', 'bert.encoder.layer.11.crossattention.self.value.bias', 'bert.encoder.layer.11.crossattention.output.dense.weight', 'bert.encoder.layer.11.crossattention.output.dense.bias', 'bert.encoder.layer.11.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.11.crossattention.output.LayerNorm.bias']""" You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.""" ``` For once, we should take a good look at the warning here. We can see that two weights corresponding to a `"cls"` layer were not used. This should not be a problem because we don\'t need BERT\'s CLS layer for *sequence-to-sequence* tasks. Also, we notice that a lot of weights are \"newly\" or randomly initialized. When taking a closer look these weights all correspond to the cross-attention layer, which is exactly what we would expect after having read the theory above. Let\'s take a closer look at the model. ```python bert2bert ``` ```python OUTPUT: ------- EncoderDecoderModel( (encoder): BertModel( (embeddings): BertEmbeddings( (word_embeddings): Embedding(30522, 768, padding_idx=0) (position_embeddings): Embedding(512, 768) (token_type_embeddings): Embedding(2, 768) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) (encoder): BertEncoder( (layer): ModuleList( (0): BertLayer( (attention): BertAttention( (self): BertSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) ) (output): BertOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ), ... 
, (11): BertLayer( (attention): BertAttention( (self): BertSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) ) (output): BertOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) ) ) (pooler): BertPooler( (dense): Linear(in_features=768, out_features=768, bias=True) (activation): Tanh() ) ) (decoder): BertLMHeadModel( (bert): BertModel( (embeddings): BertEmbeddings( (word_embeddings): Embedding(30522, 768, padding_idx=0) (position_embeddings): Embedding(512, 768) (token_type_embeddings): Embedding(2, 768) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) (encoder): BertEncoder( (layer): ModuleList( (0): BertLayer( (attention): BertAttention( (self): BertSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (crossattention): BertAttention( (self): BertSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) ) (output): BertOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ), ..., (11): BertLayer( (attention): BertAttention( (self): BertSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (crossattention): BertAttention( (self): BertSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) 
) (intermediate): BertIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) ) (output): BertOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) ) ) ) (cls): BertOnlyMLMHead( (predictions): BertLMPredictionHead( (transform): BertPredictionHeadTransform( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) ) (decoder): Linear(in_features=768, out_features=30522, bias=True) ) ) ) ) ``` We see that `bert2bert.encoder` is an instance of `BertModel` and that `bert2bert.decoder` one of `BertLMHeadModel`. However, both instances are now combined into a single `torch.nn.Module` and can thus be saved as a single `.pt` checkpoint file. Let\'s try it out using the standard `.save_pretrained(...)` method. ```python bert2bert.save_pretrained("bert2bert") ``` Similarly, the model can be reloaded using the standard `.from_pretrained(...)` method. ```python bert2bert = EncoderDecoderModel.from_pretrained("bert2bert") ``` Awesome. Let\'s also checkpoint the config. ```python bert2bert.config ``` ```python OUTPUT: ------- EncoderDecoderConfig { "_name_or_path": "bert2bert", "architectures": [ "EncoderDecoderModel" ], "decoder": { "_name_or_path": "bert-base-uncased", "add_cross_attention": true, "architectures": [ "BertForMaskedLM" ], "attention_probs_dropout_prob": 0.1, "bad_words_ids": null, "bos_token_id": null, "chunk_size_feed_forward": 0, "decoder_start_token_id": null, "do_sample": false, "early_stopping": false, "eos_token_id": null, "finetuning_task": null, "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "id2label": { "0": "LABEL_0", "1": "LABEL_1" }, "initializer_range": 0.02, "intermediate_size": 3072, "is_decoder": true, "is_encoder_decoder": false, "label2id": { "LABEL_0": 0, "LABEL_1": 1 }, "layer_norm_eps": 1e-12, "length_penalty": 1.0, "max_length": 20, "max_position_embeddings": 512, "min_length": 0, "model_type": "bert", "no_repeat_ngram_size": 0, "num_attention_heads": 12, "num_beams": 1, "num_hidden_layers": 12, "num_return_sequences": 1, "output_attentions": false, "output_hidden_states": false, "pad_token_id": 0, "prefix": null, "pruned_heads": {}, "repetition_penalty": 1.0, "return_dict": false, "sep_token_id": null, "task_specific_params": null, "temperature": 1.0, "tie_encoder_decoder": false, "tie_word_embeddings": true, "tokenizer_class": null, "top_k": 50, "top_p": 1.0, "torchscript": false, "type_vocab_size": 2, "use_bfloat16": false, "use_cache": true, "vocab_size": 30522, "xla_device": null }, "encoder": { "_name_or_path": "bert-base-uncased", "add_cross_attention": false, "architectures": [ "BertForMaskedLM" ], "attention_probs_dropout_prob": 0.1, "bad_words_ids": null, "bos_token_id": null, "chunk_size_feed_forward": 0, "decoder_start_token_id": null, "do_sample": false, "early_stopping": false, "eos_token_id": null, "finetuning_task": null, "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "id2label": { "0": "LABEL_0", "1": "LABEL_1" }, "initializer_range": 0.02, "intermediate_size": 3072, "is_decoder": false, "is_encoder_decoder": false, "label2id": { "LABEL_0": 0, "LABEL_1": 1 }, "layer_norm_eps": 1e-12, "length_penalty": 1.0, "max_length": 20, "max_position_embeddings": 512, "min_length": 0, "model_type": "bert", "no_repeat_ngram_size": 0, 
"num_attention_heads": 12, "num_beams": 1, "num_hidden_layers": 12, "num_return_sequences": 1, "output_attentions": false, "output_hidden_states": false, "pad_token_id": 0, "prefix": null, "pruned_heads": {}, "repetition_penalty": 1.0, "return_dict": false, "sep_token_id": null, "task_specific_params": null, "temperature": 1.0, "tie_encoder_decoder": false, "tie_word_embeddings": true, "tokenizer_class": null, "top_k": 50, "top_p": 1.0, "torchscript": false, "type_vocab_size": 2, "use_bfloat16": false, "use_cache": true, "vocab_size": 30522, "xla_device": null }, "is_encoder_decoder": true, "model_type": "encoder_decoder" } ``` The config is similarly composed of an encoder config and a decoder config both of which are instances of `BertConfig` in our case. However, the overall config is of type `EncoderDecoderConfig` and is therefore saved as a single `.json` file. In conclusion, one should remember that once an `EncoderDecoderModel` object is instantiated, it provides the same functionality as any other Encoder-Decoder model in 🤗Transformers, *e.g.* [BART](https://huggingface.co/transformers/model_doc/bart.html), [T5](https://huggingface.co/transformers/model_doc/t5.html), [ProphetNet](https://huggingface.co/transformers/model_doc/prophetnet.html), \... The only difference is that an `EncoderDecoderModel` provides the additional `from_encoder_decoder_pretrained(...)` function allowing the model class to be warm-started from any two encoder and decoder checkpoints. On a side-note, if one would want to create a shared encoder-decoder model, the parameter `tie_encoder_decoder=True` can additionally be passed as follows: ```python shared_bert2bert = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-cased", "bert-base-cased", tie_encoder_decoder=True) ``` As a comparison, we can see that the tied model has much fewer parameters as expected. ```python print(f"\n\nNum Params. Shared: {shared_bert2bert.num_parameters()}, Non-Shared: {bert2bert.num_parameters()}") ``` ```python OUTPUT: ------- Num Params. Shared: 137298244, Non-Shared: 247363386 ``` In this notebook, we will however train a non-shared *Bert2Bert* model, so we continue with `bert2bert` and not `shared_bert2bert`. ```python # free memory del shared_bert2bert ``` We have warm-started a `bert2bert` model, but we have not defined all the relevant parameters used for beam search decoding yet. Let\'s start by setting the special tokens. `bert-base-cased` does not have a `decoder_start_token_id` or `eos_token_id`, so we will use its `cls_token_id` and `sep_token_id` respectively. Also, we should define a `pad_token_id` on the config and make sure the correct `vocab_size` is set. ```python bert2bert.config.decoder_start_token_id = tokenizer.cls_token_id bert2bert.config.eos_token_id = tokenizer.sep_token_id bert2bert.config.pad_token_id = tokenizer.pad_token_id bert2bert.config.vocab_size = bert2bert.config.encoder.vocab_size ``` Next, let\'s define all parameters related to beam search decoding. Since `bart-large-cnn` yields good results on CNN/Dailymail, we will just copy its beam search decoding parameters. For more details on what each of these parameters does, please take a look at [this](https://huggingface.co/blog/how-to-generate) blog post or the [docs](https://huggingface.co/transformers/main_classes/model.html#generative-models). 
```python
bert2bert.config.max_length = 142
bert2bert.config.min_length = 56
bert2bert.config.no_repeat_ngram_size = 3
bert2bert.config.early_stopping = True
bert2bert.config.length_penalty = 2.0
bert2bert.config.num_beams = 4
```

Alright, let's now start fine-tuning the warm-started *BERT2BERT* model.

### **Fine-Tuning Warm-Started Encoder-Decoder Models**

In this section, we will show how one can make use of the `Seq2SeqTrainer` to fine-tune a warm-started encoder-decoder model.

Let's first import the `Seq2SeqTrainer` and its training arguments `Seq2SeqTrainingArguments`.

```python
from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments
```

In addition, we need a couple of python packages to make the `Seq2SeqTrainer` work.

```python
!pip install git-python==1.0.3
!pip install rouge_score
!pip install sacrebleu
```

The `Seq2SeqTrainer` extends 🤗Transformers' Trainer for encoder-decoder models. In short, it allows using the `generate(...)` function during evaluation, which is necessary to validate the performance of encoder-decoder models on most *sequence-to-sequence* tasks, such as *summarization*.

For more information on the `Trainer`, one should read through [this](https://huggingface.co/transformers/training.html#trainer) short tutorial.

Let's begin by configuring the `Seq2SeqTrainingArguments`.

The argument `predict_with_generate` should be set to `True`, so that the `Seq2SeqTrainer` runs `generate(...)` on the validation data and passes the generated output as `predictions` to the `compute_metrics(...)` function which we will define later. The additional arguments are derived from `TrainingArguments` and can be read up on [here](https://huggingface.co/transformers/main_classes/trainer.html#trainingarguments). For a complete training run, one should change those arguments as needed. Good default values are commented out below.

For more information on the `Seq2SeqTrainer`, the reader is advised to take a look at the [code](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/seq2seq_trainer.py).

```python
training_args = Seq2SeqTrainingArguments(
    predict_with_generate=True,
    evaluation_strategy="steps",
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    fp16=True,
    output_dir="./",
    logging_steps=2,
    save_steps=10,
    eval_steps=4,
    # logging_steps=1000,
    # save_steps=500,
    # eval_steps=7500,
    # warmup_steps=2000,
    # save_total_limit=3,
)
```

Also, we need to define a function to correctly compute the ROUGE score during validation. Since we activated `predict_with_generate`, the `compute_metrics(...)` function expects `predictions` that were obtained using the `generate(...)` function. Like most summarization tasks, CNN/Dailymail is typically evaluated using the ROUGE score.

Let's first load the ROUGE metric using the 🤗datasets library.

```python
rouge = datasets.load_metric("rouge")
```

Next, we will define the `compute_metrics(...)` function. The `rouge` metric computes the score from two lists of strings. Thus we decode both the `predictions` and `labels` - making sure that `-100` is correctly replaced by the `pad_token_id` - and remove all special tokens by setting `skip_special_tokens=True`.
```python
def compute_metrics(pred):
    labels_ids = pred.label_ids
    pred_ids = pred.predictions

    pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True)
    labels_ids[labels_ids == -100] = tokenizer.pad_token_id
    label_str = tokenizer.batch_decode(labels_ids, skip_special_tokens=True)

    rouge_output = rouge.compute(predictions=pred_str, references=label_str, rouge_types=["rouge2"])["rouge2"].mid

    return {
        "rouge2_precision": round(rouge_output.precision, 4),
        "rouge2_recall": round(rouge_output.recall, 4),
        "rouge2_fmeasure": round(rouge_output.fmeasure, 4),
    }
```

Great, now we can pass all arguments to the `Seq2SeqTrainer` and start finetuning. Executing the following cell will take *ca.* 10 minutes ☕.

Finetuning *BERT2BERT* on the complete *CNN/Dailymail* training data takes *ca.* 8h on a single *TITAN RTX* GPU.

```python
# instantiate trainer
trainer = Seq2SeqTrainer(
    model=bert2bert,
    tokenizer=tokenizer,
    args=training_args,
    compute_metrics=compute_metrics,
    train_dataset=train_data,
    eval_dataset=val_data,
)
trainer.train()
```

Awesome, we should now be fully equipped to finetune a warm-started encoder-decoder model. To check the result of our fine-tuning, let's take a look at the saved checkpoints.

```python
!ls
```

```bash
OUTPUT:
-------
bert2bert      checkpoint-20  runs         seq2seq_trainer.py
checkpoint-10  __pycache__    sample_data  seq2seq_training_args.py
```

Finally, we can load the checkpoint as usual via the `EncoderDecoderModel.from_pretrained(...)` method.

```python
dummy_bert2bert = EncoderDecoderModel.from_pretrained("./checkpoint-20")
```

### **Evaluation**

In a final step, we might want to evaluate the *BERT2BERT* model on the test data.

To start, instead of loading the dummy model, let's load a *BERT2BERT* model that was finetuned on the full training dataset. Also, we load its tokenizer, which is just a copy of `bert-base-cased`'s tokenizer.

```python
from transformers import BertTokenizer

bert2bert = EncoderDecoderModel.from_pretrained("patrickvonplaten/bert2bert_cnn_daily_mail").to("cuda")
tokenizer = BertTokenizer.from_pretrained("patrickvonplaten/bert2bert_cnn_daily_mail")
```

Next, we load just 2% of *CNN/Dailymail's* test data. For the full evaluation, one should obviously use 100% of the data.

```python
test_data = datasets.load_dataset("cnn_dailymail", "3.0.0", split="test[:2%]")
```

Now, we can again leverage 🤗datasets' handy `map()` function to generate a summary for each test sample. For each data sample we:

- first, tokenize the `"article"`,
- second, generate the output token ids, and
- third, decode the output token ids to obtain our predicted summary.

```python
def generate_summary(batch):
    # cut off at BERT max length 512
    inputs = tokenizer(batch["article"], padding="max_length", truncation=True, max_length=512, return_tensors="pt")
    input_ids = inputs.input_ids.to("cuda")
    attention_mask = inputs.attention_mask.to("cuda")

    outputs = bert2bert.generate(input_ids, attention_mask=attention_mask)

    output_str = tokenizer.batch_decode(outputs, skip_special_tokens=True)

    batch["pred_summary"] = output_str

    return batch
```

Let's run the map function to obtain the *results* dictionary that has the model's predicted summary stored for each sample. Executing the following cell may take *ca.* 10min ☕.

```python
batch_size = 16  # change to 64 for full evaluation

results = test_data.map(generate_summary, batched=True, batch_size=batch_size, remove_columns=["article"])
```

Finally, we compute the ROUGE score.
```python
rouge.compute(predictions=results["pred_summary"], references=results["highlights"], rouge_types=["rouge2"])["rouge2"].mid
```

```python
OUTPUT:
-------
Score(precision=0.10389454113300968, recall=0.1564771201053348, fmeasure=0.12175271663717585)
```

That's it. We've shown how to warm-start a *BERT2BERT* model and fine-tune/evaluate it on the CNN/Dailymail dataset. The fully trained *BERT2BERT* model is uploaded to the 🤗model hub under [patrickvonplaten/bert2bert\_cnn\_daily\_mail](https://huggingface.co/patrickvonplaten/bert2bert_cnn_daily_mail).

The model achieves a ROUGE-2 score of **18.22** on the full evaluation data, which is even a little better than reported in the paper.

For some summarization examples, the reader is advised to use the online inference API of the model, [here](https://huggingface.co/patrickvonplaten/bert2bert_cnn_daily_mail).

Thanks a lot to Sascha Rothe, Shashi Narayan, and Aliaksei Severyn from Google Research, and Victor Sanh, Sylvain Gugger, and Thomas Wolf from 🤗Hugging Face for proof-reading and giving very much appreciated feedback.
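As a short addendum, running the uploaded checkpoint locally (instead of through the inference API) takes only a few lines. This is a minimal sketch that mirrors the evaluation code above; the `article` string is a placeholder for any news text you want to summarize:

```python
from transformers import BertTokenizer, EncoderDecoderModel

model = EncoderDecoderModel.from_pretrained("patrickvonplaten/bert2bert_cnn_daily_mail")
tokenizer = BertTokenizer.from_pretrained("patrickvonplaten/bert2bert_cnn_daily_mail")

article = "..."  # placeholder: paste any news article here

# tokenize, generate with the beam-search parameters stored in the model config, and decode
inputs = tokenizer(article, padding="max_length", truncation=True, max_length=512, return_tensors="pt")
output_ids = model.generate(inputs.input_ids, attention_mask=inputs.attention_mask)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```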
How we sped up transformer inference 100x for 🤗 API customers
Narsil
January 18, 2021
accelerated-inference
analysis, nlp
https://huggingface.co/blog/accelerated-inference
# How we sped up transformer inference 100x for 🤗 API customers

🤗 Transformers has become the default library for data scientists all around the world to explore state of the art NLP models and build new NLP features. With over 5,000 pre-trained and fine-tuned models available, in over 250 languages, it is a rich playground, easily accessible whichever framework you are working in.

While experimenting with models in 🤗 Transformers is easy, deploying these large models into production with maximum performance, and managing them in an architecture that scales with usage, is a **hard engineering challenge** for any Machine Learning Engineer.

This 100x performance gain and built-in scalability are why subscribers of our hosted [Accelerated Inference API](https://huggingface.co/pricing) chose to build their NLP features on top of it. To get to the **last 10x of performance** boost, the optimizations need to be low-level, specific to the model, and to the target hardware.

This post shares some of our approaches squeezing every drop of compute juice for our customers. 🍋

## Getting to the first 10x speedup

The first leg of the optimization journey is the most accessible, all about using the best combination of techniques offered by the [Hugging Face libraries](https://github.com/huggingface/), independent of the target hardware.

We use the most efficient methods built into Hugging Face model [pipelines](https://huggingface.co/transformers/main_classes/pipelines.html) to reduce the amount of computation during each forward pass. These methods are specific to the architecture of the model and the target task. For instance, for a text-generation task on a GPT architecture, we reduce the dimensionality of the attention matrices computation by focusing on the new attention of the last token in each pass:

| Naive version | Optimized version |
|:---------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|
|![](/blog/assets/09_accelerated_inference/unoptimized_graph.png)|![](/blog/assets/09_accelerated_inference/optimized_graph.png)|

Tokenization is often a bottleneck for efficiency during inference. We use the most efficient methods from the [🤗 Tokenizers](https://github.com/huggingface/tokenizers/) library, leveraging the Rust implementation of the model tokenizer in combination with smart caching to get up to 10x speedup for the overall latency.

Leveraging the latest features of the Hugging Face libraries, we achieve a reliable 10x speed up compared to an out-of-the-box deployment for a given model/hardware pair. As new releases of Transformers and Tokenizers typically ship every month, our API customers do not need to constantly adapt to new optimization opportunities; their models just keep running faster.

## Compilation FTW: the hard to get 10x

Now this is where it gets really tricky. In order to get the best possible performance we will need to modify the model and compile it, targeting the specific hardware for inference. The choice of hardware itself will depend on both the model (size in memory) and the demand profile (request batching). Even when serving predictions from the same model, some API customers may benefit more from Accelerated CPU inference, and others from Accelerated GPU inference, each with different optimization techniques and libraries applied.

Once the compute platform has been selected for the use case, we can go to work.
Here are some CPU-specific techniques that can be applied with a static graph:

- Optimizing the graph (removing unused flow)
- Fusing layers (with specific CPU instructions)
- Quantizing the operations

Using out-of-the-box functions from open source libraries (e.g. 🤗 Transformers with [ONNX Runtime](https://github.com/microsoft/onnxruntime)) won't produce the best results, or could result in a significant loss of accuracy, particularly during quantization. There is no silver bullet, and the best path is different for each model architecture. But diving deep into the Transformers code and ONNX Runtime documentation, the stars can be aligned to achieve another 10x speedup.

## Unfair advantage

The Transformer architecture was a decisive inflection point for Machine Learning performance, starting with NLP, and over the last 3 years the rate of improvement in Natural Language Understanding and Generation has been steep and accelerating. Another metric that accelerated accordingly is the average size of the models, from the 110M parameters of BERT to the now 175Bn of GPT-3.

This trend has introduced daunting challenges for Machine Learning Engineers when deploying the latest models into production. While a 100x speedup is a high bar to reach, that's what it takes to serve predictions with acceptable latency in real-time consumer applications.

To reach that bar, as Machine Learning Engineers at Hugging Face we certainly have an unfair advantage sitting in the same (virtual) offices as the 🤗 Transformers and 🤗 Tokenizers maintainers 😬. We are also extremely lucky for the rich partnerships we have developed through open source collaborations with hardware and cloud vendors like Intel, NVIDIA, Qualcomm, Amazon and Microsoft that enable us to tune our models and infrastructure with the latest hardware optimization techniques.

If you want to feel the speed on our infrastructure, start a [free trial](https://huggingface.co/pricing) and we'll get in touch. If you want to benefit from our experience optimizing inference on your own infrastructure, participate in our [🤗 Expert Acceleration Program](https://huggingface.co/support).
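To make the quantization bullet above a bit more concrete, here is a minimal sketch of dynamic quantization with ONNX Runtime on an already-exported graph. The file names are placeholders, and this is only an illustration of the general technique, not the exact recipe used for the Accelerated Inference API:

```python
from onnxruntime import InferenceSession
from onnxruntime.quantization import quantize_dynamic, QuantType

# "model.onnx" is assumed to be a transformer graph exported beforehand,
# e.g. with torch.onnx.export or transformers' ONNX conversion utilities.
quantize_dynamic(
    model_input="model.onnx",
    model_output="model-quantized.onnx",
    weight_type=QuantType.QInt8,  # store weights as int8
)

# The quantized graph loads like any other ONNX model
session = InferenceSession("model-quantized.onnx")
print([i.name for i in session.get_inputs()])
```

As the post notes, naive quantization like this can hurt accuracy on some architectures, so the result should always be validated against the original model.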
Fit More and Train Faster With ZeRO via DeepSpeed and FairScale
stas
January 19, 2021
zero-deepspeed-fairscale
guide
https://huggingface.co/blog/zero-deepspeed-fairscale
# Fit More and Train Faster With ZeRO via DeepSpeed and FairScale

##### A guest blog post by Hugging Face fellow Stas Bekman

As recent Machine Learning models have been growing much faster than the amount of GPU memory added to newly released cards, many users are unable to train or even just load some of those huge models onto their hardware. While there is an ongoing effort to distill some of those huge models to a more manageable size, that effort isn't producing models small enough soon enough.

In the fall of 2019 Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase and Yuxiong He published a paper: [ZeRO: Memory Optimizations Toward Training Trillion Parameter Models](https://arxiv.org/abs/1910.02054), which contains a plethora of ingenious new ideas on how one could make their hardware do much more than was thought possible before. A short time later [DeepSpeed](https://github.com/microsoft/deepspeed) was released, giving the world an open source implementation of most of the ideas in that paper (a few ideas are still in the works), and in parallel a team from Facebook released [FairScale](https://github.com/facebookresearch/fairscale/), which also implemented some of the core ideas from the ZeRO paper.

If you use the Hugging Face Trainer, as of `transformers` v4.2.0 you have experimental support for DeepSpeed's and FairScale's ZeRO features. The new `--sharded_ddp` and `--deepspeed` command line `Trainer` arguments provide FairScale and DeepSpeed integration respectively. Here is [the full documentation](https://huggingface.co/transformers/master/main_classes/trainer.html#trainer-integrations).

This blog post will describe how you can benefit from ZeRO regardless of whether you own just a single GPU or a whole stack of them.

# Huge Speedups with Multi-GPU Setups

Let's run a small finetuning experiment on a translation task, using a `t5-large` model and the `finetune_trainer.py` script which you can find under [`examples/seq2seq`](https://github.com/huggingface/transformers/tree/master/examples/seq2seq) in the `transformers` GitHub repo.

We have 2x 24GB (Titan RTX) GPUs to test with.

These are just proof-of-concept benchmarks, so surely things can be improved further; we will benchmark on a small sample of 2000 items for training and 500 items for evaluation to perform the comparisons. By default, evaluation performs a beam search of size 4, so it's slower than training on the same number of samples; that's why 4x fewer eval items were used in these tests.

Here are the key command line arguments of our baseline:

```
export BS=16
python -m torch.distributed.launch --nproc_per_node=2 ./finetune_trainer.py \
--model_name_or_path t5-large --n_train 2000 --n_val 500 \
--per_device_eval_batch_size $BS --per_device_train_batch_size $BS \
--task translation_en_to_ro [...]
```

We are just using `DistributedDataParallel` (DDP) and nothing else to boost the performance for the baseline. I was able to fit a batch size (BS) of 16 before hitting an Out of Memory (OOM) error.

Note that, for simplicity and to make it easier to understand, I have only shown the command line arguments important for this demonstration. You will find the complete command line at [this post](https://github.com/huggingface/transformers/issues/8771#issuecomment-759248400).

Next, we are going to re-run the benchmark, each time adding one of the following:

1. `--fp16`
2. `--sharded_ddp` (fairscale)
3. `--sharded_ddp --fp16` (fairscale)
4. `--deepspeed` without cpu offloading
5. `--deepspeed` with cpu offloading
Since the key optimization here is that each technique deploys GPU RAM more efficiently, we will try to continually increase the batch size and expect the training and evaluation to complete faster (while keeping the metrics steady or even improving some, but we won't focus on these here).

Remember that training and evaluation stages are very different from each other: during training model weights are being modified, gradients are being calculated, and optimizer states are stored. During evaluation, none of these happen, but in this particular task of translation the model will try to search for the best hypothesis, so it actually has to do multiple runs before it's satisfied. That's why it's not fast, especially when a model is large.

Let's look at the results of these six test runs:

| Method                    | max BS | train time  | eval time   |
|---------------------------|--------|-------------|-------------|
| baseline                  | 16     | 30.9458     | 56.3310     |
| fp16                      | 20     | 21.4943     | 53.4675     |
| sharded_ddp               | 30     | 25.9085     | 47.5589     |
| sharded_ddp+fp16          | 30     | 17.3838     | 45.6593     |
| deepspeed w/o cpu offload | 40     | **10.4007** | 34.9289     |
| deepspeed w/ cpu offload  | **50** | 20.9706     | **32.1409** |

It's easy to see that both FairScale and DeepSpeed provide great improvements over the baseline, in the total train and evaluation time, but also in the batch size. DeepSpeed implements more magic as of this writing and seems to be the short-term winner, but FairScale is easier to deploy. For DeepSpeed you need to write a simple configuration file and change your command line's launcher, while with FairScale you only need to add the `--sharded_ddp` command line argument, so you may want to try it first as it's the lowest-hanging fruit.

Following the 80:20 rule, I have only spent a few hours on these benchmarks and I haven't tried to squeeze every MB and second by refining the command line arguments and configuration, since it's pretty obvious from the simple table what you'd want to try next. When you face a real project that will be running for hours and perhaps days, definitely spend more time to make sure you use the most optimal hyper-parameters to get your job done faster and at a minimal cost.

If you would like to experiment with this benchmark yourself or want to know more details about the hardware and software used to run it, please refer to [this post](https://github.com/huggingface/transformers/issues/8771#issuecomment-759248400).

# Fitting A Huge Model Onto One GPU

While FairScale gives us a boost only with multiple GPUs, DeepSpeed has a gift even for those of us with a single GPU.

Let's try the impossible - let's train [t5-3b](https://huggingface.co/t5-3b) on a 24GB RTX-3090 card.

First let's try to finetune the huge `t5-3b` using the normal single GPU setup:

```
export BS=1
CUDA_VISIBLE_DEVICES=0 ./finetune_trainer.py \
--model_name_or_path t5-3b --n_train 60 --n_val 10 \
--per_device_eval_batch_size $BS --per_device_train_batch_size $BS \
--task translation_en_to_ro --fp16 [...]
```

No cookie, even with BS=1 we get:

```
RuntimeError: CUDA out of memory. Tried to allocate 64.00 MiB (GPU 0; 23.70 GiB total capacity; 21.37 GiB already allocated; 45.69 MiB free; 22.05 GiB reserved in total by PyTorch)
```

Note that, as earlier, I'm showing only the important parts and the full command line arguments can be found [here](https://github.com/huggingface/transformers/issues/8771#issuecomment-759176685).
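DeepSpeed is driven by the small JSON configuration file passed via `--deepspeed` (the `ds_config_1gpu.json` used below). As a purely illustrative sketch (the exact file used for these benchmarks is linked in the posts above and may differ), a ZeRO stage 2 configuration with CPU offload can be written out from Python roughly like this:

```python
import json

# Illustrative only: field names follow DeepSpeed's documented config schema
# at the time of writing; the actual ds_config_1gpu.json from the linked post may differ.
ds_config = {
    "fp16": {"enabled": True},
    "zero_optimization": {
        "stage": 2,
        "cpu_offload": True,           # ZeRO-Offload: park optimizer states in CPU RAM
        "contiguous_gradients": True,  # reduce memory fragmentation during backward
        "overlap_comm": True,          # overlap gradient reduction with computation
    },
    "zero_allow_untested_optimizer": True,
}

with open("ds_config_1gpu.json", "w") as f:
    json.dump(ds_config, f, indent=2)
```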
Now update your `transformers` to v4.2.0 or higher, then install DeepSpeed: ``` pip install deepspeed ``` and let's try again, this time adding DeepSpeed to the command line: ``` export BS=20 CUDA_VISIBLE_DEVICES=0 deepspeed --num_gpus=1 ./finetune_trainer.py \ --model_name_or_path t5-3b --n_train 60 --n_val 10 \ --per_device_eval_batch_size $BS --per_device_train_batch_size $BS \ --task translation_en_to_ro --fp16 --deepspeed ds_config_1gpu.json [...] ``` et voila! We get a batch size of 20 trained just fine. I could probably push it even further. The program failed with OOM at ``BS=30``. Here are the relevant results: ``` 2021-01-12 19:06:31 | INFO | __main__ | train_n_objs = 60 2021-01-12 19:06:31 | INFO | __main__ | train_runtime = 8.8511 2021-01-12 19:06:35 | INFO | __main__ | val_n_objs = 10 2021-01-12 19:06:35 | INFO | __main__ | val_runtime = 3.5329 ``` We can't compare these to the baseline, since the baseline won't even start and immediately failed with OOM. Simply amazing! I used only a tiny sample since I was primarily interested in being able to train and evaluate with this huge model that normally won't fit onto a 24GB GPU. If you would like to experiment with this benchmark yourself or want to know more details about the hardware and software used to run it, please, refer to [this post](https://github.com/huggingface/transformers/issues/8771#issuecomment-759176685). # The Magic Behind ZeRO Since `transformers` only integrated these fabulous solutions and wasn't part of their invention I will share the resources where you can discover all the details for yourself. But here are a few quick insights that may help understand how ZeRO manages these amazing feats. The key feature of ZeRO is adding distributed data storage to the quite familiar concept of data parallel training. The computation on each GPU is exactly the same as data parallel training, but the parameter, gradients and optimizer states are stored in a distributed/partitioned fashion across all the GPUs and fetched only when needed. The following diagram, coming from this [blog post](https://www.microsoft.com/en-us/research/blog/zero-deepspeed-new-system-optimizations-enable-training-models-with-over-100-billion-parameters/) illustrates how this works: ![ZeRO Partitioning](./assets/11_zero_deepspeed_fairscale/zero-partitioning.png) ZeRO's ingenious approach is to partition the params, gradients and optimizer states equally across all GPUs and give each GPU just a single partition (also referred to as a shard). This leads to zero overlap in data storage between GPUs. At runtime each GPU builds up each layer's data on the fly by asking participating GPUs to send the information it's lacking. This idea could be difficult to grasp, and you will find my attempt at an explanation [here](https://github.com/huggingface/transformers/issues/8771#issuecomment-758418429). As of this writing FairScale and DeepSpeed only perform Partitioning (Sharding) for the optimizer states and gradients. Model parameters sharding is supposedly coming soon in DeepSpeed and FairScale. The other powerful feature is ZeRO-Offload ([paper](https://arxiv.org/abs/2101.06840)). This feature offloads some of the processing and memory needs to the host's CPU, thus allowing more to be fit onto the GPU. You saw its dramatic impact in the success at running `t5-3b` on a 24GB GPU. One other problem that a lot of people complain about on pytorch forums is GPU memory fragmentation. 
One often gets an OOM error that may look like this:

```
RuntimeError: CUDA out of memory. Tried to allocate 1.48 GiB (GPU 0; 23.65 GiB total capacity; 16.22 GiB already allocated; 111.12 MiB free; 22.52 GiB reserved in total by PyTorch)
```

The program wants to allocate ~1.5GB and the GPU still has some 6-7GB of unused memory, but it reports only ~100MB of contiguous free memory and fails with the OOM error. This happens as chunks of different sizes get allocated and de-allocated again and again, and over time holes get created, leading to memory fragmentation, where there is a lot of unused memory but no contiguous chunks of the desired size. In the example above the program could probably allocate 100MB of contiguous memory, but clearly it can't get 1.5GB in a single chunk.

DeepSpeed attacks this problem by managing GPU memory by itself and ensuring that long-term memory allocations don't mix with short-term ones, so that there is much less fragmentation. While the paper doesn't go into details, the [source code](https://github.com/microsoft/DeepSpeed) is available, so it's possible to see how DeepSpeed accomplishes that.

As ZeRO stands for Zero Redundancy Optimizer, it's easy to see that it lives up to its name.

# The Future

Besides the anticipated upcoming support for model params sharding in DeepSpeed, it has already released new features that we haven't explored yet. These include DeepSpeed Sparse Attention and 1-bit Adam, which are supposed to decrease memory usage and dramatically reduce inter-GPU communication overhead, which should lead to even faster training and support for even bigger models.

I trust we are going to see new gifts from the FairScale team as well; I believe they are working on ZeRO stage 3 too.

Even more exciting, [ZeRO is being integrated into pytorch](https://github.com/pytorch/pytorch/pull/46750).

# Deployment

If you found the results shared in this blog post enticing, please proceed [here](https://huggingface.co/transformers/master/main_classes/trainer.html#trainer-integrations) for details on how to use DeepSpeed and FairScale with the `transformers` Trainer.

You can, of course, modify your own trainer to integrate DeepSpeed and FairScale, based on each project's instructions, or you can "cheat" and see how we did it in the `transformers` Trainer. If you go for the latter, `grep` the source code for `deepspeed` and/or `sharded_ddp` to find your way around.

The good news is that ZeRO requires no model modification. The only required modifications are in the training code.

# Issues

If you encounter any issues with the integration part of either of these projects please open an Issue in [transformers](https://github.com/huggingface/transformers/issues).

But if you have problems with DeepSpeed and FairScale installation, configuration and deployment, you need to ask the experts in those domains, so please use the [DeepSpeed Issue](https://github.com/microsoft/DeepSpeed/issues) or [FairScale Issue](https://github.com/facebookresearch/fairscale/issues) trackers instead.

# Resources

While you don't really need to understand how any of these projects work and you can just deploy them via the `transformers` Trainer, should you want to figure out the whys and hows, please refer to the following resources.

* [FairScale GitHub](https://github.com/facebookresearch/fairscale)
* [DeepSpeed GitHub](https://github.com/microsoft/DeepSpeed)
* Paper: [ZeRO: Memory Optimizations Toward Training Trillion Parameter Models](https://arxiv.org/abs/1910.02054).
  The paper is very interesting, but it's very terse.
* Here is a good [video discussion](https://www.youtube.com/watch?v=tC01FRB0M7w) of the paper with visuals
* Paper: [ZeRO-Offload: Democratizing Billion-Scale Model Training](https://arxiv.org/abs/2101.06840). Just published, this one goes into the details of the ZeRO-Offload feature.
* DeepSpeed [configuration and tutorials](https://www.deepspeed.ai/getting-started/)
* In addition to the paper, I highly recommend reading the following detailed blog posts with diagrams:
  - [DeepSpeed: Extreme-scale model training for everyone](https://www.microsoft.com/en-us/research/blog/deepspeed-extreme-scale-model-training-for-everyone/)
  - [ZeRO & DeepSpeed: New system optimizations enable training models with over 100 billion parameters](https://www.microsoft.com/en-us/research/blog/zero-deepspeed-new-system-optimizations-enable-training-models-with-over-100-billion-parameters/)
  - [Turing-NLG: A 17-billion-parameter language model by Microsoft](https://www.microsoft.com/en-us/research/blog/turing-nlg-a-17-billion-parameter-language-model-by-microsoft/)
* DeepSpeed [examples on GitHub](https://github.com/microsoft/DeepSpeedExamples)

# Gratitude

We were quite astonished at the amazing level of support we received from the FairScale and DeepSpeed developer teams while working on integrating those projects into `transformers`.

In particular I'd like to thank:

* Benjamin Lefaudeux [@blefaudeux](https://github.com/blefaudeux)
* Mandeep Baines [@msbaines](https://github.com/msbaines)

from the FairScale team and:

* Jeff Rasley [@jeffra](https://github.com/jeffra)
* Olatunji Ruwase [@tjruwase](https://github.com/tjruwase)
* Samyam Rajbhandari [@samyam](https://github.com/samyam)

from the DeepSpeed team for your generous and caring support and prompt resolution of the issues we have encountered.

And thanks to Hugging Face for providing access to the hardware the benchmarks were run on.

Sylvain Gugger [@sgugger](https://github.com/sgugger/) and Stas Bekman [@stas00](https://github.com/stas00) worked on the integration of these projects.
Faster TensorFlow models in Hugging Face Transformers
jplu
January 26, 2021
tf-serving
guide, nlp
https://huggingface.co/blog/tf-serving
# Faster TensorFlow models in Hugging Face Transformers

<a target="_blank" href="https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/10_tf_serving.ipynb">
    <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab">
</a>

In the last few months, the Hugging Face team has been working hard on improving Transformers’ TensorFlow models to make them more robust and faster. The recent improvements are mainly focused on two aspects:

1. Computational performance: BERT, RoBERTa, ELECTRA and MPNet have been improved in order to have a much faster computation time. This gain in computational performance is noticeable for all the computational aspects: graph/eager mode, TF Serving and for CPU/GPU/TPU devices.
2. TensorFlow Serving: each of these TensorFlow models can be deployed with TensorFlow Serving to benefit from this gain in computational performance for inference.

## Computational Performance

To demonstrate the computational performance improvements, we have done a thorough benchmark where we compare BERT's performance with TensorFlow Serving in v4.2.0 to the official implementation from [Google](https://github.com/tensorflow/models/tree/master/official/nlp/bert). The benchmark has been run on a V100 GPU using a sequence length of 128 (times are in milliseconds):

| Batch size | Google implementation | v4.2.0 implementation | Relative difference Google/v4.2.0 implem |
|:----------:|:---------------------:|:---------------------:|:----------------------------------------:|
| 1 | 6.7 | 6.26 | 6.79% |
| 2 | 9.4 | 8.68 | 7.96% |
| 4 | 14.4 | 13.1 | 9.45% |
| 8 | 24 | 21.5 | 10.99% |
| 16 | 46.6 | 42.3 | 9.67% |
| 32 | 83.9 | 80.4 | 4.26% |
| 64 | 171.5 | 156 | 9.47% |
| 128 | 338.5 | 309 | 9.11% |

The current implementation of BERT in v4.2.0 is faster than the Google implementation by up to ~10%. It is also twice as fast as the implementation in the 4.1.1 release.

## TensorFlow Serving

The previous section demonstrates that the brand new BERT model got a dramatic increase in computational performance in the latest version of Transformers. In this section, we will show you step-by-step how to deploy a BERT model with TensorFlow Serving to benefit from the increase in computational performance in a production environment.

### What is TensorFlow Serving?

TensorFlow Serving belongs to the set of tools provided by [TensorFlow Extended (TFX)](https://www.tensorflow.org/tfx/guide/serving) that makes the task of deploying a model to a server easier than ever. TensorFlow Serving provides two APIs, one that can be called with HTTP requests and another one using gRPC, to run inference on the server.

### What is a SavedModel?

A SavedModel contains a standalone TensorFlow model, including its weights and its architecture. It does not require the original source code of the model in order to be run, which makes it useful for sharing or deploying with any backend that supports reading a SavedModel, such as Java, Go, C++ or JavaScript among others. The internal structure of a SavedModel is represented as follows:

```
savedmodel
/assets -> assets needed by the model (if any)
/variables -> the model checkpoints containing the weights
saved_model.pb -> protobuf file representing the model graph
```

### How to install TensorFlow Serving?

There are three ways to install and use TensorFlow Serving:
- through a Docker container,
- through an apt package,
- or using [pip](https://pypi.org/project/pip/).
To make things easier and compatible with all existing operating systems, we will use Docker in this tutorial.

### How to create a SavedModel?

SavedModel is the format expected by TensorFlow Serving. Since Transformers v4.2.0, creating a SavedModel comes with three additional features:

1. The sequence length can be modified freely between runs.
2. All model inputs are available for inference.
3. The hidden states or attentions are now grouped into a single output when returning them with `output_hidden_states=True` or `output_attentions=True`.

Below, you can find the input and output representations of a `TFBertForSequenceClassification` saved as a TensorFlow SavedModel:

```
The given SavedModel SignatureDef contains the following input(s):
  inputs['attention_mask'] tensor_info:
      dtype: DT_INT32
      shape: (-1, -1)
      name: serving_default_attention_mask:0
  inputs['input_ids'] tensor_info:
      dtype: DT_INT32
      shape: (-1, -1)
      name: serving_default_input_ids:0
  inputs['token_type_ids'] tensor_info:
      dtype: DT_INT32
      shape: (-1, -1)
      name: serving_default_token_type_ids:0
The given SavedModel SignatureDef contains the following output(s):
  outputs['attentions'] tensor_info:
      dtype: DT_FLOAT
      shape: (12, -1, 12, -1, -1)
      name: StatefulPartitionedCall:0
  outputs['logits'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 2)
      name: StatefulPartitionedCall:1
Method name is: tensorflow/serving/predict
```

To directly pass `inputs_embeds` (the token embeddings) instead of `input_ids` (the token IDs) as input, we need to subclass the model to have a new serving signature. The following snippet of code shows how to do so:

```python
from transformers import TFBertForSequenceClassification
import tensorflow as tf

# Creation of a subclass in order to define a new serving signature
class MyOwnModel(TFBertForSequenceClassification):
    # Decorate the serving method with the new input_signature
    # an input_signature represents the name, the data type and the shape of an expected input
    @tf.function(input_signature=[{
        "inputs_embeds": tf.TensorSpec((None, None, 768), tf.float32, name="inputs_embeds"),
        "attention_mask": tf.TensorSpec((None, None), tf.int32, name="attention_mask"),
        "token_type_ids": tf.TensorSpec((None, None), tf.int32, name="token_type_ids"),
    }])
    def serving(self, inputs):
        # call the model to process the inputs
        output = self.call(inputs)

        # return the formatted output
        return self.serving_output(output)

# Instantiate the model with the new serving method
model = MyOwnModel.from_pretrained("bert-base-cased")
# save it with saved_model=True in order to have a SavedModel version along with the h5 weights.
model.save_pretrained("my_model", saved_model=True)
```

The `serving` method has to be overridden and decorated with a `tf.function` that specifies the new `input_signature` argument. See the [official documentation](https://www.tensorflow.org/api_docs/python/tf/function#args_1) to learn more about the `input_signature` argument. The `serving` method defines how the SavedModel will behave when deployed with TensorFlow Serving.
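If you want to double-check the exported signature programmatically (rather than with TensorFlow's `saved_model_cli` command line tool), a small sketch like the following works; the path assumes the default layout produced by `save_pretrained("my_model", saved_model=True)`, which should place the SavedModel under `my_model/saved_model/1`:

```python
import tensorflow as tf

# Load the SavedModel back and inspect its default serving signature
# (path assumes save_pretrained("my_model", saved_model=True) was used above)
loaded = tf.saved_model.load("my_model/saved_model/1")
serving_fn = loaded.signatures["serving_default"]

print(serving_fn.structured_input_signature)  # expected inputs (names, dtypes, shapes)
print(serving_fn.structured_outputs)          # outputs exposed by the signature
```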
Now the SavedModel looks as expected; note the new `inputs_embeds` input:

```
The given SavedModel SignatureDef contains the following input(s):
  inputs['attention_mask'] tensor_info:
      dtype: DT_INT32
      shape: (-1, -1)
      name: serving_default_attention_mask:0
  inputs['inputs_embeds'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, -1, 768)
      name: serving_default_inputs_embeds:0
  inputs['token_type_ids'] tensor_info:
      dtype: DT_INT32
      shape: (-1, -1)
      name: serving_default_token_type_ids:0
The given SavedModel SignatureDef contains the following output(s):
  outputs['attentions'] tensor_info:
      dtype: DT_FLOAT
      shape: (12, -1, 12, -1, -1)
      name: StatefulPartitionedCall:0
  outputs['logits'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 2)
      name: StatefulPartitionedCall:1
Method name is: tensorflow/serving/predict
```

## How to deploy and use a SavedModel?

Let’s see step by step how to deploy and use a BERT model for sentiment classification.

### Step 1

Create a SavedModel. To create a SavedModel, the Transformers library lets you load a PyTorch model called `nateraw/bert-base-uncased-imdb` trained on the IMDB dataset and convert it to a TensorFlow Keras model for you:

```python
from transformers import TFBertForSequenceClassification

model = TFBertForSequenceClassification.from_pretrained("nateraw/bert-base-uncased-imdb", from_pt=True)
# the saved_model parameter is a flag to create a SavedModel version of the model at the same time as the h5 weights
model.save_pretrained("my_model", saved_model=True)
```

### Step 2

Create a Docker container with the SavedModel and run it. First, pull the TensorFlow Serving Docker image for CPU (for GPU, replace serving with serving:latest-gpu):

```
docker pull tensorflow/serving
```

Next, run a serving image as a daemon named serving_base:

```
docker run -d --name serving_base tensorflow/serving
```

Copy the newly created SavedModel into the serving_base container's models folder:

```
docker cp my_model/saved_model serving_base:/models/bert
```

Commit the container that serves the model by changing MODEL_NAME to match the model's name (here `bert`); this name corresponds to the name we want to give to our SavedModel:

```
docker commit --change "ENV MODEL_NAME bert" serving_base my_bert_model
```

Then kill the serving_base container running as a daemon, because we don't need it anymore:

```
docker kill serving_base
```

Finally, run the image to serve our SavedModel as a daemon, mapping ports 8501 (REST API) and 8500 (gRPC API) in the container to the host, and name the container `bert`:

```
docker run -d -p 8501:8501 -p 8500:8500 --name bert my_bert_model
```

### Step 3

Query the model through the REST API:

```python
from transformers import BertTokenizerFast, BertConfig
import requests
import json
import numpy as np

sentence = "I love the new TensorFlow update in transformers."
# Load the corresponding tokenizer of our SavedModel tokenizer = BertTokenizerFast.from_pretrained("nateraw/bert-base-uncased-imdb") # Load the model config of our SavedModel config = BertConfig.from_pretrained("nateraw/bert-base-uncased-imdb") # Tokenize the sentence batch = tokenizer(sentence) # Convert the batch into a proper dict batch = dict(batch) # Put the example into a list of size 1, that corresponds to the batch size batch = [batch] # The REST API needs a JSON that contains the key instances to declare the examples to process input_data = {"instances": batch} # Query the REST API, the path corresponds to http://host:port/model_version/models_root_folder/model_name:method r = requests.post("http://localhost:8501/v1/models/bert:predict", data=json.dumps(input_data)) # Parse the JSON result. The results are contained in a list with a root key called "predictions" # and as there is only one example, takes the first element of the list result = json.loads(r.text)["predictions"][0] # The returned results are probabilities, that can be positive or negative hence we take their absolute value abs_scores = np.abs(result) # Take the argmax that correspond to the index of the max probability. label_id = np.argmax(abs_scores) # Print the proper LABEL with its index print(config.id2label[label_id]) ``` This should return POSITIVE. It is also possible to pass by the gRPC (google Remote Procedure Call) API to get the same result: ```python from transformers import BertTokenizerFast, BertConfig import numpy as np import tensorflow as tf from tensorflow_serving.apis import predict_pb2 from tensorflow_serving.apis import prediction_service_pb2_grpc import grpc sentence = "I love the new TensorFlow update in transformers." tokenizer = BertTokenizerFast.from_pretrained("nateraw/bert-base-uncased-imdb") config = BertConfig.from_pretrained("nateraw/bert-base-uncased-imdb") # Tokenize the sentence but this time with TensorFlow tensors as output already batch sized to 1. Ex: # { # 'input_ids': <tf.Tensor: shape=(1, 3), dtype=int32, numpy=array([[ 101, 19082, 102]])>, # 'token_type_ids': <tf.Tensor: shape=(1, 3), dtype=int32, numpy=array([[0, 0, 0]])>, # 'attention_mask': <tf.Tensor: shape=(1, 3), dtype=int32, numpy=array([[1, 1, 1]])> # } batch = tokenizer(sentence, return_tensors="tf") # Create a channel that will be connected to the gRPC port of the container channel = grpc.insecure_channel("localhost:8500") # Create a stub made for prediction. This stub will be used to send the gRPC request to the TF Server. stub = prediction_service_pb2_grpc.PredictionServiceStub(channel) # Create a gRPC request made for prediction request = predict_pb2.PredictRequest() # Set the name of the model, for this use case it is bert request.model_spec.name = "bert" # Set which signature is used to format the gRPC query, here the default one request.model_spec.signature_name = "serving_default" # Set the input_ids input from the input_ids given by the tokenizer # tf.make_tensor_proto turns a TensorFlow tensor into a Protobuf tensor request.inputs["input_ids"].CopyFrom(tf.make_tensor_proto(batch["input_ids"])) # Same with attention mask request.inputs["attention_mask"].CopyFrom(tf.make_tensor_proto(batch["attention_mask"])) # Same with token type ids request.inputs["token_type_ids"].CopyFrom(tf.make_tensor_proto(batch["token_type_ids"])) # Send the gRPC request to the TF Server result = stub.Predict(request) # The output is a protobuf where the only one output is a list of probabilities # assigned to the key logits. 
# As the probabilities are floats, the list is converted into a numpy array
# of floats with .float_val
output = result.outputs["logits"].float_val

# Print the proper LABEL with its index
print(config.id2label[np.argmax(np.abs(output))])
```

## Conclusion

Thanks to the recent updates applied to the TensorFlow models in transformers, one can now easily deploy their models in production using TensorFlow Serving. One of the next steps we are thinking about is to directly integrate the preprocessing part inside the SavedModel to make things even easier.
Hugging Face on PyTorch / XLA TPUs
jysohn23
February 9, 2021
pytorch-xla
open-source-collab
https://huggingface.co/blog/pytorch-xla
# Hugging Face on PyTorch / XLA TPUs: Faster and cheaper training

<a href="https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/13_pytorch_xla.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

## Training Your Favorite Transformers on Cloud TPUs using PyTorch / XLA

The PyTorch-TPU project originated as a collaborative effort between the Facebook PyTorch and Google TPU teams and officially launched at the 2019 PyTorch Developer Conference. Since then, we’ve worked with the Hugging Face team to bring first-class support to training on Cloud TPUs using [PyTorch / XLA](https://github.com/pytorch/xla). This new integration enables PyTorch users to run and scale up their models on Cloud TPUs while maintaining the exact same Hugging Face Trainer interface.

This blog post provides an overview of changes made in the Hugging Face library, what the PyTorch / XLA library does, an example to get you started training your favorite transformers on Cloud TPUs, and some performance benchmarks. If you can’t wait to get started with TPUs, please skip ahead to the [“Train Your Transformer on Cloud TPUs”](#train-your-transformer-on-cloud-tpus) section - we handle all the PyTorch / XLA mechanics for you within the `Trainer` module!

### XLA:TPU Device Type

PyTorch / XLA adds a new `xla` device type to PyTorch. This device type works just like other PyTorch device types. For example, here's how to create and print an XLA tensor:

```python
import torch
import torch_xla
import torch_xla.core.xla_model as xm

t = torch.randn(2, 2, device=xm.xla_device())
print(t.device)
print(t)
```

This code should look familiar. PyTorch / XLA uses the same interface as regular PyTorch with a few additions. Importing `torch_xla` initializes PyTorch / XLA, and `xm.xla_device()` returns the current XLA device. This may be a CPU, GPU, or TPU depending on your environment, but for this blog post we’ll focus primarily on TPU.

The `Trainer` module leverages a `TrainingArguments` dataclass in order to define the training specifics. It handles multiple arguments, from batch size, learning rate, and gradient accumulation, to the devices used. Based on the above, in `TrainingArguments._setup_devices()` when using XLA:TPU devices, we simply return the TPU device to be used by the `Trainer`:

```python
@dataclass
class TrainingArguments:
    ...
    @cached_property
    @torch_required
    def _setup_devices(self) -> Tuple["torch.device", int]:
        ...
        elif is_torch_tpu_available():
            device = xm.xla_device()
            n_gpu = 0
        ...
        return device, n_gpu
```

### XLA Device Step Computation

In a typical XLA:TPU training scenario we’re training on multiple TPU cores in parallel (a single Cloud TPU device includes 8 TPU cores). So we need to ensure that all the gradients are exchanged between the data parallel replicas by consolidating the gradients and taking an optimizer step. For this we provide `xm.optimizer_step(optimizer)`, which consolidates the gradients and takes the optimizer step. In the Hugging Face trainer, we correspondingly update the train step to use the PyTorch / XLA APIs:

```python
class Trainer:
    …
    def train(self, *args, **kwargs):
        ...
if is_torch_tpu_available(): xm.optimizer_step(self.optimizer) ``` ### PyTorch / XLA Input Pipeline There are two main parts to running a PyTorch / XLA model: (1) tracing and executing your model’s graph lazily (refer to below [“PyTorch / XLA Library”](https://github.com/pytorch/xla) section for a more in-depth explanation) and (2) feeding your model. Without any optimization, the tracing/execution of your model and input feeding would be executed serially, leaving chunks of time during which your host CPU and your TPU accelerators would be idle, respectively. To avoid this, we provide an API, which pipelines the two and thus is able to overlap the tracing of step n+1 while step n is still executing. ![alt text](/blog/assets/13_pytorch_xla/training_pipeline.png) ```python import torch_xla.distributed.parallel_loader as pl ... dataloader = pl.MpDeviceLoader(dataloader, device) ``` ### Checkpoint Writing and Loading When a tensor is checkpointed from a XLA device and then loaded back from the checkpoint, it will be loaded back to the original device. Before checkpointing tensors in your model, you want to ensure that all of your tensors are on CPU devices instead of XLA devices. This way, when you load back the tensors, you’ll load them through CPU devices and then have the opportunity to place them on whatever XLA devices you desire. We provide the `xm.save()` API for this, which already takes care of only writing to storage location from only one process on each host (or one globally if using a shared file system across hosts). ```python class PreTrainedModel(nn.Module, ModuleUtilsMixin, GenerationMixin): … def save_pretrained(self, save_directory): ... if getattr(self.config, "xla_device", False): import torch_xla.core.xla_model as xm if xm.is_master_ordinal(): # Save configuration file model_to_save.config.save_pretrained(save_directory) # xm.save takes care of saving only from master xm.save(state_dict, output_model_file) ``` ```python class Trainer: … def train(self, *args, **kwargs): ... if is_torch_tpu_available(): xm.rendezvous("saving_optimizer_states") xm.save(self.optimizer.state_dict(), os.path.join(output_dir, "optimizer.pt")) xm.save(self.lr_scheduler.state_dict(), os.path.join(output_dir, "scheduler.pt")) ``` ## PyTorch / XLA Library PyTorch / XLA is a Python package that uses the XLA linear algebra compiler to connect the PyTorch deep learning framework with XLA devices, which includes CPU, GPU, and Cloud TPUs. Part of the following content is also available in our [API_GUIDE.md](https://github.com/pytorch/xla/blob/master/API_GUIDE.md). ### PyTorch / XLA Tensors are Lazy Using XLA tensors and devices requires changing only a few lines of code. However, even though XLA tensors act a lot like CPU and CUDA tensors, their internals are different. CPU and CUDA tensors launch operations immediately or eagerly. XLA tensors, on the other hand, are lazy. They record operations in a graph until the results are needed. Deferring execution like this lets XLA optimize it. A graph of multiple separate operations might be fused into a single optimized operation. Lazy execution is generally invisible to the caller. PyTorch / XLA automatically constructs the graphs, sends them to XLA devices, and synchronizes when copying data between an XLA device and the CPU. Inserting a barrier when taking an optimizer step explicitly synchronizes the CPU and the XLA device. 
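Putting the pieces above together, a bare-bones training loop looks roughly like the following sketch; `model`, `dataloader`, `optimizer`, and `loss_fn` are assumed to be defined already, and this is illustrative rather than the exact `Trainer` internals:

```python
import torch_xla.core.xla_model as xm
import torch_xla.distributed.parallel_loader as pl

device = xm.xla_device()
model = model.to(device)
# MpDeviceLoader overlaps input feeding with graph tracing/execution
train_loader = pl.MpDeviceLoader(dataloader, device)

model.train()
for inputs, labels in train_loader:
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()
    # Consolidates gradients across replicas (if any) and marks the step barrier
    xm.optimizer_step(optimizer)
```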
This means that when you call the `model(input)` forward pass, calculate your loss with `loss.backward()`, and take an optimization step with `xm.optimizer_step(optimizer)`, the graph of all operations is being built in the background. Only when you either explicitly evaluate the tensor (e.g. printing the tensor or moving it to a CPU device) or mark a step (this will be done by the `MpDeviceLoader` every time you iterate through it) does the full step get executed.

### Trace, Compile, Execute, and Repeat

From a user’s point of view, a typical training regimen for a model running on PyTorch / XLA involves running a forward pass, backward pass, and optimizer step. From the PyTorch / XLA library point of view, things look a little different.

While a user runs their forward and backward passes, an intermediate representation (IR) graph is traced on the fly. The IR graph leading to each root/output tensor can be inspected as follows:

```python
>>> import torch
>>> import torch_xla
>>> import torch_xla.core.xla_model as xm
>>> t = torch.tensor(1, device=xm.xla_device())
>>> s = t*t
>>> print(torch_xla._XLAC._get_xla_tensors_text([s]))

IR {
  %0 = s64[] prim::Constant(), value=1
  %1 = s64[] prim::Constant(), value=0
  %2 = s64[] xla::as_strided_view_update(%1, %0), size=(), stride=(), storage_offset=0
  %3 = s64[] aten::as_strided(%2), size=(), stride=(), storage_offset=0
  %4 = s64[] aten::mul(%3, %3), ROOT=0
}
```

This live graph is accumulated while the forward and backward passes are run on the user's program, and once `xm.mark_step()` is called (indirectly by `pl.MpDeviceLoader`), the graph of live tensors is cut. This truncation marks the completion of one step, and subsequently we lower the IR graph into XLA High Level Operations (HLO), which is the IR language for XLA. This HLO graph then gets compiled into a TPU binary and subsequently executed on the TPU devices. However, this compilation step can be costly, typically taking longer than a single step, so if we were to compile the user’s program every single step, overhead would be high. To avoid this, we have caches that store compiled TPU binaries keyed by their HLO graphs’ unique hash identifiers. So once this TPU binary cache has been populated on the first step, subsequent steps will typically not have to re-compile new TPU binaries; instead, they can simply look up the necessary binaries from the cache.

Since TPU compilations are typically much slower than the step execution time, this means that if the graph keeps changing in shape, we’ll have cache misses and compile too frequently. To minimize compilation costs, we recommend keeping tensor shapes static whenever possible. The Hugging Face library’s shapes are already static for the most part, with input tokens being padded appropriately, so throughout training the cache should be consistently hit. This can be checked using the debugging tools that PyTorch / XLA provides.
In the example below, you can see that compilation only happened 5 times (`CompileTime`) whereas execution happened during each of 1220 steps (`ExecuteTime`): ```python >>> import torch_xla.debug.metrics as met >>> print(met.metrics_report()) Metric: CompileTime TotalSamples: 5 Accumulator: 28s920ms153.731us ValueRate: 092ms152.037us / second Rate: 0.0165028 / second Percentiles: 1%=428ms053.505us; 5%=428ms053.505us; 10%=428ms053.505us; 20%=03s640ms888.060us; 50%=03s650ms126.150us; 80%=11s110ms545.595us; 90%=11s110ms545.595us; 95%=11s110ms545.595us; 99%=11s110ms545.595us Metric: DeviceLockWait TotalSamples: 1281 Accumulator: 38s195ms476.007us ValueRate: 151ms051.277us / second Rate: 4.54374 / second Percentiles: 1%=002.895us; 5%=002.989us; 10%=003.094us; 20%=003.243us; 50%=003.654us; 80%=038ms978.659us; 90%=192ms495.718us; 95%=208ms893.403us; 99%=221ms394.520us Metric: ExecuteTime TotalSamples: 1220 Accumulator: 04m22s555ms668.071us ValueRate: 923ms872.877us / second Rate: 4.33049 / second Percentiles: 1%=045ms041.018us; 5%=213ms379.757us; 10%=215ms434.912us; 20%=217ms036.764us; 50%=219ms206.894us; 80%=222ms335.146us; 90%=227ms592.924us; 95%=231ms814.500us; 99%=239ms691.472us Counter: CachedCompile Value: 1215 Counter: CreateCompileHandles Value: 5 ... ``` ### Train Your Transformer on Cloud TPUs To configure your VM and Cloud TPUs, please follow [“Set up a Compute Engine instance”](https://cloud.google.com/tpu/docs/tutorials/transformer-pytorch#set_up_a_instance) and [“Launch a Cloud TPU resource”](https://cloud.google.com/tpu/docs/tutorials/transformer-pytorch#launch-tpu) (pytorch-1.7 version as of writing) sections. Once you have your VM and Cloud TPU created, using them is as simple as SSHing to your GCE VM and running the following commands to get `bert-large-uncased` training kicked off (batch size is for v3-8 device, may OOM on v2-8): ```bash conda activate torch-xla-1.7 export TPU_IP_ADDRESS="ENTER_YOUR_TPU_IP_ADDRESS" # ex. 10.0.0.2 export XRT_TPU_CONFIG="tpu_worker;0;$TPU_IP_ADDRESS:8470" git clone -b v4.2.2 https://github.com/huggingface/transformers.git cd transformers && pip install . pip install datasets==1.2.1 python examples/xla_spawn.py \ --num_cores 8 \ examples/language-modeling/run_mlm.py \ --dataset_name wikitext \ --dataset_config_name wikitext-103-raw-v1 \ --max_seq_length 512 \ --pad_to_max_length \ --logging_dir ./tensorboard-metrics \ --cache_dir ./cache_dir \ --do_train \ --do_eval \ --overwrite_output_dir \ --output_dir language-modeling \ --overwrite_cache \ --tpu_metrics_debug \ --model_name_or_path bert-large-uncased \ --num_train_epochs 3 \ --per_device_train_batch_size 8 \ --per_device_eval_batch_size 8 \ --save_steps 500000 ``` The above should complete training in roughly less than 200 minutes with an eval perplexity of ~3.25. ## Performance Benchmarking The following table shows the performance of training bert-large-uncased on a v3-8 Cloud TPU system (containing 4 TPU v3 chips) running PyTorch / XLA. The dataset used for all benchmarking measurements is the [WikiText103](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/) dataset, and we use the [run_mlm.py](https://github.com/huggingface/transformers/blob/v4.2.2/examples/language-modeling/run_mlm.py) script provided in Hugging Face examples. To ensure that the workloads are not host-CPU-bound, we use the n1-standard-96 CPU configuration for these tests, but you may be able to use smaller configurations as well without impacting performance. 
| Name | Dataset | Hardware | Global Batch Size | Precision | Training Time (mins) | |--------------------|-------------|---------------------------|-------------------|-----------|----------------------| | bert-large-uncased | WikiText103 | 4 TPUv3 chips (i.e. v3-8) | 64 | FP32 | 178.4 | | bert-large-uncased | WikiText103 | 4 TPUv3 chips (i.e. v3-8) | 128 | BF16 | 106.4 | ## Get Started with PyTorch / XLA on TPUs See the [“Running on TPUs”](https://github.com/huggingface/transformers/tree/master/examples#running-on-tpus) section under the Hugging Face examples to get started. For a more detailed description of our APIs, check out our [API_GUIDE](https://github.com/pytorch/xla/blob/master/API_GUIDE.md), and for performance best practices, take a look at our [TROUBLESHOOTING](https://github.com/pytorch/xla/blob/master/TROUBLESHOOTING.md) guide. For generic PyTorch / XLA examples, run the following [Colab Notebooks](https://github.com/pytorch/xla/tree/master/contrib/colab) we offer with free Cloud TPU access. To run directly on GCP, please see our tutorials labeled “PyTorch” on our [documentation site](https://cloud.google.com/tpu/docs/tutorials). Have any other questions or issues? Please open an issue or question at https://github.com/huggingface/transformers/issues or directly at https://github.com/pytorch/xla/issues.
Retrieval Augmented Generation with Huggingface Transformers and Ray
amogkam
February 10, 2021
ray-rag
open-source-collab, nlp
https://huggingface.co/blog/ray-rag
# Retrieval Augmented Generation with Huggingface Transformers and Ray ##### A guest blog post by <a href="/amogkam">Amog Kamsetty</a> from the Anyscale team [Huggingface Transformers](https://huggingface.co/) recently added the [Retrieval Augmented Generation (RAG)](https://twitter.com/huggingface/status/1310597560906780680) model, a new NLP architecture that leverages external documents (like Wikipedia) to augment its knowledge and achieve state of the art results on knowledge-intensive tasks. In this blog post, we introduce the integration of [Ray](https://docs.ray.io/en/master/), a library for building scalable applications, into the RAG contextual document retrieval mechanism. This speeds up retrieval calls by 2x and improves the scalability of RAG distributed [fine-tuning](https://github.com/huggingface/transformers/tree/master/examples/research_projects/rag). ### What is Retrieval Augmented Generation (RAG)? ![alt_text](assets/12_ray_rag/rag_gif.gif "image_tooltip") _An overview of RAG. The model retrieves contextual documents from an external dataset as part of its execution. These contextual documents are used in conjunction with the original input to produce an output. The GIF is taken from [Facebook's original blog post](https://ai.facebook.com/blog/retrieval-augmented-generation-streamlining-the-creation-of-intelligent-natural-language-processing-models)._ Recently, [Huggingface](https://huggingface.co/) partnered with [Facebook AI](https://ai.facebook.com/) to introduce the [RAG](https://twitter.com/huggingface/status/1310597560906780680) model as part of its Transformers library. [RAG](https://ai.facebook.com/blog/retrieval-augmented-generation-streamlining-the-creation-of-intelligent-natural-language-processing-models/) acts just like any other [seq2seq model](https://blog.keras.io/a-ten-minute-introduction-to-sequence-to-sequence-learning-in-keras.html). However, [RAG](https://ai.facebook.com/blog/retrieval-augmented-generation-streamlining-the-creation-of-intelligent-natural-language-processing-models/) has an intermediate component that retrieves contextual documents from an external knowledge base (like a Wikipedia text corpus). These documents are then used in conjunction with the input sequence and passed into the underlying seq2seq [generator](https://huggingface.co/blog/how-to-generate). This information retrieval step allows [RAG](https://ai.facebook.com/blog/retrieval-augmented-generation-streamlining-the-creation-of-intelligent-natural-language-processing-models/) to make use of multiple sources of knowledge -- those that are baked into the model parameters and the information that is contained in the contextual passages, allowing it to outperform other state-of-the-art models in tasks like question answering. You can try it for yourself using this [demo provided by Huggingface](https://huggingface.co/rag/)! ### Scaling up fine-tuning This retrieval of contextual documents is crucial for RAG's state-of-the-art results but introduces an extra layer of complexity. When scaling up the training process via a data-parallel training routine, a naive implementation of the document lookup can become a bottleneck for training. Further, the **document index** used in the retrieval component is often quite large, making it infeasible for each training worker to load its own replicated copy of the index. 
The previous implementation of RAG fine-tuning leveraged the [torch.distributed](https://pytorch.org/docs/stable/distributed.html) communication package for the document retrieval portion. However, this implementation sometimes proved to be inflexible and limited in scalability. Instead, a framework-agnostic and a more flexible implementation for ad-hoc concurrent programming is required. [Ray](https://ray.io/) fits the bill perfectly. Ray is a simple, yet powerful Python library for general-purpose distributed and parallel programming. Using Ray for distributed document retrieval, we achieved a **2x speedup per retrieval call compared to `torch.distributed`**, and overall better fine-tuning scalability. ### Ray for Document Retrieval ![alt_text](assets/12_ray_rag/torch_distributed_document_retrieval.png "image_tooltip") _Document retrieval with the torch.distributed implementation_ The main drawback of the [torch.distributed](https://pytorch.org/docs/stable/distributed.html) implementation for document retrieval was that it latched onto the same process group used for training and only the rank 0 training worker loaded the index into memory. As a result, this implementation had some limitations: 1. **Synchronization bottleneck**: The rank 0 worker had to receive the inputs from all workers, perform the index query, and then send the results back to the other workers. This limited performance with multiple training workers. 2. **PyTorch specific**: The document retrieval process group had to latch onto the existing process group used for training, meaning that PyTorch had to be used for training as well. ![alt_text](assets/12_ray_rag/ray_arch_updated.png "image_tooltip") _Document retrieval with the Ray implementation_ To overcome these limitations, we introduced a novel implementation of distributed retrieval based on Ray. With [Ray’s stateful actor abstractions](https://docs.ray.io/en/master/actors.html), multiple processes that are separate from the training processes are used to load the index and handle the retrieval queries. With multiple Ray actors, retrieval is no longer a bottleneck and PyTorch is no longer a requirement for RAG. And as you can see below, using the [Ray](https://docs.ray.io/en/master/) based implementation leads to better retrieval performance for multi-GPU fine-tuning. The following results show the seconds per retrieval call and we can see that as we increase the number of GPUs that we train on, using Ray has comparatively better performance than `torch.distributed`. Also, if we increase the number of Ray processes that perform retrieval, we also get better performance with more training workers since a single retrieval process is no longer a bottleneck. <table> <tr> <td> </td> <td>2 GPU </td> <td>3 GPU </td> <td>4 GPU </td> </tr> <tr> <td>torch.distributed </td> <td>2.12 sec/retrieval </td> <td>2.62 sec/retrieve </td> <td>3.438 sec/retrieve </td> </tr> <tr> <td>Ray 2 retrieval processes </td> <td>1.49 sec/retrieve </td> <td>1.539 sec/retrieve </td> <td>2.029 sec/retrieve </td> </tr> <tr> <td>Ray 4 retrieval processes </td> <td>1.145 sec/retrieve </td> <td>1.484 sec/retrieve </td> <td>1.66 sec/retrieve </td> </tr> </table> _A performance comparison of different retrieval implementations. For each document retrieval implementation, we run 500 training steps with a per-GPU batch size of 8, and measure the time it takes to retrieve the contextual documents for each batch on the rank 0 training worker. 
As the results show, using multiple retrieval processes improves performance, especially as we scale training to multiple GPUs._ ### How do I use it? [Huggingface](https://huggingface.co/) provides a [PyTorch Lightning](https://github.com/PyTorchLightning/pytorch-lightning) based [fine tuning script](https://github.com/huggingface/transformers/tree/master/examples/research_projects/rag), and we extended it to add the Ray retrieval implementation as an option. To try it out, first install the necessary requirements ```bash pip install ray pip install transformers pip install -r transformers/examples/research_projects/rag/requirements.txt ``` Then, you can specify your data paths and other configurations and run [finetune-rag-ray.sh](https://github.com/huggingface/transformers/blob/master/examples/research_projects/rag/finetune_rag_ray.sh)! ```bash # Sample script to finetune RAG using Ray for distributed retrieval. # Add parent directory to python path to access lightning_base.py export PYTHONPATH="../":"${PYTHONPATH}" # Start a single-node Ray cluster. ray start --head # A sample finetuning run, you need to specify data_dir, output_dir and model_name_or_path # run ./examples/rag/finetune_rag_ray.sh --help to see all the possible options python examples/rag/finetune_rag.py \ --data_dir $DATA_DIR \ --output_dir $OUTPUT_DIR \ --model_name_or_path $MODEL_NAME_OR_PATH \ --model_type rag_sequence \ --fp16 \ --gpus 8 \ --profile \ --do_train \ --do_predict \ --n_val -1 \ --train_batch_size 8 \ --eval_batch_size 1 \ --max_source_length 128 \ --max_target_length 25 \ --val_max_target_length 25 \ --test_max_target_length 25 \ --label_smoothing 0.1 \ --dropout 0.1 \ --attention_dropout 0.1 \ --weight_decay 0.001 \ --adam_epsilon 1e-08 \ --max_grad_norm 0.1 \ --lr_scheduler polynomial \ --learning_rate 3e-05 \ --num_train_epochs 100 \ --warmup_steps 500 \ --gradient_accumulation_steps 1 \ --distributed_retriever ray \ --num_retrieval_workers 4 # Stop the Ray cluster. ray stop ``` ## What’s next? Using RAG with [Huggingface transformers](https://github.com/huggingface/transformers/tree/master/examples/research_projects/rag) and the [Ray retrieval implementation](https://github.com/huggingface/transformers/blob/master/examples/research_projects/rag/finetune_rag_ray.sh) for faster distributed fine-tuning, you can leverage RAG for retrieval-based generation on your own knowledge-intensive tasks. Also, hyperparameter tuning is another aspect of transformer fine tuning and can have [huge impacts on accuracy](https://medium.com/distributed-computing-with-ray/hyperparameter-optimization-for-transformers-a-guide-c4e32c6c989b). For scalable and easy hyperparameter tuning, check out the [Ray Tune](https://docs.ray.io/en/latest/tune/) library. By using [Ray Tune’s integration with PyTorch Lightning](https://medium.com/distributed-computing-with-ray/scaling-up-pytorch-lightning-hyperparameter-tuning-with-ray-tune-4bd9e1ff9929), or the [built-in integration with Huggingface transformers](https://huggingface.co/blog/ray-tune), you can run experiments to find the perfect hyperparameters for your RAG model. And lastly, stay tuned for a potential Tensorflow implementation of [RAG](https://ai.facebook.com/blog/retrieval-augmented-generation-streamlining-the-creation-of-intelligent-natural-language-processing-models) on [Huggingface](https://huggingface.co/)! 
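For reference, the built-in `Trainer` integration mentioned above boils down to a call like the following. This is only a sketch for a generic `Trainer`-based setup, not the Lightning-based RAG fine-tuning script used in this post; `model_init`, `train_dataset`, and `eval_dataset` are placeholders you would define for your own task:

```python
from ray import tune
from transformers import Trainer, TrainingArguments

# Sketch: hyperparameter search with the Ray Tune backend of the Trainer.
# model_init, train_dataset and eval_dataset are assumed to be defined elsewhere.
trainer = Trainer(
    args=TrainingArguments(output_dir="./hp_search", evaluation_strategy="epoch"),
    model_init=model_init,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)

best_run = trainer.hyperparameter_search(
    backend="ray",
    n_trials=8,
    hp_space=lambda _: {"learning_rate": tune.loguniform(1e-5, 1e-3)},
)
print(best_run.hyperparameters)
```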
If you plan to try RAG+Ray integration out, please feel free to share your experiences on the [Ray Discourse](https://discuss.ray.io/) or join the [Ray community Slack](https://docs.google.com/forms/d/e/1FAIpQLSfAcoiLCHOguOm8e7Jnn-JJdZaCxPGjgVCvFijHB5PLaQLeig/viewform) for further discussion -- we’d love to hear from you! > Also published at https://medium.com/distributed-computing-with-ray/retrieval-augmented-generation-with-huggingface-transformers-and-ray-b09b56161b1e
Simple considerations for simple people building fancy neural networks
VictorSanh
February 25, 2021
simple-considerations
guide
https://huggingface.co/blog/simple-considerations
![Builders](/blog/assets/13_simple-considerations/henry-co-3coKbdfnAFg-unsplash.jpg) <span class="text-gray-500 text-xs">Photo by [Henry & Co.](https://unsplash.com/@hngstrm?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText) on [Unsplash](https://unsplash.com/s/photos/builder?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText)</span> # 🚧 Simple considerations for simple people building fancy neural networks As machine learning continues penetrating all aspects of the industry, neural networks have never been so hyped. For instance, models like GPT-3 have been all over social media in the past few weeks and continue to make headlines outside of tech news outlets with fear-mongering titles. ![Builders](/blog/assets/13_simple-considerations/1_sENCNdlC7zK4bg22r43KiA.png) <div class="text-center text-xs text-gray-500"> <a class="text-gray-500" href="https://www.theguardian.com/commentisfree/2020/sep/08/robot-wrote-this-article-gpt-3">An article</a> from The Guardian </div> At the same time, deep learning frameworks, tools, and specialized libraries democratize machine learning research by making state-of-the-art research easier to use than ever. It is quite common to see these almost-magical/plug-and-play 5 lines of code that promise (near) state-of-the-art results. Working at [Hugging Face](https://huggingface.co/) 🤗, I admit that I am partially guilty of that. 😅 It can give an inexperienced user the misleading impression that neural networks are now a mature technology while in fact, the field is in constant development. In reality, **building and training neural networks can often be an extremely frustrating experience**: * It is sometimes hard to understand if your performance comes from a bug in your model/code or is simply limited by your model’s expressiveness. * You can make tons of tiny mistakes at every step of the process without realizing at first, and your model will still train and give a decent performance. **In this post, I will try to highlight a few steps of my mental process when it comes to building and debugging neural networks.** By “debugging”, I mean making sure you align what you have built and what you have in mind. I will also point out things you can look at when you are not sure what your next step should be by listing the typical questions I ask myself. _A lot of these thoughts stem from my experience doing research in natural language processing but most of these principles can be applied to other fields of machine learning._ ## 1. 🙈 Start by putting machine learning aside It might sound counter-intuitive but the very first step of building a neural network is to **put aside machine learning and simply focus on your data**. Look at the examples, their labels, the diversity of the vocabulary if you are working with text, their length distribution, etc. You should dive into the data to get a first sense of the raw product you are working with and focus on extracting general patterns that a model might be able to catch. Hopefully, by looking at a few hundred examples, you will be able to identify high-level patterns. A few standard questions you can ask yourself: * Are the labels balanced? * Are there gold-labels that you do not agree with? * How were the data obtained? What are the possible sources of noise in this process? * Are there any preprocessing steps that seem natural (tokenization, URL or hashtag removing, etc.)? * How diverse are the examples? * What rule-based algorithm would perform decently on this problem? 
It is important to get a **high-level feeling (qualitative) of your dataset along with a fine-grained analysis (quantitative)**. If you are working with a public dataset, someone else might have already dived into the data and reported their analysis (it is quite common in Kaggle competitions for instance) so you should absolutely have a look at these!

## 2. 📚 Continue as if you just started machine learning

Once you have a deep and broad understanding of your data, I always recommend **putting yourself in the shoes of your old self from when you had just started machine learning** and were watching introduction classes from Andrew Ng on Coursera. **Start as simple as possible to get a sense of the difficulty of your task and how well standard baselines would perform.**

For instance, if you work with text, standard baselines for binary text classification can include a logistic regression trained on top of word2vec or fastText embeddings. With the current tools, running these baselines is as easy as (if not easier than) running BERT, which can arguably be considered one of the standard tools for many natural language processing problems. If other baselines are available, run (or implement) some of them. It will help you get even more familiar with the data.

As developers, it is easy to feel good when building something fancy, but it is sometimes hard to rationally justify it if it beats easy baselines by only a few points, so it is central to make sure you have reasonable points of comparison:

* How would a random predictor perform (especially in classification problems)? Datasets can be unbalanced…
* What would the loss look like for a random predictor?
* What is (are) the best metric(s) to measure progress on my task?
* What are the limits of this metric? If it’s perfect, what can I conclude? What can’t I conclude?
* What is missing in “simple approaches” to reach a perfect score?
* Are there architectures in my neural network toolbox that would be good to model the inductive bias of the data?

## 3. 🦸‍♀️ Don’t be afraid to look under the hood of these 5-liner templates

Next, you can start building your model based on the insights and understanding you acquired previously. As mentioned earlier, implementing neural networks can quickly become quite tricky: there are many moving parts that work together (the optimizer, the model, the input processing pipeline, etc.), and many small things can go wrong when implementing these parts and connecting them to each other. **The challenge lies in the fact that you can make these mistakes, train a model without it ever crashing, and still get a decent performance…**

Yet, it is a good habit, when you think you have finished implementing, to **overfit a small batch of examples** (16 for instance). If your implementation is (nearly) correct, your model will be able to overfit and remember these examples by displaying a 0-loss (make sure you remove any form of regularization such as weight decay). If not, it is highly possible that you did something wrong in your implementation. In some rare cases, it means that your model is not expressive enough or lacks capacity. A minimal sketch of this check is shown below.

Again, **start with a small-scale model** (fewer layers for instance): you are looking to debug your model, so you want a quick feedback loop, not high performance.

> Pro-tip: in my experience working with pre-trained language models, freezing the embeddings modules to their pre-trained values doesn’t affect the fine-tuning task performance much while considerably speeding up the training.
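To make the overfitting sanity check concrete, here is a minimal sketch of what it can look like in PyTorch. Everything here is illustrative: `model`, `train_loader`, and `loss_fn` are assumed to be defined by you, and the exact loop will depend on your setup.

```python
import torch

# Grab one small batch and try to drive the loss to ~0 on it.
# If the loss plateaus well above 0, something is likely wrong in the implementation.
inputs, labels = next(iter(train_loader))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # no weight decay, on purpose

model.train()
for step in range(500):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()
    optimizer.step()
    if step % 50 == 0:
        print(step, loss.item())  # should steadily approach 0
```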
Some common errors include:

* Wrong indexing… (these are really the worst 😅). Make sure you are gathering tensors along the correct dimensions for instance…
* You forgot to switch the model to evaluation mode with `model.eval()` (in PyTorch) or to clear the gradients with `model.zero_grad()`
* Something went wrong in the pre-processing of the inputs
* The loss is given the wrong arguments (for instance passing probabilities when it expects logits)
* Initialization doesn’t break the symmetry (usually happens when you initialize a whole matrix with a single constant value)
* Some parameters are never called during the forward pass (and thus receive no gradients)
* The learning rate is taking funky values like 0 all the time
* Your inputs are being truncated in a suboptimal way

> Pro-tip: when you work with language, have a serious **look at the outputs of the tokenizers**. I can’t count the number of lost hours I spent trying to reproduce results (and sometimes my own old results) because something went wrong with the tokenization.🤦‍♂️

Another useful tool is **deep-diving into the training dynamics** and plotting (in TensorBoard for instance) the evolution of multiple scalars through training. At the bare minimum, you should look at the dynamics of your loss(es), the parameters, and their gradients.

As the loss decreases, you also want to look at the model’s predictions: either by evaluating on your development set or, my personal favorite, **printing a couple of model outputs**. For instance, if you are training a machine translation model, it is quite satisfying to see the generations become more and more convincing through the training. You specifically want to watch out for overfitting: your training loss continues to decrease while your evaluation loss is aiming at the stars.💫

## 4. 👀 Tune but don’t tune blindly

Once you have everything up and running, you might want to tune your hyperparameters to find the best configuration for your setup. I generally stick with a random grid search as it turns out to be fairly effective in practice.

> Some people report successes using fancy hyperparameter tuning methods such as Bayesian optimization, but in my experience, random search over a reasonable manually defined grid is still a tough-to-beat baseline.

Most importantly, there is no point in launching 1000 runs with different hyperparameters (or architecture tweaks like activation functions): **compare a couple of runs with different hyperparameters to get an idea of which hyperparameters have the highest impact**, but in general, it is delusional to expect to get your biggest jumps in performance by simply tuning a few values. For instance, if your best performing model is trained with a learning rate of 4e2, there is probably something more fundamental happening inside your neural network and you want to identify and understand this behavior so that you can re-use this knowledge outside of your current specific context.

On average, experts use fewer resources to find better solutions.

To conclude, a piece of general advice that has helped me become better at building neural networks is to **favor (as much as possible) a deep understanding of each component of your neural network instead of blindly (not to say magically) tweaking the architecture**. Keep it simple and avoid small tweaks that you can’t reasonably justify even after trying really hard.
Obviously, there is a balance to find between a “trial-and-error” approach and an “analysis” approach, but a lot of these intuitions feel more natural as you accumulate practical experience. **You too are training your internal model.** 🤯

A few related pointers to complete your reading:

* [Reproducibility (in ML) as a vehicle for engineering best practices](https://docs.google.com/presentation/d/1yHLPvPhUs2KGI5ZWo0sU-PKU3GimAk3iTsI38Z-B5Gw/edit#slide=id.p) from Joel Grus
* [Checklist for debugging neural networks](https://towardsdatascience.com/checklist-for-debugging-neural-networks-d8b2a9434f21) from Cecelia Shao
* [How to unit test machine learning code](https://medium.com/@keeper6928/how-to-unit-test-machine-learning-code-57cf6fd81765) from Chase Roberts
* [A recipe for Training Neural Networks](http://karpathy.github.io/2019/04/25/recipe/) from Andrej Karpathy
Hugging Face Reads, Feb. 2021 - Long-range Transformers
VictorSanh
March 09, 2021
long-range-transformers
research, nlp
https://huggingface.co/blog/long-range-transformers
<figure>
  <img src="/blog/assets/14_long_range_transformers/EfficientTransformerTaxonomy.png" alt="Efficient Transformers taxonomy"/>
  <figcaption>Efficient Transformers taxonomy from Efficient Transformers: a Survey by Tay et al.</figcaption>
</figure>

# Hugging Face Reads, Feb. 2021 - Long-range Transformers

Co-written by Teven Le Scao, Patrick Von Platen, Suraj Patil, Yacine Jernite and Victor Sanh.

> Each month, we will choose a topic to focus on, reading a set of four papers recently published on the subject. We will then write a short blog post summarizing their findings and the common trends between them, as well as questions we had for follow-up work after reading them. The first topic for January 2021 was [Sparsity and Pruning](https://discuss.huggingface.co/t/hugging-face-reads-01-2021-sparsity-and-pruning/3144); in February 2021, we addressed Long-Range Attention in Transformers.

## Introduction

After the rise of large transformer models in 2018 and 2019, two trends have quickly emerged to bring their compute requirements down. First, conditional computation, quantization, distillation, and pruning have unlocked inference of large models in compute-constrained environments; we’ve already touched upon this in part in our [last reading group post](https://discuss.huggingface.co/t/hugging-face-reads-01-2021-sparsity-and-pruning/3144). The research community then moved to reduce the cost of pre-training.

In particular, one issue has been at the center of the efforts: the quadratic cost in memory and time of transformer models with regard to the sequence length. In order to allow efficient training of very large models, 2020 saw an onslaught of papers addressing that bottleneck and scaling transformers beyond the usual 512- or 1024-token sequence lengths that were the default in NLP at the start of the year.

This topic has been a key part of our research discussions from the start, and our own Patrick Von Platen has already dedicated [a 4-part series to Reformer](https://huggingface.co/blog/reformer). In this reading group, rather than trying to cover every approach (there are so many!), we’ll focus on four main ideas:

* Custom attention patterns (with [Longformer](https://arxiv.org/abs/2004.05150))
* Recurrence (with [Compressive Transformer](https://arxiv.org/abs/1911.05507))
* Low-rank approximations (with [Linformer](https://arxiv.org/abs/2006.04768))
* Kernel approximations (with [Performer](https://arxiv.org/abs/2009.14794))

For exhaustive views of the subject, check out [Efficient Transformers: A Survey](https://arxiv.org/abs/2009.06732) and [Long Range Arena](https://arxiv.org/abs/2011.04006).

## Summaries

### [Longformer - The Long-Document Transformer](https://arxiv.org/abs/2004.05150)

Iz Beltagy, Matthew E. Peters, Arman Cohan

Longformer addresses the memory bottleneck of transformers by replacing conventional self-attention with a combination of windowed/local/sparse (cf. [Sparse Transformers (2019)](https://arxiv.org/abs/1904.10509)) attention and global attention that scales linearly with the sequence length. As opposed to previous long-range transformer models (e.g. [Transformer-XL (2019)](https://arxiv.org/abs/1901.02860), [Reformer (2020)](https://arxiv.org/abs/2001.04451), [Adaptive Attention Span (2019)](https://arxiv.org/abs/1905.07799)), Longformer’s self-attention layer is designed as a drop-in replacement for the standard self-attention, thus making it possible to leverage pre-trained checkpoints for further pre-training and/or fine-tuning on long sequence tasks.
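Since Longformer’s self-attention is a drop-in replacement and checkpoints are available in `transformers`, here is a minimal usage sketch (assuming the `allenai/longformer-base-4096` checkpoint and a dummy input) that combines local windowed attention with global attention on a task-motivated token:

```python
import torch
from transformers import LongformerTokenizer, LongformerModel

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerModel.from_pretrained("allenai/longformer-base-4096")

inputs = tokenizer("A very long document. " * 500, return_tensors="pt")

# local (windowed) attention everywhere, plus global attention on the first token
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

outputs = model(**inputs, global_attention_mask=global_attention_mask)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, hidden_size)
```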
The standard self-attention matrix (Figure a) scales quadratically with the input length: <figure> <img src="/blog/assets/14_long_range_transformers/Longformer.png" alt="Longformer attention"/> <figcaption>Figure taken from Longformer</figcaption> </figure> Longformer uses different attention patterns for autoregressive language modeling, encoder pre-training & fine-tuning, and sequence-to-sequence tasks. * For autoregressive language modeling, the strongest results are obtained by replacing causal self-attention (a la GPT2) with dilated windowed self-attention (Figure c). With \\(n\\) being the sequence length and \\(w\\) being the window length, this attention pattern reduces the memory consumption from \\(n^2\\) to \\(wn\\), which under the assumption that \\(w << n\\), scales linearly with the sequence length. * For encoder pre-training, Longformer replaces the bi-directional self-attention (a la BERT) with a combination of local windowed and global bi-directional self-attention (Figure d). This reduces the memory consumption from \\(n^2\\) to \\(w n + g n\\) with \\(g\\) being the number of tokens that are attended to globally, which again scales linearly with the sequence length. * For sequence-to-sequence models, only the encoder layers (a la BART) are replaced with a combination of local and global bi-directional self-attention (Figure d) because for most seq2seq tasks, only the encoder processes very large inputs (e.g. summarization). The memory consumption is thus reduced from \\(n_s^2+ n_s n_t +n_t^2\\) to \\(w n_s +gn_s +n_s n_t +n_t^2\\) with \\(n_s\\) and \\(n_t\\) being the source (encoder input) and target (decoder input) lengths respectively. For Longformer Encoder-Decoder to be efficient, it is assumed that \\(n_s\\) is much bigger than \\(n_t\\). #### Main findings * The authors proposed the dilated windowed self-attention (Figure c) and showed that it yields better results on language modeling compared to just windowed/sparse self-attention (Figure b). The window sizes are increased through the layers. This pattern further outperforms previous architectures (such as Transformer-XL, or adaptive span attention) on downstream benchmarks. * Global attention allows the information to flow through the whole sequence and applying the global attention to task-motivated tokens (such as the tokens of the question in QA, CLS token for sentence classification) leads to stronger performance on downstream tasks. Using this global pattern, Longformer can be successfully applied to document-level NLP tasks in the transfer learning setting. * Standard pre-trained models can be adapted to long-range inputs by simply replacing the standard self-attention with the long-range self-attention proposed in this paper and then fine-tuning on the downstream task. This avoids costly pre-training specific to long-range inputs. #### Follow-up questions * The increasing size (throughout the layers) of the dilated windowed self-attention echoes findings in computer vision on increasing the receptive field of stacked CNN. How do these two findings relate? What are the transposable learnings? * Longformer’s Encoder-Decoder architecture works well for tasks that do not require a long target length (e.g. summarization). However, how would it work for long-range seq2seq tasks which require a long target length (e.g. document translation, speech recognition, etc.) especially considering the cross-attention layer of encoder-decoder’s models? 
* In practice, the sliding window self-attention relies on many indexing operations to ensure a symmetric query-key weights matrix. Those operations are very slow on TPUs which highlights the question of the applicability of such patterns on other hardware. ### [Compressive Transformers for Long-Range Sequence Modelling](https://arxiv.org/abs/1911.05507) Jack W. Rae, Anna Potapenko, Siddhant M. Jayakumar, Timothy P. Lillicrap [Transformer-XL (2019)](https://arxiv.org/abs/1901.02860) showed that caching previously computed layer activations in a memory can boost performance on language modeling tasks (such as *enwik8*). Instead of just attending the current \\(n\\) input tokens, the model can also attend to the past \\(n_m\\) tokens, with \\(n_m\\) being the memory size of the model. Transformer-XL has a memory complexity of \\(O(n^2+ n n_m)\\), which shows that memory cost can increase significantly for very large \\(n_m\\). Hence, Transformer-XL has to eventually discard past activations from the memory when the number of cached activations gets larger than \\(n_m\\). Compressive Transformer addresses this problem by adding an additional compressed memory to efficiently cache past activations that would have otherwise eventually been discarded. This way the model can learn better long-range sequence dependencies having access to significantly more past activations. <figure> <img src="/blog/assets/14_long_range_transformers/CompressiveTransformer.png" alt="Compressive Tranformer recurrence"/> <figcaption>Figure taken from Compressive Transfomer</figcaption> </figure> A compression factor \\(c\\) (equal to 3 in the illustration) is chosen to decide the rate at which past activations are compressed. The authors experiment with different compression functions \\(f_c\\) such as max/mean pooling (parameter-free) and 1D convolution (trainable layer). The compression function is trained with backpropagation through time or local auxiliary compression losses. In addition to the current input of length \\(n\\), the model attends to \\(n_m\\) cached activations in the regular memory and \\(n_{cm}\\) compressed memory activations allowing a long temporal dependency of \\(l × (n_m + c n_{cm})\\), with \\(l\\) being the number of attention layers. This increases Transformer-XL’s range by additional \\(l × c × n_{cm}\\) tokens and the memory cost amounts to \\(O(n^2+ n n_m+ n n_{cm})\\). Experiments are conducted on Reinforcement learning, audio generation, and natural language processing. The authors also introduce a new long-range language modeling benchmark called [PG19](https://huggingface.co/datasets/pg19). #### Main findings * Compressive Transformer significantly outperforms the state-of-the-art perplexity on language modeling, namely on the enwik8 and WikiText-103 datasets. In particular, compressed memory plays a crucial role in modeling rare words occurring on long sequences. * The authors show that the model learns to preserve salient information by increasingly attending the compressed memory instead of the regular memory, which goes against the trend of older memories being accessed less frequently. * All compression functions (average pooling, max pooling, 1D convolution) yield similar results confirming that memory compression is an effective way to store past information. 
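As a toy illustration of the compression step described above, here is a sketch of the parameter-free mean-pooling variant (not the authors' implementation):

```python
import torch

def compress_memories(old_activations: torch.Tensor, c: int = 3) -> torch.Tensor:
    """Mean-pool every `c` consecutive past activations into one compressed slot.

    old_activations: (seq_len, hidden_size) activations that would otherwise be
    discarded from the regular memory.
    """
    seq_len, hidden_size = old_activations.shape
    usable = seq_len - seq_len % c  # drop a remainder that does not fill a full slot
    return old_activations[:usable].reshape(-1, c, hidden_size).mean(dim=1)

# e.g. 12 discarded activations with hidden size 8 become 4 compressed memories
compressed = compress_memories(torch.randn(12, 8), c=3)
print(compressed.shape)  # torch.Size([4, 8])
```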
#### Follow-up questions

* Compressive Transformer requires a special optimization schedule in which the effective batch size is progressively increased to avoid significant performance degradation for lower learning rates. This effect is not well understood and calls for more analysis.
* The Compressive Transformer has many more hyperparameters compared to a simple model like BERT or GPT2: the compression rate, the compression function and loss, the regular and compressed memory sizes, etc. It is not clear whether those parameters generalize well across different tasks (other than language modeling) or whether, similar to the learning rate, they make training very brittle.
* It would be interesting to probe the regular memory and compressed memory to analyze what kind of information is memorized through the long sequences. Shedding light on the most salient pieces of information can inform methods such as [Funnel Transformer](https://arxiv.org/abs/2006.03236) which reduces the redundancy in maintaining a full-length token-level sequence.

### [Linformer: Self-Attention with Linear Complexity](https://arxiv.org/abs/2006.04768)

Sinong Wang, Belinda Z. Li, Madian Khabsa, Han Fang, Hao Ma

The goal is to reduce the complexity of the self-attention with respect to the sequence length \\(n\\) from quadratic to linear. This paper makes the observation that the attention matrices are low rank (i.e. they don’t contain \\(n × n\\) worth of information) and explores the possibility of using high-dimensional data compression techniques to build more memory efficient transformers.

The theoretical foundations of the proposed approach are based on the Johnson-Lindenstrauss lemma. Let’s consider \\(m\\) points in a high-dimensional space. We want to project them to a low-dimensional space while preserving the structure of the dataset (i.e. the mutual distances between points) with a margin of error \\(\varepsilon\\). The Johnson-Lindenstrauss lemma states we can choose a small dimension \\(k \sim 8 \log(m) / \varepsilon^2\\) and find a suitable projection into \\(\mathbb{R}^k\\) in polynomial time by simply trying random orthogonal projections.

Linformer projects the sequence length into a smaller dimension by learning a low-rank decomposition of the attention context matrix. The matrix multiplication of the self-attention can then be cleverly re-written such that no matrix of size \\(n × n\\) needs to be ever computed and stored (a toy sketch is given after the main findings below).

Standard transformer:

$$\text{Attention}(Q, K, V) = \text{softmax}(Q * K) * V$$

with shapes \\((n × h) = (n × n) × (n × h)\\), so an \\(n × n\\) attention matrix has to be materialized.

Linformer:

$$\text{LinAttention}(Q, K, V) = \text{softmax}(Q * K * W^K) * W^V * V$$

with shapes \\((n × h) = (n × d) × (d × n) × (n × h)\\), so only matrices whose sizes are linear in \\(n\\) are materialized.

#### Main findings

* The self-attention matrix is low-rank which implies that most of its information can be recovered by its first few highest eigenvalues and can be approximated by a low-rank matrix.
* A lot of work focuses on reducing the dimensionality of the hidden states. This paper shows that reducing the sequence length with learned projections can be a strong alternative while shrinking the memory complexity of the self-attention from quadratic to linear.
* Increasing the sequence length doesn’t affect the inference speed (wall-clock time) of Linformer, whereas standard transformers show a linear increase. Moreover, the convergence speed (number of updates) is not impacted by Linformer's self-attention.
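Here is a toy, single-head sketch of the idea (random matrices standing in for the learned projections; multi-head layout and training are ignored):

```python
import torch
import torch.nn.functional as F

def linformer_attention(Q, K, V, E, Fp):
    """Single-head Linformer-style attention.

    Q, K, V: (n, h) query/key/value matrices.
    E, Fp:   (k, n) projections over the sequence-length dimension, with k << n.
    """
    K_proj = E @ K                                # (k, h)
    V_proj = Fp @ V                               # (k, h)
    scores = (Q @ K_proj.T) / K.shape[-1] ** 0.5  # (n, k): no (n, n) matrix is ever built
    return F.softmax(scores, dim=-1) @ V_proj     # (n, h)

n, h, k = 1024, 64, 128
Q, K, V = (torch.randn(n, h) for _ in range(3))
E, Fp = (torch.randn(k, n) / n ** 0.5 for _ in range(2))
print(linformer_attention(Q, K, V, E, Fp).shape)  # torch.Size([1024, 64])
```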
<figure>
  <img src="/blog/assets/14_long_range_transformers/Linformer.png" alt="Linformer performance"/>
  <figcaption>Figure taken from Linformer</figcaption>
</figure>

#### Follow-up questions

* Even though the projection matrices are shared between layers, the approach presented here comes in contrast with the Johnson-Lindenstrauss lemma, which states that random orthogonal projections are sufficient (in polynomial time). Would random projections have worked here? This is reminiscent of Reformer which uses random projections in locality-sensitive hashing to reduce the memory complexity of the self-attention.

### [Rethinking Attention with Performers](https://arxiv.org/abs/2009.14794)

Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, David Belanger, Lucy Colwell, Adrian Weller

The goal is (again!) to reduce the complexity of the self-attention with respect to the sequence length \\(n\\) from quadratic to linear. In contrast to other papers, the authors note that the sparsity and low-rankness priors of the self-attention may not hold in other modalities (speech, protein sequence modeling). Thus the paper explores methods to reduce the memory burden of the self-attention without any priors on the attention matrix.

The authors observe that if we could perform the matrix multiplication \\(K × V\\) through the softmax ( \\(\text{softmax}(Q × K) × V\\) ), we wouldn’t have to compute the \\(Q × K\\) matrix of size \\(n × n\\) which is the memory bottleneck. They use random feature maps (aka random projections) to approximate the softmax by:

$$\text{softmax}(Q * K) \sim Q’ * K’ = \phi(Q) * \phi(K)$$

where \\(\phi\\) is a suitable non-linear function. And then:

$$\text{Attention}(Q, K, V) \sim \phi(Q) * (\phi(K) * V)$$

Taking inspiration from machine learning papers from the early 2000s, the authors introduce **FAVOR+** (**F**ast **A**ttention **V**ia **O**rthogonal **R**andom positive (**+**) **F**eatures), a procedure to find unbiased or nearly-unbiased estimations of the self-attention matrix, with uniform convergence and low estimation variance.

#### Main findings

* The FAVOR+ procedure can be used to approximate self-attention matrices with high accuracy, without any priors on the form of the attention matrix, making it applicable as a drop-in replacement of standard self-attention and leading to strong performances in multiple applications and modalities.
* The very thorough mathematical investigation of how to (and how not to) approximate softmax highlights the relevance of principled methods developed in the early 2000s even in the deep learning era.
* FAVOR+ can also be applied to efficiently model other kernelizable attention mechanisms beyond softmax.

#### Follow-up questions

* Even if the approximation of the attention mechanism is tight, small errors propagate through the transformer layers. This raises the question of the convergence and stability of fine-tuning a pre-trained network with FAVOR+ as an approximation of self-attention.
* The FAVOR+ algorithm is the combination of multiple components. It is not clear which of these components have the most empirical impact on the performance, especially in view of the variety of modalities considered in this work.

## Reading group discussion

The developments in pre-trained transformer-based language models for natural language understanding and generation are impressive.
Making these systems efficient for production purposes has become a very active research area. This emphasizes that we still have much to learn and build both on the methodological and practical sides to enable efficient and general deep learning based systems, in particular for applications that require modeling long-range inputs. The four papers above offer different ways to deal with the quadratic memory complexity of the self-attention mechanism, usually by reducing it to linear complexity. Linformer and Longformer both rely on the observation that the self-attention matrix does not contain \\(n × n\\) worth of information (the attention matrix is low-rank and sparse). Performer gives a principled method to approximate the softmax-attention kernel (and any kernelizable attention mechanisms beyond softmax). Compressive Transformer offers an orthogonal approach to model long range dependencies based on recurrence. These different inductive biases have implications in terms of computational speed and generalization beyond the training setup. In particular, Linformer and Longformer lead to different trade-offs: Longformer explicitly designs the sparse attention patterns of the self-attention (fixed patterns) while Linformer learns the low-rank matrix factorization of the self-attention matrix. In our experiments, Longformer is less efficient than Linformer, and is currently highly dependent on implementation details. On the other hand, Linformer’s decomposition only works for fixed context length (fixed at training) and cannot generalize to longer sequences without specific adaptation. Moreover, it cannot cache previous activations which can be extremely useful in the generative setup. Interestingly, Performer is conceptually different: it learns to approximate the softmax attention kernel without relying on any sparsity or low-rank assumption. The question of how these inductive biases compare to each other for varying quantities of training data remains. All these works highlight the importance of long-range inputs modeling in natural language. In the industry, it is common to encounter use-cases such as document translation, document classification or document summarization which require modeling very long sequences in an efficient and robust way. Recently, zero-shot examples priming (a la GPT3) has also emerged as a promising alternative to standard fine-tuning, and increasing the number of priming examples (and thus the context size) steadily increases the performance and robustness. Finally, it is common in other modalities such as speech or protein modeling to encounter long sequences beyond the standard 512 time steps. Modeling long inputs is not antithetical to modeling short inputs but instead should be thought from the perspective of a continuum from shorter to longer sequences. [Shortformer](https://arxiv.org/abs/2012.15832), Longformer and BERT provide evidence that training the model on short sequences and gradually increasing sequence lengths lead to an accelerated training and stronger downstream performance. This observation is coherent with the intuition that the long-range dependencies acquired when little data is available can rely on spurious correlations instead of robust language understanding. This echoes some experiments Teven Le Scao has run on language modeling: LSTMs are stronger learners in the low data regime compared to transformers and give better perplexities on small-scale language modeling benchmarks such as Penn Treebank. 
From a practical point of view, the question of positional embeddings is also a crucial methodological aspect with computational efficiency trade-offs. Relative positional embeddings (introduced in Transformer-XL and used in Compressive Transformers) are appealing because they can easily be extended to yet-unseen sequence lengths, but at the same time, relative positional embeddings are computationally expensive. On the other hand, absolute positional embeddings (used in Longformer and Linformer) are less flexible for sequences longer than the ones seen during training, but are computationally more efficient. Interestingly, [Shortformer](https://arxiv.org/abs/2012.15832) introduces a simple alternative by adding the positional information to the queries and keys of the self-attention mechanism instead of adding it to the token embeddings. The method is called position-infused attention and is shown to be very efficient while producing strong results.

## @Hugging Face 🤗: Long-range modeling

The Longformer implementation and the associated open-source checkpoints are available through the Transformers library and the [model hub](https://huggingface.co/models?search=longformer). Performer and Big Bird, which is a long-range model based on sparse attention, are currently in the works as part of our [call for models](https://twitter.com/huggingface/status/1359903233976762368), an effort involving the community in order to promote open-source contributions. We would be pumped to hear from you if you’ve wondered how to contribute to `transformers` but did not know where to start!

For further reading, we recommend checking Patrick Von Platen’s blog on [Reformer](https://arxiv.org/abs/2001.04451), Teven Le Scao’s post on [Johnson-Lindenstrauss approximation](https://tevenlescao.github.io/blog/fastpages/jupyter/2020/06/18/JL-Lemma-+-Linformer.html), [Efficient Transformers: A Survey](https://arxiv.org/abs/2009.06732), and [Long Range Arena: A Benchmark for Efficient Transformers](https://arxiv.org/abs/2011.04006).

Next month, we'll cover self-training methods and applications. See you in March!
Fine-Tune Wav2Vec2 for English ASR with 🤗 Transformers
patrickvonplaten
March 12, 2021
fine-tune-wav2vec2-english
guide, audio
https://huggingface.co/blog/fine-tune-wav2vec2-english
# Fine-Tune Wav2Vec2 for English ASR with 🤗 Transformers

<a target="_blank" href="https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_tuning_Wav2Vec2_for_English_ASR.ipynb">
    <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>

Wav2Vec2 is a pretrained model for Automatic Speech Recognition (ASR) and was released in [September 2020](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) by Alexei Baevski, Michael Auli, and Alex Conneau. Using a novel contrastive pretraining objective, Wav2Vec2 learns powerful speech representations from more than 50,000 hours of unlabeled speech. Similar to [BERT's masked language modeling](http://jalammar.github.io/illustrated-bert/), the model learns contextualized speech representations by randomly masking feature vectors before passing them to a transformer network.

![wav2vec2_structure](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/wav2vec2.png)

For the first time, it has been shown that pretraining, followed by fine-tuning on very little labeled speech data, achieves results competitive with state-of-the-art ASR systems. Using as little as 10 minutes of labeled data, Wav2Vec2 yields a word error rate (WER) of less than 5% on the clean test set of [LibriSpeech](https://huggingface.co/datasets/librispeech_asr) - *cf.* with Table 9 of the [paper](https://arxiv.org/pdf/2006.11477.pdf).

In this notebook, we will give an in-depth explanation of how Wav2Vec2's pretrained checkpoints can be fine-tuned on any English ASR dataset. Note that in this notebook, we will fine-tune Wav2Vec2 without making use of a language model. It is much simpler to use Wav2Vec2 without a language model as an end-to-end ASR system and it has been shown that a standalone Wav2Vec2 acoustic model achieves impressive results. For demonstration purposes, we fine-tune the "base"-sized [pretrained checkpoint](https://huggingface.co/facebook/wav2vec2-base) on the rather small [Timit](https://huggingface.co/datasets/timit_asr) dataset that contains just 5h of training data.

Wav2Vec2 is fine-tuned using Connectionist Temporal Classification (CTC), which is an algorithm used to train neural networks for sequence-to-sequence problems, mainly in automatic speech recognition and handwriting recognition. I highly recommend reading the very well-written blog post [Sequence Modeling with CTC (2017)](https://distill.pub/2017/ctc/) by Awni Hannun.

Before we start, let's install both `datasets` and `transformers` from master. Also, we need the `librosa` package to load audio files and `jiwer` to evaluate our fine-tuned model using the [word error rate (WER)](https://huggingface.co/metrics/wer) metric \\({}^1\\).

```bash
!pip install datasets>=1.18.3
!pip install transformers==4.11.3
!pip install librosa
!pip install jiwer
```

Next, we strongly suggest uploading your training checkpoints directly to the [Hugging Face Hub](https://huggingface.co/) while training. The Hub has integrated version control so you can be sure that no model checkpoint is getting lost during training.

To do so, you have to store your authentication token from the Hugging Face website (sign up [here](https://huggingface.co/join) if you haven't already!)
```python
from huggingface_hub import notebook_login

notebook_login()
```

**Print Output:**

```bash
Login successful
Your token has been saved to /root/.huggingface/token
Authenticated through git-credential store but this isn't the helper defined on your machine.
You will have to re-authenticate when pushing to the Hugging Face Hub. Run the following command in your terminal to set it as the default

git config --global credential.helper store
```

Then you need to install Git-LFS to upload your model checkpoints:

```python
!apt install git-lfs
```

------------------------------------------------------------------------

\\({}^1\\) Timit is usually evaluated using the phoneme error rate (PER), but by far the most common metric in ASR is the word error rate (WER). To keep this notebook as general as possible we decided to evaluate the model using WER.

Prepare Data, Tokenizer, Feature Extractor
------------------------------------------

ASR models transcribe speech to text, which means that we both need a feature extractor that processes the speech signal to the model's input format, *e.g.* a feature vector, and a tokenizer that processes the model's output format to text.

In 🤗 Transformers, the Wav2Vec2 model is thus accompanied by both a tokenizer, called [Wav2Vec2CTCTokenizer](https://huggingface.co/transformers/master/model_doc/wav2vec2.html#wav2vec2ctctokenizer), and a feature extractor, called [Wav2Vec2FeatureExtractor](https://huggingface.co/transformers/master/model_doc/wav2vec2.html#wav2vec2featureextractor).

Let's start by creating the tokenizer responsible for decoding the model's predictions.

### Create Wav2Vec2CTCTokenizer

The [pretrained Wav2Vec2 checkpoint](https://huggingface.co/facebook/wav2vec2-base) maps the speech signal to a sequence of context representations as illustrated in the figure above. A fine-tuned Wav2Vec2 checkpoint needs to map this sequence of context representations to its corresponding transcription, so a linear layer has to be added on top of the transformer block (shown in yellow). This linear layer is used to classify each context representation to a token class, analogous to how, *e.g.*, after pretraining a linear layer is added on top of BERT's embeddings for further classification - *cf.* with the *"BERT"* section of this [blog post](https://huggingface.co/blog/warm-starting-encoder-decoder).

The output size of this layer corresponds to the number of tokens in the vocabulary, which does **not** depend on Wav2Vec2's pretraining task, but only on the labeled dataset used for fine-tuning. So in the first step, we will take a look at Timit and define a vocabulary based on the dataset's transcriptions.

Let's start by loading the dataset and taking a look at its structure.

```python
from datasets import load_dataset, load_metric

timit = load_dataset("timit_asr")

print(timit)
```

**Print Output:**

```bash
DatasetDict({
    train: Dataset({
        features: ['file', 'audio', 'text', 'phonetic_detail', 'word_detail', 'dialect_region', 'sentence_type', 'speaker_id', 'id'],
        num_rows: 4620
    })
    test: Dataset({
        features: ['file', 'audio', 'text', 'phonetic_detail', 'word_detail', 'dialect_region', 'sentence_type', 'speaker_id', 'id'],
        num_rows: 1680
    })
})
```

Many ASR datasets only provide the target text, `'text'`, for each audio file `'file'`.
Timit actually provides much more information about each audio file, such as the `'phonetic_detail'`, etc., which is why many researchers choose to evaluate their models on phoneme classification instead of speech recognition when working with Timit. However, we want to keep the notebook as general as possible, so that we will only consider the transcribed text for fine-tuning. ```python timit = timit.remove_columns(["phonetic_detail", "word_detail", "dialect_region", "id", "sentence_type", "speaker_id"]) ``` Let\'s write a short function to display some random samples of the dataset and run it a couple of times to get a feeling for the transcriptions. ```python from datasets import ClassLabel import random import pandas as pd from IPython.display import display, HTML def show_random_elements(dataset, num_examples=10): assert num_examples <= len(dataset), "Can't pick more elements than there are in the dataset." picks = [] for _ in range(num_examples): pick = random.randint(0, len(dataset)-1) while pick in picks: pick = random.randint(0, len(dataset)-1) picks.append(pick) df = pd.DataFrame(dataset[picks]) display(HTML(df.to_html())) show_random_elements(timit["train"].remove_columns(["file", "audio"])) ``` **Print Output:** | Idx | Transcription | |----------|:-------------:| | 1 | Who took the kayak down the bayou? | | 2 | As such it acts as an anchor for the people. | | 3 | She had your dark suit in greasy wash water all year. | | 4 | We're not drunkards, she said. | | 5 | The most recent geological survey found seismic activity. | | 6 | Alimony harms a divorced man's wealth. | | 7 | Our entire economy will have a terrific uplift. | | 8 | Don't ask me to carry an oily rag like that. | | 9 | The gorgeous butterfly ate a lot of nectar. | | 10 | Where're you takin' me? | Alright! The transcriptions look very clean and the language seems to correspond more to written text than dialogue. This makes sense taking into account that [Timit](https://huggingface.co/datasets/timit_asr) is a read speech corpus. We can see that the transcriptions contain some special characters, such as `,.?!;:`. Without a language model, it is much harder to classify speech chunks to such special characters because they don\'t really correspond to a characteristic sound unit. *E.g.*, the letter `"s"` has a more or less clear sound, whereas the special character `"."` does not. Also in order to understand the meaning of a speech signal, it is usually not necessary to include special characters in the transcription. In addition, we normalize the text to only have lower case letters. ```python import re chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"]' def remove_special_characters(batch): batch["text"] = re.sub(chars_to_ignore_regex, '', batch["text"]).lower() return batch timit = timit.map(remove_special_characters) ``` Let's take a look at the preprocessed transcriptions. ```python show_random_elements(timit["train"].remove_columns(["file", "audio"])) ``` **Print Output:** | Idx | Transcription | |----------|:-------------:| | 1 | anyhow it was high time the boy was salted | | 2 | their basis seems deeper than mere authority | | 3 | only the best players enjoy popularity | | 4 | tornados often destroy acres of farm land | | 5 | where're you takin' me | | 6 | soak up local color | | 7 | satellites sputniks rockets balloons what next | | 8 | i gave them several choices and let them set the priorities | | 9 | reading in poor light gives you eyestrain | | 10 | that dog chases cats mercilessly | Good! This looks better. 
We have removed most special characters from transcriptions and normalized them to lower-case only. In CTC, it is common to classify speech chunks into letters, so we will do the same here. Let\'s extract all distinct letters of the training and test data and build our vocabulary from this set of letters. We write a mapping function that concatenates all transcriptions into one long transcription and then transforms the string into a set of chars. It is important to pass the argument `batched=True` to the `map(...)` function so that the mapping function has access to all transcriptions at once. ```python def extract_all_chars(batch): all_text = " ".join(batch["text"]) vocab = list(set(all_text)) return {"vocab": [vocab], "all_text": [all_text]} vocabs = timit.map(extract_all_chars, batched=True, batch_size=-1, keep_in_memory=True, remove_columns=timit.column_names["train"]) ``` Now, we create the union of all distinct letters in the training dataset and test dataset and convert the resulting list into an enumerated dictionary. ```python vocab_list = list(set(vocabs["train"]["vocab"][0]) | set(vocabs["test"]["vocab"][0])) vocab_dict = {v: k for k, v in enumerate(vocab_list)} vocab_dict ``` **Print Output:** ```bash { ' ': 21, "'": 13, 'a': 24, 'b': 17, 'c': 25, 'd': 2, 'e': 9, 'f': 14, 'g': 22, 'h': 8, 'i': 4, 'j': 18, 'k': 5, 'l': 16, 'm': 6, 'n': 7, 'o': 10, 'p': 19, 'q': 3, 'r': 20, 's': 11, 't': 0, 'u': 26, 'v': 27, 'w': 1, 'x': 23, 'y': 15, 'z': 12 } ``` Cool, we see that all letters of the alphabet occur in the dataset (which is not really surprising) and we also extracted the special characters `" "` and `'`. Note that we did not exclude those special characters because: - The model has to learn to predict when a word finished or else the model prediction would always be a sequence of chars which would make it impossible to separate words from each other. - In English, we need to keep the `'` character to differentiate between words, *e.g.*, `"it's"` and `"its"` which have very different meanings. To make it clearer that `" "` has its own token class, we give it a more visible character `|`. In addition, we also add an \"unknown\" token so that the model can later deal with characters not encountered in Timit\'s training set. ```python vocab_dict["|"] = vocab_dict[" "] del vocab_dict[" "] ``` Finally, we also add a padding token that corresponds to CTC\'s \"*blank token*\". The \"blank token\" is a core component of the CTC algorithm. For more information, please take a look at the \"Alignment\" section [here](https://distill.pub/2017/ctc/). ```python vocab_dict["[UNK]"] = len(vocab_dict) vocab_dict["[PAD]"] = len(vocab_dict) print(len(vocab_dict)) ``` **Print Output:** ```bash 30 ``` Cool, now our vocabulary is complete and consists of 30 tokens, which means that the linear layer that we will add on top of the pretrained Wav2Vec2 checkpoint will have an output dimension of 30. Let\'s now save the vocabulary as a json file. ```python import json with open('vocab.json', 'w') as vocab_file: json.dump(vocab_dict, vocab_file) ``` In a final step, we use the json file to instantiate an object of the `Wav2Vec2CTCTokenizer` class. ```python from transformers import Wav2Vec2CTCTokenizer tokenizer = Wav2Vec2CTCTokenizer("./vocab.json", unk_token="[UNK]", pad_token="[PAD]", word_delimiter_token="|") ``` If one wants to re-use the just created tokenizer with the fine-tuned model of this notebook, it is strongly advised to upload the `tokenizer` to the [🤗 Hub](https://huggingface.co/). 
Let's call the repo to which we will upload the files `"wav2vec2-base-timit-demo-colab"`:

```python
repo_name = "wav2vec2-base-timit-demo-colab"
```

and upload the tokenizer to the [🤗 Hub](https://huggingface.co/).

```python
tokenizer.push_to_hub(repo_name)
```

Great, you can see the just created repository under `https://huggingface.co/<your-username>/wav2vec2-base-timit-demo-colab`

### Create Wav2Vec2 Feature Extractor

Speech is a continuous signal, and to be treated by computers, it first has to be discretized, which is usually called **sampling**. The sampling rate hereby plays an important role in that it defines how many data points of the speech signal are measured per second. Therefore, sampling with a higher sampling rate results in a better approximation of the *real* speech signal but also necessitates more values per second.

A pretrained checkpoint expects its input data to have been sampled more or less from the same distribution as the data it was trained on. The same speech signals sampled at two different rates have a very different distribution, *e.g.*, doubling the sampling rate results in data points being twice as long. Thus, before fine-tuning a pretrained checkpoint of an ASR model, it is crucial to verify that the sampling rate of the data that was used to pretrain the model matches the sampling rate of the dataset used to fine-tune the model.

Wav2Vec2 was pretrained on the audio data of [LibriSpeech](https://huggingface.co/datasets/librispeech_asr) and LibriVox, which were both sampled at 16kHz. Our fine-tuning dataset, [Timit](https://huggingface.co/datasets/timit_asr), was luckily also sampled at 16kHz. If the fine-tuning dataset had been sampled at a rate lower or higher than 16kHz, we would first have had to upsample or downsample the speech signal to match the sampling rate of the data used for pretraining (a short sketch of this is shown after this section).

A Wav2Vec2 feature extractor object requires the following parameters to be instantiated:

- `feature_size`: Speech models take a sequence of feature vectors as an input. While the length of this sequence obviously varies, the feature size should not. In the case of Wav2Vec2, the feature size is 1 because the model was trained on the raw speech signal \\({}^2\\).
- `sampling_rate`: The sampling rate at which the model was trained.
- `padding_value`: For batched inference, shorter inputs need to be padded with a specific value
- `do_normalize`: Whether the input should be *zero-mean-unit-variance* normalized or not. Usually, speech models perform better when normalizing the input
- `return_attention_mask`: Whether the model should make use of an `attention_mask` for batched inference. In general, models should **always** make use of the `attention_mask` to mask padded tokens. However, due to a very specific design choice of `Wav2Vec2`'s "base" checkpoint, better results are achieved when using no `attention_mask`. This is **not** recommended for other speech models. For more information, one can take a look at [this](https://github.com/pytorch/fairseq/issues/3227) issue. **Important** If you want to use this notebook to fine-tune [large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60), this parameter should be set to `True`.

```python
from transformers import Wav2Vec2FeatureExtractor

feature_extractor = Wav2Vec2FeatureExtractor(feature_size=1, sampling_rate=16000, padding_value=0.0, do_normalize=True, return_attention_mask=False)
```

Great, Wav2Vec2's feature extraction pipeline is thereby fully defined!
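As an aside, here is a minimal sketch of the resampling step mentioned above, using 🤗 Datasets' `Audio` feature. Timit is already sampled at 16kHz, so this cast is a no-op for this notebook and is shown purely for illustration:

```python
from datasets import Audio

# for a corpus recorded at a different rate, this makes `datasets` decode and
# resample the audio to 16kHz on the fly; for Timit it changes nothing
timit = timit.cast_column("audio", Audio(sampling_rate=16_000))
```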
To make the usage of Wav2Vec2 as user-friendly as possible, the feature extractor and tokenizer are *wrapped* into a single `Wav2Vec2Processor` class so that one only needs a `model` and `processor` object. ```python from transformers import Wav2Vec2Processor processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer) ``` ### Preprocess Data So far, we have not looked at the actual values of the speech signal but just the transcription. In addition to sentence, our datasets include two more column names path and audio. path states the absolute path of the audio file. Let's take a look. ```python print(timit[0]["path"]) ``` **Print Output:** ```bash '/root/.cache/huggingface/datasets/downloads/extracted/404950a46da14eac65eb4e2a8317b1372fb3971d980d91d5d5b221275b1fd7e0/data/TRAIN/DR4/MMDM0/SI681.WAV' ``` **`Wav2Vec2`** expects the input in the format of a 1-dimensional array of 16 kHz. This means that the audio file has to be loaded and resampled. Thankfully, datasets does this automatically by calling the other column audio. Let try it out. ```python common_voice_train[0]["audio"] ``` **Print Output:** ```bash {'array': array([-2.1362305e-04, 6.1035156e-05, 3.0517578e-05, ..., -3.0517578e-05, -9.1552734e-05, -6.1035156e-05], dtype=float32), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/404950a46da14eac65eb4e2a8317b1372fb3971d980d91d5d5b221275b1fd7e0/data/TRAIN/DR4/MMDM0/SI681.WAV', 'sampling_rate': 16000} ``` We can see that the audio file has automatically been loaded. This is thanks to the new [`"Audio" feature`](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=audio#datasets.Audio) introduced in datasets == 4.13.3, which loads and resamples audio files on-the-fly upon calling. The sampling rate is set to 16kHz which is what `Wav2Vec2` expects as an input. Great, let's listen to a couple of audio files to better understand the dataset and verify that the audio was correctly loaded. ```python import IPython.display as ipd import numpy as np import random rand_int = random.randint(0, len(timit["train"])) print(timit["train"][rand_int]["text"]) ipd.Audio(data=np.asarray(timit["train"][rand_int]["audio"]["array"]), autoplay=True, rate=16000) ``` It can be heard, that the speakers change along with their speaking rate, accent, etc. Overall, the recordings sound relatively clear though, which is to be expected from a read speech corpus. Let's do a final check that the data is correctly prepared, by printing the shape of the speech input, its transcription, and the corresponding sampling rate. ```python rand_int = random.randint(0, len(timit["train"])) print("Target text:", timit["train"][rand_int]["text"]) print("Input array shape:", np.asarray(timit["train"][rand_int]["audio"]["array"]).shape) print("Sampling rate:", timit["train"][rand_int]["audio"]["sampling_rate"]) ``` **Print Output:** ```bash Target text: she had your dark suit in greasy wash water all year Input array shape: (52941,) Sampling rate: 16000 ``` Good! Everything looks fine - the data is a 1-dimensional array, the sampling rate always corresponds to 16kHz, and the target text is normalized. Finally, we can process the dataset to the format expected by the model for training. We will make use of the `map(...)` function. First, we load and resample the audio data, simply by calling `batch["audio"]`. Second, we extract the `input_values` from the loaded audio file. In our case, the `Wav2Vec2Processor` only normalizes the data. 
For other speech models, however, this step can include more complex feature extraction, such as [Log-Mel feature extraction](https://en.wikipedia.org/wiki/Mel-frequency_cepstrum). Third, we encode the transcriptions to label ids.

**Note**: This mapping function is a good example of how the `Wav2Vec2Processor` class should be used. In a "normal" context, calling `processor(...)` is redirected to `Wav2Vec2FeatureExtractor`'s call method. When wrapping the processor into the `as_target_processor` context, however, the same method is redirected to `Wav2Vec2CTCTokenizer`'s call method. For more information please check the [docs](https://huggingface.co/transformers/master/model_doc/wav2vec2.html#transformers.Wav2Vec2Processor.__call__).

```python
def prepare_dataset(batch):
    audio = batch["audio"]

    # batched output is "un-batched" to ensure mapping is correct
    batch["input_values"] = processor(audio["array"], sampling_rate=audio["sampling_rate"]).input_values[0]

    with processor.as_target_processor():
        batch["labels"] = processor(batch["text"]).input_ids
    return batch
```

Let's apply the data preparation function to all examples.

```python
timit = timit.map(prepare_dataset, remove_columns=timit.column_names["train"], num_proc=4)
```

**Note**: Currently, `datasets` makes use of [`torchaudio`](https://pytorch.org/audio/stable/index.html) and [`librosa`](https://librosa.org/doc/latest/index.html) for audio loading and resampling. If you wish to implement your own customized data loading/sampling, feel free to just make use of the `"path"` column instead and disregard the `"audio"` column.

Training & Evaluation
---------------------

The data is processed so that we are ready to start setting up the training pipeline. We will make use of 🤗's [Trainer](https://huggingface.co/transformers/master/main_classes/trainer.html?highlight=trainer) for which we essentially need to do the following:

- Define a data collator. In contrast to most NLP models, Wav2Vec2 has a much larger input length than output length. *E.g.*, a sample of input length 50000 has an output length of no more than 100. Given the large input sizes, it is much more efficient to pad the training batches dynamically, meaning that all training samples should only be padded to the longest sample in their batch and not the overall longest sample. Therefore, fine-tuning Wav2Vec2 requires a special padding data collator, which we will define below
- Evaluation metric. During training, the model should be evaluated on the word error rate. We should define a `compute_metrics` function accordingly
- Load a pretrained checkpoint. We need to load a pretrained checkpoint and configure it correctly for training.
- Define the training configuration.

After having fine-tuned the model, we will correctly evaluate it on the test data and verify that it has indeed learned to correctly transcribe speech.

### Set-up Trainer

Let's start by defining the data collator. The code for the data collator was copied from [this example](https://github.com/huggingface/transformers/blob/7e61d56a45c19284cfda0cee8995fb552f6b1f4e/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py#L219).

Without going into too many details, in contrast to the common data collators, this data collator treats the `input_values` and `labels` differently and thus applies separate padding functions to them (again making use of Wav2Vec2's context manager).
This is necessary because in speech input and output are of different modalities meaning that they should not be treated by the same padding function. Analogous to the common data collators, the padding tokens in the labels with `-100` so that those tokens are **not** taken into account when computing the loss. ```python import torch from dataclasses import dataclass, field from typing import Any, Dict, List, Optional, Union @dataclass class DataCollatorCTCWithPadding: """ Data collator that will dynamically pad the inputs received. Args: processor (:class:`~transformers.Wav2Vec2Processor`) The processor used for proccessing the data. padding (:obj:`bool`, :obj:`str` or :class:`~transformers.tokenization_utils_base.PaddingStrategy`, `optional`, defaults to :obj:`True`): Select a strategy to pad the returned sequences (according to the model's padding side and padding index) among: * :obj:`True` or :obj:`'longest'`: Pad to the longest sequence in the batch (or no padding if only a single sequence if provided). * :obj:`'max_length'`: Pad to a maximum length specified with the argument :obj:`max_length` or to the maximum acceptable input length for the model if that argument is not provided. * :obj:`False` or :obj:`'do_not_pad'` (default): No padding (i.e., can output a batch with sequences of different lengths). max_length (:obj:`int`, `optional`): Maximum length of the ``input_values`` of the returned list and optionally padding length (see above). max_length_labels (:obj:`int`, `optional`): Maximum length of the ``labels`` returned list and optionally padding length (see above). pad_to_multiple_of (:obj:`int`, `optional`): If set will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta). """ processor: Wav2Vec2Processor padding: Union[bool, str] = True max_length: Optional[int] = None max_length_labels: Optional[int] = None pad_to_multiple_of: Optional[int] = None pad_to_multiple_of_labels: Optional[int] = None def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]: # split inputs and labels since they have to be of different lengths and need # different padding methods input_features = [{"input_values": feature["input_values"]} for feature in features] label_features = [{"input_ids": feature["labels"]} for feature in features] batch = self.processor.pad( input_features, padding=self.padding, max_length=self.max_length, pad_to_multiple_of=self.pad_to_multiple_of, return_tensors="pt", ) with self.processor.as_target_processor(): labels_batch = self.processor.pad( label_features, padding=self.padding, max_length=self.max_length_labels, pad_to_multiple_of=self.pad_to_multiple_of_labels, return_tensors="pt", ) # replace padding with -100 to ignore loss correctly labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100) batch["labels"] = labels return batch ``` Let's initialize the data collator. ```python data_collator = DataCollatorCTCWithPadding(processor=processor, padding=True) ``` Next, the evaluation metric is defined. As mentioned earlier, the predominant metric in ASR is the word error rate (WER), hence we will use it in this notebook as well. ```python wer_metric = load_metric("wer") ``` The model will return a sequence of logit vectors: $$ \mathbf{y}_1, \ldots, \mathbf{y}_m $$, with \\(\mathbf{y}_1 = f_{\theta}(x_1, \ldots, x_n)[0]\\) and \\(n >> m\\). 
A logit vector \\( \mathbf{y}_i \\) contains the log-odds for each token in the vocabulary we defined earlier, thus \\(\text{len}(\mathbf{y}_i) =\\) `config.vocab_size`. We are interested in the most likely prediction of the model and thus take the `argmax(...)` of the logits. Also, we transform the encoded labels back to the original string by replacing `-100` with the `pad_token_id` and decoding the ids while making sure that consecutive tokens are **not** grouped to the same token in CTC style \\({}^1\\).

```python
def compute_metrics(pred):
    pred_logits = pred.predictions
    pred_ids = np.argmax(pred_logits, axis=-1)

    pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id

    pred_str = processor.batch_decode(pred_ids)
    # we do not want to group tokens when computing the metrics
    label_str = processor.batch_decode(pred.label_ids, group_tokens=False)

    wer = wer_metric.compute(predictions=pred_str, references=label_str)

    return {"wer": wer}
```

Now, we can load the pretrained `Wav2Vec2` checkpoint. The tokenizer's `pad_token_id` must be used to define the model's `pad_token_id`, or in the case of `Wav2Vec2ForCTC`, also CTC's *blank token* \\({}^2\\). To save GPU memory, we enable PyTorch's [gradient checkpointing](https://pytorch.org/docs/stable/checkpoint.html) and also set the loss reduction to "*mean*".

```python
from transformers import Wav2Vec2ForCTC

model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-base",
    ctc_loss_reduction="mean",
    pad_token_id=processor.tokenizer.pad_token_id,
)
```

**Print Output:**

```bash
Some weights of Wav2Vec2ForCTC were not initialized from the model checkpoint at facebook/wav2vec2-base and are newly initialized: ['lm_head.weight', 'lm_head.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```

The first component of Wav2Vec2 consists of a stack of CNN layers that are used to extract acoustically meaningful - but contextually independent - features from the raw speech signal. This part of the model has already been sufficiently trained during pretraining and, as stated in the [paper](https://arxiv.org/abs/2006.11477), does not need to be fine-tuned anymore. Thus, we can set the `requires_grad` to `False` for all parameters of the *feature extraction* part.

```python
model.freeze_feature_extractor()
```

In a final step, we define all parameters related to training. To give more explanation on some of the parameters:

- `group_by_length` makes training more efficient by grouping training samples of similar input length into one batch. This can significantly speed up training time by heavily reducing the overall number of useless padding tokens that are passed through the model
- `learning_rate` and `weight_decay` were heuristically tuned until fine-tuning became stable. Note that those parameters strongly depend on the Timit dataset and might be suboptimal for other speech datasets.

For more explanations on other parameters, one can take a look at the [docs](https://huggingface.co/transformers/master/main_classes/trainer.html?highlight=trainer#trainingarguments).

During training, a checkpoint will be uploaded asynchronously to the hub every 500 training steps. It allows you to also play around with the demo widget even while your model is still training.

**Note**: If one does not want to upload the model checkpoints to the hub, simply set `push_to_hub=False`.
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
  output_dir=repo_name,
  group_by_length=True,
  per_device_train_batch_size=32,
  evaluation_strategy="steps",
  num_train_epochs=30,
  fp16=True,
  gradient_checkpointing=True,
  save_steps=500,
  eval_steps=500,
  logging_steps=500,
  learning_rate=1e-4,
  weight_decay=0.005,
  warmup_steps=1000,
  save_total_limit=2,
  push_to_hub=True,  # set to False if you do not want to upload checkpoints during training
)
```

Now, all instances can be passed to Trainer and we are ready to start training!

```python
from transformers import Trainer

trainer = Trainer(
    model=model,
    data_collator=data_collator,
    args=training_args,
    compute_metrics=compute_metrics,
    train_dataset=timit["train"],
    eval_dataset=timit["test"],
    tokenizer=processor.feature_extractor,
)
```

------------------------------------------------------------------------

\\({}^1\\) To allow models to become independent of the speaking rate, in CTC, consecutive tokens that are identical are simply grouped as a single token. However, the encoded labels should not be grouped when decoding since they don't correspond to the predicted tokens of the model, which is why the `group_tokens=False` parameter has to be passed. If we didn't pass this parameter, a word like `"hello"` would incorrectly be encoded, and decoded as `"helo"`.

\\({}^2\\) The blank token allows the model to predict a word, such as `"hello"`, by forcing it to insert the blank token between the two l's. A CTC-conform prediction of `"hello"` of our model would be `[PAD] [PAD] "h" "e" "e" "l" "l" [PAD] "l" "o" "o" [PAD]`.

### Training

Training will take between 90 and 180 minutes depending on the GPU allocated to the Google Colab attached to this notebook. While the trained model yields satisfying results on *Timit*'s test data, it is by no means an optimally fine-tuned model. The purpose of this notebook is to demonstrate how Wav2Vec2's [base](https://huggingface.co/facebook/wav2vec2-base), [large](https://huggingface.co/facebook/wav2vec2-large), and [large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) checkpoints can be fine-tuned on any English dataset.

In case you want to use this Google Colab to fine-tune your model, you should make sure that your training doesn't stop due to inactivity. A simple hack to prevent this is to paste the following code into the console of this tab (*right mouse click -> inspect -> Console tab and insert code*).

```javascript
function ConnectButton(){
    console.log("Connect pushed");
    document.querySelector("#top-toolbar > colab-connect-button").shadowRoot.querySelector("#connect").click()
}
setInterval(ConnectButton,60000);
```

```python
trainer.train()
```

Depending on your GPU, it might be possible that you are seeing an `"out-of-memory"` error here. In this case, it's probably best to reduce `per_device_train_batch_size` to 16 or even less and, if necessary, make use of [`gradient_accumulation`](https://huggingface.co/transformers/master/main_classes/trainer.html#trainingarguments).
**Print Output:** | Step | Training Loss | Validation Loss | WER | Runtime | Samples per Second | |---|---|---|---|---|---| | 500 | 3.758100 | 1.686157 | 0.945214 | 97.299000 | 17.266000 | | 1000 | 0.691400 | 0.476487 | 0.391427 | 98.283300 | 17.093000 | | 1500 | 0.202400 | 0.403425 | 0.330715 | 99.078100 | 16.956000 | | 2000 | 0.115200 | 0.405025 | 0.307353 | 98.116500 | 17.122000 | | 2500 | 0.075000 | 0.428119 | 0.294053 | 98.496500 | 17.056000 | | 3000 | 0.058200 | 0.442629 | 0.287299 | 98.871300 | 16.992000 | | 3500 | 0.047600 | 0.442619 | 0.285783 | 99.477500 | 16.888000 | | 4000 | 0.034500 | 0.456989 | 0.282200 | 99.419100 | 16.898000 | The final WER should be below 0.3 which is reasonable given that state-of-the-art phoneme error rates (PER) are just below 0.1 (see [leaderboard](https://paperswithcode.com/sota/speech-recognition-on-timit)) and that WER is usually worse than PER. You can now upload the result of the training to the Hub, just execute this instruction: ```python trainer.push_to_hub() ``` You can now share this model with all your friends, family, favorite pets: they can all load it with the identifier "your-username/the-name-you-picked" so for instance: ```python from transformers import AutoModelForCTC, Wav2Vec2Processor model = AutoModelForCTC.from_pretrained("patrickvonplaten/wav2vec2-base-timit-demo-colab") processor = Wav2Vec2Processor.from_pretrained("patrickvonplaten/wav2vec2-base-timit-demo-colab") ``` ### Evaluation In the final part, we evaluate our fine-tuned model on the test set and play around with it a bit. Let\'s load the `processor` and `model`. ```python processor = Wav2Vec2Processor.from_pretrained(repo_name) model = Wav2Vec2ForCTC.from_pretrained(repo_name) ``` Now, we will make use of the `map(...)` function to predict the transcription of every test sample and to save the prediction in the dataset itself. We will call the resulting dictionary `"results"`. **Note**: we evaluate the test data set with `batch_size=1` on purpose due to this [issue](https://github.com/pytorch/fairseq/issues/3227). Since padded inputs don\'t yield the exact same output as non-padded inputs, a better WER can be achieved by not padding the input at all. ```python def map_to_result(batch): with torch.no_grad(): input_values = torch.tensor(batch["input_values"], device="cuda").unsqueeze(0) logits = model(input_values).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_str"] = processor.batch_decode(pred_ids)[0] batch["text"] = processor.decode(batch["labels"], group_tokens=False) return batch results = timit["test"].map(map_to_result, remove_columns=timit["test"].column_names) ``` Let\'s compute the overall WER now. ```python print("Test WER: {:.3f}".format(wer_metric.compute(predictions=results["pred_str"], references=results["text"]))) ``` **Print Output:** ```bash Test WER: 0.221 ``` 22.1% WER - not bad! Our demo model would have probably made it on the official [leaderboard](https://paperswithcode.com/sota/speech-recognition-on-timit). Let's take a look at some predictions to see what errors are made by the model. 
**Print Output:** ```python show_random_elements(results.remove_columns(["speech", "sampling_rate"])) ``` | pred_str | target_text | |----------|:-------------:| | am to balence your employe you benefits package | aim to balance your employee benefit package | | the fawlg prevented them from ariving on tom | the fog prevented them from arriving on time | | young children should avoide exposure to contagieous diseases | young children should avoid exposure to contagious diseases | | artifficial intelligence is for real | artificial intelligence is for real | | their pcrops were two step latters a chair and a polmb fan | their props were two stepladders a chair and a palm fan | | if people were more generous there would be no need for wealfare | if people were more generous there would be no need for welfare | | the fish began to leep frantically on the surface of the small ac | the fish began to leap frantically on the surface of the small lake | | her right hand eggs whenever the barametric pressur changes | her right hand aches whenever the barometric pressure changes | | only lawyers loved miliunears | only lawyers love millionaires | | the nearest cennagade may not be within wallkin distance | the nearest synagogue may not be within walking distance | It becomes clear that the predicted transcriptions are acoustically very similar to the target transcriptions, but often contain spelling or grammatical errors. This shouldn\'t be very surprising though given that we purely rely on Wav2Vec2 without making use of a language model. Finally, to better understand how CTC works, it is worth taking a deeper look at the exact output of the model. Let\'s run the first test sample through the model, take the predicted ids and convert them to their corresponding tokens. ```python model.to("cuda") with torch.no_grad(): logits = model(torch.tensor(timit["test"][:1]["input_values"], device="cuda")).logits pred_ids = torch.argmax(logits, dim=-1) # convert ids to tokens " ".join(processor.tokenizer.convert_ids_to_tokens(pred_ids[0].tolist())) ``` **Print Output:** ```bash [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] t t h e e | | b b [PAD] u u n n n g g [PAD] a [PAD] [PAD] l l [PAD] o o o [PAD] | w w a a [PAD] s s | | [PAD] [PAD] p l l e e [PAD] [PAD] s s e n n t t t [PAD] l l y y | | | s s [PAD] i i [PAD] t t t [PAD] u u u u [PAD] [PAD] [PAD] a a [PAD] t t e e e d d d | n n e e a a a r | | t h h e | | s s h h h [PAD] o o o [PAD] o o r r [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] ``` The output should make it a bit clearer how CTC works in practice. The model is to some extent invariant to speaking rate since it has learned to either just repeat the same token in case the speech chunk to be classified still corresponds to the same token. This makes CTC a very powerful algorithm for speech recognition since the speech file\'s transcription is often very much independent of its length. I again advise the reader to take a look at [this](https://distill.pub/2017/ctc) very nice blog post to better understand CTC.
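To make that collapsing step concrete, here is a minimal, purely illustrative sketch of CTC-style decoding that mirrors what `processor.batch_decode(...)` conceptually does: merge consecutive repeated token ids, drop the blank (`[PAD]`) token, and turn the `"|"` word delimiter back into a space. The helper name and the shortened example sequence are made up for illustration.

```python
from itertools import groupby

def ctc_collapse(tokens, blank_token="[PAD]", word_delimiter="|"):
    """Collapse a raw CTC prediction into a transcription string (illustrative only)."""
    # 1. merge consecutive identical tokens, e.g. "l l" -> "l"
    merged = [token for token, _ in groupby(tokens)]
    # 2. remove the blank token that separates genuinely repeated characters
    chars = [t for t in merged if t != blank_token]
    # 3. rebuild the string, mapping the word delimiter back to a space
    return "".join(" " if t == word_delimiter else t for t in chars)

raw = ["[PAD]", "t", "t", "h", "e", "e", "|", "|", "b", "b", "[PAD]",
       "u", "n", "n", "g", "a", "l", "o", "o", "|"]
print(ctc_collapse(raw))  # -> "the bungalo " (spelling errors come from the model, not the decoding)
```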
My Journey to a serverless transformers pipeline on Google Cloud
Maxence
March 18, 2021
how-to-deploy-a-pipeline-to-google-clouds
guide
https://huggingface.co/blog/how-to-deploy-a-pipeline-to-google-clouds
# My Journey to a serverless transformers pipeline on <br>Google Cloud

> ##### A guest blog post by community member <a href="/Maxence">Maxence Dominici</a>

This article will discuss my journey to deploy the `transformers` _sentiment-analysis_ pipeline on [Google Cloud](https://cloud.google.com). We will start with a quick introduction to `transformers` and then move to the technical part of the implementation. Finally, we'll summarize this implementation and review what we have achieved.

## The Goal

![img.png](assets/14_how_to_deploy_a_pipeline_to_google_clouds/Customer_review.png)

I wanted to create a micro-service that automatically detects whether a customer review left in Discord is positive or negative. This would allow me to treat the comment accordingly and improve the customer experience. For instance, if the review was negative, I could create a feature which would contact the customer, apologize for the poor quality of service, and inform them that our support team will contact them as soon as possible to assist them and hopefully fix the problem. Since I don't plan to get more than 2,000 requests per month, I didn't impose any performance constraints regarding time or scalability.

## The Transformers library

I was a bit confused at the beginning when I downloaded the .h5 file. I thought it would be compatible with `tensorflow.keras.models.load_model`, but this wasn't the case. After a few minutes of research I was able to figure out that the file was a weights checkpoint rather than a Keras model.

After that, I tried out the API that Hugging Face offers and read a bit more about the pipeline feature they offer. Since the results of the API & the pipeline were great, I decided that I could serve the model through the pipeline on my own server.

Below is the [official example](https://github.com/huggingface/transformers#quick-tour) from the Transformers GitHub page.

```python
from transformers import pipeline

# Allocate a pipeline for sentiment-analysis
classifier = pipeline('sentiment-analysis')
classifier('We are very happy to include pipeline into the transformers repository.')
[{'label': 'POSITIVE', 'score': 0.9978193640708923}]
```

## Deploy transformers to Google Cloud

> GCP is chosen as it is the cloud environment I am using in my personal organization.

### Step 1 - Research

I already knew that I could use an API-Service like `flask` to serve a `transformers` model. I searched in the Google Cloud AI documentation and found a service to host Tensorflow models named [AI-Platform Prediction](https://cloud.google.com/ai-platform/prediction/docs). I also found [App Engine](https://cloud.google.com/appengine) and [Cloud Run](https://cloud.google.com/run) there, but I was concerned about the memory usage for App Engine and was not very familiar with Docker.

### Step 2 - Test on AI-Platform Prediction

As the model is not a "pure TensorFlow" saved model but a checkpoint, and I couldn't turn it into a "pure TensorFlow model", I figured out that the example on [this page](https://cloud.google.com/ai-platform/prediction/docs/deploying-models) wouldn't work. From there I saw that I could write some custom code, allowing me to load the `pipeline` instead of having to handle the model, which seemed easier. I also learned that I could define a pre-prediction & post-prediction action, which could be useful in the future for pre- or post-processing the data for customers' needs.
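As a rough illustration of that idea (and not the code I ended up shipping), such custom code boils down to a small class that loads the pipeline once and exposes a `predict` method, loosely following the custom prediction routine interface described in the AI-Platform Prediction docs linked above. The class name below is hypothetical and the interface details may differ in the current documentation; treat this as a sketch only.

```python
from transformers import pipeline

class SentimentPredictor:
    """Hypothetical sketch of a custom prediction routine wrapping the pipeline."""

    def __init__(self, classifier):
        self._classifier = classifier

    def predict(self, instances, **kwargs):
        # `instances` would be the list of review strings sent in the request body
        return self._classifier(instances)

    @classmethod
    def from_path(cls, model_dir):
        # Load the pipeline a single time when the model version is created
        classifier = pipeline("sentiment-analysis", model=model_dir, tokenizer=model_dir)
        return cls(classifier)
```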
I followed Google's guide but encountered an issue as the service is still in beta and everything is not stable. This issue is detailed [here](https://github.com/huggingface/transformers/issues/9926).

### Step 3 - Test on App Engine

I moved to Google's [App Engine](https://cloud.google.com/appengine) as it's a service that I am familiar with, but encountered an installation issue with TensorFlow due to a missing system dependency file. I then tried with PyTorch which worked with an F4_1G instance, but it couldn't handle more than 2 requests on the same instance, which isn't really great performance-wise.

### Step 4 - Test on Cloud Run

Lastly, I moved to [Cloud Run](https://cloud.google.com/run) with a docker image. I followed [this guide](https://cloud.google.com/run/docs/quickstarts/build-and-deploy#python) to get an idea of how it works. In Cloud Run, I could configure a higher memory and more vCPUs to perform the prediction with PyTorch. I ditched Tensorflow as PyTorch seems to load the model faster.

## Implementation of the serverless pipeline

The final solution consists of four different components:

- `main.py` handling the request to the pipeline
- `Dockerfile` used to create the image that will be deployed on Cloud Run.
- Model folder having the `pytorch_model.bin`, `config.json` and `vocab.txt`.
  - Model: [DistilBERT base uncased finetuned SST-2](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english)
  - To download the model folder, follow the instructions in the button. ![img.png](assets/14_how_to_deploy_a_pipeline_to_google_clouds/Download_instructions_button.png)
  - You don't need to keep the `rust_model.ot` or the `tf_model.h5` as we will use [PyTorch](https://pytorch.org/).
- `requirements.txt` for installing the dependencies

The content of `main.py` is really simple. The idea is to receive a `GET` request containing two fields. First the review that needs to be analysed, second the API key to "protect" the service. The second parameter is optional, I used it to avoid setting up the oAuth2 of Cloud Run. After these arguments are provided, we load the pipeline which is built based on the model `distilbert-base-uncased-finetuned-sst-2-english` (provided above). In the end, the best match is returned to the client.

```python
import os
from flask import Flask, jsonify, request
from transformers import pipeline

app = Flask(__name__)

model_path = "./model"

@app.route('/')
def classify_review():
    review = request.args.get('review')
    api_key = request.args.get('api_key')
    if review is None or api_key != "MyCustomerApiKey":
        return jsonify(code=403, message="bad request")
    classify = pipeline("sentiment-analysis", model=model_path, tokenizer=model_path)
    # return the best match (label + score) for the submitted review
    return classify(review)[0]

if __name__ == '__main__':
    # This is used when running locally only. When deploying to Google Cloud
    # Run, a webserver process such as Gunicorn will serve the app.
    app.run(debug=False, host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```

Then comes the `Dockerfile`, which will be used to create the docker image of the service. We specify that our service runs with Python 3.7, plus that we need to install our requirements. Then we use `gunicorn` to handle our process on port `5000`.
```dockerfile
# Use Python37
FROM python:3.7
# Allow statements and log messages to immediately appear in the Knative logs
ENV PYTHONUNBUFFERED True
# Copy requirements.txt to the docker image and install packages
COPY requirements.txt /
RUN pip install -r requirements.txt
# Set the WORKDIR to be the folder
COPY . /app
# Expose port 5000
EXPOSE 5000
ENV PORT 5000
WORKDIR /app
# Use gunicorn as the entrypoint
CMD exec gunicorn --bind :$PORT main:app --workers 1 --threads 1 --timeout 0
```

It is important to note the arguments `--workers 1 --threads 1`, which mean that I want to execute my app on only one worker (= 1 process) with a single thread. This is because I don't want to have 2 instances up at once, as that might increase the billing. One of the downsides is that it will take more time to process if the service receives two requests at once. After that, I put the limit to one thread due to the memory usage needed for loading the model into the pipeline. If I were using 4 threads, I might have 4 GB / 4 = 1 GB only to perform the full process, which is not enough and would lead to a memory error.

Finally, the `requirements.txt` file:

```python
Flask==1.1.2
torch===1.7.1
transformers~=4.2.0
gunicorn>=20.0.0
```

## Deployment instructions

First, you will need to meet some requirements such as having a project on Google Cloud, enabling the billing and installing the `gcloud` cli. You can find more details about it in [Google's guide - Before you begin](https://cloud.google.com/run/docs/quickstarts/build-and-deploy#before-you-begin).

Second, we need to build the docker image and deploy it to Cloud Run by selecting the correct project (replace `PROJECT-ID`) and setting the name of the instance, such as `ai-customer-review`. You can find more information about the deployment in [Google's guide - Deploying to](https://cloud.google.com/run/docs/quickstarts/build-and-deploy#deploying_to).

```shell
gcloud builds submit --tag gcr.io/PROJECT-ID/ai-customer-review
gcloud run deploy --image gcr.io/PROJECT-ID/ai-customer-review --platform managed
```

After a few minutes, you will also need to upgrade the memory allocated to your Cloud Run instance from 256 MB to 4 GB. To do so, head over to the [Cloud Run Console](https://console.cloud.google.com/run) of your project. There you should find your instance, click on it.

![img.png](assets/14_how_to_deploy_a_pipeline_to_google_clouds/Cloud_run_instance.png)

After that you will have a blue button labelled "edit and deploy new revision" on top of the screen. Click on it and you'll be prompted with many configuration fields. At the bottom you should find a "Capacity" section where you can specify the memory.

![img.png](assets/14_how_to_deploy_a_pipeline_to_google_clouds/Edit_memory.png)

## Performances

![img.png](assets/14_how_to_deploy_a_pipeline_to_google_clouds/Request_Result.png)

Handling a request takes less than five seconds from the moment you send the request, including loading the model into the pipeline and running the prediction. The cold start might take roughly an additional 10 seconds.

We can improve the request handling performance by warming up the model, that is, loading it on start-up instead of on each request (with a global variable, for example). By doing so, we save both time and memory.
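To make that warm-up idea concrete, here is a hedged sketch of how `main.py` could be restructured so the pipeline is created once when the Gunicorn worker starts, rather than on every request; everything else stays as in the version above.

```python
import os
from flask import Flask, jsonify, request
from transformers import pipeline

app = Flask(__name__)
model_path = "./model"

# Load the pipeline once, when the worker process starts,
# so each request only pays for the forward pass.
classifier = pipeline("sentiment-analysis", model=model_path, tokenizer=model_path)

@app.route('/')
def classify_review():
    review = request.args.get('review')
    api_key = request.args.get('api_key')
    if review is None or api_key != "MyCustomerApiKey":
        return jsonify(code=403, message="bad request")
    return classifier(review)[0]

if __name__ == '__main__':
    app.run(debug=False, host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```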
## Costs

I simulated the cost based on the Cloud Run instance configuration with the [Google pricing simulator](https://cloud.google.com/products/calculator#id=cd314cba-1d9a-4bc6-a7c0-740bbf6c8a78)

![Estimate of the monthly cost](./assets/14_how_to_deploy_a_pipeline_to_google_clouds/Estimate_of_the_monthly_cost.png)

For my micro-service, I am optimistically planning for close to 1,000 requests per month; 500 is more likely for my usage. That's why I considered 2,000 requests as an upper bound when designing my microservice. Due to that low number of requests, I didn't worry much about scalability, but I might come back to it if my billing increases.

Nevertheless, it's important to stress that you will pay for the storage of each gigabyte of your build image. It's roughly €0.10 per GB per month, which is fine if you don't keep all your versions on the cloud, since my version is slightly above 1 GB (PyTorch accounts for 700 MB & the model for 250 MB).

## Conclusion

By using Transformers' sentiment analysis pipeline, I saved a non-negligible amount of time. Instead of training/fine-tuning a model, I could find one ready to be used in production and start the deployment in my system. I might fine-tune it in the future, but as shown in my tests, the accuracy is already amazing!

I would have liked a "pure TensorFlow" model, or at least a way to load it in TensorFlow without Transformers dependencies to use the AI platform. It would also be great to have a lite version.
The Partnership: Amazon SageMaker and Hugging Face
philschmid
March 23, 2021
the-partnership-amazon-sagemaker-and-hugging-face
partnerships, aws
https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face
<img src="/blog/assets/17_the_partnership_amazon_sagemaker_and_hugging_face/cover.png" alt="hugging-face-and-aws-logo" class="w-full"> > Look at these smiles! # **The Partnership: Amazon SageMaker and Hugging Face** Today, we announce a strategic partnership between Hugging Face and [Amazon](https://huggingface.co/amazon) to make it easier for companies to leverage State of the Art Machine Learning models, and ship cutting-edge NLP features faster. Through this partnership, Hugging Face is leveraging Amazon Web Services as its Preferred Cloud Provider to deliver services to its customers. As a first step to enable our common customers, Hugging Face and Amazon are introducing new Hugging Face Deep Learning Containers (DLCs) to make it easier than ever to train Hugging Face Transformer models in [Amazon SageMaker](https://aws.amazon.com/sagemaker/). To learn how to access and use the new Hugging Face DLCs with the Amazon SageMaker Python SDK, check out the guides and resources below. > _On July 8th, 2021 we extended the Amazon SageMaker integration to add easy deployment and inference of Transformers models. If you want to learn how you can [deploy Hugging Face models easily with Amazon SageMaker](https://huggingface.co/blog/deploy-hugging-face-models-easily-with-amazon-sagemaker) take a look at the [new blog post](https://huggingface.co/blog/deploy-hugging-face-models-easily-with-amazon-sagemaker) and the [documentation](https://huggingface.co/docs/sagemaker/inference)._ --- ## **Features & Benefits 🔥** ## One Command is All you Need With the new Hugging Face Deep Learning Containers available in Amazon SageMaker, training cutting-edge Transformers-based NLP models has never been simpler. There are variants specially optimized for TensorFlow and PyTorch, for single-GPU, single-node multi-GPU and multi-node clusters. ## Accelerating Machine Learning from Science to Production In addition to Hugging Face DLCs, we created a first-class Hugging Face extension to the SageMaker Python-sdk to accelerate data science teams, reducing the time required to set up and run experiments from days to minutes. You can use the Hugging Face DLCs with the Automatic Model Tuning capability of Amazon SageMaker, in order to automatically optimize your training hyperparameters and quickly increase the accuracy of your models. Thanks to the SageMaker Studio web-based Integrated Development Environment (IDE), you can easily track and compare your experiments and your training artifacts. ## Built-in Performance With the Hugging Face DLCs, SageMaker customers will benefit from built-in performance optimizations for PyTorch or TensorFlow, to train NLP models faster, and with the flexibility to choose the training infrastructure with the best price/performance ratio for your workload. The Hugging Face DLCs are fully integrated with the [SageMaker distributed training libraries](https://docs.aws.amazon.com/sagemaker/latest/dg/distributed-training.html), to train models faster than was ever possible before, using the latest generation of instances available on Amazon EC2. --- ## **Resources, Documentation & Samples 📄** Below you can find all the important resources to all published blog posts, videos, documentation, and sample Notebooks/scripts. 
## Blog/Video - [AWS: Embracing natural language processing with Hugging Face](https://aws.amazon.com/de/blogs/opensource/embracing-natural-language-processing-with-hugging-face/) - [Deploy Hugging Face models easily with Amazon SageMaker](https://huggingface.co/blog/deploy-hugging-face-models-easily-with-amazon-sagemaker) - [AWS and Hugging Face collaborate to simplify and accelerate adoption of natural language processing models](https://aws.amazon.com/blogs/machine-learning/aws-and-hugging-face-collaborate-to-simplify-and-accelerate-adoption-of-natural-language-processing-models/) - [Walkthrough: End-to-End Text Classification](https://youtu.be/ok3hetb42gU) - [Working with Hugging Face models on Amazon SageMaker](https://youtu.be/leyrCgLAGjMn) - [Distributed Training: Train BART/T5 for Summarization using 🤗 Transformers and Amazon SageMaker](https://huggingface.co/blog/sagemaker-distributed-training-seq2seq) - [Deploy a Hugging Face Transformers Model from S3 to Amazon SageMaker](https://youtu.be/pfBGgSGnYLs) - [Deploy a Hugging Face Transformers Model from the Model Hub to Amazon SageMaker](https://youtu.be/l9QZuazbzWM) ## Documentation - [Hugging Face documentation for Amazon SageMaker](https://huggingface.co/docs/sagemaker/main) - [Run training on Amazon SageMaker](https://huggingface.co/docs/sagemaker/train) - [Deploy models to Amazon SageMaker](https://huggingface.co/docs/sagemaker/inference) - [Frequently Asked Questions](https://huggingface.co/docs/sagemaker/faq) - [Amazon SageMaker documentation for Hugging Face](https://docs.aws.amazon.com/sagemaker/latest/dg/hugging-face.html) - [Python SDK SageMaker documentation for Hugging Face](https://sagemaker.readthedocs.io/en/stable/frameworks/huggingface/index.html) - [Deep Learning Container](https://github.com/aws/deep-learning-containers/blob/master/available_images.md#huggingface-training-containers) - [SageMaker's Distributed Data Parallel Library](https://docs.aws.amazon.com/sagemaker/latest/dg/data-parallel.html) - [SageMaker's Distributed Model Parallel Library](https://docs.aws.amazon.com/sagemaker/latest/dg/model-parallel.html) ## Sample Notebook - [all Notebooks](https://github.com/huggingface/notebooks/tree/master/sagemaker) - [Getting Started Pytorch](https://github.com/huggingface/notebooks/blob/master/sagemaker/01_getting_started_pytorch/sagemaker-notebook.ipynb) - [Getting Started Tensorflow](https://github.com/huggingface/notebooks/blob/master/sagemaker/02_getting_started_tensorflow/sagemaker-notebook.ipynb) - [Distributed Training Data Parallelism](https://github.com/huggingface/notebooks/blob/master/sagemaker/03_distributed_training_data_parallelism/sagemaker-notebook.ipynb) - [Distributed Training Model Parallelism](https://github.com/huggingface/notebooks/blob/master/sagemaker/04_distributed_training_model_parallelism/sagemaker-notebook.ipynb) - [Spot Instances and continue training](https://github.com/huggingface/notebooks/blob/master/sagemaker/05_spot_instances/sagemaker-notebook.ipynb) - [SageMaker Metrics](https://github.com/huggingface/notebooks/blob/master/sagemaker/06_sagemaker_metrics/sagemaker-notebook.ipynb) - [Distributed Training Data Parallelism Tensorflow](https://github.com/huggingface/notebooks/blob/master/sagemaker/07_tensorflow_distributed_training_data_parallelism/sagemaker-notebook.ipynb) - [Distributed Training Summarization](https://github.com/huggingface/notebooks/blob/master/sagemaker/08_distributed_summarization_bart_t5/sagemaker-notebook.ipynb) - [Image Classification with Vision 
Transformer](https://github.com/huggingface/notebooks/blob/master/sagemaker/09_image_classification_vision_transformer/sagemaker-notebook.ipynb) - [Deploy one of the 10,000+ Hugging Face Transformers to Amazon SageMaker for Inference](https://github.com/huggingface/notebooks/blob/master/sagemaker/11_deploy_model_from_hf_hub/deploy_transformer_model_from_hf_hub.ipynb) - [Deploy a Hugging Face Transformer model from S3 to SageMaker for inference](https://github.com/huggingface/notebooks/blob/master/sagemaker/10_deploy_model_from_s3/deploy_transformer_model_from_s3.ipynb) --- ## **Getting started: End-to-End Text Classification 🧭** In this getting started guide, we will use the new Hugging Face DLCs and Amazon SageMaker extension to train a transformer model on binary text classification using the transformers and datasets libraries. We will use an Amazon SageMaker Notebook Instance for the example. You can learn [here how to set up a Notebook Instance](https://docs.aws.amazon.com/sagemaker/latest/dg/nbi.html). **What are we going to do:** - set up a development environment and install sagemaker - create the training script `train.py` - preprocess our data and upload it to [Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) - create a [HuggingFace Estimator](https://huggingface.co/transformers/sagemaker.html) and train our model ## Set up a development environment and install sagemaker As mentioned above we are going to use SageMaker Notebook Instances for this. To get started you need to jump into your Jupyer Notebook or JupyterLab and create a new Notebook with the conda_pytorch_p36 kernel. _**Note:** The use of Jupyter is optional: We could also launch SageMaker Training jobs from anywhere we have an SDK installed, connectivity to the cloud and appropriate permissions, such as a Laptop, another IDE or a task scheduler like Airflow or AWS Step Functions._ After that we can install the required dependencies ```bash pip install "sagemaker>=2.31.0" "transformers==4.6.1" "datasets[s3]==1.6.2" --upgrade ``` To run training on SageMaker we need to create a sagemaker Session and provide an IAM role with the right permission. This IAM role will be later attached to the TrainingJob enabling it to download data, e.g. from Amazon S3. ```python import sagemaker sess = sagemaker.Session() # sagemaker session bucket -> used for uploading data, models and logs # sagemaker will automatically create this bucket if it not exists sagemaker_session_bucket=None if sagemaker_session_bucket is None and sess is not None: # set to default bucket if a bucket name is not given sagemaker_session_bucket = sess.default_bucket() role = sagemaker.get_execution_role() sess = sagemaker.Session(default_bucket=sagemaker_session_bucket) print(f"sagemaker role arn: {role}") print(f"sagemaker bucket: {sess.default_bucket()}") print(f"sagemaker session region: {sess.boto_region_name}") ``` ## Create the training script `train.py` In a SageMaker `TrainingJob` we are executing a python script with named arguments. In this example, we use PyTorch together with transformers. The script will - pass the incoming parameters (hyperparameters from HuggingFace Estimator) - load our dataset - define our compute metrics function - set up our `Trainer` - run training with `trainer.train()` - evaluate the training and save our model at the end to S3. 
```bash from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments from sklearn.metrics import accuracy_score, precision_recall_fscore_support from datasets import load_from_disk import random import logging import sys import argparse import os import torch if __name__ == "__main__": parser = argparse.ArgumentParser() # hyperparameters sent by the client are passed as command-line arguments to the script. parser.add_argument("--epochs", type=int, default=3) parser.add_argument("--train-batch-size", type=int, default=32) parser.add_argument("--eval-batch-size", type=int, default=64) parser.add_argument("--warmup_steps", type=int, default=500) parser.add_argument("--model_name", type=str) parser.add_argument("--learning_rate", type=str, default=5e-5) # Data, model, and output directories parser.add_argument("--output-data-dir", type=str, default=os.environ["SM_OUTPUT_DATA_DIR"]) parser.add_argument("--model-dir", type=str, default=os.environ["SM_MODEL_DIR"]) parser.add_argument("--n_gpus", type=str, default=os.environ["SM_NUM_GPUS"]) parser.add_argument("--training_dir", type=str, default=os.environ["SM_CHANNEL_TRAIN"]) parser.add_argument("--test_dir", type=str, default=os.environ["SM_CHANNEL_TEST"]) args, _ = parser.parse_known_args() # Set up logging logger = logging.getLogger(__name__) logging.basicConfig( level=logging.getLevelName("INFO"), handlers=[logging.StreamHandler(sys.stdout)], format="%(asctime)s - %(name)s - %(levelname)s - %(message)s", ) # load datasets train_dataset = load_from_disk(args.training_dir) test_dataset = load_from_disk(args.test_dir) logger.info(f" loaded train_dataset length is: {len(train_dataset)}") logger.info(f" loaded test_dataset length is: {len(test_dataset)}") # compute metrics function for binary classification def compute_metrics(pred): labels = pred.label_ids preds = pred.predictions.argmax(-1) precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average="binary") acc = accuracy_score(labels, preds) return {"accuracy": acc, "f1": f1, "precision": precision, "recall": recall} # download model from model hub model = AutoModelForSequenceClassification.from_pretrained(args.model_name) # define training args training_args = TrainingArguments( output_dir=args.model_dir, num_train_epochs=args.epochs, per_device_train_batch_size=args.train_batch_size, per_device_eval_batch_size=args.eval_batch_size, warmup_steps=args.warmup_steps, evaluation_strategy="epoch", logging_dir=f"{args.output_data_dir}/logs", learning_rate=float(args.learning_rate), ) # create Trainer instance trainer = Trainer( model=model, args=training_args, compute_metrics=compute_metrics, train_dataset=train_dataset, eval_dataset=test_dataset, ) # train model trainer.train() # evaluate model eval_result = trainer.evaluate(eval_dataset=test_dataset) # writes eval result to file which can be accessed later in s3 output with open(os.path.join(args.output_data_dir, "eval_results.txt"), "w") as writer: print(f"***** Eval results *****") for key, value in sorted(eval_result.items()): writer.write(f"{key} = {value}\\n") # Saves the model to s3; default is /opt/ml/model which SageMaker sends to S3 trainer.save_model(args.model_dir) ``` ## Preprocess our data and upload it to s3 We use the `datasets` library to download and preprocess our `imdb` dataset. After preprocessing, the dataset will be uploaded to the current session’s default s3 bucket `sess.default_bucket()` used within our training job. 
The `imdb` dataset consists of 25000 training and 25000 testing highly polar movie reviews. ```python import botocore from datasets import load_dataset from transformers import AutoTokenizer from datasets.filesystems import S3FileSystem # tokenizer used in preprocessing tokenizer_name = 'distilbert-base-uncased' # filesystem client for s3 s3 = S3FileSystem() # dataset used dataset_name = 'imdb' # s3 key prefix for the data s3_prefix = 'datasets/imdb' # load dataset dataset = load_dataset(dataset_name) # download tokenizer tokenizer = AutoTokenizer.from_pretrained(tokenizer_name) # tokenizer helper function def tokenize(batch): return tokenizer(batch['text'], padding='max_length', truncation=True) # load dataset train_dataset, test_dataset = load_dataset('imdb', split=['train', 'test']) test_dataset = test_dataset.shuffle().select(range(10000)) # smaller the size for test dataset to 10k # tokenize dataset train_dataset = train_dataset.map(tokenize, batched=True, batch_size=len(train_dataset)) test_dataset = test_dataset.map(tokenize, batched=True, batch_size=len(test_dataset)) # set format for pytorch train_dataset = train_dataset.rename_column("label", "labels") train_dataset.set_format('torch', columns=['input_ids', 'attention_mask', 'labels']) test_dataset = test_dataset.rename_column("label", "labels") test_dataset.set_format('torch', columns=['input_ids', 'attention_mask', 'labels']) # save train_dataset to s3 training_input_path = f's3://{sess.default_bucket()}/{s3_prefix}/train' train_dataset.save_to_disk(training_input_path,fs=s3) # save test_dataset to s3 test_input_path = f's3://{sess.default_bucket()}/{s3_prefix}/test' test_dataset.save_to_disk(test_input_path,fs=s3) ``` ## Create a HuggingFace Estimator and train our model In order to create a SageMaker `Trainingjob` we can use a HuggingFace Estimator. The Estimator handles the end-to-end Amazon SageMaker training. In an Estimator, we define which fine-tuning script should be used as `entry_point`, which `instance_type` should be used, which hyperparameters are passed in. In addition to this, a number of advanced controls are available, such as customizing the output and checkpointing locations, specifying the local storage size or network configuration. SageMaker takes care of starting and managing all the required Amazon EC2 instances for us with the Hugging Face DLC, it uploads the provided fine-tuning script, for example, our `train.py`, then downloads the data from the S3 bucket, `sess.default_bucket()`, into the container. Once the data is ready, the training job will start automatically by running. ```bash /opt/conda/bin/python train.py --epochs 1 --model_name distilbert-base-uncased --train_batch_size 32 ``` The hyperparameters you define in the HuggingFace Estimator are passed in as named arguments. ```python from sagemaker.huggingface import HuggingFace # hyperparameters, which are passed into the training job hyperparameters={'epochs': 1, 'train_batch_size': 32, 'model_name':'distilbert-base-uncased' } # create the Estimator huggingface_estimator = HuggingFace( entry_point='train.py', source_dir='./scripts', instance_type='ml.p3.2xlarge', instance_count=1, role=role, transformers_version='4.6', pytorch_version='1.7', py_version='py36', hyperparameters = hyperparameters ) ``` To start our training we call the .fit() method and pass our S3 uri as input. 
```python # starting the train job with our uploaded datasets as input huggingface_estimator.fit({'train': training_input_path, 'test': test_input_path}) ``` --- ## **Additional Features 🚀** In addition to the Deep Learning Container and the SageMaker SDK, we have implemented other additional features. ## Distributed Training: Data-Parallel You can use [SageMaker Data Parallelism Library](https://aws.amazon.com/blogs/aws/managed-data-parallelism-in-amazon-sagemaker-simplifies-training-on-large-datasets/) out of the box for distributed training. We added the functionality of Data Parallelism directly into the Trainer. If your train.py uses the Trainer API you only need to define the distribution parameter in the HuggingFace Estimator. - [Example Notebook PyTorch](https://github.com/huggingface/notebooks/blob/master/sagemaker/04_distributed_training_model_parallelism/sagemaker-notebook.ipynb) - [Example Notebook TensorFlow](https://github.com/huggingface/notebooks/blob/master/sagemaker/07_tensorflow_distributed_training_data_parallelism/sagemaker-notebook.ipynb) ```python # configuration for running training on smdistributed Data Parallel distribution = {'smdistributed':{'dataparallel':{ 'enabled': True }}} # create the Estimator huggingface_estimator = HuggingFace( entry_point='train.py', source_dir='./scripts', instance_type='ml.p3dn.24xlarge', instance_count=2, role=role, transformers_version='4.4.2', pytorch_version='1.6.0', py_version='py36', hyperparameters = hyperparameters distribution = distribution ) ``` The "Getting started: End-to-End Text Classification 🧭" example can be used for distributed training without any changes. ## Distributed Training: Model Parallel You can use [SageMaker Model Parallelism Library](https://aws.amazon.com/blogs/aws/amazon-sagemaker-simplifies-training-deep-learning-models-with-billions-of-parameters/) out of the box for distributed training. We added the functionality of Model Parallelism directly into the [Trainer](https://huggingface.co/transformers/main_classes/trainer.html). If your `train.py` uses the [Trainer](https://huggingface.co/transformers/main_classes/trainer.html) API you only need to define the distribution parameter in the HuggingFace Estimator. For detailed information about the adjustments take a look [here](https://sagemaker.readthedocs.io/en/stable/api/training/smd_model_parallel_general.html?highlight=modelparallel#required-sagemaker-python-sdk-parameters). 
- [Example Notebook](https://github.com/huggingface/notebooks/blob/master/sagemaker/04_distributed_training_model_parallelism/sagemaker-notebook.ipynb) ```python # configuration for running training on smdistributed Model Parallel mpi_options = { "enabled" : True, "processes_per_host" : 8 } smp_options = { "enabled":True, "parameters": { "microbatches": 4, "placement_strategy": "spread", "pipeline": "interleaved", "optimize": "speed", "partitions": 4, "ddp": True, } } distribution={ "smdistributed": {"modelparallel": smp_options}, "mpi": mpi_options } # create the Estimator huggingface_estimator = HuggingFace( entry_point='train.py', source_dir='./scripts', instance_type='ml.p3dn.24xlarge', instance_count=2, role=role, transformers_version='4.4.2', pytorch_version='1.6.0', py_version='py36', hyperparameters = hyperparameters, distribution = distribution ) ``` ## Spot instances With the creation of HuggingFace Framework extension for the SageMaker Python SDK we can also leverage the benefit of [fully-managed EC2 spot instances](https://docs.aws.amazon.com/sagemaker/latest/dg/model-managed-spot-training.html) and save up to 90% of our training cost. _Note: Unless your training job will complete quickly, we recommend you use [checkpointing](https://docs.aws.amazon.com/sagemaker/latest/dg/model-checkpoints.html) with managed spot training, therefore you need to define the `checkpoint_s3_uri`._ To use spot instances with the `HuggingFace` Estimator we have to set the `use_spot_instances` parameter to `True` and define your `max_wait` and `max_run` time. You can read more about [the managed spot training lifecycle here](https://docs.aws.amazon.com/sagemaker/latest/dg/model-managed-spot-training.html). - [Example Notebook](https://github.com/huggingface/notebooks/blob/master/sagemaker/05_spot_instances/sagemaker-notebook.ipynb) ```python # hyperparameters, which are passed into the training job hyperparameters={'epochs': 1, 'train_batch_size': 32, 'model_name':'distilbert-base-uncased', 'output_dir':'/opt/ml/checkpoints' } # create the Estimator huggingface_estimator = HuggingFace( entry_point='train.py', source_dir='./scripts', instance_type='ml.p3.2xlarge', instance_count=1, checkpoint_s3_uri=f's3://{sess.default_bucket()}/checkpoints' use_spot_instances=True, max_wait=3600, # This should be equal to or greater than max_run in seconds' max_run=1000, role=role, transformers_version='4.4', pytorch_version='1.6', py_version='py36', hyperparameters = hyperparameters ) # Training seconds: 874 # Billable seconds: 105 # Managed Spot Training savings: 88.0% ``` ## Git Repositories When you create an `HuggingFace` Estimator, you can specify a [training script that is stored in a GitHub](https://sagemaker.readthedocs.io/en/stable/overview.html#use-scripts-stored-in-a-git-repository) repository as the entry point for the estimator, so that you don’t have to download the scripts locally. If Git support is enabled, then `entry_point` and `source_dir` should be relative paths in the Git repo if provided. As an example to use `git_config` with an [example script from the transformers repository](https://github.com/huggingface/transformers/tree/master/examples/text-classification). _Be aware that you need to define `output_dir` as a hyperparameter for the script to save your model to S3 after training. 
Suggestion: define output_dir as `/opt/ml/model` since it is the default `SM_MODEL_DIR` and will be uploaded to S3._ - [Example Notebook](https://github.com/huggingface/notebooks/blob/master/sagemaker/02_getting_started_tensorflow/sagemaker-notebook.ipynb) ```python # configure git settings git_config = {'repo': 'https://github.com/huggingface/transformers.git','branch': 'master'} # create the Estimator huggingface_estimator = HuggingFace( entry_point='run_glue.py', source_dir='./examples/text-classification', git_config=git_config, instance_type='ml.p3.2xlarge', instance_count=1, role=role, transformers_version='4.4', pytorch_version='1.6', py_version='py36', hyperparameters=hyperparameters ) ``` ## SageMaker Metrics [SageMaker Metrics](https://docs.aws.amazon.com/sagemaker/latest/dg/training-metrics.html#define-train-metrics) can automatically parse the logs for metrics and send those metrics to CloudWatch. If you want SageMaker to parse logs you have to specify the metrics that you want SageMaker to send to CloudWatch when you configure the training job. You specify the name of the metrics that you want to send and the regular expressions that SageMaker uses to parse the logs that your algorithm emits to find those metrics. - [Example Notebook](https://github.com/huggingface/notebooks/blob/master/sagemaker/06_sagemaker_metrics/sagemaker-notebook.ipynb) ```python # define metrics definitions metric_definitions = [ {"Name": "train_runtime", "Regex": "train_runtime.*=\D*(.*?)$"}, {"Name": "eval_accuracy", "Regex": "eval_accuracy.*=\D*(.*?)$"}, {"Name": "eval_loss", "Regex": "eval_loss.*=\D*(.*?)$"}, ] # create the Estimator huggingface_estimator = HuggingFace( entry_point='train.py', source_dir='./scripts', instance_type='ml.p3.2xlarge', instance_count=1, role=role, transformers_version='4.4', pytorch_version='1.6', py_version='py36', metric_definitions=metric_definitions, hyperparameters = hyperparameters ) ``` --- ## **FAQ 🎯** You can find the complete [Frequently Asked Questions](https://huggingface.co/docs/sagemaker/faq) in the [documentation](https://huggingface.co/docs/sagemaker/faq). _Q: What are Deep Learning Containers?_ A: Deep Learning Containers (DLCs) are Docker images pre-installed with deep learning frameworks and libraries (e.g. transformers, datasets, tokenizers) to make it easy to train models by letting you skip the complicated process of building and optimizing your environments from scratch. _Q: Do I have to use the SageMaker Python SDK to use the Hugging Face Deep Learning Containers?_ A: You can use the HF DLC without the SageMaker Python SDK and launch SageMaker Training jobs with other SDKs, such as the [AWS CLI](https://docs.aws.amazon.com/cli/latest/reference/sagemaker/create-training-job.html) or [boto3](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker.html#SageMaker.Client.create_training_job). The DLCs are also available through Amazon ECR and can be pulled and used in any environment of choice. _Q: Why should I use the Hugging Face Deep Learning Containers?_ A: The DLCs are fully tested, maintained, optimized deep learning environments that require no installation, configuration, or maintenance. _Q: Why should I use SageMaker Training to train Hugging Face models?_ A: SageMaker Training provides numerous benefits that will boost your productivity with Hugging Face : (1) first it is cost-effective: the training instances live only for the duration of your job and are paid per second. 
No risk anymore to leave GPU instances up all night: the training cluster stops right at the end of your job! It also supports EC2 Spot capacity, which enables up to 90% cost reduction. (2) SageMaker also comes with a lot of built-in automation that facilitates teamwork and MLOps: training metadata and logs are automatically persisted to a serverless managed metastore, and I/O with S3 (for datasets, checkpoints and model artifacts) is fully managed. Finally, SageMaker also allows to drastically scale up and out: you can launch multiple training jobs in parallel, but also launch large-scale distributed training jobs _Q: Once I've trained my model with Amazon SageMaker, can I use it with 🤗/Transformers ?_ A: Yes, you can download your trained model from S3 and directly use it with transformers or upload it to the [Hugging Face Model Hub](https://huggingface.co/models). _Q: How is my data and code secured by Amazon SageMaker?_ A: Amazon SageMaker provides numerous security mechanisms including [encryption at rest](https://docs.aws.amazon.com/sagemaker/latest/dg/encryption-at-rest-nbi.html) and [in transit](https://docs.aws.amazon.com/sagemaker/latest/dg/encryption-in-transit.html), [Virtual Private Cloud (VPC) connectivity](https://docs.aws.amazon.com/sagemaker/latest/dg/interface-vpc-endpoint.html) and [Identity and Access Management (IAM)](https://docs.aws.amazon.com/sagemaker/latest/dg/security_iam_service-with-iam.html). To learn more about security in the AWS cloud and with Amazon SageMaker, you can visit [Security in Amazon SageMaker](https://docs.aws.amazon.com/sagemaker/latest/dg/security_iam_service-with-iam.html) and [AWS Cloud Security](https://docs.aws.amazon.com/sagemaker/latest/dg/security_iam_service-with-iam.html). _Q: Is this available in my region?_ A: For a list of the supported regions, please visit the [AWS region table](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/) for all AWS global infrastructure. _Q: Do I need to pay for a license from Hugging Face to use the DLCs?_ A: No - the Hugging Face DLCs are open source and licensed under Apache 2.0. _Q: How can I run inference on my trained models?_ A: You have multiple options to run inference on your trained models. One option is to use Hugging Face [Accelerated Inference-API](https://api-inference.huggingface.co/docs/python/html/index.html) hosted service: start by [uploading the trained models to your Hugging Face account](https://huggingface.co/new) to deploy them publicly, or privately. Another great option is to use [SageMaker Inference](https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms-inference-main.html) to run your own inference code in Amazon SageMaker. We are working on offering an integrated solution for Amazon SageMaker with Hugging Face Inference DLCs in the future - stay tuned! _Q: Do you offer premium support or support SLAs for this solution?_ A: AWS Technical Support tiers are available from AWS and cover development and production issues for AWS products and services - please refer to AWS Support for specifics and scope. If you have questions which the Hugging Face community can help answer and/or benefit from, please [post them in the Hugging Face forum](https://discuss.huggingface.co/c/sagemaker/17). If you need premium support from the Hugging Face team to accelerate your NLP roadmap, our Expert Acceleration Program offers direct guidance from our open source, science and ML Engineering team - [contact us to learn more](mailto:[email protected]). 
_Q: What are you planning next through this partnership?_ A: Our common goal is to democratize state of the art Machine Learning. We will continue to innovate to make it easier for researchers, data scientists and ML practitioners to manage, train and run state of the art models. If you have feature requests for integration in AWS with Hugging Face, please [let us know in the Hugging Face community forum](https://discuss.huggingface.co/c/sagemaker/17). _Q: I use Hugging Face with Azure Machine Learning or Google Cloud Platform, what does this partnership mean for me?_ A: A foundational goal for Hugging Face is to make the latest AI accessible to as many people as possible, whichever framework or development environment they work in. While we are focusing integration efforts with Amazon Web Services as our Preferred Cloud Provider, we will continue to work hard to serve all Hugging Face users and customers, no matter what compute environment they run on.
Understanding BigBird's Block Sparse Attention
vasudevgupta
March 31, 2021
big-bird
community, research, nlp
https://huggingface.co/blog/big-bird
# Understanding BigBird's Block Sparse Attention

## Introduction

Transformer-based models have been shown to be very useful for many NLP tasks. However, a major limitation of transformer-based models is their \\(O(n^2)\\) time & memory complexity (where \\(n\\) is sequence length). Hence, it's computationally very expensive to apply transformer-based models on long sequences \\(n > 512\\). Several recent papers, *e.g.* `Longformer`, `Performer`, `Reformer`, `Clustered attention` try to remedy this problem by approximating the full attention matrix. You can check out 🤗's recent blog [post](https://huggingface.co/blog/long-range-transformers) in case you are unfamiliar with these models.

`BigBird` (introduced in [paper](https://arxiv.org/abs/2007.14062)) is one such recent model that addresses this issue. `BigBird` relies on **block sparse attention** instead of normal attention (*i.e.* BERT's attention) and can handle sequences up to a length of **4096** at a much lower computational cost compared to BERT. It has achieved SOTA on various tasks involving very long sequences such as long document summarization and question-answering with long contexts.

**BigBird RoBERTa-like** model is now available in 🤗Transformers. The goal of this post is to give the reader an **in-depth** understanding of the BigBird implementation & ease one's life in using BigBird with 🤗Transformers. But, before going into more depth, it is important to remember that `BigBird`'s attention is an approximation of `BERT`'s full attention and therefore does not strive to be **better** than `BERT`'s full attention, but rather to be more efficient. It simply allows applying transformer-based models to much longer sequences since BERT's quadratic memory requirement quickly becomes unbearable. Simply put, if we had \\(\infty\\) compute & \\(\infty\\) time, BERT's attention would be preferred over block sparse attention (which we are going to discuss in this post).

If you wonder why we need more compute when working with longer sequences, this blog post is just right for you!

---

Some of the main questions one might have when working with standard `BERT`-like attention include:

* Do all tokens really have to attend to all other tokens?
* Why not compute attention only over important tokens?
* How to decide what tokens are important?
* How to attend to just a few tokens in a very efficient way?

---

In this blog post, we will try to answer those questions.

### What tokens should be attended to?

We will give a practical example of how attention works by considering the sentence "BigBird is now available in HuggingFace for extractive question answering". In `BERT`-like attention, every word would simply attend to all other tokens. Put mathematically, this would mean that each queried token \\( \text{query-token} \in \{\text{BigBird},\text{is},\text{now},\text{available},\text{in},\text{HuggingFace},\text{for},\text{extractive},\text{question},\text{answering}\} \\), would attend to the full list of \\( \text{key-tokens} = \left[\text{BigBird},\text{is},\text{now},\text{available},\text{in},\text{HuggingFace},\text{for},\text{extractive},\text{question},\text{answering} \right]\\).

Let's think about a sensible choice of key tokens that a queried token actually only should attend to by writing some pseudo-code. We will assume that the token `available` is queried and build a sensible list of key tokens to attend to.
```python
>>> # let's consider following sentence as an example
>>> example = ['BigBird', 'is', 'now', 'available', 'in', 'HuggingFace', 'for', 'extractive', 'question', 'answering']

>>> # further let's assume, we're trying to understand the representation of 'available' i.e.
>>> query_token = 'available'

>>> # We will initialize an empty `set` and fill up the tokens of our interest as we proceed in this section.
>>> key_tokens = set() # => currently 'available' token doesn't have anything to attend
```

Nearby tokens should be important because, in a sentence (sequence of words), the current word is highly dependent on neighboring past & future tokens. This intuition is the idea behind the concept of `sliding attention`.

```python
>>> # considering `window_size = 3`, we will consider 1 token to left & 1 to right of 'available'
>>> # left token: 'now' ; right token: 'in'
>>> sliding_tokens = ["now", "available", "in"]

>>> # let's update our collection with the above tokens
>>> key_tokens.update(sliding_tokens)
```

**Long-range dependencies:** For some tasks, it is crucial to capture long-range relationships between tokens. *E.g.*, in `question-answering` the model needs to compare each token of the context to the whole question to be able to figure out which part of the context is useful for a correct answer. If most of the context tokens would just attend to other context tokens, but not to the question, it becomes much harder for the model to filter important context tokens from less important context tokens.

Now, `BigBird` proposes two ways of allowing long-term attention dependencies while staying computationally efficient.

* **Global tokens:** Introduce some tokens which will attend to every token and which are attended by every token. Eg: *"HuggingFace is building nice libraries for easy NLP"*. Now, let's say *'building'* is defined as a global token, and the model needs to know the relation among *'NLP'* & *'HuggingFace'* for some task (Note: these 2 tokens are at two extremes); Now having *'building'* attend globally to all other tokens will probably help the model to associate *'NLP'* with *'HuggingFace'*.

```python
>>> # let's assume 1st & last token to be `global`, then
>>> global_tokens = ["BigBird", "answering"]

>>> # fill up global tokens in our key tokens collection
>>> key_tokens.update(global_tokens)
```

* **Random tokens:** Select some tokens randomly which will transfer information by transferring to other tokens which in turn can transfer to other tokens. This may reduce the cost of information travel from one token to another.

```python
>>> # now we can choose `r` tokens randomly from our example sentence
>>> # let's choose 'is' assuming `r=1`
>>> random_tokens = ["is"] # Note: it is chosen completely randomly; so it can be anything else as well.

>>> # fill random tokens to our collection
>>> key_tokens.update(random_tokens)

>>> # it's time to see what tokens are in our `key_tokens` set
>>> key_tokens
{'now', 'is', 'in', 'answering', 'available', 'BigBird'}

# Now, 'available' (the query we chose in our 1st step) will attend only these tokens instead of attending the complete sequence
```

This way, the query token attends only to a subset of all possible tokens while yielding a good approximation of full attention. The same approach is used for all other queried tokens. But remember, the whole point here is to approximate `BERT`'s full attention as efficiently as possible.
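Putting the steps above together, here is a small self-contained sketch (the helper name and default values are made up for illustration) that builds such a key-token set for any query position by combining sliding, global and random keys, exactly as we just did by hand:

```python
import random

def candidate_key_tokens(tokens, query_idx, window_size=3, num_random=1, seed=0):
    """Collect the sliding, global & random key tokens a single query should attend to."""
    rng = random.Random(seed)
    half = window_size // 2

    # sliding window around the query position
    keys = set(tokens[max(0, query_idx - half): query_idx + half + 1])
    # first & last token act as global tokens, as in the example above
    keys.update([tokens[0], tokens[-1]])
    # a few randomly chosen tokens
    keys.update(rng.sample(tokens, num_random))
    return keys

example = ['BigBird', 'is', 'now', 'available', 'in', 'HuggingFace',
           'for', 'extractive', 'question', 'answering']
print(candidate_key_tokens(example, query_idx=example.index('available')))
# -> sliding ('now', 'available', 'in') + global ('BigBird', 'answering') + one random token
```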
Simply making each queried token attend all key tokens as it's done for BERT can be computed very effectively as a sequence of matrix multiplication on modern hardware, like GPUs. However, a combination of sliding, global & random attention appears to imply sparse matrix multiplication, which is harder to implement efficiently on modern hardware. One of the major contributions of `BigBird` is the proposition of a `block sparse` attention mechanism that allows computing sliding, global & random attention effectively. Let's look into it! ### Understanding the need for global, sliding, random keys with Graphs First, let's get a better understanding of `global`, `sliding` & `random` attention using graphs and try to understand how the combination of these three attention mechanisms yields a very good approximation of standard `Bert-like` attention. <img src="assets/18_big_bird/global.png" width=250 height=250> <img src="assets/18_big_bird/sliding.png" width=250 height=250> <img src="assets/18_big_bird/random.png" width=250 height=250> <br> *The above figure shows `global` (left), `sliding` (middle) & `random` (right) connections respectively as a graph. Each node corresponds to a token and each line represents an attention score. If no connection is made between 2 tokens, then an attention score is assumed to 0.* ![](assets/18_big_bird/graph.gif) <img src="assets/18_big_bird/full.png" width=230 height=230> **BigBird block sparse attention** is a combination of sliding, global & random connections (total 10 connections) as shown in `gif` in left. While a graph of **normal attention** (right) will have all 15 connections (note: total 6 nodes are present). You can simply think of normal attention as all the tokens attending globally \\( {}^1 \\). **Normal attention:** Model can transfer information from one token to another token directly in a single layer since each token is queried over every other token and is attended by every other token. Let's consider an example similar to what is shown in the above figures. If the model needs to associate *'going'* with *'now'*, it can simply do that in a single layer since there is a direct connection joining both the tokens. **Block sparse attention:** If the model needs to share information between two nodes (or tokens), information will have to travel across various other nodes in the path for some of the tokens; since all the nodes are not directly connected in a single layer. *Eg.*, assuming model needs to associate *'going'* with *'now'*, then if only sliding attention is present the flow of information among those 2 tokens, is defined by the path: `going -> am -> i -> now` (i.e. it will have to travel over 2 other tokens). Hence, we may need multiple layers to capture the entire information of the sequence. Normal attention can capture this in a single layer. In an extreme case, this could mean that as many layers as input tokens are needed. If, however, we introduce some global tokens information can travel via the path: `going -> i -> now` (which is shorter). If we in addition introduce random connections it can travel via: `going -> am -> now`. With the help of random connections & global connections, information can travel very rapidly (with just a few layers) from one token to the next. In case, we have many global tokens, then we may not need random connections since there will be multiple short paths through which information can travel. 
This is the idea behind keeping `num_random_tokens = 0` when working with a variant of BigBird, called ETC (more on this in later sections).

\\( {}^1 \\) In these graphics, we are assuming that the attention matrix is symmetric **i.e.** \\(\mathbf{A}_{ij} = \mathbf{A}_{ji}\\) since in a graph if some token **A** attends **B**, then **B** will also attend **A**. You can see from the figure of the attention matrix shown in the next section that this assumption holds for most tokens in BigBird.

| Attention Type  | `global_tokens`  | `sliding_tokens` | `random_tokens`                    |
|-----------------|------------------|------------------|------------------------------------|
| `original_full` | `n`              | 0                | 0                                  |
| `block_sparse`  | 2 x `block_size` | 3 x `block_size` | `num_random_blocks` x `block_size` |

*`original_full` represents `BERT`'s attention while `block_sparse` represents `BigBird`'s attention. Wondering what the `block_size` is? We will cover that in later sections. For now, consider it to be 1 for simplicity.*

## BigBird block sparse attention

BigBird block sparse attention is just an efficient implementation of what we discussed above. Each token attends some **global tokens**, **sliding tokens**, & **random tokens** instead of attending to **all** other tokens. The authors hardcoded the attention matrix for multiple query components separately, and used a cool trick to speed up training/inference on GPU and TPU.

![BigBird block sparse attention](assets/18_big_bird/attn.png)
*Note: on the top, we have 2 extra sentences. As you can notice, every token is just shifted by one place in both sentences. This is how sliding attention is implemented. When `q[i]` is multiplied with `k[i,0:3]`, we will get a sliding attention score for `q[i]` (where `i` is the index of an element in the sequence).*

You can find the actual implementation of `block_sparse` attention [here](https://github.com/vasudevgupta7/transformers/blob/5f2d6a0c93ca2017961199aa04a344b9b779d454/src/transformers/models/big_bird/modeling_big_bird.py#L513). This may look very scary 😨😨 now. But this article will surely ease your life in understanding the code.

### Global Attention

For global attention, each query simply attends to all the other tokens in the sequence & is attended to by every other token. Let's assume `Vasudev` (1st token) & `them` (last token) to be global (in the above figure). You can see that these tokens are directly connected to all other tokens (blue boxes).

```python
# pseudo code

Q -> Query matrix (seq_length, head_dim)
K -> Key matrix (seq_length, head_dim)

# 1st & last token attend to all other tokens
Q[0] x [K[0], K[1], K[2], ......, K[n-1]]
Q[n-1] x [K[0], K[1], K[2], ......, K[n-1]]

# 1st & last token are attended to by all other tokens
K[0] x [Q[0], Q[1], Q[2], ......, Q[n-1]]
K[n-1] x [Q[0], Q[1], Q[2], ......, Q[n-1]]
```

### Sliding Attention

The sequence of key tokens is copied 2 times, with each element shifted to the right in one of the copies and to the left in the other copy. Now if we multiply the query sequence with these 3 key sequences, we will cover all the sliding tokens. The computational complexity is simply `O(3xn) = O(n)`. Referring to the above picture, the orange boxes represent the sliding attention. You can see 3 sequences at the top of the figure with 2 of them shifted by one token (1 to the left, 1 to the right).
```python
# what we want to do
Q[i] x [K[i-1], K[i], K[i+1]] for i = 1:-1

# efficient implementation in code (assume dot product multiplication 👇)
[Q[0], Q[1], Q[2], ......, Q[n-2], Q[n-1]] x [K[1], K[2], K[3], ......, K[n-1], K[0]]
[Q[0], Q[1], Q[2], ......, Q[n-1]] x [K[n-1], K[0], K[1], ......, K[n-2]]
[Q[0], Q[1], Q[2], ......, Q[n-1]] x [K[0], K[1], K[2], ......, K[n-1]]

# Each sequence is getting multiplied by only 3 sequences to keep `window_size = 3`.
# Some computations might be missing; this is just a rough idea.
```

### Random Attention

Random attention ensures that each query token also attends to a few random tokens. For the actual implementation, this means that the model gathers some tokens randomly and computes their attention score.

```python
# r1, r2, r3 are some random indices; Note: r1, r2, r3 are different for each row 👇
Q[1] x [K[r1], K[r2], ......, K[r3]]
.
.
.
Q[n-2] x [K[r1], K[r2], ......, K[r3]]

# leaving out the 0th & (n-1)th tokens since they are already global
```

**Note:** The current implementation further divides the sequence into blocks & each notation is defined w.r.t. blocks instead of tokens. Let's discuss this in more detail in the next section.

### Implementation

**Recap:** In regular BERT attention, a sequence of tokens i.e. \\( X = x_1, x_2, ...., x_n \\) is projected through a dense layer into \\( Q,K,V \\) and the attention score \\( Z \\) is calculated as \\( Z=Softmax(QK^T) \\). In the case of BigBird block sparse attention, the same algorithm is used but only with some selected query & key vectors.

Let's have a look at how BigBird block sparse attention is implemented. To begin with, let's assume \\(b, r, s, g\\) represent `block_size`, `num_random_blocks`, `num_sliding_blocks`, `num_global_blocks`, respectively. Visually, we can illustrate the components of BigBird's block sparse attention with \\(b=4, r=1, g=2, s=3, d=5\\) as follows:

<img src="assets/18_big_bird/intro.png" width=500 height=250>

Attention scores for \\({q}_{1}, {q}_{2}, {q}_{3:n-2}, {q}_{n-1}, {q}_{n}\\) are calculated separately as described below:

---

The attention score for \\(\mathbf{q}_{1}\\), represented by \\(a_1\\) where \\(a_1=Softmax(q_1 * K^T)\\), is nothing but the attention score between all the tokens in the 1st block and all the other tokens in the sequence.

![BigBird block sparse attention](assets/18_big_bird/q1.png)

\\(q_1\\) represents the 1st block, \\(g_i\\) represents the \\(i^{th}\\) block. We are simply performing a normal attention operation between \\(q_1\\) & \\(g\\) (i.e. all the keys).

---

For calculating the attention score for tokens in the second block, we gather the first three blocks, the last block, and the fifth block. Then we can compute \\(a_2 = Softmax(q_2 * concat(k_1, k_2, k_3, k_5, k_7))\\).

![BigBird block sparse attention](assets/18_big_bird/q2.png)

*I am representing tokens by \\(g, r, s\\) just to indicate their nature explicitly (i.e. showing global, random, sliding tokens); otherwise they are \\(k\\) only.*

---

For calculating the attention score for \\({q}_{3:n-2}\\), we will gather the global, sliding & random keys and compute the normal attention operation over \\({q}_{3:n-2}\\) and the gathered keys. Note that the sliding keys are gathered using the special shifting trick discussed earlier in the sliding attention section.

![BigBird block sparse attention](assets/18_big_bird/q_middle.png)

---

For calculating the attention score for tokens in the second-to-last block (i.e. \\({q}_{n-1}\\)), we gather the first block, the last three blocks, and the third block.
Then we can apply the formula \\({a}_{n-1} = Softmax({q}_{n-1} * concat(k_1, k_3, k_5, k_6, k_7))\\). This is very similar to what we did for \\(q_2\\).

![BigBird block sparse attention](assets/18_big_bird/qlast_sec.png)

---

The attention score for \\(\mathbf{q}_{n}\\) is represented by \\(a_n\\) where \\(a_n=Softmax(q_n * K^T)\\), and is nothing but the attention score between all the tokens in the last block and all the other tokens in the sequence. This is very similar to what we did for \\( q_1 \\).

![BigBird block sparse attention](assets/18_big_bird/qlast.png)

---

Let's combine the above matrices to get the final attention matrix. This attention matrix can be used to get a representation of all the tokens.

![BigBird block sparse attention](assets/18_big_bird/block-sparse-attn.gif)

*`blue -> global blocks`, `red -> random blocks`, `orange -> sliding blocks`. This attention matrix is just for illustration. During the forward pass, we aren't storing the `white` blocks, but are computing a weighted value matrix (i.e. a representation of each token) directly for each separate component, as discussed above.*

Now, we have covered the hardest part of block sparse attention, i.e. its implementation. Hopefully, you now have a better background to understand the actual code. Feel free to dive into it and to connect each part of the code with one of the components above.

## Time & Memory complexity

| Attention Type  | Sequence length | Time & Memory Complexity |
|-----------------|-----------------|--------------------------|
| `original_full` | 512             | `T`                      |
|                 | 1024            | 4 x `T`                  |
|                 | 4096            | 64 x `T`                 |
| `block_sparse`  | 1024            | 2 x `T`                  |
|                 | 4096            | 8 x `T`                  |

*Comparison of the time & space complexity of BERT attention and BigBird block sparse attention.*

<details>

<summary>Expand this snippet if you want to see the calculations</summary>

```md
BigBird time complexity = O(w x n + r x n + g x n)
BERT time complexity = O(n^2)

Assumptions:
    w = 3 x 64
    r = 3 x 64
    g = 2 x 64

When seqlen = 512
=> **time complexity in BERT = 512^2**

When seqlen = 1024
=> time complexity in BERT = (2 x 512)^2
=> **time complexity in BERT = 4 x 512^2**

=> time complexity in BigBird = (8 x 64) x (2 x 512)
=> **time complexity in BigBird = 2 x 512^2**

When seqlen = 4096
=> time complexity in BERT = (8 x 512)^2
=> **time complexity in BERT = 64 x 512^2**

=> compute in BigBird = (8 x 64) x (8 x 512)
=> compute in BigBird = 8 x (512 x 512)
=> **time complexity in BigBird = 8 x 512^2**
```

</details>

## ITC vs ETC

The BigBird model can be trained using 2 different strategies: **ITC** & **ETC**. ITC (internal transformer construction) is simply what we discussed above. In ETC (extended transformer construction), some additional tokens are made global such that they will attend to / will be attended to by all tokens.

ITC requires less compute since very few tokens are global, while at the same time the model can capture sufficient global information (also with the help of random attention). On the other hand, ETC can be very helpful for tasks in which we need a lot of global tokens such as `question-answering`, for which the entire question should be attended to globally by the context to be able to relate the context correctly to the question.

***Note:** It is shown in the Big Bird paper that in many ETC experiments, the number of random blocks is set to 0.
This is reasonable given our discussions above in the graph section.*

The table below summarizes ITC & ETC:

|                                              | ITC                                   | ETC                                  |
|----------------------------------------------|---------------------------------------|--------------------------------------|
| Attention Matrix with global attention       | \\( A = \begin{bmatrix} 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & & & & & & 1 \\ 1 & & & & & & 1 \\ 1 & & & & & & 1 \\ 1 & & & & & & 1 \\ 1 & & & & & & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 \end{bmatrix} \\) | \\( B = \begin{bmatrix} 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & & & & & & 1 \\ 1 & 1 & 1 & & & & & & 1 \\ 1 & 1 & 1 & & & & & & 1 \\ 1 & 1 & 1 & & & & & & 1 \\ 1 & 1 & 1 & & & & & & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \end{bmatrix} \\) |
| `global_tokens`                              | 2 x `block_size`                      | `extra_tokens` + 2 x `block_size`    |
| `random_tokens`                              | `num_random_blocks` x `block_size`    | `num_random_blocks` x `block_size`   |
| `sliding_tokens`                             | 3 x `block_size`                      | 3 x `block_size`                     |

## Using BigBird with 🤗Transformers

You can use `BigBirdModel` just like any other 🤗 model. Let's see some code below:

```python
from transformers import BigBirdModel

# loading bigbird from its pretrained checkpoint
model = BigBirdModel.from_pretrained("google/bigbird-roberta-base")
# This will init the model with the default configuration i.e. attention_type = "block_sparse", num_random_blocks = 3, block_size = 64.
# But you can freely change these arguments with any checkpoint. These 3 arguments will just change the number of tokens each query token is going to attend to.
model = BigBirdModel.from_pretrained("google/bigbird-roberta-base", num_random_blocks=2, block_size=16)

# By setting attention_type to `original_full`, BigBird will rely on full attention with n^2 complexity. This way BigBird is 99.9 % similar to BERT.
model = BigBirdModel.from_pretrained("google/bigbird-roberta-base", attention_type="original_full")
```

There are a total of **3 checkpoints** available on the **🤗 Hub** (at the time of writing this article): [`bigbird-roberta-base`](https://huggingface.co/google/bigbird-roberta-base), [`bigbird-roberta-large`](https://huggingface.co/google/bigbird-roberta-large), [`bigbird-base-trivia-itc`](https://huggingface.co/google/bigbird-base-trivia-itc). The first two checkpoints come from pretraining `BigBirdForPreTraining` with `masked_lm` loss, while the last one corresponds to the checkpoint after finetuning `BigBirdForQuestionAnswering` on the `trivia-qa` dataset.

Let's have a look at the minimal code you can write (in case you want to use your own PyTorch trainer) to fine-tune 🤗's BigBird model on your task.

```python
# let's consider our task to be question-answering as an example

from transformers import BigBirdForQuestionAnswering, BigBirdTokenizer
import torch

device = torch.device("cpu")
if torch.cuda.is_available():
    device = torch.device("cuda")

# let's initialize the BigBird model from pretrained weights with a randomly initialized head on top
model = BigBirdForQuestionAnswering.from_pretrained("google/bigbird-roberta-base", block_size=64, num_random_blocks=3)
tokenizer = BigBirdTokenizer.from_pretrained("google/bigbird-roberta-base")
model.to(device)

dataset = "torch.utils.data.DataLoader object"
optimizer = "torch.optim object"
epochs = ...
# very minimal training loop
for e in range(epochs):
    for batch in dataset:
        model.train()
        batch = {k: batch[k].to(device) for k in batch}

        # forward pass
        output = model(**batch)

        # back-propagation
        output["loss"].backward()
        optimizer.step()
        optimizer.zero_grad()

# let's save the final weights in a local directory
model.save_pretrained("<YOUR-WEIGHTS-DIR>")

# let's push our weights to the 🤗 Hub
from huggingface_hub import ModelHubMixin
ModelHubMixin.push_to_hub("<YOUR-WEIGHTS-DIR>", model_id="<YOUR-FINETUNED-ID>")

# using the finetuned model for inference
question = ["How are you doing?", "How is life going?"]
context = ["<some big context having ans-1>", "<some big context having ans-2>"]
batch = tokenizer(question, context, return_tensors="pt")
batch = {k: batch[k].to(device) for k in batch}

model = BigBirdForQuestionAnswering.from_pretrained("<YOUR-FINETUNED-ID>")
model.to(device)
with torch.no_grad():
    start_logits, end_logits = model(**batch).to_tuple()
    # now decode start_logits, end_logits with whatever strategy you want.

# Note:
# This was very minimal code (in case you want to use raw PyTorch) just for showing how BigBird can be used very easily
# I would suggest using 🤗 Trainer to have access to a lot of features
```

It's important to keep the following points in mind while working with BigBird:

* The sequence length must be a multiple of the block size i.e. `seqlen % block_size == 0`. You need not worry, since 🤗Transformers will automatically `<pad>` (to the smallest multiple of block size which is greater than the sequence length) if the batch sequence length is not a multiple of `block_size`.
* Currently, the HuggingFace version **doesn't support ETC** and hence only the 1st & last blocks will be global.
* The current implementation doesn't support `num_random_blocks = 0`.
* It's recommended by the authors to set `attention_type = "original_full"` when the sequence length is < 1024.
* This must hold: `seq_length > global_tokens + random_tokens + sliding_tokens + buffer_tokens` where `global_tokens = 2 x block_size`, `sliding_tokens = 3 x block_size`, `random_tokens = num_random_blocks x block_size` & `buffer_tokens = num_random_blocks x block_size`. If this does not hold, 🤗Transformers will automatically switch `attention_type` to `original_full` with a warning.
* When using BigBird as a decoder (or using `BigBirdForCausalLM`), `attention_type` should be `original_full`. But you need not worry, 🤗Transformers will automatically switch `attention_type` to `original_full` in case you forget to do that.

## What's next?

[@patrickvonplaten](https://github.com/patrickvonplaten) has made a really cool [notebook](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Evaluating_Big_Bird_on_TriviaQA.ipynb) on how to evaluate `BigBirdForQuestionAnswering` on the `trivia-qa` dataset. Feel free to play with BigBird using that notebook.

You will soon find a **BigBird Pegasus-like** model in the library for **long document summarization**💥.

## End Notes

The original implementation of the **block sparse attention matrix** can be found [here](https://github.com/google-research/bigbird/blob/master/bigbird/core/attention.py). You can find 🤗's version [here](https://github.com/huggingface/transformers/tree/master/src/transformers/models/big_bird).
Distributed Training: Train BART/T5 for Summarization using 🤗 Transformers and Amazon SageMaker
philschmid
April 8, 2021
sagemaker-distributed-training-seq2seq
guide, partnerships, aws, nlp
https://huggingface.co/blog/sagemaker-distributed-training-seq2seq
# Distributed Training: Train BART/T5 for Summarization using 🤗 Transformers and Amazon SageMaker

<a target="_blank" href="https://github.com/huggingface/notebooks/blob/master/sagemaker/08_distributed_summarization_bart_t5/sagemaker-notebook.ipynb">
    <img src="https://badgen.net/badge/Github/Open/black?icon=github" alt="Open on Github"/>
</a>

In case you missed it: on March 25th [we announced a collaboration with Amazon SageMaker](https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face) to make it easier to create State-of-the-Art Machine Learning models, and ship cutting-edge NLP features faster.

Together with the SageMaker team, we built 🤗 Transformers-optimized [Deep Learning Containers](https://github.com/aws/deep-learning-containers/blob/master/available_images.md#huggingface-training-containers) to accelerate training of Transformers-based models. Thanks, AWS friends! 🤗 🚀

With the new HuggingFace estimator in the [SageMaker Python SDK](https://sagemaker.readthedocs.io/en/stable/), you can start training with a single line of code.

![thumbnail](assets/19_sagemaker_distributed_training_seq2seq/thumbnail.png)

The [announcement blog post](https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face) provides all the information you need to know about the integration, including a "Getting Started" example and links to documentation, examples, and features, listed again here:

- [🤗 Transformers Documentation: Amazon SageMaker](https://huggingface.co/transformers/sagemaker.html)
- [Example Notebooks](https://github.com/huggingface/notebooks/tree/master/sagemaker)
- [Amazon SageMaker documentation for Hugging Face](https://docs.aws.amazon.com/sagemaker/latest/dg/hugging-face.html)
- [Python SDK SageMaker documentation for Hugging Face](https://sagemaker.readthedocs.io/en/stable/frameworks/huggingface/index.html)
- [Deep Learning Container](https://github.com/aws/deep-learning-containers/blob/master/available_images.md#huggingface-training-containers)

If you're not familiar with Amazon SageMaker: *"Amazon SageMaker is a fully managed service that provides every developer and data scientist with the ability to build, train, and deploy machine learning (ML) models quickly. SageMaker removes the heavy lifting from each step of the machine learning process to make it easier to develop high quality models." [[REF](https://aws.amazon.com/sagemaker/faqs/)]*

---

# Tutorial

We will use the new [Hugging Face DLCs](https://github.com/aws/deep-learning-containers/tree/master/huggingface) and [Amazon SageMaker extension](https://sagemaker.readthedocs.io/en/stable/frameworks/huggingface/sagemaker.huggingface.html#huggingface-estimator) to train a distributed Seq2Seq-transformer model on the `summarization` task using the `transformers` and `datasets` libraries, and then upload the model to [huggingface.co](http://huggingface.co) and test it.

As our [distributed training strategy](https://huggingface.co/transformers/sagemaker.html#distributed-training-data-parallel), we are going to use [SageMaker Data Parallelism](https://aws.amazon.com/blogs/aws/managed-data-parallelism-in-amazon-sagemaker-simplifies-training-on-large-datasets/), which has been built into the [Trainer](https://huggingface.co/transformers/main_classes/trainer.html) API. To use data-parallelism we only have to define the `distribution` parameter in our `HuggingFace` estimator.
```python
# configuration for running training on smdistributed Data Parallel
distribution = {'smdistributed':{'dataparallel':{ 'enabled': True }}}
```

In this tutorial, we will use an Amazon SageMaker Notebook Instance for running our training job. You can learn [how to set up a Notebook Instance here](https://docs.aws.amazon.com/sagemaker/latest/dg/nbi.html).

**What are we going to do:**

- Set up a development environment and install sagemaker
- Choose 🤗 Transformers `examples/` script
- Configure distributed training and hyperparameters
- Create a `HuggingFace` estimator and start training
- Upload the fine-tuned model to [huggingface.co](http://huggingface.co)
- Test inference

### Model and Dataset

We are going to fine-tune [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the [samsum](https://huggingface.co/datasets/samsum) dataset. *"BART is sequence-to-sequence model trained with denoising as pretraining objective."* [[REF](https://github.com/pytorch/fairseq/blob/master/examples/bart/README.md)]

The `samsum` dataset contains about 16k messenger-like conversations with summaries.

```json
{"id": "13818513",
 "summary": "Amanda baked cookies and will bring Jerry some tomorrow.",
 "dialogue": "Amanda: I baked cookies. Do you want some?\r\nJerry: Sure!\r\nAmanda: I'll bring you tomorrow :-)"}
```

---

## Set up a development environment and install sagemaker

After our SageMaker Notebook Instance is running, we can select either Jupyter Notebook or JupyterLab and create a new Notebook with the `conda_pytorch_p36` kernel.

_**Note:** The use of Jupyter is optional: We could also launch SageMaker Training jobs from anywhere we have an SDK installed, connectivity to the cloud and appropriate permissions, such as a laptop, another IDE or a task scheduler like Airflow or AWS Step Functions._

After that we can install the required dependencies:

```bash
!pip install transformers "datasets[s3]" sagemaker --upgrade
```

Then, [install](https://github.com/git-lfs/git-lfs/wiki/Installation) `git-lfs` for the model upload.

```bash
!curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.rpm.sh | sudo bash
!sudo yum install git-lfs -y
!git lfs install
```

To run training on SageMaker we need to create a sagemaker Session and provide an IAM role with the right permissions. This IAM role will later be attached to the `TrainingJob`, enabling it to download data, e.g. from Amazon S3.

```python
import sagemaker

sess = sagemaker.Session()
role = sagemaker.get_execution_role()

print(f"IAM role arn used for running training: {role}")
print(f"S3 bucket used for storing artifacts: {sess.default_bucket()}")
```

---

# Choose 🤗 Transformers `examples/` script

The [🤗 Transformers repository](https://github.com/huggingface/transformers/tree/master/examples) contains several `examples/` scripts for fine-tuning models on tasks from `language-modeling` to `token-classification`. In our case, we are using `run_summarization.py` from the `seq2seq/` examples.

***Note**: you can use this tutorial as-is to train your model on a different examples script.*

Since the `HuggingFace` Estimator has git support built-in, we can specify a [training script stored in a GitHub repository](https://sagemaker.readthedocs.io/en/stable/overview.html#use-scripts-stored-in-a-git-repository) as `entry_point` and `source_dir`.

We are going to use the `transformers 4.4.2` DLC, which means we need to configure `v4.4.2` as the branch to pull the compatible example scripts.
```python
#git_config = {'repo': 'https://github.com/huggingface/transformers.git','branch': 'v4.4.2'} # v4.4.2 refers to the `transformers_version` you use in the estimator.
# used due to a missing package in v4.4.2
git_config = {'repo': 'https://github.com/philschmid/transformers.git','branch': 'master'} # v4.4.2 refers to the `transformers_version` you use in the estimator.
```

---

## Configure distributed training and hyperparameters

Next, we will define our `hyperparameters` and configure our distributed training strategy. As hyperparameters, we can define any [Seq2SeqTrainingArguments](https://huggingface.co/transformers/main_classes/trainer.html#seq2seqtrainingarguments) and the ones defined in [run_summarization.py](https://github.com/huggingface/transformers/tree/main/examples/legacy/seq2seq#sequence-to-sequence-training-and-evaluation).

```python
# hyperparameters, which are passed into the training job
hyperparameters={
    'per_device_train_batch_size': 4,
    'per_device_eval_batch_size': 4,
    'model_name_or_path':'facebook/bart-large-cnn',
    'dataset_name':'samsum',
    'do_train':True,
    'do_predict': True,
    'predict_with_generate': True,
    'output_dir':'/opt/ml/model',
    'num_train_epochs': 3,
    'learning_rate': 5e-5,
    'seed': 7,
    'fp16': True,
}

# configuration for running training on smdistributed Data Parallel
distribution = {'smdistributed':{'dataparallel':{ 'enabled': True }}}
```

Since we are using [SageMaker Data Parallelism](https://aws.amazon.com/blogs/aws/managed-data-parallelism-in-amazon-sagemaker-simplifies-training-on-large-datasets/), our `total_batch_size` will be `per_device_train_batch_size` * `n_gpus`.

---

## Create a `HuggingFace` estimator and start training

The last step before training is creating a `HuggingFace` estimator. The Estimator handles the end-to-end Amazon SageMaker training. We define which fine-tuning script should be used as `entry_point`, which `instance_type` should be used, and which `hyperparameters` are passed in.

```python
from sagemaker.huggingface import HuggingFace

# create the Estimator
huggingface_estimator = HuggingFace(
      entry_point='run_summarization.py', # script
      source_dir='./examples/seq2seq', # relative path to example
      git_config=git_config,
      instance_type='ml.p3dn.24xlarge',
      instance_count=2,
      transformers_version='4.4.2',
      pytorch_version='1.6.0',
      py_version='py36',
      role=role,
      hyperparameters = hyperparameters,
      distribution = distribution
)
```

As `instance_type` we are using `ml.p3dn.24xlarge`, which contains 8x NVIDIA V100 GPUs, with an `instance_count` of 2. This means we are going to run training on 16 GPUs with a `total_batch_size` of 16*4=64. We are going to train a 400 million parameter model with a `total_batch_size` of 64, which is just wow.

To start our training we call the `.fit()` method.

```python
# starting the training job
huggingface_estimator.fit()
```

```bash
2021-04-01 13:00:35 Starting - Starting the training job...
2021-04-01 13:01:03 Starting - Launching requested ML instancesProfilerReport-1617282031: InProgress
2021-04-01 13:02:23 Starting - Preparing the instances for training......
2021-04-01 13:03:25 Downloading - Downloading input data...
2021-04-01 13:04:04 Training - Downloading the training image...............
2021-04-01 13:06:33 Training - Training image download completed. Training in progress
....
....
2021-04-01 13:16:47 Uploading - Uploading generated training model
2021-04-01 13:27:49 Completed - Training job completed
Training seconds: 2882
Billable seconds: 2882
```

The training seconds are 2882 because they are multiplied by the number of instances. If we calculate 2882/2=1441, that is the duration from "Downloading the training image" to "Training job completed". Converted to real money, our training on 16 NVIDIA Tesla V100 GPUs for a State-of-the-Art summarization model comes down to ~$28.

---

## Upload the fine-tuned model to [huggingface.co](http://huggingface.co)

Since our model achieved a pretty good score we are going to upload it to [huggingface.co](http://huggingface.co), create a `model_card` and test it with the Hosted Inference widget. To upload a model you need to [create an account here](https://huggingface.co/join).

We can download our model from Amazon S3 and unzip it using the following snippet.

```python
import os
import tarfile
from sagemaker.s3 import S3Downloader

local_path = 'my_bart_model'

os.makedirs(local_path, exist_ok = True)

# download model from S3
S3Downloader.download(
    s3_uri=huggingface_estimator.model_data, # s3 uri where the trained model is located
    local_path=local_path, # local path where *.tar.gz will be saved
    sagemaker_session=sess # sagemaker session used for training the model
)

# unzip model
tar = tarfile.open(f"{local_path}/model.tar.gz", "r:gz")
tar.extractall(path=local_path)
tar.close()
os.remove(f"{local_path}/model.tar.gz")
```

Before we upload our model to [huggingface.co](http://huggingface.co) we need to create a `model_card`. The `model_card` describes the model, includes hyperparameters and results, and specifies which dataset was used for training.

To create a `model_card` we create a `README.md` in our `local_path`

```python
import json

# read eval and test results
with open(f"{local_path}/eval_results.json") as f:
    eval_results_raw = json.load(f)
    eval_results={}
    eval_results["eval_rouge1"] = eval_results_raw["eval_rouge1"]
    eval_results["eval_rouge2"] = eval_results_raw["eval_rouge2"]
    eval_results["eval_rougeL"] = eval_results_raw["eval_rougeL"]
    eval_results["eval_rougeLsum"] = eval_results_raw["eval_rougeLsum"]

with open(f"{local_path}/test_results.json") as f:
    test_results_raw = json.load(f)
    test_results={}
    test_results["test_rouge1"] = test_results_raw["test_rouge1"]
    test_results["test_rouge2"] = test_results_raw["test_rouge2"]
    test_results["test_rougeL"] = test_results_raw["test_rougeL"]
    test_results["test_rougeLsum"] = test_results_raw["test_rougeLsum"]
```

After we have extracted all the metrics we want to include, we are going to create our `README.md`.
In addition to the automated generation of the results table, we add the metrics manually to the `metadata` of our model card under `model-index`.

```python
import json

MODEL_CARD_TEMPLATE = """
---
language: en
tags:
- sagemaker
- bart
- summarization
license: apache-2.0
datasets:
- samsum
model-index:
- name: {model_name}
  results:
  - task:
      name: Abstractive Text Summarization
      type: abstractive-text-summarization
    dataset:
      name: "SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization"
      type: samsum
    metrics:
    - name: Validation ROUGE-1
      type: rouge-1
      value: 42.621
    - name: Validation ROUGE-2
      type: rouge-2
      value: 21.9825
    - name: Validation ROUGE-L
      type: rouge-l
      value: 33.034
    - name: Test ROUGE-1
      type: rouge-1
      value: 41.3174
    - name: Test ROUGE-2
      type: rouge-2
      value: 20.8716
    - name: Test ROUGE-L
      type: rouge-l
      value: 32.1337
widget:
- text: |
    Jeff: Can I train a 🤗 Transformers model on Amazon SageMaker?
    Philipp: Sure you can use the new Hugging Face Deep Learning Container.
    Jeff: ok.
    Jeff: and how can I get started?
    Jeff: where can I find documentation?
    Philipp: ok, ok you can find everything here. https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face
---

## `{model_name}`

This model was trained using Amazon SageMaker and the new Hugging Face Deep Learning container.

For more information look at:
- [🤗 Transformers Documentation: Amazon SageMaker](https://huggingface.co/transformers/sagemaker.html)
- [Example Notebooks](https://github.com/huggingface/notebooks/tree/master/sagemaker)
- [Amazon SageMaker documentation for Hugging Face](https://docs.aws.amazon.com/sagemaker/latest/dg/hugging-face.html)
- [Python SDK SageMaker documentation for Hugging Face](https://sagemaker.readthedocs.io/en/stable/frameworks/huggingface/index.html)
- [Deep Learning Container](https://github.com/aws/deep-learning-containers/blob/master/available_images.md#huggingface-training-containers)

## Hyperparameters

    {hyperparameters}

## Usage

    from transformers import pipeline
    summarizer = pipeline("summarization", model="philschmid/{model_name}")

    conversation = '''Jeff: Can I train a 🤗 Transformers model on Amazon SageMaker?
    Philipp: Sure you can use the new Hugging Face Deep Learning Container.
    Jeff: ok.
    Jeff: and how can I get started?
    Jeff: where can I find documentation?
    Philipp: ok, ok you can find everything here. https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face
    '''
    summarizer(conversation)

## Results

| key | value |
| --- | ----- |
{eval_table}
{test_table}

"""

# Generate model card (todo: add more data from Trainer)
model_card = MODEL_CARD_TEMPLATE.format(
    model_name=f"{hyperparameters['model_name_or_path'].split('/')[1]}-{hyperparameters['dataset_name']}",
    hyperparameters=json.dumps(hyperparameters, indent=4, sort_keys=True),
    eval_table="\n".join(f"| {k} | {v} |" for k, v in eval_results.items()),
    test_table="\n".join(f"| {k} | {v} |" for k, v in test_results.items()),
)

with open(f"{local_path}/README.md", "w") as f:
    f.write(model_card)
```

After we have our unzipped model and model card located in `my_bart_model`, we can either use the `huggingface_hub` SDK to create a repository and upload it to [huggingface.co](https://huggingface.co), or go to https://huggingface.co/new, create a new repository and upload it there.
```python from getpass import getpass from huggingface_hub import HfApi, Repository hf_username = "philschmid" # your username on huggingface.co hf_email = "[email protected]" # email used for commit repository_name = f"{hyperparameters['model_name_or_path'].split('/')[1]}-{hyperparameters['dataset_name']}" # repository name on huggingface.co password = getpass("Enter your password:") # creates a prompt for entering password # get hf token token = HfApi().login(username=hf_username, password=password) # create repository repo_url = HfApi().create_repo(token=token, name=repository_name, exist_ok=True) # create a Repository instance model_repo = Repository(use_auth_token=token, clone_from=repo_url, local_dir=local_path, git_user=hf_username, git_email=hf_email) # push model to the hub model_repo.push_to_hub() ``` --- ## Test inference After we uploaded our model we can access it at `https://huggingface.co/{hf_username}/{repository_name}` ```python print(f"https://huggingface.co/{hf_username}/{repository_name}") ``` And use the "Hosted Inference API" widget to test it. [https://huggingface.co/philschmid/bart-large-cnn-samsum](https://huggingface.co/philschmid/bart-large-cnn-samsum) ![inference](assets/19_sagemaker_distributed_training_seq2seq/inference-test.png)
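If you prefer testing the model in code rather than through the widget, here is a minimal sketch using the `transformers` summarization pipeline (assuming the repository name from the link above; any dialogue string works as input):

```python
from transformers import pipeline

# load the fine-tuned checkpoint from the Hub
summarizer = pipeline("summarization", model="philschmid/bart-large-cnn-samsum")

conversation = """Jeff: Can I train a 🤗 Transformers model on Amazon SageMaker?
Philipp: Sure you can use the new Hugging Face Deep Learning Container.
Jeff: ok.
Jeff: and how can I get started?
Jeff: where can I find documentation?
Philipp: ok, ok you can find everything here. https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face"""

print(summarizer(conversation))
```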
Introducing 🤗 Accelerate
sgugger
April 16, 2021
accelerate-library
guide
https://huggingface.co/blog/accelerate-library
# Introducing 🤗 Accelerate

## 🤗 Accelerate

Run your **raw** PyTorch training scripts on any kind of device.

Most high-level libraries above PyTorch provide support for distributed training and mixed precision, but the abstractions they introduce require a user to learn a new API if they want to customize the underlying training loop. 🤗 Accelerate was created for PyTorch users who like to have full control over their training loops but are reluctant to write (and maintain) the boilerplate code needed to use distributed training (for multi-GPU on one or several nodes, TPUs, ...) or mixed precision training. Plans forward include support for FairScale, DeepSpeed, and AWS SageMaker-specific data parallelism and model parallelism.

It provides two things: a simple and consistent API that abstracts that boilerplate code and a launcher command to easily run those scripts on various setups.

### Easy integration!

Let's first have a look at an example:

```diff
  import torch
  import torch.nn.functional as F
  from datasets import load_dataset
+ from accelerate import Accelerator

+ accelerator = Accelerator()
- device = 'cpu'
+ device = accelerator.device

  model = torch.nn.Transformer().to(device)
  optimizer = torch.optim.Adam(model.parameters())

  dataset = load_dataset('my_dataset')
  data = torch.utils.data.DataLoader(dataset, shuffle=True)

+ model, optimizer, data = accelerator.prepare(model, optimizer, data)

  model.train()
  for epoch in range(10):
      for source, targets in data:
          source = source.to(device)
          targets = targets.to(device)

          optimizer.zero_grad()

          output = model(source)
          loss = F.cross_entropy(output, targets)

-         loss.backward()
+         accelerator.backward(loss)

          optimizer.step()
```

By just adding five lines of code to any standard PyTorch training script, you can now run said script on any kind of distributed setting, as well as with or without mixed precision.
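The `Accelerator` init also accepts flags (covered in more detail below) to force CPU execution or mixed precision without touching the rest of the loop; a minimal sketch, assuming the `cpu` / `fp16` arguments described later in this post:

```python
from accelerate import Accelerator

# mixed precision training (fp16) -- the rest of the training loop stays unchanged
accelerator = Accelerator(fp16=True)

# or force everything to run on CPU, e.g. for debugging
# accelerator = Accelerator(cpu=True)
```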
🤗 Accelerate even handles the device placement for you, so you can simplify the training loop above even further:

```diff
  import torch
  import torch.nn.functional as F
  from datasets import load_dataset
+ from accelerate import Accelerator

+ accelerator = Accelerator()
- device = 'cpu'
- model = torch.nn.Transformer().to(device)
+ model = torch.nn.Transformer()
  optimizer = torch.optim.Adam(model.parameters())

  dataset = load_dataset('my_dataset')
  data = torch.utils.data.DataLoader(dataset, shuffle=True)

+ model, optimizer, data = accelerator.prepare(model, optimizer, data)

  model.train()
  for epoch in range(10):
      for source, targets in data:
-         source = source.to(device)
-         targets = targets.to(device)

          optimizer.zero_grad()

          output = model(source)
          loss = F.cross_entropy(output, targets)

-         loss.backward()
+         accelerator.backward(loss)

          optimizer.step()
```

In contrast, here are the changes needed to have this code run with distributed training:

```diff
+ import os
  import torch
  import torch.nn.functional as F
  from datasets import load_dataset
+ from torch.utils.data import DistributedSampler
+ from torch.nn.parallel import DistributedDataParallel

+ local_rank = int(os.environ.get("LOCAL_RANK", -1))
- device = 'cpu'
+ device = torch.device("cuda", local_rank)

  model = torch.nn.Transformer().to(device)
+ model = DistributedDataParallel(model)
  optimizer = torch.optim.Adam(model.parameters())

  dataset = load_dataset('my_dataset')
+ sampler = DistributedSampler(dataset)
- data = torch.utils.data.DataLoader(dataset, shuffle=True)
+ data = torch.utils.data.DataLoader(dataset, sampler=sampler)

  model.train()
  for epoch in range(10):
+     sampler.set_epoch(epoch)
      for source, targets in data:
          source = source.to(device)
          targets = targets.to(device)

          optimizer.zero_grad()

          output = model(source)
          loss = F.cross_entropy(output, targets)

          loss.backward()

          optimizer.step()
```

These changes will make your training script work for multiple GPUs, but your script will then stop working on CPU or a single GPU (unless you start adding if statements everywhere). Even more annoying, if you wanted to test your script on TPUs you would need to change different lines of code. Same for mixed precision training. The promise of 🤗 Accelerate is:

- to keep the changes to your training loop to the bare minimum so you have to learn as little as possible.
- to have the same functions work for any distributed setup, so you only have to learn one API.

### How does it work?

To see how the library works in practice, let's have a look at each line of code we need to add to a training loop.

```python
accelerator = Accelerator()
```

On top of giving the main object that you will use, this line will analyze from the environment the type of distributed training run and perform the necessary initialization. You can force a training on CPU or a mixed precision training by passing `cpu=True` or `fp16=True` to this init. Both of those options can also be set using the launcher for your script.

```python
model, optimizer, data = accelerator.prepare(model, optimizer, data)
```

This is the main bulk of the API and will prepare the three main types of objects: models (`torch.nn.Module`), optimizers (`torch.optim.Optimizer`) and dataloaders (`torch.utils.data.DataLoader`).

#### Model

Model preparation includes wrapping it in the proper container (for instance `DistributedDataParallel`) and putting it on the proper device.
Like with a regular distributed training, you will need to unwrap your model for saving, or to access its specific methods, which can be done with `accelerator.unwrap_model(model)`.

#### Optimizer

The optimizer is also wrapped in a special container that will perform the necessary operations in the step to make mixed precision work. It will also properly handle the device placement of the state dict if it's non-empty or loaded from a checkpoint.

#### DataLoader

This is where most of the magic is hidden. As you have seen in the code example, the library does not rely on a `DistributedSampler`, it will actually work with any sampler you might pass to your dataloader (if you ever had to write a distributed version of your custom sampler, there is no more need for that!). The dataloader is wrapped in a container that will only grab the indices relevant to the current process in the sampler (or skip the batches for the other processes if you use an `IterableDataset`) and put the batches on the proper device.

For this to work, Accelerate provides a utility function that will synchronize the random number generators on each of the processes run during distributed training. By default, it only synchronizes the `generator` of your sampler, so your data augmentation will be different on each process, but the random shuffling will be the same. You can of course use this utility to synchronize more RNGs if you need it.

```python
accelerator.backward(loss)
```

This last line adds the necessary steps for the backward pass (mostly for mixed precision, but other integrations will require some custom behavior here).

### What about evaluation?

Evaluation can either be run normally on all processes, or, if you just want it to run on the main process, you can use the handy test:

```python
if accelerator.is_main_process:
    # Evaluation loop
```

But you can also very easily run a distributed evaluation using Accelerate, here is what you would need to add to your evaluation loop:

```diff
+ eval_dataloader = accelerator.prepare(eval_dataloader)
  predictions, labels = [], []
  for source, targets in eval_dataloader:
      with torch.no_grad():
          output = model(source)

-     predictions.append(output.cpu().numpy())
-     labels.append(targets.cpu().numpy())
+     predictions.append(accelerator.gather(output).cpu().numpy())
+     labels.append(accelerator.gather(targets).cpu().numpy())

  predictions = np.concatenate(predictions)
  labels = np.concatenate(labels)

+ predictions = predictions[:len(eval_dataloader.dataset)]
+ labels = labels[:len(eval_dataloader.dataset)]

  metric_compute(predictions, labels)
```

Like for the training, you need to add one line to prepare your evaluation dataloader. Then you can just use `accelerator.gather` to gather across processes the tensors of predictions and labels. The last lines to add truncate the predictions and labels to the number of examples in your dataset, because the prepared evaluation dataloader will return a few more elements to make sure batches all have the same size on each process.

### One launcher to rule them all

The scripts using Accelerate will be completely compatible with your traditional launchers, such as `torch.distributed.launch`. But remembering all the arguments to them is a bit annoying, and when you've set up your instance with 4 GPUs, you'll run most of your trainings using them all.
Accelerate comes with a handy CLI that works in two steps:

```bash
accelerate config
```

This will trigger a little questionnaire about your setup, which will create a config file you can edit with all the defaults for your training commands. Then

```bash
accelerate launch path_to_script.py --args_to_the_script
```

will launch your training script using those defaults. The only thing you have to do is provide all the arguments needed by your training script.

To make this launcher even more awesome, you can use it to spawn an AWS instance using SageMaker. Look at [this guide](https://huggingface.co/docs/accelerate/sagemaker.html) to discover how!

### How to get involved?

To get started, just `pip install accelerate` or see the [documentation](https://huggingface.co/docs/accelerate/installation.html) for more install options.

Accelerate is a fully open-source project, you can find it on [GitHub](https://github.com/huggingface/accelerate), have a look at its [documentation](https://huggingface.co/docs/accelerate/) or skim through our [basic examples](https://github.com/huggingface/accelerate/tree/main/examples). Please let us know if you have any issue or feature you would like the library to support.

For all questions, the [forums](https://discuss.huggingface.co/c/accelerate) are the place to check!

For more complex real-world examples, you can look at the official [Transformers examples](https://github.com/huggingface/transformers/tree/master/examples). Each folder contains a `run_task_no_trainer.py` script that leverages the Accelerate library!
Scaling-up BERT Inference on CPU (Part 1)
mfuntowicz
April 20, 2021
bert-cpu-scaling-part-1
guide, nlp, partnerships, intel
https://huggingface.co/blog/bert-cpu-scaling-part-1
<style>
.centered {
  display: block;
  margin: 0 auto;
}
figure {
  text-align: center;
  display: table;
  max-width: 85%; /* demo; set some amount (px or %) if you can */
  margin: 10px auto; /* not needed unless you want centered */
}
</style>

# Scaling up BERT-like model Inference on modern CPU - Part 1

## 1. Context and Motivations

Back in October 2019, my colleague Lysandre Debut published a comprehensive _(at the time)_ [inference performance benchmarking blog (1)](https://medium.com/huggingface/benchmarking-transformers-pytorch-and-tensorflow-e2917fb891c2).

Since then, [🤗 transformers (2)](https://github.com/huggingface/transformers) welcomed a tremendous number of new architectures and thousands of new models were added to the [🤗 hub (3)](https://huggingface.co/models), which now counts more than 9,000 of them as of the first quarter of 2021.

As the NLP landscape keeps trending towards more and more BERT-like models being used in production, it remains challenging to efficiently deploy and run these architectures at scale.

This is why we recently introduced our [🤗 Inference API](https://api-inference.huggingface.co/docs/python/html/index.html): to let you focus on building value for your users and customers, rather than digging into all the highly technical aspects of running such models.

This blog post is the first part of a series which will cover most of the hardware and software optimizations to better leverage CPUs for BERT model inference.

For this initial blog post, we will cover the hardware part:
- Setting up a baseline - Out of the box results
- Practical & technical considerations when leveraging modern CPUs for CPU-bound tasks
- Core count scaling - Does increasing the number of cores actually give better performance?
- Batch size scaling - Increasing throughput with multiple parallel & independent model instances

We decided to focus on the most famous Transformer model architecture, [BERT (Devlin & al. 2018) (4)](https://arxiv.org/abs/1810.04805v1). While we focus this blog post on BERT-like models to keep the article concise, all the described techniques can be applied to any architecture on the Hugging Face model hub.

In this blog post we will not describe in detail the Transformer architecture - to learn about that I can't recommend enough the [Illustrated Transformer blogpost from Jay Alammar (5)](https://jalammar.github.io/illustrated-transformer/).

Today's goals are to give you an idea of where we are from an Open Source perspective using BERT-like models for inference on PyTorch and TensorFlow, and also what you can easily leverage to speed up inference.

## 2. Benchmarking methodology

When it comes to leveraging BERT-like models from Hugging Face's model hub, there are many knobs which can be tuned to make things faster.

Also, in order to quantify what "faster" means, we will rely on widely adopted metrics:

- **Latency**: Time it takes for a single execution of the model (i.e. forward call)
- **Throughput**: Number of executions performed in a fixed amount of time

These two metrics will help us understand the benefits and tradeoffs along this blog post.

The benchmarking methodology was reimplemented from scratch in order to integrate the latest features provided by transformers and also to let the community run and share benchmarks in a __hopefully easier__ way.
The whole framework is now based on [Facebook AI & Research's Hydra configuration library](https://hydra.cc/), allowing us to easily report and track all the items involved while running the benchmark, hence increasing the overall reproducibility. You can find the whole structure of the project [here](https://github.com/huggingface/tune).

In the 2021 version, we kept the ability to run inference workloads through PyTorch and TensorFlow as in the previous blog [(1)](https://medium.com/huggingface/benchmarking-transformers-pytorch-and-tensorflow-e2917fb891c2), along with their traced counterparts [TorchScript (6)](https://pytorch.org/docs/stable/jit.html) and [Google Accelerated Linear Algebra (XLA) (7)](https://www.tensorflow.org/xla). Also, we decided to include support for [ONNX Runtime (8)](https://www.onnxruntime.ai/) as it provides many optimizations specifically targeting transformers-based models, which makes it a strong candidate to consider when discussing performance.

Last but not least, this new unified benchmarking environment will allow us to easily run inference for different scenarios such as [Quantized Models (Zafrir & al.) (9)](https://arxiv.org/abs/1910.06188) using less precise number representations (`float16`, `int8`, `int4`). This method, known as **quantization**, has seen increased adoption among all major hardware providers. In the near future, we would like to integrate additional methods we are actively working on at Hugging Face, namely Distillation, Pruning & Sparsification.

## 3. Baselines

All the results below were run on an [Amazon Web Services (AWS) c5.metal instance](https://aws.amazon.com/ec2/instance-types/c5) leveraging an Intel Xeon Platinum 8275 CPU (48 cores/96 threads). The choice of this instance provides all the useful CPU features to speed up Deep Learning workloads such as:

- AVX-512 instruction set (_which might not be leveraged out-of-the-box by the various frameworks_)
- Intel Deep Learning Boost (also known as Vector Neural Network Instruction - VNNI) which provides specialized CPU instructions for running quantized networks (_using int8 data type_)

The choice of using a _metal_ instance is to avoid any virtualization issues which can arise when using cloud providers. This gives us full control of the hardware, especially while targeting the NUMA (Non-Uniform Memory Access) controller, which we will cover later in this post.

_The operating system was Ubuntu 20.04 (LTS) and all the experiments were conducted using Hugging Face transformers version 4.5.0, PyTorch 1.8.1 & Google TensorFlow 2.4.0_

## 4. Out of the box results

<br>
<figure class="image">
  <img alt="pytorch versus tensorflow out of the box" src="assets/21_bert_cpu_scaling_part_1/imgs/pytorch_vs_tf_oob.svg" />
  <figcaption>Figure 1. PyTorch (1.8.1) vs Google TensorFlow (2.4.1) out of the box</figcaption>
</figure>
<br>

<br>
<figure class="image">
  <img alt="pytorch versus tensorflow out of the box bigger batch sizes" src="assets/21_bert_cpu_scaling_part_1/imgs/pytorch_vs_tf_oob_big_batch.svg" />
  <figcaption>Figure 2. PyTorch (1.8.1) vs Google TensorFlow (2.4.1) out of the box - (Bigger Batch Size)</figcaption>
</figure>
<br>

Straight to the point: out-of-the-box, PyTorch shows better inference results than TensorFlow for all the configurations tested here.

It is important to note that the out-of-the-box results might not reflect the "optimal" setup for either PyTorch or TensorFlow and can thus look deceiving here.
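For reference, here is a minimal sketch of how a single out-of-the-box latency measurement can be reproduced in PyTorch (this is not the Hydra-based benchmarking framework described above, just an illustrative timing loop; batch size and sequence length mirror one of the benchmarked configurations and absolute numbers will depend on your hardware):

```python
import time

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModel.from_pretrained("bert-base-cased")
model.eval()

# batch_size = 1, sequence_length capped at 128 tokens
inputs = tokenizer("Hello world! " * 60, truncation=True, max_length=128, return_tensors="pt")

with torch.no_grad():
    # warmup runs to exclude one-time allocation costs from the measurement
    for _ in range(10):
        model(**inputs)

    latencies = []
    for _ in range(100):
        start = time.perf_counter()
        model(**inputs)
        latencies.append(time.perf_counter() - start)

print(f"mean latency: {1000 * sum(latencies) / len(latencies):.1f} ms")
```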
One possible way to explain such a difference between the two frameworks might be the underlying technology used to execute parallel sections within operators. PyTorch internally uses [OpenMP (10)](https://www.openmp.org/) along with [Intel MKL (now oneDNN) (11)](https://software.intel.com/content/www/us/en/develop/documentation/oneapi-programming-guide/top/api-based-programming/intel-oneapi-deep-neural-network-library-onednn.html) for efficient linear algebra computations, whereas TensorFlow relies on Eigen and its own threading implementation.

## 5. Scaling BERT Inference to increase overall throughput on modern CPU

### 5.1. Introduction

There are multiple ways to improve the latency and throughput for tasks such as BERT inference. Improvements and tuning can be performed at various levels, from enabling Operating System features, swapping dependent libraries with more performant ones, and carefully tuning framework properties to, last but not least, using parallelization logic leveraging all the cores on the CPU(s).

For the remainder of this blog post we will focus on the latter, also known as **Multiple Inference Streams**.

The idea is simple: Allocate **multiple instances** of the same model and assign the execution of each instance to a **dedicated, non-overlapping subset of the CPU cores** in order to have truly parallel instances.

### 5.2. Cores and Threads on Modern CPUs

On our way towards optimizing CPU inference for better usage of the CPU cores, you might have already seen -_at least for the past 20 years_- modern CPU specifications reporting "cores" and "hardware threads" or "physical" and "logical" numbers. These notions refer to a mechanism called **Simultaneous Multi-Threading** (SMT), or **Hyper-Threading** on Intel's platforms.

To illustrate this, imagine two tasks **A** and **B**, executing in parallel, each on its own software thread. At some point, there is a high probability these two tasks will have to wait for some resources to be fetched from main memory, SSD, HDD or even the network. If the threads are scheduled on different physical cores, with no hyper-threading, during these periods the core executing the task is in an **Idle** state waiting for the resources to arrive, and effectively doing nothing... and hence not getting fully utilized.

Now, with **SMT**, the **two software threads for task A and B** can be scheduled on the same **physical core**, such that their execution is interleaved on that physical core: Task A and Task B will execute simultaneously on the physical core and when one task is halted, the other task can still continue execution on the core, thereby increasing the utilization of that core.

<br>
<figure class="image">
  <img class="centered" alt="Intel Hyper Threading technology" src="assets/21_bert_cpu_scaling_part_1/imgs/hyper_threading_explained.png" />
  <figcaption>Figure 3. Illustration of Intel Hyper Threading technology (SMT)</figcaption>
</figure>
<br>

Figure 3 above simplifies the situation by assuming a single-core setup.
If you want some more details on how SMT works on multi-core CPUs, please refer to these two articles with very deep technical explanations of the behavior:

- [Intel® Hyper-Threading Technology - Technical User Guide (12)](http://www.cslab.ece.ntua.gr/courses/advcomparch/2007/material/readings/Intel%20Hyper-Threading%20Technology.pdf)
- [Introduction to Hyper-Threading Technology (13)](https://software.intel.com/content/www/us/en/develop/articles/introduction-to-hyper-threading-technology.html)

Back to our model inference workload... If you think about it, in a perfect world with a fully optimized setup, computations take the majority of time. In this context, using the logical cores shouldn't bring us any performance benefit because both logical cores (hardware threads) compete for the core's execution resources. As a result, since the tasks are mostly general matrix multiplications (_[gemms (14)](https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms#Level_3)_), they are inherently CPU bound and **do not benefit** from SMT.

### 5.3. Leveraging Multi-Socket servers and CPU affinity

Nowadays servers bring many cores, and some of them even support multi-socket setups (_i.e. multiple CPUs on the motherboard_). On Linux, the command `lscpu` reports all the specifications and topology of the CPUs present on the system:

```shell
ubuntu@some-ec2-machine:~$ lscpu
Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Byte Order:                      Little Endian
Address sizes:                   46 bits physical, 48 bits virtual
CPU(s):                          96
On-line CPU(s) list:             0-95
Thread(s) per core:              2
Core(s) per socket:              24
Socket(s):                       2
NUMA node(s):                    2
Vendor ID:                       GenuineIntel
CPU family:                      6
Model:                           85
Model name:                      Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz
Stepping:                        7
CPU MHz:                         1200.577
CPU max MHz:                     3900.0000
CPU min MHz:                     1200.0000
BogoMIPS:                        6000.00
Virtualization:                  VT-x
L1d cache:                       1.5 MiB
L1i cache:                       1.5 MiB
L2 cache:                        48 MiB
L3 cache:                        71.5 MiB
NUMA node0 CPU(s):               0-23,48-71
NUMA node1 CPU(s):               24-47,72-95
```

In our case we have a machine with **2 sockets**, each socket providing **24 physical cores** with **2 threads per core** (SMT).

Another interesting characteristic is the notion of **NUMA** nodes (0, 1), which represent how cores and memory are mapped on the system.

Non-Uniform Memory Access (**NUMA**) is the opposite of Uniform Memory Access (**UMA**), where the whole memory pool is accessible by all the cores through a single unified bus between the sockets and the main memory. **NUMA**, on the other hand, splits the memory pool and each CPU socket is responsible for addressing a subset of the memory, reducing the congestion on the bus.

<br>
<figure class="image">
  <img class="centered" alt="Non-Uniform Memory Access and Uniform Memory Access architectures" src="assets/21_bert_cpu_scaling_part_1/imgs/UMA_NUMA.png" />
  <figcaption>Figure 5. Difference illustration of UMA and NUMA architectures <a href="https://software.intel.com/content/www/us/en/develop/articles/optimizing-applications-for-numa.html">(source (15))</a></figcaption>
</figure>
<br>

In order to fully utilize the potential of such a beefy machine, we need to ensure our model instances are correctly dispatched across all the **physical** cores on all sockets, along with enforcing memory allocation to be "NUMA-aware".

On Linux, NUMA's process configuration can be tuned through [`numactl`](https://linux.die.net/man/8/numactl), which provides an interface to bind a process to a set of CPU cores (referred to as **Thread Affinity**).
Also, it allows tuning the memory allocation policy, making sure the memory allocated for the process is as close as possible to the cores' memory pool (referred to as **Explicit Memory Allocation Directives**).

_Note: Setting both core and memory affinities is important here. Having computations done on socket 0 and memory allocated on socket 1 would ask the system to go over the shared bus between the sockets to exchange memory, thus leading to an undesired overhead._

### 5.4. Tuning Thread Affinity & Memory Allocation Policy

Now that we have all the knobs required to control the resource allocation of our model instances, we go further and see how to effectively deploy them and what the impact on latency and throughput is. Let's proceed gradually, to get a sense of the impact of each command and parameter.

First, we start by launching our inference model without any tuning, and we observe how the computations are being dispatched on CPU cores (_Left_).

```shell
python3 src/main.py model=bert-base-cased backend.name=pytorch batch_size=1 sequence_length=128
```

Then we specify the core and memory affinity through `numactl`, using all the **physical** cores and only a single thread (thread 0) per core (_Right_):

```shell
numactl -C 0-47 -m 0,1 python3 src/main.py model=bert-base-cased backend.name=pytorch batch_size=1 sequence_length=128
```

<br>
<figure class="image">
  <img class="centered" alt="htop CPU usage without and with numactl thread affinity set" src="assets/21_bert_cpu_scaling_part_1/imgs/numa_combined.svg" />
  <figcaption>Figure 6. Linux htop command side-by-side results without & with Thread Affinity set</figcaption>
</figure>
<br>

As you can see, without any specific tuning, PyTorch and TensorFlow dispatch the work on a single socket, using all the logical cores in that socket (both threads on 24 cores). Also, as we highlighted earlier, we do not want to leverage the **SMT** feature in our case, so we set the process' thread affinity to target only 1 hardware thread.

_Note, this is specific to this run and can vary depending on individual setups. Hence, it is recommended to check thread affinity settings for each specific use-case._

Let's take some time here to highlight what we did with `numactl`:

- `-C 0-47` indicates to `numactl` what the thread affinity is (cores 0 to 47).
- `-m 0,1` indicates to `numactl` to allocate memory on both CPU sockets.

If you wonder why we are binding the process to cores [0...47], you need to go back and look at the output of `lscpu`. There you will find the lines `NUMA node0` and `NUMA node1`, which have the form `NUMA node<X> <logical ids>`.

In our case, each socket is one NUMA node, and there are 2 NUMA nodes. Each socket, or each NUMA node, has 24 physical cores and 2 hardware threads per core, so 48 logical cores. For NUMA node 0, logical processors 0-23 are hardware thread 0 and 48-71 are hardware thread 1 on the 24 physical cores of socket 0. Likewise, for NUMA node 1, 24-47 are hardware thread 0 and 72-95 are hardware thread 1 on the 24 physical cores of socket 1.

As we are targeting just 1 thread per physical core, as explained earlier, we pick only thread 0 on each core, and hence logical processors 0-47. Since we are using both sockets, we also need to bind the memory allocations accordingly (0,1).

_Please note that using both sockets may not always give the best results, particularly for small problem sizes. The benefit of using compute resources across both sockets might be reduced or even negated by cross-socket communication overhead._

## 6. Core count scaling - Does using more cores actually improve performance?
When thinking about possible ways to improve our model inference performance, the first rational solution might be to throw more resources at the same amount of work. Throughout the rest of this blog series, we will refer to this setup as **Core Count Scaling**, meaning that only the number of cores used on the system to achieve the task varies. This is also often referred to as Strong Scaling in the HPC world.

At this stage, you may wonder what the point is of allocating only a subset of the cores rather than throwing all the horses at the task to achieve the minimum latency. Indeed, depending on the problem size, throwing more resources at the task might give better results, but it is also possible that, for small problems, putting more CPU cores to work doesn't improve the final latency.

In order to illustrate this, figure 7. below takes different problem sizes (`batch_size = 1, sequence length = {32, 128, 512}`) and reports the latencies with respect to the number of CPU cores used for running the computations, for both PyTorch and TensorFlow.

Limiting the number of resources involved in the computation is done by limiting the CPU cores involved in **intra** operations (_**intra** here means inside an operator doing computation, also known as a "kernel"_). This is achieved through the following APIs:

- PyTorch: `torch.set_num_threads(x)`
- TensorFlow: `tf.config.threading.set_intra_op_parallelism_threads(x)`

<br>
<figure class="image">
  <img alt="" src="assets/21_bert_cpu_scaling_part_1/imgs/core_count_scaling.svg" />
  <figcaption>Figure 7. Latency measurements</figcaption>
</figure>
<br>

As you can see, how much the number of threads involved in the computations improves the latency measurements depends on the problem size. For small and medium-sized problems, using only one socket gives the best performance. For large-sized problems, the overhead of the cross-socket communication is outweighed by the computation cost, so these workloads benefit from using all the cores available on both sockets.

## 7. Multi-Stream Inference - Using multiple instances in parallel

If you're still reading this, you should now be in good shape to set up parallel inference workloads on CPU. Now, we are going to highlight some of the possibilities offered by the powerful hardware we have, tuning the knobs described before to scale our inference as linearly as possible.

In the following section we will explore another possible scaling solution, **Batch Size Scaling**, but before diving into this, let's take a look at how we can leverage Linux tools in order to assign Thread Affinity, allowing effective model instance parallelism.

Instead of throwing more cores at the task as you would do in the core count scaling setup, we will now use more model instances. Each instance will run independently on its own, dedicated subset of the CPU cores, in a truly parallel fashion.

### 7.1. How to allocate multiple independent instances
Let's start simple: if we want to spawn 2 instances, one on each socket, with 24 cores assigned to each:

```shell
numactl -C 0-23 -m 0 python3 src/main.py model=bert-base-cased batch_size=1 sequence_length=128 backend.name=pytorch backend.num_threads=24
numactl -C 24-47 -m 1 python3 src/main.py model=bert-base-cased batch_size=1 sequence_length=128 backend.name=pytorch backend.num_threads=24
```

Starting from here, each instance does not share any resource with the other, and everything is operating at maximum efficiency from a hardware perspective. The latency measurements are identical to what a single instance would achieve, but throughput is actually 2x higher, as the two instances operate in a truly parallel way.

We can further increase the number of instances, lowering the number of cores assigned to each instance. Let's run 4 independent instances, each of them effectively bound to 12 CPU cores.

```shell
numactl -C 0-11 -m 0 python3 src/main.py model=bert-base-cased batch_size=1 sequence_length=128 backend.name=pytorch backend.num_threads=12
numactl -C 12-23 -m 0 python3 src/main.py model=bert-base-cased batch_size=1 sequence_length=128 backend.name=pytorch backend.num_threads=12
numactl -C 24-35 -m 1 python3 src/main.py model=bert-base-cased batch_size=1 sequence_length=128 backend.name=pytorch backend.num_threads=12
numactl -C 36-47 -m 1 python3 src/main.py model=bert-base-cased batch_size=1 sequence_length=128 backend.name=pytorch backend.num_threads=12
```

The outcomes remain the same: our 4 instances are effectively running in a truly parallel manner. The latency will be slightly higher than in the example before (2x fewer cores being used), but the throughput will again be 2x higher.

### 7.2. Smart dispatching - Allocating different model instances for different problem sizes

Another possibility offered by this setup is to have multiple instances carefully tuned for various problem sizes. With a smart dispatching approach, one can then redirect incoming requests to the configuration giving the best latency depending on the request workload.

```shell
# Small-sized problems (sequence length <= 32) use only 8 cores (on socket 0 - 8/24 cores used)
numactl -C 0-7 -m 0 python3 src/main.py model=bert-base-cased batch_size=1 sequence_length=32 backend.name=pytorch backend.num_threads=8

# Medium-sized problems (32 < sequence < 384) use the remaining 16 cores (on socket 0 - (8+16)/24 cores used)
numactl -C 8-23 -m 0 python3 src/main.py model=bert-base-cased batch_size=1 sequence_length=128 backend.name=pytorch backend.num_threads=16

# Large-sized problems (sequence >= 384) use an entire socket (socket 1 - 24/24 cores used)
numactl -C 24-47 -m 1 python3 src/main.py model=bert-base-cased batch_size=1 sequence_length=384 backend.name=pytorch backend.num_threads=24
```

## 8. Batch size scaling - Improving throughput and latency with multiple parallel & independent model instances

Another very interesting direction for scaling up inference is to put more model instances into the pool while proportionally reducing the workload each instance receives. This method actually changes both the size of the problem (_batch size_) and the resources involved in the computation (_cores_).

To illustrate, imagine you have a server with `C` CPU cores, and you want to run a workload containing `B` samples with `S` tokens.
You can represent this workload as a tensor of shape `[B, S]`, `B` being the size of the batch and `S` being the maximum sequence length within the `B` samples. With `N` instances, each of them executes on `C / N` cores and receives a subset of the task of shape `[B / N, S]`: each instance doesn't receive the global batch but a subset of it, hence the name **Batch Size Scaling**.

In order to highlight the benefits of such a scaling method, the charts below report the latencies when scaling up model instances, along with the effects on throughput.

When looking at the results, let's focus on the latency and the throughput aspects:

On one hand, we take the maximum latency over the pool of instances to reflect the time it takes to process all the samples in the batch. Put differently, as the instances operate in a truly parallel fashion, the time it takes to gather all the batch chunks from all the instances is driven by the longest time it takes for an individual instance in the pool to finish its chunk.

As you can see below in Figure 8., the actual latency gain when increasing the number of instances really depends on the problem size. In all cases, we can find an optimal resource allocation (batch size & number of instances) to minimize our latency, but there is no specific pattern for the number of cores to involve in the computation. Also, it is important to note that the results might look totally different on another system _(i.e. Operating System, kernel version, framework version, etc.)_.

Figure 9. sums up the best multi-instance configuration when targeting minimum latency, taking the minimum over the number of instances involved, and reports these latency-minimizing setups for both PyTorch and TensorFlow across various problem sizes. For instance, for `{batch = 8, sequence length = 128}`, using 4 instances (each with `{batch = 2}` and 12 cores) gives the best latency measurements.

_**Spoiler**: There are numerous other optimizations we will discuss in a follow-up blog post which will substantially impact this chart._

<br>
<figure class="image">
  <img alt="Batch scaling experiment for PyTorch and Tensorflow" src="assets/21_bert_cpu_scaling_part_1/imgs/batch_scaling_exp.svg" style="width:100%"/>
  <figcaption>Figure 8. Max latency evolution with respect to number of instances for a total batch size of 8</figcaption>
</figure>
<br>
<br>
<figure class="image">
  <img alt="Optimal number of instance minimizing overall latency for a total batch size of 8" src="assets/21_bert_cpu_scaling_part_1/imgs/batch_size_scaling_latency_optimal_nb_instances.svg" style="width:100%"/>
  <figcaption>Figure 9. Optimal number of instances minimizing overall latency for a total batch size of 8</figcaption>
</figure>
<br>

On the other hand, we observe the throughput as the sum over all the model instances executing in parallel. It allows us to visualize the scalability of the system when adding more and more instances, each of them with fewer resources but also a proportionally smaller workload. Here, the results show almost linear scalability and thus optimal hardware usage.

<figure class="image">
  <img alt="Batch scaling experiment for PyTorch and Tensorflow" src="assets/21_bert_cpu_scaling_part_1/imgs/batch_scaling_exp_throughput.svg" style="width:100%"/>
  <figcaption>Figure 10. Sum throughput with respect to number of instances for a total batch size of 8</figcaption>
</figure>
<br>

## 9. Conclusion
Through this blog post, we covered the out-of-the-box BERT inference performance one can expect from PyTorch and TensorFlow, from a simple PyPi install and without further tuning. It is important to highlight that the results provided here reflect out-of-the-box framework setups; hence, they might not provide the absolute best performance. We decided not to include optimizations as part of this blog post in order to focus on hardware and efficiency. Optimizations will be discussed in the second part! 🚀

Then, we covered and detailed the impact and importance of setting the thread affinity, along with the trade-off between the target problem size and the number of cores required to achieve the task. Also, it is important to define **which criterion** _(i.e. latency vs throughput)_ to use when optimizing your deployment, as the resulting setups might be totally different.

On a more general note, small problem sizes (_short sequences and/or small batches_) might require much fewer cores to achieve the best possible latency than big problems (_very long sequences and/or big batches_). It is interesting to cover all these aspects when thinking about the final deployment platform, as it might cut the cost of the infrastructure drastically. For instance, our 48-core machine costs **4.848\$/h**, whereas a smaller instance with only 8 cores lowers the cost to **0.808\$/h**, a **6x cost reduction**.

Last but not least, many of the knobs discussed throughout this blog post can be automatically tuned through a [launcher script](https://github.com/huggingface/tune/blob/main/launcher.py) highly inspired by the original script made by Intel and available [here](https://github.com/intel/intel-extension-for-pytorch/blob/master/intel_pytorch_extension_py/launch.py). The launcher script is able to automatically start your Python process(es) with the correct thread affinity, effectively splitting resources across instances, along with many other performance tips! We will detail many of these tips in the second part 🧐.

In the follow-up blog post, more advanced settings and tuning techniques to decrease model latency even further will be covered, such as:

- Launcher script walk-through
- Tuning the memory allocation library
- Using Linux's Transparent Huge Pages mechanisms
- Using vendor-specific Math/Parallel libraries

Stay tuned! 🤗

## Acknowledgments

- [Omry Yadan](https://github.com/omry) (Facebook FAIR) - Author of [OmegaConf](https://github.com/omry/omegaconf) & [Hydra](https://github.com/facebookresearch/hydra), for all the tips on setting up Hydra correctly.
- All Intel & Intel Labs' NLP colleagues - For the ongoing optimizations and research efforts they are putting into transformers and more generally into the NLP field.
- Hugging Face colleagues - For all the comments and improvements in the reviewing process.

## References

1. [Benchmarking Transformers: PyTorch and TensorFlow](https://medium.com/huggingface/benchmarking-transformers-pytorch-and-tensorflow-e2917fb891c2)
2. [HuggingFace's Transformers: State-of-the-art Natural Language Processing](https://arxiv.org/abs/1910.03771v2)
3. [HuggingFace's Model Hub](https://huggingface.co/models)
4. [BERT - Pre-training of Deep Bidirectional Transformers for Language Understanding (Devlin & al. 2018)](https://arxiv.org/abs/1810.04805v1)
5. [Illustrated Transformer blogpost from Jay Alammar](https://jalammar.github.io/illustrated-transformer/)
6. [PyTorch - TorchScript](https://pytorch.org/docs/stable/jit.html)
7. [Google Accelerated Linear Algebra (XLA)](https://www.tensorflow.org/xla)
8. [ONNX Runtime - Optimize and Accelerate Machine Learning Inferencing and Training](https://www.onnxruntime.ai/)
9. [Q8BERT - Quantized 8Bit BERT (Zafrir & al. 2019)](https://arxiv.org/abs/1910.06188)
10. [OpenMP](https://www.openmp.org/)
11. [Intel oneDNN](https://software.intel.com/content/www/us/en/develop/documentation/oneapi-programming-guide/top/api-based-programming/intel-oneapi-deep-neural-network-library-onednn.html)
12. [Intel® Hyper-Threading Technology - Technical User Guide](http://www.cslab.ece.ntua.gr/courses/advcomparch/2007/material/readings/Intel%20Hyper-Threading%20Technology.pdf)
13. [Introduction to Hyper-Threading Technology](https://software.intel.com/content/www/us/en/develop/articles/introduction-to-hyper-threading-technology.html)
14. [BLAS (Basic Linear Algebra Subprogram) - Wikipedia](https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms#Level_3)
15. [Optimizing Applications for NUMA](https://software.intel.com/content/www/us/en/develop/articles/optimizing-applications-for-numa.html)
Using & Mixing Hugging Face Models with Gradio 2.0
abidlabs
May 25, 2021
gradio
open-source-collab, guide
https://huggingface.co/blog/gradio
# Using & Mixing Hugging Face Models with Gradio 2.0

> ##### Cross-posted from the&nbsp;[Gradio blog](https://gradio.app/blog/using-huggingface-models).

The **[Hugging Face Model Hub](https://huggingface.co/models)** has more than 10,000 machine learning models submitted by users. You’ll find all kinds of natural language processing models that, for example, translate between Finnish and English or recognize Chinese speech. More recently, the Hub has expanded to even include models for image classification and audio processing.

Hugging Face has always worked to make models accessible and easy to use. The `transformers` library makes it possible to load a model in a few lines of code. After a model is loaded, it can be used to make predictions on new data programmatically. _But it’s not just programmers who are using machine learning models!_ An increasingly common scenario in machine learning is **demoing models to interdisciplinary teams** or letting **non-programmers use models** (to help discover biases, failure points, etc.).

The **[Gradio library](https://gradio.app/)** lets machine learning developers create demos and GUIs from machine learning models very easily, and share them for free with collaborators as easily as sharing a Google Docs link. Now, we’re excited to share that the Gradio 2.0 library lets you **_load and use almost any Hugging Face model_ _with a GUI_** **_in just 1 line of code_**. Here’s an example:

![GIF of Gradio 2.0](./assets/22_gradio/recording-20.gif)

By default, this uses HuggingFace’s hosted Inference API (you can supply your own API key or use the public access without an API key), or you can also run `pip install transformers` and run the model computations locally if you’d like.

Do you want to customize the demo? You can override any of the default parameters of the [Interface class](https://gradio.app/docs) by passing in your own parameters:

![GIF of Gradio 2.0](./assets/22_gradio/recording-21.gif)

**_But wait, there’s more!_** With 10,000 models already on the Model Hub, we see models not just as standalone pieces of code, but as lego pieces that can be **composed and mixed** to create more sophisticated applications and demos.

For example, Gradio lets you load multiple models in _parallel_ (imagine you want to compare 4 different text generation models from Hugging Face to see which one is the best for your use case):

![GIF of Gradio 2.0](./assets/22_gradio/recording-22.gif)

Or put your models in _series_. This makes it easy to build complex applications out of multiple machine learning models. For example, here we can build an application to translate and summarize Finnish news articles in 3 lines of code:

![GIF of Gradio 2.0](./assets/22_gradio/recording-24.gif)

You can even mix multiple models in _series_ and compare them to each other in _parallel_ (we’ll let you try that yourself!). To try any of this out, just install Gradio (`pip install gradio`) and pick a Hugging Face model you want to try. Start building with Gradio and Hugging Face 🧱⛏️
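As a recap, here is a minimal sketch of what the workflow above could look like in code. It assumes the Gradio 2.x APIs `gr.Interface.load`, `gradio.mix.Parallel` and `gradio.mix.Series`; the model names are only examples, and any compatible Hub model should work:

```python
import gradio as gr
from gradio.mix import Parallel, Series

# Load a GUI for any Hub model in one line (uses the hosted Inference API by default):
# gr.Interface.load("huggingface/distilgpt2").launch()

# Compare several text-generation models side by side:
# Parallel(gr.Interface.load("huggingface/distilgpt2"),
#          gr.Interface.load("huggingface/gpt2")).launch()

# Chain models in series: translate Finnish to English, then summarize the result.
translator = gr.Interface.load("huggingface/Helsinki-NLP/opus-mt-fi-en")
summarizer = gr.Interface.load("huggingface/facebook/bart-large-cnn")
Series(translator, summarizer).launch()
```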
Few-shot learning in practice: GPT-NEO and the 🤗 Accelerated Inference API
philschmid
June 3, 2021
few-shot-learning-gpt-neo-and-inference-api
guide, nlp
https://huggingface.co/blog/few-shot-learning-gpt-neo-and-inference-api
# Few-shot learning in practice: GPT-Neo and the 🤗 Accelerated Inference API

In many Machine Learning applications, the amount of available labeled data is a barrier to producing a high-performing model. The latest developments in NLP show that you can overcome this limitation by providing a few examples at inference time with a large language model - a technique known as Few-Shot Learning. In this blog post, we'll explain what Few-Shot Learning is, and explore how a large language model called GPT-Neo, and the 🤗 Accelerated Inference API, can be used to generate your own predictions.

## What is Few-Shot Learning?

Few-Shot Learning refers to the practice of feeding a machine learning model with a very small amount of training data to guide its predictions, like a few examples at inference time, as opposed to standard fine-tuning techniques, which require a relatively large amount of training data for the pre-trained model to adapt to the desired task with accuracy.

This technique has been mostly used in computer vision, but with some of the latest Language Models, like [EleutherAI GPT-Neo](https://www.eleuther.ai/research/projects/gpt-neo/) and [OpenAI GPT-3](https://openai.com/blog/gpt-3-apps/), we can now use it in Natural Language Processing (NLP).

In NLP, Few-Shot Learning can be used with Large Language Models, which have learned to perform a wide range of tasks implicitly during their pre-training on large text datasets. This enables the model to generalize, that is, to understand related but previously unseen tasks, with just a few examples.

Few-Shot NLP examples consist of three main components:

- **Task Description**: A short description of what the model should do, e.g. "Translate English to French"
- **Examples**: A few examples showing the model what it is expected to predict, e.g. "sea otter => loutre de mer"
- **Prompt**: The beginning of a new example, which the model should complete by generating the missing text, e.g. "cheese => "

![few-shot-prompt](assets/22_few_shot_learning_gpt_neo_and_inference_api/few-shot-prompt.png)
<small>Image from <a href="https://arxiv.org/abs/2005.14165" target="_blank">Language Models are Few-Shot Learners</a></small>

Creating these few-shot examples can be tricky, since you need to articulate the “task” you want the model to perform through them. A common issue is that models, especially smaller ones, are very sensitive to the way the examples are written.

An approach to optimize Few-Shot Learning in production is to learn a common representation for a task and then train task-specific classifiers on top of this representation.

OpenAI showed in the [GPT-3 Paper](https://arxiv.org/abs/2005.14165) that the few-shot prompting ability improves with the number of language model parameters.

![few-shot-performance](assets/22_few_shot_learning_gpt_neo_and_inference_api/few-shot-performance.png)
<small>Image from <a href="https://arxiv.org/abs/2005.14165" target="_blank">Language Models are Few-Shot Learners</a></small>

Let's now take a look at how GPT-Neo and the 🤗 Accelerated Inference API can be used to generate your own Few-Shot Learning predictions!

---

## What is GPT-Neo?

GPT-Neo is a family of transformer-based language models from [EleutherAI](https://www.eleuther.ai/projects/gpt-neo/) based on the GPT architecture. [EleutherAI](https://www.eleuther.ai)'s primary goal is to train a model that is equivalent in size to GPT-3 and make it available to the public under an open license.
All of the currently available GPT-Neo checkpoints are trained on the Pile dataset, a large text corpus that is extensively documented in ([Gao et al., 2021](https://arxiv.org/abs/2101.00027)). As such, the model is expected to function better on text that matches the distribution of its training text; we recommend keeping this in mind when designing your examples.

---

## 🤗 Accelerated Inference API

The [Accelerated Inference API](https://huggingface.co/inference-api) is our hosted service to run inference on any of the 10,000+ models publicly available on the 🤗 Model Hub, or your own private models, via simple API calls. The API includes acceleration on CPU and GPU with [up to 100x speedup](https://huggingface.co/blog/accelerated-inference) compared to an out-of-the-box deployment of Transformers.

To integrate Few-Shot Learning predictions with `GPT-Neo` in your own apps, you can use the 🤗 Accelerated Inference API with the code snippet below. You can find your API token [here](https://huggingface.co/settings/token); if you don't have an account, you can get started [here](https://huggingface.co/pricing).

```python
import json
import requests

API_TOKEN = ""

def query(payload='', parameters=None, options={'use_cache': False}):
    API_URL = "https://api-inference.huggingface.co/models/EleutherAI/gpt-neo-2.7B"
    headers = {"Authorization": f"Bearer {API_TOKEN}"}
    body = {"inputs": payload, 'parameters': parameters, 'options': options}
    response = requests.request("POST", API_URL, headers=headers, data=json.dumps(body))
    try:
        response.raise_for_status()
    except requests.exceptions.HTTPError:
        return "Error:" + " ".join(response.json()['error'])
    else:
        return response.json()[0]['generated_text']

parameters = {
    'max_new_tokens': 25,  # number of generated tokens
    'temperature': 0.5,    # controlling the randomness of generations
    'end_sequence': "###"  # stopping sequence for generation
}

prompt = "...."  # few-shot prompt

data = query(prompt, parameters)
```

---

## Practical Insights

Here are some practical insights to help you get started using `GPT-Neo` and the 🤗 Accelerated Inference API.

Since `GPT-Neo` (2.7B) is about 60x smaller than `GPT-3` (175B), it does not generalize as well to zero-shot problems and needs 3-4 examples to achieve good results. When you provide more examples, `GPT-Neo` understands the task and takes the `end_sequence` into account, which allows us to control the generated text pretty well.

![insights-benefit-of-examples](assets/22_few_shot_learning_gpt_neo_and_inference_api/insights-benefit-of-examples.png)

The hyperparameters `End Sequence`, `Token Length` & `Temperature` can be used to control the `text-generation` of the model, and you can use this to your advantage to solve the task you need. The `Temperature` controls the randomness of your generations: lower temperatures result in less random generations and higher temperatures in more random generations.

![insights-benefit-of-hyperparameter](assets/22_few_shot_learning_gpt_neo_and_inference_api/insights-benefit-of-hyperparameter.png)

In the example, you can see how important it is to define your hyperparameters. They can make the difference between solving your task and failing miserably.

---

## Responsible Use

Few-Shot Learning is a powerful technique but also presents unique pitfalls that need to be taken into account when designing use cases. To illustrate this, let's consider the default `Sentiment Analysis` setting provided in the widget.
After seeing three examples of sentiment classification, the model makes the following prediction 4 times out of 5, with `temperature` set to 0.1:

> ###
> Tweet: "I'm a disabled happy person"
> Sentiment: Negative

What could go wrong? Imagine that you are using sentiment analysis to aggregate reviews of products on an online shopping website: a possible outcome could be that items useful to people with disabilities would be automatically down-ranked - a form of automated discrimination. For more on this specific issue, we recommend the ACL 2020 paper [Social Biases in NLP Models as Barriers for Persons with Disabilities](https://www.aclweb.org/anthology/2020.acl-main.487.pdf). Because Few-Shot Learning relies more directly on information and associations picked up from pre-training, it is more sensitive to this type of failure.

How to minimize the risk of harm? Here are some practical recommendations.

### Best practices for responsible use

- Make sure people know which parts of their user experience depend on the outputs of the ML system
- If possible, give users the ability to opt-out
- Provide a mechanism for users to give feedback on the model decision, and to override it
- Monitor feedback, especially model failures, for groups of users that may be disproportionately affected

What most needs to be avoided is using the model to automatically make decisions for, or about, a user, without giving a human the opportunity to provide input or correct the output. Several regulations, such as [GDPR](https://gdpr-info.eu/) in Europe, require that users be provided an explanation for automatic decisions made about them.

---

To use GPT-Neo or any Hugging Face model in your own application, you can [start a free trial](https://huggingface.co/pricing) of the 🤗 Accelerated Inference API. If you need help mitigating bias in models and AI systems, or leveraging Few-Shot Learning, the 🤗 Expert Acceleration Program can [offer your team direct premium support from the Hugging Face team](https://huggingface.co/support).
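As a recap of the prompt structure discussed earlier (task description, examples, prompt), here is a minimal sketch of how such a sentiment few-shot prompt could be assembled before being passed to the `query` helper shown above. The tweets and labels are made up purely for illustration:

```python
# Illustrative few-shot prompt for sentiment classification,
# following the task description / examples / prompt structure.
examples = [
    ("I loved the new movie, what a great evening!", "Positive"),
    ("The package arrived broken and nobody answers my emails.", "Negative"),
    ("Such a beautiful morning for a run.", "Positive"),
]

prompt = "Decide whether the sentiment of a Tweet is Positive or Negative.\n###\n"
for text, label in examples:
    prompt += f'Tweet: "{text}"\nSentiment: {label}\n###\n'

# The new example the model should complete:
prompt += 'Tweet: "I just adopted a puppy and could not be happier"\nSentiment:'

# `prompt` can now be sent with the `query` helper defined earlier,
# with `end_sequence` set to "###" so the generation stops after the label.
```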
Sentence Transformers in the 🤗 Hub
nreimers
June 28, 2021
sentence-transformers-in-the-hub
open-source-collab, nlp
https://huggingface.co/blog/sentence-transformers-in-the-hub
# Sentence Transformers in the Hugging Face Hub

Over the past few weeks, we've built collaborations with many Open Source frameworks in the machine learning ecosystem. One that gets us particularly excited is Sentence Transformers.

[Sentence Transformers](https://github.com/UKPLab/sentence-transformers) is a framework for sentence, paragraph and image embeddings. This makes it possible to derive semantically meaningful embeddings (1), which is useful for applications such as semantic search or multilingual zero-shot classification. As part of the Sentence Transformers [v2 release](https://github.com/UKPLab/sentence-transformers/releases/tag/v2.0.0), there are a lot of cool new features:

- Sharing your models in the Hub easily.
- Widgets and Inference API for sentence embeddings and sentence similarity.
- Better sentence-embedding models available ([benchmark](https://www.sbert.net/docs/pretrained_models.html#sentence-embedding-models) and [models](https://huggingface.co/sentence-transformers) in the Hub).

With over 90 pretrained Sentence Transformers models for more than 100 languages in the Hub, anyone can benefit from them and easily use them.

Pre-trained models can be loaded and used directly with a few lines of code:

```python
from sentence_transformers import SentenceTransformer

sentences = ["Hello World", "Hallo Welt"]

model = SentenceTransformer('sentence-transformers/paraphrase-MiniLM-L6-v2')
embeddings = model.encode(sentences)
print(embeddings)
```

But that's not all. People will probably want to either demo their models or play with other models easily, so we're happy to announce the release of two new widgets in the Hub! The first one is the `feature-extraction` widget, which shows the sentence embedding.

_[Interactive widget: the hosted Inference API demo for `sentence-transformers/distilbert-base-nli-max-tokens` lets you type a sentence and compute its embedding directly from the model page.]_

But seeing a bunch of numbers might not be very useful to you (unless you're able to understand the embeddings from a quick look, which would be impressive!). We're also introducing a new widget for a common use case of Sentence Transformers: computing sentence similarity.
_[Interactive widget: the hosted Inference API demo for `sentence-transformers/paraphrase-MiniLM-L6-v2` lets you enter a source sentence and several sentences to compare it to, and computes their similarity directly from the model page.]_

Of course, on top of the widgets, we also provide API endpoints in our Inference API that you can use to programmatically call your models!

```python
import json
import requests

API_URL = "https://api-inference.huggingface.co/models/sentence-transformers/paraphrase-MiniLM-L6-v2"
headers = {"Authorization": "Bearer YOUR_TOKEN"}

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

data = query(
    {
        "inputs": {
            "source_sentence": "That is a happy person",
            "sentences": [
                "That is a happy dog",
                "That is a very happy person",
                "Today is a sunny day"
            ]
        }
    }
)
```

## Unleashing the Power of Sharing

So why is this powerful?
In a matter of minutes, you can share your trained models with the whole community.

```python
from sentence_transformers import SentenceTransformer

# Load an existing model (or train your own)
model = SentenceTransformer('sentence-transformers/paraphrase-MiniLM-L6-v2')

# Push it to the Hub under your namespace
model.save_to_hub("my_new_model")
```

Now you will have a [repository](https://huggingface.co/osanseviero/my_new_model) in the Hub which hosts your model. A model card was automatically created. It describes the architecture by listing the layers and shows how to use the model with both `Sentence Transformers` and `🤗 Transformers`. You can also try out the widget and use the Inference API straight away!

If this was not exciting enough, your models will also be easily discoverable by [filtering for all](https://huggingface.co/models?filter=sentence-transformers) `Sentence Transformers` models.

## What's next?

Moving forward, we want to make this integration even more useful. In our roadmap, we expect training and evaluation data to be included in the automatically created model card, as is the case in `transformers` from version `v4.8`.

And what's next for you? We're very excited to see your contributions! If you already have a `Sentence Transformer` repo in the Hub, you can now enable the widget and Inference API by changing the model card metadata.

```yaml
---
tags:
- sentence-transformers
- sentence-similarity # Or feature-extraction!
---
```

If you don't have any model in the Hub and want to learn more about Sentence Transformers, head to [www.SBERT.net](https://www.sbert.net)!

## Would you like to integrate your library with the Hub?

This integration is possible thanks to the [`huggingface_hub`](https://github.com/huggingface/huggingface_hub) library, which has all our widgets and the API for all our supported libraries. If you would like to integrate your library with the Hub, we have a [guide](https://huggingface.co/docs/hub/models-adding-libraries) for you!

## References

1. Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. [https://arxiv.org/abs/1908.10084](https://arxiv.org/abs/1908.10084)
Deploy Hugging Face models easily with Amazon SageMaker
philschmid
July 8, 2021
deploy-hugging-face-models-easily-with-amazon-sagemaker
guide, partnerships, aws
https://huggingface.co/blog/deploy-hugging-face-models-easily-with-amazon-sagemaker
<img src="/blog/assets/17_the_partnership_amazon_sagemaker_and_hugging_face/cover.png" alt="hugging-face-and-aws-logo" class="w-full"> # **Deploy Hugging Face models easily with Amazon SageMaker 🏎** Earlier this year[ we announced a strategic collaboration with Amazon](https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face) to make it easier for companies to use Hugging Face in Amazon SageMaker, and ship cutting-edge Machine Learning features faster. We introduced new Hugging Face Deep Learning Containers (DLCs) to[ train Hugging Face Transformer models in Amazon SageMaker](https://huggingface.co/transformers/sagemaker.html#getting-started-train-a-transformers-model). Today, we are excited to share a new inference solution with you that makes it easier than ever to deploy Hugging Face Transformers with Amazon SageMaker! With the new Hugging Face Inference DLCs, you can deploy your trained models for inference with just one more line of code, or select any of the 10,000+ publicly available models from the[ Model Hub](https://huggingface.co/models), and deploy them with Amazon SageMaker. Deploying models in SageMaker provides you with production-ready endpoints that scale easily within your AWS environment, with built-in monitoring and a ton of enterprise features. It's been an amazing collaboration and we hope you will take advantage of it! Here's how to use the new[ SageMaker Hugging Face Inference Toolkit](https://github.com/aws/sagemaker-huggingface-inference-toolkit) to deploy Transformers-based models: ```python from sagemaker.huggingface import HuggingFaceModel # create Hugging Face Model Class and deploy it as SageMaker Endpoint huggingface_model = HuggingFaceModel(...).deploy() ``` That's it! 🚀 To learn more about accessing and using the new Hugging Face DLCs with the Amazon SageMaker Python SDK, check out the guides and resources below. --- # **Resources, Documentation & Samples 📄** Below you can find all the important resources for deploying your models to Amazon SageMaker. 
## **Blog/Video** - [Video: Deploy a Hugging Face Transformers Model from S3 to Amazon SageMaker](https://youtu.be/pfBGgSGnYLs) - [Video: Deploy a Hugging Face Transformers Model from the Model Hub to Amazon SageMaker](https://youtu.be/l9QZuazbzWM) ## **Samples/Documentation** - [Hugging Face documentation for Amazon SageMaker](https://huggingface.co/docs/sagemaker/main) - [Deploy models to Amazon SageMaker](https://huggingface.co/docs/sagemaker/inference) - [Amazon SageMaker documentation for Hugging Face](https://docs.aws.amazon.com/sagemaker/latest/dg/hugging-face.html) - [Python SDK SageMaker documentation for Hugging Face](https://sagemaker.readthedocs.io/en/stable/frameworks/huggingface/index.html) - [Deep Learning Container](https://github.com/aws/deep-learning-containers/blob/master/available_images.md#huggingface-training-containers) - [Notebook: Deploy one of the 10 000+ Hugging Face Transformers to Amazon SageMaker for Inference](https://github.com/huggingface/notebooks/blob/master/sagemaker/11_deploy_model_from_hf_hub/deploy_transformer_model_from_hf_hub.ipynb) - [Notebook: Deploy a Hugging Face Transformer model from S3 to SageMaker for inference](https://github.com/huggingface/notebooks/blob/master/sagemaker/10_deploy_model_from_s3/deploy_transformer_model_from_s3.ipynb) --- # **SageMaker Hugging Face Inference Toolkit ⚙️** In addition to the Hugging Face Transformers-optimized Deep Learning Containers for inference, we have created a new[ Inference Toolkit](https://github.com/aws/sagemaker-huggingface-inference-toolkit) for Amazon SageMaker. This new Inference Toolkit leverages the `pipelines` from the `transformers` library to allow zero-code deployments of models without writing any code for pre- or post-processing. In the "Getting Started" section below you find two examples of how to deploy your models to Amazon SageMaker. In addition to the zero-code deployment, the Inference Toolkit supports "bring your own code" methods, where you can override the default methods. You can learn more about "bring your own code" in the documentation[ here](https://github.com/aws/sagemaker-huggingface-inference-toolkit#-user-defined-codemodules) or you can check out the sample notebook "deploy custom inference code to Amazon SageMaker". ## **API - Inference Toolkit Description** Using the` transformers pipelines`, we designed an API, which makes it easy for you to benefit from all `pipelines` features. The API has a similar interface than the[ 🤗 Accelerated Inference API](https://api-inference.huggingface.co/docs/python/html/detailed_parameters.html), meaning your inputs need to be defined in the `inputs` key and if you want additional supported `pipelines` parameters you can add them in the `parameters` key. Below you can find examples for requests. ```python # text-classification request body { "inputs": "Camera - You are awarded a SiPix Digital Camera! call 09061221066 fromm landline. Delivery within 28 days." } # question-answering request body { "inputs": { "question": "What is used for inference?", "context": "My Name is Philipp and I live in Nuremberg. This model is used with sagemaker for inference." 
} } # zero-shot classification request body { "inputs": "Hi, I recently bought a device from your company but it is not working as advertised and I would like to get reimbursed!", "parameters": { "candidate_labels": [ "refund", "legal", "faq" ] } } ``` # **Getting started 🧭** In this guide we will use the new Hugging Face Inference DLCs and Amazon SageMaker Python SDK to deploy two transformer models for inference. In the first example, we deploy for inference a Hugging Face Transformer model trained in Amazon SageMaker. In the second example, we directly deploy one of the 10,000+ publicly available Hugging Face Transformers models from the[ Model Hub](https://huggingface.co/models) to Amazon SageMaker for Inference. ## **Setting up the environment** We will use an Amazon SageMaker Notebook Instance for the example. You can learn[ here how to set up a Notebook Instance.](https://docs.aws.amazon.com/sagemaker/latest/dg/nbi.html) To get started, jump into your Jupyter Notebook or JupyterLab and create a new Notebook with the `conda_pytorch_p36` kernel. **_Note: The use of Jupyter is optional: We could also launch SageMaker API calls from anywhere we have an SDK installed, connectivity to the cloud, and appropriate permissions, such as a Laptop, another IDE, or a task scheduler like Airflow or AWS Step Functions._** After that we can install the required dependencies. ```bash pip install "sagemaker>=2.48.0" --upgrade ``` To deploy a model on SageMaker, we need to create a `sagemaker` Session and provide an IAM role with the right permission. The `get_execution_role` method is provided by the SageMaker SDK as an optional convenience. You can also specify the role by writing the specific role ARN you want your endpoint to use. This IAM role will be later attached to the Endpoint, e.g. download the model from Amazon S3. ```python import sagemaker sess = sagemaker.Session() role = sagemaker.get_execution_role() ``` --- ## **Deploy a trained Hugging Face Transformer model to SageMaker for inference** There are two ways to deploy your SageMaker trained Hugging Face model. You can either deploy it after your training is finished, or you can deploy it later, using the `model_data` pointing to your saved model on Amazon S3. In addition to the two below-mentioned options, you can also instantiate Hugging Face endpoints with lower-level SDK such as `boto3` and `AWS CLI`, `Terraform` and with CloudFormation templates. ### **Deploy the model directly after training with the Estimator class** If you deploy your model directly after training, you need to ensure that all required model artifacts are saved in your training script, including the tokenizer and the model. A benefit of deploying directly after training is that SageMaker model container metadata will contain the source training job, providing lineage from training job to deployed model. ```python from sagemaker.huggingface import HuggingFace ############ pseudo code start ############ # create HuggingFace estimator for running training huggingface_estimator = HuggingFace(....) # starting the train job with our uploaded datasets as input huggingface_estimator.fit(...) ############ pseudo code end ############ # deploy model to SageMaker Inference predictor = hf_estimator.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge") # example request, you always need to define "inputs" data = { "inputs": "Camera - You are awarded a SiPix Digital Camera! call 09061221066 fromm landline. Delivery within 28 days." 
}

# request
predictor.predict(data)
```

After we run our request, we can delete the endpoint again with:

```python
# delete endpoint
predictor.delete_endpoint()
```

### **Deploy the model from pre-trained checkpoints using the `HuggingFaceModel` class**

If you've already trained your model and want to deploy it at some later time, you can use the `model_data` argument to specify the location of your tokenizer and model weights.

```python
from sagemaker.huggingface.model import HuggingFaceModel

# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
    model_data="s3://models/my-bert-model/model.tar.gz",  # path to your trained sagemaker model
    role=role,                                            # iam role with permissions to create an Endpoint
    transformers_version="4.6",                           # transformers version used
    pytorch_version="1.7",                                # pytorch version used
)

# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge"
)

# example request, you always need to define "inputs"
data = {
   "inputs": "Camera - You are awarded a SiPix Digital Camera! call 09061221066 fromm landline. Delivery within 28 days."
}

# request
predictor.predict(data)
```

After we run our request, we can delete the endpoint again with:

```python
# delete endpoint
predictor.delete_endpoint()
```

## **Deploy one of the 10,000+ Hugging Face Transformers to Amazon SageMaker for Inference**

To deploy a model directly from the Hugging Face Model Hub to Amazon SageMaker, we need to define two environment variables when creating the `HuggingFaceModel`:

* `HF_MODEL_ID`: defines the model id, which will be automatically loaded from [huggingface.co/models](http://huggingface.co/models) when creating our SageMaker endpoint. The 🤗 Hub provides 10,000+ models, all available through this environment variable.
* `HF_TASK`: defines the task for the used 🤗 Transformers pipeline. A full list of tasks can be found [here](https://huggingface.co/transformers/main_classes/pipelines.html).

```python
from sagemaker.huggingface.model import HuggingFaceModel

# Hub Model configuration. <https://huggingface.co/models>
hub = {
    'HF_MODEL_ID': 'distilbert-base-uncased-distilled-squad',  # model_id from hf.co/models
    'HF_TASK': 'question-answering'                            # NLP task you want to use for predictions
}

# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
    env=hub,                      # configuration for loading model from Hub
    role=role,                    # iam role with permissions to create an Endpoint
    transformers_version="4.6",   # transformers version used
    pytorch_version="1.7",        # pytorch version used
)

# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge"
)

# example request, you always need to define "inputs"
data = {
    "inputs": {
        "question": "What is used for inference?",
        "context": "My Name is Philipp and I live in Nuremberg. This model is used with sagemaker for inference."
    }
}

# request
predictor.predict(data)
```

After we run our request, we can delete the endpoint again with:

```python
# delete endpoint
predictor.delete_endpoint()
```
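If you would rather not keep the SageMaker Python SDK `predictor` object around, for example when calling a deployed endpoint from another application, you can invoke the same endpoint with plain `boto3`. The following is a minimal sketch; the endpoint name `huggingface-inference-endpoint` is a placeholder, so replace it with the name shown for your endpoint in the SageMaker console.

```python
import json

import boto3

# SageMaker runtime client; region and credentials come from your environment
runtime = boto3.client("sagemaker-runtime")

# "huggingface-inference-endpoint" is a placeholder endpoint name
response = runtime.invoke_endpoint(
    EndpointName="huggingface-inference-endpoint",
    ContentType="application/json",
    Body=json.dumps({"inputs": "I love the new Hugging Face Inference DLCs!"}),
)

# the Inference Toolkit returns JSON, e.g. a list of label/score dictionaries
print(json.loads(response["Body"].read().decode("utf-8")))
```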
---

# **FAQ 🎯**

You can find the complete [Frequently Asked Questions](https://huggingface.co/docs/sagemaker/faq) in the [documentation](https://huggingface.co/docs/sagemaker/faq).

_Q: Which models can I deploy for Inference?_

A: You can deploy:

* any 🤗 Transformers model trained in Amazon SageMaker or on another compatible platform, as long as it can accommodate the SageMaker Hosting design,
* any of the 10,000+ publicly available Transformer models from the Hugging Face [Model Hub](https://huggingface.co/models), or
* your private models hosted in your Hugging Face premium account!

_Q: Which pipelines and tasks are supported by the Inference Toolkit?_

A: The Inference Toolkit and DLC support any of the `transformers` `pipelines`. You can find the full list [here](https://huggingface.co/transformers/main_classes/pipelines.html).

_Q: Do I have to use the `transformers` `pipelines` when hosting SageMaker endpoints?_

A: No, you can also write your custom inference code to serve your own models and logic, documented [here](https://huggingface.co/docs/sagemaker/inference#user-defined-codemodules). A minimal sketch of such a module is shown right after this FAQ.

_Q: Do I have to use the SageMaker Python SDK to use the Hugging Face Deep Learning Containers (DLCs)?_

A: You can use the Hugging Face DLCs without the SageMaker Python SDK and deploy your models to SageMaker with other SDKs, such as the [AWS CLI](https://docs.aws.amazon.com/cli/latest/reference/sagemaker/create-training-job.html), [boto3](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker.html#SageMaker.Client.create_training_job) or [CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-sagemaker-endpoint.html). The DLCs are also available through Amazon ECR and can be pulled and used in any environment of your choice.

_Q: Why should I use the Hugging Face Deep Learning Containers?_

A: The DLCs are fully tested, maintained, optimized deep learning environments that require no installation, configuration, or maintenance. In particular, our inference DLC comes with a pre-written serving stack, which drastically lowers the technical bar of DL serving.

_Q: How are my data and code secured by Amazon SageMaker?_

A: Amazon SageMaker provides numerous security mechanisms including **[encryption at rest](https://docs.aws.amazon.com/sagemaker/latest/dg/encryption-at-rest-nbi.html)** and **[in transit](https://docs.aws.amazon.com/sagemaker/latest/dg/encryption-in-transit.html)**, **[Virtual Private Cloud (VPC) connectivity](https://docs.aws.amazon.com/sagemaker/latest/dg/interface-vpc-endpoint.html),** and **[Identity and Access Management (IAM)](https://docs.aws.amazon.com/sagemaker/latest/dg/security_iam_service-with-iam.html)**. To learn more about security in the AWS cloud and with Amazon SageMaker, you can visit **[Security in Amazon SageMaker](https://docs.aws.amazon.com/sagemaker/latest/dg/security_iam_service-with-iam.html)** and **[AWS Cloud Security](https://docs.aws.amazon.com/sagemaker/latest/dg/security_iam_service-with-iam.html)**.

_Q: Is this available in my region?_

A: For a list of the supported regions, please visit the **[AWS region table](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/)** for all AWS global infrastructure.

_Q: Do you offer premium support or support SLAs for this solution?_

A: AWS Technical Support tiers are available from AWS and cover development and production issues for AWS products and services; please refer to AWS Support for specifics and scope. If you have questions which the Hugging Face community can help answer and/or benefit from, please **[post them in the Hugging Face forum](https://discuss.huggingface.co/c/sagemaker/17)**.
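To give a rough idea of the "custom inference code" option mentioned above, a user module typically lives in `code/inference.py` inside your model archive and overrides a small set of handler functions. The function names below follow the Inference Toolkit's user-defined modules documentation, but treat this as an illustrative sketch rather than a drop-in file, and check the linked documentation for the exact contract.

```python
# code/inference.py -- sketch of a custom handler for the Hugging Face Inference Toolkit
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer


def model_fn(model_dir):
    # called once when the endpoint starts: load whatever you packaged in model.tar.gz
    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    model = AutoModelForSequenceClassification.from_pretrained(model_dir)
    model.eval()
    return model, tokenizer


def predict_fn(data, model_and_tokenizer):
    # called for every request with the deserialized request body
    model, tokenizer = model_and_tokenizer
    inputs = tokenizer(data["inputs"], return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return {"label_id": int(logits.argmax(dim=-1)), "logits": logits.tolist()}
```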
--- If you need premium support from the Hugging Face team to accelerate your NLP roadmap, our[ Expert Acceleration Program](https://huggingface.co/support) offers direct guidance from our open-source, science, and ML Engineering teams.
Welcome spaCy to the 🤗 Hub
osanseviero
July 13, 2021
spacy
open-source-collab, nlp
https://huggingface.co/blog/spacy
# Welcome spaCy to the Hugging Face Hub

[spaCy](https://github.com/explosion/spaCy) is a popular library for advanced Natural Language Processing used widely across industry. spaCy makes it easy to use and train pipelines for tasks like named entity recognition, text classification, part of speech tagging and more, and lets you build powerful applications to process and analyze large volumes of text.

Hugging Face makes it really easy to share your spaCy pipelines with the community! With a single command, you can upload any pipeline package, with a pretty model card and all required metadata auto-generated for you. The inference API currently supports NER out-of-the-box, and you can try out your pipeline interactively in your browser. You'll also get a live URL for your package that you can `pip install` from anywhere for a smooth path from prototype all the way to production!

### Finding models

Over 60 canonical models can be found in the [spaCy](https://hf.co/spacy) org. These models are from the [latest 3.1 release](https://explosion.ai/blog/spacy-v3-1), so you can try the latest released models right now! On top of this, you can find all spaCy models from the community here: https://huggingface.co/models?filter=spacy.

### Widgets

This integration includes support for NER widgets, so all models with a NER component will have this out of the box! Coming soon there will be support for text classification and POS.

*The original post embeds an interactive Hosted Inference API widget (token classification) for [`spacy/en_core_web_sm`](https://huggingface.co/spacy/en_core_web_sm) at this point.*
" placeholder="Your sentence here..." required="" type="text"> <button class="btn-widget w-24 h-10 px-5 rounded-l-none border-l-0 " type="submit">Compute</button></div></form> <div class="mt-1.5"><div class="text-gray-400 text-xs">This model is currently loaded and running on the Inference API.</div> </div> <div class="mt-auto pt-4 flex items-center text-xs text-gray-500"><button class="flex items-center cursor-not-allowed text-gray-300" disabled=""><svg class="mr-1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32" style="transform: rotate(360deg);"><path d="M31 16l-7 7l-1.41-1.41L28.17 16l-5.58-5.59L24 9l7 7z" fill="currentColor"></path><path d="M1 16l7-7l1.41 1.41L3.83 16l5.58 5.59L8 23l-7-7z" fill="currentColor"></path><path d="M12.419 25.484L17.639 6l1.932.518L14.35 26z" fill="currentColor"></path></svg> JSON Output</button> <button class="flex items-center ml-auto"><svg class="mr-1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M22 16h2V8h-8v2h6v6z" fill="currentColor"></path><path d="M8 24h8v-2h-6v-6H8v8z" fill="currentColor"></path><path d="M26 28H6a2.002 2.002 0 0 1-2-2V6a2.002 2.002 0 0 1 2-2h20a2.002 2.002 0 0 1 2 2v20a2.002 2.002 0 0 1-2 2zM6 6v20h20.001L26 6z" fill="currentColor"></path></svg> Maximize</button></div> </div></div></div> ### Using existing models All models from the Hub can be directly installed using `pip install`. ```bash pip install https://huggingface.co/spacy/en_core_web_sm/resolve/main/en_core_web_sm-any-py3-none-any.whl ``` ```python # Using spacy.load(). import spacy nlp = spacy.load("en_core_web_sm") # Importing as module. import en_core_web_sm nlp = en_core_web_sm.load() ``` When you open a repository, you can click `Use in spaCy` and you will be given a working snippet that you can use to install and load the model! ![snippet](assets/23_spacy/snippet.png) ![snippet](assets/23_spacy/snippet2.png) You can even make HTTP requests to call the models from the Inference API, which is useful in production settings. Here is an example of a simple request: ```bash curl -X POST --data '{"inputs": "Hello, this is Omar"}' https://api-inference.huggingface.co/models/spacy/en_core_web_sm >>> [{"entity_group":"PERSON","word":"Omar","start":15,"end":19,"score":1.0}] ``` And for larger-scale use cases, you can click "Deploy > Accelerated Inference" and see how to do this with Python. ### Sharing your models But probably the coolest feature is that now you can very easily share your models with the `spacy-huggingface-hub` [library](https://github.com/explosion/spacy-huggingface-hub), which extends the `spaCy` CLI with a new command, `huggingface-hub push`. ```bash huggingface-cli login python -m spacy package ./en_ner_fashion ./output --build wheel cd ./output/en_ner_fashion-0.0.0/dist python -m spacy huggingface-hub push en_ner_fashion-0.0.0-py3-none-any.whl ``` In just a minute, you can get your packaged model in the Hub, try it out directly in the browser, and share it with the rest of the community. All the required metadata will be uploaded for you and you even get a cool model card. Try it out and share your models with the community! ## Would you like to integrate your library to the Hub? 
This integration is possible thanks to the [`huggingface_hub`](https://github.com/huggingface/huggingface_hub) library which has all our widgets and the API for all our supported libraries. If you would like to integrate your library to the Hub, we have a [guide](https://huggingface.co/docs/hub/models-adding-libraries) for you!
Deep Learning over the Internet: Training Language Models Collaboratively
mryab
July 15, 2021
collaborative-training
research
https://huggingface.co/blog/collaborative-training
# Deep Learning over the Internet: Training Language Models Collaboratively

<small>
With the additional help of Quentin Lhoest and Sylvain Lesage.
</small>

Modern language models often require a significant amount of compute for pretraining, making it impossible to obtain them without access to tens or hundreds of GPUs or TPUs. Though in theory it might be possible to combine the resources of multiple individuals, in practice, such distributed training methods have previously seen limited success because connection speeds over the Internet are much slower than in high-performance GPU supercomputers.

In this blog post, we describe [DeDLOC](https://arxiv.org/abs/2106.10207) — a new method for collaborative distributed training that can adapt itself to the network and hardware constraints of participants. We show that it can be successfully applied in real-world scenarios by pretraining [sahajBERT](https://huggingface.co/neuropark/sahajBERT), a model for the Bengali language, with 40 volunteers. On downstream tasks in Bengali, this model achieves nearly state-of-the-art quality with results comparable to much larger models that used hundreds of high-tier accelerators.

<div class="aspect-w-16 aspect-h-9">
<iframe src="https://www.youtube.com/embed/v8ShbLasRF8" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div>

## Distributed Deep Learning in Open Collaborations

### Why should we do it?

These days, many of the highest-quality NLP systems are based on large pretrained Transformers. In general, their quality improves with size: you can achieve unparalleled results in natural language understanding and generation by scaling up the parameter count and leveraging the abundance of unlabeled text data.

Unfortunately, convenience is not the only reason we rely on these pretrained models: the hardware resources for training Transformers on large datasets often exceed anything affordable to a single person, and even to most commercial or research organizations. Take, for example, BERT: its training was estimated to cost about $7,000, and for the largest models like GPT-3, this number can be as high as $12 million! This resource limitation might seem obvious and inevitable, but is there really no alternative to using pretrained models for the broader ML community?

However, there might be a way out of this situation: to come up with a solution, we only need to take a look around. It might be the case that the computational resources we're looking for are already there; for example, many of us have powerful computers with gaming or workstation GPUs at home. You might've already guessed that we're going to pool their power similarly to [Folding@home](https://foldingathome.org/), [Rosetta@home](https://boinc.bakerlab.org/), [Leela Chess Zero](https://lczero.org/) or different [BOINC](https://boinc.berkeley.edu/) projects that leverage volunteer computing, but the approach is even more general. For instance, several laboratories can join their smaller clusters to utilize all the available resources, and some might want to join the experiment using inexpensive cloud instances.

To a skeptical mind, it might seem that we're missing a key factor here: data transfer in distributed DL is often a bottleneck, since we need to aggregate the gradients from multiple workers. Indeed, any naïve approach to distributed training over the Internet is bound to fail, as most participants don't have gigabit connections and might disconnect from the network at any time.
So how on Earth can you train anything with a household data plan? :)

As a solution to this problem, we propose a new training algorithm, called Distributed Deep Learning in Open Collaborations (or **DeDLOC**), which is described in detail in our recently released [preprint](https://arxiv.org/abs/2106.10207). Now, let’s find out what the core ideas behind this algorithm are!

### Training with volunteers

In its most frequently used version, distributed training with multiple GPUs is pretty straightforward. Recall that when doing deep learning, you usually compute gradients of your loss function averaged across many examples in a batch of training data. In the case of _data-parallel_ distributed DL, you simply split the data across multiple workers, compute gradients separately, and then average them once the local batches are processed. When the average gradient is computed on all workers, we adjust the model weights with the optimizer and continue training our model. You can see an illustration of the different tasks that are executed below.

![assets/24_sahajBERT/roles_tasks.png](assets/24_sahajBERT/roles_tasks.png)

<div style="line-height:105%;font-size:80%">
<p align="center">
Typical machine learning tasks executed by peers in distributed training, possibly with a separation of roles
</p>
</div>

Often, to reduce the amount of synchronization and to stabilize the learning process, we can accumulate the gradients for N batches before averaging, which is equivalent to increasing the actual batch size N times. This approach, combined with the observation that most state-of-the-art language models use large batches, led us to a simple idea: let's accumulate one _very_ large batch across all volunteer devices before each optimizer step!

Along with complete equivalence to regular distributed training and easy scalability, this method also has the benefit of built-in fault tolerance, which we illustrate below.

Let's consider a couple of potential failure cases that we might encounter throughout a collaborative experiment. By far, the most frequent scenario is that one or several peers disconnect from the training procedure: they might have an unstable connection or simply want to use their GPUs for something else. In this case, training only suffers a minor setback: the contribution of these peers gets deducted from the currently accumulated batch size, but other participants will compensate for that with their gradients. Also, if more peers join, the target batch size will simply be reached faster, and our training procedure will naturally speed up. You can see a demonstration of this in the video:

<div class="aspect-w-16 aspect-h-9">
<iframe src="https://www.youtube.com/embed/zdVsg5zsGdc" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div>
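To make the "accumulate one very large batch before each optimizer step" idea concrete, here is a minimal single-process PyTorch sketch of gradient accumulation, assuming a simple classification setup where `model`, `optimizer` and `data_loader` are defined elsewhere. In the collaborative setting, the accumulated gradients would additionally be averaged across peers before the optimizer step.

```python
import torch

accumulation_steps = 8  # 8 local batches -> roughly an 8x larger effective batch

optimizer.zero_grad()
for step, (inputs, targets) in enumerate(data_loader):
    loss = torch.nn.functional.cross_entropy(model(inputs), targets)
    # scale the loss so the accumulated gradient is an average rather than a sum
    (loss / accumulation_steps).backward()

    if (step + 1) % accumulation_steps == 0:
        # in DeDLOC, this is where peers would average their accumulated gradients
        optimizer.step()
        optimizer.zero_grad()
```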
### Adaptive averaging

Now that we have discussed the overall training procedure, there remains one more question: how do we actually aggregate the gradients of participants? Most home computers cannot easily accept incoming connections, and the download speed might also become a constraint. Since we rely on volunteer hardware for experiments, a central server is not really a viable option, as it will quickly face overload when scaling to tens of clients and hundreds of millions of parameters. Most data-parallel training runs today don't use this strategy anyway; instead, they rely on All-Reduce — an efficient all-to-all communication primitive. Thanks to clever algorithmic optimizations, each node can compute the global average without sending the entire local gradient to every peer.

Because All-Reduce is decentralized, it seems like a good choice; however, we still need to take the diversity of hardware and network setups into account. For example, some volunteers might join from computers that have a slow network connection but powerful GPUs, some might have better connectivity only to a subset of other peers, and some may be firewalled from incoming connections.

It turns out we can actually come up with an optimal data transfer strategy on the fly by leveraging this information about performance! On a high level, we split the entire gradient vector into parts depending on the Internet speed of each peer: those with the fastest connection aggregate the largest parts. Also, if some nodes do not accept incoming connections, they simply send their data for aggregation but do not compute the average themselves. Depending on the conditions, this adaptive algorithm can recover well-known distributed DL algorithms and improve on them with a hybrid strategy, as demonstrated below.

![Adaptive strategy](assets/24_sahajBERT/adaptive.png)

<div style="line-height:105%;font-size:80%">
<p align="center">
Examples of different averaging strategies with the adaptive algorithm.
</p>
</div>

<div style="line-height:105%;border:1px solid #F5F5F5;background-color:#F5F5F5;color: black">
<p align="center">
💡 The core techniques for decentralized training are available in <a href="https://github.com/learning-at-home/hivemind">Hivemind</a>.<br>
Check out the repo and learn how to use this library in your own projects!
</p>
</div><br>

## sahajBERT

As always, having a well-designed algorithmic framework doesn't mean that it will work as intended in practice, because some assumptions may not hold true in actual training runs. To verify the competitive performance of this technology and to showcase its potential, we organized a special collaborative event to pretrain a masked language model for the Bengali language. Even though it is the fifth most spoken native language in the world, it has [very few](https://huggingface.co/models?filter=bn&pipeline_tag=fill-mask) masked language models openly available, which emphasizes the importance of tools that can empower the community, unlocking a plethora of opportunities in the field.

We conducted this experiment with real volunteers from the Neuropark community and used openly available datasets (OSCAR and Wikipedia), because we wanted to have a fully reproducible example that might serve as an inspiration for other groups. Below, we describe the detailed setup of our training run and demonstrate its results.

### Architecture

For our experiment, we chose ALBERT _(A Lite BERT)_ — a model for language representations that is pretrained with Masked Language Modeling (MLM) and Sentence Order Prediction (SOP) as objectives. We use this architecture because weight sharing makes it very parameter-efficient: for example, ALBERT-large has ~18M trainable parameters and performs comparably to BERT-base with ~108M weights on the GLUE benchmark. It means that there is less data to exchange between the peers, which is crucial in our setup, as it significantly speeds up each training iteration.
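If you want to sanity-check this parameter-efficiency claim yourself, a quick way is to instantiate both architectures and count their trainable weights. The sketch below uses the public `albert-large-v2` and `bert-base-uncased` configurations as stand-ins for the architectures discussed here; the exact totals depend on which task head is attached.

```python
from transformers import AlbertForMaskedLM, AutoConfig, BertForMaskedLM

def count_parameters(model):
    # total number of trainable parameters
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

albert = AlbertForMaskedLM(AutoConfig.from_pretrained("albert-large-v2"))
bert = BertForMaskedLM(AutoConfig.from_pretrained("bert-base-uncased"))

print(f"ALBERT-large: ~{count_parameters(albert) / 1e6:.0f}M parameters")
print(f"BERT-base:    ~{count_parameters(bert) / 1e6:.0f}M parameters")
```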
<div style="line-height:105%;border:1px solid #F5F5F5;background-color:#F5F5F5;color: black"> <p align="center"> 💡 Want to know more about ALBERT?<br> <a href="https://arxiv.org/abs/1909.11942">Paper</a><br> <a href="https://huggingface.co/transformers/model_doc/albert.html#albert" >Transformers doc</a > </p> </div> ### Tokenizer The first brick of our model is called a _tokenizer_ and takes care of transforming raw text into vocabulary indices. Because we are training a model for Bengali, which is not very similar to English, we need to implement language-specific preprocessing as a part of our tokenizer. We can view it as a sequence of operations: 1. **Normalization:** includes all preprocessing operations on raw text data. This was the step at which we have made the most changes, because removing certain details can either change the meaning of the text or leave it the same, depending on the language. For example, the standard ALBERT normalizer removes the accents, while for the Bengali language, we need to keep them, because they contain information about the vowels. As a result, we use the following operations: NMT normalization, NFKC normalization, removal of multiple spaces, homogenization of recurring Unicode characters in the Bengali language, and lowercasing. 2. **Pretokenization** describes rules for splitting the input (for example, by whitespace) to enforce specific token boundaries. As in the original work, we have chosen to keep the whitespace out of the tokens. Therefore, to distinguish the words from each other and not to have multiple single-space tokens, each token corresponding to the beginning of a word starts with a special character “\_” (U+2581). In addition, we isolated all punctuation and digits from other characters to condense our vocabulary. 3. **Tokenizer modeling:** It is at this level that the text is mapped into a sequence of elements of a vocabulary. There are several algorithms for this, such as Byte-Pair Encoding (BPE) or Unigram, and most of them need to build the vocabulary from a text corpus. Following the setup of ALBERT, we used the **Unigram Language Model** approach, training a vocabulary of 32k tokens on the deduplicated Bengali part of the OSCAR dataset. 4. **Post-processing:** After tokenization, we might want to add several special tokens required by the architecture, such as starting the sequence with a special token `[CLS]` or separating two segments with a special token `[SEP]`. Since our main architecture is the same as the original ALBERT, we keep the same post-processing: specifically, we add a `[CLS]` token at the beginning of each example and a `[SEP]` token both between two segments and at the end. <div style="line-height:105%;border:1px solid #F5F5F5;background-color:#F5F5F5;color: black"> <p align="center"> 💡 Read more information about each component in <a href="https://huggingface.co/docs/tokenizers/python/latest/components.html#components">Tokenizers doc</a> </p> </div> You can reuse our tokenizer by running the following code: ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("neuropark/sahajBERT") ``` ### Dataset The last thing we need to cover is the training dataset. As you probably know, the great strength of pretrained models like BERT or ALBERT is that you don't need an annotated dataset, but just a lot of texts. 
To train sahajBERT, we used the [Bengali Wikipedia dump from 03/20/2021](https://huggingface.co/datasets/lhoestq/wikipedia_bn) and the Bengali subset of [OSCAR](https://huggingface.co/datasets/oscar) (600MB + 6GB of text). These two datasets can easily be downloaded from the HF Hub. However, loading an entire dataset requires time and storage — two things that our peers do not necessarily have. To make the most of the resources provided by the participants, we have implemented **dataset streaming**, which allows them to train the model nearly as soon as they join the network. Specifically, the examples in the dataset are downloaded and transformed in parallel to the training. We can also shuffle the dataset so that our peers have little chance to process the same examples at the same time. As the dataset is not downloaded and preprocessed in advance, the transformations needed to go from plain text to a training example (shown in the figure below) are done on the fly. ![Create dataset](assets/24_sahajBERT/create_dataset.png) <div style="line-height:105%;font-size:80%"> <p align="center"> From a raw sample to a training sample </p> </div> The dataset streaming mode is available from version v1.9 of the 🤗 datasets library, so you can use it right now as follows: ```python from datasets import load_dataset oscar_dataset = load_dataset("oscar", name="unshuffled_deduplicated_bn", streaming=True) ``` <div style="line-height:105%;border:1px solid #F5F5F5;background-color:#F5F5F5;color: black"> <p align="center"> 💡 Learn more about loading datasets in streaming mode in the <a href="https://huggingface.co/docs/datasets/dataset_streaming.html">documentation</a> </p> </div> ### Collaborative event The sahajBERT collaborative training event took place from May 12 to May 21. The event brought together 40 participants, 30 of whom were Bengali-speaking volunteers, and 10 were volunteers from one of the authors' organizations. These 40 volunteers joined the [Neuropark](https://neuropark.co/) Discord channel to receive all information regarding the event and participate in discussions. To join the experiment, volunteers were asked to: 1. Send their username to the moderators to be allowlisted; 2. Open the provided notebook locally, on Google Colaboratory, or on Kaggle; 3. Run one code cell and fill in their Hugging Face credentials when requested; 4. Watch the training loss decrease on the shared dashboards! For security purposes, we set up an authorization system so that only members of the Neuropark community could train the model. Sparing you the technical details, our authorization protocol allows us to guarantee that every participant is in the allowlist and to acknowledge the individual contribution of each peer. In the following figure, you can see the activity of each volunteer. Over the experiment, the volunteers logged in 600 different sessions. Participants regularly launched multiple runs in parallel, and many of them spread out the runs they launched over time. The runs of individual participants lasted 4 hours on average, and the maximum length was 21 hours. You can read more about the participation statistics in the paper. <iframe width="100%" height="670" frameborder="0" src="https://observablehq.com/embed/@huggingface/sahajbert-bubbles-chart-optimized?cells=c_noaws%2Ct_noaws%2Cviewof+currentDate"></iframe> <div style="line-height:105%;font-size:80%"> <p align="center"> Chart showing participants of the <a href="https://huggingface.co/neuropark/sahajBERT"> sahajBERT</a> experiment. 
Circle radius is relative to the total number of processed batches; the circle is greyed out if the participant is not active. Every purple square represents an active device, and a darker color corresponds to higher performance
</p>
</div>

Along with the resources provided by participants, we also used 16 preemptible (cheap but frequently interrupted) single-GPU T4 cloud instances to ensure the stability of the run. The cumulative runtime for the experiment was 234 days, and in the figure below you can see parts of the loss curve that each peer contributed to!

<p align="center">
<iframe width="80%" height="950" frameborder="0" src="https://observablehq.com/embed/@huggingface/explore-collaborative-training-data-optimized?cells=sessions%2Cviewof+participant%2ClossByParticipant"></iframe>
</p>

The final model was uploaded to the Model Hub, so you can download and play with it if you want to: [https://hf.co/neuropark/sahajBERT](https://huggingface.co/neuropark/sahajBERT)

### Evaluation

To evaluate the performance of sahajBERT, we finetuned it on two downstream tasks in Bengali:

- Named entity recognition (NER) on the Bengali split of [WikiANN](https://aclanthology.org/P17-1178/). The goal of this task is to classify each token in the input text into one of the following categories: person, organization, location, or none of them.
- News Category Classification (NCC) on the Soham articles dataset from [IndicGLUE](https://aclanthology.org/2020.findings-emnlp.445/). The goal of this task is to predict the category to which the input text belongs.

We evaluated it during training on the NER task to check that everything was going well; as you can see in the following plot, this was indeed the case!

<iframe width="100%" height="476" frameborder="0" src="https://observablehq.com/embed/@huggingface/bengali-exp-eval?cells=evalPlot"></iframe>

<div style="line-height:105%;font-size:80%">
<p align="center">
Evaluation metrics of fine-tuned models on the NER task from different checkpoints of pre-trained models.
</p>
</div>

At the end of training, we compared sahajBERT with three other pretrained language models: [XLM-R Large](https://arxiv.org/abs/1911.02116), [IndicBert](https://aclanthology.org/2020.findings-emnlp.445/), and [bnRoBERTa](https://huggingface.co/neuralspace-reverie/indic-transformers-bn-roberta). In the table below, you can see that our model has results comparable to the best Bengali language models available on the HF Hub, even though our model has only ~18M trained parameters, while, for instance, XLM-R (a strong multilingual baseline) has ~559M parameters and was trained on several hundred V100 GPUs.

| Model | NER F1 (mean ± std) | NCC Accuracy (mean ± std) |
|:-------------:|:-------------:|:-------------:|
| [sahajBERT](https://huggingface.co/neuropark/sahajBERT) | 95.45 ± 0.53 | 91.97 ± 0.47 |
| [XLM-R-large](https://huggingface.co/xlm-roberta-large) | 96.48 ± 0.22 | 90.05 ± 0.38 |
| [IndicBert](https://huggingface.co/ai4bharat/indic-bert) | 92.52 ± 0.45 | 74.46 ± 1.91 |
| [bnRoBERTa](https://huggingface.co/neuralspace-reverie/indic-transformers-bn-roberta) | 82.32 ± 0.67 | 80.94 ± 0.45 |

These models are available on the Hub as well. You can test them directly by playing with the Hosted Inference API widget on their Model Cards or by loading them directly in your Python code.
#### sahajBERT-NER

Model card: [https://hf.co/neuropark/sahajBERT-NER](https://hf.co/neuropark/sahajBERT-NER)

```python
from transformers import (
    AlbertForTokenClassification,
    TokenClassificationPipeline,
    PreTrainedTokenizerFast,
)

# Initialize tokenizer
tokenizer = PreTrainedTokenizerFast.from_pretrained("neuropark/sahajBERT-NER")

# Initialize model
model = AlbertForTokenClassification.from_pretrained("neuropark/sahajBERT-NER")

# Initialize pipeline
pipeline = TokenClassificationPipeline(tokenizer=tokenizer, model=model)

raw_text = "এই ইউনিয়নে ৩ টি মৌজা ও ১০ টি গ্রাম আছে ।" # Change me
output = pipeline(raw_text)
```

#### sahajBERT-NCC

Model card: [https://hf.co/neuropark/sahajBERT-NCC](https://hf.co/neuropark/sahajBERT-NCC)

```python
from transformers import (
    AlbertForSequenceClassification,
    TextClassificationPipeline,
    PreTrainedTokenizerFast,
)

# Initialize tokenizer
tokenizer = PreTrainedTokenizerFast.from_pretrained("neuropark/sahajBERT-NCC")

# Initialize model
model = AlbertForSequenceClassification.from_pretrained("neuropark/sahajBERT-NCC")

# Initialize pipeline
pipeline = TextClassificationPipeline(tokenizer=tokenizer, model=model)

raw_text = "এই ইউনিয়নে ৩ টি মৌজা ও ১০ টি গ্রাম আছে ।" # Change me
output = pipeline(raw_text)
```

## Conclusion

In this blog post, we have discussed the method that can enable collaborative pretraining of neural networks, with sahajBERT as the first truly successful example of applying it to a real-world problem.

What does this all mean for the broader ML community? First, it is now possible to run large-scale distributed pretraining with your friends, and we hope to see a lot of cool new models that were previously less feasible to obtain. Also, our result might be important for multilingual NLP, since now the community for any language can train their own models without the need for significant computational resources concentrated in one place.

## Acknowledgements

The DeDLOC paper and sahajBERT training experiment were created by Michael Diskin, Alexey Bukhtiyarov, Max Ryabinin, Lucile Saulnier, Quentin Lhoest, Anton Sinitsin, Dmitry Popov, Dmitry Pyrkin, Maxim Kashirin, Alexander Borzunov, Albert Villanova del Moral, Denis Mazur, Ilia Kobelev, Yacine Jernite, Thomas Wolf, and Gennady Pekhimenko. This project is the result of a collaboration between [Hugging Face](https://huggingface.co/), [Yandex Research](https://research.yandex.com/), [HSE University](https://www.hse.ru/en/), [MIPT](https://mipt.ru/english/), [University of Toronto](https://www.utoronto.ca/) and [Vector Institute](https://vectorinstitute.ai/).

In addition, we would like to thank Stas Bekman, Dmitry Abulkhanov, Roman Zhytar, Alexander Ploshkin, Vsevolod Plokhotnyuk and Roman Kail for their invaluable help with building the training infrastructure. Also, we thank Abhishek Thakur for helping with downstream evaluation, and Tanmoy Sarkar and Omar Sanseviero, who helped us organize the collaborative experiment and gave regular status updates to the participants over the course of the training run.

Below, you can see all participants of the collaborative experiment:

<iframe width="100%" height="380" frameborder="0" src="https://observablehq.com/embed/89470ece1dda817b?cells=humanParticipants"></iframe>

## References

"Distributed Deep Learning in Open Collaborations", [ArXiv](https://arxiv.org/abs/2106.10207)

Code for [sahajBERT experiments](https://github.com/yandex-research/DeDLOC/tree/main/sahajbert) in the DeDLOC repository.
Introducing Optimum: The Optimization Toolkit for Transformers at Scale
mfuntowicz
September 14, 2021
hardware-partners-program
guide
https://huggingface.co/blog/hardware-partners-program
# Introducing 🤗 Optimum: The Optimization Toolkit for Transformers at Scale

This post is the first step of a journey for Hugging Face to democratize state-of-the-art **Machine Learning production performance**. To get there, we will work hand in hand with our Hardware Partners, as we have with Intel below. Join us in this journey, and follow [Optimum](https://github.com/huggingface/optimum), our new open source library!

## Why 🤗 Optimum?

### 🤯 Scaling Transformers is hard

What do Tesla, Google, Microsoft and Facebook all have in common? Well, many things, but one of them is that they all run billions of Transformer model predictions every day. Transformers for AutoPilot to drive your Tesla (lucky you!), for Gmail to complete your sentences, for Facebook to translate your posts on the fly, for Bing to answer your natural language queries.

[Transformers](https://github.com/huggingface/transformers) have brought a step-change improvement in the accuracy of Machine Learning models, have conquered NLP and are now expanding to other modalities, starting with [Speech](https://huggingface.co/models?pipeline_tag=automatic-speech-recognition&sort=downloads) and [Vision](https://huggingface.co/models?pipeline_tag=image-classification&sort=downloads). But taking these massive models into production and making them run fast at scale is a huge challenge for any Machine Learning Engineering team.

What if you don’t have hundreds of highly skilled Machine Learning Engineers on payroll like the above companies? Through Optimum, our new open source library, we aim to build the definitive toolkit for Transformers production performance, enabling maximum efficiency to train and run models on specific hardware.

### 🏭 Optimum puts Transformers to work

To get optimal performance when training and serving models, the model acceleration techniques need to be specifically compatible with the targeted hardware. Each hardware platform offers specific software tooling, [features and knobs that can have a huge impact on performance](https://huggingface.co/blog/bert-cpu-scaling-part-1). Similarly, to take advantage of advanced model acceleration techniques like sparsity and quantization, optimized kernels need to be compatible with the operators on silicon, and specific to the neural network graph derived from the model architecture. Diving into this 3-dimensional compatibility matrix and how to use model acceleration libraries is daunting work, which few Machine Learning Engineers have experience with.

[Optimum](https://github.com/huggingface/optimum) aims to make this work easy, providing performance optimization tools targeting efficient AI hardware, built in collaboration with our Hardware Partners, and turning Machine Learning Engineers into ML Optimization wizards.

With the [Transformers](https://github.com/huggingface/transformers) library, we made it easy for researchers and engineers to use state-of-the-art models, abstracting away the complexity of frameworks, architectures and pipelines. With the [Optimum](https://github.com/huggingface/optimum) library, we are making it easy for engineers to leverage all the available hardware features at their disposal, abstracting away the complexity of model acceleration on hardware platforms.
## 🤗 Optimum in practice: how to quantize a model for Intel Xeon CPU

### 🤔 Why quantization is important but tricky to get right

Pre-trained language models such as BERT have achieved state-of-the-art results on a wide range of natural language processing tasks, and other Transformer-based models such as ViT and Speech2Text have achieved state-of-the-art results on computer vision and speech tasks, respectively: transformers are everywhere in the Machine Learning world and are here to stay.

However, putting transformer-based models into production can be tricky and expensive, as they need a lot of compute power to work. To solve this, many techniques exist, the most popular being quantization. Unfortunately, in most cases quantizing a model requires a lot of work, for many reasons:

1. The model needs to be edited: some ops need to be replaced by their quantized counterparts, new ops need to be inserted (quantization and dequantization nodes), and others need to be adapted to the fact that weights and activations will be quantized.

    This part can be very time-consuming because frameworks such as PyTorch work in eager mode, meaning that the changes mentioned above need to be added to the model implementation itself. PyTorch now provides a tool called `torch.fx` that allows you to trace and transform your model without having to actually change the model implementation, but it is tricky to use when tracing is not supported for your model out of the box.

    On top of the actual editing, it is also necessary to find which parts of the model need to be edited, which ops have an available quantized kernel counterpart and which ops don't, and so on.

2. Once the model has been edited, there are many parameters to play with to find the best quantization settings:
    - Which kind of observers should I use for range calibration?
    - Which quantization scheme should I use?
    - Which quantization-related data types (int8, uint8, int16) are supported on my target device?

3. The trade-off between quantization and an acceptable accuracy loss needs to be balanced.

4. The quantized model needs to be exported for the target device.

Although PyTorch and TensorFlow have made great progress in making things easy for quantization, the complexity of transformer-based models makes it hard to use the provided tools out of the box and get something working without putting in a ton of effort.

### 💡 How Intel is solving quantization and more with Neural Compressor

Intel® [Neural Compressor](https://github.com/intel/neural-compressor) (formerly referred to as Low Precision Optimization Tool or LPOT) is an open-source Python library designed to help users deploy low-precision inference solutions. It applies low-precision recipes to deep-learning models to achieve optimal product objectives, such as inference performance and memory usage, with expected performance criteria. Neural Compressor supports post-training quantization, quantization-aware training and dynamic quantization. In order to specify the quantization approach, objective and performance criteria, the user must provide a YAML configuration file specifying the tuning parameters. The configuration file can either be hosted on the Hugging Face Model Hub or be given through a local directory path.
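As a point of reference for what the simplest flavor of this looks like outside of Optimum, here is a minimal sketch of post-training dynamic quantization with plain PyTorch. This is a generic example rather than the Optimum/Neural Compressor workflow pictured in the next section, and `bert-base-uncased` is used only as a convenient stand-in model.

```python
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

# dynamically quantize the linear layers to int8; activations are quantized on the fly
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# the quantized model is a drop-in replacement for CPU inference
fake_token_ids = torch.randint(0, 30000, (1, 32))  # placeholder input ids, batch of 1
with torch.no_grad():
    print(quantized_model(fake_token_ids).logits.shape)  # -> torch.Size([1, 2])
```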
### 🔥 How to easily quantize Transformers for Intel Xeon CPUs with Optimum ![Automatic quantization code snippet](assets/25_hardware_partners_program/carbon_inc_quantizer.png) ## Follow 🤗 Optimum: a journey to democratize ML production performance ### ⚡️State of the Art Hardware Optimum will focus on achieving optimal production performance on dedicated hardware, where software and hardware acceleration techniques can be applied for maximum efficiency. We will work hand in hand with our Hardware Partners to enable, test and maintain acceleration, and deliver it in an easy and accessible way through Optimum, as we did with Intel and Neural Compressor. We will soon announce new Hardware Partners who have joined us on our journey toward Machine Learning efficiency. ### 🔮 State-of-the-Art Models The collaboration with our Hardware Partners will yield hardware-specific optimized model configurations and artifacts, which we will make available to the AI community via the Hugging Face [Model Hub](https://huggingface.co/models). We hope that Optimum and hardware-optimized models will accelerate the adoption of efficiency in production workloads, which represent most of the aggregate energy spent on Machine Learning. And most of all, we hope that Optimum will accelerate the adoption of Transformers at scale, not just for the biggest tech companies, but for all of us. ### 🌟 A journey of collaboration: join us, follow our progress Every journey starts with a first step, and ours was the public release of Optimum. Join us and make your first step by [giving the library a Star](https://github.com/huggingface/optimum), so you can follow along as we introduce new supported hardware, acceleration techniques and optimized models. If you would like to see new hardware and features be supported in Optimum, or you are interested in joining us to work at the intersection of software and hardware, please reach out to us at [email protected]
Hugging Face and Graphcore partner for IPU-optimized Transformers
sallydoherty
September 14, 2021
graphcore
graphcore, partnerships
https://huggingface.co/blog/graphcore
# Hugging Face and Graphcore partner for IPU-optimized Transformers

> ##### Speaking at the 2021 AI Hardware Summit, Hugging Face announced the launch of their new Hardware Partner Program, including device-optimized models and software integrations. Here, Graphcore, creators of the Intelligence Processing Unit (IPU) and a founding member of the program, explain how their partnership with Hugging Face will allow developers to easily accelerate their use of state-of-the-art Transformer models.

Graphcore and Hugging Face are two companies with a common goal: to make it easier for innovators to harness the power of machine intelligence.

Hugging Face’s Hardware Partner Program will allow developers using Graphcore systems to deploy state-of-the-art Transformer models, optimised for our Intelligence Processing Unit (IPU), at production scale, with minimum coding complexity.

## What is an Intelligence Processing Unit?

IPUs are the processors that power Graphcore’s IPU-POD datacenter compute systems. This new type of processor is designed to support the very specific computational requirements of AI and machine learning. Characteristics such as fine-grained parallelism, low-precision arithmetic, and the ability to handle sparsity have been built into our silicon.

Instead of adopting a SIMD/SIMT architecture like GPUs, Graphcore’s IPU uses a massively parallel, MIMD architecture, with ultra-high bandwidth memory placed adjacent to the processor cores, right on the silicon die.

This design delivers high performance and new levels of efficiency, whether running today’s most popular models, such as BERT and EfficientNet, or exploring next-generation AI applications.

Software plays a vital role in unlocking the IPU’s capabilities. Our Poplar SDK has been co-designed with the processor since Graphcore’s inception. Today it fully integrates with standard machine learning frameworks, including PyTorch and TensorFlow, as well as orchestration and deployment tools such as Docker and Kubernetes.

Making Poplar compatible with these widely used, third-party systems allows developers to easily port their models from their other compute platforms and start taking advantage of the IPU’s advanced AI capabilities.

## Optimising Transformers for Production

Transformers have completely transformed (pun intended) the field of AI. Models such as BERT are widely used by Graphcore customers in a huge array of applications, across NLP and beyond. These multi-talented models can perform feature extraction, text generation, sentiment analysis, translation and many more functions.

Already, Hugging Face plays host to hundreds of Transformers, from the French-language CamemBERT to ViT, which applies lessons learned in NLP to computer vision. The Transformers library is downloaded an average of 2 million times every month and demand is growing.

With a user base of more than 50,000 developers, Hugging Face has seen the fastest-ever adoption of an open-source project.

Now, with its Hardware Partner Program, Hugging Face is connecting the ultimate Transformer toolset with today's most advanced AI hardware.

Using Optimum, a new open-source library and toolkit, developers will be able to access hardware-optimized models certified by Hugging Face.

These are being developed in a collaboration between Graphcore and Hugging Face, with the first IPU-optimized models appearing on Optimum later this year. Ultimately, these will cover a wide range of applications, from vision and speech to translation and text generation.
Hugging Face CEO Clément Delangue said: “Developers all want access to the latest and greatest hardware – like the Graphcore IPU, but there’s always that question of whether they’ll have to learn new code or processes. With Optimum and the Hugging Face Hardware Program, that’s just not an issue. It’s essentially plug-and-play.” ## SOTA Models meet SOTA Hardware Prior to the announcement of the Hugging Face Partnership, we had demonstrated the power of the IPU to accelerate state-of-the-art Transformer models with a special Graphcore-optimised implementation of Hugging Face BERT using PyTorch. Full details of this example can be found in the Graphcore blog [BERT-Large training on the IPU explained](https://www.graphcore.ai/posts/bert-large-training-on-the-ipu-explained). The dramatic benchmark results for BERT running on a Graphcore system, compared with a comparable GPU-based system, are surely a tantalising prospect for anyone currently running the popular NLP model on something other than the IPU. ![BERT-Large benchmark on Graphcore IPU](assets/26_graphcore-ipu/graphcore-ipu-bert-large.png) This type of acceleration can be game-changing for machine learning researchers and engineers, winning them back valuable hours of training time and allowing them many more iterations when developing new models. Now Graphcore users will be able to unlock such performance advantages, through the Hugging Face platform, with its elegant simplicity and superlative range of models. Together, Hugging Face and Graphcore are helping even more people to access the power of Transformers and accelerate the AI revolution. *Visit the [Hugging Face Hardware Partner portal](https://huggingface.co/hardware) to learn more about Graphcore IPU systems and how to gain access.*
Summer at Hugging Face ☀️
huggingface
September 24, 2021
summer-at-huggingface
community
https://huggingface.co/blog/summer-at-huggingface
# Summer At Hugging Face 😎 Summer is now officially over and these last few months have been quite busy at Hugging Face. From new features in the Hub to research and Open Source development, our team has been working hard to empower the community through open and collaborative technology. In this blog post you'll catch up on everything that happened at Hugging Face in June, July and August! ![Summer At Hugging Face](assets/27_summer_at_huggingface/summer_intro.gif) This post covers a wide range of areas our team has been working on, so don't hesitate to skip to the parts that interest you the most 🤗 1. [New Features](#new-features) 2. [Community](#community) 3. [Open Source](#open-source) 4. [Solutions](#solutions) 5. [Research](#research) ## New Features In the last few months, the Hub went from 10,000 public model repositories to over 16,000 models! Kudos to our community for sharing so many amazing models with the world. And beyond the numbers, we have a ton of cool new features to share with you! ### Spaces Beta ([hf.co/spaces](/spaces)) Spaces is a simple and free solution to host Machine Learning demo applications directly on your user profile or your organization [hf.co](http://hf.co/) profile. We support two awesome SDKs that let you build cool apps easily in Python: [Gradio](https://gradio.app/) and [Streamlit](https://streamlit.io/). In a matter of minutes you can deploy an app and share it with the community! 🚀 Spaces lets you [set up secrets](/docs/hub/spaces-overview#managing-secrets), permits [custom requirements](/docs/hub/spaces-dependencies), and can even be managed [directly from GitHub repos](/docs/hub/spaces-github-actions). You can sign up for the beta at [hf.co/spaces](/spaces). Here are some of our favorites! - Create recipes with the help of [Chef Transformer](/spaces/flax-community/chef-transformer) - Transcribe speech to text with [HuBERT](https://huggingface.co/spaces/osanseviero/HUBERT) - Do segmentation in a video with the [DINO model](/spaces/nateraw/dino-clips) - Use [Paint Transformer](/spaces/akhaliq/PaintTransformer) to make paintings from a given picture - Or you can just explore any of the over [100 existing Spaces](/spaces)! ![Landing page of Spaces](assets/27_summer_at_huggingface/spaces_landing.png) ### Share Some Love You can now like any model, dataset, or Space on [http://huggingface.co](http://huggingface.co/), meaning you can share some love with the community ❤️. You can also keep an eye on who's liking what by clicking on the likes box 👀. Go ahead and like your own repos, we're not judging 😉. ![Animation giving a like](assets/27_summer_at_huggingface/likes_animation.gif) ### TensorBoard Integration In late June, we launched a TensorBoard integration for all our models. If there are TensorBoard traces in the repo, an automatic, free TensorBoard instance is launched for you. This works with both public and private repositories and for any library that has TensorBoard traces! ![Image of a TensorBoard Instance](assets/27_summer_at_huggingface/tensorboard.png) ### Metrics In July, we added the ability to list evaluation metrics in model repos by adding them to their model card📈. If you add an evaluation metric under the `model-index` section of your model card, it will be displayed proudly in your model repo. ![Evaluation Metrics](assets/27_summer_at_huggingface/metrics.png) If that wasn't enough, these metrics will be automatically linked to the corresponding [Papers With Code](https://paperswithcode.com/) leaderboard. 
That means as soon as you share your model on the Hub, you can compare your results side-by-side with others in the community. 💪 Check out [this repo](https://huggingface.co/nateraw/vit-base-beans-demo) as an example, paying close attention to `model-index` section of its [model card](https://huggingface.co/nateraw/vit-base-beans-demo/blob/main/README.md#L12-L25) to see how you can do this yourself and find the metrics in Papers with Code [automatically](https://paperswithcode.com/sota/image-classification-on-beans). ### New Widgets The Hub has 18 widgets that allow users to try out models directly in the browser. With our latest integrations to Sentence Transformers, we also introduced two new widgets: feature extraction and sentence similarity. The latest **audio classification** widget enables many cool use cases: language identification, [street sound detection](https://huggingface.co/speechbrain/urbansound8k_ecapa) 🚨, [command recognition](https://huggingface.co/speechbrain/google_speech_command_xvector), [speaker identification](https://huggingface.co/speechbrain/spkrec-xvect-voxceleb), and more! You can try this out with `transformers` and `speechbrain` models today! 🔊 (Beware, when you try some of the models, you might need to bark out loud) You can try our early demo of [structured data classification](https://huggingface.co/julien-c/wine-quality) with Scikit-learn. And finally, we also introduced new widgets for image-related models: **text to image**, **image classification**, and **object detection**. Try image classification with Google's ViT model [here](https://huggingface.co/google/vit-base-patch16-224) and object detection with Facebook AI's DETR model [here](https://huggingface.co/facebook/detr-resnet-50)! ![Object Detection Widget](assets/27_summer_at_huggingface/object-detection.png) ### More Features That's not everything that has happened in the Hub. We've introduced new and improved [documentation](https://huggingface.co/docs/hub/main) of the Hub. We also introduced two widely requested features: users can now transfer/rename repositories and directly upload new files to the Hub. ![Button to upload a file](assets/27_summer_at_huggingface/upload_file.png) ## Community ### Hugging Face Course In June, we launched the first part of our [free online course](https://huggingface.co/course/chapter1)! The course teaches you everything about the 🤗 Ecosystem: Transformers, Tokenizers, Datasets, Accelerate, and the Hub. You can also find links to the course lessons in the official documentation of our libraries. The live sessions for all chapters can be found on our [YouTube channel](https://www.youtube.com/playlist?list=PLo2EIpI_JMQuQ8StH9RwKXwJVqLTDxwwy). Stay tuned for the next part of the course which we'll be launching later this year! ![Course topics](assets/27_summer_at_huggingface/course.png) ### JAX/FLAX Sprint In July we hosted our biggest [community event](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104) ever with almost 800 participants! In this event co-organized with the JAX/Flax and Google Cloud teams, compute-intensive NLP, Computer Vision, and Speech projects were made accessible to a wider audience of engineers and researchers by providing free TPUv3s. The participants created over 170 models, 22 datasets, and 38 Spaces demos 🤯. You can explore all the amazing demos and projects [here](https://huggingface.co/flax-community). 
There were talks around JAX/Flax, Transformers, large-scale language modeling, and more! You can find all recordings [here](https://github.com/huggingface/transformers/tree/master/examples/research_projects/jax-projects#talks). We're really excited to share the work of the 3 winning teams! 1. [Dall-e mini](https://huggingface.co/spaces/flax-community/dalle-mini). DALL·E mini is a model that generates images from any prompt you give! DALL·E mini is 27 times smaller than the original DALL·E and still has impressive results. ![Image generated of an avocado in space](assets/27_summer_at_huggingface/dalle.png) 2. [DietNerf](https://huggingface.co/spaces/flax-community/DietNerf-Demo). DietNerf is a 3D neural view synthesis model designed for few-shot learning of 3D scene reconstruction using 2D views. This is the first Open Source implementation of the "[Putting Nerf on a Diet](https://arxiv.org/abs/2104.00677)" paper. ![Generated 3D object with NeRF](assets/27_summer_at_huggingface/diet_nerf.png) 3. [CLIP RSIC](https://huggingface.co/spaces/sujitpal/clip-rsicd-demo). CLIP RSIC is a CLIP model fine-tuned on remote sensing image data to enable zero-shot satellite image classification and captioning. This project demonstrates how effective fine-tuned CLIP models can be for specialized domains. ![CLIP search](assets/27_summer_at_huggingface/clip.png) Apart from these very cool projects, we're excited about how these community events enable training large and multi-modal models for multiple languages. For example, we saw the first ever Open Source big LMs for some low-resource languages like [Swahili](https://huggingface.co/models?language=sw), [Polish](https://huggingface.co/flax-community/papuGaPT2) and [Marathi](https://huggingface.co/spaces/flax-community/roberta-base-mr). ## Bonus On top of everything we just shared, our team has been doing lots of other things. Here are just some of them: - 📖 This 3-part [video series](https://www.youtube.com/watch?time_continue=6&v=qmN1fJ7Fdmo&feature=emb_title&ab_channel=NilsR) shows the theory on how to train state-of-the-art sentence embedding models. - We presented at PyTorch Community Voices and participated in a QA ([video](https://www.youtube.com/watch?v=wE3bk7JaH4E&ab_channel=PyTorch)). - Hugging Face has collaborated with [NLP in Spanish](https://twitter.com/NLP_en_ES) and [SpainAI](https://twitter.com/Spain_AI_) in a Spanish [course](https://www.youtube.com/playlist?list=PLBILcz47fTtPspj9QDm2E0oHLe1p67tMz) that teaches concepts and state-of-the art architectures as well as their applications through use cases. - We presented at [MLOps World Demo Days](https://www.youtube.com/watch?v=lWahHp5vpVg). ## Open Source ### New in Transformers Summer has been an exciting time for 🤗 Transformers! The library reached 50,000 stars, 30 million total downloads, and almost 1000 contributors! 🤩 So what's new? JAX/Flax is now the 3rd supported framework with over [5000](https://huggingface.co/models?library=jax&sort=downloads) models in the Hub! You can find actively maintained [examples](https://github.com/huggingface/transformers/tree/master/examples/flax) for different tasks such as text classification. We're also working hard on improving our TensorFlow support: all our [examples](https://github.com/huggingface/transformers/tree/master/examples/tensorflow) have been reworked to be more robust, TensorFlow idiomatic, and clearer. This includes examples such as summarization, translation, and named entity recognition. 
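To illustrate the multi-framework support mentioned above, the same Hub checkpoint can generally be loaded in PyTorch, TensorFlow, and now JAX/Flax with the corresponding auto classes. Here is a minimal sketch; the checkpoint name is just an example, and each line assumes the matching framework is installed:

```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel, FlaxAutoModel

checkpoint = "bert-base-cased"  # example checkpoint; any compatible Hub model works
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

pt_model = AutoModel.from_pretrained(checkpoint)        # PyTorch
tf_model = TFAutoModel.from_pretrained(checkpoint)      # TensorFlow
flax_model = FlaxAutoModel.from_pretrained(checkpoint)  # JAX/Flax
# If a checkpoint only ships PyTorch weights, passing from_pt=True converts them on the fly.
```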
You can now easily publish your model to the Hub, including automatically authored model cards, evaluation metrics, and TensorBoard instances. There is also increased support for exporting models to ONNX with the new [`transformers.onnx` module](https://huggingface.co/transformers/serialization.html?highlight=onnx). ```bash python -m transformers.onnx --model=bert-base-cased onnx/bert-base-cased/ ``` The last 4 releases introduced many new cool models! - [DETR](https://huggingface.co/transformers/model_doc/detr.html) can do fast end-to-end object detection and image segmentation. Check out some of our community [tutorials](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/DETR)! ![DETR image](assets/27_summer_at_huggingface/detr.png) - [ByT5](https://huggingface.co/transformers/model_doc/byt5.html) is the first tokenizer-free model in the Hub! You can find all available checkpoints [here](https://huggingface.co/models?search=byt5). - [CANINE](https://huggingface.co/transformers/model_doc/canine.html) is another tokenizer-free encoder-only model by Google AI, operating directly at the character level. You can find all (multilingual) checkpoints [here](https://huggingface.co/models?search=canine). - [HuBERT](https://huggingface.co/transformers/model_doc/hubert.html?highlight=hubert) shows exciting results for downstream audio tasks such as [command classification](https://huggingface.co/superb/hubert-base-superb-ks) and [emotion recognition](https://huggingface.co/superb/hubert-base-superb-er). Check the models [here](https://huggingface.co/models?filter=hubert). - [LayoutLMv2](https://huggingface.co/transformers/model_doc/layoutlmv2.html) and [LayoutXLM](https://huggingface.co/transformers/model_doc/layoutxlm.html?highlight=layoutxlm) are two incredible models capable of parsing document images (like PDFs) by incorporating text, layout, and visual information. We built a [Space demo](https://huggingface.co/spaces/nielsr/LayoutLMv2-FUNSD) so you can directly try it out! Demo notebooks can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/LayoutLMv2). ![LayoutLM object detection](assets/27_summer_at_huggingface/layout.png) - [BEiT](https://huggingface.co/transformers/model_doc/beit.html) by Microsoft Research makes self-supervised Vision Transformers outperform supervised ones, using a clever pre-training objective inspired by BERT. - [RemBERT](https://huggingface.co/transformers/model_doc/rembert.html?), a large multilingual Transformer that outperforms XLM-R (and mT5 with a similar number of parameters) in zero-shot transfer. - [Splinter](https://huggingface.co/transformers/model_doc/splinter.html) which can be used for few-shot question answering. Given only 128 examples, Splinter is able to reach ~73% F1 on SQuAD, outperforming MLM-based models by 24 points! The Hub is now integrated into `transformers`, with the ability to push to the Hub configuration, model, and tokenizer files without leaving the Python runtime! The `Trainer` can now push directly to the Hub every time a checkpoint is saved: ![Saving a checkpoint](assets/27_summer_at_huggingface/save_checkpoint.png) ### New in Datasets You can find 1400 public datasets in [https://huggingface.co/datasets](https://huggingface.co/datasets) thanks to the awesome contributions from all our community. 
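To give a feel for how little code it takes to pull one of these public datasets into your own project, here is a minimal sketch (the dataset name is just an example):

```python
from datasets import load_dataset

# Load a public dataset from the Hub and peek at the first training example
dataset = load_dataset("imdb")
print(dataset)
print(dataset["train"][0])
```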
💯 The support for `datasets` keeps growing: it can be used in JAX, process parquet files, use remote files, and has wider support for other domains such as Automatic Speech Recognition and Image Classification. Users can also directly host and share their datasets to the community simply by uploading their data files in a repository on the Dataset Hub. ![Untitled](assets/27_summer_at_huggingface/streaming.png) What are the new datasets highlights? Microsoft CodeXGlue [datasets](https://huggingface.co/datasets?search=code_x_glue) for multiple coding tasks (code completion, generation, search, etc), huge datasets such as [C4](https://huggingface.co/datasets/c4) and [MC4](https://huggingface.co/datasets/mc4), and many more such as [RussianSuperGLUE](https://huggingface.co/datasets/russian_super_glue) and [DISFL-QA](https://huggingface.co/datasets/disfl_qa). ### Welcoming new Libraries to the Hub Apart from having deep integration with `transformers`-based models, the Hub is also building great partnerships with Open Source ML libraries to provide free model hosting and versioning. We've been achieving this with our [huggingface_hub](https://github.com/huggingface/huggingface_hub) Open-Source library as well as new Hub [documentation](https://huggingface.co/docs/hub/main). All spaCy canonical pipelines can now be found in the official spaCy [organization](https://huggingface.co/spacy), and any user can share their pipelines with a single command `python -m spacy huggingface-hub`. To read more about it, head to [https://huggingface.co/blog/spacy](https://huggingface.co/blog/spacy). You can try all canonical spaCy models directly in the Hub in the demo [Space](https://huggingface.co/spaces/spacy/pipeline-visualizer)! ![spaCy NER example](assets/27_summer_at_huggingface/spacy_ner.jpeg) Another exciting integration is Sentence Transformers. You can read more about it in the [blog announcement](https://huggingface.co/blog/sentence-transformers-in-the-hub): you can find over 200 [models](https://huggingface.co/models?library=sentence-transformers) in the Hub, easily share your models with the rest of the community and reuse models from the community. But that's not all! You can now find over 100 Adapter Transformers in the Hub and try out Speechbrain models with widgets directly in the browser for different tasks such as audio classification. If you're interested in our collaborations to integrate new ML libraries to the Hub, you can read more about them [here](https://huggingface.co/docs/hub/libraries). ![Filter of all libraries](assets/27_summer_at_huggingface/filters.png) ## Solutions ### **Coming soon: Infinity** Transformers latency down to 1ms? 🤯🤯🤯 We have been working on a really sleek solution to achieve unmatched efficiency for state-of-the-art Transformer models, for companies to deploy in their own infrastructure. - Infinity comes as a single-container and can be deployed in any production environment. - It can achieve 1ms latency for BERT-like models on GPU and 4-10ms on CPU 🤯🤯🤯 - Infinity meets the highest security requirements and can be integrated into your system without the need for internet access. You have control over all incoming and outgoing traffic. ⚠️ Join us for a [live announcement and demo on Sep 28](https://app.livestorm.co/hugging-face/hugging-face-infinity-launch?type=detailed), where we will be showcasing Infinity for the first time in public! 
### **NEW: Hardware Acceleration** Hugging Face is [partnering with leading AI hardware accelerators](http://hf.co/hardware) such as Intel, Qualcomm and GraphCore to make state-of-the-art production performance accessible and extend training capabilities on SOTA hardware. As the first step in this journey, we [introduced a new Open Source library](https://huggingface.co/blog/hardware-partners-program): 🤗 Optimum - the ML optimization toolkit for production performance 🏎. Learn more in this [blog post](https://huggingface.co/blog/graphcore). ### **NEW: Inference on SageMaker** We launched a [new integration with AWS](https://huggingface.co/blog/deploy-hugging-face-models-easily-with-amazon-sagemaker) to make it easier than ever to deploy 🤗 Transformers in SageMaker 🔥. Pick up the code snippet right from the 🤗 Hub model page! Learn more about how to leverage transformers in SageMaker in our [docs](https://huggingface.co/docs/sagemaker/inference) or check out these [video tutorials](https://youtube.com/playlist?list=PLo2EIpI_JMQtPhGR5Eo2Ab0_Vb89XfhDJ). For questions reach out to us on the forum: [https://discuss.huggingface.co/c/sagemaker/17](https://discuss.huggingface.co/c/sagemaker/17) ![Sagemaker](assets/27_summer_at_huggingface/sagemaker.png) ### **NEW: AutoNLP In Your Browser** We released a new [AutoNLP](https://huggingface.co/autonlp) experience: a web interface to train models straight from your browser! Now all it takes is a few clicks to train, evaluate and deploy **🤗** Transformers models on your own data. [Try it out](https://ui.autonlp.huggingface.co/) - NO CODE needed! ![AutoNLP on the web.gif](assets/27_summer_at_huggingface/autonlp.gif) ### Inference API **Webinar**: We hosted a [live webinar](https://youtu.be/p055U0dnEos) to show how to add Machine Learning capabilities with just a few lines of code. We also built a VSCode extension that leverages the Hugging Face Inference API to generate comments describing Python code. <div class="aspect-w-16 aspect-h-9"> <iframe src="https://www.youtube.com/embed/p055U0dnEos" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> </div> **Hugging Face** + **Zapier Demo** 20,000+ Machine Learning models connected to 3,000+ apps? 🤯 By leveraging the [Inference API](https://huggingface.co/landing/inference-api/startups), you can now easily connect models right into apps like Gmail, Slack, Twitter, and more. In this demo video, we created a zap that uses this [code snippet](https://gist.github.com/feconroses/3476a91dc524fdb930a726b3894a1d08) to analyze your Twitter mentions and alerts you on Slack about the negative ones. <div class="aspect-w-16 aspect-h-9"> <iframe src="https://www.youtube.com/embed/sjfpOJ4KA78" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> </div> **Hugging Face + Google Sheets Demo** With the [Inference API](https://huggingface.co/landing/inference-api/startups), you can easily use zero-shot classification right into your spreadsheets in Google Sheets. 
Just [add this script](https://gist.github.com/feconroses/302474ddd3f3c466dc069ecf16bb09d7) in Tools -> Script Editor: <div class="aspect-w-16 aspect-h-9"> <iframe src="https://www.youtube.com/embed/-A-X3aUYkDs" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> </div> **Few-shot learning in practice** We wrote a [blog post](https://huggingface.co/blog/few-shot-learning-gpt-neo-and-inference-api) about what Few-Shot Learning is and how GPT-Neo and the 🤗 Accelerated Inference API can be used to generate your own predictions. ### **Expert Acceleration Program** Check out the brand [new home for the Expert Acceleration Program](https://huggingface.co/landing/premium-support); you can now get direct, premium support from our Machine Learning experts and build better ML solutions, faster. ## Research At BigScience we held our first live event since the kick-off in July: BigScience Episode #1. Our second event, BigScience Episode #2, was held on September 20th, 2021, with technical talks and updates by the BigScience working groups and invited talks by Jade Abbott (Masakhane), Percy Liang (Stanford CRFM), Stella Biderman (EleutherAI) and more. We have completed the first large-scale training on Jean Zay, a 13B English-only decoder model (you can find the details [here](https://github.com/bigscience-workshop/bigscience/blob/master/train/tr1-13B-base/chronicles.md)), and we're currently deciding on the architecture of the second model. The organization working group has filed the application for the second half of the compute budget: Jean Zay V100: 2,500,000 GPU hours. 🚀 In June, we shared the result of our collaboration with the Yandex research team: [DeDLOC](https://arxiv.org/abs/2106.10207), a method to collaboratively train your large neural networks, i.e. without using an HPC cluster, but with various accessible resources such as Google Colaboratory or Kaggle notebooks, personal computers or preemptible VMs. Thanks to this method, we were able to train [sahajBERT](https://huggingface.co/neuropark/sahajBERT), a Bengali language model, with 40 volunteers! Our model competes with the state of the art, and is even [the best for the downstream task of classification](https://huggingface.co/neuropark/sahajBERT-NCC) on the Soham News Article Classification dataset. You can read more about it in this [blog post](https://huggingface.co/blog/collaborative-training). This is a fascinating line of research because it would make model pre-training much more accessible (financially speaking)! <div class="aspect-w-16 aspect-h-9"> <iframe src="https://www.youtube.com/embed/v8ShbLasRF8" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> </div> In June, our [paper](https://arxiv.org/abs/2103.08493), How Many Data Points is a Prompt Worth?, got a Best Paper award at NAACL! In it, we reconcile and compare traditional and prompting approaches to adapt pre-trained models, finding that human-written prompts are worth up to thousands of supervised data points on new tasks. You can also read its blog [post](https://huggingface.co/blog/how_many_data_points/). ![Prompt](assets/27_summer_at_huggingface/prompt.png) We're looking forward to EMNLP this year where we have four accepted papers!
- Our [paper](https://arxiv.org/abs/2109.02846) "[Datasets: A Community Library for Natural Language Processing](https://arxiv.org/abs/2109.02846)" documents the Hugging Face Datasets project that has over 300 contributors. This community project gives researchers easy access to hundreds of datasets. It has facilitated new use cases of cross-dataset NLP, and has advanced features for tasks like indexing and streaming large datasets. - Our collaboration with researchers from TU Darmstadt led to another paper accepted at the conference (["Avoiding Inference Heuristics in Few-shot Prompt-based Finetuning"](https://arxiv.org/abs/2109.04144)). In this paper, we show that prompt-based fine-tuned language models (which achieve strong performance in few-shot setups) still suffer from learning surface heuristics (sometimes called *dataset biases*), a pitfall that zero-shot models don't exhibit. - Our submission "[Block Pruning For Faster Transformers](https://arxiv.org/abs/2109.04838v1)" has also been accepted as a long paper. In this paper, we show how to use block sparsity to obtain both fast and small Transformer models. Our experiments yield models which are 2.4x faster and 74% smaller than BERT on SQuAD. ## Last words 😎 🔥 Summer was fun! So many things have happened! We hope you enjoyed reading this blog post, and we look forward to sharing the new projects we're working on. See you in the winter! ❄️
Showcase Your Projects in Spaces using Gradio
merve
October 5, 2021
gradio-spaces
guide
https://huggingface.co/blog/gradio-spaces
# Showcase Your Projects in Spaces using Gradio It's so easy to demonstrate a Machine Learning project thanks to [Gradio](https://gradio.app/). In this blog post, we'll walk you through: - the recent Gradio integration that helps you demo models from the Hub seamlessly with a few lines of code leveraging the [Inference API](https://huggingface.co/inference-api). - how to use Hugging Face Spaces to host demos of your own models. ## Hugging Face Hub Integration in Gradio You can demonstrate your models in the Hub easily. You only need to define the [Interface](https://gradio.app/docs#interface) that includes: - The repository ID of the model you want to infer with - A description and title - Example inputs to guide your audience After defining your Interface, just call `.launch()` and your demo will start running. You can do this in Colab, but if you want to share it with the community a great option is to use Spaces! Spaces are a simple, free way to host your ML demo apps in Python. To do so, you can create a repository at https://huggingface.co/new-space and select Gradio as the SDK. Once done, you can create a file called `app.py`, copy the code below, and your app will be up and running in a few seconds!

```python
import gradio as gr

description = "Story generation with GPT-2"
title = "Generate your own story"
examples = [["Adventurer is approached by a mysterious stranger in the tavern for a new quest."]]

interface = gr.Interface.load("huggingface/pranavpsv/gpt2-genre-story-generator",
                              description=description,
                              examples=examples)
interface.launch()
```

You can play with the Story Generation model [here](https://huggingface.co/spaces/merve/GPT-2-story-gen). ![story-gen](assets/28_gradio-spaces/story-gen.png) Under the hood, Gradio calls the Inference API, which supports Transformers as well as other popular ML frameworks such as spaCy, SpeechBrain and Asteroid. This integration supports different types of models: `image-to-text`, `speech-to-text`, `text-to-speech` and more. You can check out this example BigGAN ImageNet `text-to-image` model [here](https://huggingface.co/spaces/merve/BigGAN-ImageNET). The implementation is below.

```python
import gradio as gr

description = "BigGAN text-to-image demo."
title = "BigGAN ImageNet"

interface = gr.Interface.load("huggingface/osanseviero/BigGAN-deep-128",
                              description=description,
                              title=title,
                              examples=[["american robin"]])
interface.launch()
```

![big-gan](assets/28_gradio-spaces/big-gan.png) ## Serving Custom Model Checkpoints with Gradio in Hugging Face Spaces You can serve your models in Spaces even if the Inference API does not support your model. Just wrap your model inference in a Gradio `Interface` as described below and put it in Spaces. ![imagenet-demo](assets/28_gradio-spaces/imagenet-demo.gif) ## Mix and Match Models! Using Gradio Series, you can mix-and-match different models! Here, we've put a French to English translation model on top of the story generator and an English to French translation model at the end of the generator model to make a French story generator.

```python
import gradio as gr
from gradio.mix import Series

description = "Generate your own D&D story!"
title = "French Story Generator using Opus MT and GPT-2"

translator_fr = gr.Interface.load("huggingface/Helsinki-NLP/opus-mt-fr-en")
story_gen = gr.Interface.load("huggingface/pranavpsv/gpt2-genre-story-generator")
translator_en = gr.Interface.load("huggingface/Helsinki-NLP/opus-mt-en-fr")

examples = [["L'aventurier est approché par un mystérieux étranger, pour une nouvelle quête."]]

Series(translator_fr, story_gen, translator_en,
       description=description,
       title=title,
       examples=examples,
       inputs=gr.inputs.Textbox(lines=10)).launch()
```

You can check out the French Story Generator [here](https://huggingface.co/spaces/merve/french-story-gen) ![story-gen-fr](assets/28_gradio-spaces/story-gen-fr.png) ## Uploading your Models to the Spaces You can serve your demos on Hugging Face thanks to Spaces! To do this, simply create a new Space, and then drag and drop your demos or use Git. ![spaces-demo](assets/28_gradio-spaces/spaces-demo-finalized.gif) Easily build your first demo with Spaces [here](https://huggingface.co/spaces)!
Hosting your Models and Datasets on Hugging Face Spaces using Streamlit
merve
October 5, 2021
streamlit-spaces
guide
https://huggingface.co/blog/streamlit-spaces
# Hosting your Models and Datasets on Hugging Face Spaces using Streamlit ## Showcase your Datasets and Models using Streamlit on Hugging Face Spaces [Streamlit](https://streamlit.io/) allows you to visualize datasets and build demos of Machine Learning models in a neat way. In this blog post we will walk you through hosting models and datasets and serving your Streamlit applications in Hugging Face Spaces. ## Building demos for your models You can load any Hugging Face model and build cool UIs using Streamlit. In this particular example we will recreate ["Write with Transformer"](https://transformer.huggingface.co/doc/gpt2-large) together. It's an application that lets you write anything using transformers like GPT-2 and XLNet. ![write-with-transformers](assets/29_streamlit-spaces/write-tr.png) We will not dive deep into how the inference works. You only need to know that you need to specify some hyperparameter values for this particular application. Streamlit provides many [components](https://docs.streamlit.io/en/stable/api.html) for you to easily implement custom applications. We will use some of them to receive the necessary hyperparameters inside the inference code. - The ```.text_area``` component creates a nice area to input sentences to be completed. - The Streamlit ```.sidebar``` method enables you to accept variables in a sidebar. - The ```slider``` is used to take continuous values. Don't forget to give ```slider``` a step, otherwise it will treat the values as integers. - You can let the end-user input integer values with ```number_input```.

```python
import streamlit as st

# adding the text that will show in the text box as default
default_value = "See how a modern neural network auto-completes your text 🤗 This site, built by the Hugging Face team, lets you write a whole document directly from your browser, and you can trigger the Transformer anywhere using the Tab key. It's like having a smart machine that completes your thoughts 😀 Get started by typing a custom snippet, check out the repository, or try one of the examples. Have fun!"

sent = st.text_area("Text", default_value, height=275)
max_length = st.sidebar.slider("Max Length", min_value=10, max_value=30)
temperature = st.sidebar.slider("Temperature", value=1.0, min_value=0.0, max_value=1.0, step=0.05)
top_k = st.sidebar.slider("Top-k", min_value=0, max_value=5, value=0)
top_p = st.sidebar.slider("Top-p", min_value=0.0, max_value=1.0, step=0.05, value=0.9)
num_return_sequences = st.sidebar.number_input('Number of Return Sequences', min_value=1, max_value=5, value=1, step=1)
```

The inference code returns the generated output; you can print it using a simple ```st.write```. ```st.write(generated_sequences[-1])``` Here's what our replicated version looks like. ![streamlit-rep](assets/29_streamlit-spaces/streamlit-rep.png) You can check out the full code [here](https://huggingface.co/spaces/merve/write-with-transformer). ## Showcase your Datasets and Data Visualizations Streamlit provides many components to help you visualize datasets. It works seamlessly with 🤗 [Datasets](https://huggingface.co/docs/datasets/), [pandas](https://pandas.pydata.org/docs/index.html), and visualization libraries such as [matplotlib](https://matplotlib.org/stable/index.html), [seaborn](https://seaborn.pydata.org/) and [bokeh](https://bokeh.org/). Let's start by loading a dataset.
A new feature in `Datasets`, called [streaming](https://huggingface.co/docs/datasets/dataset_streaming.html), allows you to work immediately with very large datasets, eliminating the need to download all of the examples and load them into memory.

```python
import pandas as pd
import streamlit as st
from datasets import load_dataset

dataset = load_dataset("merve/poetry", streaming=True)
df = pd.DataFrame.from_dict(dataset["train"])
```

If you have structured data like mine, you can simply use ```st.dataframe(df)``` to show your dataset. There are many Streamlit components to plot data interactively. One such component is ```st.bar_chart()```, which I used to visualize the most used words in the poem contents.

```python
# `words` holds the word counts computed from the poem contents
st.write("Most appearing words including stopwords")
st.bar_chart(words[0:50])
```

If you'd like to use libraries like matplotlib, seaborn or bokeh, all you have to do is to put ```st.pyplot()``` at the end of your plotting script.

```python
import matplotlib.pyplot as plt
import seaborn as sns

st.write("Number of poems for each author")
sns.catplot(x="author", data=df, kind="count", aspect=4)
plt.xticks(rotation=90)
st.pyplot()
```

You can see the interactive bar chart, dataframe component and hosted matplotlib and seaborn visualizations below. You can check out the code [here](https://huggingface.co/spaces/merve/streamlit-dataset-demo). ![spaces-streamlit-dataset-demo](assets/29_streamlit-spaces/streamlit-dataset-vid.gif) ## Hosting your Projects in Hugging Face Spaces You can simply drag and drop your files as shown below. Note that you need to include your additional dependencies in the requirements.txt. Also note that the Streamlit version you use locally should match the one specified for the Space. For seamless usage, refer to the [Spaces API reference](https://huggingface.co/docs/hub/spaces-config-reference). ![spaces-streamlit](assets/29_streamlit-spaces/streamlit.gif) There are so many components and [packages](https://streamlit.io/components) you can use to demonstrate your models, datasets, and visualizations. You can get started [here](https://huggingface.co/spaces).
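For completeness, here is a rough sketch of what the text-generation inference behind the "Write with Transformer" style demo above might look like; the actual app's code may differ, and the hyperparameter values shown are just illustrations of what the sidebar widgets would supply.

```python
from transformers import pipeline

# A hedged sketch, not the demo's actual inference code.
generator = pipeline("text-generation", model="gpt2")

# In the Streamlit app, these values come from the sidebar widgets defined earlier.
generated_sequences = generator(
    "See how a modern neural network auto-completes your text",
    max_length=30,
    do_sample=True,
    temperature=0.9,
    top_k=0,
    top_p=0.9,
    num_return_sequences=1,
)
print(generated_sequences[-1]["generated_text"])
```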
Fine tuning CLIP with Remote Sensing (Satellite) images and captions
arampacha
October 13, 2021
fine-tune-clip-rsicd
community, cv, nlp
https://huggingface.co/blog/fine-tune-clip-rsicd
# Fine tuning CLIP with Remote Sensing (Satellite) images and captions <img src="/blog/assets/30_clip_rsicd/clip-rsicd-header-image.png"/> In July this year, [Hugging Face](https://huggingface.co/) organized a [Flax/JAX Community Week](https://github.com/huggingface/transformers/blob/master/examples/research_projects/jax-projects/README.md), and invited the community to submit projects to train Hugging Face [transformers](https://github.com/huggingface/transformers) models in the areas of Natural Language Processing (NLP) and Computer Vision (CV). Participants used Tensor Processing Units (TPUs) with [Flax](https://github.com/google/flax) and [JAX](https://github.com/google/jax). JAX is a linear algebra library (like `numpy`) that can do automatic differentiation ([Autograd](https://github.com/hips/autograd)) and compile down to [XLA](https://www.tensorflow.org/xla), and Flax is a neural network library and ecosystem for JAX. TPU compute time was provided free by [Google Cloud](https://cloud.google.com/), who co-sponsored the event. Over the next two weeks, teams participated in lectures from Hugging Face and Google, trained one or more models using JAX/Flax, shared them with the community, and provided a [Hugging Face Spaces](https://huggingface.co/spaces) demo showcasing the capabilities of their model. Approximately 100 teams participated in the event, and it resulted in 170 models and 36 demos. Our team, like probably many others, is a distributed one, spanning 12 time zones. Our common thread is that we all belong to the [TWIML Slack Channel](https://twimlai.slack.com/), where we came together based on a shared interest in Artificial Intelligence (AI) and Machine Learning (ML) topics. We fine-tuned the [CLIP Network from OpenAI](https://openai.com/blog/clip/) with satellite images and captions from the [RSICD dataset](https://github.com/201528014227051/RSICD_optimal). The CLIP network learns visual concepts by being trained with image and caption pairs in a self-supervised manner, by using text paired with images found across the Internet. During inference, the model can predict the most relevant image given a text description or the most relevant text description given an image. CLIP is powerful enough to be used in a zero-shot manner on everyday images. However, we felt that satellite images were sufficiently different from everyday images that it would be useful to fine-tune CLIP with them. Our intuition turned out to be correct, as the evaluation results (described below) show. In this post, we describe details of our training and evaluation process, and our plans for future work on this project. The goal of our project was to provide a useful service and demonstrate how to use CLIP for practical use cases. Our model can be used by applications to search through large collections of satellite images using textual queries. Such queries could describe the image in totality (for example, beach, mountain, airport, baseball field, etc.) or mention specific geographic or man-made features within these images. CLIP can similarly be fine-tuned for other domains as well, as shown by the [medclip-demo team](https://huggingface.co/spaces/flax-community/medclip-demo) for medical images. The ability to search through large collections of images using text queries is an immensely powerful feature, and can be used as much for social good as for malign purposes.
Possible applications include national defense and anti-terrorism activities, the ability to spot and address effects of climate change before they become unmanageable, etc. Unfortunately, this power can also be misused, such as for military and police surveillance by authoritarian nation-states, so it does raise some ethical questions as well. You can read about the project on our [project page](https://github.com/arampacha/CLIP-rsicd), download our [trained model](https://huggingface.co/flax-community/clip-rsicd-v2) to use for inference on your own data, or see it in action on our [demo](https://huggingface.co/spaces/sujitpal/clip-rsicd-demo). ### Training #### Dataset We fine-tuned the CLIP model primarily with the [RSICD dataset](https://github.com/201528014227051/RSICD_optimal). This dataset consists of about 10,000 images collected from Google Earth, Baidu Map, MapABC, and Tianditu. It is provided freely to the research community to advance remote sensing captioning via [Exploring Models and Data for Remote Sensing Image Caption Generation](https://arxiv.org/abs/1712.0783) (Lu et al., 2017). The images are (224, 224) RGB images at various resolutions, and each image has up to 5 captions associated with it. <img src="/blog/assets/30_clip_rsicd/rsicd-images-sampling.png"/> <center><i>Some examples of images from the RSICD dataset</i></center> In addition, we used the [UCM Dataset](https://mega.nz/folder/wCpSzSoS#RXzIlrv--TDt3ENZdKN8JA) and the [Sydney dataset](https://mega.nz/folder/pG4yTYYA#4c4buNFLibryZnlujsrwEQ) for training. The UCM dataset is based on the UC Merced Land Use dataset. It consists of 2100 images belonging to 21 classes (100 images per class), and each image has 5 captions. The Sydney dataset contains images of Sydney, Australia from Google Earth. It contains 613 images belonging to 7 classes. Images are (500, 500) RGB, with 5 captions for each image. We used these additional datasets because we were not sure if the RSICD dataset would be large enough to fine-tune CLIP. #### Model Our model is just the fine-tuned version of the original CLIP model shown below. Inputs to the model are a batch of captions and a batch of images passed through the CLIP text encoder and image encoder respectively. The training process uses [contrastive learning](https://towardsdatascience.com/understanding-contrastive-learning-d5b19fd96607) to learn a joint embedding representation of images and captions. In this embedding space, images and their respective captions are pushed close together, as are similar images and similar captions. Conversely, images and captions for different images, or dissimilar images and captions, are likely to be pushed further apart. <img src="/blog/assets/30_clip_rsicd/clip_schematic.png"/> <center><i>CLIP Training and Inference (Image Credit: CLIP: Connecting Text and Images (https://openai.com/blog/clip/))</i></center> #### Data Augmentation In order to regularize our dataset and prevent overfitting due to the size of the dataset, we used both image and text augmentation. Image augmentation was done inline using built-in transforms from PyTorch's [Torchvision](https://pytorch.org/vision/stable/index.html) package. The transformations used were Random Cropping, Random Resizing and Cropping, Color Jitter, and Random Horizontal and Vertical flipping. We augmented the text with backtranslation to generate captions for images with fewer than 5 unique captions per image.
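As a rough sketch (not the team's actual code), the image augmentations listed above might be expressed with torchvision transforms along these lines; the crop size and jitter strengths are assumptions for illustration:

```python
from torchvision import transforms

# Hedged sketch of the augmentation pipeline described above; the exact
# parameters used by the project may differ.
image_augmentations = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),   # random resizing and cropping
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomVerticalFlip(p=0.5),
    transforms.ToTensor(),
])
```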
The [Marian MT](https://huggingface.co/transformers/model_doc/marian.html) family of models from Hugging Face was used to translate the existing captions into French, Spanish, Italian, and Portuguese and back to English to fill out the captions for these images. As shown in the loss plots below, image augmentation reduced overfitting significantly, and text and image augmentation reduced overfitting even further. <img src="/blog/assets/30_clip_rsicd/image-augment-loss.png"/> <img src="/blog/assets/30_clip_rsicd/image-text-aug-loss.png"/> <center><i>Evaluation and Training loss plots comparing (top) no augmentation vs image augmentation, and (bottom) image augmentation vs text+image augmentation</i></center> ### Evaluation #### Metrics A subset of the RSICD test set was used for evaluation. We found 30 categories of images in this subset. The evaluation was done by comparing each image with a set of 30 caption sentences of the form `"An aerial photograph of {category}"`. The model produced a ranked list of the 30 captions, from most relevant to least relevant. Categories corresponding to captions with the top k scores (for k=1, 3, 5, and 10) were compared with the category provided via the image file name. The scores are averaged over the entire set of images used for evaluation and reported for various values of k, as shown below. The `baseline` model represents the pre-trained `openai/clip-vit-base-patch32` CLIP model. This model was fine-tuned with captions and images from the RSICD dataset, which resulted in a significant performance boost, as shown below. Our best model was trained with image and text augmentation, with batch size 1024 (128 on each of the 8 TPU cores), and the Adam optimizer with learning rate 5e-6. We trained our second best model with the same hyperparameters, except that we used the Adafactor optimizer with learning rate 1e-4. You can download either model from their model repos linked in the table below.

| Model-name | k=1 | k=3 | k=5 | k=10 |
| ---------------------------------------- | ----- | ----- | ----- | ----- |
| baseline | 0.572 | 0.745 | 0.837 | 0.939 |
| bs128x8-lr1e-4-augs/ckpt-2 | 0.819 | 0.950 | 0.974 | 0.994 |
| bs128x8-lr1e-4-imgaugs/ckpt-2 | 0.812 | 0.942 | 0.970 | 0.991 |
| [bs128x8-lr1e-4-imgaugs-textaugs/ckpt-4](https://huggingface.co/flax-community/clip-rsicd)<sup>2</sup> | 0.843 | 0.958 | 0.977 | 0.993 |
| bs128x8-lr5e-5-imgaugs-textaugs/ckpt-8 | 0.831 | 0.959 | 0.977 | 0.994 |
| bs128x8-lr5e-5-imgaugs/ckpt-4 | 0.746 | 0.906 | 0.956 | 0.989 |
| bs128x8-lr5e-5-imgaugs-textaugs-2/ckpt-4 | 0.811 | 0.945 | 0.972 | 0.993 |
| bs128x8-lr5e-5-imgaugs-textaugs-3/ckpt-5 | 0.823 | 0.946 | 0.971 | 0.992 |
| bs128x8-lr5e-5-wd02/ckpt-4 | 0.820 | 0.946 | 0.965 | 0.990 |
| [bs128x8-lr5e-6-adam/ckpt-1](https://huggingface.co/flax-community/clip-rsicd-v2)<sup>1</sup> | **0.883** | **0.968** | **0.982** | **0.998** |

_1 - our best model, 2 - our second best model_ #### Demo You can access the [CLIP-RSICD Demo](https://huggingface.co/spaces/sujitpal/clip-rsicd-demo) here. It uses our fine-tuned CLIP model to provide the following functionality: * Text to Image search * Image to Image search * Find text feature in image The first two functionalities use the RSICD test set as their image corpus. They are encoded using our best fine-tuned CLIP model and stored in an [NMSLib](https://github.com/nmslib/nmslib) index which allows Approximate Nearest Neighbor-based retrieval.
For text-to-image and image-to-image search respectively, the query text or image is encoded with our model and matched against the image vectors in the corpus. For the third functionality, we divide the incoming image into patches and encode them, encode the queried text feature, match the text vector with each image patch vector, and return the probability of finding the feature in each patch. ### Future Work We are grateful that we have been given an opportunity to further refine our model. Some ideas we have for future work are as follows: 1. Construct a sequence-to-sequence model using a CLIP encoder and a GPT-3 decoder and train it for image captioning. 2. Fine-tune the model on more image caption pairs from other datasets and investigate if we can improve its performance. 3. Investigate how fine-tuning affects the performance of the model on non-RSICD image caption pairs. 4. Investigate the capability of the fine-tuned model to classify outside the categories it has been fine-tuned on. 5. Evaluate the model using other criteria such as image classification.
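To make the text-to-image search flow from the Demo section above concrete, here is a minimal sketch of encoding a query with the fine-tuned model and looking it up in an approximate-nearest-neighbour index; the embeddings file and index settings are illustrative assumptions, not the demo's actual code.

```python
import nmslib
import numpy as np
from transformers import CLIPModel, CLIPProcessor

# Depending on which weights are in the repo, you may need from_flax=True here.
model = CLIPModel.from_pretrained("flax-community/clip-rsicd-v2")
processor = CLIPProcessor.from_pretrained("flax-community/clip-rsicd-v2")

# Hypothetical file of precomputed (num_images, dim) image embeddings for the corpus
image_vectors = np.load("rsicd_image_embeddings.npy")
index = nmslib.init(method="hnsw", space="cosinesimil")
index.addDataPointBatch(image_vectors)
index.createIndex()

# Encode a text query and retrieve the ten closest images
inputs = processor(text=["an aerial photograph of an airport"], return_tensors="pt", padding=True)
text_vector = model.get_text_features(**inputs).detach().numpy()[0]
ids, distances = index.knnQuery(text_vector, k=10)
```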
The Age of Machine Learning As Code Has Arrived
juliensimon
October 20, 2021
the-age-of-ml-as-code
analysis
https://huggingface.co/blog/the-age-of-ml-as-code
# The Age of Machine Learning As Code Has Arrived The 2021 edition of the [State of AI Report](https://www.stateof.ai/2021-report-launch.html) came out last week. So did the Kaggle [State of Machine Learning and Data Science Survey](https://www.kaggle.com/c/kaggle-survey-2021). There's much to be learned and discussed in these reports, and a couple of takeaways caught my attention. > "AI is increasingly being applied to mission critical infrastructure like national electric grids and automated supermarket warehousing calculations during pandemics. However, there are questions about whether the maturity of the industry has caught up with the enormity of its growing deployment." There's no denying that Machine Learning-powered applications are reaching into every corner of IT. But what does that mean for companies and organizations? How do we build rock-solid Machine Learning workflows? Should we all hire 100 Data Scientists ? Or 100 DevOps engineers? > "Transformers have emerged as a general purpose architecture for ML. Not just for Natural Language Processing, but also Speech, Computer Vision or even protein structure prediction." Old timers have learned the hard way that there is [no silver bullet](https://en.wikipedia.org/wiki/No_Silver_Bullet) in IT. Yet, the [Transformer](https://arxiv.org/abs/1706.03762) architecture is indeed very efficient on a wide variety of Machine Learning tasks. But how can we all keep up with the frantic pace of innovation in Machine Learning? Do we really need expert skills to leverage these state of the art models? Or is there a shorter path to creating business value in less time? Well, here's what I think. ### Machine Learning For The Masses! Machine Learning is everywhere, or at least it's trying to be. A few years ago, Forbes wrote that "[Software ate the world, now AI is eating Software](https://www.forbes.com/sites/cognitiveworld/2019/08/29/software-ate-the-world-now-ai-is-eating-software/)", but what does this really mean? If it means that Machine Learning models should replace thousands of lines of fossilized legacy code, then I'm all for it. Die, evil business rules, die! Now, does it mean that Machine Learning will actually replace Software Engineering? There's certainly a lot of fantasizing right now about [AI-generated code](https://www.wired.com/story/ai-latest-trick-writing-computer-code/), and some techniques are certainly interesting, such as [finding bugs and performance issues](https://aws.amazon.com/codeguru). However, not only shouldn't we even consider getting rid of developers, we should work on empowering as many as we can so that Machine Learning becomes just another boring IT workload (and [boring technology is great](http://boringtechnology.club/)). In other words, what we really need is for Software to eat Machine Learning! ### Things are not different this time For years, I've argued and swashbuckled that decade-old best practices for Software Engineering also apply to Data Science and Machine Learning: versioning, reusability, testability, automation, deployment, monitoring, performance, optimization, etc. I felt alone for a while, and then the Google cavalry unexpectedly showed up: > "Do machine learning like the great engineer you are, not like the great machine learning expert you aren't." - [Rules of Machine Learning](https://developers.google.com/machine-learning/guides/rules-of-ml), Google There's no need to reinvent the wheel either. The DevOps movement solved these problems over 10 years ago. 
Now, the Data Science and Machine Learning community should adopt and adapt these proven tools and processes without delay. This is the only way we'll ever manage to build robust, scalable and repeatable Machine Learning systems in production. If calling it MLOps helps, fine: I won't argue about another buzzword. It's really high time we stopped considering proof of concepts and sandbox A/B tests as notable achievements. They're merely a small stepping stone toward production, which is the only place where assumptions and business impact can be validated. Every Data Scientist and Machine Learning Engineer should obsess about getting their models in production, as quickly and as often as possible. **An okay production model beats a great sandbox model every time**. ### Infrastructure? So what? It's 2021. IT infrastructure should no longer stand in the way. Software has devoured it a while ago, abstracting it away with cloud APIs, infrastructure as code, Kubeflow and so on. Yes, even on premises. The same is quickly happening for Machine Learning infrastructure. According to the Kaggle survey, 75% of respondents use cloud services, and over 45% use an Enterprise ML platform, with Amazon SageMaker, Databricks and Azure ML Studio taking the top 3 spots. <kbd> <img src="assets/31_age_of_ml_as_code/01_entreprise_ml.png"> </kbd> With MLOps, software-defined infrastructure and platforms, it's never been easier to drag all these great ideas out of the sandbox, and to move them to production. To answer my original question, I'm pretty sure you need to hire more ML-savvy Software and DevOps engineers, not more Data Scientists. But deep down inside, you kind of knew that, right? Now, let's talk about Transformers. --- ### Transformers! Transformers! Transformers! ([Ballmer style](https://www.youtube.com/watch?v=Vhh_GeBPOhs)) Says the State of AI report: "The Transformer architecture has expanded far beyond NLP and is emerging as a general purpose architecture for ML". For example, recent models like Google's [Vision Transformer](https://paperswithcode.com/method/vision-transformer), a convolution-free transformer architecture, and [CoAtNet](https://paperswithcode.com/paper/coatnet-marrying-convolution-and-attention), which mixes transformers and convolution, have set new benchmarks for image classification on ImageNet, while requiring fewer compute resources for training. <kbd> <img src="assets/31_age_of_ml_as_code/02_vision_transformer.png"> </kbd> Transformers also do very well on audio (say, speech recognition), as well as on point clouds, a technique used to model 3D environments like autonomous driving scenes. The Kaggle survey echoes this rise of Transformers. Their usage keeps growing year over year, while RNNs, CNNs and Gradient Boosting algorithms are receding. <kbd> <img src="assets/31_age_of_ml_as_code/03_transformers.png"> </kbd> On top of increased accuracy, Transformers also keep fulfilling the transfer learning promise, allowing teams to save on training time and compute costs, and to deliver business value quicker. <kbd> <img src="assets/31_age_of_ml_as_code/04_general_transformers.png"> </kbd> With Transformers, the Machine Learning world is gradually moving from "*Yeehaa!! Let's build and train our own Deep Learning model from scratch*" to "*Let's pick a proven off the shelf model, fine-tune it on our own data, and be home early for dinner.*" It's a Good Thing in so many ways. State of the art is constantly advancing, and hardly anyone can keep up with its relentless pace. 
Remember that Google Vision Transformer model I mentioned earlier? Would you like to test it here and now? With Hugging Face, it's [the simplest thing](https://huggingface.co/google/vit-base-patch16-224). <kbd> <img src="assets/31_age_of_ml_as_code/05_vision_transformer.png"> </kbd> How about the latest [zero-shot text generation models](https://huggingface.co/bigscience) from the [Big Science project](https://bigscience.huggingface.co/)? <kbd> <img src="assets/31_age_of_ml_as_code/06_big_science.png"> </kbd> You can do the same with another [16,000+ models](https://huggingface.co/models) and [1,600+ datasets](https://huggingface.co/datasets), with additional tools for [inference](https://huggingface.co/inference-api), [AutoNLP](https://huggingface.co/autonlp), [latency optimization](https://huggingface.co/infinity), and [hardware acceleration](https://huggingface.co/hardware). We can also help you get your project off the ground, [from modeling to production](https://huggingface.co/support). Our mission at Hugging Face is to make Machine Learning as friendly and as productive as possible, for beginners and experts alike. We believe in writing as little code as possible to train, optimize, and deploy models. We believe in built-in best practices. We believe in making infrastructure as transparent as possible. We believe that nothing beats high quality models in production, fast. ### Machine Learning as Code, right here, right now! A lot of you seem to agree. We have over 52,000 stars on [Github](https://github.com/huggingface). For the first year, Hugging Face is also featured in the Kaggle survey, with usage already over 10%. <kbd> <img src="assets/31_age_of_ml_as_code/07_kaggle.png"> </kbd> **Thank you all**. And yeah, we're just getting started. --- *Interested in how Hugging Face can help your organization build and deploy production-grade Machine Learning solutions? Get in touch at [[email protected]](mailto:[email protected]) (no recruiters, no sales pitches, please).*
Train a Sentence Embedding Model with 1B Training Pairs
asi
October 25, 2021
1b-sentence-embeddings
community, nlp
https://huggingface.co/blog/1b-sentence-embeddings
# Train a Sentence Embedding Model with 1 Billion Training Pairs **Sentence embedding** is a method that maps sentences to vectors of real numbers. Ideally, these vectors would capture the semantics of a sentence and be highly generic. Such representations could then be used for many downstream applications such as clustering, text mining, or question answering. We developed state-of-the-art sentence embedding models as part of the project ["Train the Best Sentence Embedding Model Ever with 1B Training Pairs"](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). This project took place during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face. We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as guidance from Google’s Flax, JAX, and Cloud team members about efficient deep learning frameworks! ## Training methodology ### Model Unlike words, we cannot define a finite set of sentences. Sentence embedding methods, therefore, compose inner words to compute the final representation. For example, the SentenceBERT model ([Reimers and Gurevych, 2019](https://aclanthology.org/D19-1410.pdf)) uses a Transformer, the cornerstone of many NLP applications, followed by a pooling operation over the contextualized word vectors. (cf. figure below.) ![snippet](assets/32_1b_sentence_embeddings/model.png) ### Multiple Negatives Ranking Loss The parameters from the composition module are usually learned using a self-supervised objective. For the project, we used a contrastive training method illustrated in the figure below. We constitute a dataset with sentence pairs \\( (a_i, p_i) \\) such that sentences from the pair have a close meaning. For example, we consider pairs such as (query, answer-passage), (question, duplicate_question), (paper title, cited paper title). Our model is then trained to map pairs \\( (a_i , p_i) \\) to close vectors while assigning unmatched pairs \\( (a_i , p_j), i \neq j \\) to distant vectors in the embedding space. This training method is also called training with in-batch negatives, InfoNCE or NTXentLoss. ![snippet](assets/32_1b_sentence_embeddings/contrastive_1.png) Formally, given a batch of training samples, the model optimises the following [loss function](https://github.com/UKPLab/sentence-transformers/blob/master/sentence_transformers/losses/MultipleNegativesRankingLoss.py): $$-\frac{1}{n}\sum_{i=1}^n\log\frac{\exp(sim(a_i, p_i))}{\sum_j \exp(sim(a_i, p_j))}$$ An illustrative example can be seen below. The model first embeds each sentence from every pair in the batch. Then, we compute a similarity matrix between every possible pair \\( (a_i, p_j) \\). We then compare the similarity matrix with the ground truth, which indicates the original pairs. Finally, we perform the comparison using the cross entropy loss. Intuitively, the model should assign high similarity to the sentences « How many people live in Berlin? » and « Around 3.5 million people live in Berlin » and low similarity to other negative answers such as « The capital of France is Paris » as detailed in the Figure below. ![snippet](assets/32_1b_sentence_embeddings/contrastive_2.png) In the loss equation, `sim` indicates a similarity function between \\( (a, p) \\). The similarity function could be either the Cosine-Similarity or the Dot-Product operator.
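To make this more concrete, here is a minimal PyTorch sketch of the in-batch negatives loss described above; it is an illustration rather than the exact training code we used. The embeddings stand in for the pooled Transformer outputs, and cosine similarity is used as `sim` (the dot-product variant simply skips the normalization).

```python
import torch
import torch.nn.functional as F

def multiple_negatives_ranking_loss(anchors, positives):
    """In-batch negatives: (a_i, p_i) are positive pairs, every (a_i, p_j), i != j, acts as a negative."""
    a = F.normalize(anchors, dim=-1)   # cosine similarity = dot product of L2-normalized vectors
    p = F.normalize(positives, dim=-1)
    scores = a @ p.T                   # similarity matrix of shape (batch_size, batch_size)
    labels = torch.arange(scores.size(0), device=scores.device)  # ground truth: the diagonal
    return F.cross_entropy(scores, labels)

# Placeholder embeddings standing in for the pooled Transformer outputs of a batch of pairs
anchors = torch.randn(32, 768, requires_grad=True)
positives = torch.randn(32, 768, requires_grad=True)
loss = multiple_negatives_ranking_loss(anchors, positives)
loss.backward()
```

In practice the similarity scores are also multiplied by a scaling factor before the softmax, as discussed right after the comparison below.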
Both methods have their pros and cons summarized below ([Thakur et al., 2021](https://arxiv.org/abs/2104.08663), [Bachrach et al., 2014](https://dl.acm.org/doi/10.1145/2645710.2645741)): | Cosine-similarity | Dot-product | |---------------------|-------------| | Vector has highest similarity to itself since \\( cos(a, a)=1 \\). | Other vectors can have a higher dot-product: \\( dot(a, a) < dot(a, b) \\) is possible. | | With normalised vectors it is equal to the dot product. The max vector length equals 1. | It might be slower with certain approximate nearest neighbour methods since the max vector length is not known. | | With normalised vectors, it is proportional to Euclidean distance. It works with k-means clustering. | It does not work with k-means clustering. | In practice, we used a scaled similarity because the score differences tend to be too small: we apply a scaling factor \\( C \\) such that \\( sim_{scaled}(a, b) = C * sim(a, b) \\) with typically \\( C = 20 \\) ([Henderson et al., 2020](https://doi.org/10.18653/v1/2020.findings-emnlp.196), [Radford et al., 2021](http://proceedings.mlr.press/v139/radford21a.html)). ### Improving Quality with Better Batches In our method, we build batches of sample pairs \\( (a_i , p_i) \\). We consider all other samples from the batch, \\( (a_i , p_j), i \neq j \\), as negative sample pairs. The batch composition is therefore a key training aspect. Given the literature in the domain, we focused on three main aspects of the batch. #### 1. Size matters In contrastive learning, a larger batch size is synonymous with better performance. As shown in the figure extracted from Qu et al. ([2021](https://doi.org/10.18653/v1/2021.naacl-main.466)), a larger batch size improves the results. ![snippet](assets/32_1b_sentence_embeddings/batch-size.png) #### 2. Hard Negatives In the same figure, we observe that including hard negatives also improves performance. Hard negatives are samples \\( p_j \\) which are hard to distinguish from \\( p_i \\). In our example, it could be the pairs « What is the capital of France? » and « What is the capital of the US? » which have close semantic content and require a precise understanding of the full sentence to be answered correctly. On the contrary, the samples « What is the capital of France? » and « How many Star Wars movies are there? » are less difficult to distinguish since they do not refer to the same topic. #### 3. Cross dataset batches We concatenated multiple datasets to train our models. We built large batches and gathered samples from the same dataset within a batch to limit the topic distribution and favor hard negatives. However, we also mix at least two datasets in each batch to learn a global structure between topics and not only a local structure within a topic. ## Training infrastructure and data As mentioned earlier, the quantity of data and the batch size directly impact the model performance. As part of the project, we benefited from efficient hardware infrastructure. We trained our models on [TPUs](https://cloud.google.com/tpu), which are compute units developed by Google and super efficient for matrix multiplications. TPUs have some [hardware specificities](https://huggingface.co/docs/accelerate/quicktour.html#training-on-tpu) which might require specific code implementations. Additionally, we trained models on a large corpus as we concatenated multiple datasets up to 1 billion sentence pairs!
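To illustrate the cross-dataset batching idea above, here is a simplified, hypothetical sketch (dataset names, sizes and proportions are made up for the example): most of a batch comes from a single dataset to keep negatives hard, while a slice from a second dataset preserves the global structure across topics.

```python
import random

def mixed_batch(datasets, batch_size=256, main_fraction=0.75):
    """Draw one batch of (anchor, positive) pairs: most samples come from a
    single dataset (harder in-batch negatives), the rest from another one
    (to keep learning the global structure between topics)."""
    main_name, other_name = random.sample(list(datasets), 2)
    n_main = int(batch_size * main_fraction)
    batch = random.sample(datasets[main_name], n_main)
    batch += random.sample(datasets[other_name], batch_size - n_main)
    random.shuffle(batch)
    return batch

# Toy stand-ins for the real corpora: each entry is an (anchor, positive) pair
datasets = {
    "qa_pairs": [(f"question {i}", f"answer passage {i}") for i in range(10_000)],
    "duplicate_questions": [(f"question {i}", f"duplicate question {i}") for i in range(10_000)],
    "paper_titles": [(f"paper title {i}", f"cited paper title {i}") for i in range(10_000)],
}
batch = mixed_batch(datasets, batch_size=8)
```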
All datasets used are detailed for each model in the [model card](https://huggingface.co/flax-sentence-embeddings/all_datasets_v3_MiniLM-L12). ## Conclusion You can find all models and datasets we created during the challenge in our [HuggingFace repository](https://huggingface.co/flax-sentence-embeddings). We trained 20 general-purpose Sentence Transformers models such as Mini-LM ([Wang et al., 2020](https://proceedings.neurips.cc/paper/2020/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html)), RoBERTa ([Liu et al., 2019](https://arxiv.org/abs/1907.11692)), DistilBERT ([Sanh et al., 2020](http://arxiv.org/abs/1910.01108)) and MPNet ([Song et al., 2020](https://proceedings.neurips.cc/paper/2020/hash/c3a690be93aa602ee2dc0ccab5b7b67e-Abstract.html)). Our models achieve SOTA on multiple general-purpose Sentence Similarity evaluation tasks. We also shared [8 datasets](https://huggingface.co/flax-sentence-embeddings) specialized for Question Answering, Sentence-Similarity, and Gender Evaluation. General sentence embeddings might be used for many applications. We built a [Spaces demo](https://huggingface.co/spaces/flax-sentence-embeddings/sentence-embeddings) to showcase several applications: * The **sentence similarity** module compares the similarity of the main text with other texts of your choice. In the background, the demo extracts the embedding for each text and computes the similarity between the source sentence and the others using cosine similarity. * **Asymmetric QA** compares how likely each answer candidate of your choice is to answer a given query. * **Search / Cluster** returns nearby answers from a query. For example, if you input « python », it will retrieve the closest sentences using dot-product distance. * **Gender Bias Evaluation** reports *inherent gender bias* in the training set via random sampling of the sentences. Given an anchor text that does not mention the gender of the target occupation and two propositions with gendered pronouns, we compare whether the model assigns a higher similarity to one of the propositions, and thereby evaluate its propensity to favor a specific gender. The [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104) has been an intense and highly rewarding experience! The guidance and presence of Google’s Flax, JAX, and Cloud team members and of the Hugging Face team helped us all learn a lot. We hope all projects had as much fun as we did in ours. Whenever you have questions or suggestions, don’t hesitate to contact us!
Large Language Models: A New Moore's Law?
juliensimon
October 26, 2021
large-language-models
analysis, nlp
https://huggingface.co/blog/large-language-models
# Large Language Models: A New Moore's Law? A few days ago, Microsoft and NVIDIA [introduced](https://www.microsoft.com/en-us/research/blog/using-deepspeed-and-megatron-to-train-megatron-turing-nlg-530b-the-worlds-largest-and-most-powerful-generative-language-model/) Megatron-Turing NLG 530B, a Transformer-based model hailed as "*the world’s largest and most powerful generative language model*." This is an impressive show of Machine Learning engineering, no doubt about it. Yet, should we be excited about this mega-model trend? I, for one, am not. Here's why. <kbd> <img src="assets/33_large_language_models/01_model_size.jpg"> </kbd> ### This is your Brain on Deep Learning Researchers estimate that the human brain contains an average of [86 billion neurons](https://pubmed.ncbi.nlm.nih.gov/19226510/) and 100 trillion synapses. It's safe to assume that not all of them are dedicated to language either. Interestingly, GPT-4 is [expected](https://www.wired.com/story/cerebras-chip-cluster-neural-networks-ai/) to have about 100 trillion parameters... As crude as this analogy is, shouldn't we wonder whether building language models that are about the size of the human brain is the best long-term approach? Of course, our brain is a marvelous device, produced by millions of years of evolution, while Deep Learning models are only a few decades old. Still, our intuition should tell us that something doesn't compute (pun intended). ### Deep Learning, Deep Pockets? As you would expect, training a 530-billion parameter model on humongous text datasets requires a fair bit of infrastructure. In fact, Microsoft and NVIDIA used hundreds of DGX A100 multi-GPU servers. At $199,000 apiece, and factoring in networking equipment, hosting costs, etc., anyone looking to replicate this experiment would have to spend close to $100 million. Want fries with that? Seriously, which organizations have business use cases that would justify spending $100 million on Deep Learning infrastructure? Or even $10 million? Very few. So who are these models for, really? ### That Warm Feeling is your GPU Cluster For all its engineering brilliance, training Deep Learning models on GPUs is a brute force technique. According to the spec sheet, each DGX server can consume up to 6.5 kilowatts. Of course, you'll need at least as much cooling power in your datacenter (or your server closet). Unless you're the Starks and need to keep Winterfell warm in winter, that's another problem you'll have to deal with. In addition, as public awareness grows on climate and social responsibility issues, organizations need to account for their carbon footprint. According to this 2019 [study](https://arxiv.org/pdf/1906.02243.pdf) from the University of Massachusetts, "*training BERT on GPU is roughly equivalent to a trans-American flight*". BERT-Large has 340 million parameters. One can only extrapolate what the footprint of Megatron-Turing could be... People who know me wouldn't call me a bleeding-heart environmentalist. Still, some numbers are hard to ignore. ### So? Am I excited by Megatron-Turing NLG 530B and whatever beast is coming next? No. Do I think that the (relatively small) benchmark improvement is worth the added cost, complexity and carbon footprint? No. Do I think that building and promoting these huge models is helping organizations understand and adopt Machine Learning? No. I'm left wondering what's the point of it all. Science for the sake of science? Good old marketing? Technological supremacy? Probably a bit of each.
I'll leave them to it, then. Instead, let me focus on pragmatic and actionable techniques that you can all use to build high quality Machine Learning solutions. ### Use Pretrained Models In the vast majority of cases, you won't need a custom model architecture. Maybe you'll *want* a custom one (which is a different thing), but there be dragons. Experts only! A good starting point is to look for [models](https://huggingface.co/models) that have been pretrained for the task you're trying to solve (say, [summarizing English text](https://huggingface.co/models?language=en&pipeline_tag=summarization&sort=downloads)). Then, you should quickly try out a few models to predict your own data. If metrics tell you that one works well enough, you're done! If you need a little more accuracy, you should consider fine-tuning the model (more on this in a minute). ### Use Smaller Models When evaluating models, you should pick the smallest one that can deliver the accuracy you need. It will predict faster and require fewer hardware resources for training and inference. Frugality goes a long way. It's nothing new either. Computer Vision practitioners will remember when [SqueezeNet](https://arxiv.org/abs/1602.07360) came out in 2016, achieving a 50x reduction in model size compared to [AlexNet](https://papers.nips.cc/paper/2012/hash/c399862d3b9d6b76c8436e924a68c45b-Abstract.html), while meeting or exceeding its accuracy. How clever that was! Downsizing efforts are also under way in the Natural Language Processing community, using transfer learning techniques such as [knowledge distillation](https://en.wikipedia.org/wiki/Knowledge_distillation). [DistilBERT](https://arxiv.org/abs/1910.01108) is perhaps its most widely known achievement. Compared to the original BERT model, it retains 97% of language understanding while being 40% smaller and 60% faster. You can try it [here](https://huggingface.co/distilbert-base-uncased). The same approach has been applied to other models, such as Facebook's [BART](https://arxiv.org/abs/1910.13461), and you can try DistilBART [here](https://huggingface.co/models?search=distilbart). Recent models from the [Big Science](https://bigscience.huggingface.co/) project are also very impressive. As visible in this graph included in the [research paper](https://arxiv.org/abs/2110.08207), their T0 model outperforms GPT-3 on many tasks while being 16x smaller. <kbd> <img src="assets/33_large_language_models/02_t0.png"> </kbd> You can try T0 [here](https://huggingface.co/bigscience/T0pp). This is the kind of research we need more of! ### Fine-Tune Models If you need to specialize a model, there should be very few reasons to train it from scratch. Instead, you should fine-tune it, that is to say, train it for only a few epochs on your own data. If you're short on data, maybe one of these [datasets](https://huggingface.co/datasets) can get you started. You guessed it, that's another way to do transfer learning, and it'll help you save on everything! * Less data to collect, store, clean and annotate, * Faster experiments and iterations, * Fewer resources required in production. In other words: save time, save money, save hardware resources, save the world! If you need a tutorial, the Hugging Face [course](https://huggingface.co/course) will get you started in no time. ### Use Cloud-Based Infrastructure Like them or not, cloud companies know how to build efficient infrastructure.
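If you want a feel for what running a fine-tuning job on such infrastructure can look like, here is a hedged sketch using the Hugging Face estimator from the SageMaker Python SDK. The training script, role, instance type, framework versions and hyperparameters below are placeholders to adapt to your own setup.

```python
from sagemaker.huggingface import HuggingFace

# Everything below is a placeholder to adapt: script, role, instance type,
# framework versions and hyperparameters.
estimator = HuggingFace(
    entry_point="train.py",                 # your fine-tuning script (e.g. Trainer-based)
    instance_type="ml.p3.2xlarge",          # a single-GPU training instance
    instance_count=1,
    role="arn:aws:iam::123456789012:role/SageMakerRole",
    transformers_version="4.6.1",
    pytorch_version="1.7.1",
    py_version="py36",
    hyperparameters={"model_name_or_path": "distilbert-base-uncased", "epochs": 3},
)

# Launch the managed training job, reading the dataset from S3.
estimator.fit({"train": "s3://my-bucket/my-dataset/train"})
```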
Sustainability studies show that cloud-based infrastructure is more energy and carbon efficient than the alternative: see [AWS](https://sustainability.aboutamazon.com/environment/the-cloud), [Azure](https://azure.microsoft.com/en-us/global-infrastructure/sustainability), and [Google](https://cloud.google.com/sustainability). Earth.org [says](https://earth.org/environmental-impact-of-cloud-computing/) that while cloud infrastructure is not perfect, "[it's] *more energy efficient than the alternative and facilitates environmentally beneficial services and economic growth*." Cloud certainly has a lot going for it when it comes to ease of use, flexibility and pay-as-you-go. It's also a little greener than you probably thought. If you're short on GPUs, why not try fine-tuning your Hugging Face models on [Amazon SageMaker](https://aws.amazon.com/sagemaker/), AWS' managed service for Machine Learning? We've got [plenty of examples](https://huggingface.co/docs/sagemaker/train) for you. ### Optimize Your Models From compilers to virtual machines, software engineers have long used tools that automatically optimize their code for whatever hardware they're running on. However, the Machine Learning community is still struggling with this topic, and for good reason. Optimizing models for size and speed is a devilishly complex task, which involves techniques such as: * Specialized hardware that speeds up training ([Graphcore](https://www.graphcore.ai/), [Habana](https://habana.ai/)) and inference ([Google TPU](https://cloud.google.com/tpu), [AWS Inferentia](https://aws.amazon.com/machine-learning/inferentia/)). * Pruning: remove model parameters that have little or no impact on the predicted outcome. * Fusion: merge model layers (say, convolution and activation). * Quantization: store model parameters in smaller values (say, 8 bits instead of 32 bits). Fortunately, automated tools are starting to appear, such as the [Optimum](https://huggingface.co/hardware) open source library, and [Infinity](https://huggingface.co/infinity), a containerized solution that delivers Transformers accuracy at 1-millisecond latency. ### Conclusion Large language model size has been increasing 10x every year for the last few years. This is starting to look like another [Moore's Law](https://en.wikipedia.org/wiki/Moore%27s_law). We've been there before, and we should know that this road leads to diminishing returns, higher cost, more complexity, and new risks. Exponentials tend not to end well. Remember [Meltdown and Spectre](https://meltdownattack.com/)? Do we want to find out what that looks like for AI? Instead of chasing trillion-parameter models (place your bets), wouldn't we all be better off if we built practical and efficient solutions that all developers can use to solve real-world problems? *Interested in how Hugging Face can help your organization build and deploy production-grade Machine Learning solutions? Get in touch at [[email protected]](mailto:[email protected]) (no recruiters, no sales pitches, please).*
Course Launch Community Event
sgugger
October 26, 2021
course-launch-event
community, nlp
https://huggingface.co/blog/course-launch-event
# Course Launch Community Event We are excited to share that after a lot of work from the Hugging Face team, part 2 of the [Hugging Face Course](https://hf.co/course) will be released on November 15th! Part 1 focused on teaching you how to use a pretrained model, fine-tune it on a text classification task then upload the result to the [Model Hub](https://hf.co/models). Part 2 will focus on all the other common NLP tasks: token classification, language modeling (causal and masked), translation, summarization and question answering. It will also take a deeper dive in the whole Hugging Face ecosystem, in particular [🤗 Datasets](https://github.com/huggingface/datasets) and [🤗 Tokenizers](https://github.com/huggingface/tokenizers). To go with this release, we are organizing a large community event to which you are invited! The program includes two days of talks, then team projects focused on fine-tuning a model on any NLP task ending with live demos like [this one](https://huggingface.co/spaces/flax-community/chef-transformer). Those demos will go nicely in your portfolio if you are looking for a new job in Machine Learning. We will also deliver a certificate of completion to all the participants that achieve building one of them. AWS is sponsoring this event by offering free compute to participants via [Amazon SageMaker](https://aws.amazon.com/sagemaker/). <div class="flex justify-center"> <img src="/blog/assets/34_course_launch/amazon_logo_dark.png" width=30% class="hidden dark:block"> <img src="/blog/assets/34_course_launch/amazon_logo_white.png" width=30% class="dark:hidden"> </div> To register, please fill out [this form](https://docs.google.com/forms/d/e/1FAIpQLSd17_u-wMCdO4fcOPOSMLKcJhuIcevJaOT8Y83Gs-H6KFF5ew/viewform). You will find below more details on the two days of talks. ## Day 1 (November 15th): A high-level view of Transformers and how to train them The first day of talks will focus on a high-level presentation of Transformers models and the tools we can use to train or fine-tune them. <div class="container md:grid md:grid-cols-2 gap-2 max-w-7xl" > <div class="text-center flex flex-col items-center"> <img src="/blog/assets/34_course_launch/thom_wolf.png" width=50% style="border-radius: 50%;"> <p><strong>Thomas Wolf: <em>Transfer Learning and the birth of the Transformers library</em></strong></p> <p>Thomas Wolf is co-founder and Chief Science Officer of HuggingFace. The tools created by Thomas Wolf and the Hugging Face team are used across more than 5,000 research organisations including Facebook Artificial Intelligence Research, Google Research, DeepMind, Amazon Research, Apple, the Allen Institute for Artificial Intelligence as well as most university departments. Thomas Wolf is the initiator and senior chair of the largest research collaboration that has ever existed in Artificial Intelligence: <a href="https://bigscience.huggingface.co">“BigScience”</a>, as well as a set of widely used <a href="https://github.com/huggingface/">libraries and tools</a>. 
Thomas Wolf is also a prolific educator and a thought leader in the field of Artificial Intelligence and Natural Language Processing, a regular invited speaker to conferences all around the world (<a href="https://thomwolf.io">https://thomwolf.io</a>).</p> </div> <div class="text-center flex flex-col items-center"> <img src="/blog/assets/34_course_launch/meg_mitchell.png" width=50% style="border-radius: 50%;"> <p><strong>Margaret Mitchell: <em>On Values in ML Development</em></strong></p> <p>Margaret Mitchell is a researcher working on Ethical AI, currently focused on the ins and outs of ethics-informed AI development in tech. She has published over 50 papers on natural language generation, assistive technology, computer vision, and AI ethics, and holds multiple patents in the areas of conversation generation and sentiment classification. She previously worked at Google AI as a Staff Research Scientist, where she founded and co-led Google&#39;s Ethical AI group, focused on foundational AI ethics research and operationalizing AI ethics Google-internally. Before joining Google, she was a researcher at Microsoft Research, focused on computer vision-to-language generation; and was a postdoc at Johns Hopkins, focused on Bayesian modeling and information extraction. She holds a PhD in Computer Science from the University of Aberdeen and a Master&#39;s in computational linguistics from the University of Washington. While earning her degrees, she also worked from 2005-2012 on machine learning, neurological disorders, and assistive technology at Oregon Health and Science University. She has spearheaded a number of workshops and initiatives at the intersections of diversity, inclusion, computer science, and ethics. Her work has received awards from Secretary of Defense Ash Carter and the American Foundation for the Blind, and has been implemented by multiple technology companies. She likes gardening, dogs, and cats.</p> </div> <div class="text-center flex flex-col items-center"> <img src="/blog/assets/34_course_launch/jakob_uszkoreit.png" width=50% style="border-radius: 50%;"> <p><strong>Jakob Uszkoreit: <em>It Ain&#39;t Broke So <del>Don&#39;t Fix</del> Let&#39;s Break It</em></strong></p> <p>Jakob Uszkoreit is the co-founder of Inceptive. Inceptive designs RNA molecules for vaccines and therapeutics using large-scale deep learning in a tight loop with high throughput experiments with the goal of making RNA-based medicines more accessible, more effective and more broadly applicable. Previously, Jakob worked at Google for more than a decade, leading research and development teams in Google Brain, Research and Search working on deep learning fundamentals, computer vision, language understanding and machine translation.</p> </div> <div class="text-center flex flex-col items-center"> <img src="/blog/assets/34_course_launch/jay_alammar.png" width=50% style="border-radius: 50%;"> <p><strong>Jay Alammar: <em>A gentle visual intro to Transformers models</em></strong></p> <p>Jay Alammar, Cohere. 
Through his popular ML blog, Jay has helped millions of researchers and engineers visually understand machine learning tools and concepts from the basic (ending up in numPy, pandas docs) to the cutting-edge (Transformers, BERT, GPT-3).</p> </div> <div class="text-center flex flex-col items-center"> <img src="/blog/assets/34_course_launch/matthew_watson.png" width=50% style="border-radius: 50%;"> <p><strong>Matthew Watson: <em>NLP workflows with Keras</em></strong></p> <p>Matthew Watson is a machine learning engineer on the Keras team, with a focus on high-level modeling APIs. He studied Computer Graphics during undergrad and a Masters at Stanford University. An almost English major who turned towards computer science, he is passionate about working across disciplines and making NLP accessible to a wider audience.</p> </div> <div class="text-center flex flex-col items-center"> <img src="/blog/assets/34_course_launch/chen_qian.png" width=50% style="border-radius: 50%;"> <p><strong>Chen Qian: <em>NLP workflows with Keras</em></strong></p> <p>Chen Qian is a software engineer from Keras team, with a focus on high-level modeling APIs. Chen got a Master degree of Electrical Engineering from Stanford University, and he is especially interested in simplifying code implementations of ML tasks and large-scale ML.</p> </div> <div class="text-center flex flex-col items-center"> <img src="/blog/assets/34_course_launch/mark_saroufim.png" width=50% style="border-radius: 50%;"> <p><strong>Mark Saroufim: <em>How to Train a Model with Pytorch</em></strong></p> <p>Mark Saroufim is a Partner Engineer at Pytorch working on OSS production tools including TorchServe and Pytorch Enterprise. In his past lives, Mark was an Applied Scientist and Product Manager at Graphcore, <a href="http://yuri.ai/">yuri.ai</a>, Microsoft and NASA&#39;s JPL. His primary passion is to make programming more fun.</p> </div> </div> ## Day 2 (November 16th): The tools you will use Day 2 will be focused on talks by the Hugging Face, [Gradio](https://www.gradio.app/), and [AWS](https://aws.amazon.com/) teams, showing you the tools you will use. <div class="container md:grid md:grid-cols-2 gap-2 max-w-7xl" > <div class="text-center flex flex-col items-center"> <img src="/blog/assets/34_course_launch/lewis_tunstall.png" width=50% style="border-radius: 50%;"> <p><strong>Lewis Tunstall: <em>Simple Training with the 🤗 Transformers Trainer</em></strong></p> <p>Lewis is a machine learning engineer at Hugging Face, focused on developing open-source tools and making them accessible to the wider community. 
He is also a co-author of an upcoming O’Reilly book on Transformers and you can follow him on Twitter (@_lewtun) for NLP tips and tricks!</p> </div> <div class="text-center flex flex-col items-center"> <img src="/blog/assets/34_course_launch/matthew_carrigan.png" width=50% style="border-radius: 50%;"> <p><strong>Matthew Carrigan: <em>New TensorFlow Features for 🤗 Transformers and 🤗 Datasets</em></strong></p> <p>Matt is responsible for TensorFlow maintenance at Transformers, and will eventually lead a coup against the incumbent PyTorch faction which will likely be co-ordinated via his Twitter account @carrigmat.</p> </div> <div class="text-center flex flex-col items-center"> <img src="/blog/assets/34_course_launch/lysandre_debut.png" width=50% style="border-radius: 50%;"> <p><strong>Lysandre Debut: <em>The Hugging Face Hub as a means to collaborate on and share Machine Learning projects</em></strong></p> <p>Lysandre is a Machine Learning Engineer at Hugging Face where he is involved in many open source projects. His aim is to make Machine Learning accessible to everyone by developing powerful tools with a very simple API.</p> </div> <div class="text-center flex flex-col items-center"> <img src="/blog/assets/34_course_launch/sylvain_gugger.png" width=50% style="border-radius: 50%;"> <p><strong>Sylvain Gugger: <em>Supercharge your PyTorch training loop with 🤗 Accelerate</em></strong></p> <p>Sylvain is a Research Engineer at Hugging Face and one of the core maintainers of 🤗 Transformers and the developer behind 🤗 Accelerate. He likes making model training more accessible.</p> </div> <div class="text-center flex flex-col items-center"> <img src="/blog/assets/34_course_launch/lucile_saulnier.png" width=50% style="border-radius: 50%;"> <p><strong>Lucile Saulnier: <em>Get your own tokenizer with 🤗 Transformers & 🤗 Tokenizers</em></strong></p> <p>Lucile is a machine learning engineer at Hugging Face, developing and supporting the use of open source tools. She is also actively involved in many research projects in the field of Natural Language Processing such as collaborative training and BigScience.</p> </div> <div class="text-center flex flex-col items-center"> <img src="/blog/assets/34_course_launch/merve_noyan.png" width=50% style="border-radius: 50%;"> <p><strong>Merve Noyan: <em>Showcase your model demos with 🤗 Spaces</em></strong></p> <p>Merve is a developer advocate at Hugging Face, working on developing tools and building content around them to democratize machine learning for everyone.</p> </div> <div class="text-center flex flex-col items-center"> <img src="/blog/assets/34_course_launch/abubakar_abid.png" width=50% style="border-radius: 50%;"> <p><strong>Abubakar Abid: <em>Building Machine Learning Applications Fast</em></strong></p> <p>Abubakar Abid is the CEO of <a href="www.gradio.app">Gradio</a>. He received his Bachelor&#39;s of Science in Electrical Engineering and Computer Science from MIT in 2015, and his PhD in Applied Machine Learning from Stanford in 2021. In his role as the CEO of Gradio, Abubakar works on making machine learning models easier to demo, debug, and deploy.</p> </div> <div class="text-center flex flex-col items-center"> <img src="/blog/assets/34_course_launch/mathieu_desve.png" width=50% style="border-radius: 50%;"> <p><strong>Mathieu Desvé: <em>AWS ML Vision: Making Machine Learning Accessible to all Customers</em></strong></p> <p>Technology enthusiast, maker on my free time. 
I like challenges and solving problems for clients and users, and working with talented people to learn every day. Since 2004, I have worked in multiple positions, switching between frontend, backend, infrastructure, operations and management. I try to solve common technical and managerial issues in an agile manner.</p> </div> <div class="text-center flex flex-col items-center"> <img src="/blog/assets/34_course_launch/philipp_schmid.png" width=50% style="border-radius: 50%;"> <p><strong>Philipp Schmid: <em>Managed Training with Amazon SageMaker and 🤗 Transformers</em></strong></p> <p>Philipp Schmid is a Machine Learning Engineer and Tech Lead at Hugging Face, where he leads the collaboration with the Amazon SageMaker team. He is passionate about democratizing and productionizing cutting-edge NLP models and improving the ease of use for Deep Learning.</p> </div> </div>
Scaling up BERT-like model Inference on modern CPU - Part 2
mfuntowicz
November 4, 2021
bert-cpu-scaling-part-2
partnerships, intel, guide, nlp
https://huggingface.co/blog/bert-cpu-scaling-part-2
# Scaling up BERT-like model Inference on modern CPU - Part 2 <script async defer src="https://unpkg.com/medium-zoom-element@0/dist/medium-zoom-element.min.js"></script> ## Introduction: Using Intel Software to Optimize AI Efficiency on CPU As we detailed in our [previous blog post](https://huggingface.co/blog/bert-cpu-scaling-part-1), Intel Xeon CPUs provide a set of features especially designed for AI workloads, such as AVX512 or VNNI (Vector Neural Network Instructions) for efficient inference using integer-quantized neural networks, along with additional system tools to ensure the work is being done in the most efficient way. In this blog post, we will focus on software optimizations and give you a sense of the performance of the new Ice Lake generation of Xeon CPUs from Intel. Our goal is to give you a full picture of what’s available on the software side to make the most out of your Intel hardware. As in the previous blog post, we show the performance with benchmark results and charts, along with new tools to make all these knobs and features easy to use. Back in April, Intel launched its [latest generation of Intel Xeon processors](https://www.intel.com/content/www/us/en/products/details/processors/xeon/scalable.html), codenamed Ice Lake, targeting more efficient and performant AI workloads. More precisely, Ice Lake Xeon CPUs can achieve up to 75% faster inference on a variety of NLP tasks compared with the previous generation of Cascade Lake Xeon processors. This is achieved by a combination of both hardware and software improvements, [such as new instructions](https://en.wikichip.org/wiki/x86/avx512_vnni) and PCIe 4.0, featured on the new Sunny Cove architecture to support Machine Learning and Deep Learning workloads. Last but not least, Intel worked on dedicated optimizations for various frameworks which now come in Intel flavors like [Intel’s Extension for Scikit Learn](https://intel.github.io/scikit-learn-intelex/), [Intel TensorFlow](https://www.intel.com/content/www/us/en/developer/articles/guide/optimization-for-tensorflow-installation-guide.html) and [Intel PyTorch Extension](https://www.intel.com/content/www/us/en/developer/articles/containers/pytorch-extension.html). All these features are very low-level in the stack of what Data Scientists and Machine Learning Engineers use in their day-to-day toolset. In a vast majority of situations, it is more common to rely on higher level frameworks and libraries to handle multi-dimensional array manipulation, such as [PyTorch](https://pytorch.org) and [TensorFlow](https://www.tensorflow.org/), and make use of highly tuned mathematical operators such as [BLAS (Basic Linear Algebra Subroutines)](http://www.netlib.org/blas/) for the computational part. In this area, Intel plays an essential role by providing software components under the oneAPI umbrella which make it very easy to use highly efficient linear algebra routines through Intel [oneMKL (Math Kernel Library)](https://www.intel.com/content/www/us/en/develop/documentation/oneapi-programming-guide/top/api-based-programming/intel-oneapi-math-kernel-library-onemkl.html), and higher-level parallelization frameworks such as Intel OpenMP or [Threading Building Blocks (oneTBB)](https://www.intel.com/content/www/us/en/developer/tools/oneapi/onetbb.html).
Also, oneAPI provides some domain-specific libraries such as Intel [oneDNN](https://www.intel.com/content/www/us/en/developer/tools/oneapi/onednn.html) for deep neural network primitives (ReLU, fully-connected, etc.) or [oneCCL](https://www.intel.com/content/www/us/en/developer/tools/oneapi/oneccl.html) for collective communication especially useful when using distributed setups to access efficient all-reduce operations over multiple hosts. Some of these libraries, especially MKL or oneDNN, are natively included in frameworks such as PyTorch and TensorFlow ([since 2.5.0](https://medium.com/intel-analytics-software/leverage-intel-deep-learning-optimizations-in-tensorflow-129faa80ee07)) to bring all the performance improvements to the end user out of the box. When one would like to target very specific hardware features, Intel provides custom versions of the most common software, especially optimized for the Intel platform. This is for instance the case with TensorFlow, [for which Intel provides custom, highly tuned and optimized versions of the framework](https://www.intel.com/content/www/us/en/developer/articles/guide/optimization-for-tensorflow-installation-guide.html), or with the Intel PyTorch Extension (IPEX) framework which can be considered as a feature laboratory before upstreaming to PyTorch. ## Deep Dive: Leveraging advanced Intel features to improve AI performances ### Performance tuning knobs As highlighted above, we are going to cover a new set of tunable items to improve the performance of our AI application. From a high-level point of view, every machine learning and deep learning framework is made of the same ingredients: 1. A structural way of representing data in memory (vector, matrices, etc.) 2. Implementation of mathematical operators 3. Efficient parallelization of the computations on the target hardware _In addition to the points listed above, deep learning frameworks provide ways to represent data flow and dependencies to compute gradients. This falls out of the scope of this blog post, and it leverages the same components as the ones listed above!_ <br> <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="Intel libraries overview under the oneAPI umbrella" src="assets/35_bert_cpu_scaling_part_2/oneapi.jpg"></medium-zoom> <figcaption>Figure 1. Intel libraries overview under the oneAPI umbrella</figcaption> </figure> <br> ### 1. Memory allocation and management libraries This blog post will deliberately skip the first point about the data representation as it is something rather framework specific. For reference, PyTorch uses its very own implementation, called [ATen](https://github.com/pytorch/pytorch/tree/master/aten/src), while TensorFlow relies on the open source library [Eigen](https://eigen.tuxfamily.org/index.php?title=Main_Page) for this purpose. While it’s very complex to apply generic optimizations to different object structures and layouts, there is one area where we can have an impact: Memory Allocation. As a short reminder, memory allocation here refers to the process of programmatically asking the operating system a dynamic (unknown beforehand) area on the system where we will be able to store items into, such as the malloc and derived in C or the new operator in C++. Memory efficiency, both in terms of speed but also in terms of fragmentation, is a vast scientific and engineering subject with multiple solutions depending on the task and underlying hardware. 
Over the past years we have seen more and more work in this area, notably: - [jemalloc](http://jemalloc.net/) (Facebook - 2005) - [mimalloc](https://microsoft.github.io/mimalloc/) (Microsoft - 2019) - [tcmalloc](https://abseil.io/blog/20200212-tcmalloc) (Google - 2020) Each pushes forward a different approach to improve aspects of memory allocation and management for various kinds of software. ### 2. Efficient parallelization of computations Now that we have an efficient way to represent our data, we need a way to get the most out of the computational hardware at our disposal. Interestingly, when it comes to inference, CPUs have a potential advantage over GPUs in the sense that they are everywhere, and they do not require specific application components and administration staff to operate them. Modern CPUs come with many cores and complex mechanisms to increase the general performance of software. Yet, as we highlighted in [the first blog post](https://hf.co/blog/bert-cpu-scaling-part-1), they also have features which can be tweaked depending on the kind of workload (CPU or I/O bound) you target, to further improve performance for your application. Still, implementing parallel algorithms might not be as simple as throwing more cores to do the work. Many factors - such as the data structures used, concurrent data access, and CPU cache invalidation - might prevent your algorithm from being effectively faster. As a reference talk, we recommend the talk from [**Scott Meyers: CPU Caches and Why You Care**](https://www.youtube.com/watch?v=WDIkqP4JbkE) if you are interested in diving deeper into the subject. Thankfully, there are libraries which make the development process of such parallel algorithms easier and less error-prone. Among the most common parallel libraries we can mention OpenMP and TBB (Threading Building Blocks), which work at various levels, from programming APIs in C/C++ to environment variable tuning and dynamic scheduling. On Intel hardware, it is advised to use the Intel implementation of the OpenMP specification, often referred to as "IOMP", available as part of the [Intel oneAPI toolkit](https://www.intel.com/content/www/us/en/developer/tools/oneapi/overview.html).
Nowadays, almost all processors support basic mathematical operations on scalar items (one single item at time) or in vectorized mode (meaning they operate on multiple items within the same CPU instructions, referred as SIMD “Single Instruction Multiple Data”). Famous sets of SIMD instructions are SSE2, AVX, AVX2 and the AVX-512 present on the latest generations of Intel CPUs being able to operate over 16 bytes of content within a single CPU clock. Most of the time, one doesn't have to worry too much about the actual assembly being generated to execute a simple element-wise addition between two vectors, but if you do, again there are some libraries which allow you to go one level higher than writing code calling CPU specific intrinsic to implement efficient mathematical kernels. This is for instance what Intel’s MKL “Math Kernel Library” provides, along with the famous BLAS “Basic Linear Algebra Subroutines” interface to implement all the basic operations for linear algebra. Finally, on top of this, one can find some domain specific libraries such as Intel's oneDNN which brings all the most common and essential building blocks required to implement neural network layers. Intel MKL and oneDNN are natively integrated within the PyTorch framework, where it can enable some performance speedup for certain operations such as Linear + ReLU or Convolution. On the TensorFlow side, oneDNN can be enabled by setting the environment variable `TF_ENABLE_ONEDNN_OPTS=1` (_TensorFlow >= 2.5.0_) to achieve similar machinery under the hood. ## More Efficient AI Processing on latest Intel Ice Lake CPUs In order to report the performances of the Ice Lake product lineup we will closely follow [the methodology we used for the first blog](https://hf.co/blog/bert-cpu-scaling-part-1#2-benchmarking-methodology) post of this series. As a reminder, we will adopt the exact same schema to benchmark the various setups we will highlight through this second blog post. More precisely, the results presented in the following sections are based on: - PyTorch: 1.9.0 - TensorFlow: 2.5.0 - Batch Sizes: 1, 4, 8, 16, 32, 128 - Sequence Lengths: 8, 16, 32, 64, 128, 384, 512 We will present the results through metrics accepted by the field to establish the performances of the proposed optimizations: - Latency: Time it takes to execute a single inference request (i.e., “forward call”) through the model, expressed in millisecond. - Throughput: Number of inference requests (i.e., “forward calls”) the system can sustain within a defined period, expressed in call/sec. We will also provide an initial baseline showing out-of-the-box results and a second baseline applying all the different optimizations we highlighted in the first blogpost. Everything was run on an Intel provided cloud instance featuring the [Ice Lake Xeon Platinum 8380](https://ark.intel.com/content/www/fr/fr/ark/products/205684/intel-xeon-platinum-8380hl-processor-38-5m-cache-2-90-ghz.html) CPU operating on Ubuntu 20.04.2 LTS. 
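As an illustration of how these two metrics can be collected, below is a minimal sketch of a latency/throughput measurement loop for a BERT-like model in PyTorch. The model name, warm-up and iteration counts are arbitrary illustrative choices, not the exact benchmarking harness used to produce the results in this post.

```python
import time
import torch
from transformers import AutoModel, AutoTokenizer

# Arbitrary illustrative choices, not the exact harness used for the results below.
MODEL_NAME = "bert-base-cased"
BATCH_SIZE, SEQ_LEN, WARMUP, STEPS = 1, 128, 10, 100

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

# Dummy batch of the desired shape (truncated/padded to SEQ_LEN tokens)
inputs = tokenizer(
    ["a " * SEQ_LEN] * BATCH_SIZE,
    truncation=True, padding="max_length", max_length=SEQ_LEN, return_tensors="pt",
)

with torch.no_grad():
    for _ in range(WARMUP):              # warm up caches and lazy initializations
        model(**inputs)
    start = time.perf_counter()
    for _ in range(STEPS):
        model(**inputs)
    elapsed = time.perf_counter() - start

latency_ms = elapsed / STEPS * 1000           # time per forward call
throughput = STEPS * BATCH_SIZE / elapsed     # forward calls (samples) per second
print(f"latency: {latency_ms:.1f} ms - throughput: {throughput:.1f} samples/s")
```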
You can find the same processors on the various cloud providers: - [AWS m6i / c6i instances](https://aws.amazon.com/fr/blogs/aws/new-amazon-ec2-c6i-instances-powered-by-the-latest-generation-intel-xeon-scalable-processors/) - [Azure Ev5 / Dv5 series](https://azure.microsoft.com/en-us/blog/upgrade-your-infrastructure-with-the-latest-dv5ev5-azure-vms-in-preview/) <br> <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="Intel Ice Lake Xeon 8380 Specifications" src="assets/35_bert_cpu_scaling_part_2/intel_xeon_8380_specs.svg"></medium-zoom> <figcaption>Figure 3. Intel Ice Lake Xeon 8380 Specifications</figcaption> </figure> <br> ### Establishing the baseline As mentioned previously, the baselines will be composed of two different setups: - Out-of-the-box: We are running the workloads as-is, without any tuning - Optimized: We apply the various knobs present in [Blog #1](https://hf.co/blog/bert-cpu-scaling-part-1#2-benchmarking-methodology) Also, from the comments we had about the previous blog post, we wanted to change the way we present the framework within the resulting benchmarks. As such, through the rest of this second blog post, we will split framework benchmarking results according to the following: - Frameworks using “eager” mode for computations (PyTorch, TensorFlow) - Frameworks using “graph” mode for computations (TorchScript, TensorFlow Graph, Intel Tensorflow) #### Baseline: Eager frameworks latencies Frameworks operating in eager mode usually discover the actual graph while executing it. More precisely, the actual computation graph is not known beforehand and you gradually (_eagerly_) execute one operator which will become the input of the next one, etc. until you reach leaf nodes (outputs). These frameworks usually provide more flexibility in the algorithm you implement at the cost of increased runtime overhead and slightly potential more memory usage to keep track of all the required elements for the backward pass. Last but not least, it is usually harder through these frameworks to enable graph optimizations such as operator fusion. For instance, many deep learning libraries such as oneDNN have optimized kernels for Convolution + ReLU but you actually need to know before executing the graph that this pattern will occur within the sequence of operation, which is, by design, not something possible within eager frameworks. <br> <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="PyTorch latencies with respect to the number of cores involved" src="assets/35_bert_cpu_scaling_part_2/baselines/eager_mode_pytorch_baseline.svg"></medium-zoom> <figcaption>Figure 4. PyTorch latencies with respect to the number of cores involved</figcaption> </figure> <br> <br> <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="Google's TensorFlow latencies with respect to the number of cores involved" src="assets/35_bert_cpu_scaling_part_2/baselines/eager_mode_tensorflow_baseline.svg"></medium-zoom> <figcaption> Figure 5. Google's TensorFlow latencies with respect to the number of cores involved</figcaption> </figure> <br> <br> <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="Google's TensorFlow with oneDNN enabled latencies with respect to the number of cores involved" src="assets/35_bert_cpu_scaling_part_2/baselines/eager_mode_tensorflow_onednn_baseline.svg"></medium-zoom> <figcaption>Figure 6. 
Google's TensorFlow with oneDNN enabled latencies with respect to the number of cores involved</figcaption> </figure> <br> <br> <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="Intel TensorFlow latencies with respect to the number of cores involved" src="assets/35_bert_cpu_scaling_part_2/baselines/eager_mode_intel_tensorflow_baseline.svg"></medium-zoom> <figcaption>Figure 7. Intel TensorFlow latencies with respect to the number of cores involved</figcaption> </figure> <br> The global trend highlights the positive impact of the number of cores on the observed latencies. In most of the cases, increasing the number of cores reduces the computation time across the different workload sizes. Still, putting more cores to the task doesn't result in monotonic latency reductions; there is always a trade-off between the workload’s size and the number of resources you allocate to execute the job. As you can see on the charts above, one very common pattern tends to arise from using all the cores available on systems with more than one CPU (more than one socket). The inter-socket communication introduces a significant latency overhead and results in very little improvement, or even increased latency, overall. Also, this inter-socket communication overhead tends to be less and less perceptible as the workload becomes larger, meaning that larger workloads actually benefit from using all the available cores. In this domain, PyTorch (Figure 4) and Intel TensorFlow (Figure 7) seem to have slightly better parallelism support, as shown for sequence lengths 384 and 512, for which using all the cores still reduces the observed latency. #### Baseline: Graph frameworks latencies This time we compare performance when using frameworks in “Graph” mode, where the graph is fully known beforehand, and all the allocations and optimizations such as graph pruning and operator fusing can be made. <br> <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="TorchScript latencies with respect to the number of cores involved" src="assets/35_bert_cpu_scaling_part_2/baselines/graph_mode_torchscript_baseline.svg"></medium-zoom> <figcaption>Figure 8. TorchScript latencies with respect to the number of cores involved</figcaption> </figure> <br> <br> <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="Google's TensorFlow latencies with respect to the number of cores involved" src="assets/35_bert_cpu_scaling_part_2/baselines/graph_mode_tensorflow_baseline.svg"></medium-zoom> <figcaption>Figure 9. Google's TensorFlow latencies with respect to the number of cores involved</figcaption> </figure> <br> <br> <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="Google's TensorFlow with oneDNN enabled latencies with respect to the number of cores involved" src="assets/35_bert_cpu_scaling_part_2/baselines/graph_mode_tensorflow_onednn_baseline.svg"></medium-zoom> <figcaption>Figure 10. Google's TensorFlow with oneDNN enabled latencies with respect to the number of cores involved</figcaption> </figure> <br> <br> <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="Intel TensorFlow latencies with respect to the number of cores involved" src="assets/35_bert_cpu_scaling_part_2/baselines/graph_mode_intel_tensorflow_baseline.svg"></medium-zoom> <figcaption>Figure 11.
Intel TensorFlow latencies with respect to the number of cores involved</figcaption> </figure> <br> This is often referred to as “tracing” the graph and, as you can see here, the results are not that different between TorchScript (the graph execution mode from PyTorch) and the TensorFlow variants. All TensorFlow implementations seem to perform better than TorchScript when the parallelization is limited (low number of cores involved in the intra-operation computations), but this seems not to scale efficiently as we increase the computation resources, whereas TorchScript seems to be able to better leverage the power of modern CPUs. Still, the margin between all these frameworks is in most cases very limited. ### Tuning the Memory Allocator: Can this impact the latencies observed? One crucial component every program dynamically allocating memory relies on is the memory allocator. If you are familiar with C/C++ programming, this component provides the low-level machinery behind malloc/free or new/delete. Most of the time you don’t have to worry too much about it and the default ones (glibc for instance on most Linux distributions) will provide great performance out of the box. Still, in some situations they might not provide the most efficient performance, as these default allocators are designed to be “good” most of the time, and not fine-tuned for specific workloads or parallelism. So, what are the alternatives, and when are they more suitable than the default ones? Well, again, it depends on the kind of context around your software. Possible situations include a large number of allocations/de-allocations causing fragmentation over time, the specific hardware and/or architecture you’re executing your software on, and finally the level of parallelism of your application. Do you see where this is going? Deep learning, and by extension all applications doing heavy computations, are heavily multi-threaded; that’s also the case for software libraries such as PyTorch, TensorFlow and any other frameworks targeting Machine Learning workloads. The default memory allocator strategies often rely on global memory pools which require the usage of synchronization primitives to operate, increasing the overall pressure on the system and reducing the performance of your application. Some recent work by companies such as Google, Facebook and Microsoft provided alternative memory allocation strategies implemented in custom memory allocator libraries that one can easily integrate directly within its software components, or load via dynamic shared library preloading to swap the library being used for allocation/de-allocation. Among these libraries, we can cite [tcmalloc](https://abseil.io/blog/20200212-tcmalloc), [jemalloc](http://jemalloc.net/) and [mimalloc](https://microsoft.github.io/mimalloc/). <br> <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="Legend - Various allocator benchmarked on different tasks" src="assets/35_bert_cpu_scaling_part_2/allocator_benchmark_legend.png"></medium-zoom> </figure> <br> <br> <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="Various allocator benchmarked on different tasks" src="assets/35_bert_cpu_scaling_part_2/allocator_benchmark.png"></medium-zoom> <figcaption>Figure 12. Various memory allocators benchmarked on different tasks</figcaption> </figure> <br> Through this blog post we will only focus on benchmarking tcmalloc and jemalloc as potential drop-in memory allocator candidates.
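Swapping allocators does not require any change to the framework code: preloading the shared library when launching the process is enough. Below is a small sketch of how such a sweep could be scripted from Python; the library paths and the `benchmark.py` script are placeholders for your own setup.

```python
import os
import subprocess

# Hypothetical library paths: adjust to wherever tcmalloc / jemalloc live on your system.
ALLOCATORS = {
    "glibc (default)": None,
    "tcmalloc": "/usr/lib/x86_64-linux-gnu/libtcmalloc.so.4",
    "jemalloc": "/usr/lib/x86_64-linux-gnu/libjemalloc.so.2",
}

for name, lib in ALLOCATORS.items():
    env = dict(os.environ)
    if lib is not None:
        # LD_PRELOAD swaps the allocator for the whole child process, no code change required.
        env["LD_PRELOAD"] = lib
    print(f"Running benchmark with allocator: {name}")
    subprocess.run(["python", "benchmark.py"], env=env, check=True)  # benchmark.py is a placeholder
```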
To be fully transparent, for the scope of the results below we used tcmalloc as part of the gperftools package available on Ubuntu distributions version 2.9 and jemalloc 5.1.0-1. #### Memory allocator benchmarks Again, we first compare performance against frameworks executing in an eager fashion. This is potentially the use case where the allocator can play the biggest role: As the graph is unknown before its execution, each framework must manage the memory required for each operation when it meets the actual execution of the above node, no planning ahead possible. In this context, the allocator is a major component due to all the system calls to allocate and reclaim memory. <br> <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="PyTorch memory allocator and cores scaling latencies" src="assets/35_bert_cpu_scaling_part_2/allocators/allocator_and_cores_pytorch_latency.svg"></medium-zoom> <figcaption>Figure 13. PyTorch memory allocator and cores scaling latencies</figcaption> </figure> <br> <br> <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="Google's TensorFlow memory allocator and cores scaling latencies" src="assets/35_bert_cpu_scaling_part_2/allocators/allocator_and_cores_tensorflow_latency.svg"></medium-zoom> <figcaption>Figure 14. Google's TensorFlow memory allocator and cores scaling latencies</figcaption> </figure> <br> <br> <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="Google's TensorFlow with oneDNN enabled memory allocator and cores scaling latencies" src="assets/35_bert_cpu_scaling_part_2/allocators/allocator_and_cores_tensorflow_onednn_latency.svg"></medium-zoom> <figcaption>Figure 15. Google's TensorFlow with oneDNN enabled memory allocator and cores scaling latencies</figcaption> </figure> <br> <br> <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="Intel TensorFlow memory allocator and cores scaling latencies" src="assets/35_bert_cpu_scaling_part_2/allocators/allocator_and_cores_intel_tensorflow_latency.svg"></medium-zoom> <figcaption>Figure 16. Intel TensorFlow memory allocator and cores scaling latencies</figcaption> </figure> <br> As per the graph above, you can notice that the standard library allocator (glibc) is often behind performance-wise but provides reasonable performance. Jemalloc allocator is sometimes the fastest around but in very specific situations, where the concurrency is not that high, this can be explained by the underlying structure jemalloc uses internally which is out of the scope of this blog, but you can read the [Facebook Engineering blog](https://engineering.fb.com/2011/01/03/core-data/scalable-memory-allocation-using-jemalloc/) if you want to know more about it. Finally, tcmalloc seems to be the one providing generally best performances across all the workloads benchmarked here. Again, tcmalloc has a different approach than Jemalloc in the way it allocates resources, especially tcmalloc maintains a pool of memory segments locally for each thread, which reduces the necessity to have global, exclusive, critical paths. Again, for more details, I invite you to read the full [blog by Google Abseil team](https://abseil.io/blog/20200212-tcmalloc). Now, back to the graph mode where we benchmark framework having an omniscient representation of the overall computation graph. 
<br> <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="TorchScript memory allocator and cores scaling latencies" src="assets/35_bert_cpu_scaling_part_2/allocators/allocator_and_cores_torchscript_latency.svg"></medium-zoom> <figcaption>Figure 17. TorchScript memory allocator and cores scaling latencies</figcaption> </figure> <br> <br> <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="Google's TensorFlow memory allocator and cores scaling latencies" src="assets/35_bert_cpu_scaling_part_2/allocators/allocator_and_cores_tensorflow_graph_latency.svg"></medium-zoom> <figcaption>Figure 18. Google's TensorFlow memory allocator and cores scaling latencies</figcaption> </figure> <br> <br> <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="Google's TensorFlow with oneDNN enabled memory allocator and cores scaling latencies" src="assets/35_bert_cpu_scaling_part_2/allocators/allocator_and_cores_tensorflow_onednn_graph_latency.svg"></medium-zoom> <figcaption>Figure 19. Google's TensorFlow with oneDNN enabled memory allocator and cores scaling latencies</figcaption> </figure> <br> <br> <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="Intel TensorFlow memory allocator and cores scaling latencies" src="assets/35_bert_cpu_scaling_part_2/allocators/allocator_and_cores_intel_tensorflow_graph_latency.svg"></medium-zoom> <figcaption>Figure 20. Intel TensorFlow memory allocator and cores scaling latencies</figcaption> </figure> <br> This time, knowing the underlying structure of the operator flows and the matrix shapes involved, the framework can plan and reserve the required resources beforehand. In this context, and as shown in the charts above, the difference between frameworks is very small and there is no clear winner between jemalloc and tcmalloc. Of course, glibc is still slightly behind as a general-purpose memory allocator, but the margin is less significant than in the eager setup. To sum it up, tuning the memory allocator is an interesting lever for grabbing the last few milliseconds of improvement at the end of the optimization process, especially if you are already using traced computation graphs. ### OpenMP In the previous section we talked about memory management within machine learning software involving mostly CPU-bound workloads. Such software often relies on intermediary frameworks such as PyTorch or TensorFlow for Deep Learning, which commonly abstract away all the underlying, highly parallelized, operator implementations. Writing such highly parallel and optimized algorithms is a real engineering challenge, and it requires a very low-level understanding of all the elements at play on the CPU (synchronization, memory caches, cache validity, etc.). In this context, it is very important to be able to leverage primitives to implement such powerful algorithms, reducing the delivery time and computation time by a large margin compared to implementing everything from scratch. There are many libraries available which provide such higher-level features to accelerate the development of algorithms. Among the most common, one can look at OpenMP, Threading Building Blocks and the C++ standard library itself when targeting a recent version of the standard.
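As a small, hedged illustration of how these parallel primitives surface at the framework level, the snippet below shows how PyTorch reports its threading backend and lets you control the intra-op thread count (the value of 8 is an arbitrary example, not a recommendation):

```python
import torch

# Inspect which threading backend PyTorch was built against (e.g. OpenMP)
# and how many intra-op threads it will use for CPU kernels.
print(torch.__config__.parallel_info())
print(torch.get_num_threads())

# Adjust the number of intra-op threads; 8 is an arbitrary example standing in
# for the number of cores dedicated to this model instance.
torch.set_num_threads(8)
```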
In the following part of this blog post, we will restrict ourselves to OpenMP, and especially to comparing the GNU implementation (open source and community-based) with the Intel OpenMP one. The latter especially targets Intel CPUs and is optimized to provide best-in-class performance when used as a drop-in replacement for the GNU OpenMP one. OpenMP exposes [many environment variables](https://www.openmp.org/spec-html/5.0/openmpch6.html) to automatically configure the underlying resources which will be involved in the computations, such as the number of threads to dispatch computations to (intra-op threads), the way the system scheduler should bind each of these threads with respect to the CPU resources (threads, cores, sockets) and some other variables which bring further control to the user. Intel OpenMP exposes [more of these environment variables](https://www.intel.com/content/www/us/en/develop/documentation/cpp-compiler-developer-guide-and-reference/top/compilation/supported-environment-variables.html) to provide the user even more flexibility to adjust the performance of their software. <br> <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="OpenMP vs Intel OpenMP latencies running PyTorch" src="assets/35_bert_cpu_scaling_part_2/openmp/openmp_pytorch_latencies.svg"></medium-zoom> <figcaption>Figure 21. OpenMP vs Intel OpenMP latencies running PyTorch</figcaption> </figure> <br> <br> <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="OpenMP vs Intel OpenMP latencies running TorchScript" src="assets/35_bert_cpu_scaling_part_2/openmp/openmp_torchscript_latency.svg"></medium-zoom> <figcaption>Figure 22. OpenMP vs Intel OpenMP latencies running TorchScript</figcaption> </figure> <br> As stated above, tuning OpenMP is something you can start to tweak once you have tried all the other, system-related, tuning knobs. It can bring a final speed-up to your model with just a single environment variable to set. Also, it is important to note that tuning the OpenMP library will only work within software that uses the OpenMP API internally. More specifically, currently only PyTorch and TorchScript really make use of OpenMP and thus benefit from OpenMP backend tuning. This also explains why we reported latencies only for these two frameworks. ## Automatic Performance Tuning: Bayesian Optimization with Intel SigOpt As mentioned above, many knobs can be tweaked to improve latency and throughput on Intel CPUs, but because there are many, tuning all of them to get optimal performance can be cumbersome. For instance, in our experiments, the following knobs were tuned: - The number of cores: although using as many cores as you have is often a good idea, it does not always provide the best performance because it also means more communication between the different threads. On top of that, having better performance with fewer cores can be very useful as it allows running multiple instances at the same time, resulting in both better latency and throughput. - The memory allocator: which memory allocator out of the default malloc, Google's tcmalloc and Facebook's jemalloc provides the best performance? - The parallelism library: which parallelism library out of GNU OpenMP and Intel OpenMP provides the best performance? - Transparent Huge Pages: does enabling Transparent Huge Pages (THP) on the system provide better performance?
- KMP block time parameter: sets the time, in milliseconds, that a thread should wait, after completing the execution of a parallel region, before sleeping. Of course, the brute force approach, consisting of trying out all the possibilities, will provide the best knob values to use to get optimal performance, but, the size of the search space being `N x 3 x 2 x 2 x 2 = 24N`, it can take a lot of time: on a machine with 80 physical cores, this means trying out at most `24 x 80 = 1920` different setups! 😱 Fortunately, Intel's [SigOpt](https://sigopt.com/), through Bayesian optimization, allows us to make these tuning experiments both faster and more convenient to analyse, while providing performance similar to the brute force approach. When we analyse the relative difference between the absolute best latency and what SigOpt provides, we observe that although it is often not as good as brute force (except for sequence length = 512 in that specific case), it gives very close performance, with **8.6%** being the biggest gap on this figure. <table class="block mx-auto"> <tr> <td> <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="Absolute best latency found by SigOpt automatic tuning vs brute force" src="assets/35_bert_cpu_scaling_part_2/sigopt/Intel%20Ice%20lake%20Xeon%208380%20-%20TorchScript%20-%20Batch%20Size%201%20-%20Absolute%20Best%20Latency%20vs%20SigOpt%20Best%20Latency.svg"></medium-zoom> <figcaption>Figure 23. Absolute best latency found by SigOpt automatic tuning vs brute force</figcaption> </figure> </td> <td> <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="Relative best latency found by SigOpt automatic tuning vs brute force" src="assets/35_bert_cpu_scaling_part_2/sigopt/Intel%20Ice%20lake%20Xeon%208380%20-%20TorchScript%20-%20Batch%20Size%201%20-%20Relative%20Difference%20Absolute%20Best%20Latency%20vs%20SigOpt%20Best%20Latency.svg"></medium-zoom> <figcaption>Figure 24. Relative best latency found by SigOpt automatic tuning vs brute force</figcaption> </figure> </td> </tr> </table> SigOpt is also very useful for analysis: it provides a lot of figures and valuable information. First, it gives the best value it was able to find, the corresponding knobs, and the history of trials and how the objective improved as trials went; for example, with sequence length = 20: <table> <tr> <td> <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="SigOpt best value display" src="assets/35_bert_cpu_scaling_part_2/sigopt/sigopt_best_value.png"></medium-zoom> <figcaption>Figure 25. SigOpt best value reporting</figcaption> </figure> </td> <td> <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="SigOpt improvement over trials" src="assets/35_bert_cpu_scaling_part_2/sigopt/sigopt_improvements_over_time.png"></medium-zoom> <figcaption>Figure 26. SigOpt improvement over trials</figcaption> </figure> </td> </tr> </table> In this specific setup, 16 cores along with the other knobs were able to give the best results. That is very important to know, because, as mentioned before, it means that multiple instances of the model can be run in parallel while still having the best latency for each. It also shows that it had converged at roughly 20 trials, meaning that maybe 25 trials instead of 40 would have been enough.
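For reference, the brute-force grid discussed above is easy to make concrete. The short sketch below enumerates it with illustrative knob values (the exact values benchmarked are assumptions here) and recovers the `24N` search-space size:

```python
import itertools

# Illustrative sketch of the brute-force search space described above; the
# specific knob values are assumptions, not the exact ones used in the experiments.
num_physical_cores = 80
cores = range(1, num_physical_cores + 1)            # N possible core counts
allocators = ["glibc", "tcmalloc", "jemalloc"]      # 3 candidates
openmp_runtimes = ["GNU OpenMP", "Intel OpenMP"]    # 2 candidates
transparent_huge_pages = [True, False]              # 2 settings
kmp_blocktime_ms = [0, 1]                           # 2 illustrative values

grid = list(itertools.product(cores, allocators, openmp_runtimes,
                              transparent_huge_pages, kmp_blocktime_ms))
print(len(grid))  # 80 * 3 * 2 * 2 * 2 = 1920 setups to benchmark exhaustively
```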
A wide range of other valuable information is available, such as Parameter Importance: as expected, the number of cores is, by far, the most important parameter, but the others play a part too, and it is very experiment dependent. For instance, for the sequence length = 512 experiment, this was the Parameter Importance: <table> <tr> <td> <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="SigOpt parameter importance for Batch Size = 1, Sequence Length = 20" src="assets/35_bert_cpu_scaling_part_2/sigopt/sigopt_parameters_importance_seq_20.png"></medium-zoom> <figcaption>Figure 27. SigOpt parameter importance for Batch Size = 1, Sequence Length = 20</figcaption> </figure> </td> <td> <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="SigOpt parameter importance for Batch Size = 1, Sequence Length = 512" src="assets/35_bert_cpu_scaling_part_2/sigopt/sigopt_parameters_importance_seq_512.png"></medium-zoom> <figcaption>Figure 28. SigOpt parameter importance for Batch Size = 1, Sequence Length = 512</figcaption> </figure> </td> </tr> </table> Here, not only was the impact of using GNU OpenMP vs Intel OpenMP bigger than the impact of the allocator, but the relative importance of each knob is also more balanced than in the sequence length = 20 experiment. And many more figures, often interactive, are available on SigOpt, such as: - 2D experiment history, allowing you to compare knobs vs knobs or knobs vs objectives - 3D experiment history, allowing you to do the same thing as the 2D experiment history with one more knob/objective. ## Conclusion - Accelerating Transformers for Production In this post, we showed how the new Intel Ice Lake Xeon CPUs are suitable for running AI workloads at scale, along with the software elements you can swap and tune in order to exploit the full potential of the hardware. All these items are to be considered after setting up the various lower-level knobs detailed in [the previous blog](https://huggingface.co/blog/bert-cpu-scaling-part-1) to maximize the usage of all the cores and resources. At Hugging Face, we are on a mission to democratize state-of-the-art Machine Learning, and a critical part of our work is to make these state-of-the-art models as efficient as possible, to use less energy and memory at scale, and to be more affordable to run by companies of all sizes. Our collaboration with Intel through the 🤗 [Hardware Partner Program](https://huggingface.co/hardware) enables us to make advanced efficiency and optimization techniques easily available to the community, through our new 🤗 [Optimum open source library](https://github.com/huggingface/optimum) dedicated to production performance. For companies looking to accelerate their Transformer models inference, our new 🤗 [Infinity product offers a plug-and-play containerized solution](https://huggingface.co/infinity), achieving down to 1ms latency on GPU and 2ms on Intel Xeon Ice Lake CPUs. If you found this post interesting or useful to your work, please consider giving Optimum a star. And if this post was music to your ears, consider [joining our Machine Learning Optimization team](https://apply.workable.com/huggingface/)!
Fine-tuning XLS-R for Multi-Lingual ASR with 🤗 Transformers
patrickvonplaten
November 15, 2021
fine-tune-xlsr-wav2vec2
guide, audio
https://huggingface.co/blog/fine-tune-xlsr-wav2vec2
# Fine-tuning XLS-R for Multi-Lingual ASR with 🤗 Transformers <a target="_blank" href="https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_Tune_XLS_R_on_Common_Voice.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> ***New (11/2021)***: *This blog post has been updated to feature XLSR\'s successor, called [XLS-R](https://huggingface.co/models?other=xls_r)*. **Wav2Vec2** is a pretrained model for Automatic Speech Recognition (ASR) and was released in [September 2020](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) by *Alexei Baevski, Michael Auli, and Alex Conneau*. Soon after the superior performance of Wav2Vec2 was demonstrated on one of the most popular English datasets for ASR, called [LibriSpeech](https://huggingface.co/datasets/librispeech_asr), *Facebook AI* presented a multi-lingual version of Wav2Vec2, called [XLSR](https://arxiv.org/abs/2006.13979). XLSR stands for *cross-lingual speech representations* and refers to the model\'s ability to learn speech representations that are useful across multiple languages. XLSR\'s successor, simply called **XLS-R** (referring to [*\'\'XLM-R for Speech\'\'*](https://ai.facebook.com/blog/-xlm-r-state-of-the-art-cross-lingual-understanding-through-self-supervision/)), was released in [November 2021](https://ai.facebook.com/blog/xls-r-self-supervised-speech-processing-for-128-languages) by *Arun Babu, Changhan Wang, Andros Tjandra, et al.* XLS-R used almost **half a million** hours of audio data in 128 languages for self-supervised pre-training and comes in sizes ranging from 300 million up to **two billion** parameters. You can find the pretrained checkpoints on the 🤗 Hub: - [**Wav2Vec2-XLS-R-300M**](https://huggingface.co/facebook/wav2vec2-xls-r-300m) - [**Wav2Vec2-XLS-R-1B**](https://huggingface.co/facebook/wav2vec2-xls-r-1b) - [**Wav2Vec2-XLS-R-2B**](https://huggingface.co/facebook/wav2vec2-xls-r-2b) Similar to [BERT\'s masked language modeling objective](http://jalammar.github.io/illustrated-bert/), XLS-R learns contextualized speech representations by randomly masking feature vectors before passing them to a transformer network during self-supervised pre-training (*i.e.* diagram on the left below). For fine-tuning, a single linear layer is added on top of the pre-trained network to train the model on labeled data of audio downstream tasks such as speech recognition, speech translation and audio classification (*i.e.* diagram on the right below). ![wav2vec2\_structure](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/xls_r.png) XLS-R shows impressive improvements over previous state-of-the-art results on speech recognition, speech translation and speaker/language identification, *cf.* Table 3-6, Table 7-10, and Table 11-12 respectively of the official [paper](https://ai.facebook.com/blog/xls-r-self-supervised-speech-processing-for-128-languages). Setup -------------- In this blog, we will give an in-depth explanation of how XLS-R - more specifically the pre-trained checkpoint [**Wav2Vec2-XLS-R-300M**](https://huggingface.co/facebook/wav2vec2-xls-r-300m) - can be fine-tuned for ASR. For demonstration purposes, we fine-tune the model on the low-resource ASR dataset of [Common Voice](https://huggingface.co/datasets/common_voice) that contains only *ca.* 4h of validated training data.
XLS-R is fine-tuned using Connectionist Temporal Classification (CTC), which is an algorithm that is used to train neural networks for sequence-to-sequence problems, such as ASR and handwriting recognition. I highly recommend reading the well-written blog post [*Sequence Modeling with CTC (2017)*](https://distill.pub/2017/ctc/) by Awni Hannun. Before we start, let\'s install `datasets` and `transformers`. Also, we need `torchaudio` to load audio files and `jiwer` to evaluate our fine-tuned model using the [word error rate (WER)](https://huggingface.co/metrics/wer) metric \\( {}^1 \\). ```python !pip install datasets==1.18.3 !pip install transformers==4.11.3 !pip install huggingface_hub==0.1 !pip install torchaudio !pip install librosa !pip install jiwer ``` We strongly suggest uploading your training checkpoints directly to the [Hugging Face Hub](https://huggingface.co/) while training. The [Hugging Face Hub](https://huggingface.co/) has integrated version control so you can be sure that no model checkpoint is getting lost during training. To do so you have to store your authentication token from the Hugging Face website (sign up [here](https://huggingface.co/join) if you haven\'t already!) ```python from huggingface_hub import notebook_login notebook_login() ``` **Print Output:** ```bash Login successful Your token has been saved to /root/.huggingface/token ``` Then you need to install Git-LFS to upload your model checkpoints: ```bash apt install git-lfs ``` ------------------------------------------------------------------------ \\( {}^1 \\) In the [paper](https://arxiv.org/pdf/2006.13979.pdf), the model was evaluated using the phoneme error rate (PER), but by far the most common metric in ASR is the word error rate (WER). To keep this notebook as general as possible we decided to evaluate the model using WER. Prepare Data, Tokenizer, Feature Extractor ------------------------------------------ ASR models transcribe speech to text, which means that we both need a feature extractor that processes the speech signal to the model\'s input format, *e.g.* a feature vector, and a tokenizer that processes the model\'s output format to text. In 🤗 Transformers, the XLS-R model is thus accompanied by both a tokenizer, called [Wav2Vec2CTCTokenizer](https://huggingface.co/transformers/master/model_doc/wav2vec2.html#wav2vec2ctctokenizer), and a feature extractor, called [Wav2Vec2FeatureExtractor](https://huggingface.co/transformers/master/model_doc/wav2vec2.html#wav2vec2featureextractor). Let\'s start by creating the tokenizer to decode the predicted output classes to the output transcription. ### Create `Wav2Vec2CTCTokenizer` A pre-trained XLS-R model maps the speech signal to a sequence of context representations as illustrated in the figure above. However, for speech recognition the model has to map this sequence of context representations to its corresponding transcription, which means that a linear layer has to be added on top of the transformer block (shown in yellow in the diagram above). This linear layer is used to classify each context representation into a token class, analogous to how a linear layer is added on top of BERT\'s embeddings for further classification after pre-training (*cf.* the *\'BERT\'* section of the following [blog post](https://huggingface.co/blog/warm-starting-encoder-decoder)).
The output size of this layer corresponds to the number of tokens in the vocabulary, which does **not** depend on XLS-R\'s pretraining task, but only on the labeled dataset used for fine-tuning. So in the first step, we will take a look at the chosen dataset of Common Voice and define a vocabulary based on the transcriptions. First, let\'s go to Common Voice [official website](https://commonvoice.mozilla.org/en/datasets) and pick a language to fine-tune XLS-R on. For this notebook, we will use Turkish. For each language-specific dataset, you can find a language code corresponding to your chosen language. On [Common Voice](https://commonvoice.mozilla.org/en/datasets), look for the field \"Version\". The language code then corresponds to the prefix before the underscore. For Turkish, *e.g.* the language code is `"tr"`. Great, now we can use 🤗 Datasets\' simple API to download the data. The dataset name is `"common_voice"`, the configuration name corresponds to the language code, which is `"tr"` in our case. Common Voice has many different splits including `invalidated`, which refers to data that was not rated as \"clean enough\" to be considered useful. In this notebook, we will only make use of the splits `"train"`, `"validation"` and `"test"`. Because the Turkish dataset is so small, we will merge both the validation and training data into a training dataset and only use the test data for validation. ```python from datasets import load_dataset, load_metric, Audio common_voice_train = load_dataset("common_voice", "tr", split="train+validation") common_voice_test = load_dataset("common_voice", "tr", split="test") ``` Many ASR datasets only provide the target text `'sentence'` for each audio array `'audio'` and file path `'path'`. Common Voice actually provides much more information about each audio file, such as the `'accent'`, etc. To keep the notebook as general as possible, we only consider the transcribed text for fine-tuning. ```python common_voice_train = common_voice_train.remove_columns(["accent", "age", "client_id", "down_votes", "gender", "locale", "segment", "up_votes"]) common_voice_test = common_voice_test.remove_columns(["accent", "age", "client_id", "down_votes", "gender", "locale", "segment", "up_votes"]) ``` Let\'s write a short function to display some random samples of the dataset and run it a couple of times to get a feeling for the transcriptions. ```python from datasets import ClassLabel import random import pandas as pd from IPython.display import display, HTML def show_random_elements(dataset, num_examples=10): assert num_examples <= len(dataset), "Can't pick more elements than there are in the dataset." picks = [] for _ in range(num_examples): pick = random.randint(0, len(dataset)-1) while pick in picks: pick = random.randint(0, len(dataset)-1) picks.append(pick) df = pd.DataFrame(dataset[picks]) display(HTML(df.to_html())) ``` **Print Output:** | Idx | Sentence | |----------|:-------------:| | 1 | Jonuz, kısa süreli görevi kabul eden tek adaydı. | | 2 | Biz umudumuzu bu mücadeleden almaktayız. | | 3 | Sergide beş Hırvat yeniliği sergilendi. | | 4 | Herşey adıyla bilinmeli. | | 5 | Kuruluş özelleştirmeye hazır. | | 6 | Yerleşim yerlerinin manzarası harika. | | 7 | Olayların failleri bulunamadı. | | 8 | Fakat bu çabalar boşa çıktı. 
| | 9 | Projenin değeri iki virgül yetmiş yedi milyon avro. | | 10 | Büyük yeniden yapım projesi dört aşamaya bölündü. | Alright! The transcriptions look fairly clean. Having translated the transcribed sentences, it seems that the language corresponds more to written-out text than noisy dialogue. This makes sense considering that [Common Voice](https://huggingface.co/datasets/common_voice) is a crowd-sourced read speech corpus. We can see that the transcriptions contain some special characters, such as `,.?!;:`. Without a language model, it is much harder to classify speech chunks to such special characters because they don\'t really correspond to a characteristic sound unit. *E.g.*, the letter `"s"` has a more or less clear sound, whereas the special character `"."` does not. Also in order to understand the meaning of a speech signal, it is usually not necessary to include special characters in the transcription. Let\'s simply remove all characters that don\'t contribute to the meaning of a word and cannot really be represented by an acoustic sound and normalize the text. ```python import re chars_to_remove_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\']' def remove_special_characters(batch): batch["sentence"] = re.sub(chars_to_remove_regex, '', batch["sentence"]).lower() return batch ``` ```python common_voice_train = common_voice_train.map(remove_special_characters) common_voice_test = common_voice_test.map(remove_special_characters) ``` Let\'s look at the processed text labels again. ```python show_random_elements(common_voice_train.remove_columns(["path","audio"])) ``` **Print Output:** | Idx | Transcription | |----------|:-------------:| | 1 | birisi beyazlar için dediler | | 2 | maktouf'un cezası haziran ayında sona erdi | | 3 | orijinalin aksine kıyafetler çıkarılmadı | | 4 | bunların toplam değeri yüz milyon avroyu buluyor | | 5 | masada en az iki seçenek bulunuyor | | 6 | bu hiç de haksız bir heveslilik değil | | 7 | bu durum bin dokuz yüz doksanlarda ülkenin bölünmesiyle değişti | | 8 | söz konusu süre altı ay | | 9 | ancak bedel çok daha yüksek olabilir | | 10 | başkent fira bir tepenin üzerinde yer alıyor | Good! This looks better. We have removed most special characters from transcriptions and normalized them to lower-case only. Before finalizing the pre-processing, it is always advantageous to consult a native speaker of the target language to see whether the text can be further simplified. For this blog post, [Merve](https://twitter.com/mervenoyann) was kind enough to take a quick look and noted that \"hatted\" characters - like `â` - aren\'t really used anymore in Turkish and can be replaced by their \"un-hatted\" equivalent, *e.g.* `a`. This means that we should replace a sentence like `"yargı sistemi hâlâ sağlıksız"` to `"yargı sistemi hala sağlıksız"`. Let\'s write another short mapping function to further simplify the text labels. Remember, the simpler the text labels, the easier it is for the model to learn to predict those labels. ```python def replace_hatted_characters(batch): batch["sentence"] = re.sub('[â]', 'a', batch["sentence"]) batch["sentence"] = re.sub('[î]', 'i', batch["sentence"]) batch["sentence"] = re.sub('[ô]', 'o', batch["sentence"]) batch["sentence"] = re.sub('[û]', 'u', batch["sentence"]) return batch ``` ```python common_voice_train = common_voice_train.map(replace_hatted_characters) common_voice_test = common_voice_test.map(replace_hatted_characters) ``` In CTC, it is common to classify speech chunks into letters, so we will do the same here. 
Let\'s extract all distinct letters of the training and test data and build our vocabulary from this set of letters. We write a mapping function that concatenates all transcriptions into one long transcription and then transforms the string into a set of chars. It is important to pass the argument `batched=True` to the `map(...)` function so that the mapping function has access to all transcriptions at once. ```python def extract_all_chars(batch): all_text = " ".join(batch["sentence"]) vocab = list(set(all_text)) return {"vocab": [vocab], "all_text": [all_text]} ``` ```python vocab_train = common_voice_train.map(extract_all_chars, batched=True, batch_size=-1, keep_in_memory=True, remove_columns=common_voice_train.column_names) vocab_test = common_voice_test.map(extract_all_chars, batched=True, batch_size=-1, keep_in_memory=True, remove_columns=common_voice_test.column_names) ``` Now, we create the union of all distinct letters in the training dataset and test dataset and convert the resulting list into an enumerated dictionary. ```python vocab_list = list(set(vocab_train["vocab"][0]) | set(vocab_test["vocab"][0])) ``` ```python vocab_dict = {v: k for k, v in enumerate(sorted(vocab_list))} vocab_dict ``` **Print Output:** ```bash { ' ': 0, 'a': 1, 'b': 2, 'c': 3, 'd': 4, 'e': 5, 'f': 6, 'g': 7, 'h': 8, 'i': 9, 'j': 10, 'k': 11, 'l': 12, 'm': 13, 'n': 14, 'o': 15, 'p': 16, 'q': 17, 'r': 18, 's': 19, 't': 20, 'u': 21, 'v': 22, 'w': 23, 'x': 24, 'y': 25, 'z': 26, 'ç': 27, 'ë': 28, 'ö': 29, 'ü': 30, 'ğ': 31, 'ı': 32, 'ş': 33, '̇': 34 } ``` Cool, we see that all letters of the alphabet occur in the dataset (which is not really surprising) and we also extracted the special characters `""` and `'`. Note that we did not exclude those special characters because: The model has to learn to predict when a word is finished or else the model prediction would always be a sequence of chars which would make it impossible to separate words from each other. One should always keep in mind that pre-processing is a very important step before training your model. E.g., we don\'t want our model to differentiate between `a` and `A` just because we forgot to normalize the data. The difference between `a` and `A` does not depend on the \"sound\" of the letter at all, but more on grammatical rules - *e.g.* use a capitalized letter at the beginning of the sentence. So it is sensible to remove the difference between capitalized and non-capitalized letters so that the model has an easier time learning to transcribe speech. To make it clearer that `" "` has its own token class, we give it a more visible character `|`. In addition, we also add an \"unknown\" token so that the model can later deal with characters not encountered in Common Voice\'s training set. ```python vocab_dict["|"] = vocab_dict[" "] del vocab_dict[" "] ``` Finally, we also add a padding token that corresponds to CTC\'s \"*blank token*\". The \"blank token\" is a core component of the CTC algorithm. For more information, please take a look at the \"Alignment\" section [here](https://distill.pub/2017/ctc/). ```python vocab_dict["[UNK]"] = len(vocab_dict) vocab_dict["[PAD]"] = len(vocab_dict) len(vocab_dict) ``` Cool, now our vocabulary is complete and consists of 39 tokens, which means that the linear layer that we will add on top of the pretrained XLS-R checkpoint will have an output dimension of 39. Let\'s now save the vocabulary as a json file. 
```python import json with open('vocab.json', 'w') as vocab_file: json.dump(vocab_dict, vocab_file) ``` In a final step, we use the json file to load the vocabulary into an instance of the `Wav2Vec2CTCTokenizer` class. ```python from transformers import Wav2Vec2CTCTokenizer tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("./", unk_token="[UNK]", pad_token="[PAD]", word_delimiter_token="|") ``` If one wants to re-use the just created tokenizer with the fine-tuned model of this notebook, it is strongly advised to upload the `tokenizer` to the [Hugging Face Hub](https://huggingface.co/). Let\'s call the repo to which we will upload the files `"wav2vec2-large-xls-r-300m-tr-colab"`: ```python repo_name = "wav2vec2-large-xls-r-300m-tr-colab" ``` and upload the tokenizer to the [🤗 Hub](https://huggingface.co/). ```python tokenizer.push_to_hub(repo_name) ``` Great, you can see the just created repository under `https://huggingface.co/<your-username>/wav2vec2-large-xls-r-300m-tr-colab` ### Create `Wav2Vec2FeatureExtractor` Speech is a continuous signal, and, to be treated by computers, it first has to be discretized, which is usually called **sampling**. The sampling rate hereby plays an important role since it defines how many data points of the speech signal are measured per second. Therefore, sampling with a higher sampling rate results in a better approximation of the *real* speech signal but also necessitates more values per second. A pretrained checkpoint expects its input data to have been sampled more or less from the same distribution as the data it was trained on. The same speech signals sampled at two different rates have a very different distribution. For example, doubling the sampling rate results in twice as many data points for the same speech signal. Thus, before fine-tuning a pretrained checkpoint of an ASR model, it is crucial to verify that the sampling rate of the data that was used to pretrain the model matches the sampling rate of the dataset used to fine-tune the model. XLS-R was pretrained on audio data of [Babel](http://www.reading.ac.uk/AcaDepts/ll/speechlab/babel/r), [Multilingual LibriSpeech (MLS)](https://huggingface.co/datasets/multilingual_librispeech), [Common Voice](https://huggingface.co/datasets/common_voice), [VoxPopuli](https://arxiv.org/abs/2101.00390), and [VoxLingua107](https://arxiv.org/abs/2011.12998) at a sampling rate of 16kHz. Common Voice, in its original form, has a sampling rate of 48kHz, thus we will have to downsample the fine-tuning data to 16kHz in the following. A `Wav2Vec2FeatureExtractor` object requires the following parameters to be instantiated: - `feature_size`: Speech models take a sequence of feature vectors as an input. While the length of this sequence obviously varies, the feature size should not. In the case of Wav2Vec2, the feature size is 1 because the model was trained on the raw speech signal \\( {}^2 \\). - `sampling_rate`: The sampling rate at which the model is trained. - `padding_value`: For batched inference, shorter inputs need to be padded with a specific value - `do_normalize`: Whether the input should be *zero-mean-unit-variance* normalized or not. Usually, speech models perform better when normalizing the input - `return_attention_mask`: Whether the model should make use of an `attention_mask` for batched inference. In general, XLS-R model checkpoints should **always** use the `attention_mask`. 
```python from transformers import Wav2Vec2FeatureExtractor feature_extractor = Wav2Vec2FeatureExtractor(feature_size=1, sampling_rate=16000, padding_value=0.0, do_normalize=True, return_attention_mask=True) ``` Great, XLS-R\'s feature extraction pipeline is thereby fully defined! For improved user-friendliness, the feature extractor and tokenizer are *wrapped* into a single `Wav2Vec2Processor` class so that one only needs a `model` and `processor` object. ```python from transformers import Wav2Vec2Processor processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer) ``` Next, we can prepare the dataset. ### Preprocess Data So far, we have not looked at the actual values of the speech signal but just the transcription. In addition to `sentence`, our datasets include two more column names `path` and `audio`. `path` states the absolute path of the audio file. Let\'s take a look. ```python common_voice_train[0]["path"] ``` XLS-R expects the input in the format of a 1-dimensional array of 16 kHz. This means that the audio file has to be loaded and resampled. Thankfully, `datasets` does this automatically by calling the other column `audio`. Let try it out. ```python common_voice_train[0]["audio"] ``` ```bash {'array': array([ 0.0000000e+00, 0.0000000e+00, 0.0000000e+00, ..., -8.8930130e-05, -3.8027763e-05, -2.9146671e-05], dtype=float32), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/05be0c29807a73c9b099873d2f5975dae6d05e9f7d577458a2466ecb9a2b0c6b/cv-corpus-6.1-2020-12-11/tr/clips/common_voice_tr_21921195.mp3', 'sampling_rate': 48000} ``` Great, we can see that the audio file has automatically been loaded. This is thanks to the new [`"Audio"` feature](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=audio#datasets.Audio) introduced in `datasets == 1.18.3`, which loads and resamples audio files on-the-fly upon calling. In the example above we can see that the audio data is loaded with a sampling rate of 48kHz whereas 16kHz are expected by the model. We can set the audio feature to the correct sampling rate by making use of [`cast_column`](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=cast_column#datasets.DatasetDict.cast_column): ```python common_voice_train = common_voice_train.cast_column("audio", Audio(sampling_rate=16_000)) common_voice_test = common_voice_test.cast_column("audio", Audio(sampling_rate=16_000)) ``` Let\'s take a look at `"audio"` again. ```python common_voice_train[0]["audio"] ``` ```bash {'array': array([ 0.0000000e+00, 0.0000000e+00, 0.0000000e+00, ..., -7.4556941e-05, -1.4621433e-05, -5.7861507e-05], dtype=float32), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/05be0c29807a73c9b099873d2f5975dae6d05e9f7d577458a2466ecb9a2b0c6b/cv-corpus-6.1-2020-12-11/tr/clips/common_voice_tr_21921195.mp3', 'sampling_rate': 16000} ``` This seemed to have worked! Let\'s listen to a couple of audio files to better understand the dataset and verify that the audio was correctly loaded. ```python import IPython.display as ipd import numpy as np import random rand_int = random.randint(0, len(common_voice_train)-1) print(common_voice_train[rand_int]["sentence"]) ipd.Audio(data=common_voice_train[rand_int]["audio"]["array"], autoplay=True, rate=16000) ``` **Print Output:** ```bash sunulan bütün teklifler i̇ngilizce idi ``` It seems like the data is now correctly loaded and resampled. 
It can be heard that the speakers change, along with their speaking rate, accent, background environment, etc. Overall, the recordings sound acceptably clear though, which is to be expected from a crowd-sourced read speech corpus. Let\'s do a final check that the data is correctly prepared, by printing the shape of the speech input, its transcription, and the corresponding sampling rate. ```python rand_int = random.randint(0, len(common_voice_train)-1) print("Target text:", common_voice_train[rand_int]["sentence"]) print("Input array shape:", common_voice_train[rand_int]["audio"]["array"].shape) print("Sampling rate:", common_voice_train[rand_int]["audio"]["sampling_rate"]) ``` **Print Output:** ```bash Target text: makedonya bu yıl otuz adet tyetmiş iki tankı aldı Input array shape: (71040,) Sampling rate: 16000 ``` Good! Everything looks fine - the data is a 1-dimensional array, the sampling rate always corresponds to 16kHz, and the target text is normalized. Finally, we can leverage `Wav2Vec2Processor` to process the data to the format expected by `Wav2Vec2ForCTC` for training. To do so let\'s make use of Dataset\'s [`map(...)`](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=map#datasets.DatasetDict.map) function. First, we load and resample the audio data, simply by calling `batch["audio"]`. Second, we extract the `input_values` from the loaded audio file. In our case, the `Wav2Vec2Processor` only normalizes the data. For other speech models, however, this step can include more complex feature extraction, such as [Log-Mel feature extraction](https://en.wikipedia.org/wiki/Mel-frequency_cepstrum). Third, we encode the transcriptions to label ids. **Note**: This mapping function is a good example of how the `Wav2Vec2Processor` class should be used. In a \"normal\" context, calling `processor(...)` is redirected to `Wav2Vec2FeatureExtractor`\'s call method. When wrapping the processor into the `as_target_processor` context, however, the same method is redirected to `Wav2Vec2CTCTokenizer`\'s call method. For more information please check the [docs](https://huggingface.co/transformers/master/model_doc/wav2vec2.html#transformers.Wav2Vec2Processor.__call__). ```python def prepare_dataset(batch): audio = batch["audio"] # batched output is "un-batched" batch["input_values"] = processor(audio["array"], sampling_rate=audio["sampling_rate"]).input_values[0] batch["input_length"] = len(batch["input_values"]) with processor.as_target_processor(): batch["labels"] = processor(batch["sentence"]).input_ids return batch ``` Let\'s apply the data preparation function to all examples. ```python common_voice_train = common_voice_train.map(prepare_dataset, remove_columns=common_voice_train.column_names) common_voice_test = common_voice_test.map(prepare_dataset, remove_columns=common_voice_test.column_names) ``` **Note**: Currently `datasets` makes use of [`torchaudio`](https://pytorch.org/audio/stable/index.html) and [`librosa`](https://librosa.org/doc/latest/index.html) for audio loading and resampling. If you wish to implement your own customized data loading/sampling, feel free to just make use of the `"path"` column instead and disregard the `"audio"` column. Long input sequences require a lot of memory. Since XLS-R is based on `self-attention`, the memory requirement scales quadratically with the input length for long input sequences (*cf.* [this](https://www.reddit.com/r/MachineLearning/comments/genjvb/d_why_is_the_maximum_input_sequence_length_of/) reddit post). 
In case this demo crashes with an \"Out-of-memory\" error for you, you might want to uncomment the following lines to filter all sequences that are longer than 5 seconds for training. ```python #max_input_length_in_sec = 5.0 #common_voice_train = common_voice_train.filter(lambda x: x < max_input_length_in_sec * processor.feature_extractor.sampling_rate, input_columns=["input_length"]) ``` Awesome, now we are ready to start training! Training -------- The data is processed so that we are ready to start setting up the training pipeline. We will make use of 🤗\'s [Trainer](https://huggingface.co/transformers/master/main_classes/trainer.html?highlight=trainer) for which we essentially need to do the following: - Define a data collator. In contrast to most NLP models, XLS-R has a much larger input length than output length. *E.g.*, a sample of input length 50000 has an output length of no more than 100. Given the large input sizes, it is much more efficient to pad the training batches dynamically, meaning that all training samples should only be padded to the longest sample in their batch and not the overall longest sample. Therefore, fine-tuning XLS-R requires a special padding data collator, which we will define below - Evaluation metric. During training, the model should be evaluated on the word error rate. We should define a `compute_metrics` function accordingly - Load a pretrained checkpoint. We need to load a pretrained checkpoint and configure it correctly for training. - Define the training configuration. After having fine-tuned the model, we will correctly evaluate it on the test data and verify that it has indeed learned to correctly transcribe speech. ### Set-up Trainer Let\'s start by defining the data collator. The code for the data collator was copied from [this example](https://github.com/huggingface/transformers/blob/7e61d56a45c19284cfda0cee8995fb552f6b1f4e/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py#L219). Without going into too many details, in contrast to the common data collators, this data collator treats the `input_values` and `labels` differently and thus applies separate padding functions to them (again making use of the XLS-R processor\'s context manager). This is necessary because, in speech, input and output are of different modalities, meaning that they should not be treated by the same padding function. Analogous to the common data collators, the padding tokens in the labels are replaced with `-100` so that those tokens are **not** taken into account when computing the loss. ```python import torch from dataclasses import dataclass, field from typing import Any, Dict, List, Optional, Union @dataclass class DataCollatorCTCWithPadding: """ Data collator that will dynamically pad the inputs received. Args: processor (:class:`~transformers.Wav2Vec2Processor`) The processor used for processing the data. padding (:obj:`bool`, :obj:`str` or :class:`~transformers.tokenization_utils_base.PaddingStrategy`, `optional`, defaults to :obj:`True`): Select a strategy to pad the returned sequences (according to the model's padding side and padding index) among: * :obj:`True` or :obj:`'longest'`: Pad to the longest sequence in the batch (or no padding if only a single sequence is provided). * :obj:`'max_length'`: Pad to a maximum length specified with the argument :obj:`max_length` or to the maximum acceptable input length for the model if that argument is not provided. 
* :obj:`False` or :obj:`'do_not_pad'` (default): No padding (i.e., can output a batch with sequences of different lengths). """ processor: Wav2Vec2Processor padding: Union[bool, str] = True def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]: # split inputs and labels since they have to be of different lengths and need # different padding methods input_features = [{"input_values": feature["input_values"]} for feature in features] label_features = [{"input_ids": feature["labels"]} for feature in features] batch = self.processor.pad( input_features, padding=self.padding, return_tensors="pt", ) with self.processor.as_target_processor(): labels_batch = self.processor.pad( label_features, padding=self.padding, return_tensors="pt", ) # replace padding with -100 to ignore loss correctly labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100) batch["labels"] = labels return batch ``` ```python data_collator = DataCollatorCTCWithPadding(processor=processor, padding=True) ``` Next, the evaluation metric is defined. As mentioned earlier, the predominant metric in ASR is the word error rate (WER), hence we will use it in this notebook as well. ```python wer_metric = load_metric("wer") ``` The model will return a sequence of logit vectors: \\( \mathbf{y}_1, \ldots, \mathbf{y}_m \\) with \\( \mathbf{y}_1 = f_{\theta}(x_1, \ldots, x_n)[0] \\) and \\( n >> m \\). A logit vector \\( \mathbf{y}_i \\) contains the log-odds for each word in the vocabulary we defined earlier, thus \\( \text{len}(\mathbf{y}_i) = \\) `config.vocab_size`. We are interested in the most likely prediction of the model and thus take the `argmax(...)` of the logits. Also, we transform the encoded labels back to the original string by replacing `-100` with the `pad_token_id` and decoding the ids while making sure that consecutive tokens are **not** grouped to the same token in CTC style \\( {}^1 \\). ```python def compute_metrics(pred): pred_logits = pred.predictions pred_ids = np.argmax(pred_logits, axis=-1) pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id pred_str = processor.batch_decode(pred_ids) # we do not want to group tokens when computing the metrics label_str = processor.batch_decode(pred.label_ids, group_tokens=False) wer = wer_metric.compute(predictions=pred_str, references=label_str) return {"wer": wer} ``` Now, we can load the pretrained checkpoint of [Wav2Vec2-XLS-R-300M](https://huggingface.co/facebook/wav2vec2-xls-r-300m). The tokenizer\'s `pad_token_id` must be used to define the model\'s `pad_token_id`, or, in the case of `Wav2Vec2ForCTC`, also CTC\'s *blank token* \\( {}^2 \\). To save GPU memory, we enable PyTorch\'s [gradient checkpointing](https://pytorch.org/docs/stable/checkpoint.html) and also set the loss reduction to \"*mean*\". Because the dataset is quite small (\~6h of training data) and because Common Voice is quite noisy, fine-tuning Facebook\'s [wav2vec2-xls-r-300m checkpoint](https://huggingface.co/facebook/wav2vec2-xls-r-300m) seems to require some hyper-parameter tuning. Therefore, I had to play around a bit with different values for dropout, [SpecAugment](https://arxiv.org/abs/1904.08779)\'s masking dropout rate, layer dropout, and the learning rate until training seemed to be stable enough. **Note**: When using this notebook to train XLS-R on another language of Common Voice those hyper-parameter settings might not work very well. Feel free to adapt those depending on your use case. 
```python from transformers import Wav2Vec2ForCTC model = Wav2Vec2ForCTC.from_pretrained( "facebook/wav2vec2-xls-r-300m", attention_dropout=0.0, hidden_dropout=0.0, feat_proj_dropout=0.0, mask_time_prob=0.05, layerdrop=0.0, ctc_loss_reduction="mean", pad_token_id=processor.tokenizer.pad_token_id, vocab_size=len(processor.tokenizer), ) ``` The first component of XLS-R consists of a stack of CNN layers that are used to extract acoustically meaningful - but contextually independent - features from the raw speech signal. This part of the model has already been sufficiently trained during pretraining and as stated in the [paper](https://arxiv.org/pdf/2006.13979.pdf) does not need to be fine-tuned anymore. Thus, we can set the `requires_grad` to `False` for all parameters of the *feature extraction* part. ```python model.freeze_feature_extractor() ``` In a final step, we define all parameters related to training. To give more explanation on some of the parameters: - `group_by_length` makes training more efficient by grouping training samples of similar input length into one batch. This can significantly speed up training time by heavily reducing the overall number of useless padding tokens that are passed through the model - `learning_rate` and `weight_decay` were heuristically tuned until fine-tuning has become stable. Note that those parameters strongly depend on the Common Voice dataset and might be suboptimal for other speech datasets. For more explanations on other parameters, one can take a look at the [docs](https://huggingface.co/transformers/master/main_classes/trainer.html?highlight=trainer#trainingarguments). During training, a checkpoint will be uploaded asynchronously to the Hub every 400 training steps. It allows you to also play around with the demo widget even while your model is still training. **Note**: If one does not want to upload the model checkpoints to the Hub, simply set `push_to_hub=False`. ```python from transformers import TrainingArguments training_args = TrainingArguments( output_dir=repo_name, group_by_length=True, per_device_train_batch_size=16, gradient_accumulation_steps=2, evaluation_strategy="steps", num_train_epochs=30, gradient_checkpointing=True, fp16=True, save_steps=400, eval_steps=400, logging_steps=400, learning_rate=3e-4, warmup_steps=500, save_total_limit=2, push_to_hub=True, ) ``` Now, all instances can be passed to Trainer and we are ready to start training! ```python from transformers import Trainer trainer = Trainer( model=model, data_collator=data_collator, args=training_args, compute_metrics=compute_metrics, train_dataset=common_voice_train, eval_dataset=common_voice_test, tokenizer=processor.feature_extractor, ) ``` ------------------------------------------------------------------------ \\( {}^1 \\) To allow models to become independent of the speaker rate, in CTC, consecutive tokens that are identical are simply grouped as a single token. However, the encoded labels should not be grouped when decoding since they don\'t correspond to the predicted tokens of the model, which is why the `group_tokens=False` parameter has to be passed. If we wouldn\'t pass this parameter a word like `"hello"` would incorrectly be encoded, and decoded as `"helo"`. \\( {}^2 \\) The blank token allows the model to predict a word, such as `"hello"` by forcing it to insert the blank token between the two l\'s. A CTC-conform prediction of `"hello"` of our model would be `[PAD] [PAD] "h" "e" "e" "l" "l" [PAD] "l" "o" "o" [PAD]`. 
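To make the two footnotes above more tangible, here is a tiny, self-contained sketch of the CTC-style collapsing they describe: consecutive identical tokens are merged first, and the blank (`[PAD]`) tokens are dropped afterwards. This is an illustrative toy, not the actual `Wav2Vec2CTCTokenizer` implementation.

```python
import itertools

def ctc_collapse(tokens, blank="[PAD]"):
    # Merge consecutive identical tokens, then drop the blank tokens.
    deduped = [token for token, _ in itertools.groupby(tokens)]
    return [token for token in deduped if token != blank]

# The CTC-conform prediction of "hello" from the footnote above.
prediction = ["[PAD]", "[PAD]", "h", "e", "e", "l", "l", "[PAD]", "l", "o", "o", "[PAD]"]
print("".join(ctc_collapse(prediction)))  # -> "hello"
```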
### Training Training will take multiple hours depending on the GPU allocated to this notebook. While the trained model yields somewhat satisfying results on *Common Voice*\'s Turkish test data, it is by no means an optimally fine-tuned model. The purpose of this notebook is just to demonstrate how to fine-tune XLS-R on an ASR dataset. Depending on what GPU was allocated to your Google Colab, you might see an `"out-of-memory"` error here. In this case, it\'s probably best to reduce `per_device_train_batch_size` to 8 or even less and increase [`gradient_accumulation_steps`](https://huggingface.co/transformers/master/main_classes/trainer.html#trainingarguments). ```python trainer.train() ``` **Print Output:** | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.8842 | 3.67 | 400 | 0.6794 | 0.7000 | | 0.4115 | 7.34 | 800 | 0.4304 | 0.4548 | | 0.1946 | 11.01 | 1200 | 0.4466 | 0.4216 | | 0.1308 | 14.68 | 1600 | 0.4526 | 0.3961 | | 0.0997 | 18.35 | 2000 | 0.4567 | 0.3696 | | 0.0784 | 22.02 | 2400 | 0.4193 | 0.3442 | | 0.0633 | 25.69 | 2800 | 0.4153 | 0.3347 | | 0.0498 | 29.36 | 3200 | 0.4077 | 0.3195 | The training loss and validation WER go down nicely. You can now upload the result of the training to the Hub, just execute this instruction: ```python trainer.push_to_hub() ``` You can now share this model with all your friends, family, favorite pets: they can all load it with the identifier \"your-username/the-name-you-picked\" so for instance: ```python from transformers import AutoModelForCTC, Wav2Vec2Processor model = AutoModelForCTC.from_pretrained("patrickvonplaten/wav2vec2-large-xls-r-300m-tr-colab") processor = Wav2Vec2Processor.from_pretrained("patrickvonplaten/wav2vec2-large-xls-r-300m-tr-colab") ``` For more examples of how XLS-R can be fine-tuned, please take a look at the official [🤗 Transformers examples](https://github.com/huggingface/transformers/tree/master/examples/pytorch/speech-recognition#examples). ### Evaluation As a final check, let\'s load the model and verify that it indeed has learned to transcribe Turkish speech. Let\'s first load the pretrained checkpoint. ```python model = Wav2Vec2ForCTC.from_pretrained(repo_name).to("cuda") processor = Wav2Vec2Processor.from_pretrained(repo_name) ``` Now, we will just take the first example of the test set, run it through the model and take the `argmax(...)` of the logits to retrieve the predicted token ids. ```python input_dict = processor(common_voice_test[0]["input_values"], return_tensors="pt", padding=True) logits = model(input_dict.input_values.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1)[0] ``` It is strongly recommended to pass the ``sampling_rate`` argument to this function. Failing to do so can result in silent errors that might be hard to debug. We adapted `common_voice_test` quite a bit so that the dataset instance does not contain the original sentence label anymore. Thus, we re-use the original dataset to get the label of the first example. ```python common_voice_test_transcription = load_dataset("common_voice", "tr", data_dir="./cv-corpus-6.1-2020-12-11", split="test") ``` Finally, we can decode the example. 
```python print("Prediction:") print(processor.decode(pred_ids)) print("\nReference:") print(common_voice_test_transcription[0]["sentence"].lower()) ``` **Print Output:** | pred_str | target_text | |----------|:-------------:| | hatta küçük şeyleri için bir büyt bir şeyleri kolluyor veyınıki çuk şeyler için bir bir mizi inciltiyoruz | hayatta küçük şeyleri kovalıyor ve yine küçük şeyler için birbirimizi incitiyoruz. | Alright! The transcription can definitely be recognized from our prediction, but it is not perfect yet. Training the model a bit longer, spending more time on the data preprocessing, and especially using a language model for decoding would certainly improve the model\'s overall performance. For a demonstration model on a low-resource language, the results are quite acceptable however 🤗.
Accelerating PyTorch distributed fine-tuning with Intel technologies
juliensimon
November 19, 2021
accelerating-pytorch
guide
https://huggingface.co/blog/accelerating-pytorch
# Accelerating PyTorch distributed fine-tuning with Intel technologies For all their amazing performance, state-of-the-art deep learning models often take a long time to train. In order to speed up training jobs, engineering teams rely on distributed training, a divide-and-conquer technique where clustered servers each keep a copy of the model, train it on a subset of the training set, and exchange results to converge to a final model. Graphics Processing Units (GPUs) have long been the _de facto_ choice to train deep learning models. However, the rise of transfer learning is changing the game. Models are now rarely trained from scratch on humongous datasets. Instead, they are frequently fine-tuned on specific (and smaller) datasets, in order to build specialized models that are more accurate than the base model for particular tasks. As these training jobs are much shorter, using a CPU-based cluster can prove to be an interesting option that keeps both training time and cost under control. ### What this post is about In this post, you will learn how to accelerate [PyTorch](https://pytorch.org) training jobs by distributing them on a cluster of Intel Xeon Scalable CPU servers, powered by the Ice Lake architecture and running performance-optimized software libraries. We will build the cluster from scratch using virtual machines, and you should be able to easily replicate the demo on your own infrastructure, either in the cloud or on premises. Running a text classification job, we will fine-tune a [BERT](https://huggingface.co/bert-base-cased) model on the [MRPC](https://www.microsoft.com/en-us/download/details.aspx?id=52398) dataset (one of the tasks included in the [GLUE](https://gluebenchmark.com/) benchmark). The MRPC dataset contains 5,800 sentence pairs extracted from news sources, with a label telling us whether the two sentences in each pair are semantically equivalent. We picked this dataset for its reasonable training time, and trying other GLUE tasks is just a parameter away. Once the cluster is up and running, we will run a baseline job on a single server. Then, we will scale it to 2 servers and 4 servers and measure the speed-up. Along the way, we will cover the following topics: * Listing the required infrastructure and software building blocks, * Setting up our cluster, * Installing dependencies, * Running a single-node job, * Running a distributed job. Let's get to work! ### Using Intel servers For best performance, we will use Intel servers based on the Ice Lake architecture, which supports hardware features such as Intel AVX-512 and Intel Vector Neural Network Instructions (VNNI). These features accelerate operations typically found in deep learning training and inference. You can learn more about them in this [presentation](https://www.intel.com/content/dam/www/public/us/en/documents/product-overviews/dl-boost-product-overview.pdf) (PDF). All three major cloud providers offer virtual machines powered by Intel Ice Lake CPUs: - Amazon Web Services: Amazon EC2 [M6i](https://aws.amazon.com/blogs/aws/new-amazon-ec2-m6i-instances-powered-by-the-latest-generation-intel-xeon-scalable-processors/) and [C6i](https://aws.amazon.com/blogs/aws/new-amazon-ec2-c6i-instances-powered-by-the-latest-generation-intel-xeon-scalable-processors/) instances. 
- Azure: [Dv5/Dsv5-series](https://docs.microsoft.com/en-us/azure/virtual-machines/dv5-dsv5-series), [Ddv5/Ddsv5-series](https://docs.microsoft.com/en-us/azure/virtual-machines/ddv5-ddsv5-series) and [Edv5/Edsv5-series](https://docs.microsoft.com/en-us/azure/virtual-machines/edv5-edsv5-series) virtual machines. - Google Cloud Platform: [N2](https://cloud.google.com/blog/products/compute/compute-engine-n2-vms-now-available-with-intel-ice-lake) Compute Engine virtual machines. Of course, you can also use your own servers. If they are based on the Cascade Lake architecture (Ice Lake's predecessor), they're good to go as Cascade Lake also includes AVX-512 and VNNI. ### Using Intel performance libraries To leverage AVX-512 and VNNI in PyTorch, Intel has designed the [Intel extension for PyTorch](https://github.com/intel/intel-extension-for-pytorch). This software library provides out of the box speedup for training and inference, so we should definitely install it. When it comes to distributed training, the main performance bottleneck is often networking. Indeed, the different nodes in the cluster need to periodically exchange model state information to stay in sync. As transformers are large models with billions of parameters (sometimes much more), the volume of information is significant, and things only get worse as the number of nodes increase. Thus, it's important to use a communication library optimized for deep learning. In fact, PyTorch includes the [```torch.distributed```](https://pytorch.org/tutorials/intermediate/dist_tuto.html) package, which supports different communication backends. Here, we'll use the Intel oneAPI Collective Communications Library [(oneCCL)](https://github.com/oneapi-src/oneCCL), an efficient implementation of communication patterns used in deep learning ([all-reduce](https://en.wikipedia.org/wiki/Collective_operation), etc.). You can learn about the performance of oneCCL versus other backends in this PyTorch [blog post](https://pytorch.medium.com/optimizing-dlrm-by-using-pytorch-with-oneccl-backend-9f85b8ef6929). Now that we're clear on building blocks, let's talk about the overall setup of our training cluster. ### Setting up our cluster In this demo, I'm using Amazon EC2 instances running Amazon Linux 2 (c6i.16xlarge, 64 vCPUs, 128GB RAM, 25Gbit/s networking). Setup will be different in other environments, but steps should be very similar. Please keep in mind that you will need 4 identical instances, so you may want to plan for some sort of automation to avoid running the same setup 4 times. Here, I will set up one instance manually, create a new Amazon Machine Image [(AMI)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html) from this instance, and use this AMI to launch three identical instances. From a networking perspective, we will need the following setup: * Open port 22 for ```ssh``` access on all instances for setup and debugging. * Configure [password-less](https://www.redhat.com/sysadmin/passwordless-ssh) ```ssh``` between the master instance (the one you'll launch training from) and all other instances (__master included__). * Open all TCP ports on all instances for oneCCL communication inside the cluster. __Please make sure NOT to open these ports to the external world__. AWS provides a convenient way to do this by only allowing connections from instances running a particular [security group](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html). Here's how my setup looks. 
<kbd> <img src="assets/36_accelerating_pytorch/01_security_group.png"> </kbd> Now, let's provision the first instance manually. I first create the instance itself, attach the security group above, and add 128GB of storage. To optimize costs, I have launched it as a [spot instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-spot-instances.html). Once the instance is up, I connect to it with ```ssh``` in order to install dependencies. ### Installing dependencies Here are the steps we will follow: * Install Intel toolkits, * Install the Anaconda distribution, * Create a new ```conda``` environment, * Install PyTorch and the Intel extension for PyTorch, * Compile and install oneCCL, * Install the ```transformers``` library. It looks like a lot, but there's nothing complicated. Here we go! __Installing Intel toolkits__ First, we download and install the Intel [OneAPI base toolkit](https://www.intel.com/content/www/us/en/developer/tools/oneapi/base-toolkit-download.html?operatingsystem=linux&distributions=webdownload&options=offline) as well as the [AI toolkit](https://www.intel.com/content/www/us/en/developer/tools/oneapi/ai-analytics-toolkit-download.html?operatingsystem=linux&distributions=webdownload&options=offline). You can learn about them on the Intel [website](https://www.intel.com/content/www/us/en/developer/tools/oneapi/toolkits.html#gs.gmojrp). ``` wget https://registrationcenter-download.intel.com/akdlm/irc_nas/18236/l_BaseKit_p_2021.4.0.3422_offline.sh sudo bash l_BaseKit_p_2021.4.0.3422_offline.sh wget https://registrationcenter-download.intel.com/akdlm/irc_nas/18235/l_AIKit_p_2021.4.0.1460_offline.sh sudo bash l_AIKit_p_2021.4.0.1460_offline.sh ``` __Installing Anaconda__ Then, we [download](https://www.anaconda.com/products/individual) and install the Anaconda distribution. ``` wget https://repo.anaconda.com/archive/Anaconda3-2021.05-Linux-x86_64.sh sh Anaconda3-2021.05-Linux-x86_64.sh ``` __Creating a new conda environment__ We log out and log in again to refresh paths. Then, we create a new ```conda``` environment to keep things neat and tidy. ``` yes | conda create -n transformer python=3.7.9 -c anaconda eval "$(conda shell.bash hook)" conda activate transformer yes | conda install pip cmake ``` __Installing PyTorch and the Intel extension for PyTorch__ Next, we install PyTorch 1.9 and the Intel extension toolkit. __Versions must match__. ``` yes | conda install pytorch==1.9.0 cpuonly -c pytorch pip install torch_ipex==1.9.0 -f https://software.intel.com/ipex-whl-stable ``` __Compiling and installing oneCCL__ Then, we install some native dependencies required to compile oneCCL. ``` sudo yum -y update sudo yum install -y git cmake3 gcc gcc-c++ ``` Next, we clone the oneCCL repository, build the library and install it. __Again, versions must match__. ``` source /opt/intel/oneapi/mkl/latest/env/vars.sh git clone https://github.com/intel/torch-ccl.git cd torch-ccl git checkout ccl_torch1.9 git submodule sync git submodule update --init --recursive python setup.py install cd .. ``` __Installing the transformers library__ Next, we install the ```transformers``` library and dependencies required to run GLUE tasks. ``` pip install transformers datasets yes | conda install scipy scikit-learn ``` Finally, we clone a fork of the ```transformers```repository containing the example we're going to run. ``` git clone https://github.com/kding1/transformers.git cd transformers git checkout dist-sigopt ``` We're done! Let's run a single-node job. 
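Before launching anything, a quick optional sanity check (not part of the original setup steps) can confirm that the Python environment sees the pieces we just installed. `torch_ccl` is the module provided by the torch-ccl repository we compiled above; importing it is what makes the `ccl` backend available to `torch.distributed`.

```
# check_env.py - optional sanity check, run inside the "transformer" conda environment
import torch
import torch_ccl  # noqa: F401  (importing registers the "ccl" backend with torch.distributed)

print("PyTorch version:", torch.__version__)
print("torch.distributed available:", torch.distributed.is_available())
```

If both imports succeed on every node, the cluster is ready for the jobs below.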
### Launching a single-node job To get a baseline, let's launch a single-node job running the ```run_glue.py``` script in ```transformers/examples/pytorch/text-classification```. This should work on any of the instances, and it's a good sanity check before proceeding to distributed training. ``` python run_glue.py \ --model_name_or_path bert-base-cased --task_name mrpc \ --do_train --do_eval --max_seq_length 128 \ --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3 \ --output_dir /tmp/mrpc/ --overwrite_output_dir True ``` <kbd> <img src="assets/36_accelerating_pytorch/02_single_node.png"> </kbd> This job takes __7 minutes and 46 seconds__. Now, let's set up distributed jobs with oneCCL and speed things up! ### Setting up a distributed job with oneCCL Three steps are required to run a distributed training job: * List the nodes of the training cluster, * Define environment variables, * Modify the training script. __Listing the nodes of the training cluster__ On the master instance, in ```transformers/examples/pytorch/text-classification```, we create a text file named ```hostfile```. This file stores the names of the nodes in the cluster (IP addresses would work too). The first line should point to the master instance. Here's my file: ``` ip-172-31-28-17.ec2.internal ip-172-31-30-87.ec2.internal ip-172-31-29-11.ec2.internal ip-172-31-20-77.ec2.internal ``` __Defining environment variables__ Next, we need to set some environment variables on the master node, most notably its IP address. You can find more information on oneCCL variables in the [documentation](https://oneapi-src.github.io/oneCCL/env-variables.html). ``` for nic in eth0 eib0 hib0 enp94s0f0; do master_addr=$(ifconfig $nic 2>/dev/null | grep netmask | awk '{print $2}'| cut -f2 -d:) if [ "$master_addr" ]; then break fi done export MASTER_ADDR=$master_addr source /home/ec2-user/anaconda3/envs/transformer/lib/python3.7/site-packages/torch_ccl-1.3.0+43f48a1-py3.7-linux-x86_64.egg/torch_ccl/env/setvars.sh export LD_LIBRARY_PATH=/home/ec2-user/anaconda3/envs/transformer/lib/python3.7/site-packages/torch_ccl-1.3.0+43f48a1-py3.7-linux-x86_64.egg/:$LD_LIBRARY_PATH export LD_PRELOAD="${CONDA_PREFIX}/lib/libtcmalloc.so:${CONDA_PREFIX}/lib/libiomp5.so" export CCL_WORKER_COUNT=4 export CCL_WORKER_AFFINITY="0,1,2,3,32,33,34,35" export CCL_ATL_TRANSPORT=ofi export ATL_PROGRESS_MODE=0 ``` __Modifying the training script__ The following changes have already been applied to our training script (```run_glue.py```) in order to enable distributed training. You would need to apply similar changes when using your own training code. * Import the ```torch_ccl```package. * Receive the address of the master node and the local rank of the node in the cluster. ``` +import torch_ccl + import datasets import numpy as np from datasets import load_dataset, load_metric @@ -47,7 +49,7 @@ from transformers.utils.versions import require_version # Will error if the minimal version of Transformers is not installed. Remove at your own risks. -check_min_version("4.13.0.dev0") +# check_min_version("4.13.0.dev0") require_version("datasets>=1.8.0", "To fix: pip install -r examples/pytorch/text-classification/requirements.txt") @@ -191,6 +193,17 @@ def main(): # or by passing the --help flag to this script. # We now keep distinct sets of args, for a cleaner separation of concerns. 
+ # add local rank for cpu-dist + sys.argv.append("--local_rank") + sys.argv.append(str(os.environ.get("PMI_RANK", -1))) + + # ccl specific environment variables + if "ccl" in sys.argv: + os.environ["MASTER_ADDR"] = os.environ.get("MASTER_ADDR", "127.0.0.1") + os.environ["MASTER_PORT"] = "29500" + os.environ["RANK"] = str(os.environ.get("PMI_RANK", -1)) + os.environ["WORLD_SIZE"] = str(os.environ.get("PMI_SIZE", -1)) + parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments)) if len(sys.argv) == 2 and sys.argv[1].endswith(".json"): ``` Setup is now complete. Let's scale our training job to 2 nodes and 4 nodes. ### Running a distributed job with oneCCL On the __master node__, I use ```mpirun```to launch a 2-node job: ```-np``` (number of processes) is set to 2 and ```-ppn``` (process per node) is set to 1. Hence, the first two nodes in ```hostfile``` will be selected. ``` mpirun -f hostfile -np 2 -ppn 1 -genv I_MPI_PIN_DOMAIN=[0xfffffff0] \ -genv OMP_NUM_THREADS=28 python run_glue.py \ --model_name_or_path distilbert-base-uncased --task_name mrpc \ --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 32 \ --learning_rate 2e-5 --num_train_epochs 3 --output_dir /tmp/mrpc/ \ --overwrite_output_dir True --xpu_backend ccl --no_cuda True ``` Within seconds, a job starts on the first two nodes. The job completes in __4 minutes and 39 seconds__, a __1.7x__ speedup. <kbd> <img src="assets/36_accelerating_pytorch/03_two_nodes.png"> </kbd> Setting ```-np``` to 4 and launching a new job, I now see one process running on each node of the cluster. <kbd> <img src="assets/36_accelerating_pytorch/04_four_nodes.png"> </kbd> Training completes in __2 minutes and 36 seconds__, a __3x__ speedup. One last thing. Changing ```--task_name``` to ```qqp```, I also ran the Quora Question Pairs GLUE task, which is based on a much larger [dataset](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) (over 400,000 training samples). The fine-tuning times were: * Single-node: 11 hours 22 minutes, * 2 nodes: 6 hours and 38 minutes (1.71x), * 4 nodes: 3 hours and 51 minutes (2.95x). It looks like the speedup is pretty consistent. Feel free to keep experimenting with different learning rates, batch sizes and oneCCL settings. I'm sure you can go even faster! ### Conclusion In this post, you've learned how to build a distributed training cluster based on Intel CPUs and performance libraries, and how to use this cluster to speed up fine-tuning jobs. Indeed, transfer learning is putting CPU training back into the game, and you should definitely consider it when designing and building your next deep learning workflows. Thanks for reading this long post. I hope you found it informative. Feedback and questions are welcome at [email protected]_. Until next time, keep learning! Julien
Introducing the Data Measurements Tool: an Interactive Tool for Looking at Datasets
sasha
November 29, 2021
data-measurements-tool
research
https://huggingface.co/blog/data-measurements-tool
# Introducing the 🤗 Data Measurements Tool: an Interactive Tool for Looking at Datasets ***tl;dr:*** We made a tool you can use online to build, measure, and compare datasets. [Click to access the 🤗 Data Measurements Tool here.](https://huggingface.co/spaces/huggingface/data-measurements-tool) ----- As developers of a fast-growing unified repository for Machine Learning datasets ([Lhoest et al. 2021](https://arxiv.org/abs/2109.02846)), the 🤗 Hugging Face [team](https://huggingface.co/huggingface) has been working on supporting good practices for dataset documentation ([McMillan-Major et al., 2021](https://arxiv.org/abs/2108.07374)). While static (if evolving) documentation represents a necessary first step in this direction, getting a good sense of what is actually in a dataset requires well-motivated measurements and the ability to interact with it, dynamically visualizing different aspects of interest. To this end, we introduce an open-source Python library and no-code interface called the [🤗 Data Measurements Tool](https://huggingface.co/spaces/huggingface/data-measurements-tool), using our [Dataset](https://huggingface.co/datasets) and [Spaces](https://huggingface.co/spaces/launch) Hubs paired with the great [Streamlit tool](https://streamlit.io/). This can be used to help understand, build, curate, and compare datasets. ## What is the 🤗 Data Measurements Tool? The [Data Measurements Tool (DMT)](https://huggingface.co/spaces/huggingface/data-measurements-tool) is an interactive interface and open-source library that lets dataset creators and users automatically calculate metrics that are meaningful and useful for responsible data development. ## Why have we created this tool? Thoughtful curation and analysis of Machine Learning datasets is often overlooked in AI development. Current norms for “big data” in AI ([Luccioni et al., 2021](https://arxiv.org/abs/2105.02732), [Dodge et al., 2021](https://arxiv.org/abs/2104.08758)) include using data scraped from various websites, with little or no attention paid to concrete measurements of what the different data sources represent, nor the nitty-gritty details of how they may influence what a model learns. Although dataset annotation approaches can help to curate datasets that are more in line with a developer’s goals, the methods for “measuring” different aspects of these datasets are fairly limited ([Sambasivan et al., 2021](https://storage.googleapis.com/pub-tools-public-publication-data/pdf/0d556e45afc54afeb2eb6b51a9bc1827b9961ff4.pdf)). A new wave of research in AI has called for a fundamental paradigm shift in how the field approaches ML datasets ([Paullada et al., 2020](https://arxiv.org/abs/2012.05345), [Denton et al., 2021](https://journals.sagepub.com/doi/full/10.1177/20539517211035955)). This includes defining fine-grained requirements for dataset creation from the start ([Hutchinson et al., 2021](https://dl.acm.org/doi/pdf/10.1145/3442188.3445918)), curating datasets in light of problematic content and bias concerns ([Yang et al., 2020](https://dl.acm.org/doi/abs/10.1145/3351095.3375709), [Prabhu and Birhane, 2020](https://arxiv.org/abs/2006.16923)), and making explicit the values inherent in dataset construction and maintenance ([Scheuerman et al., 2021](https://dl.acm.org/doi/pdf/10.1145/3476058), [Birhane et al., 2021](https://arxiv.org/abs/2110.01963)). 
Although there is general agreement that dataset development is a task that people from many different disciplines should be able to inform, in practice there is often a bottleneck in interfacing with the raw data itself, which tends to require complex coding skills in order to analyze and query the dataset. Despite this, there are few tools openly available to the public to enable people from different disciplines to measure, interrogate, and compare datasets. We aim to help fill this gap. We learn and build from recent tools such as [Know Your Data](https://knowyourdata.withgoogle.com/) and [Data Quality for AI](https://www.ibm.com/products/dqaiapi), as well as research proposals for dataset documentation such as [Vision and Language Datasets (Ferraro et al., 2015)](https://aclanthology.org/D15-1021/), [Datasheets for Datasets (Gebru et al, 2018)](https://arxiv.org/abs/1803.09010), and [Data Statements (Bender & Friedman 2019)](https://aclanthology.org/Q18-1041/). The result is an open-source library for dataset measurements, and an accompanying no-code interface for detailed dataset analysis. ## When can I use the 🤗 Data Measurements Tool? The 🤗 Data Measurements Tool can be used iteratively for exploring one or more existing NLP datasets, and will soon support iterative development of datasets from scratch. It provides actionable insights informed by research on datasets and responsible dataset development, allowing users to hone in on both high-level information and specific items. ## What can I learn using the 🤗 Data Measurements Tool? ### Dataset Basics **For a high-level overview of the dataset** *This begins to answer questions like “What is this dataset? Does it have missing items?”. You can use this as “sanity checks” that the dataset you’re working with is as you expect it to be.* - A description of the dataset (from the Hugging Face Hub) - Number of missing values or NaNs ### Descriptive Statistics **To look at the surface characteristics of the dataset** *This begins to answer questions like “What kind of language is in this dataset? How diverse is it?”* - The dataset vocabulary size and word distribution, for both [open- and closed-class words](https://dictionary.apa.org/open-class-words). - The dataset label distribution and information about class (im)balance. ![image](https://user-images.githubusercontent.com/14205986/144267166-1c9a2fd9-d998-4cdb-aaa1-8b5fea7ae23e.png) - The mean, median, range, and distribution of instance lengths. - The number of duplicates in the dataset and how many times they are repeated. You can use these widgets to check whether what is most and least represented in the dataset make sense for the goals of the dataset. These measurements are intended to inform whether the dataset can be useful in capturing a variety of contexts or if what it captures is more limited, and to measure how ''balanced'' the labels and instance lengths are. You can also use these widgets to identify outliers and duplicates you may want to remove. ### Distributional Statistics **To measure the language patterns in the dataset** *This begins to answer questions like “How does the language behave in this dataset?”* - Adherence to [Zipf’s law](https://en.wikipedia.org/wiki/Zipf%27s_law), which provides measurements of how closely the distribution over words in the dataset fits to the expected distribution of words in natural language. 
![image](https://user-images.githubusercontent.com/14205986/144266979-9a5bfea2-c7b8-46fb-9749-e90ee0e5e20e.png) You can use this to figure out whether your dataset represents language as it tends to behave in the natural world or if there are things that are more unnatural about it. If you’re someone who enjoys optimization, then you can view the alpha value this widget calculates as a value to get as close as possible to 1 during dataset development. Further details on alpha values following Zipf’s law in different languages is available here. In general, an alpha greater than 2 or a minimum rank greater than 10 (take with a grain of salt) means that your distribution is relatively unnatural for natural language. This can be a sign of mixed artefacts in the dataset, such as HTML markup. You can use this information to clean up your dataset or to guide you in determining how further language you add to the dataset should be distributed. ### Comparison statistics *This begins to answer questions like “What kinds of topics, biases, and associations are in this dataset?”* - Embedding clusters to pinpoint any clusters of similar language in the dataset. Taking in the diversity of text represented in a dataset can be challenging when it is made up of hundreds to hundreds of thousands of sentences. Grouping these text items based on a measure of similarity can help users gain some insights into their distribution. We show a hierarchical clustering of the text fields in the dataset based on a [Sentence-Transformer](https://hf.co/sentence-transformers/all-mpnet-base-v2) model and a maximum dot product [single-linkage criterion](https://en.wikipedia.org/wiki/Single-linkage_clustering). To explore the clusters, you can: - hover over a node to see the 5 most representative examples (deduplicated) - enter an example in the text box to see which leaf clusters it is most similar to - select a cluster by ID to show all of its examples - The [normalized pointwise mutual information (nPMI)](https://en.wikipedia.org/wiki/Pointwise_mutual_information#Normalized_pointwise_mutual_information_(npmi)) between word pairs in the dataset, which may be used to identify problematic stereotypes. You can use this as a tool in dealing with dataset “bias”, where here the term “bias” refers to stereotypes and prejudices for identity groups along the axes of gender and sexual orientation. We will add further terms in the near future. ![image](https://user-images.githubusercontent.com/14205986/143929481-0577cf78-38b0-4418-9a22-9466302270ff.png) ## What is the status of 🤗 Data Measurements Tool development? We currently present the alpha version (v0) of the tool, demonstrating its usefulness on a handful of popular English-language datasets (e.g. SQuAD, imdb, C4, ...) available on the [Dataset Hub](https://huggingface.co/datasets), with the functionalities described above. The words that we selected for nPMI visualization are a subset of identity terms that came up frequently in the datasets that we were working with. In coming weeks and months, we will be extending the tool to: - Cover more languages and datasets present in the 🤗 Datasets library. - Provide support for user-provided datasets and iterative dataset building. - Add more features and functionalities to the tool itself. For example, we will make it possible to add your own terms for the nPMI visualization so you can pick the words that matter most to you. 
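To make the Zipf's law measurement described above a little more concrete, here is a small, self-contained sketch of how an alpha value can be estimated from raw text with a simple log-log fit. This is only an illustration of the underlying idea, not the DMT's actual implementation:

```python
# Illustrative only: estimate a Zipf alpha by fitting log(frequency) against log(rank).
from collections import Counter
import numpy as np

texts = ["the cat sat on the mat", "the dog sat on the log"]  # toy data
counts = Counter(word for text in texts for word in text.split())

freqs = np.array(sorted(counts.values(), reverse=True), dtype=float)
ranks = np.arange(1, len(freqs) + 1)

# Zipf's law: frequency is proportional to rank^(-alpha), so the slope of the
# log-log fit is -alpha; values far above 1 suggest a less natural distribution.
slope, _ = np.polyfit(np.log(ranks), np.log(freqs), deg=1)
print(f"estimated alpha: {-slope:.2f}")
```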
### Acknowledgements Thank you to Thomas Wolf for initiating this work, as well as other members of the 🤗 team (Quentin, Lewis, Sylvain, Nate, Julien C., Julien S., Clément, Omar, and many others!) for their help and support.
Getting Started with Hugging Face Transformers for IPUs with Optimum
internetoftim
November 30, 2021
graphcore-getting-started
partnerships, graphcore, guide
https://huggingface.co/blog/graphcore-getting-started
# Getting Started with Hugging Face Transformers for IPUs with Optimum Transformer models have proven to be extremely efficient on a wide range of machine learning tasks, such as natural language processing, audio processing, and computer vision. However, the prediction speed of these large models can make them impractical for latency-sensitive use cases like conversational applications or search. Furthermore, optimizing their performance in the real world requires considerable time, effort and skills that are beyond the reach of many companies and organizations. Luckily, Hugging Face has introduced [Optimum](https://huggingface.co/hardware), an open source library which makes it much easier to reduce the prediction latency of Transformer models on a variety of hardware platforms. In this blog post, you will learn how to accelerate Transformer models for the Graphcore [Intelligence Processing Unit](https://www.graphcore.ai/products/ipu) (IPU), a highly flexible, easy-to-use parallel processor designed from the ground up for AI workloads. ### Optimum Meets Graphcore IPU Through this partnership between Graphcore and Hugging Face, we are now introducing BERT as the first IPU-optimized model. We will be introducing many more of these IPU-optimized models in the coming months, spanning applications such as vision, speech, translation and text generation. Graphcore engineers have implemented and optimized BERT for our IPU systems using Hugging Face transformers to help developers easily train, fine-tune and accelerate their state-of-the-art models. ### Getting started with IPUs and Optimum Let’s use BERT as an example to help you get started with using Optimum and IPUs. In this guide, we will use an [IPU-POD16](https://www.graphcore.ai/products/mk2/ipu-pod16) system in Graphcloud, Graphcore’s cloud-based machine learning platform and follow PyTorch setup instructions found in [Getting Started with Graphcloud](https://docs.graphcore.ai/projects/graphcloud-getting-started/en/latest/index.html). Graphcore’s [Poplar SDK](https://www.graphcore.ai/developer) is already installed on the Graphcloud server. If you have a different setup, you can find the instructions that apply to your system in the [PyTorch for the IPU: User Guide](https://docs.graphcore.ai/projects/poptorch-user-guide/en/latest/intro.html). #### Set up the Poplar SDK Environment You will need to run the following commands to set several environment variables that enable Graphcore tools and Poplar libraries. On the latest system running Poplar SDK version 2.3 on Ubuntu 18.04, you can find <sdk-path> in the folder ```/opt/gc/poplar_sdk-ubuntu_18_04-2.3.0+774-b47c577c2a/```. You would need to run both enable scripts for Poplar and PopART (Poplar Advanced Runtime) to use PyTorch: ``` $ cd /opt/gc/poplar_sdk-ubuntu_18_04-2.3.0+774-b47c577c2a/ $ source poplar-ubuntu_18_04-2.3.0+774-b47c577c2a/enable.sh $ source popart-ubuntu_18_04-2.3.0+774-b47c577c2a/enable.sh ``` #### Set up PopTorch for the IPU PopTorch is part of the Poplar SDK. It provides functions that allow PyTorch models to run on the IPU with minimal code changes. 
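To give a flavour of what "minimal code changes" means, here is a tiny sketch of wrapping a plain PyTorch module for IPU execution with PopTorch. It is only meant as an illustration (the exact options depend on your model and SDK version), and it assumes an environment where PopTorch is installed, which the next steps set up.

```
import torch
import poptorch

model = torch.nn.Linear(128, 2)      # any torch.nn.Module
opts = poptorch.Options()            # IPU execution options (defaults are fine here)

# Wrapping the model compiles it for the IPU; the call itself looks like regular PyTorch.
inference_model = poptorch.inferenceModel(model, options=opts)
output = inference_model(torch.randn(4, 128))
print(output.shape)
```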
You can create and activate a PopTorch environment following the guide [Setting up PyTorch for the IPU](https://docs.graphcore.ai/projects/graphcloud-pytorch-quick-start/en/latest/pytorch_setup.html):

```
$ virtualenv -p python3 ~/workspace/poptorch_env
$ source ~/workspace/poptorch_env/bin/activate
$ pip3 install -U pip
$ pip3 install /opt/gc/poplar_sdk-ubuntu_18_04-2.3.0+774-b47c577c2a/poptorch-<sdk-version>.whl
```

#### Install Optimum Graphcore

Now that your environment has all the Graphcore Poplar and PopTorch libraries available, you need to install the latest 🤗 Optimum Graphcore package in this environment. This will be the interface between the 🤗 Transformers library and Graphcore IPUs.

Please make sure that the PopTorch virtual environment you created in the previous step is activated. Your terminal should have a prefix showing the name of the poptorch environment like below:

```
(poptorch_env) user@host:~/workspace/poptorch_env$ pip3 install optimum[graphcore] optuna
```

#### Clone Optimum Graphcore Repository

The Optimum Graphcore repository contains the sample code for using Optimum models on the IPU. You should clone the repository and change the directory to the ```examples/question-answering``` folder, which contains the IPU implementation of BERT.

```
$ git clone https://github.com/huggingface/optimum-graphcore.git
$ cd optimum-graphcore/examples/question-answering
```

Now, we will use ```run_qa.py``` to fine-tune the IPU implementation of [BERT](https://huggingface.co/bert-large-uncased) on the SQuAD1.1 dataset.

#### Run a sample to fine-tune BERT on SQuAD1.1

The ```run_qa.py``` script only works with models that have a fast tokenizer (backed by the 🤗 Tokenizers library), as it uses special features of those tokenizers. This is the case for our [BERT](https://huggingface.co/bert-large-uncased) model, and you should pass its name as the input argument to ```--model_name_or_path```. In order to use the IPU, Optimum will look for the ```ipu_config.json``` file from the path passed to the argument ```--ipu_config_name```.

```
$ python3 run_qa.py \
    --ipu_config_name=./ \
    --model_name_or_path bert-base-uncased \
    --dataset_name squad \
    --do_train \
    --do_eval \
    --output_dir /tmp/debug_squad/ \
    --overwrite_output_dir \
    --per_device_train_batch_size 2 \
    --per_device_eval_batch_size 2 \
    --learning_rate 6e-5 \
    --num_train_epochs 3 \
    --max_seq_length 384 \
    --doc_stride 128 \
    --seed 1984 \
    --lr_scheduler_type linear \
    --loss_scaling 64 \
    --weight_decay 0.01 \
    --warmup_ratio 0.1
```

### A closer look at Optimum-Graphcore

#### Getting the data

A very simple way to get datasets is to use the Hugging Face [Datasets library](https://github.com/huggingface/datasets), which makes it easy for developers to download and share datasets on the Hugging Face Hub. It also has pre-built data versioning based on git and git-lfs, so you can iterate on updated versions of the data by just pointing to the same repo.

Here, the dataset comes with the training and validation files, and dataset configs to help determine which inputs to use in each model execution phase. The argument ```--dataset_name squad``` points to [SQuAD v1.1](https://huggingface.co/datasets/squad) on the Hugging Face Hub. You could also provide your own CSV/JSON/TXT training and evaluation files, as long as they follow the same format as the SQuAD dataset or another question-answering dataset in the Datasets library.
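For instance, a quick way to peek at the data the script will consume (a small illustrative snippet, separate from `run_qa.py`):

```
from datasets import load_dataset

# The same dataset the --dataset_name squad argument resolves to.
squad = load_dataset("squad")
print(squad)                      # DatasetDict with "train" and "validation" splits
print(squad["train"][0].keys())   # id, title, context, question, answers
```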
#### Loading the pretrained model and tokenizer

To turn words into tokens, this script requires a fast tokenizer. It will show an error if you don't pass one. For reference, here's the [list](https://huggingface.co/transformers/index.html#supported-frameworks) of supported tokenizers.

```
# Tokenizer check: this script requires a fast tokenizer.
if not isinstance(tokenizer, PreTrainedTokenizerFast):
    raise ValueError(
        "This example script only works for models that have a fast tokenizer. Check out the big table of models "
        "at https://huggingface.co/transformers/index.html#supported-frameworks to find the model types that meet "
        "this requirement"
    )
```

The argument ```--model_name_or_path bert-base-uncased``` loads the [bert-base-uncased](https://huggingface.co/bert-base-uncased) model implementation available in the Hugging Face Hub.

From the Hugging Face Hub description:

"*BERT base model (uncased): Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is uncased: it does not make a difference between english and English.*"

#### Training and Validation

You can now use the ```IPUTrainer``` class available in Optimum to leverage the entire Graphcore software and hardware stack, and train your models on IPUs with minimal code changes. Thanks to Optimum, you can plug-and-play state-of-the-art hardware to train your state-of-the-art models. (A minimal usage sketch of ```IPUTrainer``` is included at the end of this post.)

<kbd>
<img src="assets/38_getting_started_graphcore/graphcore_1.png">
</kbd>

In order to train and validate the BERT model, you can pass the arguments ```--do_train``` and ```--do_eval``` to the ```run_qa.py``` script. After executing the script with the hyper-parameters above, you should see the following training results:

```
"epoch": 3.0,
"train_loss": 0.9465060763888888,
"train_runtime": 368.4015,
"train_samples": 88524,
"train_samples_per_second": 720.877,
"train_steps_per_second": 2.809
```

The validation step yields the following results:

```
***** eval metrics *****
  epoch            = 3.0
  eval_exact_match = 80.6623
  eval_f1          = 88.2757
  eval_samples     = 10784
```

You can see the rest of the IPU BERT implementation in the [Optimum-Graphcore: SQuAD Examples](https://github.com/huggingface/optimum-graphcore/tree/main/examples/question-answering).

### Resources for Optimum Transformers on IPU Systems

* [Optimum-Graphcore: SQuAD Examples](https://github.com/huggingface/optimum-graphcore/tree/main/examples/question-answering)
* [Graphcore Hugging Face Models & Datasets](https://github.com/graphcore/tutorials/tree/master/tutorials/pytorch/tut_finetuning_bert#tutorial-on-bert-fine-tuning-on-ipu)
* GitHub Tutorial: [BERT Fine-tuning on IPU using Hugging Face transformers](https://github.com/graphcore/tutorials/tree/master/tutorials/pytorch/tut_finetuning_bert#tutorial-on-bert-fine-tuning-on-ipu)
* [Graphcore Developer Portal](https://github.com/graphcore/tutorials/tree/master/tutorials/pytorch/tut_finetuning_bert#tutorial-on-bert-fine-tuning-on-ipu)
* [Graphcore GitHub](https://github.com/graphcore)
* [Graphcore SDK Containers on Docker Hub](https://hub.docker.com/u/graphcore)
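As promised above, here is a minimal sketch of what calling ```IPUTrainer``` from your own Python code can look like. Only ```IPUTrainer``` itself is named in this post; the other class names, constructor arguments and the ```Graphcore/bert-base-ipu``` configuration name are assumptions based on how Optimum usually mirrors the standard ```Trainer``` API, so please check the Optimum Graphcore documentation for the exact signatures.

```
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
from optimum.graphcore import IPUConfig, IPUTrainer, IPUTrainingArguments

model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

# IPU-specific execution parameters (pipelining, replication, ...) - assumed Hub config name.
ipu_config = IPUConfig.from_pretrained("Graphcore/bert-base-ipu")

training_args = IPUTrainingArguments(
    output_dir="output",
    per_device_train_batch_size=2,
    num_train_epochs=3,
)

trainer = IPUTrainer(
    model=model,
    ipu_config=ipu_config,
    args=training_args,
    train_dataset=train_dataset,   # tokenized SQuAD features, prepared as in run_qa.py
    eval_dataset=eval_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```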
Introducing Snowball Fight ☃️, our First ML-Agents Environment
ThomasSimonini
December 2, 2021
snowball-fight
research, rl
https://huggingface.co/blog/snowball-fight
# Introducing Snowball Fight ☃️, our First ML-Agents Environment We're excited to share our **first custom Deep Reinforcement Learning environment**: Snowball Fight 1vs1 🎉. ![gif](assets/39_introducing_snowball_fight/snowballfight.gif) Snowball Fight is a game made with Unity ML-Agents, where you shoot snowballs against a Deep Reinforcement Learning agent. The game is [**hosted on Hugging Face Spaces**](https://hf.co/spaces/launch). 👉 [You can play it online here](https://huggingface.co/spaces/ThomasSimonini/SnowballFight) In this post, we'll cover **the ecosystem we are working on for Deep Reinforcement Learning researchers and enthusiasts that use Unity ML-Agents**. ## Unity ML-Agents at Hugging Face The [Unity Machine Learning Agents Toolkit](https://github.com/Unity-Technologies/ml-agents) is an open source library that allows you to build games and simulations with Unity game engine to **serve as environments for training intelligent agents**. With this first step, our goal is to build an ecosystem on Hugging Face for Deep Reinforcement Learning researchers and enthusiasts that uses ML-Agents, with three features. 1. **Building and sharing custom environments.** We are developing and sharing exciting environments to experiment with new problems: snowball fights, racing, puzzles... All of them will be open source and hosted on the Hugging Face's Hub. 2. **Allowing you to easily host your environments, save models and share them** on the Hugging Face Hub. We have already published the Snowball Fight training environment [here](https://huggingface.co/ThomasSimonini/ML-Agents-SnowballFight-1vs1), but there will be more to come! 3. **You can now easily host your demos on Spaces** and showcase your results quickly with the rest of the ecosystem. ## Be part of the conversation: join our discord server! If you're using ML-Agents or interested in Deep Reinforcement Learning and want to be part of the conversation, **[you can join our discord server](https://discord.gg/YRAq8fMnUG)**. We just added two channels (and we'll add more in the future): - Deep Reinforcement Learning - ML-Agents [Our discord](https://discord.gg/YRAq8fMnUG) is the place where you can exchange about Hugging Face, NLP, Deep RL, and more! It's also in this discord that we'll announce all our new environments and features in the future. ## What's next? In the coming weeks and months, we will be extending the ecosystem by: - Writing some **technical tutorials on ML-Agents**. - Working on a **Snowball Fight 2vs2 version**, where the agents will collaborate in teams using [MA-POCA, a new Deep Reinforcement Learning algorithm](https://blog.unity.com/technology/ml-agents-plays-dodgeball) that trains cooperative behaviors in a team. ![screenshot2vs2](assets/39_introducing_snowball_fight/screenshot2vs2.png) - And we're building **new custom environments that will be hosted in Hugging Face**. ## Conclusion We're excited to see what you're working on with ML-Agents and how we can build features and tools **that help you to empower your work**. Don't forget to [join our discord server](https://discord.gg/YRAq8fMnUG) to be alerted of the new features.
Training CodeParrot 🦜 from Scratch
lvwerra
December 8, 2021
codeparrot
guide, research, nlp
https://huggingface.co/blog/codeparrot
# Training CodeParrot 🦜 from Scratch

In this blog post we'll take a look at what it takes to build the technology behind [GitHub Copilot](https://copilot.github.com/), an application that provides suggestions to programmers as they code. In this step-by-step guide, we'll learn how to train a large GPT-2 model called CodeParrot 🦜, entirely from scratch. CodeParrot can auto-complete your Python code - give it a spin [here](https://huggingface.co/spaces/lvwerra/codeparrot-generation). Let's get to building it from scratch!

![codeparrot](assets/40_codeparrot/codeparrot.png)

## Creating a Large Dataset of Source Code

The first thing we need is a large training dataset. With the goal of training a Python code generation model, we accessed the GitHub dump available on Google's BigQuery and filtered for all Python files. The result is a 180 GB dataset with 20 million files (available [here](http://hf.co/datasets/transformersbook/codeparrot)). After initial training experiments, we found that the duplicates in the dataset severely impacted the model performance. Further investigating the dataset, we found that:

- 0.1% of the unique files make up 15% of all files
- 1% of the unique files make up 35% of all files
- 10% of the unique files make up 66% of all files

You can learn more about our findings in [this Twitter thread](https://twitter.com/lvwerra/status/1458470994146996225).

We removed the duplicates and applied the same cleaning heuristics found in the [Codex paper](https://arxiv.org/abs/2107.03374). Codex is the model behind Copilot and is a GPT-3 model fine-tuned on GitHub code.

The cleaned dataset is still 50 GB in size and available on the Hugging Face Hub: [codeparrot-clean](http://hf.co/datasets/lvwerra/codeparrot-clean). With that, we can set up a new tokenizer and train a model.

## Initializing the Tokenizer and Model

First we need a tokenizer. Let's train one specifically on code so it splits code tokens well. We can take an existing tokenizer (e.g. GPT-2) and directly train it on our own dataset with the `train_new_from_iterator()` method. We then push it to the Hub. Note that we omit imports, argument parsing and logging from the code examples to keep the code blocks compact. But you'll find the full code including preprocessing and downstream task evaluation [here](https://github.com/huggingface/transformers/tree/master/examples/research_projects/codeparrot).

```Python
# Iterator for Training
def batch_iterator(batch_size=10):
    for _ in tqdm(range(0, args.n_examples, batch_size)):
        yield [next(iter_dataset)["content"] for _ in range(batch_size)]

# Base tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
base_vocab = list(bytes_to_unicode().values())

# Load dataset
dataset = load_dataset("lvwerra/codeparrot-clean", split="train", streaming=True)
iter_dataset = iter(dataset)

# Training and saving
new_tokenizer = tokenizer.train_new_from_iterator(batch_iterator(),
                                                  vocab_size=args.vocab_size,
                                                  initial_alphabet=base_vocab)
new_tokenizer.save_pretrained(args.tokenizer_name, push_to_hub=args.push_to_hub)
```

Learn more about tokenizers and how to build them in the [Hugging Face course](https://huggingface.co/course/chapter6/1?fw=pt).

See that inconspicuous `streaming=True` argument? This small change has a big impact: instead of downloading the full (50 GB) dataset this will stream individual samples as needed, saving a lot of disk space! Check out the [Hugging Face course](https://huggingface.co/course/chapter5/4?fw=pt) for more information on streaming.

Now, we initialize a new model.
We’ll use the same hyperparameters as GPT-2 large (1.5B parameters) and adjust the embedding layer to fit our new tokenizer also adding some stability tweaks. The `scale_attn_by_layer_idx` flag makes sure we scale the attention by the layer id and `reorder_and_upcast_attn` mainly makes sure that we compute the attention in full precision to avoid numerical issues. We push the freshly initialized model to the same repo as the tokenizer. ```Python # Load codeparrot tokenizer trained for Python code tokenization tokenizer = AutoTokenizer.from_pretrained(args.tokenizer_name) # Configuration config_kwargs = {"vocab_size": len(tokenizer), "scale_attn_by_layer_idx": True, "reorder_and_upcast_attn": True} # Load model with config and push to hub config = AutoConfig.from_pretrained('gpt2-large', **config_kwargs) model = AutoModelForCausalLM.from_config(config) model.save_pretrained(args.model_name, push_to_hub=args.push_to_hub) ``` Now that we have an efficient tokenizer and a freshly initialized model we can start with the actual training loop. ## Implementing the Training Loop We train with the [🤗 Accelerate](https://github.com/huggingface/accelerate) library which allows us to scale the training from our laptop to a multi-GPU machine without changing a single line of code. We just create an accelerator and do some argument housekeeping: ```Python accelerator = Accelerator() acc_state = {str(k): str(v) for k, v in accelerator.state.__dict__.items()} parser = HfArgumentParser(TrainingArguments) args = parser.parse_args() args = Namespace(**vars(args), **acc_state) samples_per_step = accelerator.state.num_processes * args.train_batch_size set_seed(args.seed) ``` We are now ready to train! Let's use the `huggingface_hub` client library to clone the repository with the new tokenizer and model. We will checkout to a new branch for this experiment. With that setup, we can run many experiments in parallel and in the end we just merge the best one into the main branch. ```Python # Clone model repository if accelerator.is_main_process: hf_repo = Repository(args.save_dir, clone_from=args.model_ckpt) # Checkout new branch on repo if accelerator.is_main_process: hf_repo.git_checkout(run_name, create_branch_ok=True) ``` We can directly load the tokenizer and model from the local repository. Since we are dealing with big models we might want to turn on [gradient checkpointing](https://medium.com/tensorflow/fitting-larger-networks-into-memory-583e3c758ff9) to decrease the GPU memory footprint during training. ```Python # Load model and tokenizer model = AutoModelForCausalLM.from_pretrained(args.save_dir) if args.gradient_checkpointing: model.gradient_checkpointing_enable() tokenizer = AutoTokenizer.from_pretrained(args.save_dir) ``` Next up is the dataset. We make training simpler with a dataset that yields examples with a fixed context size. To not waste too much data (some samples are too short or too long) we can concatenate many examples with an EOS token and then chunk them. ![codeparrot](assets/40_codeparrot/buffer.png) The more sequences we prepare together, the smaller the fraction of tokens we discard (the grey ones in the previous figure). Since we want to stream the dataset instead of preparing everything in advance we use an `IterableDataset`. 
The full dataset class looks as follows: ```Python class ConstantLengthDataset(IterableDataset): def __init__( self, tokenizer, dataset, infinite=False, seq_length=1024, num_of_sequences=1024, chars_per_token=3.6 ): self.tokenizer = tokenizer self.concat_token_id = tokenizer.bos_token_id self.dataset = dataset self.seq_length = seq_length self.input_characters = seq_length * chars_per_token * num_of_sequences self.epoch = 0 self.infinite = infinite def __iter__(self): iterator = iter(self.dataset) more_examples = True while more_examples: buffer, buffer_len = [], 0 while True: if buffer_len >= self.input_characters: break try: buffer.append(next(iterator)["content"]) buffer_len += len(buffer[-1]) except StopIteration: if self.infinite: iterator = iter(self.dataset) self.epoch += 1 logger.info(f"Dataset epoch: {self.epoch}") else: more_examples = False break tokenized_inputs = self.tokenizer(buffer, truncation=False)["input_ids"] all_token_ids = [] for tokenized_input in tokenized_inputs: all_token_ids.extend(tokenized_input + [self.concat_token_id]) for i in range(0, len(all_token_ids), self.seq_length): input_ids = all_token_ids[i : i + self.seq_length] if len(input_ids) == self.seq_length: yield torch.tensor(input_ids) ``` Texts in the buffer are tokenized in parallel and then concatenated. Chunked samples are then yielded until the buffer is empty and the process starts again. If we set `infinite=True` the dataset iterator restarts at its end. ```Python def create_dataloaders(args): ds_kwargs = {"streaming": True} train_data = load_dataset(args.dataset_name_train, split="train", streaming=True) train_data = train_data.shuffle(buffer_size=args.shuffle_buffer, seed=args.seed) valid_data = load_dataset(args.dataset_name_valid, split="train", streaming=True) train_dataset = ConstantLengthDataset(tokenizer, train_data, infinite=True, seq_length=args.seq_length) valid_dataset = ConstantLengthDataset(tokenizer, valid_data, infinite=False, seq_length=args.seq_length) train_dataloader = DataLoader(train_dataset, batch_size=args.train_batch_size) eval_dataloader = DataLoader(valid_dataset, batch_size=args.valid_batch_size) return train_dataloader, eval_dataloader train_dataloader, eval_dataloader = create_dataloaders(args) ``` Before we start training we need to set up the optimizer and learning rate schedule. We don’t want to apply weight decay to biases and LayerNorm weights so we use a helper function to exclude those. ```Python def get_grouped_params(model, args, no_decay=["bias", "LayerNorm.weight"]): params_with_wd, params_without_wd = [], [] for n, p in model.named_parameters(): if any(nd in n for nd in no_decay): params_without_wd.append(p) else: params_with_wd.append(p) return [{"params": params_with_wd, "weight_decay": args.weight_decay}, {"params": params_without_wd, "weight_decay": 0.0},] optimizer = AdamW(get_grouped_params(model, args), lr=args.learning_rate) lr_scheduler = get_scheduler(name=args.lr_scheduler_type, optimizer=optimizer, num_warmup_steps=args.num_warmup_steps, num_training_steps=args.max_train_steps,) ``` A big question that remains is how all the data and models will be distributed across several GPUs. This sounds like a complex task but actually only requires a single line of code with 🤗 Accelerate. 
```Python model, optimizer, train_dataloader, eval_dataloader = accelerator.prepare( model, optimizer, train_dataloader, eval_dataloader) ``` Under the hood it'll use DistributedDataParallel, which means a batch is sent to each GPU worker which has its own copy of the model. There the gradients are computed and then aggregated to update the model on each worker. ![codeparrot](assets/40_codeparrot/ddp.png) We also want to evaluate the model from time to time on the validation set so let’s write a function to do just that. This is done automatically in a distributed fashion and we just need to gather all the losses from the workers. We also want to report the [perplexity](https://huggingface.co/course/chapter7/3#perplexity-for-language-models). ```Python def evaluate(args): model.eval() losses = [] for step, batch in enumerate(eval_dataloader): with torch.no_grad(): outputs = model(batch, labels=batch) loss = outputs.loss.repeat(args.valid_batch_size) losses.append(accelerator.gather(loss)) if args.max_eval_steps > 0 and step >= args.max_eval_steps: break loss = torch.mean(torch.cat(losses)) try: perplexity = torch.exp(loss) except OverflowError: perplexity = float("inf") return loss.item(), perplexity.item() ``` We are now ready to write the main training loop. It will look pretty much like a normal PyTorch training loop. Here and there you can see that we use the accelerator functions rather than native PyTorch. Also, we push the model to the branch after each evaluation. ```Python # Train model model.train() completed_steps = 0 for step, batch in enumerate(train_dataloader, start=1): loss = model(batch, labels=batch, use_cache=False).loss loss = loss / args.gradient_accumulation_steps accelerator.backward(loss) if step % args.gradient_accumulation_steps == 0: accelerator.clip_grad_norm_(model.parameters(), 1.0) optimizer.step() lr_scheduler.step() optimizer.zero_grad() completed_steps += 1 if step % args.save_checkpoint_steps == 0: eval_loss, perplexity = evaluate(args) accelerator.wait_for_everyone() unwrapped_model = accelerator.unwrap_model(model) unwrapped_model.save_pretrained(args.save_dir, save_function=accelerator.save) if accelerator.is_main_process: hf_repo.push_to_hub(commit_message=f"step {step}") model.train() if completed_steps >= args.max_train_steps: break ``` When we call `wait_for_everyone()` and `unwrap_model()` we make sure that all workers are ready and any model layers that have been added by `prepare()` earlier are removed. We also use gradient accumulation and gradient clipping that are easily implemented. Lastly, after training is complete we run a last evaluation and save the final model and push it to the hub. ```Python # Evaluate and save the last checkpoint logger.info("Evaluating and saving model after training") eval_loss, perplexity = evaluate(args) log_metrics(step, {"loss/eval": eval_loss, "perplexity": perplexity}) accelerator.wait_for_everyone() unwrapped_model = accelerator.unwrap_model(model) unwrapped_model.save_pretrained(args.save_dir, save_function=accelerator.save) if accelerator.is_main_process: hf_repo.push_to_hub(commit_message="final model") ``` Done! That's all the code to train a full GPT-2 model from scratch with as little as 150 lines. We did not show the imports and logs of the scripts to make the code a little bit more compact. Now let's actually train it! 
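One small aside before we do: the final snippet above calls a `log_metrics` helper that we did not show. A minimal stand-in could look like the sketch below; the actual helper in the linked repository also reports metrics to experiment trackers, so treat this purely as an illustration.

```Python
# Minimal, illustrative stand-in for the `log_metrics` helper used above.
def log_metrics(step, metrics):
    logger.info(f"Step {step}: {metrics}")
    if accelerator.is_main_process:
        print(f"step {step}:", metrics)
```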
With this code we trained models for our upcoming [book on Transformers and NLP](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/): a [110M](https://hf.co/lvwerra/codeparrot-small) and [1.5B](https://hf.co/lvwerra/codeparrot) parameter GPT-2 model. We used a 16 x A100 GPU machine to train these models for 1 day and 1 week, respectively. Enough time to get a coffee and read a book or two!

## Evaluation

This is still a relatively short training time for pretraining, but we can already observe good downstream performance as compared to similar models. We evaluated the models on OpenAI's HumanEval benchmark that was introduced in the Codex paper. It measures the performance of code generation models on almost 200 coding challenges.

![codeparrot](assets/40_codeparrot/humaneval.png)

Note that we trained CodeParrot on roughly 25-30B tokens whereas GPT-neo was trained on 300B tokens and Codex on 300B (GPT-3 checkpoint) + 100B (code) tokens. So you definitely get a good bang for your buck here.

But what is this _pass@k_ metric exactly? Simply put, it measures the probability of at least one program passing the unit tests of a programming problem given _k_ candidate generations from the model. Naturally, the metric increases with k as more candidates can potentially solve the coding challenge. You can use the metric with `datasets`:

```Python
from datasets import load_metric

code_eval = load_metric("code_eval")
test_cases = ["assert add(2,3)==5"]
candidates = [["def add(a,b): return a*b", "def add(a, b): return a+b"]]
pass_at_k, results = code_eval.compute(references=test_cases, predictions=candidates, k=[1, 2])

print(pass_at_k)
# -> {'pass@1': 0.5, 'pass@2': 1.0}
```

You can also load OpenAI's HumanEval dataset with `datasets`:

```Python
from datasets import load_dataset

dataset = load_dataset("openai_humaneval")
```

Now let's have a look at what we get after anxiously waiting for so long!

## Results

Let's start slow and check if we can prompt the model to write a function to return the size of a file given its path:

**Prompt:**
```Python
def get_files_size(filename):
```

**Completion:**
```Python
    """
    Returns the file size of a file.

    :param filepath: The path to the file.
    :type filepath: str
    :return: The file size.
    :rtype: int
    """
    return os.path.getsize(filepath)
```

So far so good, but let's take this a step further and try something a bit more interesting. One of the less exciting aspects of building production-ready code is writing unit tests. Let's see if we can use CodeParrot to do this for us!

**Prompt:**
```Python
def is_even(value):
    """Returns True if value is an even number."""
    return value % 2 == 0

# setup unit tests for is_even
import unittest
```

**Completion:**
```Python
class TestIsEven(unittest.TestCase):
    """Tests for is_even function."""

    def test_is_even(self):
        """Test is_even function."""
        self.assertTrue(is_even(0))
        self.assertTrue(is_even(1))
        self.assertTrue
```

Although the test condition itself does not look quite right, the model has built all the boilerplate code for us. This alone could already be quite useful and save us a couple of minutes of time! But what we really want to do is build better transformer models.
Let's use CodeParrot to help us do that:

**Prompt:**
```Python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# build a BERT classifier
```

**Completion:**
```Python
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
model = AutoModelForSequenceClassification.from_pretrained('bert-base-uncased')
```

Looking at a few examples is nice, but by now you probably want to try it yourself. We prepared a few online demos just for that!

## Demos

You can play with the models in two demos we added to [Hugging Face Spaces](https://huggingface.co/spaces/launch). With the first you can quickly generate code with the model, and with the second you can highlight your code with the model to spot bugs!

- [Code Generation](https://hf.co/spaces/lvwerra/codeparrot-generation)
- [Code Highlighting](https://hf.co/spaces/lvwerra/codeparrot-highlighting)

You can also directly use the models from the `transformers` library:

```Python
from transformers import pipeline

pipe = pipeline('text-generation', model='lvwerra/codeparrot')
pipe('def hello_world():')
```

## Summary

In this short blog post we walked through all the steps involved in training a large GPT-2 model called CodeParrot 🦜 for code generation. Using 🤗 Accelerate we built a training script with less than 200 lines of code that we can effortlessly scale across many GPUs. With that you can now train your own GPT-2 model!

This post gives a brief overview of CodeParrot 🦜, but if you are interested in diving deeper into how to pretrain these models, we recommend reading its dedicated chapter in the upcoming [book on Transformers and NLP](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). This chapter provides many more details on building custom datasets, design considerations when training a new tokenizer, and architecture choice.
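As one last, optional snippet (not from the original post), the same pipeline can also sample several candidate completions per prompt, which is exactly what a _pass@k_-style evaluation like the one above needs. The generation parameters below are purely illustrative, not the settings used for the reported numbers:

```Python
from transformers import pipeline

pipe = pipeline("text-generation", model="lvwerra/codeparrot")

# Sample k=4 candidate completions for a single prompt (illustrative settings).
candidates = pipe("def add(a, b):", do_sample=True, temperature=0.8,
                  num_return_sequences=4, max_new_tokens=32)
for candidate in candidates:
    print(candidate["generated_text"])
    print("-" * 40)
```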
Perceiver IO: a scalable, fully-attentional model that works on any modality
nielsr
December 15, 2021
perceiver
research, guide, nlp, audio, cv
https://huggingface.co/blog/perceiver
# Perceiver IO: a scalable, fully-attentional model that works on any modality ### TLDR We've added [Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver) to Transformers, the first Transformer-based neural network that works on all kinds of modalities (text, images, audio, video, point clouds,...) and combinations thereof. Take a look at the following Spaces to view some examples: - predicting [optical flow](https://huggingface.co/spaces/nielsr/perceiver-optical-flow) between images - [classifying images](https://huggingface.co/spaces/nielsr/perceiver-image-classification). We also provide [several notebooks](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/Perceiver). Below, you can find a technical explanation of the model. ### Introduction The [Transformer](https://arxiv.org/abs/1706.03762), originally introduced by Vaswani et al. in 2017, caused a revolution in the AI community, initially improving state-of-the-art (SOTA) results in machine translation. In 2018, [BERT](https://arxiv.org/abs/1810.04805) was released, a Transformer encoder-only model that crushed the benchmarks of natural language processing (NLP), most famously the [GLUE benchmark](https://gluebenchmark.com/). Not long after that, AI researchers started to apply the idea of BERT to other domains. To name a few examples: * [Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2) by Facebook AI illustrated that the architecture could be extended to audio * the [Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit) by Google AI showed that the architecture works really well for vision * most recently the [Video Vision transformer (ViViT)](https://arxiv.org/abs/2103.15691), also by Google AI, applied the architecture to video. In all of these domains, state-of-the-art results were improved dramatically, thanks to the combination of this powerful architecture with large-scale pre-training. However, there's an important limitation to the architecture of the Transformer: due to its [self-attention mechanism](https://jalammar.github.io/illustrated-transformer/), it scales [very poorly](https://arxiv.org/abs/2009.06732v2) in both compute and memory. In every layer, all inputs are used to produce queries and keys, for which a pairwise dot product is computed. Hence, it is not possible to apply self-attention on high-dimensional data without some form of preprocessing. Wav2Vec2, for example, solves this by employing a feature encoder to turn a raw waveform into a sequence of time-based features. The Vision Transformer (ViT) divides an image into a sequence of non-overlapping patches, which serve as "tokens". The Video Vision Transformer (ViViT) extracts non-overlapping, spatio-temporal “tubes” from a video, which serve as "tokens". To make the Transformer work on a particular modality, one typically discretizes it to a sequence of tokens to make it work. ## The Perceiver The [Perceiver](https://arxiv.org/abs/2103.03206) aims to solve this limitation by employing the self-attention mechanism on a set of latent variables, rather than on the inputs. The `inputs` (which could be text, image, audio, video) are only used for doing cross-attention with the latents. This has the advantage that the bulk of compute happens in a latent space, where compute is cheap (one typically uses 256 or 512 latents). 
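To get a feel for why this matters, here is a tiny back-of-the-envelope comparison of the number of attention "pairs" involved. The numbers are only illustrative; this is not code from the model implementation:

``` python
# Rough attention-cost comparison for an input of M elements and N latents.
M, N = 2048, 256   # e.g. 2048 input bytes and 256 latent vectors

self_attention_on_inputs = M * M        # standard Transformer: quadratic in the input size
cross_attention_inputs_latents = M * N  # Perceiver cross-attention: linear in the input size
self_attention_on_latents = N * N       # Perceiver latent transformer: independent of the input

print(self_attention_on_inputs)         # 4194304
print(cross_attention_inputs_latents)   # 524288
print(self_attention_on_latents)        # 65536
```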
The resulting architecture has no quadratic dependence on the input size: the Transformer encoder only depends linearly on the input size, while latent attention is independent of it. In a follow-up paper, called [Perceiver IO](https://arxiv.org/abs/2107.14795), the authors extend this idea to let the Perceiver also handle arbitrary outputs. The idea is similar: one only uses the outputs for doing cross-attention with the latents. Note that I'll use the terms "Perceiver" and "Perceiver IO" interchangeably to refer to the Perceiver IO model throughout this blog post. In the following section, we look in a bit more detail at how Perceiver IO actually works by going over its implementation in [HuggingFace Transformers](https://github.com/huggingface/transformers), a popular library that initially implemented Transformer-based models for NLP, but is now starting to implement them for other domains as well. In the sections below, we explain in detail - in terms of shapes of tensors - how the Perceiver actually pre and post processes modalities of any kind. All Perceiver variants in HuggingFace Transformers are based on the `PerceiverModel` class. To initialize a `PerceiverModel`, one can provide 3 additional instances to the model: - a preprocessor - a decoder - a postprocessor. Note that each of these are optional. A `preprocessor` is only required in case one hasn't already embedded the `inputs` (such as text, image, audio, video) themselves. A `decoder` is only required in case one wants to decode the output of the Perceiver encoder (i.e. the last hidden states of the latents) into something more useful, such as classification logits or optical flow. A `postprocessor` is only required in case one wants to turn the output of the decoder into a specific feature (this is only required when doing auto-encoding, as we will see further). An overview of the architecture is depicted below. <img src="assets/41_perceiver/perceiver_architecture.png" width="800"> <small>The Perceiver architecture.</small> In other words, the `inputs` (which could be any modality, or a combination thereof) are first optionally preprocessed using a `preprocessor`. Next, the preprocessed inputs perform a cross-attention operation with the latent variables of the Perceiver encoder. In this operation, the latent variables produce queries (Q), while the preprocessed inputs produce keys and values (KV). After this operation, the Perceiver encoder employs a (repeatable) block of self-attention layers to update the embeddings of the latents. The encoder will finally produce a tensor of shape (batch_size, num_latents, d_latents), containing the last hidden states of the latents. Next, there's an optional `decoder`, which can be used to decode the final hidden states of the latents into something more useful, such as classification logits. This is done by performing a cross-attention operation, in which trainable embeddings are used to produce queries (Q), while the latents are used to produce keys and values (KV). Finally, there's an optional `postprocessor`, which can be used to postprocess the decoder outputs to specific features. Let's start off by showing how the Perceiver is implemented to work on text. ## Perceiver for text Suppose that one wants to apply the Perceiver to perform text classification. As the memory and time requirements of the Perceiver's self-attention mechanism don't depend on the size of the inputs, one can directly provide raw UTF-8 bytes to the model. 
This is beneficial, as familiar Transformer-based models (like [BERT](https://arxiv.org/abs/1810.04805) and [RoBERTa](https://arxiv.org/abs/1907.11692)) all employ some form of explicit tokenization, such as [WordPiece](https://research.google/pubs/pub37842/), [BPE](https://arxiv.org/abs/1508.07909) or [SentencePiece](https://arxiv.org/abs/1808.06226), which [may be harmful](https://arxiv.org/abs/2004.03720). For a fair comparison to BERT (which uses a sequence length of 512 subword tokens), the authors used input sequences of 2048 bytes. Let's say one also adds a batch dimension, then the `inputs` to the model are of shape (batch_size, 2048). The `inputs` contain the byte IDs (similar to the `input_ids` of BERT) for a single piece of text. One can use `PerceiverTokenizer` to turn a text into a sequence of byte IDs, padded up to a length of 2048:

``` python
from transformers import PerceiverTokenizer

tokenizer = PerceiverTokenizer.from_pretrained("deepmind/language-perceiver")

text = "hello world"

inputs = tokenizer(text, padding="max_length", return_tensors="pt").input_ids
```

In this case, one provides `PerceiverTextPreprocessor` as preprocessor to the model, which will take care of embedding the `inputs` (i.e. turn each byte ID into a corresponding vector), as well as adding absolute position embeddings. As decoder, one provides `PerceiverClassificationDecoder` to the model (which will turn the last hidden states of the latents into classification logits). No postprocessor is required. In other words, a Perceiver model for text classification (which is called `PerceiverForSequenceClassification` in HuggingFace Transformers) is implemented as follows:

``` python
from torch import nn
from transformers import PerceiverModel
from transformers.models.perceiver.modeling_perceiver import PerceiverTextPreprocessor, PerceiverClassificationDecoder

class PerceiverForSequenceClassification(nn.Module):
    def __init__(self, config):
        super().__init__(config)

        self.perceiver = PerceiverModel(
            config,
            input_preprocessor=PerceiverTextPreprocessor(config),
            decoder=PerceiverClassificationDecoder(
                config,
                num_channels=config.d_latents,
                trainable_position_encoding_kwargs=dict(num_channels=config.d_latents, index_dims=1),
                use_query_residual=True,
            ),
        )
```

One can already see here that the decoder is initialized with trainable position encoding arguments. Why is that? Well, let's take a look in detail at how Perceiver IO works. At initialization, `PerceiverModel` internally defines a set of latent variables, as follows:

``` python
import torch
from torch import nn

self.latents = nn.Parameter(torch.randn(config.num_latents, config.d_latents))
```

In the Perceiver IO paper, one uses 256 latents, and sets the dimensionality of the latents to 1280. If one also adds a batch dimension, the Perceiver has latents of shape (batch_size, 256, 1280). First, the preprocessor (which one provides at initialization) will take care of embedding the UTF-8 byte IDs to embedding vectors. Hence, `PerceiverTextPreprocessor` will turn the `inputs` of shape (batch_size, 2048) to a tensor of shape (batch_size, 2048, 768) - assuming that each byte ID is turned into a vector of size 768 (this is determined by the `d_model` attribute of `PerceiverConfig`). After this, Perceiver IO applies cross-attention between the latents (which produce queries) of shape (batch_size, 256, 1280) and the preprocessed inputs (which produce keys and values) of shape (batch_size, 2048, 768).
The output of this initial cross-attention operation is a tensor that has the same shape as the queries (which are the latents, in this case). In other words, the output of the cross-attention operation is of shape (batch_size, 256, 1280).

Next, a (repeatable) block of self-attention layers is applied to update the representations of the latents. Note that these don't depend on the length of the inputs (i.e. the bytes) one provided, as these were only used during the cross-attention operation. In the Perceiver IO paper, a single block of 26 self-attention layers (each of which has 8 attention heads) was used to update the representations of the latents of the text model.

Note that the output after these 26 self-attention layers still has the same shape as what one initially provided as input to the encoder: (batch_size, 256, 1280). These are also called the "last hidden states" of the latents. This is very similar to the "last hidden states" of the tokens one provides to BERT.

Ok, so now one has final hidden states of shape (batch_size, 256, 1280). Great, but one actually wants to turn these into classification logits of shape (batch_size, num_labels). How can we make the Perceiver output these?

This is handled by `PerceiverClassificationDecoder`. The idea is very similar to what was done when mapping the inputs to the latent space: one uses cross-attention. But now, the latent variables will produce keys and values, and one provides a tensor of whatever shape we'd like - in this case we'll provide a tensor of shape (batch_size, 1, num_labels) which will act as queries (the authors refer to these as "decoder queries", because they are used in the decoder). This tensor will be randomly initialized at the beginning of training, and trained end-to-end. As one can see, one just provides a dummy sequence length dimension of 1. Note that the output of a QKV attention layer always has the same shape as the shape of the queries - hence the decoder will output a tensor of shape (batch_size, 1, num_labels). The decoder then simply squeezes this tensor to have shape (batch_size, num_labels) and boom, one has classification logits<sup id="a1">[1](#f1)</sup>.

Great, isn't it? The Perceiver authors also show that it is straightforward to pre-train the Perceiver for masked language modeling, similar to BERT. This model is also available in HuggingFace Transformers, and is called `PerceiverForMaskedLM`. The only difference with `PerceiverForSequenceClassification` is that it doesn't use `PerceiverClassificationDecoder` as decoder, but rather `PerceiverBasicDecoder`, to decode the latents to a tensor of shape (batch_size, 2048, 1280). After this, a language modeling head is added, which turns it into a tensor of shape (batch_size, 2048, vocab_size). The vocabulary size of the Perceiver is only 262, namely the 256 UTF-8 byte IDs, as well as 6 special tokens. By pre-training the Perceiver on English Wikipedia and [C4](https://arxiv.org/abs/1910.10683), the authors show that it is possible to achieve an overall score of 81.8 on GLUE after fine-tuning.

## Perceiver for images

Now that we've seen how to apply the Perceiver to perform text classification, it is straightforward to apply the Perceiver to do image classification. The only difference is that we'll provide a different `preprocessor` to the model, which will embed the image `inputs`.
The Perceiver authors actually tried out 3 different ways of preprocessing:

- flattening the pixel values, applying a convolutional layer with kernel size 1 and adding learned absolute 1D position embeddings.
- flattening the pixel values and adding fixed 2D Fourier position embeddings.
- applying a 2D convolutional + maxpool layer and adding fixed 2D Fourier position embeddings.

Each of these is implemented in the Transformers library, and called `PerceiverForImageClassificationLearned`, `PerceiverForImageClassificationFourier` and `PerceiverForImageClassificationConvProcessing` respectively. They only differ in their configuration of `PerceiverImagePreprocessor`. Let's take a closer look at `PerceiverForImageClassificationLearned`. It initializes a `PerceiverModel` as follows:

``` python
from torch import nn
from transformers import PerceiverModel
from transformers.models.perceiver.modeling_perceiver import PerceiverImagePreprocessor, PerceiverClassificationDecoder

class PerceiverForImageClassificationLearned(nn.Module):
    def __init__(self, config):
        super().__init__(config)

        self.perceiver = PerceiverModel(
            config,
            input_preprocessor=PerceiverImagePreprocessor(
                config,
                prep_type="conv1x1",
                spatial_downsample=1,
                out_channels=256,
                position_encoding_type="trainable",
                concat_or_add_pos="concat",
                project_pos_dim=256,
                trainable_position_encoding_kwargs=dict(num_channels=256, index_dims=config.image_size ** 2),
            ),
            decoder=PerceiverClassificationDecoder(
                config,
                num_channels=config.d_latents,
                trainable_position_encoding_kwargs=dict(num_channels=config.d_latents, index_dims=1),
                use_query_residual=True,
            ),
        )
```

One can see that `PerceiverImagePreprocessor` is initialized with `prep_type = "conv1x1"` and that one adds arguments for the trainable position encodings. So how does this preprocessor work in detail? Suppose that one provides a batch of images to the model. Let's say one applies center cropping to a resolution of 224 and normalization of the color channels first, such that the `inputs` are of shape (batch_size, num_channels, height, width) = (batch_size, 3, 224, 224). One can use `PerceiverImageProcessor` for this, as follows:

``` python
from transformers import PerceiverImageProcessor
import requests
from PIL import Image

processor = PerceiverImageProcessor.from_pretrained("deepmind/vision-perceiver")

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(image, return_tensors="pt").pixel_values
```

`PerceiverImagePreprocessor` (with the settings defined above) will first apply a convolutional layer with kernel size (1, 1) to turn the `inputs` into a tensor of shape (batch_size, 256, 224, 224) - hence increasing the channel dimension. It will then place the channel dimension last - so now one has a tensor of shape (batch_size, 224, 224, 256). Next, it flattens the spatial (height + width) dimensions such that one has a tensor of shape (batch_size, 50176, 256). Next, it concatenates it with trainable 1D position embeddings. As the dimensionality of the position embeddings is defined to be 256 (see the `num_channels` argument above), one is left with a tensor of shape (batch_size, 50176, 512). This tensor will be used for the cross-attention operation with the latents. The authors use 512 latents for all image models, and set the dimensionality of the latents to 1024. Hence, the latents are a tensor of shape (batch_size, 512, 1024) - assuming we add a batch dimension.
The cross-attention layer takes the queries of shape (batch_size, 512, 1024) and keys + values of shape (batch_size, 50176, 512) as input, and produces a tensor that has the same shape as the queries, so outputs a new tensor of shape (batch_size, 512, 1024). Next, a block of 6 self-attention layers is applied repeatedly (8 times), to produce final hidden states of the latents of shape (batch_size, 512, 1024). To turn these into classification logits, `PerceiverClassificationDecoder` is used, which works similarly to the one for text classification: it uses the latents as keys + values, and uses trainable position embeddings of shape (batch_size, 1, num_labels) as queries. The output of the cross-attention operation is a tensor of shape (batch_size, 1, num_labels), which is squeezed to have classification logits of shape (batch_size, num_labels). The Perceiver authors show that the model is capable of achieving strong results compared to models designed primarily for image classification (such as [ResNet](https://arxiv.org/abs/1512.03385) or [ViT](https://arxiv.org/abs/2010.11929)). After large-scale pre-training on [JFT](https://paperswithcode.com/dataset/jft-300m), the model that uses conv+maxpool preprocessing (`PerceiverForImageClassificationConvProcessing`) achieves 84.5 top-1 accuracy on ImageNet. Remarkably, `PerceiverForImageClassificationLearned`, the model that only employs a 1D fully learned position encoding, achieves a top-1 accuracy of 72.7 despite having no privileged information about the 2D structure of images. ## Perceiver for optical flow The authors show that it's straightforward to make the Perceiver also work on optical flow, which is a decades-old problem in computer vision, with many broader applications. For an introduction to optical flow, I refer to [this blog post](https://medium.com/swlh/what-is-optical-flow-and-why-does-it-matter-in-deep-learning-b3278bb205b5). Given two images of the same scene (e.g. two consecutive frames of a video), the task is to estimate the 2D displacement for each pixel in the first image. Existing algorithms are quite hand-engineered and complex, however with the Perceiver, this becomes relatively simple. The model is implemented in the Transformers library, and available as `PerceiverForOpticalFlow`. 
It is implemented as follows:

``` python
from torch import nn
from transformers import PerceiverModel
from transformers.models.perceiver.modeling_perceiver import PerceiverImagePreprocessor, PerceiverOpticalFlowDecoder

class PerceiverForOpticalFlow(nn.Module):
    def __init__(self, config):
        super().__init__(config)

        fourier_position_encoding_kwargs_preprocessor = dict(
            num_bands=64,
            max_resolution=config.train_size,
            sine_only=False,
            concat_pos=True,
        )
        fourier_position_encoding_kwargs_decoder = dict(
            concat_pos=True, max_resolution=config.train_size, num_bands=64, sine_only=False
        )

        image_preprocessor = PerceiverImagePreprocessor(
            config,
            prep_type="patches",
            spatial_downsample=1,
            conv_after_patching=True,
            conv_after_patching_in_channels=54,
            temporal_downsample=2,
            position_encoding_type="fourier",
            # position_encoding_kwargs
            fourier_position_encoding_kwargs=fourier_position_encoding_kwargs_preprocessor,
        )

        self.perceiver = PerceiverModel(
            config,
            input_preprocessor=image_preprocessor,
            decoder=PerceiverOpticalFlowDecoder(
                config,
                num_channels=image_preprocessor.num_channels,
                output_image_shape=config.train_size,
                rescale_factor=100.0,
                use_query_residual=False,
                output_num_channels=2,
                position_encoding_type="fourier",
                fourier_position_encoding_kwargs=fourier_position_encoding_kwargs_decoder,
            ),
        )
```

As one can see, `PerceiverImagePreprocessor` is used as preprocessor (i.e. to prepare the 2 images for the cross-attention operation with the latents) and `PerceiverOpticalFlowDecoder` is used as decoder (i.e. to decode the final hidden states of the latents to an actual predicted flow). For each of the 2 frames, the authors extract a 3 x 3 patch around each pixel, leading to 3 x 3 x 3 = 27 values for each pixel (as each pixel also has 3 color channels). The authors use a training resolution of (368, 496). If one stacks 2 frames of size (368, 496) of each training example on top of each other, the `inputs` to the model are of shape (batch_size, 2, 27, 368, 496).

The preprocessor (with the settings defined above) will first concatenate the frames along the channel dimension, leading to a tensor of shape (batch_size, 368, 496, 54) - assuming one also moves the channel dimension to be last. The authors explain in their paper (page 8) why concatenation along the channel dimension makes sense. Next, the spatial dimensions are flattened, leading to a tensor of shape (batch_size, 368*496, 54) = (batch_size, 182528, 54), and the `conv_after_patching` option projects the 54 channels to `out_channels` (64 by default) with a linear layer. Then, position embeddings (each of which has dimensionality 258) are concatenated, leading to a final preprocessed input of shape (batch_size, 182528, 322). These will be used to perform cross-attention with the latents.

The authors use 2048 latents for the optical flow model (yes, 2048!), with a dimensionality of 512 for each latent. Hence, the latents have shape (batch_size, 2048, 512). After the cross-attention, one again has a tensor of the same shape (as the latents act as queries). Next, a single block of 24 self-attention layers (each of which has 16 attention heads) is applied to update the embeddings of the latents.

To decode the final hidden states of the latents to an actual predicted flow, `PerceiverOpticalFlowDecoder` simply uses the preprocessed inputs of shape (batch_size, 182528, 322) as queries for the cross-attention operation. Next, these are projected to a tensor of shape (batch_size, 182528, 2). Finally, one rescales and reshapes this back to the original image size to get a predicted flow of shape (batch_size, 368, 496, 2).
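For reference, here is a minimal sketch of running this end to end with random inputs, just to check the shapes described above. It assumes the `deepmind/optical-flow-perceiver` checkpoint on the Hub and that the predicted flow comes back in `outputs.logits`; in practice you would feed real 3 x 3 patches extracted from two consecutive frames instead of random numbers.

``` python
import torch
from transformers import PerceiverForOpticalFlow

model = PerceiverForOpticalFlow.from_pretrained("deepmind/optical-flow-perceiver")

# (batch_size, 2 frames, 27 values per pixel, height, width), as described above.
patches = torch.randn(1, 2, 27, 368, 496)

with torch.no_grad():
    outputs = model(inputs=patches)

print(outputs.logits.shape)  # expected: torch.Size([1, 368, 496, 2])
```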
The authors claim state-of-the-art results on important benchmarks including [Sintel](https://link.springer.com/chapter/10.1007/978-3-642-33783-3_44) and [KITTI](http://www.cvlibs.net/publications/Menze2015CVPR.pdf) when training on [AutoFlow](https://arxiv.org/abs/2104.14544), a large synthetic dataset of 400,000 annotated image pairs. The video below shows the predicted flow on 2 examples. <p float="left"> <img src="https://lh3.googleusercontent.com/Rkhzc3Ckl4oWrOjxviohVmK4ZYGvGGrxaXCaOgBl3YGdBuHeFcQG_0-QjenoHKlTsHR6_6LpmCYu2bghEEzWdpYYp6QksFi0nkI3RNkdJEP-6c13bg=w2048-rw-v1" width="300" style="display:inline" /> <img src="https://lh3.googleusercontent.com/p51q5x-JYJKltxxUtp60lUViVguTnxBpw7dQFfs47FTWpaj3iTmz2RJCGuiIEEpIoJKhZBU19W_k85lJ-8AtywD9YiVXc5KbiubvZakz2qFrNMj-cA=w2048-rw-v1" width="300" style="display:inline" /> <img src="assets/41_perceiver/flow_legend.jpeg" width="300" /> </p> <small> Optical flow estimation by Perceiver IO. The colour of each pixel shows the direction and speed of motion estimated by the model, as indicated by the legend on the right.</small> ## Perceiver for multimodal autoencoding The authors also use the Perceiver for multimodal autoencoding. The goal of multimodal autoencoding is to learn a model that can accurately reconstruct multimodal inputs in the presence of a bottleneck induced by an architecture. The authors train the model on the [Kinetics-700 dataset](https://deepmind.com/research/open-source/kinetics), in which each example consists of a sequence of images (i.e. frames), audio and a class label (one of 700 possible labels). This model is also implemented in HuggingFace Transformers, and available as `PerceiverForMultimodalAutoencoding`. For brevity, I will omit the code of defining this model, but important to note is that it uses `PerceiverMultimodalPreprocessor` to prepare the `inputs` for the model. This preprocessor will first use the respective preprocessor for each modality (image, audio, label) separately. Suppose one has a video of 16 frames of resolution 224x224 and 30,720 audio samples, then the modalities are preprocessed as follows: - The images - actually a sequence of frames - of shape (batch_size, 16, 3, 224, 224) are turned into a tensor of shape (batch_size, 50176, 243) using `PerceiverImagePreprocessor`. This is a “space to depth” transformation, after which fixed 2D Fourier position embeddings are concatenated. - The audio has shape (batch_size, 30720, 1) and is turned into a tensor of shape (batch_size, 1920, 401) using `PerceiverAudioPreprocessor` (which concatenates fixed Fourier position embeddings to the raw audio). - The class label of shape (batch_size, 700) is turned into a tensor of shape (batch_size, 1, 700) using `PerceiverOneHotPreprocessor`. In other words, this preprocessor just adds a dummy time (index) dimension. Note that one initializes the class label with a tensor of zeros during evaluation, so as to let the model act as a video classifier. Next, `PerceiverMultimodalPreprocessor` will pad the preprocessed modalities with modality-specific trainable embeddings to make concatenation along the time dimension possible. In this case, the modality with the highest channel dimension is the class label (it has 700 channels). The authors enforce a minimum padding size of 4, hence each modality will be padded to have 704 channels. They can then be concatenated, hence the final preprocessed input is a tensor of shape (batch_size, 50176 + 1920 + 1, 704) = (batch_size, 52097, 704). 
The authors use 784 latents, with a dimensionality of 512 for each latent. Hence, the latents have shape (batch_size, 784, 512). After the cross-attention, one again has a tensor of the same shape (as the latents act as queries). Next, a single block of 8 self-attention layers (each of which has 8 attention heads) is applied to update the embeddings of the latents. Next, there is `PerceiverMultimodalDecoder`, which will first create output queries for each modality separately. However, as it is not possible to decode an entire video in a single forward pass, the authors instead auto-encode in chunks. Each chunk will subsample certain index dimensions for every modality. Let's say we process the video in 128 chunks, then the decoder queries will be produced as follows: - For the image modality, the total size of the decoder query is 16x3x224x224 = 802,816. However, when auto-encoding the first chunk, one subsamples the first 802,816/128 = 6272 values. The shape of the image output query is (batch_size, 6272, 195) - the 195 comes from the fact that fixed Fourier position embeddings are used. - For the audio modality, the total input has 30,720 values. However, one only subsamples the first 30720/128/16 = 15 values. Hence, the shape of the audio query is (batch_size, 15, 385). Here, the 385 comes from the fact that fixed Fourier position embeddings are used. - For the class label modality, there's no need to subsample. Hence, the subsampled index is set to 1. The shape of the label output query is (batch_size, 1, 1024). One uses trainable position embeddings (of size 1024) for the queries. Similarly to the preprocessor, `PerceiverMultimodalDecoder` pads the different modalities to the same number of channels, to make concatenation of the modality-specific queries possible along the time dimension. Here, the class label has again the highest number of channels (1024), and the authors enforce a minimum padding size of 2, hence every modality will be padded to have 1026 channels. After concatenation, the final decoder query has shape (batch_size, 6272 + 15 + 1, 1026) = (batch_size, 6288, 1026). This tensor produces queries in the cross-attention operation, while the latents act as keys and values. Hence, the output of the cross-attention operation is a tensor of shape (batch_size, 6288, 1026). Next, `PerceiverMultimodalDecoder` employs a linear layer to reduce the output channels to get a tensor of shape (batch_size, 6288, 512). Finally, there is `PerceiverMultimodalPostprocessor`. This class postprocesses the output of the decoder to produce an actual reconstruction of each modality. It first splits up the time dimension of the decoder output according to the different modalities: (batch_size, 6272, 512) for image, (batch_size, 15, 512) for audio and (batch_size, 1, 512) for the class label. Next, the respective postprocessors for each modality are applied: - The image post processor (which is called `PerceiverProjectionPostprocessor` in Transformers) simply turns the (batch_size, 6272, 512) tensor into a tensor of shape (batch_size, 6272, 3) - i.e. it projects the final dimension to RGB values. - `PerceiverAudioPostprocessor` turns the (batch_size, 15, 512) tensor into a tensor of shape (batch_size, 240). - `PerceiverClassificationPostprocessor` simply takes the first (and only index), to get a tensor of shape (batch_size, 700). So now one ends up with tensors containing the reconstruction of the image, audio and class label modalities respectively. 
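In code, the chunked decoding described above roughly looks as follows. This is a hedged sketch based on the shapes mentioned in this section; the `deepmind/multimodal-perceiver` checkpoint name and the `subsampled_output_points` argument reflect my understanding of the current implementation and should be treated as assumptions rather than a verbatim recipe.

``` python
import numpy as np
import torch
from transformers import PerceiverForMultimodalAutoencoding

model = PerceiverForMultimodalAutoencoding.from_pretrained("deepmind/multimodal-perceiver")

# Dummy video: 16 frames of 224x224, 30,720 audio samples, zeroed label (video classification mode)
images = torch.randn((1, 16, 3, 224, 224))
audio = torch.randn((1, 30720, 1))
inputs = dict(image=images, audio=audio, label=torch.zeros((1, 700)))

nchunks = 128
image_chunk_size = np.prod((16, 224, 224)) // nchunks                            # 6272 image positions per chunk
audio_chunk_size = audio.shape[1] // model.config.samples_per_patch // nchunks   # 15 audio positions per chunk

# Decode only the first chunk; looping chunk_idx over range(nchunks) reconstructs the full video
chunk_idx = 0
subsampling = {
    "image": torch.arange(image_chunk_size * chunk_idx, image_chunk_size * (chunk_idx + 1)),
    "audio": torch.arange(audio_chunk_size * chunk_idx, audio_chunk_size * (chunk_idx + 1)),
    "label": None,
}

with torch.no_grad():
    outputs = model(inputs=inputs, subsampled_output_points=subsampling)
```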
As one auto-encodes an entire video in chunks, one needs to concatenate the reconstruction of each chunk to have a final reconstruction of an entire video. The figure below shows an example: <p float="left"> <img src="assets/41_perceiver/original_video.gif" width="200" style="display:inline"> <img src="assets/41_perceiver/reconstructed_video.gif" width="200" style="display:inline"> <img src="assets/41_perceiver/perceiver_audio_autoencoding.png" width="400"> </p> <small>Above: original video (left), reconstruction of the first 16 frames (right). Video taken from the [UCF101 dataset](https://www.crcv.ucf.edu/data/UCF101.php). Below: reconstructed audio (taken from the paper). </small> <img src="assets/41_perceiver/predicted_labels.png" width="500"> <small>Top 5 predicted labels for the video above. By masking the class label, the Perceiver becomes a video classifier. </small> With this approach, the model learns a joint distribution across 3 modalities. The authors do note that because the latent variables are shared across modalities and not explicitly allocated between them, the quality of reconstructions for each modality is sensitive to the weight of its loss term and other training hyperparameters. By putting stronger emphasis on classification accuracy, they are able to reach 45% top-1 accuracy while maintaining 20.7 PSNR (peak signal-to-noise ratio) for video. ## Other applications of the Perceiver Note that there are no limits on the applications of the Perceiver! In the original [Perceiver paper](https://arxiv.org/abs/2103.03206), the authors showed that the architecture can be used to process 3D point clouds – a common concern for self-driving cars equipped with Lidar sensors. They trained the model on [ModelNet40](https://modelnet.cs.princeton.edu/), a dataset of point clouds derived from 3D triangular meshes spanning 40 object categories. The model was shown to achieve a top-1 accuracy of 85.7 % on the test set, competing with [PointNet++](https://arxiv.org/abs/1706.02413), a highly specialized model that uses extra geometric features and performs more advanced augmentations. The authors also used the Perceiver to replace the original Transformer in [AlphaStar](https://deepmind.com/blog/article/alphastar-mastering-real-time-strategy-game-starcraft-ii), the state-of-the-art reinforcement learning system for the complex game of [StarCraft II](https://starcraft2.com/en-us/). Without tuning any additional parameters, the authors observed that the resulting agent reached the same level of performance as the original AlphaStar agent, reaching an 87% win-rate versus the Elite bot after [behavioral cloning](https://proceedings.neurips.cc/paper/1988/file/812b4ba287f5ee0bc9d43bbf5bbe87fb-Paper.pdf) on human data. It is important to note that the models currently implemented (such as `PerceiverForImageClassificationLearned`, `PerceiverForOpticalFlow`) are just examples of what you can do with the Perceiver. Each of these are different instances of `PerceiverModel`, just with a different preprocessor and/or decoder (and optionally, a postprocessor as is the case for multimodal autoencoding). People can come up with new preprocessors, decoders and postprocessors to make the model solve different problems. For instance, one could extend the Perceiver to perform named-entity recognition (NER) or question-answering similar to BERT, audio classification similar to Wav2Vec2 or object detection similar to DETR. 
## Conclusion

In this blog post, we went over the architecture of Perceiver IO, an extension of the Perceiver by Google DeepMind, and showed its generality in handling all kinds of modalities. The big advantage of the Perceiver is that the compute and memory requirements of the self-attention mechanism don't depend on the size of the inputs and outputs, as the bulk of compute happens in a latent space (a not-too-large set of vectors). Despite its task-agnostic architecture, the model is capable of achieving great results on modalities such as language, vision, multimodal data, and point clouds. In the future, it might be interesting to train a single (shared) Perceiver encoder on several modalities at the same time, and use modality-specific preprocessors and postprocessors. As [Karpathy puts it](https://twitter.com/karpathy/status/1424469507658031109), it may well be that this architecture can unify all modalities into a shared space, with a library of encoders/decoders.

Speaking of a library, the model is available in [HuggingFace Transformers](https://github.com/huggingface/transformers) as of today. It will be exciting to see what people build with it, as its applications seem endless!

### Appendix

The implementation in HuggingFace Transformers is based on the original JAX/Haiku implementation which can be found [here](https://github.com/deepmind/deepmind-research/tree/master/perceiver).

The documentation of the Perceiver IO model in HuggingFace Transformers is available [here](https://huggingface.co/docs/transformers/model_doc/perceiver).

Tutorial notebooks regarding the Perceiver on several modalities can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/Perceiver).

## Footnotes

<b id="f1">1</b> Note that in the official paper, the authors used a two-layer MLP to generate the output logits, which was omitted here for brevity. [↩](#a1)
Gradio joins Hugging Face!
abidlabs
December 21, 2021
gradio-joins-hf
community, open-source-collab
https://huggingface.co/blog/gradio-joins-hf
# Gradio is joining Hugging Face! <p>&nbsp;</p> _Gradio is joining Hugging Face! By acquiring Gradio, a machine learning startup, Hugging Face will be able to offer users, developers, and data scientists the tools needed to get to high level results and create better models and tools..._ Hmm, paragraphs about acquisitions like the one above are so common that an algorithm could write them. In fact, one did!! This first paragraph was written with the [Acquisition Post Generator](https://huggingface.co/spaces/abidlabs/The-Acquisition-Post-Generator), a machine learning demo on **Hugging Face Spaces**. You can run it yourself in your browser: provide the names of any two companies and you'll get a reasonable-sounding start to an article announcing their acquisition! The Acquisition Post Generator was built using our open-source Gradio library -- it is just one of our recent collaborations with Hugging Face. And I'm excited to announce that these collaborations are culminating in... 🥁 **Hugging Face's acquisition of Gradio** (so yes, that first paragraph might have been written by an algorithm but it's true!) <img class="max-w-full mx-auto my-6" style="width: 54rem" src="/blog/assets/42_gradio_joins_hf/screenshot.png"> As one of the founders of Gradio, I couldn't be more excited about the next step in our journey. I still remember clearly how we started in 2019: as a PhD student at Stanford, I struggled to share a medical computer vision model with one of my collaborators, who was a doctor. I needed him to test my machine learning model, but he didn't know Python and couldn't easily run the model on his own images. I envisioned a tool that could make it super simple for machine learning engineers to build and share demos of computer vision models, which in turn would lead to better feedback and more reliable models 🔁 I recruited my talented housemates Ali Abdalla, Ali Abid, and Dawood Khan to release the first version of Gradio in 2019. We steadily expanded to cover more areas of machine learning including text, speech, and video. We found that it wasn't just researchers who needed to share machine learning models: interdisciplinary teams in industry, from startups to public companies, were building models and needed to debug them internally or showcase them externally. Gradio could help with both. Since we first released the library, more than 300,000 demos have been built with Gradio. We couldn't have done this without our community of contributors, our supportive investors, and the amazing Ahsen Khaliq who joined our company this year. Demos and GUIs built with Gradio give the power of machine learning to more and more people because they allow non-technical users to access, use, and give feedback on models. And our acquisition by Hugging Face is the next step in this ongoing journey of accessibility. Hugging Face has already radically democratized machine learning so that any software engineer can use state-of-the-art models with a few lines of code. By working together with Hugging Face, we're taking this even further so that machine learning is accessible to literally anyone with an internet connection and a browser. With Hugging Face, we are going to keep growing Gradio and make it the best way to share your machine learning model with anyone, anywhere 🚀 In addition to the shared mission of Gradio and Hugging Face, what delights me is the team that we are joining. Hugging Face's remarkable culture of openness and innovation is well-known. 
Over the past few months, I've gotten to know the founders as well: they are wonderful people who genuinely care about every single person at Hugging Face and are willing to go to bat for them. On behalf of the entire Gradio team, we couldn't be more thrilled to be working with them to build the future of machine learning 🤗 Also: [we are hiring!!](https://apply.workable.com/huggingface/) ❤️
Active Learning with AutoNLP and Prodigy
abhishek
December 23, 2021
autonlp-prodigy
research, partnerships, nlp
https://huggingface.co/blog/autonlp-prodigy
# Active Learning with AutoNLP and Prodigy

Active learning in the context of Machine Learning is a process in which you iteratively add labeled data, retrain a model and serve it to the end user. It is an endless process and requires human interaction for labeling/creating the data. In this article, we will discuss how to use [AutoNLP](https://huggingface.co/autonlp) and [Prodigy](https://prodi.gy/) to build an active learning pipeline.

## AutoNLP

[AutoNLP](https://huggingface.co/autonlp) is a framework created by Hugging Face that helps you to build your own state-of-the-art deep learning models on your own dataset with almost no coding at all. AutoNLP is built on the giant shoulders of Hugging Face's [transformers](https://github.com/huggingface/transformers), [datasets](https://github.com/huggingface/datasets), [inference-api](https://huggingface.co/inference-api) and many other tools.

With AutoNLP, you can train SOTA transformer models on your own custom dataset, fine-tune them (automatically) and serve them to the end-user. All models trained with AutoNLP are state-of-the-art and production-ready.

At the time of writing this article, AutoNLP supports tasks like binary classification, regression, multi-class classification, token classification (such as named entity recognition or part of speech), question answering, summarization and more. You can find a list of all the supported tasks [here](https://huggingface.co/autonlp/). AutoNLP supports languages like English, French, German, Spanish, Hindi, Dutch, Swedish and many more. There is also support for custom models with custom tokenizers (in case your language is not supported by AutoNLP).

## Prodigy

[Prodigy](https://prodi.gy/) is an annotation tool developed by Explosion (the makers of [spaCy](https://spacy.io/)). It is a web-based tool that allows you to annotate your data in real time. Prodigy supports NLP tasks such as named entity recognition (NER) and text classification, but it's not limited to NLP! It supports Computer Vision tasks and even creating your own tasks! You can try the Prodigy demo [here](https://prodi.gy/demo).

Note that Prodigy is a commercial tool. You can find out more about it [here](https://prodi.gy/buy).

We chose Prodigy as it is one of the most popular tools for labeling data and is infinitely customizable. It is also very easy to set up and use.

## Dataset

Now begins the most interesting part of this article. After looking at a lot of datasets and different types of problems, we stumbled upon the BBC News Classification dataset on Kaggle. This dataset was used in an in-class competition and can be accessed [here](https://www.kaggle.com/c/learn-ai-bbc).

Let's take a look at this dataset:

<img src="assets/43_autonlp_prodigy/data_view.png" width=500 height=250>

As we can see, this is a classification dataset. There is a `Text` column which is the text of the news article and a `Category` column which is the class of the article. Overall, there are 5 different classes: `business`, `entertainment`, `politics`, `sport` & `tech`.

Training a multi-class classification model on this dataset using AutoNLP is a piece of cake.

Step 1: Download the dataset.

Step 2: Open [AutoNLP](https://ui.autonlp.huggingface.co/) and create a new project.

<img src="assets/43_autonlp_prodigy/autonlp_create_project.png">

Step 3: Upload the training dataset and choose auto-splitting.

<img src="assets/43_autonlp_prodigy/autonlp_data_multi_class.png">

Step 4: Accept the pricing and train your models.
<img src="assets/43_autonlp_prodigy/autonlp_estimate.png"> Please note that in the above example, we are training 15 different multi-class classification models. AutoNLP pricing can be as low as $10 per model. AutoNLP will select the best models and do hyperparameter tuning for you on its own. So, now, all we need to do is sit back, relax and wait for the results. After around 15 minutes, all models finished training and the results are ready. It seems like the best model scored 98.67% accuracy! <img src="assets/43_autonlp_prodigy/autonlp_multi_class_results.png"> So, we are now able to classify the articles in the dataset with an accuracy of 98.67%! But wait, we were talking about active learning and Prodigy. What happened to those? 🤔 We did use Prodigy as we will see soon. We used it to label this dataset for the named entity recognition task. Before starting the labeling part, we thought it would be cool to have a project in which we are not only able to detect the entities in news articles but also categorize them. That's why we built this classification model on existing labels. ## Active Learning The dataset we used did have categories but it didn't have labels for entity recognition. So, we decided to use Prodigy to label the dataset for another task: named entity recognition. Once you have Prodigy installed, you can simply run: $ prodigy ner.manual bbc blank:en BBC_News_Train.csv --label PERSON,ORG,PRODUCT,LOCATION Let's look at the different values: * `bbc` is the dataset that will be created by Prodigy. * `blank:en` is the `spaCy` tokenizer being used. * `BBC_News_Train.csv` is the dataset that will be used for labeling. * `PERSON,ORG,PRODUCT,LOCATION` is the list of labels that will be used for labeling. Once you run the above command, you can go to the prodigy web interface (usually at localhost:8080) and start labelling the dataset. Prodigy interface is very simple, intuitive and easy to use. The interface looks like the following: <img src="assets/43_autonlp_prodigy/prodigy_ner.png"> All you have to do is select which entity you want to label (PERSON, ORG, PRODUCT, LOCATION) and then select the text that belongs to the entity. Once you are done with one document, you can click on the green button and Prodigy will automatically provide you with next unlabelled document. ![prodigy_ner_demo](assets/43_autonlp_prodigy/prodigy.gif) Using Prodigy, we started labelling the dataset. When we had around 20 samples, we trained a model using AutoNLP. Prodigy doesn't export the data in AutoNLP format, so we wrote a quick and dirty script to convert the data into AutoNLP format: ```python import json import spacy from prodigy.components.db import connect db = connect() prodigy_annotations = db.get_dataset("bbc") examples = ((eg["text"], eg) for eg in prodigy_annotations) nlp = spacy.blank("en") dataset = [] for doc, eg in nlp.pipe(examples, as_tuples=True): try: doc.ents = [doc.char_span(s["start"], s["end"], s["label"]) for s in eg["spans"]] iob_tags = [f"{t.ent_iob_}-{t.ent_type_}" if t.ent_iob_ else "O" for t in doc] iob_tags = [t.strip("-") for t in iob_tags] tokens = [str(t) for t in doc] temp_data = { "tokens": tokens, "tags": iob_tags } dataset.append(temp_data) except: pass with open('data.jsonl', 'w') as outfile: for entry in dataset: json.dump(entry, outfile) outfile.write('\n') ``` This will provide us with a `JSONL` file which can be used for training a model using AutoNLP. 
The steps will be the same as before, except that we will select the `Token Classification` task when creating the AutoNLP project. Using the initial data we had, we trained a model using AutoNLP. The best model had an accuracy of around 86% with 0 precision and recall. We knew the model didn't learn anything. It's pretty obvious: we had only around 20 samples.

After labelling around 70 samples, we started getting some results. The accuracy went up to 92%, precision was 0.52 and recall around 0.42. We were getting some results, but still not satisfactory. In the following image, we can see how this model performs on an unseen sample.

<img src="assets/43_autonlp_prodigy/a1.png">

As you can see, the model is struggling. But it's much better than before! Previously, the model was not even able to predict anything in the same text. At least now, it's able to figure out that `Bruce` and `David` are names.

Thus, we continued. We labelled a few more samples.

Please note that, in each iteration, our dataset is getting bigger. All we are doing is uploading the new dataset to AutoNLP and letting it do the rest.

After labelling around 150 samples, we started getting some good results. The accuracy went up to 95.7%, precision was 0.64 and recall around 0.76.

<img src="assets/43_autonlp_prodigy/a3.png">

Let's take a look at how this model performs on the same unseen sample.

<img src="assets/43_autonlp_prodigy/a2.png">

WOW! This is amazing! As you can see, the model is now performing extremely well! It's able to detect many entities in the same text. The precision and recall were still a bit low and thus we continued labeling even more data. After labeling around 250 samples, we had the best results in terms of precision and recall. The accuracy went up to ~95.9% and precision and recall were 0.73 and 0.79 respectively. At this point, we decided to stop labelling and end the experimentation process. The following graph shows how the accuracy of the best model improved as we added more samples to the dataset:

<img src="assets/43_autonlp_prodigy/chart.png">

Well, it's a well-known fact that more relevant data will lead to better models and thus better results. With this experimentation, we successfully created a model that can not only classify the entities in the news articles but also categorize them. Using tools like Prodigy and AutoNLP, we invested our time and effort only to label the dataset (even that was made simpler by the interface Prodigy offers). AutoNLP saved us a lot of time and effort: we didn't have to figure out which models to use, how to train them, how to evaluate them, how to tune the parameters, which optimizer and scheduler to use, pre-processing, post-processing etc. We just needed to label the dataset and let AutoNLP do everything else.

We believe that with tools like AutoNLP and Prodigy, it's very easy to create data and state-of-the-art models. And since the whole process requires almost no coding at all, even someone without a coding background can create datasets which are generally not available to the public, train their own models using AutoNLP and share the model with everyone else in the community (or just use them for their own research / business).

We have open-sourced the best model created using this process. You can try it [here](https://huggingface.co/abhishek/autonlp-prodigy-10-3362554). The labelled dataset can also be downloaded [here](https://huggingface.co/datasets/abhishek/autonlp-data-prodigy-10).

Models are only state-of-the-art because of the data they are trained on.
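If you want to try the released model yourself, something along these lines should work with the `transformers` pipeline API (the example sentence is made up):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="abhishek/autonlp-prodigy-10-3362554",
    aggregation_strategy="simple",
)

print(ner("Bruce and David met at the BBC offices in London."))
```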
Deploy GPT-J 6B for inference using Hugging Face Transformers and Amazon SageMaker
philschmid
January 11, 2022
gptj-sagemaker
partnerships, aws, guide, nlp
https://huggingface.co/blog/gptj-sagemaker
# Deploy GPT-J 6B for inference using Hugging Face Transformers and Amazon SageMaker

<script async defer src="https://unpkg.com/medium-zoom-element@0/dist/medium-zoom-element.min.js"></script>

Almost 6 months ago to the day, [EleutherAI](https://www.eleuther.ai/) released [GPT-J 6B](https://huggingface.co/EleutherAI/gpt-j-6B), an open-source alternative to [OpenAI's GPT-3](https://openai.com/blog/gpt-3-apps/). [GPT-J 6B](https://huggingface.co/EleutherAI/gpt-j-6B) is the 6 billion parameter successor to [EleutherAI's](https://www.eleuther.ai/) GPT-NEO family, a family of transformer-based language models based on the GPT architecture for text generation.

[EleutherAI](https://www.eleuther.ai/)'s primary goal is to train a model that is equivalent in size to GPT-3 and make it available to the public under an open license.

Over the last 6 months, `GPT-J` gained a lot of interest from Researchers, Data Scientists, and even Software Developers, but it remained very challenging to deploy `GPT-J` into production for real-world use cases and products.

There are some hosted solutions to use `GPT-J` for production workloads, like the [Hugging Face Inference API](https://huggingface.co/inference-api), or for experimenting using [EleutherAI's 6B playground](https://6b.eleuther.ai/), but fewer examples on how to easily deploy it into your own environment.

In this blog post, you will learn how to easily deploy `GPT-J` using [Amazon SageMaker](https://aws.amazon.com/de/sagemaker/) and the [Hugging Face Inference Toolkit](https://github.com/aws/sagemaker-huggingface-inference-toolkit) with a few lines of code for scalable, reliable, and secure real-time inference using a regular-sized GPU instance with an NVIDIA T4 (~$500/month).

But before we get into it, I want to explain why deploying `GPT-J` into production is challenging.

---

## Background

The weights of the 6 billion parameter model represent a ~24GB memory footprint. To load it in float32, one would need at least 2x the model size in CPU RAM: 1x for the initial weights and another 1x to load the checkpoint. So for `GPT-J` it would require at least 48GB of CPU RAM to just load the model.

To make the model more accessible, [EleutherAI](https://www.eleuther.ai/) also provides float16 weights, and `transformers` has new options to reduce the memory footprint when loading large language models. Combining all of this, it should take roughly 12.1GB of CPU RAM to load the model.

```python
from transformers import GPTJForCausalLM
import torch

model = GPTJForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B",
    revision="float16",
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True
)
```

The caveat of this example is that it takes a very long time until the model is loaded into memory and ready for use. In my experiments, it took `3 minutes and 32 seconds` to load the model with the code snippet above on a `P3.2xlarge` AWS EC2 instance (the model was not stored on disk). This duration can be reduced by storing the model already on disk, which reduces the load time to `1 minute and 23 seconds`, which is still very long for production workloads where you need to consider scaling and reliability.

For example, Amazon SageMaker has a [60s limit for requests to respond](https://docs.aws.amazon.com/general/latest/gr/sagemaker.html#sagemaker_region), meaning the model needs to be loaded and the predictions to run within 60s, which in my opinion makes a lot of sense to keep the model/endpoint scalable and reliable for your workload.
If you have longer predictions, you could use [batch-transform](https://docs.aws.amazon.com/sagemaker/latest/dg/batch-transform.html).

In [Transformers](https://github.com/huggingface/transformers), models loaded with the `from_pretrained` method follow PyTorch's [recommended practice](https://pytorch.org/tutorials/beginner/saving_loading_models.html#save-load-state-dict-recommended), which takes around `1.97 seconds` for BERT [[REF]](https://colab.research.google.com/drive/1-Y5f8PWS8ksoaf1A2qI94jq0GxF2pqQ6?usp=sharing).

PyTorch offers an [additional alternative way of saving and loading models](https://pytorch.org/tutorials/beginner/saving_loading_models.html#save-load-entire-model) using `torch.save(model, PATH)` and `torch.load(PATH)`.

*“Saving a model in this way will save the entire module using Python’s [pickle](https://docs.python.org/3/library/pickle.html) module. The disadvantage of this approach is that the serialized data is bound to the specific classes and the exact directory structure used when the model is saved.”*

This means that when we save a model with `transformers==4.13.2` it could be potentially incompatible when trying to load with `transformers==4.15.0`. However, loading models this way reduces the loading time by **~12x,** down to `0.166s` for BERT.

Applying this to `GPT-J` means that we can reduce the loading time from `1 minute and 23 seconds` down to `7.7 seconds`, which is ~10.5x faster.

<br>
<figure class="image table text-center m-0 w-full">
  <medium-zoom background="rgba(0,0,0,.7)" alt="Model Load time of BERT and GPTJ" src="assets/45_gptj_sagemaker/model_load_time.png"></medium-zoom>
  <figcaption>Figure 1. Model load time of BERT and GPTJ</figcaption>
</figure>
<br>

## Tutorial

With this method of saving and loading models, we achieved model loading performance for `GPT-J` compatible with production scenarios. But we need to keep one thing in mind:

> Align the PyTorch and Transformers versions when saving the model with `torch.save(model, PATH)` and loading it with `torch.load(PATH)` to avoid incompatibility.

### Save `GPT-J` using `torch.save`

To create our `torch.load()` compatible model file we load `GPT-J` using Transformers and the `from_pretrained` method, and then save it with `torch.save()`.

```python
from transformers import AutoTokenizer,GPTJForCausalLM
import torch

# load fp 16 model
model = GPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", revision="float16", torch_dtype=torch.float16)
# save model with torch.save
torch.save(model, "gptj.pt")
```

Now we are able to load our `GPT-J` model with `torch.load()` to run predictions.

```python
from transformers import pipeline
import torch

# load model
model = torch.load("gptj.pt")
# load tokenizer
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
# create pipeline
gen = pipeline("text-generation",model=model,tokenizer=tokenizer,device=0)
# run prediction
gen("My Name is philipp")
#[{'generated_text': 'My Name is philipp k. and I live just outside of Detroit....
```

---

### Create `model.tar.gz` for the Amazon SageMaker real-time endpoint

Since we can load our model quickly and run inference on it, let's deploy it to Amazon SageMaker.

There are two ways you can deploy transformers to Amazon SageMaker.
You can either [“Deploy a model from the Hugging Face Hub”](https://huggingface.co/docs/sagemaker/inference#deploy-a-model-from-the-%F0%9F%A4%97-hub) directly or [“Deploy a model with `model_data` stored on S3”](https://huggingface.co/docs/sagemaker/inference#deploy-with-model_data). Since we are not using the default Transformers method, we need to go with the second option and deploy our endpoint with the model stored on S3.

For this, we need to create a `model.tar.gz` artifact containing our model weights and additional files we need for inference, e.g. `tokenizer.json`.

**We provide uploaded and publicly accessible `model.tar.gz` artifacts, which can be used with the `HuggingFaceModel` to deploy `GPT-J` to Amazon SageMaker.** See [“Deploy `GPT-J` as Amazon SageMaker Endpoint”](https://www.notion.so/Deploy-GPT-J-6B-for-inference-using-Hugging-Face-Transformers-and-Amazon-SageMaker-ce65921edf2246e6a71bb3073e5b3bc7) on how to use them.

If you still want or need to create your own `model.tar.gz`, e.g. because of compliance guidelines, you can use the helper script [convert_gptj.py](https://github.com/philschmid/amazon-sagemaker-gpt-j-sample/blob/main/convert_gptj.py) for this purpose, which creates the `model.tar.gz` and uploads it to S3.

```bash
# clone directory
git clone https://github.com/philschmid/amazon-sagemaker-gpt-j-sample.git

# change directory to amazon-sagemaker-gpt-j-sample
cd amazon-sagemaker-gpt-j-sample

# create and upload model.tar.gz
pip3 install -r requirements.txt
python3 convert_gptj.py --bucket_name {model_storage}
```

The `convert_gptj.py` script should print out an S3 URI similar to this one: `s3://hf-sagemaker-inference/gpt-j/model.tar.gz`.

### Deploy `GPT-J` as Amazon SageMaker Endpoint

To deploy our Amazon SageMaker Endpoint we are going to use the [Amazon SageMaker Python SDK](https://sagemaker.readthedocs.io/en/stable/) and the `HuggingFaceModel` class.

The snippet below uses the `get_execution_role` which is only available inside Amazon SageMaker Notebook Instances or Studio. If you want to deploy a model outside of it, check [the documentation](https://huggingface.co/docs/sagemaker/train#installation-and-setup#).

The `model_uri` defines the location of our `GPT-J` model artifact. We are going to use the publicly available one provided by us.

```python
from sagemaker.huggingface import HuggingFaceModel
import sagemaker

# IAM role with permissions to create endpoint
role = sagemaker.get_execution_role()

# public S3 URI to gpt-j artifact
model_uri="s3://huggingface-sagemaker-models/transformers/4.12.3/pytorch/1.9.1/gpt-j/model.tar.gz"

# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
	model_data=model_uri,
	transformers_version='4.12.3',
	pytorch_version='1.9.1',
	py_version='py38',
	role=role,
)

# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
	initial_instance_count=1, # number of instances
	instance_type='ml.g4dn.xlarge' #'ml.p3.2xlarge' # ec2 instance type
)
```

If you want to use your own `model.tar.gz`, just replace the `model_uri` with your S3 URI.

The deployment should take around 3-5 minutes.

### Run predictions

We can run predictions using the `predictor` instance created by our `.deploy` method. To send a request to our endpoint, we use `predictor.predict` with our `inputs`.

```python
predictor.predict({
	"inputs": "Can you please let us know more details about your "
})
```

If you want to customize your predictions using additional `kwargs` like `min_length`, check out “Usage best practices” below.
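As a side note, once the endpoint is up you don't have to go through the SageMaker Python SDK to call it. Below is a hedged sketch of invoking the same endpoint from any application using `boto3`; the payload format matches what `predictor.predict` sends, and the endpoint name is taken from the predictor we just created (or from the SageMaker console).

```python
import json
import boto3

runtime = boto3.client("sagemaker-runtime")

response = runtime.invoke_endpoint(
    EndpointName=predictor.endpoint_name,  # or the endpoint name shown in the SageMaker console
    ContentType="application/json",
    Body=json.dumps({"inputs": "Can you please let us know more details about your "}),
)

print(json.loads(response["Body"].read()))
```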
## Usage best practices

When using generative models, most of the time you want to configure or customize your prediction to fit your needs, for example by using beam search, configuring the max or min length of the generated sequence, or adjusting the temperature to reduce repetition. The Transformers library provides different strategies and `kwargs` to do this, and the Hugging Face Inference Toolkit offers the same functionality through the `parameters` attribute of your request payload. Below you can find examples of how to generate text without parameters, with beam search, and using custom configurations. If you want to learn about different decoding strategies, check out this [blog post](https://huggingface.co/blog/how-to-generate).

### Default request

This is an example of a default request using `greedy` search.

Inference-time after the first request: `3s`

```python
predictor.predict({
	"inputs": "Can you please let us know more details about your "
})
```

### Beam search request

This is an example of a request using `beam` search with 5 beams.

Inference-time after the first request: `3.3s`

```python
predictor.predict({
	"inputs": "Can you please let us know more details about your ",
  "parameters" : {
    "num_beams": 5,
  }
})
```

### Parameterized request

This is an example of a request using custom parameters, e.g. `max_length` to allow up to 512 generated tokens and `temperature` to control randomness.

Inference-time after the first request: `38s`

```python
predictor.predict({
	"inputs": "Can you please let us know more details about your ",
  "parameters" : {
    "max_length": 512,
    "temperature": 0.9,
  }
})
```

### Few-Shot example (advanced)

This is an example of how you could use `eos_token_id` to stop the generation on a certain token, e.g. `\n`, `.`, or `###` for few-shot predictions. Below is a few-shot example for generating tweets for keywords.

Inference-time after the first request: `15-45s`

```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")

end_sequence="###"
temperature=4
max_generated_token_length=25
prompt= """key: markets
tweet: Take feedback from nature and markets, not from people.
###
key: children
tweet: Maybe we die so we can come back as children.
###
key: startups
tweet: Startups shouldn’t worry about how to put out fires, they should worry about how to start them.
###
key: hugging face
tweet:"""

predictor.predict({
	'inputs': prompt,
  "parameters" : {
    "max_length": int(len(prompt) + max_generated_token_length),
    "temperature": float(temperature),
    "eos_token_id": int(tokenizer.convert_tokens_to_ids(end_sequence)),
    "return_full_text":False
  }
})
```

---

To delete your endpoint, run:

```python
predictor.delete_endpoint()
```

## Conclusion

We successfully managed to deploy `GPT-J`, a 6 billion parameter language model created by [EleutherAI](https://www.eleuther.ai/), using Amazon SageMaker. We reduced the model load time from 3.5 minutes down to 8 seconds to be able to run scalable, reliable inference.

Remember that using `torch.save()` and `torch.load()` can create incompatibility issues. If you want to learn more about scaling out your Amazon SageMaker Endpoints, check out my other blog post: [“MLOps: End-to-End Hugging Face Transformers with the Hub & SageMaker Pipelines”](https://www.philschmid.de/mlops-sagemaker-huggingface-transformers).

---

Thanks for reading! If you have any questions, feel free to contact me through [Github](https://github.com/huggingface/transformers) or on the [forum](https://discuss.huggingface.co/c/sagemaker/17).
You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/).
Boost Wav2Vec2 with n-gram LM in 🤗 Transformers
patrickvonplaten
January 12, 2022
wav2vec2-with-ngram
research, guide, audio
https://huggingface.co/blog/wav2vec2-with-ngram
# Boosting Wav2Vec2 with n-grams in 🤗 Transformers <a target="_blank" href="https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Boosting_Wav2Vec2_with_n_grams_in_Transformers.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> **Wav2Vec2** is a popular pre-trained model for speech recognition. Released in [September 2020](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) by Meta AI Research, the novel architecture catalyzed progress in self-supervised pretraining for speech recognition, *e.g.* [*G. Ng et al.*, 2021](https://arxiv.org/pdf/2104.03416.pdf), [*Chen et al*, 2021](https://arxiv.org/abs/2110.13900), [*Hsu et al.*, 2021](https://arxiv.org/abs/2106.07447) and [*Babu et al.*, 2021](https://arxiv.org/abs/2111.09296). On the Hugging Face Hub, Wav2Vec2's most popular pre-trained checkpoint currently amounts to over [**250,000** monthly downloads](https://huggingface.co/facebook/wav2vec2-base-960h). Using Connectionist Temporal Classification (CTC), pre-trained Wav2Vec2-like checkpoints are extremely easy to fine-tune on downstream speech recognition tasks. In a nutshell, fine-tuning pre-trained Wav2Vec2 checkpoints works as follows: A single randomly initialized linear layer is stacked on top of the pre-trained checkpoint and trained to classify raw audio input to a sequence of letters. It does so by: 1. extracting audio representations from the raw audio (using CNN layers), 2. processing the sequence of audio representations with a stack of transformer layers, and, 3. classifying the processed audio representations into a sequence of output letters. Previously audio classification models required an additional language model (LM) and a dictionary to transform the sequence of classified audio frames to a coherent transcription. Wav2Vec2's architecture is based on transformer layers, thus giving each processed audio representation context from all other audio representations. In addition, Wav2Vec2 leverages the [CTC algorithm](https://distill.pub/2017/ctc/) for fine-tuning, which solves the problem of alignment between a varying "input audio length"-to-"output text length" ratio. Having contextualized audio classifications and no alignment problems, Wav2Vec2 does not require an external language model or dictionary to yield acceptable audio transcriptions. As can be seen in Appendix C of the [official paper](https://arxiv.org/abs/2006.11477), Wav2Vec2 gives impressive downstream performances on [LibriSpeech](https://huggingface.co/datasets/librispeech_asr) without using a language model at all. However, from the appendix, it also becomes clear that using Wav2Vec2 in combination with a language model can yield a significant improvement, especially when the model was trained on only 10 minutes of transcribed audio. Until recently, the 🤗 Transformers library did not offer a simple user interface to decode audio files with a fine-tuned Wav2Vec2 **and** a language model. This has thankfully changed. 🤗 Transformers now offers an easy-to-use integration with *Kensho Technologies'* [pyctcdecode library](https://github.com/kensho-technologies/pyctcdecode). This blog post is a step-by-step **technical** guide to explain how one can create an **n-gram** language model and combine it with an existing fine-tuned Wav2Vec2 checkpoint using 🤗 Datasets and 🤗 Transformers. We start by: 1. How does decoding audio with an LM differ from decoding audio without an LM? 2. 
How to get suitable data for a language model?
3. How to build an *n-gram* with KenLM?
4. How to combine the *n-gram* with a fine-tuned Wav2Vec2 checkpoint?

For a deep dive into how Wav2Vec2 functions - which is not necessary for this blog post - the reader is advised to consult the following material:

- [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477)
- [Fine-Tune Wav2Vec2 for English ASR with 🤗 Transformers](https://huggingface.co/blog/fine-tune-wav2vec2-english)
- [An Illustrated Tour of Wav2vec 2.0](https://jonathanbgn.com/2021/09/30/illustrated-wav2vec-2.html)

## **1. Decoding audio data with Wav2Vec2 and a language model**

As shown in the 🤗 Transformers [example docs for Wav2Vec2](https://huggingface.co/docs/transformers/master/en/model_doc/wav2vec2#transformers.Wav2Vec2ForCTC), audio can be transcribed as follows.

First, we install `datasets` and `transformers`.

```bash
pip install datasets transformers
```

Let's load a small excerpt of the [Librispeech dataset](https://huggingface.co/datasets/librispeech_asr) to demonstrate Wav2Vec2's speech transcription capabilities.

```python
from datasets import load_dataset

dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
dataset
```

**Output:**

```bash
Reusing dataset librispeech_asr (/root/.cache/huggingface/datasets/hf-internal-testing___librispeech_asr/clean/2.1.0/f2c70a4d03ab4410954901bde48c54b85ca1b7f9bf7d616e7e2a72b5ee6ddbfc)

Dataset({
    features: ['file', 'audio', 'text', 'speaker_id', 'chapter_id', 'id'],
    num_rows: 73
})
```

We can pick one of the 73 audio samples and listen to it.

```python
audio_sample = dataset[2]
audio_sample["text"].lower()
```

**Output:**

```bash
he tells us that at this festive season of the year with christmas and roast beef looming before us similes drawn from eating and its results occur most readily to the mind
```

Having chosen a data sample, we now load the fine-tuned model and processor.

```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-100h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-100h")
```

Next, we process the data

```python
inputs = processor(audio_sample["audio"]["array"], sampling_rate=audio_sample["audio"]["sampling_rate"], return_tensors="pt")
```

forward it to the model

```python
import torch

with torch.no_grad():
  logits = model(**inputs).logits
```

and decode it

```python
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
transcription[0].lower()
```

**Output:**

```bash
'he tells us that at this festive season of the year with christmaus and rose beef looming before us simalyis drawn from eating and its results occur most readily to the mind'
```

Comparing the transcription to the target transcription above, we can see that some words *sound* correct, but are not *spelled* correctly, *e.g.*:

- *christmaus* vs. *christmas*
- *rose* vs. *roast*
- *simalyis* vs. *similes*

Let's see whether combining Wav2Vec2 with an ***n-gram*** language model can help here.

First, we need to install `pyctcdecode` and `kenlm`.
```bash
pip install https://github.com/kpu/kenlm/archive/master.zip pyctcdecode
```

For demonstration purposes, we have prepared a new model repository [patrickvonplaten/wav2vec2-base-100h-with-lm](https://huggingface.co/patrickvonplaten/wav2vec2-base-100h-with-lm) which contains the same Wav2Vec2 checkpoint but has an additional **4-gram** language model for English.

Instead of using `Wav2Vec2Processor`, this time we use `Wav2Vec2ProcessorWithLM` to load the **4-gram** model in addition to the feature extractor and tokenizer.

```python
from transformers import Wav2Vec2ProcessorWithLM

processor = Wav2Vec2ProcessorWithLM.from_pretrained("patrickvonplaten/wav2vec2-base-100h-with-lm")
```

In contrast to decoding the audio without a language model, the processor now directly receives the model's output `logits` instead of the `argmax(logits)` (called `predicted_ids`) above. The reason is that when decoding with a language model, at each time step, the processor takes the probabilities of all possible output characters into account. Let's take a look at the dimension of the `logits` output.

```python
logits.shape
```

**Output:**

```bash
torch.Size([1, 624, 32])
```

We can see that the `logits` correspond to a sequence of 624 vectors, each having 32 entries. Each of the 32 entries thereby stands for the logit probability of one of the 32 possible output characters of the model:

```python
" ".join(sorted(processor.tokenizer.get_vocab()))
```

**Output:**

```bash
"' </s> <pad> <s> <unk> A B C D E F G H I J K L M N O P Q R S T U V W X Y Z |"
```

Intuitively, one can understand the decoding process of `Wav2Vec2ProcessorWithLM` as applying beam search through a matrix of size 624 \\( \times \\) 32 probabilities while leveraging the probabilities of the next letters as given by the *n-gram* language model.

OK, let's run the decoding step again. `pyctcdecode`'s language model decoder does not automatically convert `torch` tensors to `numpy` arrays, so we'll have to convert them ourselves beforehand.

```python
transcription = processor.batch_decode(logits.numpy()).text
transcription[0].lower()
```

**Output:**

```bash
'he tells us that at this festive season of the year with christmas and rose beef looming before us similes drawn from eating and its results occur most readily to the mind'
```

Cool! Recalling the words that `facebook/wav2vec2-base-100h` transcribed incorrectly without a language model previously, *e.g.*,

> - *christmaus* vs. *christmas*
> - *rose* vs. *roast*
> - *simalyis* vs. *similes*

we can take another look at the transcription of `facebook/wav2vec2-base-100h` **with** a 4-gram language model. 2 out of 3 errors are corrected; *christmas* and *similes* have been correctly transcribed.

Interestingly, the incorrect transcription of *rose* persists. However, this should not surprise us very much. Decoding audio without a language model is much more prone to yield spelling mistakes, such as *christmaus* or *simalyis* (those words don't exist in the English language as far as I know). This is because the speech recognition system almost solely bases its prediction on the acoustic input it was given and not really on the language modeling context of previous and successive predicted letters \\( {}^1 \\). If, on the other hand, we add a language model, we can be fairly sure that the speech recognition system will heavily reduce spelling errors since a well-trained *n-gram* model will surely not predict a word that has spelling errors.
But the word *rose* is a valid English word and therefore the 4-gram will predict this word with a probability that is not insignificant.

The language model on its own most likely does favor the correct word *roast* since the word sequence *roast beef* is much more common in English than *rose beef*. Because the final transcription is derived from a weighted combination of `facebook/wav2vec2-base-100h` output probabilities and those of the *n-gram* language model, it is quite common to see incorrectly transcribed words such as *rose*.

For more information on how you can tweak different parameters when decoding with `Wav2Vec2ProcessorWithLM`, please take a look at the official documentation [here](https://huggingface.co/docs/transformers/master/en/model_doc/wav2vec2#transformers.Wav2Vec2ProcessorWithLM.batch_decode).

------------------------------------------------------------------------

\\({}^1 \\) Some research shows that a model such as `facebook/wav2vec2-base-100h` - when sufficiently large and trained on enough data - can learn language modeling dependencies between intermediate audio representations similar to a language model.

Great, now that you have seen the advantages adding an *n-gram* language model can bring, let's dive into how to create an *n-gram* and `Wav2Vec2ProcessorWithLM` from scratch.

## **2. Getting data for your language model**

A language model that is useful for a speech recognition system should support the acoustic model, *e.g.* Wav2Vec2, in predicting the next word (or token, letter) and therefore model the following distribution: \\( \mathbf{P}(w_n | \mathbf{w}_0^{n-1}) \\), with \\( w_n \\) being the next word and \\( \mathbf{w}_0^{n-1} \\) being the sequence of all previous words since the beginning of the utterance. Simply put, the language model should be good at predicting the next word given all previously transcribed words, regardless of the audio input given to the speech recognition system.

As always, a language model is only as good as the data it is trained on. In the case of speech recognition, we should therefore ask ourselves what kind of data the speech recognition system will be used for: *conversations*, *audiobooks*, *movies*, *speeches*, etc.?

The language model should be good at modeling language that corresponds to the target transcriptions of the speech recognition system. For demonstration purposes, we assume here that we have fine-tuned a pre-trained [`facebook/wav2vec2-xls-r-300m`](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on [Common Voice 7](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) in Swedish. The fine-tuned checkpoint can be found [here](https://huggingface.co/hf-test/xls-r-300m-sv). Common Voice 7 is a crowd-sourced read-out audio dataset, and we will evaluate the model on its test data.

Let's now look for suitable text data on the Hugging Face Hub. We search all datasets for those [that contain Swedish data](https://huggingface.co/datasets?languages=languages:sv&sort=downloads). Browsing a bit through the datasets, we are looking for a dataset that is similar to Common Voice's read-out audio data.
The obvious choices of [oscar](https://huggingface.co/datasets/oscar) and [mc4](https://huggingface.co/datasets/mc4) might not be the most suitable here because they: - are generated from crawling the web, which might not be very clean and correspond well to spoken language - require a lot of pre-processing - are very large which is not ideal for demonstration purposes here 😉 A dataset that seems sensible here and which is relatively clean and easy to pre-process is [europarl_bilingual](https://huggingface.co/datasets/europarl_bilingual) as it's a dataset that is based on discussions and talks of the European parliament. It should therefore be relatively clean and correspond well to read-out audio data. The dataset is originally designed for machine translation and can therefore only be accessed in translation pairs. We will only extract the text of the target language, Swedish (`sv`), from the *English-to-Swedish* translations. ```python target_lang="sv" # change to your target lang ``` Let's download the data. ```python from datasets import load_dataset dataset = load_dataset("europarl_bilingual", lang1="en", lang2=target_lang, split="train") ``` We see that the data is quite large - it has over a million translations. Since it's only text data, it should be relatively easy to process though. Next, let's look at how the data was preprocessed when training the fine-tuned *XLS-R* checkpoint in Swedish. Looking at the [`run.sh` file](https://huggingface.co/hf-test/xls-r-300m-sv/blob/main/run.sh), we can see that the following characters were removed from the official transcriptions: ```python chars_to_ignore_regex = '[,?.!\-\;\:"“%‘”�—’…–]' # change to the ignored characters of your fine-tuned model ``` Let's do the same here so that the alphabet of our language model matches the one of the fine-tuned acoustic checkpoints. We can write a single map function to extract the Swedish text and process it right away. ```python import re def extract_text(batch): text = batch["translation"][target_lang] batch["text"] = re.sub(chars_to_ignore_regex, "", text.lower()) return batch ``` Let's apply the `.map()` function. This should take roughly 5 minutes. ```python dataset = dataset.map(extract_text, remove_columns=dataset.column_names) ``` Great. Let's upload it to the Hub so that we can inspect and reuse it better. You can log in by executing the following cell. ```python from huggingface_hub import notebook_login notebook_login() ``` **Output:** ```bash Login successful Your token has been saved to /root/.huggingface/token Authenticated through git-credential store but this isn't the helper defined on your machine. You might have to re-authenticate when pushing to the Hugging Face Hub. Run the following command in your terminal in case you want to set this credential helper as the default git config --global credential.helper store ``` Next, we call 🤗 Hugging Face's [`push_to_hub`](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=push#datasets.Dataset.push_to_hub) method to upload the dataset to the repo `"sv_corpora_parliament_processed"`. ```python dataset.push_to_hub(f"{target_lang}_corpora_parliament_processed", split="train") ``` That was easy! The dataset viewer is automatically enabled when uploading a new dataset, which is very convenient. You can now directly inspect the dataset online. 
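As a small side note (not part of the original walkthrough), if you just want to peek at the uploaded corpus without downloading it fully, you can stream it; the repository id below assumes the `push_to_hub` call above succeeded under the `hf-test` account used in this post.

```python
from datasets import load_dataset

# stream the uploaded corpus instead of downloading it fully
# (repo id assumed from the push_to_hub call above; adjust it to your username)
streamed = load_dataset(
    "hf-test/sv_corpora_parliament_processed", split="train", streaming=True
)

# print the beginning of the first two processed examples
for i, example in enumerate(streamed):
    print(example["text"][:120])
    if i == 1:
        break
```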
Feel free to look through our preprocessed dataset directly on [`hf-test/sv_corpora_parliament_processed`](https://huggingface.co/datasets/hf-test/sv_corpora_parliament_processed). Even if we are not native speakers of Swedish, we can see that the data is well processed and seems clean.

Next, let's use the data to build a language model.

## **3. Build an *n-gram* with KenLM**

While large language models based on the [Transformer architecture](https://jalammar.github.io/illustrated-transformer/) have become the standard in NLP, it is still very common to use an ***n-gram*** LM to boost speech recognition systems - as shown in Section 1.

Looking again at Table 9 of Appendix C of the [official Wav2Vec2 paper](https://arxiv.org/abs/2006.11477), it can be noticed that using a *Transformer*-based LM for decoding clearly yields better results than using an *n-gram* model, but the difference between *n-gram* and *Transformer*-based LM is much less significant than the difference between *n-gram* and no LM.

*E.g.*, for the large Wav2Vec2 checkpoint that was fine-tuned on 10min only, an *n-gram* reduces the word error rate (WER) compared to no LM by *ca.* 80% while a *Transformer*-based LM *only* reduces the WER by another 23% compared to the *n-gram*. This relative WER reduction becomes smaller the more data the acoustic model has been trained on. *E.g.*, for the large checkpoint a *Transformer*-based LM reduces the WER by merely 8% compared to an *n-gram* LM whereas the *n-gram* still yields a 21% WER reduction compared to no language model.

The reason why an *n-gram* is preferred over a *Transformer*-based LM is that *n-grams* come at a significantly smaller computational cost. For an *n-gram*, retrieving the probability of a word given previous words is almost only as computationally expensive as querying a look-up table or tree-like data structure - *i.e.* it's very fast compared to modern *Transformer*-based language models that would require a full forward pass to retrieve the next word probabilities.

For more information on how *n-grams* function and why they are (still) so useful for speech recognition, the reader is advised to take a look at [this excellent summary](https://web.stanford.edu/~jurafsky/slp3/3.pdf) from Stanford.

Great, let's see step-by-step how to build an *n-gram*. We will use the popular [KenLM library](https://github.com/kpu/kenlm) to do so. Let's start by installing the Ubuntu library prerequisites:

```bash
sudo apt install build-essential cmake libboost-system-dev libboost-thread-dev libboost-program-options-dev libboost-test-dev libeigen3-dev zlib1g-dev libbz2-dev liblzma-dev
```

before downloading and unpacking the KenLM repo.

```bash
wget -O - https://kheafield.com/code/kenlm.tar.gz | tar xz
```

KenLM is written in C++, so we'll make use of `cmake` to build the binaries.

```bash
mkdir kenlm/build && cd kenlm/build && cmake .. && make -j2
ls kenlm/build/bin
```

Great, as we can see, the executables have successfully been built under `kenlm/build/bin/`.

By default, KenLM computes an *n-gram* with [Kneser-Ney smoothing](https://en.wikipedia.org/wiki/Kneser%E2%80%93Ney_smoothing).

All text data used to create the *n-gram* is expected to be stored in a text file. We download our dataset and save it as a `.txt` file.
```python from datasets import load_dataset username = "hf-test" # change to your username dataset = load_dataset(f"{username}/{target_lang}_corpora_parliament_processed", split="train") with open("text.txt", "w") as file: file.write(" ".join(dataset["text"])) ``` Now, we just have to run KenLM's `lmplz` command to build our *n-gram*, called `"5gram.arpa"`. As it's relatively common in speech recognition, we build a *5-gram* by passing the `-o 5` parameter. For more information on the different *n-gram* LM that can be built with KenLM, one can take a look at the [official website of KenLM](https://kheafield.com/code/kenlm/). Executing the command below might take a minute or so. ```bash kenlm/build/bin/lmplz -o 5 <"text.txt" > "5gram.arpa" ``` **Output:** ```bash === 1/5 Counting and sorting n-grams === Reading /content/swedish_text.txt ----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100 tcmalloc: large alloc 1918697472 bytes == 0x55d40d0f0000 @ 0x7fdccb1a91e7 0x55d40b2f17a2 0x55d40b28c51e 0x55d40b26b2eb 0x55d40b257066 0x7fdcc9342bf7 0x55d40b258baa tcmalloc: large alloc 8953896960 bytes == 0x55d47f6c0000 @ 0x7fdccb1a91e7 0x55d40b2f17a2 0x55d40b2e07ca 0x55d40b2e1208 0x55d40b26b308 0x55d40b257066 0x7fdcc9342bf7 0x55d40b258baa **************************************************************************************************** Unigram tokens 42153890 types 360209 === 2/5 Calculating and sorting adjusted counts === Chain sizes: 1:4322508 2:1062772928 3:1992699264 4:3188318720 5:4649631744 tcmalloc: large alloc 4649631744 bytes == 0x55d40d0f0000 @ 0x7fdccb1a91e7 0x55d40b2f17a2 0x55d40b2e07ca 0x55d40b2e1208 0x55d40b26b8d7 0x55d40b257066 0x7fdcc9342bf7 0x55d40b258baa tcmalloc: large alloc 1992704000 bytes == 0x55d561ce0000 @ 0x7fdccb1a91e7 0x55d40b2f17a2 0x55d40b2e07ca 0x55d40b2e1208 0x55d40b26bcdd 0x55d40b257066 0x7fdcc9342bf7 0x55d40b258baa tcmalloc: large alloc 3188326400 bytes == 0x55d695a86000 @ 0x7fdccb1a91e7 0x55d40b2f17a2 0x55d40b2e07ca 0x55d40b2e1208 0x55d40b26bcdd 0x55d40b257066 0x7fdcc9342bf7 0x55d40b258baa Statistics: 1 360208 D1=0.686222 D2=1.01595 D3+=1.33685 2 5476741 D1=0.761523 D2=1.06735 D3+=1.32559 3 18177681 D1=0.839918 D2=1.12061 D3+=1.33794 4 30374983 D1=0.909146 D2=1.20496 D3+=1.37235 5 37231651 D1=0.944104 D2=1.25164 D3+=1.344 Memory estimate for binary LM: type MB probing 1884 assuming -p 1.5 probing 2195 assuming -r models -p 1.5 trie 922 without quantization trie 518 assuming -q 8 -b 8 quantization trie 806 assuming -a 22 array pointer compression trie 401 assuming -a 22 -q 8 -b 8 array pointer compression and quantization === 3/5 Calculating and sorting initial probabilities === Chain sizes: 1:4322496 2:87627856 3:363553620 4:728999592 5:1042486228 ----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100 #################################################################################################### === 4/5 Calculating and writing order-interpolated probabilities === Chain sizes: 1:4322496 2:87627856 3:363553620 4:728999592 5:1042486228 ----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100 #################################################################################################### === 5/5 Writing ARPA model === ----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100 **************************************************************************************************** Name:lmplz 
VmPeak:14181536 kB VmRSS:2199260 kB RSSMax:4160328 kB user:120.598 sys:26.6659 CPU:147.264 real:136.344 ``` Great, we have built a *5-gram* LM! Let's inspect the first couple of lines. ```bash head -20 5gram.arpa ``` **Output:** ```bash \data\ ngram 1=360208 ngram 2=5476741 ngram 3=18177681 ngram 4=30374983 ngram 5=37231651 \1-grams: -6.770219 <unk> 0 0 <s> -0.11831701 -4.6095004 återupptagande -1.2174699 -2.2361007 av -0.79668784 -4.8163533 sessionen -0.37327805 -2.2251768 jag -1.4205662 -4.181505 förklarar -0.56261665 -3.5790775 europaparlamentets -0.63611007 -4.771945 session -0.3647111 -5.8043895 återupptagen -0.3058712 -2.8580177 efter -0.7557702 -5.199537 avbrottet -0.43322718 ``` There is a small problem that 🤗 Transformers will not be happy about later on. The *5-gram* correctly includes a "Unknown" or `<unk>`, as well as a *begin-of-sentence*, `<s>` token, but no *end-of-sentence*, `</s>` token. This sadly has to be corrected currently after the build. We can simply add the *end-of-sentence* token by adding the line `0 </s> -0.11831701` below the *begin-of-sentence* token and increasing the `ngram 1` count by 1. Because the file has roughly 100 million lines, this command will take *ca.* 2 minutes. ```python with open("5gram.arpa", "r") as read_file, open("5gram_correct.arpa", "w") as write_file: has_added_eos = False for line in read_file: if not has_added_eos and "ngram 1=" in line: count=line.strip().split("=")[-1] write_file.write(line.replace(f"{count}", f"{int(count)+1}")) elif not has_added_eos and "<s>" in line: write_file.write(line) write_file.write(line.replace("<s>", "</s>")) has_added_eos = True else: write_file.write(line) ``` Let's now inspect the corrected *5-gram*. ```bash head -20 5gram_correct.arpa ``` **Output:** ```bash \data\ ngram 1=360209 ngram 2=5476741 ngram 3=18177681 ngram 4=30374983 ngram 5=37231651 \1-grams: -6.770219 <unk> 0 0 <s> -0.11831701 0 </s> -0.11831701 -4.6095004 återupptagande -1.2174699 -2.2361007 av -0.79668784 -4.8163533 sessionen -0.37327805 -2.2251768 jag -1.4205662 -4.181505 förklarar -0.56261665 -3.5790775 europaparlamentets -0.63611007 -4.771945 session -0.3647111 -5.8043895 återupptagen -0.3058712 -2.8580177 efter -0.7557702 ``` Great, this looks better! We're done at this point and all that is left to do is to correctly integrate the `"ngram"` with [`pyctcdecode`](https://github.com/kensho-technologies/pyctcdecode) and 🤗 Transformers. ## **4. Combine an *n-gram* with Wav2Vec2** In a final step, we want to wrap the *5-gram* into a `Wav2Vec2ProcessorWithLM` object to make the *5-gram* boosted decoding as seamless as shown in Section 1. We start by downloading the currently "LM-less" processor of [`xls-r-300m-sv`](https://huggingface.co/hf-test/xls-r-300m-sv). ```python from transformers import AutoProcessor processor = AutoProcessor.from_pretrained("hf-test/xls-r-300m-sv") ``` Next, we extract the vocabulary of its tokenizer as it represents the `"labels"` of `pyctcdecode`'s `BeamSearchDecoder` class. ```python vocab_dict = processor.tokenizer.get_vocab() sorted_vocab_dict = {k.lower(): v for k, v in sorted(vocab_dict.items(), key=lambda item: item[1])} ``` The `"labels"` and the previously built `5gram_correct.arpa` file is all that's needed to build the decoder. ```python from pyctcdecode import build_ctcdecoder decoder = build_ctcdecoder( labels=list(sorted_vocab_dict.keys()), kenlm_model_path="5gram_correct.arpa", ) ``` **Output:** ```bash Found entries of length > 1 in alphabet. 
This is unusual unless style is BPE, but the alphabet was not recognized as BPE type. Is this correct? Unigrams and labels don't seem to agree. ``` We can safely ignore the warning and all that is left to do now is to wrap the just created `decoder`, together with the processor's `tokenizer` and `feature_extractor` into a `Wav2Vec2ProcessorWithLM` class. ```python from transformers import Wav2Vec2ProcessorWithLM processor_with_lm = Wav2Vec2ProcessorWithLM( feature_extractor=processor.feature_extractor, tokenizer=processor.tokenizer, decoder=decoder ) ``` We want to directly upload the LM-boosted processor into the model folder of [`xls-r-300m-sv`](https://huggingface.co/hf-test/xls-r-300m-sv) to have all relevant files in one place. Let's clone the repo, add the new decoder files and upload them afterward. First, we need to install `git-lfs`. ```bash sudo apt-get install git-lfs tree ``` Cloning and uploading of modeling files can be done conveniently with the `huggingface_hub`'s `Repository` class. More information on how to use the `huggingface_hub` to upload any files, please take a look at the [official docs](https://huggingface.co/docs/huggingface_hub/how-to-upstream). ```python from huggingface_hub import Repository repo = Repository(local_dir="xls-r-300m-sv", clone_from="hf-test/xls-r-300m-sv") ``` **Output:** ```bash Cloning https://huggingface.co/hf-test/xls-r-300m-sv into local empty directory. ``` Having cloned `xls-r-300m-sv`, let's save the new processor with LM into it. ```python processor_with_lm.save_pretrained("xls-r-300m-sv") ``` Let's inspect the local repository. The `tree` command conveniently can also show the size of the different files. ```bash tree -h xls-r-300m-sv/ ``` **Output:** ```bash xls-r-300m-sv/ ├── [ 23] added_tokens.json ├── [ 401] all_results.json ├── [ 253] alphabet.json ├── [2.0K] config.json ├── [ 304] emissions.csv ├── [ 226] eval_results.json ├── [4.0K] language_model │   ├── [4.1G] 5gram_correct.arpa │   ├── [ 78] attrs.json │   └── [4.9M] unigrams.txt ├── [ 240] preprocessor_config.json ├── [1.2G] pytorch_model.bin ├── [3.5K] README.md ├── [4.0K] runs │   └── [4.0K] Jan09_22-00-50_brutasse │   ├── [4.0K] 1641765760.8871996 │   │   └── [4.6K] events.out.tfevents.1641765760.brutasse.31164.1 │   ├── [ 42K] events.out.tfevents.1641765760.brutasse.31164.0 │   └── [ 364] events.out.tfevents.1641794162.brutasse.31164.2 ├── [1.2K] run.sh ├── [ 30K] run_speech_recognition_ctc.py ├── [ 502] special_tokens_map.json ├── [ 279] tokenizer_config.json ├── [ 29K] trainer_state.json ├── [2.9K] training_args.bin ├── [ 196] train_results.json ├── [ 319] vocab.json └── [4.0K] wandb ├── [ 52] debug-internal.log -> run-20220109_220240-1g372i3v/logs/debug-internal.log ├── [ 43] debug.log -> run-20220109_220240-1g372i3v/logs/debug.log ├── [ 28] latest-run -> run-20220109_220240-1g372i3v └── [4.0K] run-20220109_220240-1g372i3v ├── [4.0K] files │   ├── [8.8K] conda-environment.yaml │   ├── [140K] config.yaml │   ├── [4.7M] output.log │   ├── [5.4K] requirements.txt │   ├── [2.1K] wandb-metadata.json │   └── [653K] wandb-summary.json ├── [4.0K] logs │   ├── [3.4M] debug-internal.log │   └── [8.2K] debug.log └── [113M] run-1g372i3v.wandb 9 directories, 34 files ``` As can be seen the *5-gram* LM is quite large - it amounts to more than 4 GB. To reduce the size of the *n-gram* and make loading faster, `kenLM` allows converting `.arpa` files to binary ones using the `build_binary` executable. Let's make use of it here. 
```bash kenlm/build/bin/build_binary xls-r-300m-sv/language_model/5gram_correct.arpa xls-r-300m-sv/language_model/5gram.bin ``` **Output:** ```bash Reading xls-r-300m-sv/language_model/5gram_correct.arpa ----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100 **************************************************************************************************** SUCCESS ``` Great, it worked! Let's remove the `.arpa` file and check the size of the binary *5-gram* LM. ```bash rm xls-r-300m-sv/language_model/5gram_correct.arpa && tree -h xls-r-300m-sv/ ``` **Output:** ```bash xls-r-300m-sv/ ├── [ 23] added_tokens.json ├── [ 401] all_results.json ├── [ 253] alphabet.json ├── [2.0K] config.json ├── [ 304] emissions.csv ├── [ 226] eval_results.json ├── [4.0K] language_model │   ├── [1.8G] 5gram.bin │   ├── [ 78] attrs.json │   └── [4.9M] unigrams.txt ├── [ 240] preprocessor_config.json ├── [1.2G] pytorch_model.bin ├── [3.5K] README.md ├── [4.0K] runs │   └── [4.0K] Jan09_22-00-50_brutasse │   ├── [4.0K] 1641765760.8871996 │   │   └── [4.6K] events.out.tfevents.1641765760.brutasse.31164.1 │   ├── [ 42K] events.out.tfevents.1641765760.brutasse.31164.0 │   └── [ 364] events.out.tfevents.1641794162.brutasse.31164.2 ├── [1.2K] run.sh ├── [ 30K] run_speech_recognition_ctc.py ├── [ 502] special_tokens_map.json ├── [ 279] tokenizer_config.json ├── [ 29K] trainer_state.json ├── [2.9K] training_args.bin ├── [ 196] train_results.json ├── [ 319] vocab.json └── [4.0K] wandb ├── [ 52] debug-internal.log -> run-20220109_220240-1g372i3v/logs/debug-internal.log ├── [ 43] debug.log -> run-20220109_220240-1g372i3v/logs/debug.log ├── [ 28] latest-run -> run-20220109_220240-1g372i3v └── [4.0K] run-20220109_220240-1g372i3v ├── [4.0K] files │   ├── [8.8K] conda-environment.yaml │   ├── [140K] config.yaml │   ├── [4.7M] output.log │   ├── [5.4K] requirements.txt │   ├── [2.1K] wandb-metadata.json │   └── [653K] wandb-summary.json ├── [4.0K] logs │   ├── [3.4M] debug-internal.log │   └── [8.2K] debug.log └── [113M] run-1g372i3v.wandb 9 directories, 34 files ``` Nice, we reduced the *n-gram* by more than half to less than 2GB now. In the final step, let's upload all files. ```python repo.push_to_hub(commit_message="Upload lm-boosted decoder") ``` **Output:** ```bash Git LFS: (1 of 1 files) 1.85 GB / 1.85 GB Counting objects: 9, done. Delta compression using up to 2 threads. Compressing objects: 100% (9/9), done. Writing objects: 100% (9/9), 1.23 MiB | 1.92 MiB/s, done. Total 9 (delta 3), reused 0 (delta 0) To https://huggingface.co/hf-test/xls-r-300m-sv 27d0c57..5a191e2 main -> main ``` That's it. Now you should be able to use the *5gram* for LM-boosted decoding as shown in Section 1. As can be seen on [`xls-r-300m-sv`'s model card](https://huggingface.co/hf-test/xls-r-300m-sv#inference-with-lm) our *5gram* LM-boosted decoder yields a WER of 18.85% on Common Voice's 7 test set which is a relative performance of *ca.* 30% 🔥.
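As a closing illustration - a minimal sketch rather than part of the original walkthrough - decoding with the uploaded LM-boosted processor looks just like Section 1. The `audio_array` below is a placeholder waveform (silence), which you would replace with a real 16 kHz Swedish speech sample, e.g. one entry of Common Voice 7's Swedish test split.

```python
import numpy as np
import torch
from transformers import Wav2Vec2ProcessorWithLM, Wav2Vec2ForCTC

# load the LM-boosted processor and the acoustic model from the repo we just pushed to
processor = Wav2Vec2ProcessorWithLM.from_pretrained("hf-test/xls-r-300m-sv")
model = Wav2Vec2ForCTC.from_pretrained("hf-test/xls-r-300m-sv")

# placeholder waveform: replace with a real 16 kHz Swedish speech sample
audio_array = np.zeros(16_000, dtype=np.float32)

inputs = processor(audio_array, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# as in Section 1, the LM-boosted processor decodes the raw logits directly
transcription = processor.batch_decode(logits.numpy()).text
print(transcription[0].lower())
```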
Case Study: Millisecond Latency using Hugging Face Infinity and modern CPUs
philschmid
January 13, 2022
infinity-cpu-performance
analysis
https://huggingface.co/blog/infinity-cpu-performance
# Case Study: Millisecond Latency using Hugging Face Infinity and modern CPUs <script async defer src="https://unpkg.com/medium-zoom-element@0/dist/medium-zoom-element.min.js"></script> <br> <div style="background-color: #e6f9e6; padding: 16px 32px; outline: 2px solid; border-radius: 10px;"> December 2022 Update: Infinity is no longer offered by Hugging Face as a commercial inference solution. To deploy and accelerate your models, we recommend the following new solutions: * [Inference Endpoints](https://huggingface.co/docs/inference-endpoints/index) to easily deploy models on dedicated infrastructure managed by Hugging Face. * Our open-source optimization libraries, [🤗 Optimum Intel](https://huggingface.co/blog/openvino) and [🤗 Optimum ONNX Runtime](https://huggingface.co/docs/optimum/main/en/onnxruntime/overview), to get the highest efficiency out of training and running models for inference. * Hugging Face [Expert Acceleration Program](https://huggingface.co/support), a commercial service for Hugging Face experts to work directly with your team to accelerate your Machine Learning roadmap and models. </div> ## Introduction Transfer learning has changed Machine Learning by reaching new levels of accuracy from Natural Language Processing (NLP) to Audio and Computer Vision tasks. At Hugging Face, we work hard to make these new complex models and large checkpoints as easily accessible and usable as possible. But while researchers and data scientists have converted to the new world of Transformers, few companies have been able to deploy these large, complex models in production at scale. The main bottleneck is the latency of predictions which can make large deployments expensive to run and real-time use cases impractical. Solving this is a difficult engineering challenge for any Machine Learning Engineering team and requires the use of advanced techniques to optimize models all the way down to the hardware. With [Hugging Face Infinity](https://huggingface.co/infinity), we offer a containerized solution that makes it easy to deploy low-latency, high-throughput, hardware-accelerated inference pipelines for the most popular Transformer models. Companies can get both the accuracy of Transformers and the efficiency necessary for large volume deployments, all in a simple to use package. In this blog post, we want to share detailed performance results for Infinity running on the latest generation of Intel Xeon CPU, to achieve optimal cost, efficiency, and latency for your Transformer deployments. ## What is Hugging Face Infinity Hugging Face Infinity is a containerized solution for customers to deploy end-to-end optimized inference pipelines for State-of-the-Art Transformer models, on any infrastructure. Hugging Face Infinity consists of 2 main services: * The Infinity Container is a hardware-optimized inference solution delivered as a Docker container. * Infinity Multiverse is a Model Optimization Service through which a Hugging Face Transformer model is optimized for the Target Hardware. Infinity Multiverse is compatible with Infinity Container. The Infinity Container is built specifically to run on a Target Hardware architecture and exposes an HTTP /predict endpoint to run inference. <br> <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="Product overview" src="assets/46_infinity_cpu_performance/overview.png"></medium-zoom> <figcaption>Figure 1. 
Infinity Overview</figcaption> </figure> <br> An Infinity Container is designed to serve 1 Model and 1 Task. A Task corresponds to machine learning tasks as defined in the [Transformers Pipelines documentation](https://huggingface.co/docs/transformers/master/en/main_classes/pipelines). As of the writing of this blog post, supported tasks include feature extraction/document embedding, ranking, sequence classification, and token classification. You can find more information about Hugging Face Infinity at [hf.co/infinity](https://huggingface.co/infinity), and if you are interested in testing it for yourself, you can sign up for a free trial at [hf.co/infinity-trial](https://huggingface.co/infinity-trial). --- ## Benchmark Inference performance benchmarks often only measure the execution of the model. In this blog post, and when discussing the performance of Infinity, we always measure the end-to-end pipeline including pre-processing, prediction, post-processing. Please keep this in mind when comparing these results with other latency measurements. <br> <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="Pipeline" src="assets/46_infinity_cpu_performance/pipeline.png"></medium-zoom> <figcaption>Figure 2. Infinity End-to-End Pipeline</figcaption> </figure> <br> ### Environment As a benchmark environment, we are going to use the [Amazon EC2 C6i instances](https://aws.amazon.com/ec2/instance-types/c6i), which are compute-optimized instances powered by the 3rd generation of Intel Xeon Scalable processors. These new Intel-based instances are using the ice-lake Process Technology and support Intel AVX-512, Intel Turbo Boost, and Intel Deep Learning Boost. In addition to superior performance for machine learning workloads, the Intel Ice Lake C6i instances offer great cost-performance and are our recommendation to deploy Infinity on Amazon Web Services. To learn more, visit the [EC2 C6i instance](https://aws.amazon.com/ec2/instance-types/c6i) page. ### Methodologies When it comes to benchmarking BERT-like models, two metrics are most adopted: * **Latency**: Time it takes for a single prediction of the model (pre-process, prediction, post-process) * **Throughput**: Number of executions performed in a fixed amount of time for one benchmark configuration, respecting Physical CPU cores, Sequence Length, and Batch Size These two metrics will be used to benchmark Hugging Face Infinity across different setups to understand the benefits and tradeoffs in this blog post. --- ## Results To run the benchmark, we created an infinity container for the [EC2 C6i instance](https://aws.amazon.com/ec2/instance-types/c6i) (Ice-lake) and optimized a [DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert) model for sequence classification using Infinity Multiverse. This ice-lake optimized Infinity Container can achieve up to 34% better latency & throughput compared to existing cascade-lake-based instances, and up to 800% better latency & throughput compared to vanilla transformers running on ice-lake. The Benchmark we created consists of 192 different experiments and configurations. 
We ran experiments for:

* Physical CPU cores: 1, 2, 4, 8
* Sequence length: 8, 16, 32, 64, 128, 256, 384, 512
* Batch size: 1, 2, 4, 8, 16, 32

In each experiment, we collect numbers for:

* Throughput (requests per second)
* Latency (min, max, avg, p90, p95, p99)

You can find the full data of the benchmark in this Google spreadsheet: [🤗 Infinity: CPU Ice-Lake Benchmark](https://docs.google.com/spreadsheets/d/1GWFb7L967vZtAS1yHhyTOZK1y-ZhdWUFqovv7-73Plg/edit?usp=sharing).

In this blog post, we will highlight a few results of the benchmark including the best latency and throughput configurations.

In addition to this, we deployed the [DistilBERT](https://huggingface.co/bhadresh-savani/distilbert-base-uncased-emotion) model we used for the benchmark as an API endpoint on 2 physical cores. You can test it and get a feeling for the performance of Infinity. Below you will find a `curl` command showing how to send a request to the hosted endpoint. The API returns an `x-compute-time` HTTP header, which contains the duration of the end-to-end pipeline.

```bash
curl --request POST -i \
  --url https://infinity.huggingface.co/cpu/distilbert-base-uncased-emotion \
  --header 'Content-Type: application/json' \
  --data '{"inputs":"I like you. I love you"}'
```

### Throughput

Below you can find the throughput comparison for running Infinity on 2 physical cores with batch size 1, compared with vanilla transformers.

<br>
<figure class="image table text-center m-0 w-full">
  <medium-zoom background="rgba(0,0,0,.7)" alt="Throughput" src="assets/46_infinity_cpu_performance/throughput.png"></medium-zoom>
  <figcaption>Figure 3. Throughput: Infinity vs Transformers</figcaption>
</figure>
<br>

| Sequence Length | Infinity    | Transformers | improvement |
|-----------------|-------------|--------------|-------------|
| 8               | 248 req/sec | 49 req/sec   | +506%       |
| 16              | 212 req/sec | 50 req/sec   | +424%       |
| 32              | 150 req/sec | 40 req/sec   | +375%       |
| 64              | 97 req/sec  | 28 req/sec   | +346%       |
| 128             | 55 req/sec  | 18 req/sec   | +305%       |
| 256             | 27 req/sec  | 9 req/sec    | +300%       |
| 384             | 17 req/sec  | 5 req/sec    | +340%       |
| 512             | 12 req/sec  | 4 req/sec    | +300%       |

### Latency

Below, you can find the latency results for an experiment running Hugging Face Infinity on 2 physical cores with batch size 1. It is remarkable to see how robust and consistent Infinity is, with minimal deviation for p95, p99, or p100 (max latency). This result is confirmed by the other experiments in the benchmark as well.

<br>
<figure class="image table text-center m-0 w-full">
  <medium-zoom background="rgba(0,0,0,.7)" alt="Latency" src="assets/46_infinity_cpu_performance/latency.png"></medium-zoom>
  <figcaption>Figure 4. Latency (Batch=1, Physical Cores=2)</figcaption>
</figure>
<br>

---

## Conclusion

In this post, we showed how Hugging Face Infinity performs on the new Intel Ice Lake Xeon CPU. We created a detailed benchmark with over 190 different configurations, sharing the results you can expect when using Hugging Face Infinity on CPU, the best configuration to optimize your Infinity Container for latency, and the best configuration to maximize throughput.

Hugging Face Infinity can deliver up to 800% higher throughput compared to vanilla transformers, and down to 1-4ms latency for sequence lengths up to 64 tokens.

The flexibility to optimize transformer models for throughput, latency, or both enables businesses to either reduce infrastructure costs for the same workload or to enable real-time use cases that were not possible before.
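If you would rather test the hosted demo endpoint from Python than from curl, the sketch below is one way to do it (assuming the demo endpoint shown above is still reachable); it reads back the `x-compute-time` header that reports the end-to-end pipeline duration.

```python
import requests

# quick check against the demo endpoint shown earlier in this post
url = "https://infinity.huggingface.co/cpu/distilbert-base-uncased-emotion"
response = requests.post(url, json={"inputs": "I like you. I love you"})

print(response.json())
# end-to-end pipeline duration reported by the Infinity container
print("x-compute-time:", response.headers.get("x-compute-time"))
```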
If you are interested in trying out Hugging Face Infinity sign up for your trial at [hf.co/infinity-trial](https://hf.co/infinity-trial) ## Resources * [Hugging Face Infinity](https://huggingface.co/infinity) * [Hugging Face Infinity Trial](https://huggingface.co/infinity-trial) * [Amazon EC2 C6i instances](https://aws.amazon.com/ec2/instance-types/c6i) * [DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert) * [DistilBERT paper](https://arxiv.org/abs/1910.01108) * [DistilBERT model](https://huggingface.co/bhadresh-savani/distilbert-base-uncased-emotion) * [🤗 Infinity: CPU Ice-Lake Benchmark](https://docs.google.com/spreadsheets/d/1GWFb7L967vZtAS1yHhyTOZK1y-ZhdWUFqovv7-73Plg/edit?usp=sharing)
Welcome Stable-baselines3 to the Hugging Face Hub 🤗
ThomasSimonini
January 21, 2022
sb3
open-source-collab, rl
https://huggingface.co/blog/sb3
# Welcome Stable-baselines3 to the Hugging Face Hub 🤗

At Hugging Face, we are contributing to the ecosystem for Deep Reinforcement Learning researchers and enthusiasts. That’s why we’re happy to announce that we integrated [Stable-Baselines3](https://github.com/DLR-RM/stable-baselines3) into the Hugging Face Hub.

[Stable-Baselines3](https://github.com/DLR-RM/stable-baselines3) is one of the most popular PyTorch Deep Reinforcement Learning libraries, and it makes it easy to train and test your agents in a variety of environments (Gym, Atari, MuJoCo, Procgen...).
With this integration, you can now host your saved models 💾 and load powerful models from the community.

In this article, we’re going to show how you can do it.

### Installation

To use stable-baselines3 with the Hugging Face Hub, you just need to install these 2 libraries:

```bash
pip install huggingface_hub
pip install huggingface_sb3
```

### Finding Models

We’re currently uploading saved models of agents playing Space Invaders, Breakout, LunarLander and more. On top of this, you can find [all stable-baselines3 models from the community here](https://huggingface.co/models?other=stable-baselines3).

When you have found the model you need, you just have to copy the repository id:

![Image showing how to copy a repository id](assets/47_sb3/repo_id.jpg)

### Download a model from the Hub

The coolest feature of this integration is that you can now very easily load a saved model from the Hub into Stable-Baselines3.

In order to do that, you just need the repo id that contains your saved model and the name of the saved model zip file in the repo. For instance, for `sb3/demo-hf-CartPole-v1`:

```python
import gym

from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Retrieve the model from the hub
## repo_id = id of the model repository from the Hugging Face Hub (repo_id = {organization}/{repo_name})
## filename = name of the model zip file from the repository including the extension .zip
checkpoint = load_from_hub(
    repo_id="sb3/demo-hf-CartPole-v1",
    filename="ppo-CartPole-v1.zip",
)
model = PPO.load(checkpoint)

# Evaluate the agent and watch it
eval_env = gym.make("CartPole-v1")
mean_reward, std_reward = evaluate_policy(
    model, eval_env, render=True, n_eval_episodes=5, deterministic=True, warn=False
)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward}")
```

### Sharing a model to the Hub

In just a minute, you can get your saved model in the Hub.
First, you need to be logged in to Hugging Face to upload a model:

- If you're using Colab/Jupyter Notebooks:

```python
from huggingface_hub import notebook_login
notebook_login()
```

- Else:

```bash
huggingface-cli login
```

Then, in this example, we train a PPO agent to play CartPole-v1 and push it to a new repo `ThomasSimonini/demo-hf-CartPole-v1`:

```python
from huggingface_sb3 import push_to_hub
from stable_baselines3 import PPO

# Define a PPO model with MLP policy network
model = PPO("MlpPolicy", "CartPole-v1", verbose=1)

# Train it for 10000 timesteps
model.learn(total_timesteps=10_000)

# Save the model
model.save("ppo-CartPole-v1")

# Push this saved model to the hf repo
# If this repo does not exist it will be created
## repo_id = id of the model repository from the Hugging Face Hub (repo_id = {organization}/{repo_name})
## filename: the name of the file == "name" inside model.save("ppo-CartPole-v1")
push_to_hub(
    repo_id="ThomasSimonini/demo-hf-CartPole-v1",
    filename="ppo-CartPole-v1.zip",
    commit_message="Added Cartpole-v1 model trained with PPO",
)
```

Try it out and share your models with the community!

### What's next?

In the coming weeks and months, we will be extending the ecosystem by:

- Integrating [RL-baselines3-zoo](https://github.com/DLR-RM/rl-baselines3-zoo)
- Uploading [RL-trained-agents models](https://github.com/DLR-RM/rl-trained-agents/tree/master) into the Hub: a big collection of pre-trained Reinforcement Learning agents using stable-baselines3
- Integrating other Deep Reinforcement Learning libraries
- Implementing Decision Transformers 🔥
- And more to come 🥳

The best way to keep in touch is to [join our discord server](https://discord.gg/YRAq8fMnUG) to exchange with us and with the community.

And if you want to dive deeper, we wrote a tutorial where you’ll learn:
- How to train a Deep Reinforcement Learning lander agent to land correctly on the Moon 🌕
- How to upload it to the Hub 🚀

![gif](assets/47_sb3/lunarlander.gif)

- How to download and use a saved model from the Hub that plays Space Invaders 👾.

![gif](assets/47_sb3/spaceinvaders.gif)

👉 [The tutorial](https://github.com/huggingface/huggingface_sb3/blob/main/Stable_Baselines_3_and_Hugging_Face_%F0%9F%A4%97_tutorial.ipynb)

### Conclusion

We're excited to see what you're working on with Stable-Baselines3 and try your models in the Hub 😍.

And we would love to hear your feedback 💖. 📧 Feel free to [reach us](mailto:[email protected]).

Finally, we would like to thank the SB3 team and in particular [Antonin Raffin](https://araffin.github.io/) for their invaluable help with the integration of the library 🤗.

### Would you like to integrate your library to the Hub?

This integration is possible thanks to the [`huggingface_hub`](https://github.com/huggingface/huggingface_hub) library which has all our widgets and the API for all our supported libraries.

If you would like to integrate your library to the Hub, we have a [guide](https://huggingface.co/docs/hub/models-adding-libraries) for you!
Supercharged Searching on the Hugging Face Hub
muellerzr
January 25, 2022
searching-the-hub
guide
https://huggingface.co/blog/searching-the-hub
# Supercharged Searching on the Hugging Face Hub

<a target="_blank" href="https://colab.research.google.com/github/muellerzr/hf-blog-notebooks/blob/main/Searching-the-Hub.ipynb">
    <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>

The `huggingface_hub` library is a lightweight interface that provides a programmatic approach to exploring the hosting endpoints Hugging Face provides: models, datasets, and Spaces.

Up until now, searching on the Hub through this interface was tricky to pull off, and there were many aspects of it a user had to "just know" and get accustomed to.

In this article, we will be looking at a few exciting new features added to `huggingface_hub` to help lower that bar and provide users with a friendly API to search for the models and datasets they want to use without leaving their Jupyter or Python interfaces.

> Before we begin, if you do not have the latest version of the `huggingface_hub` library on your system, please run the following cell:

```python
!pip install huggingface_hub -U
```

## Situating the Problem:

First, let's imagine the scenario you are in. You'd like to find all models hosted on the Hugging Face Hub for Text Classification, were trained on the GLUE dataset, and are compatible with PyTorch.

You could simply open https://huggingface.co/models and use the widgets there. But this requires leaving your IDE and scanning those results, all of which requires a few button clicks to get you the information you need.

What if there were a solution to this without having to leave your IDE? With a programmatic interface, it would also be easy to see this being integrated into workflows for exploring the Hub.

This is where the `huggingface_hub` comes in.

For those familiar with the library, you may already know that we can search for these types of models. However, getting the query right is a painful process of trial and error.

Could we simplify that? Let's find out!

## Finding what we need

First, we'll import the `HfApi`, which is a class that helps us interact with the backend hosting for Hugging Face. We can interact with the models, datasets, and more through it. Along with this, we'll import a few helper classes: the `ModelFilter` and `ModelSearchArguments`

```python
from huggingface_hub import HfApi, ModelFilter, ModelSearchArguments

api = HfApi()
```

These two classes can help us frame a solution to our above problem. The `ModelSearchArguments` class is a namespace-like one that contains every single valid parameter we can search for!

Let's take a peek:

```python
>>> model_args = ModelSearchArguments()

>>> model_args
```

Available Attributes or Keys:
 * author
 * dataset
 * language
 * library
 * license
 * model_name
 * pipeline_tag

We can see a variety of attributes available to us (more on how this magic is done later). If we were to categorize what we wanted, we could likely separate them out as:

- `pipeline_tag` (or task): Text Classification
- `dataset`: GLUE
- `library`: PyTorch

Given this separation, it would make sense that we would find them within our `model_args` we've declared:

```python
>>> model_args.pipeline_tag.TextClassification
```

'text-classification'

```python
>>> model_args.dataset.glue
```

'dataset:glue'

```python
>>> model_args.library.PyTorch
```

'pytorch'

What we begin to notice, though, is some of the convenience wrapping we perform here.
`ModelSearchArguments` (and the complimentary `DatasetSearchArguments`) have a human-readable interface with formatted outputs the API wants, such as how the GLUE dataset should be searched with `dataset:glue`. This is key because without this "cheat sheet" of knowing how certain parameters should be written, you can very easily sit in frustration as you're trying to search for models with the API! Now that we know what the right parameters are, we can search the API easily: ```python >>> models = api.list_models(filter = ( >>> model_args.pipeline_tag.TextClassification, >>> model_args.dataset.glue, >>> model_args.library.PyTorch) >>> ) >>> print(len(models)) ``` ``` 140 ``` We find that there were **140** matching models that fit our criteria! (at the time of writing this). And if we take a closer look at one, we can see that it does indeed look right: ```python >>> models[0] ``` ``` ModelInfo: { modelId: Jiva/xlm-roberta-large-it-mnli sha: c6e64469ec4aa17fedbd1b2522256f90a90b5b86 lastModified: 2021-12-10T14:56:38.000Z tags: ['pytorch', 'xlm-roberta', 'text-classification', 'it', 'dataset:multi_nli', 'dataset:glue', 'arxiv:1911.02116', 'transformers', 'tensorflow', 'license:mit', 'zero-shot-classification'] pipeline_tag: zero-shot-classification siblings: [ModelFile(rfilename='.gitattributes'), ModelFile(rfilename='README.md'), ModelFile(rfilename='config.json'), ModelFile(rfilename='pytorch_model.bin'), ModelFile(rfilename='sentencepiece.bpe.model'), ModelFile(rfilename='special_tokens_map.json'), ModelFile(rfilename='tokenizer.json'), ModelFile(rfilename='tokenizer_config.json')] config: None private: False downloads: 680 library_name: transformers likes: 1 } ``` It's a bit more readable, and there's no guessing involved with "Did I get this parameter right?" > Did you know you can also get the information of this model programmatically with its model ID? Here's how you would do it: > ```python > api.model_info('Jiva/xlm-roberta-large-it-mnli') > ``` ## Taking it up a Notch We saw how we could use the `ModelSearchArguments` and `DatasetSearchArguments` to remove the guesswork from when we want to search the Hub, but what about if we have a very complex, messy query? Such as: I want to search for all models trained for both `text-classification` and `zero-shot` classification, were trained on the Multi NLI and GLUE datasets, and are compatible with both PyTorch and TensorFlow (a more exact query to get the above model). To setup this query, we'll make use of the `ModelFilter` class. 
It's designed to handle these types of situations, so we don't need to scratch our heads: ```python >>> filt = ModelFilter( >>> task = ["text-classification", "zero-shot-classification"], >>> trained_dataset = [model_args.dataset.multi_nli, model_args.dataset.glue], >>> library = ['pytorch', 'tensorflow'] >>> ) >>> api.list_models(filt) ``` ``` [ModelInfo: { modelId: Jiva/xlm-roberta-large-it-mnli sha: c6e64469ec4aa17fedbd1b2522256f90a90b5b86 lastModified: 2021-12-10T14:56:38.000Z tags: ['pytorch', 'xlm-roberta', 'text-classification', 'it', 'dataset:multi_nli', 'dataset:glue', 'arxiv:1911.02116', 'transformers', 'tensorflow', 'license:mit', 'zero-shot-classification'] pipeline_tag: zero-shot-classification siblings: [ModelFile(rfilename='.gitattributes'), ModelFile(rfilename='README.md'), ModelFile(rfilename='config.json'), ModelFile(rfilename='pytorch_model.bin'), ModelFile(rfilename='sentencepiece.bpe.model'), ModelFile(rfilename='special_tokens_map.json'), ModelFile(rfilename='tokenizer.json'), ModelFile(rfilename='tokenizer_config.json')] config: None private: False downloads: 680 library_name: transformers likes: 1 }] ``` Very quickly we see that it's a much more coordinated approach for searching through the API, with no added headache for you! ## What is the magic? Very briefly we'll talk about the underlying magic at play that gives us this enum-dictionary-like datatype, the `AttributeDictionary`. Heavily inspired by the `AttrDict` class from the [fastcore](https://fastcore.fast.ai/basics.html#AttrDict) library, the general idea is we take a normal dictionary and supercharge it for *exploratory programming* by providing tab-completion for every key in the dictionary. As we saw earlier, this gets even stronger when we have nested dictionaries we can explore through, such as `model_args.dataset.glue`! > For those familiar with JavaScript, we mimic how the `object` class is working. This simple utility class can provide a much more user-focused experience when exploring nested datatypes and trying to understand what is there, such as the return of an API request! As mentioned before, we expand on the `AttrDict` in a few key ways: - You can delete keys with `del model_args[key]` *or* with `del model_args.key` - That clean `__repr__` we saw earlier One very important concept to note though, is that if a key contains a number or special character it **must** be indexed as a dictionary, and *not* as an object. ```python >>> from huggingface_hub.utils.endpoint_helpers import AttributeDictionary ``` A very brief example of this is if we have an `AttributeDictionary` with a key of `3_c`: ```python >>> d = {"a":2, "b":3, "3_c":4} >>> ad = AttributeDictionary(d) ``` ```python >>> # As an attribute >>> ad.3_c ``` File "<ipython-input-6-c0fe109cf75d>", line 2 ad.3_c ^ SyntaxError: invalid token ```python >>> # As a dictionary key >>> ad["3_c"] ``` 4 ## Concluding thoughts Hopefully by now you have a brief understanding of how this new searching API can directly impact your workflow and exploration of the Hub! Along with this, perhaps you know of a place in your code where the `AttributeDictionary` might be useful for you to use. From here, make sure to check out the official documentation on [Searching the Hub Efficiently](https://huggingface.co/docs/huggingface_hub/searching-the-hub) and don't forget to give us a [star](https://github.com/huggingface/huggingface_hub)!
Making automatic speech recognition work on large files with Wav2Vec2 in 🤗 Transformers
Narsil
February 1, 2022
asr-chunking
guide, research, audio
https://huggingface.co/blog/asr-chunking
# Making automatic speech recognition work on large files with Wav2Vec2 in 🤗 Transformers

```
Tl;dr: This post explains how to use the specificities of the Connectionist Temporal Classification (CTC) architecture in order to achieve very good quality automatic speech recognition (ASR) even on arbitrarily long files or during live inference.
```

**Wav2Vec2** is a popular pre-trained model for speech recognition. Released in [September 2020](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) by Meta AI Research, the novel architecture catalyzed progress in self-supervised pretraining for speech recognition, *e.g.* [*G. Ng et al.*, 2021](https://arxiv.org/pdf/2104.03416.pdf), [*Chen et al*, 2021](https://arxiv.org/abs/2110.13900), [*Hsu et al.*, 2021](https://arxiv.org/abs/2106.07447) and [*Babu et al.*, 2021](https://arxiv.org/abs/2111.09296). On the Hugging Face Hub, Wav2Vec2's most popular pre-trained checkpoint currently amounts to over [**250,000** monthly downloads](https://huggingface.co/facebook/wav2vec2-base-960h).

**Wav2Vec2** is at its core a **transformer** model, and one caveat of transformers is that they can usually only handle a finite sequence length. Either the model uses **position encodings** (not the case here), or the cost of attention is O(n²) in sequence length, so very long sequences explode in complexity and memory. With finite hardware (even a very large GPU like an A100), you cannot simply run Wav2Vec2 on an hour-long file: your program will crash. Let's try it!

```bash
pip install transformers
```

```python
from transformers import pipeline

# This will work on any of the thousands of models at
# https://huggingface.co/models?pipeline_tag=automatic-speech-recognition
pipe = pipeline(model="facebook/wav2vec2-base-960h")

# The Public Domain LibriVox file used for the test
#!wget https://ia902600.us.archive.org/8/items/thecantervilleghostversion_2_1501_librivox/thecantervilleghostversion2_01_wilde_128kb.mp3 -O very_long_file.mp3

pipe("very_long_file.mp3")
# Crashes out of memory!

pipe("very_long_file.mp3", chunk_length_s=10)
# This works and prints a very long string!
# This whole blogpost will explain how to make things work
```

Simple Chunking
---------------

The simplest way to achieve inference on very long files would be to simply chunk the initial audio into shorter samples, let's say 10 seconds each, run inference on those, and end up with a final reconstruction. This is computationally efficient but usually leads to subpar results: in order to do good inference, the model needs some context, so around the chunking border, inference tends to be of poor quality.

Look at the following diagram:

![Simple chunking](./assets/49_asr_chunking/chunk.png)

There are ways to try and work around the problem in a general fashion, but they are never entirely robust. You can try to chunk only when you encounter silence, but you may have non-silent audio for a long time (a song, or noisy café audio). You can also try to cut only when there's no voice, but that requires another model and is not an entirely solved problem. You could also have a continuous voice for a very long time.

As it turns out, the CTC structure used by Wav2Vec2 can be exploited in order to achieve very robust speech recognition even on very long files without falling into those pitfalls.
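To make the naive approach concrete before moving on, here is a minimal sketch of simple chunking with the same pipeline. It assumes the audio is already a 1-D NumPy array resampled to 16 kHz; the silent placeholder array below only stands in for a real recording, and the dict form is just one of the ways the ASR pipeline accepts raw audio.

```python
import numpy as np
from transformers import pipeline

pipe = pipeline(model="facebook/wav2vec2-base-960h")

sampling_rate = 16_000
# Stand-in for a real recording: one minute of silence.
# Replace it with audio loaded via librosa/soundfile and resampled to 16 kHz.
audio = np.zeros(60 * sampling_rate, dtype=np.float32)

chunk_len = 10 * sampling_rate  # 10-second chunks, no overlap
texts = []
for start in range(0, len(audio), chunk_len):
    chunk = audio[start : start + chunk_len]
    out = pipe({"raw": chunk, "sampling_rate": sampling_rate})
    texts.append(out["text"])

# Words near each chunk border tend to be wrong, which is exactly the
# problem the striding approach described next is meant to fix.
print(" ".join(texts))
```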
Chunking with stride
--------------------

Wav2Vec2 uses the [CTC algorithm](https://distill.pub/2017/ctc/), which means that every frame of audio is mapped to a single letter prediction (logit).

![CTC](./assets/49_asr_chunking/CTC.png)

That's the main feature we're going to use in order to add a `stride`. This [link](https://www.quora.com/What-does-stride-mean-in-the-context-of-convolutional-neural-networks) explains it in the image context, but it's the same concept for audio.

Because of this property, we can:

- Start doing inference on **overlapping** chunks so that the model actually has proper context in the center.
- **Drop** the inferred logits on the sides.
- Chain the **logits** without their dropped sides to recover something extremely close to what the model would have predicted on the full-length audio.

![Striding](./assets/49_asr_chunking/Striding.png)

This is not **technically** 100% the same thing as running the model on the whole file, so it is not enabled by default, but as you saw in the earlier example you only need to add `chunk_length_s` to your `pipeline` for it to work. In practice, we observed that most of the bad inference is confined to the strides, which are dropped before the final reconstruction, leading to a proper transcription of the full text.

Note that you can choose every argument of this technique:

```python
from transformers import pipeline

pipe = pipeline(model="facebook/wav2vec2-base-960h")
# stride_length_s is a tuple of the left and right stride length.
# With only 1 number, both sides get the same stride. By default,
# the stride_length on each side is 1/6th of the chunk_length_s.
output = pipe("very_long_file.mp3", chunk_length_s=10, stride_length_s=(4, 2))
```

Chunking with stride on LM augmented models
-------------------------------------------

In [transformers](https://github.com/huggingface/transformers), we also added support for adding a language model (LM) to Wav2Vec2 in order to boost the WER performance of the models without even finetuning. [See this excellent blogpost explaining how it works](https://huggingface.co/blog/wav2vec2-with-ngram).

It turns out that the LM works directly on the logits themselves, so we can apply exactly the same technique as before without any modification! Chunking large files with these LM-boosted models therefore still works out of the box.

Live inference
--------------

A very nice perk of using a CTC model like Wav2Vec2 is that it is a single-pass model, so it is **very** fast, especially on GPU. We can exploit that in order to do live inference.

The principle is exactly the same as regular striding, but this time we can feed the pipeline data **as it is coming in** and simply use striding on full chunks of, for instance, 10 s with a 1 s stride to get proper context. That requires running many more inference steps than simple file chunking, but it can make the live experience much better because the model can print things as you are speaking, without having to wait X seconds before seeing something displayed.
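To make that more concrete, here is a rough, simplified sketch of the idea rather than the pipeline's actual implementation. The `audio_stream` helper is a hypothetical stand-in for real microphone capture (it just yields blocks of silence so the snippet runs), and we simply re-transcribe a rolling 10-second window every time a new second of audio arrives:

```python
import numpy as np
from transformers import pipeline

pipe = pipeline(model="facebook/wav2vec2-base-960h")
sampling_rate = 16_000

def audio_stream(n_blocks=20):
    """Hypothetical stand-in for microphone capture: yields 1-second blocks
    of silence. Swap this for sounddevice/pyaudio in a real application."""
    for _ in range(n_blocks):
        yield np.zeros(sampling_rate, dtype=np.float32)

window = 10 * sampling_rate  # keep roughly the last 10 seconds as context
buffer = np.zeros(0, dtype=np.float32)

for block in audio_stream():
    # Rolling window (shorter at the very beginning of the stream)
    buffer = np.concatenate([buffer, block])[-window:]
    # Re-run inference on the window; only the most recent second is "new",
    # the rest acts as left context, mirroring the striding idea above.
    text = pipe({"raw": buffer, "sampling_rate": sampling_rate})["text"]
    print(text)
```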
Getting Started with Sentiment Analysis using Python
FedericoPascual
February 2, 2022
sentiment-analysis-python
sentiment-analysis, nlp, guide
https://huggingface.co/blog/sentiment-analysis-python
# Getting Started with Sentiment Analysis using Python <script async defer src="https://unpkg.com/medium-zoom-element@0/dist/medium-zoom-element.min.js"></script> Sentiment analysis is the automated process of tagging data according to their sentiment, such as positive, negative and neutral. Sentiment analysis allows companies to analyze data at scale, detect insights and automate processes. In the past, sentiment analysis used to be limited to researchers, machine learning engineers or data scientists with experience in natural language processing. However, the AI community has built awesome tools to democratize access to machine learning in recent years. Nowadays, you can use sentiment analysis with a few lines of code and no machine learning experience at all! 🤯 In this guide, you'll learn everything to get started with sentiment analysis using Python, including: 1. [What is sentiment analysis?](#1-what-is-sentiment-analysis) 2. [How to use pre-trained sentiment analysis models with Python](#2-how-to-use-pre-trained-sentiment-analysis-models-with-python) 3. [How to build your own sentiment analysis model](#3-building-your-own-sentiment-analysis-model) 4. [How to analyze tweets with sentiment analysis](#4-analyzing-tweets-with-sentiment-analysis-and-python) Let's get started! 🚀 ## 1. What is Sentiment Analysis? Sentiment analysis is a [natural language processing](https://en.wikipedia.org/wiki/Natural_language_processing) technique that identifies the polarity of a given text. There are different flavors of sentiment analysis, but one of the most widely used techniques labels data into positive, negative and neutral. For example, let's take a look at these tweets mentioning [@VerizonSupport](https://twitter.com/VerizonSupport): - *"dear @verizonsupport your service is straight 💩 in dallas.. been with y’all over a decade and this is all time low for y’all. i’m talking no internet at all."* → Would be tagged as "Negative". - *"@verizonsupport ive sent you a dm"* → would be tagged as "Neutral". - *"thanks to michelle et al at @verizonsupport who helped push my no-show-phone problem along. order canceled successfully and ordered this for pickup today at the apple store in the mall."* → would be tagged as "Positive". Sentiment analysis allows processing data at scale and in real-time. For example, do you want to analyze thousands of tweets, product reviews or support tickets? Instead of sorting through this data manually, you can use sentiment analysis to automatically understand how people are talking about a specific topic, get insights for data-driven decisions and automate business processes. Sentiment analysis is used in a wide variety of applications, for example: - Analyze social media mentions to understand how people are talking about your brand vs your competitors. - Analyze feedback from surveys and product reviews to quickly get insights into what your customers like and dislike about your product. - Analyze incoming support tickets in real-time to detect angry customers and act accordingly to prevent churn. ## 2. How to Use Pre-trained Sentiment Analysis Models with Python Now that we have covered what sentiment analysis is, we are ready to play with some sentiment analysis models! 🎉 On the [Hugging Face Hub](https://huggingface.co/models), we are building the largest collection of models and datasets publicly available in order to democratize machine learning 🚀. 
In the Hub, you can find more than 27,000 models shared by the AI community with state-of-the-art performances on tasks such as sentiment analysis, object detection, text generation, speech recognition and more. The Hub is free to use and most models have a widget that allows to test them directly on your browser! There are more than [215 sentiment analysis models](https://huggingface.co/models?pipeline_tag=text-classification&sort=downloads&search=sentiment) publicly available on the Hub and integrating them with Python just takes 5 lines of code: ```python pip install -q transformers from transformers import pipeline sentiment_pipeline = pipeline("sentiment-analysis") data = ["I love you", "I hate you"] sentiment_pipeline(data) ``` This code snippet uses the [pipeline class](https://huggingface.co/docs/transformers/main_classes/pipelines) to make predictions from models available in the Hub. It uses the [default model for sentiment analysis](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english?text=I+like+you.+I+love+you) to analyze the list of texts `data` and it outputs the following results: ```python [{'label': 'POSITIVE', 'score': 0.9998}, {'label': 'NEGATIVE', 'score': 0.9991}] ``` You can use a specific sentiment analysis model that is better suited to your language or use case by providing the name of the model. For example, if you want a sentiment analysis model for tweets, you can specify the [model id](https://huggingface.co/finiteautomata/bertweet-base-sentiment-analysis): ```python specific_model = pipeline(model="finiteautomata/bertweet-base-sentiment-analysis") specific_model(data) ``` You can test these models with your own data using this [Colab notebook](https://colab.research.google.com/drive/1G4nvWf6NtytiEyiIkYxs03nno5ZupIJn?usp=sharing): <!-- <div class="flex text-center items-center"> --> <figure class="flex justify-center w-full"> <iframe width="560" height="315" src="https://www.youtube.com/embed/eN-mbWOKJ7Q" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> </figure> The following are some popular models for sentiment analysis models available on the Hub that we recommend checking out: - [Twitter-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment) is a roBERTa model trained on ~58M tweets and fine-tuned for sentiment analysis. Fine-tuning is the process of taking a pre-trained large language model (e.g. roBERTa in this case) and then tweaking it with additional training data to make it perform a second similar task (e.g. sentiment analysis). - [Bert-base-multilingual-uncased-sentiment](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) is a model fine-tuned for sentiment analysis on product reviews in six languages: English, Dutch, German, French, Spanish and Italian. - [Distilbert-base-uncased-emotion](https://huggingface.co/bhadresh-savani/distilbert-base-uncased-emotion?text=I+feel+a+bit+let+down) is a model fine-tuned for detecting emotions in texts, including sadness, joy, love, anger, fear and surprise. Are you interested in doing sentiment analysis in languages such as Spanish, French, Italian or German? On the Hub, you will find many models fine-tuned for different use cases and ~28 languages. 
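For instance, here is a minimal sketch using the multilingual model mentioned above on a Spanish and a French review (the example sentences and the expected star ratings are only illustrative):

```python
from transformers import pipeline

# Rates reviews on a 1-to-5 star scale across six languages
multilingual_model = pipeline(model="nlptown/bert-base-multilingual-uncased-sentiment")
multilingual_model(["Me encanta este producto", "Ce film était une perte de temps"])
# You would expect something like a high star rating for the first review
# and a low one for the second; exact labels and scores will vary.
```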
You can check out the complete list of sentiment analysis models [here](https://huggingface.co/models?pipeline_tag=text-classification&sort=downloads&search=sentiment) and filter on the left by the language you are interested in.

## 3. Building Your Own Sentiment Analysis Model

Using pre-trained models publicly available on the Hub is a great way to get started right away with sentiment analysis. These models use deep learning architectures such as transformers that achieve state-of-the-art performance on sentiment analysis and other machine learning tasks. However, you can fine-tune a model with your own data to further improve the sentiment analysis results and get an extra boost of accuracy in your particular use case.

In this section, we'll go over two approaches to fine-tuning a model for sentiment analysis with your own data and criteria. The first approach uses the Trainer API from [🤗Transformers](https://github.com/huggingface/transformers), an open source library with 50K stars and 1K+ contributors; it requires a bit more coding and experience. The second approach is easier and more straightforward: it uses [AutoNLP](https://huggingface.co/autonlp), a tool to automatically train, evaluate and deploy state-of-the-art NLP models without code or ML experience.

Let's dive in!

### a. Fine-tuning a model with Python

In this tutorial, you'll use the IMDB dataset to fine-tune a DistilBERT model for sentiment analysis.

The [IMDB dataset](https://huggingface.co/datasets/imdb) contains 25,000 movie reviews labeled by sentiment for training a model and 25,000 movie reviews for testing it. [DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert) is a smaller, faster and cheaper version of [BERT](https://huggingface.co/docs/transformers/model_doc/bert). It is 40% smaller than BERT and runs 60% faster while preserving over 95% of BERT’s performance. You'll use the IMDB dataset to fine-tune a DistilBERT model that is able to classify whether a movie review is positive or negative. Once you train the model, you will use it to analyze new data! ⚡️

We have [created this notebook](https://colab.research.google.com/drive/1t-NJadXsPTDT6EWIR0PRzpn5o8oMHzp3?usp=sharing) so you can follow this tutorial in Google Colab.

#### 1. Activate GPU and Install Dependencies

As a first step, let's set up Google Colab to use a GPU (instead of CPU) to train the model much faster. You can do this by going to the menu, clicking on 'Runtime' > 'Change runtime type', and selecting 'GPU' as the Hardware accelerator. Once you do this, you should check if a GPU is available in your notebook by running the following code:

```python
import torch
torch.cuda.is_available()
```

Then, install the libraries you will be using in this tutorial:

```python
!pip install datasets transformers huggingface_hub
```

You should also install `git-lfs` to use git in your model repository:

```python
!apt-get install git-lfs
```

#### 2. Preprocess data

You need data to fine-tune DistilBERT for sentiment analysis.
So, let's use [🤗Datasets](https://github.com/huggingface/datasets/) library to download and preprocess the IMDB dataset so you can then use this data for training your model: ```python from datasets import load_dataset imdb = load_dataset("imdb") ``` IMDB is a huge dataset, so let's create smaller datasets to enable faster training and testing: ```python small_train_dataset = imdb["train"].shuffle(seed=42).select([i for i in list(range(3000))]) small_test_dataset = imdb["test"].shuffle(seed=42).select([i for i in list(range(300))]) ``` To preprocess our data, you will use [DistilBERT tokenizer](https://huggingface.co/docs/transformers/v4.15.0/en/model_doc/distilbert#transformers.DistilBertTokenizer): ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") ``` Next, you will prepare the text inputs for the model for both splits of our dataset (training and test) by using the [map method](https://huggingface.co/docs/datasets/about_map_batch.html): ```python def preprocess_function(examples): return tokenizer(examples["text"], truncation=True) tokenized_train = small_train_dataset.map(preprocess_function, batched=True) tokenized_test = small_test_dataset.map(preprocess_function, batched=True) ``` To speed up training, let's use a data_collator to convert your training samples to PyTorch tensors and concatenate them with the correct amount of [padding](https://huggingface.co/docs/transformers/preprocessing#everything-you-always-wanted-to-know-about-padding-and-truncation): ```python from transformers import DataCollatorWithPadding data_collator = DataCollatorWithPadding(tokenizer=tokenizer) ``` #### 3. Training the model Now that the preprocessing is done, you can go ahead and train your model 🚀 You will be throwing away the pretraining head of the DistilBERT model and replacing it with a classification head fine-tuned for sentiment analysis. This enables you to transfer the knowledge from DistilBERT to your custom model 🔥 For training, you will be using the [Trainer API](https://huggingface.co/docs/transformers/v4.15.0/en/main_classes/trainer#transformers.Trainer), which is optimized for fine-tuning [Transformers](https://github.com/huggingface/transformers)🤗 models such as DistilBERT, BERT and RoBERTa. First, let's define DistilBERT as your base model: ```python from transformers import AutoModelForSequenceClassification model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2) ``` Then, let's define the metrics you will be using to evaluate how good is your fine-tuned model ([accuracy and f1 score](https://huggingface.co/metrics)): ```python import numpy as np from datasets import load_metric def compute_metrics(eval_pred): load_accuracy = load_metric("accuracy") load_f1 = load_metric("f1") logits, labels = eval_pred predictions = np.argmax(logits, axis=-1) accuracy = load_accuracy.compute(predictions=predictions, references=labels)["accuracy"] f1 = load_f1.compute(predictions=predictions, references=labels)["f1"] return {"accuracy": accuracy, "f1": f1} ``` Next, let's login to your [Hugging Face account](https://huggingface.co/join) so you can manage your model repositories. `notebook_login` will launch a widget in your notebook where you'll need to add your [Hugging Face token](https://huggingface.co/settings/token): ```python from huggingface_hub import notebook_login notebook_login() ``` You are almost there! 
Before training our model, you need to define the training arguments and define a Trainer with all the objects you constructed up to this point: ```python from transformers import TrainingArguments, Trainer repo_name = "finetuning-sentiment-model-3000-samples" training_args = TrainingArguments( output_dir=repo_name, learning_rate=2e-5, per_device_train_batch_size=16, per_device_eval_batch_size=16, num_train_epochs=2, weight_decay=0.01, save_strategy="epoch", push_to_hub=True, ) trainer = Trainer( model=model, args=training_args, train_dataset=tokenized_train, eval_dataset=tokenized_test, tokenizer=tokenizer, data_collator=data_collator, compute_metrics=compute_metrics, ) ``` Now, it's time to fine-tune the model on the sentiment analysis dataset! 🙌 You just have to call the `train()` method of your Trainer: ```python trainer.train() ``` And voila! You fine-tuned a DistilBERT model for sentiment analysis! 🎉 Training time depends on the hardware you use and the number of samples in the dataset. In our case, it took almost 10 minutes using a GPU and fine-tuning the model with 3,000 samples. The more samples you use for training your model, the more accurate it will be but training could be significantly slower. Next, let's compute the evaluation metrics to see how good your model is: ```python trainer.evaluate() ``` In our case, we got 88% accuracy and 89% f1 score. Quite good for a sentiment analysis model just trained with 3,000 samples! #### 4. Analyzing new data with the model Now that you have trained a model for sentiment analysis, let's use it to analyze new data and get 🤖 predictions! This unlocks the power of machine learning; using a model to automatically analyze data at scale, in real-time ⚡️ First, let's upload the model to the Hub: ```python trainer.push_to_hub() ``` Now that you have pushed the model to the Hub, you can use it [pipeline class](https://huggingface.co/docs/transformers/main_classes/pipelines) to analyze two new movie reviews and see how your model predicts its sentiment with just two lines of code 🤯: ```python from transformers import pipeline sentiment_model = pipeline(model="federicopascual/finetuning-sentiment-model-3000-samples") sentiment_model(["I love this move", "This movie sucks!"]) ``` These are the predictions from our model: ```python [{'label': 'LABEL_1', 'score': 0.9558}, {'label': 'LABEL_0', 'score': 0.9413}] ``` In the IMDB dataset, `Label 1` means positive and `Label 0` is negative. Quite good! 🔥 ### b. Training a sentiment model with AutoNLP [AutoNLP](https://huggingface.co/autonlp) is a tool to train state-of-the-art machine learning models without code. It provides a friendly and easy-to-use user interface, where you can train custom models by simply uploading your data. AutoNLP will automatically fine-tune various pre-trained models with your data, take care of the hyperparameter tuning and find the best model for your use case. All models trained with AutoNLP are deployed and ready for production. Training a sentiment analysis model using AutoNLP is super easy and it just takes a few clicks 🤯. Let's give it a try! As a first step, let's get some data! You'll use [Sentiment140](https://huggingface.co/datasets/sentiment140), a popular sentiment analysis dataset that consists of Twitter messages labeled with 3 sentiments: 0 (negative), 2 (neutral), and 4 (positive). The dataset is quite big; it contains 1,600,000 tweets. 
As you don't need this amount of data to get your feet wet with AutoNLP and train your first models, we have prepared a smaller version of the Sentiment140 dataset with 3,000 samples that you can download from [here](https://cdn-media.huggingface.co/marketing/content/sentiment%20analysis/sentiment-analysis-python/sentiment140-3000samples.csv). This is how the dataset looks like: <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="Sentiment 140 dataset" src="assets/50_sentiment_python/sentiment140-dataset.png"></medium-zoom> <figcaption>Sentiment 140 dataset</figcaption> </figure> Next, let's create a [new project on AutoNLP](https://ui.autonlp.huggingface.co/new) to train 5 candidate models: <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="Creating a new project on AutoNLP" src="assets/50_sentiment_python/new-project.png"></medium-zoom> <figcaption>Creating a new project on AutoNLP</figcaption> </figure> Then, upload the dataset and map the text column and target columns: <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="Adding a dataset to AutoNLP" src="assets/50_sentiment_python/add-dataset.png"></medium-zoom> <figcaption>Adding a dataset to AutoNLP</figcaption> </figure> Once you add your dataset, go to the "Trainings" tab and accept the pricing to start training your models. AutoNLP pricing can be as low as $10 per model: <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="Adding a dataset to AutoNLP" src="assets/50_sentiment_python/trainings.png"></medium-zoom> <figcaption>Adding a dataset to AutoNLP</figcaption> </figure> After a few minutes, AutoNLP has trained all models, showing the performance metrics for all of them: <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="Adding a dataset to AutoNLP" src="assets/50_sentiment_python/training-success.png"></medium-zoom> <figcaption>Trained sentiment analysis models by AutoNLP</figcaption> </figure> The best model has 77.87% accuracy 🔥 Pretty good for a sentiment analysis model for tweets trained with just 3,000 samples! All these models are automatically uploaded to the Hub and deployed for production. You can use any of these models to start analyzing new data right away by using the [pipeline class](https://huggingface.co/docs/transformers/main_classes/pipelines) as shown in previous sections of this post. ## 4. Analyzing Tweets with Sentiment Analysis and Python In this last section, you'll take what you have learned so far in this post and put it into practice with a fun little project: analyzing tweets about NFTs with sentiment analysis! First, you'll use [Tweepy](https://www.tweepy.org/), an easy-to-use Python library for getting tweets mentioning #NFTs using the [Twitter API](https://developer.twitter.com/en/docs/twitter-api). Then, you will use a sentiment analysis model from the 🤗Hub to analyze these tweets. Finally, you will create some visualizations to explore the results and find some interesting insights. You can use [this notebook](https://colab.research.google.com/drive/182UbzmSeAFgOiow7WNMxvnz-yO-SJQ0W?usp=sharing) to follow this tutorial. Let’s jump into it! ### 1. Install dependencies First, let's install all the libraries you will use in this tutorial: ``` !pip install -q transformers tweepy wordcloud matplotlib ``` ### 2. 
Set up Twitter API credentials Next, you will set up the credentials for interacting with the Twitter API. First, you'll need to sign up for a [developer account on Twitter](https://developer.twitter.com/en/docs/twitter-api/getting-started/getting-access-to-the-twitter-api). Then, you have to create a new project and connect an app to get an API key and token. You can follow this [step-by-step guide](https://developer.twitter.com/en/docs/tutorials/step-by-step-guide-to-making-your-first-request-to-the-twitter-api-v2) to get your credentials. Once you have the API key and token, let's create a wrapper with Tweepy for interacting with the Twitter API: ```python import tweepy # Add Twitter API key and secret consumer_key = "XXXXXX" consumer_secret = "XXXXXX" # Handling authentication with Twitter auth = tweepy.AppAuthHandler(consumer_key, consumer_secret) # Create a wrapper for the Twitter API api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True) ``` ### 3. Search for tweets using Tweepy At this point, you are ready to start using the Twitter API to collect tweets 🎉. You will use [Tweepy Cursor](https://docs.tweepy.org/en/v3.5.0/cursor_tutorial.html) to extract 1,000 tweets mentioning #NFTs: ```python # Helper function for handling pagination in our search and handle rate limits def limit_handled(cursor): while True: try: yield cursor.next() except tweepy.RateLimitError: print('Reached rate limite. Sleeping for >15 minutes') time.sleep(15 * 61) except StopIteration: break # Define the term you will be using for searching tweets query = '#NFTs' query = query + ' -filter:retweets' # Define how many tweets to get from the Twitter API count = 1000 # Let's search for tweets using Tweepy search = limit_handled(tweepy.Cursor(api.search, q=query, tweet_mode='extended', lang='en', result_type="recent").items(count)) ``` ### 4. Run sentiment analysis on the tweets Now you can put our new skills to work and run sentiment analysis on your data! 🎉 You will use one of the models available on the Hub fine-tuned for [sentiment analysis of tweets](https://huggingface.co/finiteautomata/bertweet-base-sentiment-analysis). Like in other sections of this post, you will use the [pipeline class](https://huggingface.co/docs/transformers/main_classes/pipelines) to make the predictions with this model: ```python from transformers import pipeline # Set up the inference pipeline using a model from the 🤗 Hub sentiment_analysis = pipeline(model="finiteautomata/bertweet-base-sentiment-analysis") # Let's run the sentiment analysis on each tweet tweets = [] for tweet in search: try: content = tweet.full_text sentiment = sentiment_analysis(content) tweets.append({'tweet': content, 'sentiment': sentiment[0]['label']}) except: pass ``` ### 5. Explore the results of sentiment analysis How are people talking about NFTs on Twitter? Are they talking mostly positively or negatively? Let's explore the results of the sentiment analysis to find out! First, let's load the results on a dataframe and see examples of tweets that were labeled for each sentiment: ```python import pandas as pd # Load the data in a dataframe df = pd.DataFrame(tweets) pd.set_option('display.max_colwidth', None) # Show a tweet for each sentiment display(df[df["sentiment"] == 'POS'].head(1)) display(df[df["sentiment"] == 'NEU'].head(1)) display(df[df["sentiment"] == 'NEG'].head(1)) ``` Output: ``` Tweet: @NFTGalIery Warm, exquisite and elegant palette of charming beauty Its price is 2401 ETH. 
\nhttps://t.co/Ej3BfVOAqc\n#NFTs #NFTartists #art #Bitcoin #Crypto #OpenSeaNFT #Ethereum #BTC Sentiment: POS Tweet: How much our followers made on #Crypto in December:\n#DAPPRadar airdrop — $200\nFree #VPAD tokens — $800\n#GasDAO airdrop — up to $1000\nStarSharks_SSS IDO — $3500\nCeloLaunch IDO — $3000\n12 Binance XMas #NFTs — $360 \nTOTAL PROFIT: $8500+\n\nJoin and earn with us https://t.co/fS30uj6SYx Sentiment: NEU Tweet: Stupid guy #2\nhttps://t.co/8yKzYjCYIl\n\n#NFT #NFTs #nftcollector #rarible https://t.co/O4V19gMmVk Sentiment: NEG ``` Then, let's see how many tweets you got for each sentiment and visualize these results: ```python # Let's count the number of tweets by sentiments sentiment_counts = df.groupby(['sentiment']).size() print(sentiment_counts) # Let's visualize the sentiments fig = plt.figure(figsize=(6,6), dpi=100) ax = plt.subplot(111) sentiment_counts.plot.pie(ax=ax, autopct='%1.1f%%', startangle=270, fontsize=12, label="") ``` Interestingly, most of the tweets about NFTs are positive (56.1%) and almost none are negative (2.0%): <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="Sentiment analysis result of NFTs tweets" src="assets/50_sentiment_python/sentiment-result.png"></medium-zoom> <figcaption>Sentiment analysis result of NFTs tweets</figcaption> </figure> Finally, let's see what words stand out for each sentiment by creating a word cloud: ```python from wordcloud import WordCloud from wordcloud import STOPWORDS # Wordcloud with positive tweets positive_tweets = df['tweet'][df["sentiment"] == 'POS'] stop_words = ["https", "co", "RT"] + list(STOPWORDS) positive_wordcloud = WordCloud(max_font_size=50, max_words=100, background_color="white", stopwords = stop_words).generate(str(positive_tweets)) plt.figure() plt.title("Positive Tweets - Wordcloud") plt.imshow(positive_wordcloud, interpolation="bilinear") plt.axis("off") plt.show() # Wordcloud with negative tweets negative_tweets = df['tweet'][df["sentiment"] == 'NEG'] stop_words = ["https", "co", "RT"] + list(STOPWORDS) negative_wordcloud = WordCloud(max_font_size=50, max_words=100, background_color="white", stopwords = stop_words).generate(str(negative_tweets)) plt.figure() plt.title("Negative Tweets - Wordcloud") plt.imshow(negative_wordcloud, interpolation="bilinear") plt.axis("off") plt.show() ``` Some of the words associated with positive tweets include Discord, Ethereum, Join, Mars4 and Shroom: <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="Word cloud for positive tweets" src="assets/50_sentiment_python/positive-tweets-wordcloud.png"></medium-zoom> <figcaption>Word cloud for positive tweets</figcaption> </figure> In contrast, words associated with negative tweets include: cookies chaos, Solana, and OpenseaNFT: <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="Word cloud for negative tweets" src="assets/50_sentiment_python/negative-tweets-wordcloud.png"></medium-zoom> <figcaption>Word cloud for negative tweets</figcaption> </figure> And that is it! With just a few lines of python code, you were able to collect tweets, analyze them with sentiment analysis and create some cool visualizations to analyze the results! Pretty cool, huh? ## 5. Wrapping up Sentiment analysis with Python has never been easier! 
Tools such as [🤗Transformers](https://github.com/huggingface/transformers) and the [🤗Hub](https://huggingface.co/models) make sentiment analysis accessible to all developers. You can use open source, pre-trained models for sentiment analysis in just a few lines of code 🔥

Do you want to train a custom model for sentiment analysis with your own data? Easy peasy! You can fine-tune a model using the [Trainer API](https://huggingface.co/docs/transformers/v4.15.0/en/main_classes/trainer#transformers.Trainer) to build on top of large language models and get state-of-the-art results. If you want something even easier, you can use [AutoNLP](https://huggingface.co/autonlp) to train custom machine learning models by simply uploading data.

If you have questions that the Hugging Face community can help answer and/or benefit from, please ask them in the [Hugging Face forum](https://discuss.huggingface.co/). Also, join our [discord server](https://discord.gg/YRAq8fMnUG) to talk with us and with the Hugging Face community.
Fine-Tune ViT for Image Classification with 🤗 Transformers
nateraw
February 11, 2022
fine-tune-vit
guide, cv
https://huggingface.co/blog/fine-tune-vit
# Fine-Tune ViT for Image Classification with 🤗 Transformers <script async defer src="https://unpkg.com/medium-zoom-element@0/dist/medium-zoom-element.min.js"></script> <a target="_blank" href="https://colab.research.google.com/github/nateraw/huggingface-hub-examples/blob/main/vit_image_classification_explained.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> Just as transformers-based models have revolutionized NLP, we're now seeing an explosion of papers applying them to all sorts of other domains. One of the most revolutionary of these was the Vision Transformer (ViT), which was introduced in [June 2021](https://arxiv.org/abs/2010.11929) by a team of researchers at Google Brain. This paper explored how you can tokenize images, just as you would tokenize sentences, so that they can be passed to transformer models for training. It's quite a simple concept, really... 1. Split an image into a grid of sub-image patches 1. Embed each patch with a linear projection 1. Each embedded patch becomes a token, and the resulting sequence of embedded patches is the sequence you pass to the model. <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="A leaf!" src="assets/51_fine_tune_vit/vit-figure.jpg"></medium-zoom> </figure> It turns out that once you've done the above, you can pre-train and fine-tune transformers just as you're used to with NLP tasks. Pretty sweet 😎. --- In this blog post, we'll walk through how to leverage 🤗 `datasets` to download and process image classification datasets, and then use them to fine-tune a pre-trained ViT with 🤗 `transformers`. To get started, let's first install both those packages. ```bash pip install datasets transformers ``` ## Load a dataset Let's start by loading a small image classification dataset and taking a look at its structure. We'll use the [`beans`](https://huggingface.co/datasets/beans) dataset, which is a collection of pictures of healthy and unhealthy bean leaves. 🍃 ```python from datasets import load_dataset ds = load_dataset('beans') ds ``` Let's take a look at the 400th example from the `'train'` split from the beans dataset. You'll notice each example from the dataset has 3 features: 1. `image`: A PIL Image 1. `image_file_path`: The `str` path to the image file that was loaded as `image` 1. `labels`: A [`datasets.ClassLabel`](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=classlabel#datasets.ClassLabel) feature, which is an integer representation of the label. (Later you'll see how to get the string class names, don't worry!) ```python ex = ds['train'][400] ex ``` { 'image': <PIL.JpegImagePlugin ...>, 'image_file_path': '/root/.cache/.../bean_rust_train.4.jpg', 'labels': 1 } Let's take a look at the image 👀 ```python image = ex['image'] image ``` <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="A leaf!" src="assets/51_fine_tune_vit/example-leaf.jpg"></medium-zoom> </figure> That's definitely a leaf! But what kind? 😅 Since the `'labels'` feature of this dataset is a `datasets.features.ClassLabel`, we can use it to look up the corresponding name for this example's label ID. First, let's access the feature definition for the `'labels'`. ```python labels = ds['train'].features['labels'] labels ``` ClassLabel(num_classes=3, names=['angular_leaf_spot', 'bean_rust', 'healthy'], names_file=None, id=None) Now, let's print out the class label for our example. 
You can do that by using the [`int2str`](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=classlabel#datasets.ClassLabel.int2str) function of `ClassLabel`, which, as the name implies, allows to pass the integer representation of the class to look up the string label. ```python labels.int2str(ex['labels']) ``` 'bean_rust' Turns out the leaf shown above is infected with Bean Rust, a serious disease in bean plants. 😢 Let's write a function that'll display a grid of examples from each class to get a better idea of what you're working with. ```python import random from PIL import ImageDraw, ImageFont, Image def show_examples(ds, seed: int = 1234, examples_per_class: int = 3, size=(350, 350)): w, h = size labels = ds['train'].features['labels'].names grid = Image.new('RGB', size=(examples_per_class * w, len(labels) * h)) draw = ImageDraw.Draw(grid) font = ImageFont.truetype("/usr/share/fonts/truetype/liberation/LiberationMono-Bold.ttf", 24) for label_id, label in enumerate(labels): # Filter the dataset by a single label, shuffle it, and grab a few samples ds_slice = ds['train'].filter(lambda ex: ex['labels'] == label_id).shuffle(seed).select(range(examples_per_class)) # Plot this label's examples along a row for i, example in enumerate(ds_slice): image = example['image'] idx = examples_per_class * label_id + i box = (idx % examples_per_class * w, idx // examples_per_class * h) grid.paste(image.resize(size), box=box) draw.text(box, label, (255, 255, 255), font=font) return grid show_examples(ds, seed=random.randint(0, 1337), examples_per_class=3) ``` <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="A leaf!" src="assets/51_fine_tune_vit/leaf-grid.jpg"></medium-zoom> <figcaption>A grid of a few examples from each class in the dataset</figcaption> </figure> From what I'm seeing, - Angular Leaf Spot: Has irregular brown patches - Bean Rust: Has circular brown spots surrounded with a white-ish yellow ring - Healthy: ...looks healthy. 🤷‍♂️ ## Loading ViT Image Processor Now we know what our images look like and better understand the problem we're trying to solve. Let's see how we can prepare these images for our model! When ViT models are trained, specific transformations are applied to images fed into them. Use the wrong transformations on your image, and the model won't understand what it's seeing! 🖼 ➡️ 🔢 To make sure we apply the correct transformations, we will use a [`ViTImageProcessor`](https://huggingface.co/docs/transformers/model_doc/vit#transformers.ViTImageProcessor) initialized with a configuration that was saved along with the pretrained model we plan to use. In our case, we'll be using the [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) model, so let's load its image processor from the Hugging Face Hub. ```python from transformers import ViTImageProcessor model_name_or_path = 'google/vit-base-patch16-224-in21k' processor = ViTImageProcessor.from_pretrained(model_name_or_path) ``` You can see the image processor configuration by printing it. ViTImageProcessor { "do_normalize": true, "do_resize": true, "image_mean": [ 0.5, 0.5, 0.5 ], "image_std": [ 0.5, 0.5, 0.5 ], "resample": 2, "size": 224 } To process an image, simply pass it to the image processor's call function. This will return a dict containing `pixel values`, which is the numeric representation to be passed to the model. 
You get a NumPy array by default, but if you add the `return_tensors='pt'` argument, you'll get back `torch` tensors instead. ```python processor(image, return_tensors='pt') ``` Should give you something like... { 'pixel_values': tensor([[[[ 0.2706, 0.3255, 0.3804, ...]]]]) } ...where the shape of the tensor is `(1, 3, 224, 224)`. ## Processing the Dataset Now that you know how to read images and transform them into inputs, let's write a function that will put those two things together to process a single example from the dataset. ```python def process_example(example): inputs = processor(example['image'], return_tensors='pt') inputs['labels'] = example['labels'] return inputs ``` ```python process_example(ds['train'][0]) ``` { 'pixel_values': tensor([[[[-0.6157, -0.6000, -0.6078, ..., ]]]]), 'labels': 0 } While you could call `ds.map` and apply this to every example at once, this can be very slow, especially if you use a larger dataset. Instead, you can apply a ***transform*** to the dataset. Transforms are only applied to examples as you index them. First, though, you'll need to update the last function to accept a batch of data, as that's what `ds.with_transform` expects. ```python ds = load_dataset('beans') def transform(example_batch): # Take a list of PIL images and turn them to pixel values inputs = processor([x for x in example_batch['image']], return_tensors='pt') # Don't forget to include the labels! inputs['labels'] = example_batch['labels'] return inputs ``` You can directly apply this to the dataset using `ds.with_transform(transform)`. ```python prepared_ds = ds.with_transform(transform) ``` Now, whenever you get an example from the dataset, the transform will be applied in real time (on both samples and slices, as shown below) ```python prepared_ds['train'][0:2] ``` This time, the resulting `pixel_values` tensor will have shape `(2, 3, 224, 224)`. { 'pixel_values': tensor([[[[-0.6157, -0.6000, -0.6078, ..., ]]]]), 'labels': [0, 0] } # Training and Evaluation The data is processed and you are ready to start setting up the training pipeline. This blog post uses 🤗's Trainer, but that'll require us to do a few things first: - Define a collate function. - Define an evaluation metric. During training, the model should be evaluated on its prediction accuracy. You should define a `compute_metrics` function accordingly. - Load a pretrained checkpoint. You need to load a pretrained checkpoint and configure it correctly for training. - Define the training configuration. After fine-tuning the model, you will correctly evaluate it on the evaluation data and verify that it has indeed learned to correctly classify the images. ### Define our data collator Batches are coming in as lists of dicts, so you can just unpack + stack those into batch tensors. Since the `collate_fn` will return a batch dict, you can `**unpack` the inputs to the model later. ✨ ```python import torch def collate_fn(batch): return { 'pixel_values': torch.stack([x['pixel_values'] for x in batch]), 'labels': torch.tensor([x['labels'] for x in batch]) } ``` ### Define an evaluation metric The [accuracy](https://huggingface.co/metrics/accuracy) metric from `datasets` can easily be used to compare the predictions with the labels. Below, you can see how to use it within a `compute_metrics` function that will be used by the `Trainer`. 
```python import numpy as np from datasets import load_metric metric = load_metric("accuracy") def compute_metrics(p): return metric.compute(predictions=np.argmax(p.predictions, axis=1), references=p.label_ids) ``` Let's load the pretrained model. We'll add `num_labels` on init so the model creates a classification head with the right number of units. We'll also include the `id2label` and `label2id` mappings to have human-readable labels in the Hub widget (if you choose to `push_to_hub`). ```python from transformers import ViTForImageClassification labels = ds['train'].features['labels'].names model = ViTForImageClassification.from_pretrained( model_name_or_path, num_labels=len(labels), id2label={str(i): c for i, c in enumerate(labels)}, label2id={c: str(i) for i, c in enumerate(labels)} ) ``` Almost ready to train! The last thing needed before that is to set up the training configuration by defining [`TrainingArguments`](https://huggingface.co/docs/transformers/v4.16.2/en/main_classes/trainer#transformers.TrainingArguments). Most of these are pretty self-explanatory, but one that is quite important here is `remove_unused_columns=False`. This one will drop any features not used by the model's call function. By default it's `True` because usually it's ideal to drop unused feature columns, making it easier to unpack inputs into the model's call function. But, in our case, we need the unused features ('image' in particular) in order to create 'pixel_values'. What I'm trying to say is that you'll have a bad time if you forget to set `remove_unused_columns=False`. ```python from transformers import TrainingArguments training_args = TrainingArguments( output_dir="./vit-base-beans", per_device_train_batch_size=16, evaluation_strategy="steps", num_train_epochs=4, fp16=True, save_steps=100, eval_steps=100, logging_steps=10, learning_rate=2e-4, save_total_limit=2, remove_unused_columns=False, push_to_hub=False, report_to='tensorboard', load_best_model_at_end=True, ) ``` Now, all instances can be passed to Trainer and we are ready to start training! ```python from transformers import Trainer trainer = Trainer( model=model, args=training_args, data_collator=collate_fn, compute_metrics=compute_metrics, train_dataset=prepared_ds["train"], eval_dataset=prepared_ds["validation"], tokenizer=processor, ) ``` ### Train 🚀 ```python train_results = trainer.train() trainer.save_model() trainer.log_metrics("train", train_results.metrics) trainer.save_metrics("train", train_results.metrics) trainer.save_state() ``` ### Evaluate 📊 ```python metrics = trainer.evaluate(prepared_ds['validation']) trainer.log_metrics("eval", metrics) trainer.save_metrics("eval", metrics) ``` Here were my evaluation results - Cool beans! Sorry, had to say it. ***** eval metrics ***** epoch = 4.0 eval_accuracy = 0.985 eval_loss = 0.0637 eval_runtime = 0:00:02.13 eval_samples_per_second = 62.356 eval_steps_per_second = 7.97 Finally, if you want, you can push your model up to the hub. Here, we'll push it up if you specified `push_to_hub=True` in the training configuration. Note that in order to push to hub, you'll have to have git-lfs installed and be logged into your Hugging Face account (which can be done via `huggingface-cli login`). 
```python
kwargs = {
    "finetuned_from": model.config._name_or_path,
    "tasks": "image-classification",
    "dataset": 'beans',
    "tags": ['image-classification'],
}

if training_args.push_to_hub:
    trainer.push_to_hub('🍻 cheers', **kwargs)
else:
    trainer.create_model_card(**kwargs)
```

The resulting model has been shared at [nateraw/vit-base-beans](https://huggingface.co/nateraw/vit-base-beans). I'm assuming you don't have pictures of bean leaves lying around, so I added some examples for you to give it a try! 🚀
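If you don't have bean-leaf photos handy either, one quick way to try the shared checkpoint is to run it on a validation image from the same `beans` dataset; the predicted label and score in the comment below are only illustrative:

```python
from datasets import load_dataset
from transformers import pipeline

# Image-classification pipeline backed by the fine-tuned checkpoint
classifier = pipeline("image-classification", model="nateraw/vit-base-beans")

ds = load_dataset('beans')
image = ds['validation'][0]['image']  # a PIL image, as we saw earlier

classifier(image)
# e.g. [{'label': 'angular_leaf_spot', 'score': 0.98}, ...]; exact outputs will vary
```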
BERT 101 🤗 State Of The Art NLP Model Explained
britneymuller
March 2, 2022
bert-101
guide, nlp
https://huggingface.co/blog/bert-101
<html itemscope itemtype="https://schema.org/FAQPage"> # BERT 101 🤗 State Of The Art NLP Model Explained <script async defer src="https://unpkg.com/medium-zoom-element@0/dist/medium-zoom-element.min.js"></script> ## What is BERT? BERT, short for Bidirectional Encoder Representations from Transformers, is a Machine Learning (ML) model for natural language processing. It was developed in 2018 by researchers at Google AI Language and serves as a swiss army knife solution to 11+ of the most common language tasks, such as sentiment analysis and named entity recognition. Language has historically been difficult for computers to ‘understand’. Sure, computers can collect, store, and read text inputs but they lack basic language _context_. So, along came Natural Language Processing (NLP): the field of artificial intelligence aiming for computers to read, analyze, interpret and derive meaning from text and spoken words. This practice combines linguistics, statistics, and Machine Learning to assist computers in ‘understanding’ human language. Individual NLP tasks have traditionally been solved by individual models created for each specific task. That is, until— BERT! BERT revolutionized the NLP space by solving for 11+ of the most common NLP tasks (and better than previous models) making it the jack of all NLP trades. In this guide, you'll learn what BERT is, why it’s different, and how to get started using BERT: 1. [What is BERT used for?](#1-what-is-bert-used-for) 2. [How does BERT work?](#2-how-does-bert-work) 3. [BERT model size & architecture](#3-bert-model-size--architecture) 4. [BERT’s performance on common language tasks](#4-berts-performance-on-common-language-tasks) 5. [Environmental impact of deep learning](#5-enviornmental-impact-of-deep-learning) 6. [The open source power of BERT](#6-the-open-source-power-of-bert) 7. [How to get started using BERT](#7-how-to-get-started-using-bert) 8. [BERT FAQs](#8-bert-faqs) 9. [Conclusion](#9-conclusion) Let's get started! 🚀 ## 1. What is BERT used for? BERT can be used on a wide variety of language tasks: - Can determine how positive or negative a movie’s reviews are. [(Sentiment Analysis)](https://huggingface.co/blog/sentiment-analysis-python) - Helps chatbots answer your questions. [(Question answering)](https://huggingface.co/tasks/question-answering) - Predicts your text when writing an email (Gmail). [(Text prediction)](https://huggingface.co/tasks/fill-mask) - Can write an article about any topic with just a few sentence inputs. [(Text generation)](https://huggingface.co/tasks/text-generation) - Can quickly summarize long legal contracts. [(Summarization)](https://huggingface.co/tasks/summarization) - Can differentiate words that have multiple meanings (like ‘bank’) based on the surrounding text. (Polysemy resolution) **There are many more language/NLP tasks + more detail behind each of these.** ***Fun Fact:*** You interact with NLP (and likely BERT) almost every single day! NLP is behind Google Translate, voice assistants (Alexa, Siri, etc.), chatbots, Google searches, voice-operated GPS, and more. --- ### 1.1 Example of BERT BERT helps Google better surface (English) results for nearly all searches since November of 2020. 
Here’s an example of how BERT helps Google better understand specific searches like: <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="BERT Google Search Example" src="assets/52_bert_101/BERT-example.png"></medium-zoom> <figcaption><a href="https://blog.google/products/search/search-language-understanding-bert/">Source</a></figcaption> </figure> Pre-BERT Google surfaced information about getting a prescription filled. Post-BERT Google understands that “for someone” relates to picking up a prescription for someone else and the search results now help to answer that. --- ## 2. How does BERT Work? BERT works by leveraging the following: ### 2.1 Large amounts of training data A massive dataset of 3.3 Billion words has contributed to BERT’s continued success. BERT was specifically trained on Wikipedia (\~2.5B words) and Google’s BooksCorpus (\~800M words). These large informational datasets contributed to BERT’s deep knowledge not only of the English language but also of our world! 🚀 Training on a dataset this large takes a long time. BERT’s training was made possible thanks to the novel Transformer architecture and sped up by using TPUs (Tensor Processing Units - Google’s custom circuit built specifically for large ML models). —64 TPUs trained BERT over the course of 4 days. **Note:** Demand for smaller BERT models is increasing in order to use BERT within smaller computational environments (like cell phones and personal computers). [23 smaller BERT models were released in March 2020](https://github.com/google-research/bert). [DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert) offers a lighter version of BERT; runs 60% faster while maintaining over 95% of BERT’s performance. ### 2.2 What is a Masked Language Model? MLM enables/enforces bidirectional learning from text by masking (hiding) a word in a sentence and forcing BERT to bidirectionally use the words on either side of the covered word to predict the masked word. This had never been done before! **Fun Fact:** We naturally do this as humans! **Masked Language Model Example:** Imagine your friend calls you while camping in Glacier National Park and their service begins to cut out. The last thing you hear before the call drops is: <p class="text-center px-6">Friend: “Dang! I’m out fishing and a huge trout just [blank] my line!”</p> Can you guess what your friend said?? You’re naturally able to predict the missing word by considering the words bidirectionally before and after the missing word as context clues (in addition to your historical knowledge of how fishing works). Did you guess that your friend said, ‘broke’? That’s what we predicted as well but even we humans are error-prone to some of these methods. **Note:** This is why you’ll often see a “Human Performance” comparison to a language model’s performance scores. And yes, newer models like BERT can be more accurate than humans! 🤯 The bidirectional methodology you did to fill in the [blank] word above is similar to how BERT attains state-of-the-art accuracy. A random 15% of tokenized words are hidden during training and BERT’s job is to correctly predict the hidden words. Thus, directly teaching the model about the English language (and the words we use). Isn’t that neat? 
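To see what this 15% masking looks like in code, here is a minimal sketch (not from the original post) using the `DataCollatorForLanguageModeling` utility in 🤗 Transformers, which is one common way to apply MLM-style masking to a batch:

```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Selects 15% of the tokens for prediction (most of them are replaced by [MASK]);
# the labels keep the original ids at those positions and -100 everywhere else,
# so the loss is only computed on the masked tokens.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

batch = collator([tokenizer("A huge trout just broke my fishing line.")])
print(tokenizer.decode(batch["input_ids"][0]))  # some tokens now show up as [MASK]
print(batch["labels"][0])                       # original ids at masked positions, -100 elsewhere
```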
Play around with BERT’s masking predictions using the hosted inference widget on the [bert-base-uncased model page](https://huggingface.co/bert-base-uncased).

**Fun Fact:** Masking has been around a long time - [1953 Paper on Cloze procedure (or ‘Masking’)](https://psycnet.apa.org/record/1955-00850-001).

### 2.3 What is Next Sentence Prediction?

NSP (Next Sentence Prediction) is used to help BERT learn about relationships between sentences by predicting if a given sentence follows the previous sentence or not.

**Next Sentence Prediction Example:**

1. Paul went shopping. He bought a new shirt. (correct sentence pair)
2. Ramona made coffee. Vanilla ice cream cones for sale. (incorrect sentence pair)

In training, 50% correct sentence pairs are mixed in with 50% random sentence pairs to help BERT increase next sentence prediction accuracy.

**Fun Fact:** BERT is trained on both MLM (50%) and NSP (50%) at the same time.
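If you want to poke at NSP directly, here is a minimal sketch (not from the original post) using the `BertForNextSentencePrediction` head in 🤗 Transformers on the sentence pair above:

```python
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForNextSentencePrediction.from_pretrained("bert-base-uncased")

# The "correct" pair from the example above
encoding = tokenizer("Paul went shopping.", "He bought a new shirt.", return_tensors="pt")
logits = model(**encoding).logits

# Index 0 = "sentence B follows sentence A", index 1 = "sentence B is random"
print(torch.softmax(logits, dim=-1))
```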
### 2.4 Transformers

The Transformer architecture makes it possible to parallelize ML training extremely efficiently. Massive parallelization thus makes it feasible to train BERT on large amounts of data in a relatively short period of time.

Transformers use an attention mechanism to observe relationships between words. The concept, originally proposed in the popular 2017 paper [Attention Is All You Need](https://proceedings.neurips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf), sparked the use of Transformers in NLP models all around the world.

> Since their introduction in 2017, Transformers have rapidly become the state-of-the-art approach to tackle tasks in many domains such as natural language processing, speech recognition, and computer vision. In short, if you’re doing deep learning, then you need Transformers!

<p class="text-center px-6">Lewis Tunstall, Hugging Face ML Engineer & <a href="https://www.amazon.com/Natural-Language-Processing-Transformers-Applications/dp/1098103246">Author of Natural Language Processing with Transformers</a></p>

Timeline of popular Transformer model releases:

<figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="Transformer model timeline" src="assets/52_bert_101/transformers-timeline.png"></medium-zoom> <figcaption><a href="https://huggingface.co/course/chapter1/4">Source</a></figcaption> </figure>

#### 2.4.1 How do Transformers work?

Transformers work by leveraging attention, a powerful deep-learning algorithm, first seen in computer vision models. —Not all that different from how we humans process information through attention. We are incredibly good at forgetting/ignoring mundane daily inputs that don’t pose a threat or require a response from us. For example, can you remember everything you saw and heard coming home last Tuesday? Of course not! Our brain’s memory is limited and valuable. Our recall is aided by our ability to forget trivial inputs.

Similarly, Machine Learning models need to learn how to pay attention only to the things that matter and not waste computational resources processing irrelevant information. Transformers create differential weights signaling which words in a sentence are the most critical to further process.

<figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="Encoder and Decoder" src="assets/52_bert_101/encoder-and-decoder-transformers-blocks.png"></medium-zoom> </figure>

A transformer does this by successively processing an input through a stack of transformer layers, usually called the encoder. If necessary, another stack of transformer layers - the decoder - can be used to predict a target output. —BERT, however, doesn’t use a decoder. Transformers are uniquely suited for unsupervised learning because they can efficiently process millions of data points.

Fun Fact: Google has been using your reCAPTCHA selections to label training data since 2011. The entire Google Books archive and 13 million articles from the New York Times catalog have been transcribed/digitized via people entering reCAPTCHA text. Now, reCAPTCHA is asking us to label Google Street View images, vehicles, stoplights, airplanes, etc. Would be neat if Google made us aware of our participation in this effort (as the training data likely has future commercial intent), but I digress.

<p class="text-center"> To learn more about Transformers check out our <a href="https://huggingface.co/course/chapter1/1">Hugging Face Transformers Course</a>. </p>
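To make the idea of attention weights a bit more tangible, here is a small sketch (not from the original post) that loads `bert-base-uncased` and inspects the attention tensors it returns:

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

inputs = tokenizer("The animal didn't cross the street because it was too tired.", return_tensors="pt")
outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer,
# each of shape (batch_size, num_heads, seq_len, seq_len)
print(len(outputs.attentions), outputs.attentions[0].shape)
```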
## 3. BERT model size & architecture

Let’s break down the architecture for the two original BERT models:

<figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="Original BERT models architecture" src="assets/52_bert_101/BERT-size-and-architecture.png"></medium-zoom> </figure>

ML Architecture Glossary:

| ML Architecture Parts | Definition |
|-----------------------|------------|
| Parameters: | Number of learnable variables/values available for the model. |
| Transformer Layers: | Number of Transformer blocks. A transformer block transforms a sequence of word representations to a sequence of contextualized words (numbered representations). |
| Hidden Size: | The dimensionality of the representations passed between layers, i.e. how many values are used to represent each token. |
| Attention Heads: | The number of attention mechanisms that run in parallel inside each Transformer block. |
| Processing: | Type of processing unit used to train the model. |
| Length of Training: | Time it took to train the model. |

Here’s how many of the above ML architecture parts BERTbase and BERTlarge have:

| | Transformer Layers | Hidden Size | Attention Heads | Parameters | Processing | Length of Training |
|-----------|--------------------|-------------|-----------------|------------|------------|--------------------|
| BERTbase | 12 | 768 | 12 | 110M | 4 TPUs | 4 days |
| BERTlarge | 24 | 1024 | 16 | 340M | 16 TPUs | 4 days |

Let’s take a look at how BERTlarge’s additional layers, attention heads, and parameters have increased its performance across NLP tasks.
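You can read these numbers straight from the model configs on the Hub; a quick sketch (assuming the standard `bert-base-uncased` / `bert-large-uncased` checkpoints):

```python
from transformers import AutoConfig

for checkpoint in ["bert-base-uncased", "bert-large-uncased"]:
    config = AutoConfig.from_pretrained(checkpoint)
    # num_hidden_layers, hidden_size and num_attention_heads mirror the table above
    print(checkpoint, config.num_hidden_layers, config.hidden_size, config.num_attention_heads)

# bert-base-uncased 12 768 12
# bert-large-uncased 24 1024 16
```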
## 4. BERT's performance on common language tasks

BERT has successfully achieved state-of-the-art accuracy on 11 common NLP tasks, outperforming previous top NLP models, and is the first to outperform humans! But, how are these achievements measured?

### NLP Evaluation Methods:

#### 4.1 SQuAD v1.1 & v2.0

[SQuAD](https://huggingface.co/datasets/squad) (Stanford Question Answering Dataset) is a reading comprehension dataset of around 108k questions that can be answered via a corresponding paragraph of Wikipedia text. BERT’s performance on this evaluation method was a big achievement, beating previous state-of-the-art models and human-level performance:

<figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="BERT's performance on SQuAD v1.1" src="assets/52_bert_101/BERTs-performance-on-SQuAD1.1.png"></medium-zoom> </figure>

#### 4.2 SWAG

[SWAG](https://huggingface.co/datasets/swag) (Situations With Adversarial Generations) is an interesting evaluation in that it detects a model’s ability to infer commonsense! It does this through a large-scale dataset of 113k multiple choice questions about common sense situations. These questions are transcribed from a video scene/situation and SWAG provides the model with four possible outcomes in the next scene. The model then does its best at predicting the correct answer.

BERT outperformed the previous top models, including human-level performance:

<figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="BERT's performance on SWAG" src="assets/52_bert_101/BERTs-performance-on-SWAG.png"></medium-zoom> </figure>

#### 4.3 GLUE Benchmark

The [GLUE](https://huggingface.co/datasets/glue) (General Language Understanding Evaluation) benchmark is a group of resources for training, measuring, and analyzing language models in comparison to one another. These resources consist of nine “difficult” tasks designed to test an NLP model’s understanding. Here’s a summary of each of those tasks:

<figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="GLUE Benchmark tasks" src="assets/52_bert_101/GLUE-Benchmark-tasks.png"></medium-zoom> </figure>

<figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="BERT's performance on GLUE" src="assets/52_bert_101/BERTs-Performance-on-GLUE.png"></medium-zoom> </figure>

While some of these tasks may seem irrelevant and banal, it’s important to note that these evaluation methods are _incredibly_ powerful in indicating which models are best suited for your next NLP application.

Attaining performance of this caliber isn’t without consequences. Next up, let’s learn about Machine Learning's impact on the environment.

## 5. Environmental impact of deep learning

Large Machine Learning models require massive amounts of data which is expensive in both time and compute resources. These models also have an environmental impact:

<figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="Environmental impact of machine learning" src="assets/52_bert_101/enviornmental-impact-of-machine-learning.png"></medium-zoom> <figcaption><a href="https://huggingface.co/course/chapter1/4">Source</a></figcaption> </figure>

Machine Learning’s environmental impact is one of the many reasons we believe in democratizing the world of Machine Learning through open source! Sharing large pre-trained language models is essential in reducing the overall compute cost and carbon footprint of our community-driven efforts.

## 6. The open source power of BERT

Unlike other large learning models like GPT-3, BERT’s source code is publicly accessible ([view BERT’s code on Github](https://github.com/google-research/bert)) allowing BERT to be more widely used all around the world. This is a game-changer!

Developers are now able to get a state-of-the-art model like BERT up and running quickly without spending large amounts of time and money. 🤯

Developers can instead focus their efforts on fine-tuning BERT to customize the model’s performance to their unique tasks.

It’s important to note that [thousands](https://huggingface.co/models?sort=downloads&search=bert) of open-source and free, pre-trained BERT models are currently available for specific use cases if you don’t want to fine-tune BERT.
BERT models pre-trained for specific tasks: - [Twitter sentiment analysis](https://huggingface.co/finiteautomata/bertweet-base-sentiment-analysis) - [Analysis of Japanese text](https://huggingface.co/cl-tohoku/bert-base-japanese-char) - [Emotion categorizer (English - anger, fear, joy, etc.)](https://huggingface.co/j-hartmann/emotion-english-distilroberta-base) - [Clinical Notes analysis](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) - [Speech to text translation](https://huggingface.co/facebook/hubert-large-ls960-ft) - [Toxic comment detection](https://huggingface.co/unitary/toxic-bert?) You can also find [hundreds of pre-trained, open-source Transformer models](https://huggingface.co/models?library=transformers&sort=downloads) available on the Hugging Face Hub. ## 7. How to get started using BERT We've [created this notebook](https://colab.research.google.com/drive/1YtTqwkwaqV2n56NC8xerflt95Cjyd4NE?usp=sharing) so you can try BERT through this easy tutorial in Google Colab. Open the notebook or add the following code to your own. Pro Tip: Use (Shift + Click) to run a code cell. Note: Hugging Face's [pipeline class](https://huggingface.co/docs/transformers/main_classes/pipelines) makes it incredibly easy to pull in open source ML models like transformers with just a single line of code. ### 7.1 Install Transformers First, let's install Transformers via the following code: ```python !pip install transformers ``` ### 7.2 Try out BERT Feel free to swap out the sentence below for one of your own. However, leave [MASK] in somewhere to allow BERT to predict the missing word ```python from transformers import pipeline unmasker = pipeline('fill-mask', model='bert-base-uncased') unmasker("Artificial Intelligence [MASK] take over the world.") ``` When you run the above code you should see an output like this: ``` [{'score': 0.3182411789894104, 'sequence': 'artificial intelligence can take over the world.', 'token': 2064, 'token_str': 'can'}, {'score': 0.18299679458141327, 'sequence': 'artificial intelligence will take over the world.', 'token': 2097, 'token_str': 'will'}, {'score': 0.05600147321820259, 'sequence': 'artificial intelligence to take over the world.', 'token': 2000, 'token_str': 'to'}, {'score': 0.04519503191113472, 'sequence': 'artificial intelligences take over the world.', 'token': 2015, 'token_str': '##s'}, {'score': 0.045153118669986725, 'sequence': 'artificial intelligence would take over the world.', 'token': 2052, 'token_str': 'would'}] ``` Kind of frightening right? 
🙃 ### 7.3 Be aware of model bias Let's see what jobs BERT suggests for a "man": ```python unmasker("The man worked as a [MASK].") ``` When you run the above code you should see an output that looks something like: ```python [{'score': 0.09747546911239624, 'sequence': 'the man worked as a carpenter.', 'token': 10533, 'token_str': 'carpenter'}, {'score': 0.052383411675691605, 'sequence': 'the man worked as a waiter.', 'token': 15610, 'token_str': 'waiter'}, {'score': 0.04962698742747307, 'sequence': 'the man worked as a barber.', 'token': 13362, 'token_str': 'barber'}, {'score': 0.037886083126068115, 'sequence': 'the man worked as a mechanic.', 'token': 15893, 'token_str': 'mechanic'}, {'score': 0.037680838257074356, 'sequence': 'the man worked as a salesman.', 'token': 18968, 'token_str': 'salesman'}] ``` BERT predicted the man's job to be a Carpenter, Waiter, Barber, Mechanic, or Salesman Now let's see what jobs BERT suggesst for "woman" ```python unmasker("The woman worked as a [MASK].") ``` You should see an output that looks something like: ```python [{'score': 0.21981535851955414, 'sequence': 'the woman worked as a nurse.', 'token': 6821, 'token_str': 'nurse'}, {'score': 0.1597413569688797, 'sequence': 'the woman worked as a waitress.', 'token': 13877, 'token_str': 'waitress'}, {'score': 0.11547300964593887, 'sequence': 'the woman worked as a maid.', 'token': 10850, 'token_str': 'maid'}, {'score': 0.03796879202127457, 'sequence': 'the woman worked as a prostitute.', 'token': 19215, 'token_str': 'prostitute'}, {'score': 0.030423851683735847, 'sequence': 'the woman worked as a cook.', 'token': 5660, 'token_str': 'cook'}] ``` BERT predicted the woman's job to be a Nurse, Waitress, Maid, Prostitute, or Cook displaying a clear gender bias in professional roles. ### 7.4 Some other BERT Notebooks you might enjoy: [A Visual Notebook to BERT for the First Time](https://colab.research.google.com/github/jalammar/jalammar.github.io/blob/master/notebooks/bert/A_Visual_Notebook_to_Using_BERT_for_the_First_Time.ipynb) [Train your tokenizer](https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/tokenizer_training.ipynb) +Don't forget to checkout the [Hugging Face Transformers Course](https://huggingface.co/course/chapter1/1) to learn more 🎉 ## 8. BERT FAQs <div itemscope itemprop="mainEntity" itemtype="https://schema.org/Question"> <h3 itemprop="name">Can BERT be used with PyTorch?</h3> <div itemscope itemprop="acceptedAnswer" itemtype="https://schema.org/Answer"> <div itemprop="text"> Yes! Our experts at Hugging Face have open-sourced the <a href="https://www.google.com/url?q=https://github.com/huggingface/transformers">PyTorch transformers repository on GitHub</a>. <br /> <p>Pro Tip: Lewis Tunstall, Leandro von Werra, and Thomas Wolf also wrote a book to help people build language applications with Hugging Face called, <a href="https://www.google.com/search?kgmid=/g/11qh58xzh7&hl=en-US&q=Natural+Language+Processing+with+Transformers:+Building+Language+Applications+with+Hugging+Face">‘Natural Language Processing with Transformers’</a>.</p> </div> </div> </div> <div itemscope itemprop="mainEntity" itemtype="https://schema.org/Question"> <h3 itemprop="name">Can BERT be used with Tensorflow?</h3> <div itemscope itemprop="acceptedAnswer" itemtype="https://schema.org/Answer"> <div itemprop="text"> Yes! 
<a href="https://huggingface.co/docs/transformers/v4.15.0/en/model_doc/bert#transformers.TFBertModel">You can use Tensorflow as the backend of Transformers.</a> </div> </div> </div> <div itemscope itemprop="mainEntity" itemtype="https://schema.org/Question"> <h3 itemprop="name">How long does it take to pre-train BERT?</h3> <div itemscope itemprop="acceptedAnswer" itemtype="https://schema.org/Answer"> <div itemprop="text"> The 2 original BERT models were trained on 4(BERTbase) and 16(BERTlarge) Cloud TPUs for 4 days. </div> </div> </div> <div itemscope itemprop="mainEntity" itemtype="https://schema.org/Question"> <h3 itemprop="name">How long does it take to fine-tune BERT?</h3> <div itemscope itemprop="acceptedAnswer" itemtype="https://schema.org/Answer"> <div itemprop="text"> For common NLP tasks discussed above, BERT takes between 1-25mins on a single Cloud TPU or between 1-130mins on a single GPU. </div> </div> </div> <div itemscope itemprop="mainEntity" itemtype="https://schema.org/Question"> <h3 itemprop="name">What makes BERT different?</h3> <div itemscope itemprop="acceptedAnswer" itemtype="https://schema.org/Answer"> <div itemprop="text"> BERT was one of the first models in NLP that was trained in a two-step way: <ol> <li>BERT was trained on massive amounts of unlabeled data (no human annotation) in an unsupervised fashion.</li> <li>BERT was then trained on small amounts of human-annotated data starting from the previous pre-trained model resulting in state-of-the-art performance.</li> </ol> </div> </div> </div> </html> ## 9. Conclusion BERT is a highly complex and advanced language model that helps people automate language understanding. Its ability to accomplish state-of-the-art performance is supported by training on massive amounts of data and leveraging Transformers architecture to revolutionize the field of NLP. Thanks to BERT’s open-source library, and the incredible AI community’s efforts to continue to improve and share new BERT models, the future of untouched NLP milestones looks bright. What will you create with BERT? Learn how to [fine-tune BERT](https://huggingface.co/docs/transformers/training) for your particular use case 🤗
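To make that last pointer concrete, here is a minimal, hedged sketch of fine-tuning `bert-base-uncased` with the `Trainer` API; the dataset choice (IMDB sentiment) and hyperparameters are illustrative, not from the original post:

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Illustrative setup: binary sentiment classification on a small IMDB subset
dataset = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

dataset = dataset.map(tokenize, batched=True)
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

args = TrainingArguments(output_dir="bert-imdb", per_device_train_batch_size=16, num_train_epochs=1)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"].shuffle(seed=42).select(range(2000)),
    eval_dataset=dataset["test"].select(range(500)),
)
trainer.train()
```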
Guiding Text Generation with Constrained Beam Search in 🤗 Transformers
cwkeam
March 11, 2022
constrained-beam-search
guide, nlp
https://huggingface.co/blog/constrained-beam-search
# Guiding Text Generation with Constrained Beam Search in 🤗 Transformers

<a target="_blank" href="https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/53_constrained_beam_search.ipynb">
    <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>

## **Introduction**

This blog post assumes that the reader is familiar with text generation methods using the different variants of beam search, as explained in the blog post: ["How to generate text: using different decoding methods for language generation with Transformers"](https://huggingface.co/blog/how-to-generate)

Unlike ordinary beam search, **constrained** beam search allows us to exert control over the output of text generation. This is useful because we sometimes know exactly what we want inside the output. For example, in a Neural Machine Translation task, we might know which words must be included in the final translation with a dictionary lookup. Sometimes, generation outputs that are almost equally probable under a language model might not be equally desirable for the end-user due to the particular context. Both of these situations could be solved by allowing the users to tell the model which words must be included in the final output.

### **Why It's Difficult**

However, this is actually a very non-trivial problem. This is because the task requires us to force the generation of certain subsequences *somewhere* in the final output, at *some point* during the generation.

Let's say that we want to generate a sentence `S` that has to include the phrase \\( p_1=\{ t_1, t_2 \} \\) with tokens \\( t_1, t_2 \\) in order. Let's define the expected sentence \\( S \\) as:

$$ S_{expected} = \{ s_1, s_2, ..., s_k, t_1, t_2, s_{k+1}, ..., s_n \} $$

The problem is that beam search generates the sequence *token-by-token*. Though not entirely accurate, one can think of beam search as the function \\( B(\mathbf{s}_{0:i}) = s_{i+1} \\), where it looks at the currently generated sequence of tokens from \\( 0 \\) to \\( i \\) and then predicts the next token at \\( i+1 \\). But how can this function know, at an arbitrary step \\( i < k \\), that the tokens must be generated at some future step \\( k \\)? Or when it's at the step \\( i=k \\), how can it know for sure that this is the best spot to force the tokens, instead of some future step \\( i>k \\)?

![Why constraints are hard](https://raw.githubusercontent.com/huggingface/blog/main/assets/53_constrained_beam_search/why_constraints_are_hard.png)

And what if you have multiple constraints with varying requirements? What if you want to force the phrase \\( p_1=\{t_1, t_2\} \\) *and* also the phrase \\( p_2=\{ t_3, t_4, t_5, t_6\} \\)? What if you want the model to **choose between** the two phrases? What if we want to force the phrase \\( p_1 \\) and force just one phrase among the list of phrases \\( \{p_{21}, p_{22}, p_{23}\} \\)?

The above examples are actually very reasonable use-cases, as will be shown below, and the new constrained beam search feature allows for all of them!

This post will quickly go over what the new ***constrained beam search*** feature can do for you and then go into deeper details about how it works under the hood.

## **Example 1: Forcing a Word**

Let's say we're trying to translate `"How old are you?"` to German.

`"Wie alt bist du?"` is what you'd say in an informal setting, and `"Wie alt sind Sie?"` is what you'd say in a formal setting.
And depending on the context, we might want one form of formality over the other, but how do we tell the model that? ### **Traditional Beam Search** Here's how we would do text translation in the ***traditional beam search setting.*** ``` !pip install -q git+https://github.com/huggingface/transformers.git ``` ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("t5-base") model = AutoModelForSeq2SeqLM.from_pretrained("t5-base") encoder_input_str = "translate English to German: How old are you?" input_ids = tokenizer(encoder_input_str, return_tensors="pt").input_ids outputs = model.generate( input_ids, num_beams=10, num_return_sequences=1, no_repeat_ngram_size=1, remove_invalid_values=True, ) print("Output:\n" + 100 * '-') print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` Output: ---------------------------------------------------------------------------------------------------- Wie alt bist du? ### **With Constrained Beam Search** But what if we knew that we wanted a formal output instead of the informal one? What if we knew from prior knowledge what the generation must include, and we could *inject it* into the generation? The following is what is possible now with the `force_words_ids` keyword argument to `model.generate()`: ```python tokenizer = AutoTokenizer.from_pretrained("t5-base") model = AutoModelForSeq2SeqLM.from_pretrained("t5-base") encoder_input_str = "translate English to German: How old are you?" force_words = ["Sie"] input_ids = tokenizer(encoder_input_str, return_tensors="pt").input_ids force_words_ids = tokenizer(force_words, add_special_tokens=False).input_ids outputs = model.generate( input_ids, force_words_ids=force_words_ids, num_beams=5, num_return_sequences=1, no_repeat_ngram_size=1, remove_invalid_values=True, ) print("Output:\n" + 100 * '-') print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` Output: ---------------------------------------------------------------------------------------------------- Wie alt sind Sie? As you can see, we were able to guide the generation with prior knowledge about our desired output. Previously we would've had to generate a bunch of possible outputs, then filter the ones that fit our requirement. Now we can do that at the generation stage. ## **Example 2: Disjunctive Constraints** We mentioned above a use-case where we know which words we want to be included in the final output. An example of this might be using a dictionary lookup during neural machine translation. But what if we don't know which *word forms* to use, where we'd want outputs like `["raining", "rained", "rains", ...]` to be equally possible? In a more general sense, there are always cases when we don't want the *exact word verbatim*, letter by letter, and might be open to other related possibilities too. Constraints that allow for this behavior are ***Disjunctive Constraints***, which allow the user to input a list of words, whose purpose is to guide the generation such that the final output must contain just *at least one* among the list of words. 
Here's an example that uses a mix of the above two types of constraints: ```python from transformers import GPT2LMHeadModel, GPT2Tokenizer model = GPT2LMHeadModel.from_pretrained("gpt2") tokenizer = GPT2Tokenizer.from_pretrained("gpt2") force_word = "scared" force_flexible = ["scream", "screams", "screaming", "screamed"] force_words_ids = [ tokenizer([force_word], add_prefix_space=True, add_special_tokens=False).input_ids, tokenizer(force_flexible, add_prefix_space=True, add_special_tokens=False).input_ids, ] starting_text = ["The soldiers", "The child"] input_ids = tokenizer(starting_text, return_tensors="pt").input_ids outputs = model.generate( input_ids, force_words_ids=force_words_ids, num_beams=10, num_return_sequences=1, no_repeat_ngram_size=1, remove_invalid_values=True, ) print("Output:\n" + 100 * '-') print(tokenizer.decode(outputs[0], skip_special_tokens=True)) print(tokenizer.decode(outputs[1], skip_special_tokens=True)) ``` Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation. Output: ---------------------------------------------------------------------------------------------------- The soldiers, who were all scared and screaming at each other as they tried to get out of the The child was taken to a local hospital where she screamed and scared for her life, police said. As you can see, the first output used `"screaming"`, the second output used `"screamed"`, and both used `"scared"` verbatim. The list to choose from `["screaming", "screamed", ...]` doesn't have to be word forms; this can satisfy any use-case where we need just one from a list of words. ## **Traditional Beam search** The following is an example of traditional **beam search**, taken from a previous [blog post](https://huggingface.co/blog/how-to-generate): ![Beam search](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/beam_search.png) Unlike greedy search, beam search works by keeping a longer list of hypotheses. In the above picture, we have displayed three next possible tokens at each possible step in the generation. Here's another way to look at the first step of the beam search for the above example, in the case of `num_beams=3`: ![Beam search step 1](https://raw.githubusercontent.com/huggingface/blog/main/assets/53_constrained_beam_search/beam_1.jpg) Instead of only choosing `"The dog"` like what a greedy search would do, a beam search would allow *further consideration* of `"The nice"` and `"The car"`. In the next step, we consider the next possible tokens for each of the three branches we created in the previous step. ![Beam search step 2](https://raw.githubusercontent.com/huggingface/blog/main/assets/53_constrained_beam_search/beam_2.jpg) Though we end up *considering* significantly more than `num_beams` outputs, we reduce them down to `num_beams` at the end of the step. We can't just keep branching out, then the number of `beams` we'd have to keep track of would be \\( \text{beams}^{n} \\) for \\( n \\) steps, which becomes very large very quickly ( \\( 10 \\) beams after \\( 10 \\) steps is \\( 10,000,000,000 \\) beams!). For the rest of the generation, we repeat the above step until the ending criteria has been met, like generating the `<eos>` token or reaching `max_length`, for example. Branch out, rank, reduce, and repeat. ## **Constrained Beam Search** Constrained beam search attempts to fulfill the constraints by *injecting* the desired tokens at every step of the generation. 
Let's say that we're trying to force the phrase `"is fast"` in the generated output. In the traditional beam search setting, we find the top `k` most probable next tokens at each branch and append them for consideration. In the constrained setting, we do the same but also append the tokens that will take us *closer to fulfilling our constraints*. Here's a demonstration: ![Constrained Beam Search Step 1](https://raw.githubusercontent.com/huggingface/blog/main/assets/53_constrained_beam_search/cbeam_1.jpg) On top of the usual high-probability next tokens like `"dog"` and `"nice"`, we force the token `"is"` in order to get us closer to fulfilling our constraint of `"is fast"`. For the next step, the branched-out candidates below are mostly the same as that of traditional beam search. But like the above example, constrained beam search adds onto the existing candidates by forcing the constraints at each new branch: ![Constrained Beam Search Step 2](https://raw.githubusercontent.com/huggingface/blog/main/assets/53_constrained_beam_search/cbeam_2.jpg) ### **Banks** Before we talk about the next step, we need to think about the resulting undesirable behavior we can see in the above step. The problem with naively just forcing the desired phrase `"is fast"` in the output is that, most of the time, you'd end up with nonsensical outputs like `"The is fast"` above. This is actually what makes this a nontrivial problem to solve. A deeper discussion about the complexities of solving this problem can be found in the [original feature request issue](https://github.com/huggingface/transformers/issues/14081#issuecomment-1004479944) that was raised in `huggingface/transformers`. Banks solve this problem by creating a *balance* between fulfilling the constraints and creating sensible output. Bank \\( n \\) refers to the ***list of beams that have made \\( n \\) steps progress in fulfilling the constraints***. After sorting all the possible beams into their respective banks, we do a round-robin selection. With the above example, we'd select the most probable output from Bank 2, then most probable from Bank 1, one from Bank 0, the second most probable from Bank 2, the second most probable from Bank 1, and so forth. Since we're using `num_beams=3`, we just do the above process three times to end up with `["The is fast", "The dog is", "The dog and"]`. This way, even though we're *forcing* the model to consider the branch where we've manually appended the desired token, we still keep track of other high-probable sequences that probably make more sense. Even though `"The is fast"` fulfills our constraint completely, it's not a very sensible phrase. Luckily, we have `"The dog is"` and `"The dog and"` to work with in future steps, which hopefully will lead to more sensible outputs later on. This behavior is demonstrated in the third step of the above example: ![Constrained Beam Search Step 3](https://raw.githubusercontent.com/huggingface/blog/main/assets/53_constrained_beam_search/cbeam_3.jpg) Notice how `"The is fast"` doesn't require any manual appending of constraint tokens since it's already fulfilled (i.e., already contains the phrase `"is fast"`). Also, notice how beams like `"The dog is slow"` or `"The dog is mad"` are actually in Bank 0, since, although it includes the token `"is"`, it must restart from the beginning to generate `"is fast"`. By appending something like `"slow"` after `"is"`, it has effectively *reset its progress*. 
And finally notice how we ended up at a sensible output that contains our constraint phrase: `"The dog is fast"`! We were worried initially because blindly appending the desired tokens led to nonsensical phrases like `"The is fast"`. However, using round-robin selection from banks, we implicitly ended up getting rid of nonsensical outputs in preference for the more sensible outputs. ## **More About `Constraint` Classes and Custom Constraints** The main takeaway from the explanation can be summarized as the following. At every step, we keep pestering the model to consider the tokens that fulfill our constraints, all the while keeping track of beams that don't, until we end up with reasonably high probability sequences that contain our desired phrases. So a principled way to design this implementation was to represent each constraint as a `Constraint` object, whose purpose was to keep track of its progress and tell the beam search which tokens to generate next. Although we have provided the keyword argument `force_words_ids` for `model.generate()`, the following is what actually happens in the back-end: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, PhrasalConstraint tokenizer = AutoTokenizer.from_pretrained("t5-base") model = AutoModelForSeq2SeqLM.from_pretrained("t5-base") encoder_input_str = "translate English to German: How old are you?" constraints = [ PhrasalConstraint( tokenizer("Sie", add_special_tokens=False).input_ids ) ] input_ids = tokenizer(encoder_input_str, return_tensors="pt").input_ids outputs = model.generate( input_ids, constraints=constraints, num_beams=10, num_return_sequences=1, no_repeat_ngram_size=1, remove_invalid_values=True, ) print("Output:\n" + 100 * '-') print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` Output: ---------------------------------------------------------------------------------------------------- Wie alt sind Sie? You can define one yourself and input it into the `constraints` keyword argument to design your unique constraints. You just have to create a sub-class of the `Constraint` abstract interface class and follow its requirements. You can find more information in the definition of `Constraint` found [here](https://github.com/huggingface/transformers/blob/main/src/transformers/generation/beam_constraints.py). Some unique ideas (not yet implemented; maybe you can give it a try!) include constraints like `OrderedConstraints`, `TemplateConstraints` that may be added further down the line. Currently, the generation is fulfilled by including the sequences, wherever in the output. For example, a previous example had one sequence with scared -> screaming and the other with screamed -> scared. `OrderedConstraints` could allow the user to specify the order in which these constraints are fulfilled. `TemplateConstraints` could allow for a more niche use of the feature, where the objective can be something like: ```python starting_text = "The woman" template = ["the", "", "School of", "", "in"] possible_outputs == [ "The woman attended the Ross School of Business in Michigan.", "The woman was the administrator for the Harvard School of Business in MA." ] ``` or: ```python starting_text = "The woman" template = ["the", "", "", "University", "", "in"] possible_outputs == [ "The woman attended the Carnegie Mellon University in Pittsburgh.", ] impossible_outputs == [ "The woman attended the Harvard University in MA." 
] ``` or if the user does not care about the number of tokens that can go in between two words, then one can just use `OrderedConstraint`. ## **Conclusion** Constrained beam search gives us a flexible means to inject external knowledge and requirements into text generation. Previously, there was no easy way to tell the model to 1. include a list of sequences where 2. some of which are optional and some are not, such that 3. they're generated *somewhere* in the sequence at respective reasonable positions. Now, we can have full control over our generation with a mix of different subclasses of `Constraint` objects! This new feature is based mainly on the following papers: - [Guided Open Vocabulary Image Captioning with Constrained Beam Search](https://arxiv.org/pdf/1612.00576.pdf) - [Fast Lexically Constrained Decoding with Dynamic Beam Allocation for Neural Machine Translation](https://arxiv.org/abs/1804.06609) - [Improved Lexically Constrained Decoding for Translation and Monolingual Rewriting](https://aclanthology.org/N19-1090/) - [Guided Generation of Cause and Effect](https://arxiv.org/pdf/2107.09846.pdf) Like the ones above, many new research papers are exploring ways of using external knowledge (e.g., KGs, KBs) to guide the outputs of large deep learning models. Hopefully, this constrained beam search feature becomes another effective way to achieve this purpose. Thanks to everybody that gave guidance for this feature contribution: Patrick von Platen for being involved from the [initial issue](https://github.com/huggingface/transformers/issues/14081) to the [final PR](https://github.com/huggingface/transformers/pull/15761), and Narsil Patry, for providing detailed feedback on the code. *Thumbnail of this post uses an icon with the attribution: <a href="https://www.flaticon.com/free-icons/shorthand" title="shorthand icons">Shorthand icons created by Freepik - Flaticon</a>*
Image search with 🤗 datasets
davanstrien
March 16, 2022
image-search-datasets
cv
https://huggingface.co/blog/image-search-datasets
# Image search with 🤗 datasets <a target="_blank" href="https://colab.research.google.com/gist/davanstrien/e2c29fbbed20dc767e5a74e210f4237b/hf_blog_image_search.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> 🤗 [`datasets`](https://huggingface.co/docs/datasets/) is a library that makes it easy to access and share datasets. It also makes it easy to process data efficiently -- including working with data which doesn't fit into memory. When `datasets` was first launched, it was associated mostly with text data. However, recently, `datasets` has added increased support for audio as well as images. In particular, there is now a `datasets` [feature type for images](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=image#datasets.Image). A previous [blog post](https://huggingface.co/blog/fine-tune-vit) showed how `datasets` can be used with 🤗 `transformers` to train an image classification model. In this blog post, we'll see how we can combine `datasets` and a few other libraries to create an image search application. First, we'll install `datasets`. Since we're going to be working with images, we'll also install [`pillow`](https://pillow.readthedocs.io/en/stable/). We'll also need `sentence_transformers` and `faiss`. We'll introduce those in more detail below. We also install [`rich`](https://github.com/Textualize/rich) - we'll only briefly use it here, but it's a super handy package to have around -- I'd really recommend exploring it further! ``` python !pip install datasets pillow rich faiss-gpu sentence_transformers ``` To start, let's take a look at the image feature. We can use the wonderful [rich](https://rich.readthedocs.io/) library to poke around python objects (functions, classes etc.) 
``` python
from rich import inspect
import datasets
```

``` python
inspect(datasets.Image, help=True)
```

<pre>
class Image(decode: bool = True, id: Union[str, NoneType] = None) -> None:

Image feature to read image data from an image file.

Input: The Image feature accepts as input:
- A :obj:`str`: Absolute path to the image file (i.e. random access is allowed).
- A :obj:`dict` with the keys:

    - path: String with relative path of the image file to the archive file.
    - bytes: Bytes of the image file.

  This is useful for archived files with sequential access.

- An :obj:`np.ndarray`: NumPy array representing an image.
- A :obj:`PIL.Image.Image`: PIL image object.

Args:
    decode (:obj:`bool`, default ``True``): Whether to decode the image data. If `False`,
        returns the underlying dictionary in the format {"path": image_path, "bytes": image_bytes}.

decode = True
dtype = 'PIL.Image.Image'
id = None
pa_type = StructType(struct&lt;bytes: binary, path: string&gt;)
</pre>

We can see there are a few different ways in which we can pass in our images. We'll come back to this in a little while.

A really nice feature of the `datasets` library (beyond the functionality for processing data, memory mapping etc.) is that you get some nice things 'for free'.
One of these is the ability to add a [`faiss`](https://github.com/facebookresearch/faiss) index to a dataset. [`faiss`](https://github.com/facebookresearch/faiss) is a ["library for efficient similarity search and clustering of dense vectors"](https://github.com/facebookresearch/faiss). The `datasets` [docs](https://huggingface.co/docs/datasets) shows an [example](https://huggingface.co/docs/datasets/faiss_es.html#id1) of using a `faiss` index for text retrieval. In this post we'll see if we can do the same for images. ## The dataset: "Digitised Books - Images identified as Embellishments. c. 1510 - c. 1900" This is a dataset of images which have been pulled from a collection of digitised books from the British Library. These images come from books across a wide time period and from a broad range of domains. The images were extracted using information contained in the OCR output for each book. As a result, it's known which book the images came from, but not necessarily anything else about that image i.e. what is shown in the image. Some attempts to help overcome this have included uploading the images to [flickr](https://www.flickr.com/photos/britishlibrary/albums). This allows people to tag the images or put them into various different categories. There have also been projects to tag the dataset [using machine learning](https://blogs.bl.uk/digital-scholarship/2016/11/sherlocknet-update-millions-of-tags-and-thousands-of-captions-added-to-the-bl-flickr-images.html). This work makes it possible to search by tags, but we might want a 'richer' ability to search. For this particular experiment, we'll work with a subset of the collections which contain "embellishments". This dataset is a bit smaller, so it will be better for experimenting with. We can get the full data from the British Library's data repository: [https://doi.org/10.21250/db17](https://doi.org/10.21250/db17). Since the full dataset is still fairly large, you'll probably want to start with a smaller sample. ## Creating our dataset Our dataset consists of a folder containing subdirectories inside which are images. This is a fairly standard format for sharing image datasets. Thanks to a recently merged [pull request](https://github.com/huggingface/datasets/pull/2830) we can directly load this dataset using `datasets` `ImageFolder` loader 🤯 ```python from datasets import load_dataset dataset = load_dataset("imagefolder", data_files="https://zenodo.org/record/6224034/files/embellishments_sample.zip?download=1") ``` Let's see what we get back. ```python dataset ``` ``` DatasetDict({ train: Dataset({ features: ['image', 'label'], num_rows: 10000 }) }) ``` We can get back a `DatasetDict`, and we have a Dataset with image and label features. Since we don't have any train/validation splits here, let's grab the train part of our dataset. Let's also take a look at one example from our dataset to see what this looks like. ```python dataset = dataset["train"] dataset[0] ``` ```python {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=358x461 at 0x7F9488DBB090>, 'label': 208} ``` Let's start with the label column. It contains the parent folder for our images. In this case, the label column represents the year of publication for the books from which the images are taken. We can see the mappings for this using `dataset.features`: ```python dataset.features['label'] ``` In this particular dataset, the image filenames also contain some metadata about the book from which the image was taken. 
There are a few ways we can get this information. When we looked at an example from our dataset earlier, we saw that the `image` feature was a `PIL.JpegImagePlugin.JpegImageFile`. Since `PIL.Image` objects have a `filename` attribute, one way in which we can grab our filenames is by accessing this.

```python
dataset[0]['image'].filename
```

```python
/root/.cache/huggingface/datasets/downloads/extracted/f324a87ed7bf3a6b83b8a353096fbd9500d6e7956e55c3d96d2b23cc03146582/embellishments_sample/1920/000499442_0_000579_1_[The Ring and the Book etc ]_1920.jpg
```

Since we might want easy access to this information later, let's create a new column to extract the filename. For this, we'll use the `map` method.

```python
dataset = dataset.map(lambda example: {"fname": example['image'].filename.split("/")[-1]})
```

We can look at one example to see what this looks like now.

```python
dataset[0]
```

```python
{'fname': '000499442_0_000579_1_[The Ring and the Book etc ]_1920.jpg',
 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=358x461 at 0x7F94862A9650>,
 'label': 208}
```

We've got our metadata now. Let's see some pictures already! If we access an example and index into the `image` column we'll see our image 😃

``` python
dataset[10]['image']
```

<img src="assets/54_image_search_datasets/dataset_image.jpg" alt="An example image from our dataset">

> **Note** in an [earlier version](https://danielvanstrien.xyz/metadata/deployment/huggingface/ethics/huggingface-datasets/faiss/2022/01/13/image_search.html) of this blog post the steps to download and load the images were much more convoluted. The new ImageFolder loader makes this process much easier 😀 In particular, we don't need to worry about how to load our images since datasets took care of this for us.

## Push all the things to the hub!

<img src="https://i.imgflip.com/613c0r.jpg" alt="Push all the things to the hub">

One of the super awesome things about the 🤗 ecosystem is the Hugging Face Hub. We can use the Hub to access models and datasets. It is often used for sharing work with others, but it can also be a useful tool for work in progress. `datasets` recently added a `push_to_hub` method that allows you to push a dataset to the Hub with minimal fuss. This can be really helpful by allowing you to pass around a dataset with all the transforms etc. already done.

For now, we'll push the dataset to the Hub and keep it private initially.

Depending on where you are running the code, you may need to authenticate. You can either do this using the `huggingface-cli login` command or, if you are running in a notebook, using `notebook_login`

``` python
from huggingface_hub import notebook_login

notebook_login()
```

``` python
dataset.push_to_hub('davanstrien/embellishments-sample', private=True)
```

> **Note**: in a [previous version](https://danielvanstrien.xyz/metadata/deployment/huggingface/ethics/huggingface-datasets/faiss/2022/01/13/image_search.html) of this blog post we had to do a few more steps to ensure images were embedded when using `push_to_hub`. Thanks to [this pull request](https://github.com/huggingface/datasets/pull/3685) we no longer need to worry about these extra steps. We just need to make sure `embed_external_files=True` (which is the default behaviour).

### Switching machines

At this point, we've created a dataset and moved it to the Hub. This means it is possible to pick up the work/dataset elsewhere. In this particular example, having access to a GPU is important.
Using the Hub as a way to pass around our data we could start on a laptop and pick up the work on Google Colab. If we move to a new machine, we may need to login again. Once we've done this we can load our dataset ``` python from datasets import load_dataset dataset = load_dataset("davanstrien/embellishments-sample", use_auth_token=True) ``` ## Creating embeddings 🕸 We now have a dataset with a bunch of images in it. To begin creating our image search app, we need to embed these images. There are various ways to try and do this, but one possible way is to use the CLIP models via the `sentence_transformers` library. The [CLIP model](https://openai.com/blog/clip/) from OpenAI learns a joint representation for both images and text, which is very useful for what we want to do since we want to input text and get back an image. We can download the model using the `SentenceTransformer` class. ``` python from sentence_transformers import SentenceTransformer model = SentenceTransformer('clip-ViT-B-32') ``` This model will take as input either an image or some text and return an embedding. We can use the `datasets` `map` method to encode all our images using this model. When we call map, we return a dictionary with the key `embeddings` containing the embeddings returned by the model. We also pass `device='cuda'` when we call the model; this ensures that we're doing the encoding on the GPU. ``` python ds_with_embeddings = dataset.map( lambda example: {'embeddings':model.encode(example['image'], device='cuda')}, batched=True, batch_size=32) ``` We can 'save' our work by pushing back to the Hub using `push_to_hub`. ``` python ds_with_embeddings.push_to_hub('davanstrien/embellishments-sample', private=True) ``` If we were to move to a different machine, we could grab our work again by loading it from the Hub 😃 ``` python from datasets import load_dataset ds_with_embeddings = load_dataset("davanstrien/embellishments-sample", use_auth_token=True) ``` We now have a new column which contains the embeddings for our images. We could manually search through these and compare them to some input embedding but datasets has an `add_faiss_index` method. This uses the [faiss](https://github.com/facebookresearch/faiss) library to create an efficient index for searching embeddings. For more background on this library, you can watch this [YouTube video](https://www.youtube.com/embed/sKyvsdEv6rk) ``` python ds_with_embeddings['train'].add_faiss_index(column='embeddings') ``` ``` Dataset({ features: ['fname', 'year', 'path', 'image', 'embeddings'], num_rows: 10000 }) ``` ## Image search > **Note** that these examples were generated from the full version of the dataset so you may get slightly different results. We now have everything we need to create a simple image search. We can use the same model we used to encode our images to encode some input text. This will act as the prompt we try and find close examples for. Let's start with 'a steam engine'. ``` python prompt = model.encode("A steam engine") ``` We can use another method from the datasets library `get_nearest_examples` to get images which have an embedding close to our input prompt embedding. We can pass in a number of results we want to get back. 
``` python
scores, retrieved_examples = ds_with_embeddings['train'].get_nearest_examples('embeddings', prompt, k=9)
```

We can index into the first example this retrieves:

``` python
retrieved_examples['image'][0]
```

<img src="assets/54_image_search_datasets/search_result.jpg" alt="An image of a factory">

This isn't quite a steam engine, but it's also not a completely weird result. We can plot the other results to see what was returned.

``` python
import matplotlib.pyplot as plt
```

``` python
plt.figure(figsize=(20, 20))
columns = 3
for i in range(9):
    image = retrieved_examples['image'][i]
    # use integer division so subplot gets an integer number of rows
    plt.subplot(9 // columns + 1, columns, i + 1)
    plt.imshow(image)
```

<img src="assets/54_image_search_datasets/steam_engine_search_results.jpg">

Some of these results look fairly close to our input prompt. We can wrap this in a function so we can more easily play around with different prompts.

``` python
def get_image_from_text(text_prompt, number_to_retrieve=9):
    prompt = model.encode(text_prompt)
    scores, retrieved_examples = ds_with_embeddings['train'].get_nearest_examples('embeddings', prompt, k=number_to_retrieve)
    plt.figure(figsize=(20, 20))
    columns = 3
    for i in range(number_to_retrieve):
        image = retrieved_examples['image'][i]
        plt.subplot(number_to_retrieve // columns + 1, columns, i + 1)
        plt.imshow(image)
        plt.title(text_prompt)
```

``` python
get_image_from_text("An illustration of the sun behind a mountain")
```

<img src="assets/54_image_search_datasets/sun_behind_mountain.jpg">

### Trying a bunch of prompts ✨

Now that we have a function for getting a few results, we can try a bunch of different prompts:

- For some of these I'll choose prompts which are a broad 'category' e.g. 'a musical instrument' or 'an animal', others are specific e.g. 'a guitar'.
- Out of interest I also tried a boolean operator: "An illustration of a cat or a dog".
- Finally I tried something a little more abstract: "an empty abyss"

``` python
prompts = ["A musical instrument", "A guitar", "An animal", "An illustration of a cat or a dog", "an empty abyss"]
```

``` python
for prompt in prompts:
    get_image_from_text(prompt)
```

<img src="assets/54_image_search_datasets/musical_instrument.jpg">
<img src="assets/54_image_search_datasets/guitar.jpg">
<img src="assets/54_image_search_datasets/an_animal.jpg">
<img src="assets/54_image_search_datasets/cat_or_dog.jpg">
<img src="assets/54_image_search_datasets/an_empty_abyss.jpg">

We can see these results aren't always right, but they are usually reasonable. It already seems like this could be useful for searching for the semantic content of an image in this dataset. However, we might hold off on sharing this as is...

## Creating a Hugging Face Space? 🤷🏼

One obvious next step for this kind of project is to create a Hugging Face [Space](https://huggingface.co/spaces/launch) demo. This is what I've done for other [models](https://huggingface.co/spaces/BritishLibraryLabs/British-Library-books-genre-classifier-v2). It was a fairly simple process to get a [Gradio app](https://gradio.app/) set up from the point we got to here. Here is a screenshot of this app:

<img src="assets/54_image_search_datasets/spaces_image_search.jpg" alt="Screenshot of Gradio search app">

However, I'm a little bit wary about making this public straightaway. Looking at the model card for the CLIP model, we can see the primary intended uses:

> ### Primary intended uses
>
> We primarily imagine the model will be used by researchers to better understand robustness, generalization, and other capabilities, biases, and constraints of computer vision models.
> [source](https://huggingface.co/openai/clip-vit-base-patch32) This is fairly close to what we are interested in here. Particularly we might be interested in how well the model deals with the kinds of images in our dataset (illustrations from mostly 19th century books). The images in our dataset are (probably) fairly different from the training data. The fact that some of the images also contain text might help CLIP since it displays some [OCR ability](https://openai.com/blog/clip/). However, looking at the out-of-scope use cases in the model card: > ### Out-of-Scope Use Cases > > Any deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task specific testing especially given the variability of CLIP's performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case > currently potentially harmful. > [source](https://huggingface.co/openai/clip-vit-base-patch32) suggests that 'deployment' is not a good idea. Whilst the results I got are interesting, I haven't played around with the model enough yet (and haven't done anything more systematic to evaluate its performance and biases) to be confident about 'deploying' it. Another additional consideration is the target dataset itself. The images are drawn from books covering a variety of subjects and time periods. There are plenty of books which represent colonial attitudes and as a result some of the images included may represent certain groups of people in a negative way. This could potentially be a bad combo with a tool which allows any arbitrary text input to be encoded as a prompt. There may be ways around this issue but this will require a bit more thought. ## Conclusion Although we don't have a nice demo to show for it, we've seen how we can use `datasets` to: - load images into the new `Image` feature type - 'save' our work using `push_to_hub` and use this to move data between machines/sessions - create a `faiss` index for images that we can use to retrieve images from a text (or image) input.
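As a quick recap, the retrieval flow above can be condensed into a short sketch. This assumes the same `davanstrien/embellishments-sample` dataset (with its precomputed `embeddings` column) and the `clip-ViT-B-32` model used earlier; swap in your own identifiers as needed.

``` python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer

# load the dataset with precomputed CLIP embeddings from the Hub
ds = load_dataset("davanstrien/embellishments-sample", use_auth_token=True)

# build a faiss index over the embeddings column
ds['train'].add_faiss_index(column='embeddings')

# encode a text prompt with the same CLIP model used for the images
model = SentenceTransformer('clip-ViT-B-32')
prompt = model.encode("A steam engine")

# retrieve the nine closest images
scores, retrieved_examples = ds['train'].get_nearest_examples('embeddings', prompt, k=9)
retrieved_examples['image'][0]
```

Because CLIP embeds images and text in the same space, you could also encode an image instead of a text prompt and use the same index for image-to-image search.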
Accelerate BERT inference with Hugging Face Transformers and AWS Inferentia
philschmid
March 16, 2022
bert-inferentia-sagemaker
partnerships, aws, guide, nlp
https://huggingface.co/blog/bert-inferentia-sagemaker
# Accelerate BERT inference with Hugging Face Transformers and AWS Inferentia <script async defer src="https://unpkg.com/medium-zoom-element@0/dist/medium-zoom-element.min.js"></script> notebook: [sagemaker/18_inferentia_inference](https://github.com/huggingface/notebooks/blob/master/sagemaker/18_inferentia_inference/sagemaker-notebook.ipynb) The adoption of [BERT](https://huggingface.co/blog/bert-101) and [Transformers](https://huggingface.co/docs/transformers/index) continues to grow. Transformer-based models are now not only achieving state-of-the-art performance in Natural Language Processing but also for [Computer Vision](https://arxiv.org/abs/2010.11929), [Speech](https://arxiv.org/abs/2006.11477), and [Time-Series](https://arxiv.org/abs/2002.06103). 💬 🖼 🎤 ⏳ Companies are now slowly moving from the experimentation and research phase to the production phase in order to use transformer models for large-scale workloads. But by default BERT and its friends are relatively slow, big, and complex models compared to the traditional Machine Learning algorithms. Accelerating Transformers and BERT is and will become an interesting challenge to solve in the future. AWS's take to solve this challenge was to design a custom machine learning chip designed for optimized inference workload called [AWS Inferentia](https://aws.amazon.com/machine-learning/inferentia/?nc1=h_ls). AWS says that AWS Inferentia *“delivers up to 80% lower cost per inference and up to 2.3X higher throughput than comparable current generation GPU-based Amazon EC2 instances.”* The real value of AWS Inferentia instances compared to GPU comes through the multiple Neuron Cores available on each device. A Neuron Core is the custom accelerator inside AWS Inferentia. Each Inferentia chip comes with 4x Neuron Cores. This enables you to either load 1 model on each core (for high throughput) or 1 model across all cores (for lower latency). ## Tutorial In this end-to-end tutorial, you will learn how to speed up BERT inference for text classification with Hugging Face Transformers, Amazon SageMaker, and AWS Inferentia. You can find the notebook here: [sagemaker/18_inferentia_inference](https://github.com/huggingface/notebooks/blob/master/sagemaker/18_inferentia_inference/sagemaker-notebook.ipynb) You will learn how to: - [1. Convert your Hugging Face Transformer to AWS Neuron](#1-convert-your-hugging-face-transformer-to-aws-neuron) - [2. Create a custom `inference.py` script for `text-classification`](#2-create-a-custom-inferencepy-script-for-text-classification) - [3. Create and upload the neuron model and inference script to Amazon S3](#3-create-and-upload-the-neuron-model-and-inference-script-to-amazon-s3) - [4. Deploy a Real-time Inference Endpoint on Amazon SageMaker](#4-deploy-a-real-time-inference-endpoint-on-amazon-sagemaker) - [5. Run and evaluate Inference performance of BERT on Inferentia](#5-run-and-evaluate-inference-performance-of-bert-on-inferentia) Let's get started! 🚀 --- *If you are going to use Sagemaker in a local environment (not SageMaker Studio or Notebook Instances), you need access to an IAM Role with the required permissions for Sagemaker. You can find [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html) more about it.* ## 1. Convert your Hugging Face Transformer to AWS Neuron We are going to use the [AWS Neuron SDK for AWS Inferentia](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/index.html). 
The Neuron SDK includes a deep learning compiler, runtime, and tools for converting and compiling PyTorch and TensorFlow models to neuron compatible models, which can be run on [EC2 Inf1 instances](https://aws.amazon.com/ec2/instance-types/inf1/). As a first step, we need to install the [Neuron SDK](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-intro/neuron-install-guide.html) and the required packages. *Tip: If you are using Amazon SageMaker Notebook Instances or Studio you can go with the `conda_python3` conda kernel.* ```python # Set Pip repository to point to the Neuron repository !pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com # Install Neuron PyTorch !pip install torch-neuron==1.9.1.* neuron-cc[tensorflow] sagemaker>=2.79.0 transformers==4.12.3 --upgrade ``` After we have installed the Neuron SDK we can load and convert our model. Neuron models are converted using `torch_neuron` with its `trace` method similar to `torchscript`. You can find more information in our [documentation](https://huggingface.co/docs/transformers/serialization#torchscript). To be able to convert our model we first need to select the model we want to use for our text classification pipeline from [hf.co/models](http://hf.co/models). For this example, let's go with [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) but this can be easily adjusted with other BERT-like models. ```python model_id = "distilbert-base-uncased-finetuned-sst-2-english" ``` At the time of writing, the [AWS Neuron SDK does not support dynamic shapes](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/models/models-inferentia.html#dynamic-shapes), which means that the input size needs to be static for compiling and inference. In simpler terms, this means that when the model is compiled with e.g. an input of batch size 1 and sequence length of 16, the model can only run inference on inputs with that same shape. *When using a `t2.medium` instance the compilation takes around 3 minutes* ```python import os import tensorflow # to workaround a protobuf version conflict issue import torch import torch.neuron from transformers import AutoTokenizer, AutoModelForSequenceClassification # load tokenizer and model tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForSequenceClassification.from_pretrained(model_id, torchscript=True) # create dummy input for max length 128 dummy_input = "dummy input which will be padded later" max_length = 128 embeddings = tokenizer(dummy_input, max_length=max_length, padding="max_length",return_tensors="pt") neuron_inputs = tuple(embeddings.values()) # compile model with torch.neuron.trace and update config model_neuron = torch.neuron.trace(model, neuron_inputs) model.config.update({"traced_sequence_length": max_length}) # save tokenizer, neuron model and config for later use save_dir="tmp" os.makedirs("tmp",exist_ok=True) model_neuron.save(os.path.join(save_dir,"neuron_model.pt")) tokenizer.save_pretrained(save_dir) model.config.save_pretrained(save_dir) ``` ## 2. Create a custom `inference.py` script for `text-classification` The [Hugging Face Inference Toolkit](https://github.com/aws/sagemaker-huggingface-inference-toolkit) supports zero-code deployments on top of the [pipeline feature](https://huggingface.co/transformers/main_classes/pipelines.html) from 🤗 Transformers. 
This allows users to deploy Hugging Face transformers without an inference script [[Example](https://github.com/huggingface/notebooks/blob/master/sagemaker/11_deploy_model_from_hf_hub/deploy_transformer_model_from_hf_hub.ipynb)]. Currently, this feature is not supported with AWS Inferentia, which means we need to provide an `inference.py` script for running inference.

*If you are interested in support for zero-code deployments for Inferentia, let us know on the [forum](https://discuss.huggingface.co/c/sagemaker/17).*

---

To use the inference script, we need to create an `inference.py` script. In our example, we are going to overwrite the `model_fn` to load our neuron model and the `predict_fn` to create a text-classification pipeline. If you want to know more about the `inference.py` script, check out this [example](https://github.com/huggingface/notebooks/blob/master/sagemaker/17_custom_inference_script/sagemaker-notebook.ipynb). It explains amongst other things what `model_fn` and `predict_fn` are.

```python
!mkdir code
```

We are using `NEURON_RT_NUM_CORES=1` to make sure that each HTTP worker uses 1 Neuron core to maximize throughput.

```python
%%writefile code/inference.py

import os
from transformers import AutoConfig, AutoTokenizer
import torch
import torch.neuron

# To use one neuron core per worker
os.environ["NEURON_RT_NUM_CORES"] = "1"

# saved weights name
AWS_NEURON_TRACED_WEIGHTS_NAME = "neuron_model.pt"


def model_fn(model_dir):
    # load tokenizer and neuron model from model_dir
    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    model = torch.jit.load(os.path.join(model_dir, AWS_NEURON_TRACED_WEIGHTS_NAME))
    model_config = AutoConfig.from_pretrained(model_dir)

    return model, tokenizer, model_config


def predict_fn(data, model_tokenizer_model_config):
    # unpack model, tokenizer and model config
    model, tokenizer, model_config = model_tokenizer_model_config

    # create embeddings for inputs
    inputs = data.pop("inputs", data)
    embeddings = tokenizer(
        inputs,
        return_tensors="pt",
        max_length=model_config.traced_sequence_length,
        padding="max_length",
        truncation=True,
    )
    # convert to tuple for neuron model
    neuron_inputs = tuple(embeddings.values())

    # run prediction
    with torch.no_grad():
        predictions = model(*neuron_inputs)[0]
        scores = torch.nn.Softmax(dim=1)(predictions)

    # return a list of dictionaries, which is JSON serializable
    return [{"label": model_config.id2label[item.argmax().item()], "score": item.max().item()} for item in scores]
```

## 3. Create and upload the neuron model and inference script to Amazon S3

Before we can deploy our neuron model to Amazon SageMaker, we need to create a `model.tar.gz` archive with all our model artifacts saved into `tmp/` (e.g. `neuron_model.pt`) and upload this to Amazon S3. To do this, we need to set up our permissions.
```python
import sagemaker
import boto3

sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it does not exist
sagemaker_session_bucket = None
if sagemaker_session_bucket is None and sess is not None:
    # set to default bucket if a bucket name is not given
    sagemaker_session_bucket = sess.default_bucket()

try:
    role = sagemaker.get_execution_role()
except ValueError:
    iam = boto3.client('iam')
    role = iam.get_role(RoleName='sagemaker_execution_role')['Role']['Arn']

sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)

print(f"sagemaker role arn: {role}")
print(f"sagemaker bucket: {sess.default_bucket()}")
print(f"sagemaker session region: {sess.boto_region_name}")
```

Next, we create our `model.tar.gz`. The `inference.py` script will be placed into a `code/` folder.

```python
# copy inference.py into the code/ directory of the model directory.
!cp -r code/ tmp/code/
# create a model.tar.gz archive with all the model artifacts and the inference.py script.
%cd tmp
!tar zcvf model.tar.gz *
%cd ..
```

Now we can upload our `model.tar.gz` to our session S3 bucket with `sagemaker`.

```python
from sagemaker.s3 import S3Uploader

# create s3 uri
s3_model_path = f"s3://{sess.default_bucket()}/{model_id}"

# upload model.tar.gz
s3_model_uri = S3Uploader.upload(local_path="tmp/model.tar.gz", desired_s3_uri=s3_model_path)
print(f"model artifacts uploaded to {s3_model_uri}")
```

## 4. Deploy a Real-time Inference Endpoint on Amazon SageMaker

After we have uploaded our `model.tar.gz` to Amazon S3, we can create a custom `HuggingFaceModel`. This class will be used to create and deploy our real-time inference endpoint on Amazon SageMaker.

```python
from sagemaker.huggingface.model import HuggingFaceModel

# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
    model_data=s3_model_uri,       # path to your model and script
    role=role,                     # iam role with permissions to create an Endpoint
    transformers_version="4.12",   # transformers version used
    pytorch_version="1.9",         # pytorch version used
    py_version='py37',             # python version used
)

# Let SageMaker know that we've already compiled the model via neuron-cc
huggingface_model._is_compiled_model = True

# deploy the endpoint
predictor = huggingface_model.deploy(
    initial_instance_count=1,      # number of instances
    instance_type="ml.inf1.xlarge" # AWS Inferentia Instance
)
```

## 5. Run and evaluate Inference performance of BERT on Inferentia

The `.deploy()` returns a `HuggingFacePredictor` object which can be used to request inference.

```python
data = {
  "inputs": "the mesmerizing performances of the leads keep the film grounded and keep the audience riveted .",
}

res = predictor.predict(data=data)
res
```

We managed to deploy our neuron-compiled BERT to AWS Inferentia on Amazon SageMaker. Now, let's test its performance. As a dummy load test, we will loop and send 10,000 synchronous requests to our endpoint.

```python
# send 10000 requests
for i in range(10000):
    resp = predictor.predict(
        data={"inputs": "it 's a charming and often affecting journey ."}
    )
```

Let's inspect the performance in CloudWatch.
```python print(f"https://console.aws.amazon.com/cloudwatch/home?region={sess.boto_region_name}#metricsV2:graph=~(metrics~(~(~'AWS*2fSageMaker~'ModelLatency~'EndpointName~'{predictor.endpoint_name}~'VariantName~'AllTraffic))~view~'timeSeries~stacked~false~region~'{sess.boto_region_name}~start~'-PT5M~end~'P0D~stat~'Average~period~30);query=~'*7bAWS*2fSageMaker*2cEndpointName*2cVariantName*7d*20{predictor.endpoint_name}") ``` The average latency for our BERT model is `5-6ms` for a sequence length of 128. <br> <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="Model Latency in Cloudwatch" src="assets/55_bert_inferentia_sagemaker/cloudwatch_metrics_bert.png"></medium-zoom> <figcaption>Figure 1. Model Latency</figcaption> </figure> <br> ### Delete model and endpoint To clean up, we can delete the model and endpoint. ```python predictor.delete_model() predictor.delete_endpoint() ``` ## Conclusion We successfully managed to compile a vanilla Hugging Face Transformers model to an AWS Inferentia compatible Neuron Model. After that we deployed our Neuron model to Amazon SageMaker using the new Hugging Face Inference DLC. We managed to achieve `5-6ms` latency per neuron core, which is faster than CPU in terms of latency, and achieves a higher throughput than GPUs since we ran 4 models in parallel. If you or you company are currently using a BERT-like Transformer for encoder tasks (text-classification, token-classification, question-answering etc.), and the latency meets your requirements you should switch to AWS Inferentia. This will not only save costs, but can also increase efficiency and performance for your models. We are planning to do a more detailed case study on cost-performance of transformers in the future, so stay tuned! Also if you want to learn more about accelerating transformers you should also check out Hugging Face [optimum](https://github.com/huggingface/optimum). --- Thanks for reading! If you have any questions, feel free to contact me, through [Github](https://github.com/huggingface/transformers), or on the [forum](https://discuss.huggingface.co/c/sagemaker/17). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/).
Fine-Tune a Semantic Segmentation Model with a Custom Dataset
segments-tobias
March 17, 2022
fine-tune-segformer
guide, partnerships, cv
https://huggingface.co/blog/fine-tune-segformer
# Fine-Tune a Semantic Segmentation Model with a Custom Dataset <script async defer src="https://unpkg.com/medium-zoom-element@0/dist/medium-zoom-element.min.js"></script> <a target="_blank" href="https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/56_fine_tune_segformer.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> **This guide shows how you can fine-tune Segformer, a state-of-the-art semantic segmentation model. Our goal is to build a model for a pizza delivery robot, so it can see where to drive and recognize obstacles 🍕🤖. We'll first label a set of sidewalk images on [Segments.ai](https://segments.ai?utm_source=hf&utm_medium=colab&utm_campaign=sem_seg). Then we'll fine-tune a pre-trained SegFormer model by using [`🤗 transformers`](https://huggingface.co/transformers), an open-source library that offers easy-to-use implementations of state-of-the-art models. Along the way, you'll learn how to work with the Hugging Face Hub, the largest open-source catalog of models and datasets.** Semantic segmentation is the task of classifying each pixel in an image. You can see it as a more precise way of classifying an image. It has a wide range of use cases in fields such as medical imaging and autonomous driving. For example, for our pizza delivery robot, it is important to know exactly where the sidewalk is in an image, not just whether there is a sidewalk or not. Because semantic segmentation is a type of classification, the network architectures used for image classification and semantic segmentation are very similar. In 2014, [a seminal paper](https://arxiv.org/abs/1411.4038) by Long et al. used convolutional neural networks for semantic segmentation. More recently, Transformers have been used for image classification (e.g. [ViT](https://huggingface.co/blog/fine-tune-vit)), and now they're also being used for semantic segmentation, pushing the state-of-the-art further. [SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer) is a model for semantic segmentation introduced by Xie et al. in 2021. It has a hierarchical Transformer encoder that doesn't use positional encodings (in contrast to ViT) and a simple multi-layer perceptron decoder. SegFormer achieves state-of-the-art performance on multiple common datasets. Let's see how our pizza delivery robot performs for sidewalk images. <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="Pizza delivery robot segmenting a scene" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/56_fine_tune_segformer/pizza-scene.png"></medium-zoom> </figure> Let's get started by installing the necessary dependencies. Because we're going to push our dataset and model to the Hugging Face Hub, we need to install [Git LFS](https://git-lfs.github.com/) and log in to Hugging Face. The installation of `git-lfs` might be different on your system. Note that Google Colab has Git LFS pre-installed. ```bash pip install -q transformers datasets evaluate segments-ai apt-get install git-lfs git lfs install huggingface-cli login ``` # 1. Create/choose a dataset The first step in any ML project is assembling a good dataset. In order to train a semantic segmentation model, we need a dataset with semantic segmentation labels. We can either use an existing dataset from the Hugging Face Hub, such as [ADE20k](https://huggingface.co/datasets/scene_parse_150), or create our own dataset. 
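If you go for an existing dataset, loading it is a single call with 🤗 `datasets`. Here is a quick sketch, assuming the `scene_parse_150` identifier that the ADE20k link above points to:

```python
from datasets import load_dataset

# load the ADE20k scene-parsing dataset from the Hugging Face Hub
ade20k = load_dataset("scene_parse_150")

# inspect the splits and the features of one training example
print(ade20k)
print(ade20k["train"][0].keys())
```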
For our pizza delivery robot, we could use an existing autonomous driving dataset such as [CityScapes](https://www.cityscapes-dataset.com/) or [BDD100K](https://bdd100k.com/). However, these datasets were captured by cars driving on the road. Since our delivery robot will be driving on the sidewalk, there will be a mismatch between the images in these datasets and the data our robot will see in the real world. We don't want our delivery robot to get confused, so we'll create our own semantic segmentation dataset using images captured on sidewalks. We'll show how you can label the images we captured in the next steps. If you just want to use our finished, labeled dataset, you can skip the ["Create your own dataset"](#create-your-own-dataset) section and continue from ["Use a dataset from the Hub"](#use-a-dataset-from-the-hub). ## Create your own dataset To create your semantic segmentation dataset, you'll need two things: 1. images covering the situations your model will encounter in the real world 2. segmentation labels, i.e. images where each pixel represents a class/category. We went ahead and captured a thousand images of sidewalks in Belgium. Collecting and labeling such a dataset can take a long time, so you can start with a smaller dataset and expand it if the model does not perform well enough. <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="Example images from the sidewalk dataset" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/56_fine_tune_segformer/sidewalk-examples.png"></medium-zoom> <figcaption>Some examples of the raw images in the sidewalk dataset.</figcaption> </figure> To obtain segmentation labels, we need to indicate the classes of all the regions/objects in these images. This can be a time-consuming endeavour, but using the right tools can speed up the task significantly. For labeling, we'll use [Segments.ai](https://segments.ai?utm_source=hf&utm_medium=colab&utm_campaign=sem_seg), since it has smart labeling tools for image segmentation and an easy-to-use Python SDK. ### Set up the labeling task on Segments.ai First, create an account at [https://segments.ai/join](https://segments.ai/join?utm_source=hf&utm_medium=colab&utm_campaign=sem_seg). Next, create a new dataset and upload your images. You can either do this from the web interface or via the Python SDK (see the [notebook](https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/56_fine_tune_segformer.ipynb)). ### Label the images Now that the raw data is loaded, go to [segments.ai/home](https://segments.ai/home) and open the newly created dataset. Click "Start labeling" and create segmentation masks. You can use the ML-powered superpixel and autosegment tools to label faster. 
<figure class="image table text-center m-0"> <video alt="Labeling a sidewalk image on Segments.ai" style="max-width: 70%; margin: auto;" autoplay loop autobuffer muted playsinline > <source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/56_fine_tune_segformer/sidewalk-labeling-crop.mp4" poster="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/56_fine_tune_segformer/sidewalk-labeling-crop-poster.png" type="video/mp4"> </video> <figcaption>Tip: when using the superpixel tool, scroll to change the superpixel size, and click and drag to select segments.</figcaption> </figure> ### Push the result to the Hugging Face Hub When you're done labeling, create a new dataset release containing the labeled data. You can either do this on the releases tab on Segments.ai, or programmatically through the SDK as shown in the notebook. Note that creating the release can take a few seconds. You can check the releases tab on Segments.ai to check if your release is still being created. Now, we'll convert the release to a [Hugging Face dataset](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset) via the Segments.ai Python SDK. If you haven't set up the Segments Python client yet, follow the instructions in the "Set up the labeling task on Segments.ai" section of the [notebook](https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/56_fine_tune_segformer.ipynb#scrollTo=9T2Jr9t9y4HD). *Note that the conversion can take a while, depending on the size of your dataset.* ```python from segments.huggingface import release2dataset release = segments_client.get_release(dataset_identifier, release_name) hf_dataset = release2dataset(release) ``` If we inspect the features of the new dataset, we can see the image column and the corresponding label. The label consists of two parts: a list of annotations and a segmentation bitmap. The annotation corresponds to the different objects in the image. For each object, the annotation contains an `id` and a `category_id`. The segmentation bitmap is an image where each pixel contains the `id` of the object at that pixel. More information can be found in the [relevant docs](https://docs.segments.ai/reference/sample-and-label-types/label-types#segmentation-labels). For semantic segmentation, we need a semantic bitmap that contains a `category_id` for each pixel. We'll use the `get_semantic_bitmap` function from the Segments.ai SDK to convert the bitmaps to semantic bitmaps. To apply this function to all the rows in our dataset, we'll use [`dataset.map`](https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.Dataset.map). ```python from segments.utils import get_semantic_bitmap def convert_segmentation_bitmap(example): return { "label.segmentation_bitmap": get_semantic_bitmap( example["label.segmentation_bitmap"], example["label.annotations"], id_increment=0, ) } semantic_dataset = hf_dataset.map( convert_segmentation_bitmap, ) ``` You can also rewrite the `convert_segmentation_bitmap` function to use batches and pass `batched=True` to `dataset.map`. This will significantly speed up the mapping, but you might need to tweak the `batch_size` to ensure the process doesn't run out of memory. The SegFormer model we're going to fine-tune later expects specific names for the features. For convenience, we'll match this format now. 
Thus, we'll rename the `image` feature to `pixel_values` and the `label.segmentation_bitmap` to `label` and discard the other features. ```python semantic_dataset = semantic_dataset.rename_column('image', 'pixel_values') semantic_dataset = semantic_dataset.rename_column('label.segmentation_bitmap', 'label') semantic_dataset = semantic_dataset.remove_columns(['name', 'uuid', 'status', 'label.annotations']) ``` We can now push the transformed dataset to the Hugging Face Hub. That way, your team and the Hugging Face community can make use of it. In the next section, we'll see how you can load the dataset from the Hub. ```python hf_dataset_identifier = f"{hf_username}/{dataset_name}" semantic_dataset.push_to_hub(hf_dataset_identifier) ``` ## Use a dataset from the Hub If you don't want to create your own dataset, but found a suitable dataset for your use case on the Hugging Face Hub, you can define the identifier here. For example, you can use the full labeled sidewalk dataset. Note that you can check out the examples [directly in your browser](https://huggingface.co/datasets/segments/sidewalk-semantic). ```python hf_dataset_identifier = "segments/sidewalk-semantic" ``` # 2. Load and prepare the Hugging Face dataset for training Now that we've created a new dataset and pushed it to the Hugging Face Hub, we can load the dataset in a single line. ```python from datasets import load_dataset ds = load_dataset(hf_dataset_identifier) ``` Let's shuffle the dataset and split the dataset in a train and test set. ```python ds = ds.shuffle(seed=1) ds = ds["train"].train_test_split(test_size=0.2) train_ds = ds["train"] test_ds = ds["test"] ``` We'll extract the number of labels and the human-readable ids, so we can configure the segmentation model correctly later on. ```python import json from huggingface_hub import hf_hub_download repo_id = f"datasets/{hf_dataset_identifier}" filename = "id2label.json" id2label = json.load(open(hf_hub_download(repo_id=hf_dataset_identifier, filename=filename, repo_type="dataset"), "r")) id2label = {int(k): v for k, v in id2label.items()} label2id = {v: k for k, v in id2label.items()} num_labels = len(id2label) ``` ## Image processor & data augmentation A SegFormer model expects the input to be of a certain shape. To transform our training data to match the expected shape, we can use `SegFormerImageProcessor`. We could use the `ds.map` function to apply the image processor to the whole training dataset in advance, but this can take up a lot of disk space. Instead, we'll use a *transform*, which will only prepare a batch of data when that data is actually used (on-the-fly). This way, we can start training without waiting for further data preprocessing. In our transform, we'll also define some data augmentations to make our model more resilient to different lighting conditions. We'll use the [`ColorJitter`](https://pytorch.org/vision/main/generated/torchvision.transforms.ColorJitter.html) function from `torchvision` to randomly change the brightness, contrast, saturation, and hue of the images in the batch. 
```python from torchvision.transforms import ColorJitter from transformers import SegformerImageProcessor processor = SegformerImageProcessor() jitter = ColorJitter(brightness=0.25, contrast=0.25, saturation=0.25, hue=0.1) def train_transforms(example_batch): images = [jitter(x) for x in example_batch['pixel_values']] labels = [x for x in example_batch['label']] inputs = processor(images, labels) return inputs def val_transforms(example_batch): images = [x for x in example_batch['pixel_values']] labels = [x for x in example_batch['label']] inputs = processor(images, labels) return inputs # Set transforms train_ds.set_transform(train_transforms) test_ds.set_transform(val_transforms) ``` # 3. Fine-tune a SegFormer model ## Load the model to fine-tune The SegFormer authors define 5 models with increasing sizes: B0 to B5. The following chart (taken from the original paper) shows the performance of these different models on the ADE20K dataset, compared to other models. <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(0,0,0,.7)" alt="SegFormer model variants compared with other segmentation models" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/56_fine_tune_segformer/segformer.png"></medium-zoom> <figcaption><a href="https://arxiv.org/abs/2105.15203">Source</a></figcaption> </figure> Here, we'll load the smallest SegFormer model (B0), pre-trained on ImageNet-1k. It's only about 14MB in size! Using a small model will make sure that our model can run smoothly on our pizza delivery robot. ```python from transformers import SegformerForSemanticSegmentation pretrained_model_name = "nvidia/mit-b0" model = SegformerForSemanticSegmentation.from_pretrained( pretrained_model_name, id2label=id2label, label2id=label2id ) ``` ## Set up the Trainer To fine-tune the model on our data, we'll use Hugging Face's [Trainer API](https://huggingface.co/docs/transformers/main_classes/trainer). We need to set up the training configuration and an evalutation metric to use a Trainer. First, we'll set up the [`TrainingArguments`](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments). This defines all training hyperparameters, such as learning rate and the number of epochs, frequency to save the model and so on. We also specify to push the model to the hub after training (`push_to_hub=True`) and specify a model name (`hub_model_id`). ```python from transformers import TrainingArguments epochs = 50 lr = 0.00006 batch_size = 2 hub_model_id = "segformer-b0-finetuned-segments-sidewalk-2" training_args = TrainingArguments( "segformer-b0-finetuned-segments-sidewalk-outputs", learning_rate=lr, num_train_epochs=epochs, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, save_total_limit=3, evaluation_strategy="steps", save_strategy="steps", save_steps=20, eval_steps=20, logging_steps=1, eval_accumulation_steps=5, load_best_model_at_end=True, push_to_hub=True, hub_model_id=hub_model_id, hub_strategy="end", ) ``` Next, we'll define a function that computes the evaluation metric we want to work with. Because we're doing semantic segmentation, we'll use the [mean Intersection over Union (mIoU)](https://huggingface.co/spaces/evaluate-metric/mean_iou), directly accessible in the [`evaluate` library](https://huggingface.co/docs/evaluate/index). IoU represents the overlap of segmentation masks. Mean IoU is the average of the IoU of all semantic classes. 
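To make that concrete, here is a tiny NumPy sketch with two made-up 2-class masks (the arrays and numbers are purely illustrative):

```python
import numpy as np

# toy prediction and ground truth, each pixel labelled 0 (background) or 1 (sidewalk)
pred = np.array([[0, 0, 1, 1],
                 [0, 1, 1, 1]])
gt = np.array([[0, 0, 1, 1],
               [0, 0, 1, 1]])

ious = []
for cls in [0, 1]:
    intersection = np.logical_and(pred == cls, gt == cls).sum()
    union = np.logical_or(pred == cls, gt == cls).sum()
    ious.append(intersection / union)

print(ious)           # per-class IoU: [0.75, 0.8]
print(np.mean(ious))  # mean IoU: 0.775
```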
Take a look at [this blogpost](https://www.jeremyjordan.me/evaluating-image-segmentation-models/) for an overview of evaluation metrics for image segmentation. Because our model outputs logits with dimensions height/4 and width/4, we have to upscale them before we can compute the mIoU. ```python import torch from torch import nn import evaluate metric = evaluate.load("mean_iou") def compute_metrics(eval_pred): with torch.no_grad(): logits, labels = eval_pred logits_tensor = torch.from_numpy(logits) # scale the logits to the size of the label logits_tensor = nn.functional.interpolate( logits_tensor, size=labels.shape[-2:], mode="bilinear", align_corners=False, ).argmax(dim=1) pred_labels = logits_tensor.detach().cpu().numpy() # currently using _compute instead of compute # see this issue for more info: https://github.com/huggingface/evaluate/pull/328#issuecomment-1286866576 metrics = metric._compute( predictions=pred_labels, references=labels, num_labels=len(id2label), ignore_index=0, reduce_labels=processor.do_reduce_labels, ) # add per category metrics as individual key-value pairs per_category_accuracy = metrics.pop("per_category_accuracy").tolist() per_category_iou = metrics.pop("per_category_iou").tolist() metrics.update({f"accuracy_{id2label[i]}": v for i, v in enumerate(per_category_accuracy)}) metrics.update({f"iou_{id2label[i]}": v for i, v in enumerate(per_category_iou)}) return metrics ``` Finally, we can instantiate a `Trainer` object. ```python from transformers import Trainer trainer = Trainer( model=model, args=training_args, train_dataset=train_ds, eval_dataset=test_ds, compute_metrics=compute_metrics, ) ``` Now that our trainer is set up, training is as simple as calling the `train` function. We don't need to worry about managing our GPU(s), the trainer will take care of that. ```python trainer.train() ``` When we're done with training, we can push our fine-tuned model and the image processor to the Hub. This will also automatically create a model card with our results. We'll supply some extra information in `kwargs` to make the model card more complete. ```python kwargs = { "tags": ["vision", "image-segmentation"], "finetuned_from": pretrained_model_name, "dataset": hf_dataset_identifier, } processor.push_to_hub(hub_model_id) trainer.push_to_hub(**kwargs) ``` # 4. Inference Now comes the exciting part, using our fine-tuned model! In this section, we'll show how you can load your model from the hub and use it for inference. However, you can also try out your model directly on the Hugging Face Hub, thanks to the cool widgets powered by the [hosted inference API](https://api-inference.huggingface.co/docs/python/html/index.html). If you pushed your model to the Hub in the previous step, you should see an inference widget on your model page. You can add default examples to the widget by defining example image URLs in your model card. See [this model card](https://huggingface.co/segments-tobias/segformer-b0-finetuned-segments-sidewalk/blob/main/README.md) as an example. 
<figure class="image table text-center m-0 w-full"> <video alt="The interactive widget of the model" style="max-width: 70%; margin: auto;" autoplay loop autobuffer muted playsinline > <source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/56_fine_tune_segformer/widget.mp4" poster="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/56_fine_tune_segformer/widget-poster.png" type="video/mp4"> </video> </figure> ## Use the model from the Hub We'll first load the model from the Hub using `SegformerForSemanticSegmentation.from_pretrained()`. ```python from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation processor = SegformerImageProcessor.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512") model = SegformerForSemanticSegmentation.from_pretrained(f"{hf_username}/{hub_model_id}") ``` Next, we'll load an image from our test dataset. ```python image = test_ds[0]['pixel_values'] gt_seg = test_ds[0]['label'] image ``` To segment this test image, we first need to prepare the image using the image processor. Then we forward it through the model. We also need to remember to upscale the output logits to the original image size. In order to get the actual category predictions, we just have to apply an `argmax` on the logits. ```python from torch import nn inputs = processor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4) # First, rescale logits to original image size upsampled_logits = nn.functional.interpolate( logits, size=image.size[::-1], # (height, width) mode='bilinear', align_corners=False ) # Second, apply argmax on the class dimension pred_seg = upsampled_logits.argmax(dim=1)[0] ``` Now it's time to display the result. We'll display the result next to the ground-truth mask. <figure class="image table text-center m-0 w-full"> <medium-zoom background="rgba(1,1,1,1)" alt="SegFormer prediction vs the ground truth" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/56_fine_tune_segformer/output.png"></medium-zoom> </figure> What do you think? Would you send our pizza delivery robot on the road with this segmentation information? The result might not be perfect yet, but we can always expand our dataset to make the model more robust. We can now also go train a larger SegFormer model, and see how it stacks up. # 5. Conclusion That's it! You now know how to create your own image segmentation dataset and how to use it to fine-tune a semantic segmentation model. We introduced you to some useful tools along the way, such as: * [Segments.ai](https://segments.ai) for labeling your data * [🤗 datasets](https://huggingface.co/docs/datasets/) for creating and sharing a dataset * [🤗 transformers](https://huggingface.co/transformers) for easily fine-tuning a state-of-the-art segmentation model * [Hugging Face Hub](https://huggingface.co/docs/hub/main) for sharing our dataset and model, and for creating an inference widget for our model We hope you enjoyed this post and learned something. Feel free to share your own model with us on Twitter ([@TobiasCornille](https://twitter.com/tobiascornille), [@NielsRogge](https://twitter.com/nielsrogge), and [@huggingface](https://twitter.com/huggingface)).
Announcing the 🤗 AI Research Residency Program
douwekiela
March 22, 2022
ai-residency
community, research
https://huggingface.co/blog/ai-residency
# Announcing the 🤗 AI Research Residency Program 🎉 🎉 🎉 The 🤗 Research Residency Program is a 9-month opportunity to launch or advance your career in machine learning research 🚀. The goal of the residency is to help you grow into an impactful AI researcher. Residents will work alongside Researchers from our Science Team. Together, you will pick a research problem and then develop new machine learning techniques to solve it in an open & collaborative way, with the hope of ultimately publishing your work and making it visible to a wide audience. Applicants from all backgrounds are welcome! Ideally, you have some research experience and are excited about our mission to democratize responsible machine learning. The progress of our field has the potential to exacerbate existing disparities in ways that disproportionately hurt the most marginalized people in society — including people of color, people from working-class backgrounds, women, and LGBTQ+ people. These communities must be centered in the work we do as a research community. So we strongly encourage proposals from people whose personal experience reflects these identities.. We encourage applications relating to AI that demonstrate a clear and positive societal impact. ## How to Apply Since the focus of your work will be on developing Machine Learning techniques, your application should show evidence of programming skills and of prerequisite courses, like calculus or linear algebra, or links to an open-source project that demonstrates programming and mathematical ability. More importantly, your application needs to present interest in effecting positive change through AI in any number of creative ways. This can stem from a topic that is of particular interest to you and your proposal would capture concrete ways in which machine learning can contribute. Thinking through the entire pipeline, from understanding where ML tools are needed to gathering data and deploying the resulting approach, can help make your project more impactful. We are actively working to build a culture that values diversity, equity, and inclusivity. We are intentionally building a workplace where people feel respected and supported—regardless of who you are or where you come from. We believe this is foundational to building a great company and community. Hugging Face is an equal opportunity employer and we do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. [Submit your application here](https://apply.workable.com/huggingface/j/1B77519961). ## FAQs * **Can I complete the program part-time?**<br>No. The Residency is only offered as a full-time position. * **I have been out of school for several years. Can I apply?**<br>Yes. We will consider applications from various backgrounds. * **Can I be enrolled as a student at a university or work for another employer during the residency?**<br>No, the residency can’t be completed simultaneously with any other obligations. * **Will I receive benefits during the Residency?**<br>Yes, residents are eligible for most benefits, including medical (depending on location). * **Will I be required to relocate for this residency?**<br>Absolutely not! We are a distributed team and you are welcome to work from wherever you are currently located. * **Is there a deadline?**<br>Applications close on April 3rd, 2022!
Machine Learning Experts - Meg Mitchell Interview
britneymuller
March 23, 2022
meg-mitchell-interview
expert-acceleration-program, ml-experts
https://huggingface.co/blog/meg-mitchell-interview
# Machine Learning Experts - Margaret Mitchell Hey friends! Welcome to Machine Learning Experts. I'm your host, Britney Muller and today’s guest is none other than [Margaret Mitchell](https://twitter.com/mmitchell_ai) (Meg for short). Meg founded & co-led Google’s Ethical AI Group, is a pioneer in the field of Machine Learning, has published over 50 papers, and is a leading researcher in Ethical AI. You’ll hear Meg talk about the moment she realized the importance of ethical AI (an incredible story!), how ML teams can be more aware of harmful data bias, and the power (and performance) benefits of inclusion and diversity in ML. <a href="https://huggingface.co/support?utm_source=blog&utm_medium=blog&utm_campaign=ml_experts&utm_content=meg_interview_article"><img src="/blog/assets/57_meg_mitchell_interview/Meg-cta.png"></a> Very excited to introduce this powerful episode to you! Here’s my conversation with Meg Mitchell: <iframe width="100%" style="aspect-ratio: 16 / 9;"src="https://www.youtube.com/embed/FpIxYGyJBbs" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> ## Transcription: *Note: Transcription has been slightly modified/reformatted to deliver the highest-quality reading experience.* ### Could you share a little bit about your background and what brought you to Hugging Face? **Dr. Margaret Mitchell’s Background:** - Bachelor’s in Linguistics at Reed College - Worked on NLP - Worked on assistive and augmentative technology after her Bachelor’s and also during her graduate studies - Master’s in Computational Linguistics at the University of Washington - PhD in Computer Science **Meg:** I did heavy statistical work as a postdoc at Johns Hopkins and then went to Microsoft Research where I continued doing vision to language generation that led to working on an app for people who are blind to navigate the world a bit easier called [Seeing AI](https://www.microsoft.com/en-us/ai/seeing-ai). After a few years at Microsoft, I left to work at Google to focus on big data problems inherent in deep learning. That’s where I started focusing on things like fairness, rigorous evaluation for different kinds of issues, and bias. While at Google, I founded and co-led the Ethical AI Team which focuses on inclusion and transparency. After four years at Google, I came over to Hugging Face where I was able to jump in and focus on coding. I’m helping to create protocols for ethical AI research, inclusive hiring, systems, and setting up a good culture here at Hugging Face. ### When did you recognize the importance of Ethical AI? **Meg:** This occurred when I was working at Microsoft while I was working on the assistance technology, Seeing AI. In general, I was working on generating language from images and I started to see was how lopsided data was. Data represents a subset of the world and it influences what a model will say. So I began to run into issues where white people would be described as ‘people’ and black people would be described as ‘black people’ as if white was a default and black was a marked characteristic. That was concerning to me. There was also an ah-ha moment when I was feeding my system a sequence of images, getting it to talk more about a story of what is happening. And I fed it some images of this massive blast where a lot of people worked, called the ‘Hebstad blast’. 
You could see that the person taking the picture was on the second or third story looking out on the blast. The blast was very close to this person. It was a very dire and intense moment and when I fed this to the system the system’s output was that “ this is awesome, this is a great view, this is beautiful’. And I thought.. this is a great view of this horrible scene but the important part here is that people may be dying. This is a massive destructive explosion. But the thing is, when you’re learning from images people don’t tend to take photos of terrible things, they take photos of sunsets, fireworks, etc., and a visual recognition model had learned on these images and believed that color in the sky was a positive, beautiful thing. At that moment, I realized that if a model with that sort of thinking had access to actions it would be just one hop away from a system that would blow up buildings because it thought it was beautiful. This was a moment for me when I realized I didn’t want to keep making these systems do better on benchmarks, I wanted to fundamentally shift how we were looking at these problems, how we were approaching data and analysis of data, how we were evaluating and all of the factors we were leaving out with these straightforward pipelines. So that really became my shift into ethical AI work. ### In what applications is data ethics most important? **Meg:** Human-centric technology that deals with people and identity (face recognition, pedestrian recognition). In NLP this would pertain more to the privacy of individuals, how individuals are talked about, and the biases models pick up with regards to descriptors used for people. ### How can ML teams be more aware of harmful bias? **Meg:** A primary issue is that these concepts haven't been taught and most teams simply aren’t aware. Another problem is the lack of a lexicon to contextualize and communicate what is going on. For example: - This is what marginalization is - This is what a power differential is - Here is what inclusion is - Here is how stereotypes work Having a better understanding of these pillars is really important. Another issue is the culture behind machine learning. It’s taken a bit of an ‘Alpha’ or ‘macho’ approach where the focus is on ‘beating’ the last numbers, making things ‘faster’, ‘bigger’, etc. There are lots of parallels that can be made to human anatomy. There’s also a very hostile competitiveness that comes out where you find that women are disproportionately treated as less than. Since women are often much more familiar with discrimination women are focusing a lot more on ethics, stereotypes, sexism, etc. within AI. This means it gets associated with women more and seen as less than which makes the culture a lot harder to penetrate. It’s generally assumed that I’m not technical. It’s something I have to prove over and over again. I’m called a linguist, an ethicist because these are things I care about and know about but that is treated as less-than. People say or think, “You don’t program, you don’t know about statistics, you are not as important,” and it’s often not until I start talking about things technically that people take me seriously which is unfortunate. There is a massive cultural barrier in ML. ### Lack of diversity and inclusion hurts everyone **Meg:** Diversity is when you have a lot of races, ethnicities, genders, abilities, statuses at the table. Inclusion is when each person feels comfortable talking, they feel welcome. 
One of the best ways to be more inclusive is to not be exclusive. Feels fairly obvious but is often missed. People get left out of meetings because we don’t find them helpful or find them annoying or combative (which is a function of various biases). To be inclusive you need to not be exclusive, so when scheduling a meeting pay attention to the demographic makeup of the people you’re inviting. If your meeting is all-male, that’s a problem.

It’s incredibly valuable to become more aware and intentional about the demographic makeup of the people you’re including in an email. But you’ll notice in tech, a lot of meetings are all male, and if you bring it up that can be met with a lot of hostility. Err on the side of including people.

We all have biases but there are tactics to break some of those patterns. When writing an email, I’ll go through the recipients’ genders and ethnicities to ensure I’m being inclusive. It’s a very conscious effort. That sort of thinking through demographics helps. However, mention this before someone sends an email or schedules a meeting. People tend to not respond as well when you mention these things after the fact.

### Diversity in AI - Isn’t there proof that having a more diverse set of people on an ML project results in better outcomes?

**Meg:** Yes, since you have different perspectives you have a different distribution over options and thus, more options. One of the fundamental aspects of machine learning is that when you start training you can use a randomized starting point and what kind of distribution you want to sample from.

Most engineers can agree that you don’t want to sample from one little piece of the distribution to have the best chance of finding a local optimum.

You need to translate this approach to the people sitting at the table.

Just how you want to have a Gaussian approach over different start states, so too do you want that at the table when you’re starting projects because it gives you this larger search space, making it easier to attain a local optimum.

### Can you talk about Model Cards and how that project came to be?

**Meg:** This project started at Google when I first started working on fairness and what a rigorous evaluation of fairness would look like. In order to do that you need to have an understanding of context and understanding of who would use it. This revolved around how to approach model biases and it wasn’t getting a lot of pick up.

I was talking to [Timnit Gebru](https://twitter.com/timnitGebru), who was at that time someone in the field with similar interests to me, and she was talking about this idea of datasheets; a kind of documentation for data (based on her experience at Apple doing engineering, where you tend to have specifications of hardware). But we don’t have something similar for data and she was talking about how crazy that is.

So Timnit had this idea of datasheets for datasets. It struck me that by having an ‘artifact’ people in tech who are motivated by launches would care a lot more about it. So if we say you have to produce this artifact and it will count as a launch, suddenly people would be more incentivized to do it.

The way we came up with the name was that a comparable word to ‘data sheet’ that could be used for models was card (plus it was shorter). Also decided to call it ‘model cards’ because the name was very generic and would have longevity over time.

Timnit’s paper was called [‘Data Sheets for Datasets’](https://arxiv.org/abs/1803.09010).
So we called ours [‘Model Cards for Model Reporting’](https://arxiv.org/abs/1810.03993) and once we had the published paper people started taking us more seriously. Couldn’t have done this without Timnit Gebru’s brilliance suggesting “You need an artifact, a standardized thing that people will want to produce.” ### Where are model cards headed? **Meg:** There’s a pretty big barrier to entry to do model cards in a way that is well informed by ethics. Partly because the people who need to fill this out are often engineers and developers who want to launch their model and don’t want to sit around thinking about documentation and ethics. Part of why I wanted to join Hugging Face is because it gave me an opportunity to standardize how these processes could be filled out and automated as much as possible. One thing I really like about Hugging Face is there is a focus on creating end-to-end machine learning processes that are as smooth as possible. Would love to do something like that with model cards where you could have something largely automatically generated as a function of different questions asked or even based on model specifications directly. We want to work towards having model cards as filled out as possible and interactive. Interactivity would allow you to see the difference in false-negative rate as you move the decision threshold. Normally with classification systems, you set some threshold at which you say yes or no, like .7, but in practice, you actually want to vary the decision threshold to trade off different errors. A static report of how well it works isn’t as informative as you want it to be because you want to know how well it works as different decision thresholds are chosen, and you could use that to decide what decision threshold to be used with your system. So we created a model card where you could interactively change the decision threshold and see how the numbers change. Moving towards that direction in further automation and interactivity is the way to go. ### Decision thresholds & model transparency **Meg:** When Amazon first started putting out facial recognition and facial analysis technology it was found that the gender classification was disproportionately bad for black women and Amazon responded by saying “this was done using the wrong decision threshold”. And then one of the police agencies who had been using one of these systems had been asked what decision threshold they had been using and said, “Oh we’re not using a decision threshold,”. Which was like oh you really don’t understand how this works and are using this out of the box with default parameter settings?! That is a problem. So minimally having this documentary brings awareness to decisions around the various types of parameters. Machine learning models are so different from other things we put out into the public. Toys, medicine, and cars have all sorts of regulations to ensure products are safe and work as intended. We don’t have that in machine learning, partly because it’s new so the laws and regulations don’t exist yet. It’s a bit like the wild west, and that’s what we’re trying to change with model cards. ### What are you working on at Hugging Face? - Working on a few different tools designed for engineers. - Working on philosophical and social science research: Just did a deep dive into UDHR (Universal Declaration of Human Rights) and how those can be applied with AI. Trying to help bridge the gaps between AI, ML, law, and philosophy. 
- Trying to develop some statistical methods that are helpful for testing systems as well as understanding datasets. - We also recently [put out a tool](https://huggingface.co/spaces/huggingface/data-measurements-tool) that shows how well a language maps to Zipfian distributions (how natural language tends to go) so you can test how well your model is matching with natural language that way. - Working a lot on the culture stuff: spending a lot of time on hiring and what processes we should have in place to be more inclusive. - Working on [Big Science](https://bigscience.huggingface.co/): a massive effort with people from all around the world, not just hugging face working on data governance (how can big data be used and examined without having it proliferate all over the world/being tracked with how it’s used). - Occasionally I’ll do an interview or talk to a Senator, so it’s all over the place. - Try to answer emails sometimes. *Note: Everyone at Hugging Face wears several hats.* :) ### Meg’s impact on AI Meg is featured in the book [Genius Makers ‘The Mavericks who brought AI to Google, Facebook, and the World’](https://www.amazon.com/Genius-Makers-Mavericks-Brought-Facebook/dp/1524742678). Cade Metz interviewed Meg for this while she was at Google. Meg’s pioneering research, systems, and work have played a pivotal role in the history of AI. (we are so lucky to have her at Hugging Face!) ### Rapid Fire Questions: ### Best piece of advice for someone looking to get into AI? **Meg:** Depends on who the person is. If they have marginalized characteristics I would give very different advice. For example, if it was a woman I would say, 'Don’t listen to your supervisors saying you aren’t good at this. Chances are you are just thinking about things differently than they are used to so have confidence in yourself.' If it’s someone with more majority characteristics I’d say, 'Forget about the pipeline problem, pay attention to the people around you and make sure that you hold them up so that the pipeline you’re in now becomes less of a problem.' Also, 'Evaluate your systems'. ### What industries are you most excited to see ML applied (or ML Ethics be applied) **Meg:** The health and assistive domains continue to be areas I care a lot about and see a ton of potential. Also want to see systems that help people understand their own biases. Lots of technology is being created to screen job candidates for job interviews but I feel that technology should really be focused on the interviewer and how they might be coming at the situation with different biases. Would love to have more technology that assists humans to be more inclusive instead of assisting humans to exclude people. ### You frequently include incredible examples of biased models in your Keynotes and interviews. One in particular that I love is the criminal detection model you've talked about that was using patterns of mouth angles to identify criminals (which you swiftly debunked). **Meg:** Yes, [the example is that] they were making this claim that there was this angle theta that was more indicative of criminals when it was a smaller angle. However, I was looking at the math and I realized that what they were talking about was a smile! Where you would have a wider angle for a smile vs a smaller angle associated with a straight face. They really missed the boat on what they were actually capturing there. Experimenter's bias: wanting to find things that aren’t there. ### Should people be afraid of AI taking over the world? 
**Meg:** There are a lot of things to be afraid of with AI. I like to see it as we have a distribution over different kinds of outcomes, some more positive than others, so there’s not one set one that we can know. There are a lot of different things where AI can be super helpful and more task-based over more generalized intelligence. You can see it going in another direction, similar to what I mentioned earlier about a model thinking something destructive is beautiful is one hop away from a system that is able to press a button to set off a missile. Don’t think people should be scared per se, but they should think about the best and worst-case scenarios and try to mitigate or stop the worst outcomes. I think the biggest thing right now is these systems can widen the divide between the haves and have nots. Further giving power to people who have power and further worsening things for people who don’t. The people designing these systems tend to be people with more power and wealth and they design things for their kinds of interest. I think that’s happening right now and something to think about in the future. Hopefully, we can focus on the things that are most beneficial and continue heading in that direction. ### Fav ML papers? **Meg:** Most recently I’ve really loved what [Abeba Birhane](https://abebabirhane.github.io) has been doing on [values that are encoded in machine learning](https://arxiv.org/abs/2106.15590). My own team at Google had been working on [data genealogies](https://journals.sagepub.com/doi/full/10.1177/20539517211035955), bringing critical analysis on how ML data is handled which they have a few papers on - for example, [Data and its (dis)contents: A survey of dataset development and use in machine learning research](https://arxiv.org/abs/2012.05345). Really love that work and might be biased because it included my team and direct reports, I’m very proud of them but it really is fundamentally good work. Earlier papers that I’m interested in are more reflective of what I was doing at that time. Really love the work of [Herbert Clark](https://neurotree.org/beta/publications.php?pid=4636) who was a psycholinguistics/communications person and he did a lot of work that is easily ported to computational models about how humans communicate. Really love his work and cite him a lot throughout my thesis. ### Anything else you would like to mention? **Meg:** One of the things I’m working on, that I think other people should be working on, is lowering the barrier of entry to AI for people with different academic backgrounds. We have a lot of people developing technology, which is great, but we don’t have a lot of people in a situation where they can really question the technology because there is often a bottleneck. For example, if you want to know about data directly you have to be able to log into a server and write a SQL query. So there is a bottleneck where engineers have to do it and I want to remove that barrier. How can we take things that are fundamentally technical code stuff and open it up so people can directly query the data without knowing how to program? We will be able to make better technology when we remove the barriers that require engineers to be in the middle. ### Outro **Britney:** Meg had a hard stop on the hour but I was able to ask her my last question offline: What’s something you’ve been interested in lately? Meg’s response: "How to propagate and grow plants in synthetic/controlled settings." Just when I thought she couldn’t get any cooler. 
🤯 I’ll leave you with a recent quote from Meg in a [Science News article on Ethical AI](https://www.sciencenews.org/article/computer-science-history-ethics-future-robots-ai): *“The most pressing problem is the diversity and inclusion of who’s at the table from the start. All the other issues fall out from there.” -Meg Mitchell.* Thank you for listening to Machine Learning Experts! <a href="https://huggingface.co/support?utm_source=blog&utm_medium=blog&utm_campaign=ml_experts&utm_content=meg_interview_article"><img src="/blog/assets/57_meg_mitchell_interview/Meg-cta.png"></a> **Honorable mentions + links:** - [Emily Bender](https://twitter.com/emilymbender?lang=en) - [Ehud Reiter](https://mobile.twitter.com/ehudreiter) - [Abeba Birhane](https://abebabirhane.github.io/) - [Seeing AI](https://www.microsoft.com/en-us/ai/seeing-ai) - [Data Sheets for Datasets](https://arxiv.org/abs/1803.09010) - [Model Cards](https://modelcards.withgoogle.com/about) - [Model Cards Paper](https://arxiv.org/abs/1810.03993) - [Abeba Birhane](https://arxiv.org/search/cs?searchtype=author&query=Birhane%2C+A) - [The Values Encoded in Machine Learning Research](https://arxiv.org/abs/2106.15590) - [Data and its (dis)contents:](https://arxiv.org/abs/2012.05345) - [Herbert Clark](https://neurotree.org/beta/publications.php?pid=4636) **Follow Meg Online:** - [Twitter](https://twitter.com/mmitchell_ai) - [Website](http://www.m-mitchell.com) - [LinkedIn](https://www.linkedin.com/in/margaret-mitchell-9b13429)
Introducing Decision Transformers on Hugging Face 🤗
edbeeching
March 28, 2022
decision-transformers
open-source-collab, guide, rl
https://huggingface.co/blog/decision-transformers
# Introducing Decision Transformers on Hugging Face 🤗

At Hugging Face, we are contributing to the ecosystem for Deep Reinforcement Learning researchers and enthusiasts. Recently, we have integrated Deep RL frameworks such as [Stable-Baselines3](https://github.com/DLR-RM/stable-baselines3).

And today we are happy to announce that we integrated the [Decision Transformer](https://arxiv.org/abs/2106.01345), an Offline Reinforcement Learning method, into the 🤗 transformers library and the Hugging Face Hub. We have some exciting plans for improving accessibility in the field of Deep RL and we are looking forward to sharing them with you over the coming weeks and months.

- [What is Offline Reinforcement Learning?](#what-is-offline-reinforcement-learning)
- [Introducing Decision Transformers](#introducing-decision-transformers)
- [Using the Decision Transformer in 🤗 Transformers](#using-the-decision-transformer-in--transformers)
- [Conclusion](#conclusion)
- [What's next?](#whats-next)
- [References](#references)

## What is Offline Reinforcement Learning?

Deep Reinforcement Learning (RL) is a framework to build decision-making agents. These agents aim to learn optimal behavior (a policy) by interacting with the environment through trial and error and receiving rewards as unique feedback.

The agent’s goal is to maximize **its cumulative reward, called return.** This is because RL is based on the reward hypothesis: **all goals can be described as the maximization of the expected cumulative reward.**

Deep Reinforcement Learning agents **learn with batches of experience.** The question is: how do they collect it?

![Offline vs Online RL](assets/58_decision-transformers/offlinevsonlinerl.gif)

*A comparison between Reinforcement Learning in an Online and Offline setting, figure taken from [this post](https://offline-rl.github.io/)*

In online reinforcement learning, **the agent gathers data directly**: it collects a batch of experience by interacting with the environment. Then, it uses this experience immediately (or via some replay buffer) to learn from it (update its policy).

But this implies that either you train your agent directly in the real world or you have a simulator. If you don’t have one, you need to build it, which can be very complex (how to reflect the complex reality of the real world in an environment?), expensive, and unsafe, since if the simulator has flaws, the agent will exploit them if they provide a competitive advantage.

On the other hand, in offline reinforcement learning, the agent only uses data collected from other agents or human demonstrations. **It does not interact with the environment**.

The process is as follows:

1. Create a dataset using one or more policies and/or human interactions.
2. Run offline RL on this dataset to learn a policy.

This method has one drawback: the counterfactual queries problem. What do we do if our agent decides to do something for which we don’t have the data? For instance, turning right at an intersection when we don’t have this trajectory in the data. Some solutions to this problem already exist, but if you want to know more about offline reinforcement learning, you can watch [this video](https://www.youtube.com/watch?v=k08N5a0gG0A).

## Introducing Decision Transformers

The Decision Transformer model was introduced by [“Decision Transformer: Reinforcement Learning via Sequence Modeling” by Chen L. et al](https://arxiv.org/abs/2106.01345). It abstracts Reinforcement Learning as a **conditional-sequence modeling problem**.
The main idea is that instead of training a policy using RL methods, such as fitting a value function, that will tell us what action to take to maximize the return (cumulative reward), we use a sequence modeling algorithm (Transformer) that, given a desired return, past states, and actions, will generate future actions to achieve this desired return. It’s an autoregressive model conditioned on the desired return, past states, and actions to generate future actions that achieve the desired return. This is a complete shift in the Reinforcement Learning paradigm since we use generative trajectory modeling (modeling the joint distribution of the sequence of states, actions, and rewards) to replace conventional RL algorithms. It means that in Decision Transformers, we don’t maximize the return but rather generate a series of future actions that achieve the desired return. The process goes this way: 1. We feed the last K timesteps into the Decision Transformer with 3 inputs: - Return-to-go - State - Action 2. The tokens are embedded either with a linear layer if the state is a vector or CNN encoder if it’s frames. 3. The inputs are processed by a GPT-2 model which predicts future actions via autoregressive modeling. ![Decision Transformers architecture](assets/58_decision-transformers/dt-architecture.gif) *Decision Transformer architecture. States, actions, and returns are fed into modality specific linear embeddings and a positional episodic timestep encoding is added. Tokens are fed into a GPT architecture which predicts actions autoregressively using a causal self-attention mask. Figure from [1].* ## Using the Decision Transformer in 🤗 Transformers The Decision Transformer model is now available as part of the 🤗 transformers library. In addition, we share [nine pre-trained model checkpoints for continuous control tasks in the Gym environment](https://huggingface.co/models?other=gym-continous-control). <figure class="image table text-center m-0 w-full"> <video alt="WalkerEd-expert" style="max-width: 70%; margin: auto;" autoplay loop autobuffer muted playsinline > <source src="assets/58_decision-transformers/walker2d-expert.mp4" type="video/mp4"> </video> </figure> *An “expert” Decision Transformers model, learned using offline RL in the Gym Walker2d environment.* ### Install the package `````python pip install git+https://github.com/huggingface/transformers ````` ### Loading the model Using the Decision Transformer is relatively easy, but as it is an autoregressive model, some care has to be taken in order to prepare the model’s inputs at each time-step. We have prepared both a [Python script](https://github.com/huggingface/transformers/blob/main/examples/research_projects/decision_transformer/run_decision_transformer.py) and a [Colab notebook](https://colab.research.google.com/drive/1K3UuajwoPY1MzRKNkONNRS3gS5DxZ-qF?usp=sharing) that demonstrates how to use this model. Loading a pretrained Decision Transformer is simple in the 🤗 transformers library: `````python from transformers import DecisionTransformerModel model_name = "edbeeching/decision-transformer-gym-hopper-expert" model = DecisionTransformerModel.from_pretrained(model_name) `````` ### Creating the environment We provide pretrained checkpoints for the Gym Hopper, Walker2D and Halfcheetah. Checkpoints for Atari environments will soon be available. 
`````python import gym env = gym.make("Hopper-v3") state_dim = env.observation_space.shape[0] # state size act_dim = env.action_space.shape[0] # action size `````` ### Autoregressive prediction function The model performs an [autoregressive prediction](https://en.wikipedia.org/wiki/Autoregressive_model); that is to say that predictions made at the current time-step **t** are sequentially conditioned on the outputs from previous time-steps. This function is quite meaty, so we will aim to explain it in the comments. `````python # Function that gets an action from the model using autoregressive prediction # with a window of the previous 20 timesteps. def get_action(model, states, actions, rewards, returns_to_go, timesteps): # This implementation does not condition on past rewards states = states.reshape(1, -1, model.config.state_dim) actions = actions.reshape(1, -1, model.config.act_dim) returns_to_go = returns_to_go.reshape(1, -1, 1) timesteps = timesteps.reshape(1, -1) # The prediction is conditioned on up to 20 previous time-steps states = states[:, -model.config.max_length :] actions = actions[:, -model.config.max_length :] returns_to_go = returns_to_go[:, -model.config.max_length :] timesteps = timesteps[:, -model.config.max_length :] # pad all tokens to sequence length, this is required if we process batches padding = model.config.max_length - states.shape[1] attention_mask = torch.cat([torch.zeros(padding), torch.ones(states.shape[1])]) attention_mask = attention_mask.to(dtype=torch.long).reshape(1, -1) states = torch.cat([torch.zeros((1, padding, state_dim)), states], dim=1).float() actions = torch.cat([torch.zeros((1, padding, act_dim)), actions], dim=1).float() returns_to_go = torch.cat([torch.zeros((1, padding, 1)), returns_to_go], dim=1).float() timesteps = torch.cat([torch.zeros((1, padding), dtype=torch.long), timesteps], dim=1) # perform the prediction state_preds, action_preds, return_preds = model( states=states, actions=actions, rewards=rewards, returns_to_go=returns_to_go, timesteps=timesteps, attention_mask=attention_mask, return_dict=False,) return action_preds[0, -1] `````` ### Evaluating the model In order to evaluate the model, we need some additional information; the mean and standard deviation of the states that were used during training. Fortunately, these are available for each of the checkpoint’s [model card](https://huggingface.co/edbeeching/decision-transformer-gym-hopper-expert) on the Hugging Face Hub! We also need a target return for the model. This is the power of return conditioned Offline Reinforcement Learning: we can use the target return to control the performance of the policy. This could be really powerful in a multiplayer setting, where we would like to adjust the performance of an opponent bot to be at a suitable difficulty for the player. The authors show a great plot of this in their paper! ![Results Decision Transformers](assets/58_decision-transformers/results-dt.png) *Sampled (evaluation) returns accumulated by Decision Transformer when conditioned on the specified target (desired) returns. Top: Atari. Bottom: D4RL medium-replay datasets. 
Figure from [1].*

``````python
import numpy as np
import torch

TARGET_RETURN = 3.6  # This was normalized during training
MAX_EPISODE_LENGTH = 1000
scale = 1000.0  # returns were divided by this scale during training (hence TARGET_RETURN = 3.6)

state_mean = np.array(
    [1.3490015, -0.11208222, -0.5506444, -0.13188992, -0.00378754, 2.6071432,
     0.02322114, -0.01626922, -0.06840388, -0.05183131, 0.04272673,])

state_std = np.array(
    [0.15980862, 0.0446214, 0.14307782, 0.17629202, 0.5912333, 0.5899924,
     1.5405099, 0.8152689, 2.0173461, 2.4107876, 5.8440027,])

state_mean = torch.from_numpy(state_mean)
state_std = torch.from_numpy(state_std)

state = env.reset()
target_return = torch.tensor(TARGET_RETURN).float().reshape(1, 1)
states = torch.from_numpy(state).reshape(1, state_dim).float()
actions = torch.zeros((0, act_dim)).float()
rewards = torch.zeros(0).float()
timesteps = torch.tensor(0).reshape(1, 1).long()

# take steps in the environment
for t in range(MAX_EPISODE_LENGTH):
    # add zeros for actions as input for the current time-step
    actions = torch.cat([actions, torch.zeros((1, act_dim))], dim=0)
    rewards = torch.cat([rewards, torch.zeros(1)])

    # predicting the action to take
    action = get_action(model,
                        (states - state_mean) / state_std,
                        actions,
                        rewards,
                        target_return,
                        timesteps)
    actions[-1] = action
    action = action.detach().numpy()

    # interact with the environment based on this action
    state, reward, done, _ = env.step(action)

    cur_state = torch.from_numpy(state).reshape(1, state_dim)
    states = torch.cat([states, cur_state], dim=0)
    rewards[-1] = reward

    pred_return = target_return[0, -1] - (reward / scale)
    target_return = torch.cat([target_return, pred_return.reshape(1, 1)], dim=1)
    timesteps = torch.cat([timesteps, torch.ones((1, 1)).long() * (t + 1)], dim=1)

    if done:
        break
``````

You will find a more detailed example, with the creation of videos of the agent, in our [Colab notebook](https://colab.research.google.com/drive/1K3UuajwoPY1MzRKNkONNRS3gS5DxZ-qF?usp=sharing).

## Conclusion

In addition to Decision Transformers, we want to support more use cases and tools from the Deep Reinforcement Learning community. Therefore, it would be great to hear your feedback on the Decision Transformer model, and more generally anything we can build with you that would be useful for RL. Feel free to **[reach out to us](mailto:[email protected])**.

## What’s next?

In the coming weeks and months, we plan on supporting other tools from the ecosystem:

- Integrating **[RL-baselines3-zoo](https://github.com/DLR-RM/rl-baselines3-zoo)**
- Uploading **[RL-trained-agents models](https://github.com/DLR-RM/rl-trained-agents)** into the Hub: a big collection of pre-trained Reinforcement Learning agents using stable-baselines3
- Integrating other Deep Reinforcement Learning libraries
- Implementing Convolutional Decision Transformers For Atari
- And more to come 🥳

The best way to keep in touch is to **[join our discord server](https://discord.gg/YRAq8fMnUG)** to exchange with us and with the community.

## References

[1] Chen, Lili, et al. "Decision transformer: Reinforcement learning via sequence modeling." *Advances in neural information processing systems* 34 (2021).

[2] Agarwal, Rishabh, Dale Schuurmans, and Mohammad Norouzi. "An optimistic perspective on offline reinforcement learning." *International Conference on Machine Learning*. PMLR, 2020.

### Acknowledgements

We would like to thank the paper’s first authors, Kevin Lu and Lili Chen, for their constructive conversations.
Don't repeat yourself - 🤗 Transformers Design Philosophy
patrickvonplaten
April 5, 2022
transformers-design-philosophy
community
https://huggingface.co/blog/transformers-design-philosophy
# ~~Don't~~ Repeat Yourself*

##### *Designing open-source libraries for modern machine learning*

## 🤗 Transformers Design Philosophy

*"Don't repeat yourself"*, or **DRY**, is a well-known principle of software development. The principle originates from "The Pragmatic Programmer", one of the most-read books on code design. The principle's simple message makes obvious sense: Don't rewrite logic that already exists somewhere else. This ensures the code remains in sync, making it easier to maintain and more robust. Any change to this logical pattern will uniformly affect all of its dependencies.

At first glance, the design of Hugging Face's Transformers library couldn't be more contrary to the DRY principle. Code for the attention mechanism is more or less copied over 50 times into different model files. Sometimes code of the whole BERT model is copied into other model files. We often force new model contributions identical to existing models - besides a small logical tweak - to copy all of the existing code. Why do we do this? Are we just too lazy or overwhelmed to centralize all logical pieces into one place?

No, we are not lazy - it's a very conscious decision not to apply the DRY design principle to the Transformers library. Instead, we decided to adopt a different design principle which we like to call the ***single model file*** policy. The *single model file* policy states that all code necessary for the forward pass of a model is in one and only one file - called the model file. If a reader wants to understand how BERT works for inference, she should only have to look into BERT's `modeling_bert.py` file.

We usually reject any attempt to abstract identical sub-components of different models into a new centralized place. We don't want to have an `attention_layer.py` that includes all possible attention mechanisms. Again, why do we do this?

In short, the reasons are:

- **1. Transformers is built by and for the open-source community.**
- **2. Our products are models and our customers are users reading or tweaking model code.**
- **3. The field of machine learning evolves extremely fast.**
- **4. Machine Learning models are static.**

### 1. Built by and for the open-source community

Transformers is built to actively incentivize external contributions. A contribution is often either a bug fix or a new model contribution. If a bug is found in one of the model files, we want to make it as easy as possible for the finder to fix it. There is little that is more demotivating than fixing a bug only to see that it caused 100 failures of other models.

Because model code is independent of all other models, it's fairly easy for someone who only understands the one model she is working with to fix it. Similarly, it's easier to add new modeling code and review the corresponding PR if only a single new model file is added. The contributor does not have to figure out how to add new functionality to a centralized attention mechanism without breaking existing models. The reviewer can easily verify that none of the existing models are broken.

### 2. Modeling code is our product

We assume that a significant number of users of the Transformers library not only read the documentation, but also look into the actual modeling code and potentially modify it. This hypothesis is backed by the Transformers library being forked over 10,000 times and the Transformers paper being cited over a thousand times.
Therefore it is of utmost importance that someone reading Transformers modeling code for the first time can easily understand and potentially adapt it. Providing all the necessary logical components in order in a single modeling file helps a lot to achieve improved readability and adaptability. Additionally, we care a great deal about sensible variable/method naming and prefer expressive/readable code over character-efficient code.

### 3. Machine Learning is evolving at breakneck speed

Research in the field of machine learning, and especially neural networks, evolves extremely fast. A model that was state-of-the-art a year ago might be outdated today. We don't know which attention mechanism, position embedding, or architecture will be the best in a year. Therefore, we cannot define standard logical patterns that apply to all models.

As an example, two years ago, one might have defined BERT's self-attention layer as the standard attention layer used by all Transformers models. Logically, a "standard" attention function could have been moved into a central `attention.py` file. But then came attention layers that added relative positional embeddings in each attention layer (T5), multiple different forms of chunked attention (Reformer, Longformer, BigBird), and separate attention mechanisms for position and word embeddings (DeBERTa), etc... Every time, we would have had to ask ourselves whether the "standard" attention function should be adapted or whether it would have been better to add a new attention function to `attention.py`. But then how do we name it? `attention_with_positional_embd`, `reformer_attention`, `deberta_attention`?

It's dangerous to give logical components of machine learning models general names because the perception of what this component stands for might change or become outdated very quickly. E.g., does chunked attention correspond to GPTNeo's, Reformer's, or BigBird's chunked attention? Is the attention layer a self-attention layer, a cross-attention layer, or does it include both? However, if we name attention layers by their model's name, we should directly put the attention function in the corresponding modeling file.

### 4. Machine Learning models are static

The Transformers library is a unified and polished collection of machine learning models that different research teams have created. Every machine learning model is usually accompanied by a paper and its official GitHub repository. Once a machine learning model is published, it is rarely adapted or changed afterward.

Instead, research teams tend to publish a new model built upon previous models but rarely make significant changes to already published code. This is an important realization when deciding on the design principles of the Transformers library. It means that once a model architecture has been added to Transformers, the fundamental components of the model don't change anymore. Bugs are often found and fixed, methods and variables might be renamed, and the output or input format of the model might be slightly changed, but the model's core components don't change anymore. Consequently, the need to apply global changes to all models in Transformers is significantly reduced, making it less important that every logical pattern only exists once since it's rarely changed.

A second realization is that models do **not** depend on each other in a bidirectional way.
More recently published models might depend on existing models, but it's quite obvious that an existing model cannot logically depend on its successor. E.g. T5 is partly built upon BERT and therefore T5's modeling code might logically depend on BERT's modeling code, but BERT cannot logically depend in any way on T5. Thus, it would not be logically sound to refactor BERT's attention function to also work with T5's attention function - someone reading through BERT's attention layer should not have to know anything about T5. Again, this advocates against centralizing components such as the attention layer into modules that all models can access.

On the other hand, the modeling code of a successor model can very well logically depend on its predecessor model. E.g., DeBERTa-v2's modeling code does logically depend to some extent on DeBERTa's modeling code. Maintainability is significantly improved by ensuring the modeling code of DeBERTa-v2 stays in sync with DeBERTa's. Fixing a bug in DeBERTa should ideally also fix the same bug in DeBERTa-v2. How can we maintain the *single model file* policy while ensuring that successor models stay in sync with their predecessor model?

Now, we explain why we put the asterisk \\( {}^{\textbf{*}} \\) after *"Repeat Yourself"*. We don't blindly copy-paste all existing modeling code even if it looks this way. One of Transformers' core maintainers, [Sylvain Gugger](https://github.com/sgugger), found a great mechanism that respects both the *single file policy* and keeps the maintainability cost in bounds. This mechanism, loosely called *"the copying mechanism"*, allows us to mark logical components, such as an attention layer function, with a `# Copied from <predecessor_model>.<function>` statement, which enforces the marked code to be identical to the `<function>` of the `<predecessor_model>`. E.g., this line in [DeBERTa-v2's class](https://github.com/huggingface/transformers/blob/21decb7731e998d3d208ec33e5b249b0a84c0a02/src/transformers/models/deberta_v2/modeling_deberta_v2.py#L325) enforces the whole class to be identical to [DeBERTa's class](https://github.com/huggingface/transformers/blob/21decb7731e998d3d208ec33e5b249b0a84c0a02/src/transformers/models/deberta/modeling_deberta.py#L336) except for the prefix `DeBERTav2`. This way, the copying mechanism keeps modeling code very easy to understand while significantly reducing maintenance. If some code is changed in a function of a predecessor model that is referred to by a function of its successor model, there are tools in place that automatically correct the successor model's function.

### Drawbacks

Clearly, there are also drawbacks to the single file policy, two of which we quickly want to mention here.

A major goal of Transformers is to provide a unified API for both inference and training for all models so that a user can quickly switch between different models in her setup. However, ensuring a unified API across models is much more difficult if modeling files are not allowed to use abstracted logical patterns. We solve this problem by running **a lot** of tests (*ca.* 20,000 tests are run daily at the time of writing this blog post) to ensure that models follow a consistent API. In this case, the single file policy requires us to be very rigorous when reviewing model and test additions.

Second, there is a lot of research on just a single component of a Machine Learning model.
*E.g.*, research teams investigate new forms of an attention mechanism that would apply to all existing pre-trained models, as has been done in [Rethinking Attention with Performers](https://arxiv.org/abs/2009.14794). How should we incorporate such research into the Transformers library? It is indeed problematic. Should we change all existing models? This would go against points 3. and 4. as written above. Should we add 100+ new modeling files each prefixed with `Performer...`? This seems absurd.

In such a case, there is sadly no good solution, and we opted not to integrate the paper into Transformers. If the paper had gotten much more traction and included strong pre-trained checkpoints, we would probably have added new modeling files for the most important models, making something like `modeling_performer_bert.py` available.

### Conclusion

All in all, at 🤗 Hugging Face we are convinced that the *single file policy* is the right coding philosophy for Transformers.

What do you think? If you read until here, we would be more than interested in hearing your opinion! If you would like to leave a comment, please visit the corresponding forum post [here](https://discuss.huggingface.co/t/repeat-yourself-transformers-design-philosophy/16483).
Habana Labs and Hugging Face Partner to Accelerate Transformer Model Training
susanlansing
April 12, 2022
habana
partnerships
https://huggingface.co/blog/habana
# Habana Labs and Hugging Face Partner to Accelerate Transformer Model Training *Santa Clara and San Francisco, CA, April 12th, 2022* Powered by deep learning, transformer models deliver state-of-the-art performance on a wide range of machine learning tasks, such as natural language processing, computer vision, speech, and more. However, training them at scale often requires a large amount of computing power, making the whole process unnecessarily long, complex, and costly. Today, [Habana® Labs](https://habana.ai/), a pioneer in high-efficiency, purpose-built deep learning processors, and Hugging Face, the home of [Transformer](https://github.com/huggingface/transformers) models, are happy to announce that they’re joining forces to make it easier and quicker to train high-quality transformer models. Thanks to the integration of Habana’s [SynapseAI software suite](https://habana.ai/training-software/) with the Hugging Face [Optimum open-source library](https://github.com/huggingface/optimum), data scientists and machine learning engineers can now accelerate their Transformer training jobs on Habana processors with just a few lines of code and enjoy greater productivity as well as lower training cost. [Habana Gaudi](https://habana.ai/training/) training solutions, which power Amazon’s EC2 DL1 instances and Supermicro’s X12 Gaudi AI Training Server, deliver price/performance up to 40% lower than comparable training solutions and enable customers to train more while spending less. The integration of ten 100 Gigabit Ethernet ports onto every Gaudi processor enables system scaling from 1 to thousands of Gaudis with ease and cost-efficiency. Habana’s SynapseAI® is optimized—at inception—to enable Gaudi performance and usability, supports TensorFlow and PyTorch frameworks, with a focus on computer vision and natural language processing applications. With 60,000+ stars on Github, 30,000+ models, and millions of monthly visits, Hugging Face is one of the fastest-growing projects in open source software history, and the go-to place for the machine learning community. With its [Hardware Partner Program](https://huggingface.co/hardware), Hugging Face provides Gaudi’s advanced deep learning hardware with the ultimate Transformer toolset. This partnership will enable rapid expansion of the Habana Gaudi training transformer model library, bringing Gaudi efficiency and ease of use to a wide array of customer use cases like natural language processing, computer vision, speech, and more. “*We’re excited to partner with Hugging Face and its many open-source developers to address the growing demand for transformer models that benefit from the efficiency, usability, and scalability of the Gaudi training platform*”, said Sree Ganesan, head of software product management, Habana Labs. “Habana Gaudi brings a new level of efficiency to deep learning model training, and we’re super excited to make this performance easily accessible to Transformer users with minimal code changes through Optimum”, said Jeff Boudier, product director at Hugging Face. To learn how to get started training with Habana Gaudi, please visit [https://developer.habana.ai](https://developer.habana.ai). For more info on the Hugging Face and Habana Gaudi collaboration, please visit [https://huggingface.co/Habana](https://huggingface.co/Habana).
Machine Learning Experts - Lewis Tunstall Interview
britneymuller
April 13, 2022
lewis-tunstall-interview
expert-acceleration-program, ml-experts
https://huggingface.co/blog/lewis-tunstall-interview
# Machine Learning Experts - Lewis Tunstall ## 🤗 Welcome to Machine Learning Experts - Lewis Tunstall Hey friends! Welcome to Machine Learning Experts. I'm your host, Britney Muller and today’s guest is [Lewis Tunstall](https://twitter.com/_lewtun). Lewis is a Machine Learning Engineer at Hugging Face where he works on applying Transformers to automate business processes and solve MLOps challenges. Lewis has built ML applications for startups and enterprises in the domains of NLP, topological data analysis, and time series. You’ll hear Lewis talk about his [new book](https://transformersbook.com/), transformers, large scale model evaluation, how he’s helping ML engineers optimize for faster latency and higher throughput, and more. In a previous life, Lewis was a theoretical physicist and outside of work loves to play guitar, go trail running, and contribute to open-source projects. <a href="https://huggingface.co/support?utm_source=blog&utm_medium=blog&utm_campaign=ml_experts&utm_content=lewis_interview_article"><img src="/blog/assets/60_lewis_tunstall_interview/lewis-cta.png"></a> Very excited to introduce this fun and brilliant episode to you! Here’s my conversation with Lewis Tunstall: <iframe width="100%" style="aspect-ratio: 16 / 9;"src="https://www.youtube.com/embed/igW5VWewuLE" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> *Note: Transcription has been slightly modified/reformatted to deliver the highest-quality reading experience.* ### Welcome, Lewis! Thank you so much for taking time out of your busy schedule to chat with me today about your awesome work! **Lewis:** Thanks, Britney. It’s a pleasure to be here. ### Curious if you can do a brief self-introduction and highlight what brought you to Hugging Face? **Lewis:** What brought me to Hugging Face was transformers. In 2018, I was working with transformers at a startup in Switzerland. My first project was a question answering task where you input some text and train a model to try and find the answer to a question within that text. In those days the library was called: pytorch-pretrained-bert, it was a very focused code base with a couple of scripts and it was the first time I worked with transformers. I had no idea what was going on so I read the original [‘Attention Is All You Need’](https://arxiv.org/abs/1706.03762) paper but I couldn’t understand it. So I started looking around for other resources to learn from. In the process, Hugging Face exploded with their library growing into many architectures and I got really excited about contributing to open-source software. So around 2019, I had this kinda crazy idea to write a book about transformers because I felt there was an information gap that was missing. So I partnered up with my friend, [Leandro](https://twitter.com/lvwerra) (von Werra) and we sent [Thom](https://twitter.com/Thom_Wolf) (Wolf) a cold email out of nowhere saying, “Hey we are going to write a book about transformers, are you interested?” and I was expecting no response. But to our great surprise, he responded “Yea, sure let’s have a chat.” and around 1.5 years later this is our book: [NLP with Transformers](https://transformersbook.com/). This collaboration set the seeds for Leandro and I to eventually join Hugging Face. And I've been here now for around nine months. ### That is incredible. How does it feel to have a copy of your book in your hands? 
**Lewis:** I have to say, I just became a parent about a year and a half ago and it feels kind of similar to my son being born. You're holding this thing that you created. It's quite an exciting feeling and so different to actually hold it (compared to reading a PDF). Confirms that it’s actually real and I didn't just dream about it. ### Exactly. Congratulations! Want to briefly read one endorsement that I love about this book: “_Complexity made simple. This is a rare and precious book about NLP, transformers, and the growing ecosystem around them, Hugging Face. Whether these are still buzzwords to you or you already have a solid grasp of it all, the authors will navigate you with humor, scientific rigor, and plenty of code examples into the deepest secrets of the coolest technology around. From “off-the-shelf pre-trained” to “from-scratch custom” models, and from performance to missing labels issues, the authors address practically every real-life struggle of an ML engineer and provide state-of-the-art solutions, making this book destined to dictate the standards in the field for years to come._” —Luca Perrozi Ph.D., Data Science and Machine Learning Associate Manager at Accenture. Check out [Natural Language Processing with Transformers](https://transformersbook.com/). ### Can you talk about the work you've done with the transformers library? **Lewis:** One of the things that I experienced in my previous jobs before Hugging Face was there's this challenge in the industry when deploying these models into production; these models are really large in terms of the number of parameters and this adds a lot of complexity to the requirements you might have. So for example, if you're trying to build a chatbot you need this model to be very fast and responsive. And most of the time these models are a bit too slow if you just take an off-the-shelf model, train it, and then try to integrate it into your application. So what I've been working on for the last few months on the transformers library is providing the functionality to export these models into a format that lets you run them much more efficiently using tools that we have at Hugging Face, but also just general tools in the open-source ecosystem. In a way, the philosophy of the transformers library is like writing lots of code so that the users don't have to write that code. In this particular example, what we're talking about is something called the ONNX format. It's a special format that is used in industry where you can basically have a model that's written in PyTorch but you can then convert it to TensorFlow or you can run it on some very dedicated hardware. And if you actually look at what's needed to make this conversion happen in the transformers library, it's fairly gnarly. But we make it so that you only really have to run one line of code and the library will take care of it for you. So the idea is that this particular feature lets machine learning engineers or even data scientists take their model, convert it to this format, and then optimize it to get faster latency and higher throughput. ### That's very cool. Have there been any standout applications of transformers? **Lewis:** I think there are a few. One is maybe emotional or personal. For example, many of us remember when OpenAI released GPT-2, this very famous language model which can generate text. OpenAI actually provided in their blog posts some examples of the essays that this model had created. And one of them was really funny.
One was an essay about why we shouldn't recycle or why recycling is bad. And the model wrote a compelling essay on why recycling was bad. Leandro and I were working at a startup at the time and I printed it out and stuck it right above the recycling bin in the office as a joke. And people were like, “Woah, who wrote this?” and I said, “An algorithm.” I think there's something sort of strangely human, right? Where if we see generated text we get more surprised when it looks like something I (or another human) might have written versus other applications that have been happening like classifying text or more conventional tasks. ### That's incredible. I remember when they released those examples for GPT-2, and one of my favorites (that almost gave me this sense of, whew, we're not quite there yet) were some of the more inaccurate mentions like “underwater fires”. **Lewis:** Exactly! **Britney:** But, then something had happened with an oil spill that next year, where there were actually fires underwater! And I immediately thought about that text and thought, maybe AI is onto something already that we're not quite aware of? ### You and other experts at Hugging Face have been working hard on the Hugging Face Course. How did that come about & where is it headed? **Lewis:** When I joined Hugging Face, [Sylvain](https://twitter.com/GuggerSylvain) and [Lysandre](https://twitter.com/LysandreJik), two of the core maintainers of the transformers library, were developing a course to basically bridge the gap between people who are more like software engineers who are curious about natural language processing but specifically curious about the transformers revolution that's been happening. So I worked with them and others in the open-source team to create a free course called the [Hugging Face Course](https://huggingface.co/course/chapter1/1). And this course is designed to really help people go from knowing kind of not so much about ML all the way through to having the ability to train models on many different tasks. And, we've released two parts of this course and are planning to release the third part this year. I'm really excited about the next part that we're developing right now where we're going to explore different modalities where transformers are really powerful. Most of the time we think of transformers for NLP, but lately there's been this explosion where transformers are being used in things like audio or in computer vision and we're going to be looking at these in detail. ### What are some transformers applications that you're excited about? **Lewis:** So one that's kind of fun is in the course we had an event last year where we got people in the community to use the course material to build applications. And one of the participants in this event created a cover letter generator for jobs. So the idea is that when you apply for a job there's always this annoying thing where you have to write a cover letter and it's always a bit like you have to be witty. So this guy created a cover letter generator where you provide some information about yourself and then it generates a cover letter from that. And he actually used that to apply to Hugging Face. ### No way?! **Lewis:** He's joining the Big Science team as an intern. So. I mean this is a super cool thing, right? When you learn something and then use that thing to apply, which I thought was pretty awesome. ### Where do you want to see more ML applications?
**Lewis:** So I think personally, the area that I'm most excited about is the application of machine learning to the natural sciences. And that's partly because of my background. I used to be a physicist in a previous lifetime, but I think what's also very exciting here is that in a lot of fields, for example in physics or chemistry, you already know what the, say, underlying laws are in terms of equations that you can write down, but it turns out that many of the problems that you're interested in studying often require a simulation. Or they often require very hardcore supercomputers to understand and solve these equations. And one of the most exciting things to me is the combination of deep learning with the prior knowledge that scientists have gathered to make breakthroughs that weren't previously possible. And I think a great example is [DeepMind’s AlphaFold](https://www.deepmind.com/research/highlighted-research/alphafold) model for protein structure prediction where they were basically using a combination of transformers with some extra information to generate predictions of proteins that I think previously were taking on the order of months and now they can do them in days. So this accelerates the whole field in a really powerful way. And I can imagine these applications ultimately leading to, hopefully, a better future for humanity. ### How do you see the world of model evaluation evolving? **Lewis:** That's a great question. So at Hugging Face, one of the things I've been working on has been trying to build the infrastructure and the tooling that enables what we call 'large-scale evaluation'. So you may know that the [Hugging Face Hub](https://huggingface.co/models) has thousands of models and datasets. But if you're trying to navigate this space you might ask yourself, 'I'm interested in question answering and want to know what the top 10 models on this particular task are'. And at the moment, it's hard to find the answer to that, not just on the Hub, but in general in the space of machine learning this is quite hard. You often have to read papers and then you have to take those models and test them yourself manually and that's very slow and inefficient. So one thing that we've been working on is to develop a way that you can evaluate models and datasets directly through the Hub. We're still experimenting there with the direction. But I'm hoping that we have something cool to show later this year. And there's another side to this which is that a large part of measuring progress in machine learning is through the use of benchmarks. These benchmarks are traditionally a set of datasets with some tasks but what's been maybe missing is that a lot of researchers speak to us and say, “Hey, I've got this cool idea for a benchmark, but I don't really want to implement all of the nitty-gritty infrastructure for the submissions, and the maintenance, and all those things.” And so we've been working with some really cool partners on hosting benchmarks on the Hub directly. So that then people in the research community can use the tooling that we have and then simplify the evaluation of these models. ### That is super interesting and powerful. **Lewis:** Maybe one thing to mention is that the whole evaluation question is a very subtle one. We know from previous benchmarks, such as SQuAD, a famous benchmark to measure how good models are at question answering, that many of these transformer models are good at taking shortcuts.
So, what they’re actually doing is they're getting a very high score on a benchmark which doesn't necessarily translate into the actual thing you were interested in, which was answering questions. And you have all these subtle failure modes where the models will maybe provide completely wrong answers, or answer when they should not answer at all. And so at the moment in the research community there's a very active and vigorous discussion about what role benchmarks play in the way we measure progress. But also, how do these benchmarks encode our values as a community? And one thing that I think Hugging Face can really offer the community here is the means to diversify the space of values because traditionally most of these research papers come from the U.S. which is a great country but it's a small slice of the human experience, right? ### What are some common mistakes machine learning engineers or teams make? **Lewis:** I can maybe tell you the ones that I've done. Probably a good representative of the rest of the things. So I think the biggest lesson I learned when I was starting out in the field is to use baseline models. A common mistake that I made, and then later saw other junior engineers make, is reaching for the fanciest state-of-the-art model. Although that may work, a lot of the time what happens is you introduce a lot of complexity into the problem and your state-of-the-art model may have a bug and you won't really know how to fix it because the model is so complex. A very common pattern in industry, and especially within NLP, is that you can actually get quite far with regular expressions and linear models like logistic regression, and these kinds of things will give you a good start. Then if you can build a better model then great, you should do that, but it's great to have a reference point. And then I think the second big lesson I’ve learned from building a lot of projects is that you can get a bit obsessed with the modeling part of the problem because that's the exciting bit when you're doing machine learning but there's this whole ecosystem. Especially if you work in a large company there'll be this whole ecosystem of services and things that are around your application. So the lesson there is you should really try to build something end to end that maybe doesn't even have any machine learning at all. But it's the scaffolding upon which you can build the rest of the system because you could spend all this time training an awesome model, and then you go, oh, oops. It doesn't integrate with the requirements we have in our application. And then you've wasted all this time. ### That's a good one! Don't over-engineer. Something I always try to keep in mind. **Lewis:** Exactly. And it's a natural thing I think as humans especially if you're nerdy you really want to find the most interesting way to do something and most of the time simple is better. ### If you could go back and do one thing differently at the beginning of your career in machine learning, what would it be? **Lewis:** Oh, wow. That's a tough one. Hmm. So, the reason this is a really hard question to answer is that now that I’m working at Hugging Face, it's the most fulfilling type of work that I've really done in my whole life. And the question is if I changed something when I started out maybe I wouldn't be here, right? It's one of those things where it's a tricky one in that sense.
I suppose one thing that maybe I would've done slightly differently is when I started out working as a data scientist you tend to develop the skills which are about mapping business problems to software problems or ultimately machine learning problems. And this is a really great skill to have. But what I later discovered is that my true driving passion is doing open source software development. So probably the thing I would have done differently would have been to start that much earlier. Because at the end of the day most open source is really driven by community members. So that would have been maybe a way to shortcut my path to doing this full-time. ### I love the idea that had you done something differently, maybe you wouldn't be at Hugging Face. **Lewis:** It’s like the butterfly effect movie, right? You go back in time and then you don't have any legs or something. ### Totally. Don't want to mess with a good thing! **Lewis:** Exactly. ### Rapid Fire Questions: ### Best piece of advice for someone looking to get into AI/Machine Learning? **Lewis:** Just start. Just start coding. Just start contributing if you want to do open-source. You can always find reasons not to do it but you just have to get your hands dirty. ### What are some of the industries you're most excited to see machine learning applied to? **Lewis:** As I mentioned before, I think the natural sciences are the area I’m most excited about. If we look at, say, the industrial side, I guess some of the development of new drugs through machine learning is very exciting. Personally, I'd be really happy if there were advancements in robotics where I could finally have a robot to fold my laundry because I really hate doing this and it would be nice if there was an automated way of handling that. ### Should people be afraid of AI taking over the world? **Lewis:** Maybe. It’s a tough one because I think we have reasons to think that we may create systems that are quite dangerous in the sense that they could be used to cause a lot of harm. An analogy is perhaps with weapons: you can use them within sports like archery and shooting, but you can also use them for war. One big risk is probably if we think about combining these techniques with the military perhaps this leads to some tricky situations. But, I'm not super worried about the Terminator. I'm more worried about, I don't know, a rogue agent on the financial stock market bankrupting the whole world. ### That's a good point. **Lewis:** Sorry, that's a bit dark. ### No, that was great. The next question is a follow-up on your folding laundry robot. When will AI-assisted robots be in homes everywhere? **Lewis:** Honest answer. I don't know. Everyone I know who's working on robotics says this is still an extremely difficult task in the sense that robotics hasn't quite experienced the same kind of revolutions that NLP and deep learning have had. But on the other hand, you can see some pretty exciting developments in the last year, especially around the idea of being able to transfer knowledge from a simulation into the real world. I think there's hope that in my lifetime I will have a laundry-folding robot. ### What have you been interested in lately? It could be a movie, a recipe, a podcast, literally anything. And I'm just curious what that is and how someone interested in that might find it or get started. **Lewis:** It's a great question. So for me, I like podcasts in general.
It’s my new way of reading books because I have a young baby so I'm just doing chores and listening at the same time. One podcast that really stands out recently is actually the [DeepMind podcast](https://www.deepmind.com/the-podcast) produced by Hannah Fry who's a mathematician in the UK and she gives this beautiful journey through not just what DeepMind does, but more generally, what deep learning and especially reinforcement learning does and how they're impacting the world. Listening to this podcast feels like you're listening to a BBC documentary because, you know, the English have such great accents and you feel really inspired because a lot of the work that she discusses in this podcast has a strong overlap with what we do at Hugging Face. You see this much bigger picture of trying to pave the way for a better future. It resonated strongly. And I just love it because the explanations are super clear and you can share it with your family and your friends and say, “Hey, if you want to know what I'm doing? This can give you a rough idea.” It gives you a very interesting insight into the DeepMind researchers and their backstory as well. ### I'm definitely going to give that a listen. [Update: It’s one of my new favorite podcasts. :) Thank you, Lewis!] ### What are some of your favorite Machine Learning papers? **Lewis:** Depends on how we measure this, but there's [one paper that stands out to me, which is quite an old paper](https://www.stat.berkeley.edu/~breiman/randomforest2001.pdf). It’s by the creator of random forests, Leo Breiman. Random forests is a very famous classic machine learning technique that's useful for tabular data that you see in industry and I had to teach random forests at university a year ago. And I was like, okay, I'll read this paper from the 2000s and see if I understand it. And it's a model of clarity. It's very short, and very clearly explains how the algorithm is implemented. You can basically just take this paper and implement the code very very easily. And that to me was a really nice example of how papers were written in medieval times. Whereas nowadays, most papers have this formulaic approach of, okay, here's an introduction, here's a table with some numbers that get better, and here's some random related work section. So, I think that's one that stands out to me a lot. But another one that's a little bit more recent is [a paper by DeepMind](https://www.nature.com/articles/d41586-021-03593-1) again on using machine learning techniques to prove fundamental theorems in areas like algebraic topology, which is a special branch of abstract mathematics. And at one point in my life, I used to work on these related topics. So, to me, it's a very exciting perspective of augmenting the knowledge that a mathematician would have in trying to narrow down the space of theorems that they might have to search for. I think this to me was surprising because a lot of the time I've been quite skeptical that machine learning will lead to this fundamental scientific insight beyond the obvious ones like making predictions. But this example showed that you can actually be quite creative and help mathematicians find new ideas. ### What is the meaning of life? **Lewis:** I think that the honest answer is, I don't know. And anyone who does tell you an answer is probably lying. That's a bit sarcastic.
I dunno, I guess being a scientist by training and especially a physicist, you develop this worldview that is very much that there isn't really some sort of deeper meaning to this. It's very much like the universe is quite random and I suppose the only thing you can take from that beyond being very sad is that you derive your own meaning, right? And most of the time this comes either from the work that you do or from the family or from your friends that you have. But I think when you find a way to derive your own meaning and discover what you do is actually interesting and meaningful that that's the best part. Life is very up and down, right? At least for me personally, the things that have always been very meaningful are generally in creating things. So, I used to be a musician, so that was a way of creating music for other people and there was great pleasure in doing that. And now I kind of, I guess, create code which is a form of creativity. ### Absolutely. I think that's beautiful, Lewis! Is there anything else you would like to share or mention before we sign off? **Lewis:** Maybe [buy my book](https://transformersbook.com/). ### It is so good! **Lewis:** [shows book featuring a parrot on the cover] Do you know the story about the parrot? ### I don't think so. **Lewis:** So when O’Reilly is telling you “We're going to get our illustrator now to design the cover,” it's a secret, right? They don't tell you what the logic is or you have no say in the matter. So, basically, the illustrator comes up with an idea and in one of the last chapters of the book we have a section where we basically train a GPT-2-like model on Python code, this was Thom's idea, and he decided to call it code parrot. I think the idea or the joke he had was that there's a lot of discussion in the community about this paper that Meg Mitchell and others worked on called ‘Stochastic Parrots’. And the idea was that you have these very powerful language models which seem to exhibit human-like traits in their writing as we discussed earlier but deep down maybe they're just doing some sort of parroting thing. You know, if you talk to a cockatoo it will swear at you or make jokes. That may not be a true measure of intelligence, right? So I think that the illustrator somehow maybe saw that and decided to put a parrot which I think is a perfect metaphor for the book. And the fact that there are transformers in it. ### Had no idea that that was the way O'Reilly's covers came about. They don't tell you and just pull context from the book and create something? **Lewis:** It seems like it. I mean, we don't really know the process. I'm just sort of guessing that maybe the illustrator was trying to get an idea and saw a few animals in the book. In one of the chapters we have a discussion about giraffes and zebras and stuff. But yeah I'm happy with the parrot cover. ### I love it. Well, it looks absolutely amazing. A lot of these types of books tend to be quite dry and technical and this one reads almost like a novel mixed with great applicable technical information, which is beautiful. **Lewis:** Thanks. Yeah, that’s one thing we realized afterward: because it was the first time we were writing a book, we thought we should be sort of serious, right? But if you sort of know me I'm like never really serious about anything. And in hindsight, we should have been even more silly in the book.
I had to control my humor in various places but maybe there'll be a second edition one day and then we can just inject it with memes. ### Please do, I look forward to that! **Lewis:** In fact, there is one meme in the book. We tried to sneak this in past the editor and have the DOGE dog inside the book, and we use a special vision transformer to try and classify what this meme is. ### So glad you got that one in there. Well done! Look forward to many more in the next edition. Thank you so much for joining me today. I really appreciate it. Where can our listeners find you online? **Lewis:** I'm fairly active on Twitter. You can just find me by my handle [@_lewtun](https://twitter.com/_lewtun). LinkedIn is a strange place and I'm not really on there very much. And of course, there's [Hugging Face](https://huggingface.co/lewtun), the [Hugging Face Forums](https://discuss.huggingface.co/), and [Discord](https://discuss.huggingface.co/t/join-the-hugging-face-discord/11263). ### Perfect. Thank you so much, Lewis. And I'll chat with you soon! **Lewis:** See ya, Britney. Bye. Thank you for listening to Machine Learning Experts! <a href="https://huggingface.co/support?utm_source=blog&utm_medium=blog&utm_campaign=ml_experts&utm_content=lewis_interview_article"><img src="/blog/assets/60_lewis_tunstall_interview/lewis-cta.png"></a>
CO2 Emissions and the 🤗 Hub: Leading the Charge
sasha
April 22, 2022
carbon-emissions-on-the-hub
community, guide
https://huggingface.co/blog/carbon-emissions-on-the-hub
# CO2 Emissions and the 🤗 Hub: Leading the Charge ## What are CO2 Emissions and why are they important? Climate change is one of the greatest challenges that we are facing and reducing emissions of greenhouse gases such as carbon dioxide (CO2) is an important part of tackling this problem. Training and deploying machine learning models will emit CO2 due to the energy usage of the computing infrastructures that are used: from GPUs to storage, it all needs energy to function and emits CO2 in the process. ![Image of recent Transformer models and their carbon footprints](assets/60_carbon_emissions_on_the_hub/transformer_carbon_footprints.png) > Pictured: Recent Transformer models and their carbon footprints The amount of CO2 emitted depends on different factors such as runtime, hardware used, and carbon intensity of the energy source. Using the tools described below will help you both track and report your own emissions (which is important to improve the transparency of our field as a whole!) and choose models based on their carbon footprint. ## How to calculate your own CO2 Emissions automatically with Transformers Before we begin, if you do not have the latest version of the `huggingface_hub` library on your system, please run the following: ```bash pip install huggingface_hub -U ``` ## How to find low-emission models using the Hugging Face Hub With the model now uploaded to the Hub, how can you search for models on the Hub while trying to be eco-friendly? Well, the `huggingface_hub` library has a new special parameter to perform this search: `emissions_thresholds`. All you need to do is specify a minimum or maximum number of grams, and all models that fall within that range will be returned. For example, we can search for all models that emitted a maximum of 100 grams of CO2 during training: ```python from huggingface_hub import HfApi api = HfApi() models = api.list_models(emissions_thresholds=(None, 100), cardData=True) len(models) >>> 191 ``` There were quite a few! This also helps to find smaller models, given they typically did not emit as much carbon during training. We can look at one up close to see that it does fit our threshold: ```python model = models[0] print(f'Model Name: {model.modelId}\nCO2 Emitted during training: {model.cardData["co2_eq_emissions"]}') >>> Model Name: esiebomajeremiah/autonlp-email-classification-657119381 CO2 Emitted during training: 3.516233232503715 ``` Similarly, we can search for a minimum value to find very large models that emitted a lot of CO2 during training: ```python models = api.list_models(emissions_thresholds=(500, None), cardData=True) len(models) >>> 10 ``` Now let's see exactly how much CO2 one of these emitted: ```python model = models[0] print(f'Model Name: {model.modelId}\nCO2 Emitted during training: {model.cardData["co2_eq_emissions"]}') >>> Model Name: Maltehb/aelaectra-danish-electra-small-cased CO2 Emitted during training: 4009.5 ``` That's a lot of CO2! As you can see, in just a few lines of code we can quickly vet models we may want to use to make sure we're being environmentally cognizant! ## How to Report Your Carbon Emissions with `transformers` If you're using `transformers`, you can automatically track and report carbon emissions thanks to the `codecarbon` integration. If you've installed `codecarbon` on your machine, the `Trainer` object will automatically add the `CodeCarbonCallback` while training, which will store carbon emissions data for you as you train. So, if you run something like this...
```python from datasets import load_dataset from transformers import AutoModelForSequenceClassification, AutoTokenizer, Trainer, TrainingArguments ds = load_dataset("imdb") model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2) tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") def tokenize_function(examples): return tokenizer(examples["text"], padding="max_length", truncation=True) small_train_dataset = ds["train"].shuffle(seed=42).select(range(1000)).map(tokenize_function, batched=True) small_eval_dataset = ds["test"].shuffle(seed=42).select(range(1000)).map(tokenize_function, batched=True) training_args = TrainingArguments( "codecarbon-text-classification", num_train_epochs=4, push_to_hub=True ) trainer = Trainer( model=model, args=training_args, train_dataset=small_train_dataset, eval_dataset=small_eval_dataset, ) trainer.train() ``` ...you'll be left with a file within the `codecarbon-text-classification` directory called `emissions.csv`. This file will keep track of the carbon emissions across different training runs. Then, when you're ready, you can take the emissions from the run you used to train your final model and include that in its model card. 📝 An example of this data being included at the top of the model card is shown below: ![Visual of organizing the co2_eq_emissions in a Model Card file](assets/60_carbon_emissions_on_the_hub/metadata_example.png) For more references on the metadata format for `co2_eq_emissions` see [the hub docs](https://huggingface.co/docs/hub/models-cards-co2). A short sketch of reading the logged values back out of `emissions.csv` follows the reading list below. ### Further readings - Rolnick et al. (2019) - [Tackling Climate Change with Machine Learning](https://arxiv.org/pdf/1906.05433.pdf) - Strubell et al. (2019) - [Energy and Policy Considerations for Deep Learning in NLP](https://arxiv.org/pdf/1906.02243.pdf) - Schwartz et al. (2020) - [Green AI](https://dl.acm.org/doi/abs/10.1145/3381831)
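As promised above, here is a minimal sketch of reading the logged emissions back before copying them into the model card metadata. It is not part of the original post: it assumes the `codecarbon-text-classification/emissions.csv` path from the example and `codecarbon`'s default CSV schema, where the `emissions` column is reported in kg of CO2eq.

```python
# Sketch (assumption: codecarbon's default CSV schema with an "emissions" column in kg CO2eq).
import pandas as pd

# One row is logged per tracked training run.
emissions_log = pd.read_csv("codecarbon-text-classification/emissions.csv")
total_kg = emissions_log["emissions"].sum()
print(f"Total logged emissions: {total_kg:.4f} kg CO2eq ({total_kg * 1000:.1f} g)")
```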
Supercharged Customer Service with Machine Learning
patrickvonplaten
April 25, 2022
supercharge-customer-service-with-machine-learning
guide, nlp
https://huggingface.co/blog/supercharge-customer-service-with-machine-learning
# Supercharged Customer Service with Machine Learning <a target="_blank" href="https://github.com/patrickvonplaten/notebooks/blob/master/Using_%F0%9F%A4%97_Transformers_and_%F0%9F%A4%97_Datasets_filter_customer_feedback_filtering.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> In this blog post, we will simulate a real-world customer service use case and use machine learning tools of the Hugging Face ecosystem to address it. We strongly recommend using this notebook as a template/example to solve **your** real-world use case. ## Defining Task, Dataset & Model Before jumping into the actual coding part, it's important to have a clear definition of the use case that you would like to automate or partly automate. A clear definition of the use case helps identify the most suitable task, dataset to use, and model to apply for your use case. ### Defining your NLP task Alright, let's dive into a hypothetical problem we wish to solve using natural language processing models. Let's assume we are selling a product and our customer support team receives thousands of messages including feedback, complaints, and questions which ideally should all be answered. Quickly, it becomes obvious that customer support is by no means able to reply to every message. Thus, we decide to only respond to the most unsatisfied customers and aim to answer 100% of those messages, as these are likely the most urgent compared to the other neutral and positive messages. Assuming that a) messages of very unsatisfied customers represent only a fraction of all messages and b) that we can filter out unsatisfied messages in an automated way, customer support should be able to reach this goal. To filter out unsatisfied messages in an automated way, we plan on applying natural language processing technologies. The first step is to map our use case - *filtering out unsatisfied messages* - to a machine learning task. The [tasks page on the Hugging Face Hub](https://huggingface.co/tasks) is a great place to get started to see which task best fits a given scenario. Each task has a detailed description and potential use cases. The task of finding messages of the most unsatisfied customers can be modeled as a text classification task: Classify a message into one of the following 5 categories: *very unsatisfied*, *unsatisfied*, *neutral*, *satisfied*, **or** *very satisfied*. ### Finding suitable datasets Having decided on the task, next, we should find the data the model will be trained on. This is usually more important for the performance of your use case than picking the right model architecture. Keep in mind that a model is **only as good as the data it has been trained on**. Thus, we should be very careful when curating and/or selecting the dataset. Since we consider the hypothetical use case of *filtering out unsatisfied messages*, let's look into what datasets are available. For your real-world use case, it is **very likely** that you have internal data that best represents the actual data your NLP system is supposed to handle. Therefore, you should use such internal data to train your NLP system. It can nevertheless be helpful to also include publicly available data to improve the generalizability of your model. Let's take a look at all available Datasets on the [Hugging Face Hub](https://huggingface.co/datasets). On the left side, you can filter the datasets according to *Task Categories* as well as *Tasks* which are more specific.
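The same task-based filtering can also be done programmatically instead of through the web UI. The following is only a sketch, not part of the original notebook: it assumes a recent `huggingface_hub` release in which `list_models`/`list_datasets` accept string tag filters of the form `task_categories:...` and `task_ids:...`.

```python
# Sketch (assumption: string tag filters are accepted by list_datasets in your huggingface_hub version).
from huggingface_hub import HfApi

api = HfApi()
# List the five most downloaded datasets tagged for sentiment classification.
datasets = api.list_datasets(
    filter=["task_categories:text-classification", "task_ids:sentiment-classification"],
    sort="downloads",
    direction=-1,
    limit=5,
)
for dataset_info in datasets:
    print(dataset_info.id)
```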
Our use case corresponds to *Text Classification* -> *Sentiment Analysis* so let's select [these filters](https://huggingface.co/datasets?task_categories=task_categories:text-classification&task_ids=task_ids:sentiment-classification&sort=downloads). We are left with *ca.* 80 datasets at the time of writing this notebook. Two aspects should be evaluated when picking a dataset: - **Quality**: Is the dataset of high quality? More specifically: Does the data correspond to the data you expect to deal with in your use case? Is the data diverse, unbiased, ...? - **Size**: How big is the dataset? Usually, one can safely say the bigger the dataset, the better. It's quite tricky to evaluate whether a dataset is of high quality efficiently, and it's even more challenging to know whether and how the dataset is biased. An efficient and reasonable heuristic for high quality is to look at the download statistics. The more downloads, the more usage, the higher chance that the dataset is of high quality. The size is easy to evaluate as it can usually be quickly read off the dataset card. Let's take a look at the most downloaded datasets: - [Glue](https://huggingface.co/datasets/glue) - [Amazon polarity](https://huggingface.co/datasets/amazon_polarity) - [Tweet eval](https://huggingface.co/datasets/tweet_eval) - [Yelp review full](https://huggingface.co/datasets/yelp_review_full) - [Amazon reviews multi](https://huggingface.co/datasets/amazon_reviews_multi) Now we can inspect those datasets in more detail by reading through the dataset card, which ideally should give all relevant and important information. In addition, the [dataset viewer](https://huggingface.co/datasets/glue/viewer/cola/test) is an incredibly powerful tool to inspect whether the data suits your use case. Let's quickly go over the dataset cards of the datasets above: - *GLUE* is a collection of small datasets that primarily serve to compare new model architectures for researchers. The datasets are too small and don't correspond enough to our use case. - *Amazon polarity* is a huge and well-suited dataset for customer feedback since the data deals with customer reviews. However, it only has binary labels (positive/negative), whereas we are looking for more granularity in the sentiment classification. - *Tweet eval* uses different emojis as labels that cannot easily be mapped to a scale going from unsatisfied to satisfied. - *Amazon reviews multi* seems to be the most suitable dataset here. We have sentiment labels ranging from 1-5 corresponding to 1-5 stars on Amazon. These labels can be mapped to *very unsatisfied, unsatisfied, neutral, satisfied, very satisfied*. We have inspected some examples on [the dataset viewer](https://huggingface.co/datasets/amazon_reviews_multi/viewer/en/train) to verify that the reviews look very similar to actual customer feedback reviews, so this seems like a very good dataset. In addition, each review has a `product_category` label, so we could even go as far as to only use reviews of a product category corresponding to the one we are working in (a short filtering sketch follows after this comparison). The dataset is multi-lingual, but we are just interested in the English version for now. - *Yelp review full* looks like a very suitable dataset. It's large and contains product reviews and sentiment labels from 1 to 5. Sadly, the dataset viewer is not working here, and the dataset card is also relatively sparse, requiring some more time to inspect the dataset. At this point, we should read the paper, but given the time constraint of this blog post, we'll choose to go for *Amazon reviews multi*.
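As a side note on the `product_category` field mentioned above, restricting the data to a single category is a one-liner with 🤗 Datasets. This is only an illustration and not something we do in this post; the category name `"home"` below is an example value, so check the dataset viewer for the exact category names.

```python
from datasets import load_dataset

# Illustration only: keep reviews from one product category.
# "home" is an example value; the real category names can be read off the dataset viewer.
english_reviews = load_dataset("amazon_reviews_multi", "en", split="train")
home_reviews = english_reviews.filter(lambda example: example["product_category"] == "home")
print(len(home_reviews))
```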
In conclusion, let's focus on the [*Amazon reviews multi*](https://huggingface.co/datasets/amazon_reviews_multi) dataset considering all training examples. As a final note, we recommend making use of the Hub's dataset functionality even when working with private datasets. The Hugging Face Hub, Transformers, and Datasets are flawlessly integrated, which makes it trivial to use them in combination when training models. In addition, the Hugging Face Hub offers: - [A dataset viewer for every dataset](https://huggingface.co/datasets/amazon_reviews_multi) - [Easy demoing of every model using widgets](https://huggingface.co/docs/hub/models-widgets) - [Private and Public models](https://huggingface.co/docs/hub/repositories-settings) - [Git version control for repositories](https://huggingface.co/docs/hub/repositories-getting-started) - [Highest security mechanisms](https://huggingface.co/docs/hub/security) ### Finding a suitable model Having decided on the task and the dataset that best describes our use case, we can now look into choosing a model to be used. Most likely, you will have to fine-tune a pretrained model for your own use case, but it is worth checking whether the Hub already has suitable fine-tuned models. In this case, you might reach a higher performance by just continuing to fine-tune such a model on your dataset. Let's take a look at all models that have been fine-tuned on Amazon Reviews Multi. You can find the list of models in the bottom right corner - clicking on *Browse models trained on this dataset* you can see [a list of all models fine-tuned on the dataset that are publicly available](https://huggingface.co/models?dataset=dataset:amazon_reviews_multi). Note that we are only interested in the English version of the dataset because our customer feedback will only be in English. Most of the most downloaded models are trained on the multi-lingual version of the dataset and those that don't seem to be multi-lingual have very little information or poor performance. At this point, it might be more sensible to fine-tune a purely pretrained model instead of using one of the already fine-tuned ones shown in the link above. Alright, the next step now is to find a suitable pretrained model to be used for fine-tuning. This is actually more difficult than it seems given the large number of pretrained and fine-tuned models that are on the [Hugging Face Hub](https://huggingface.co/models). The best option is usually to simply try out a variety of different models to see which one performs best. We still haven't found the perfect way of comparing different model checkpoints to each other at Hugging Face, but we provide some resources that are worth looking into: - The [model summary](https://huggingface.co/docs/transformers/model_summary) gives a short overview of different model architectures. - A task-specific search on the Hugging Face Hub, *e.g.* [a search on text-classification models](https://huggingface.co/models), shows you the most downloaded checkpoints which is also an indication of how well those checkpoints perform. However, both of the above resources are currently suboptimal. The model summary is not always kept up to date by the authors. The speed at which new model architectures are released and old model architectures become outdated makes it extremely difficult to have an up-to-date summary of all model architectures. Similarly, it doesn't necessarily mean that the most downloaded model checkpoint is the best one. E.g.
[`bert-base-uncased`](https://huggingface.co/bert-base-uncased) is amongst the most downloaded model checkpoints but is not the best performing checkpoint anymore. The best approach is to try out various model architectures, stay up to date with new model architectures by following experts in the field, and check well-known leaderboards. For text-classification, the important benchmarks to look at are [GLUE](https://gluebenchmark.com/leaderboard) and [SuperGLUE](https://super.gluebenchmark.com/leaderboard). Both benchmarks evaluate pretrained models on a variety of text-classification tasks, such as grammatical correctness, natural language inference, Yes/No question answering, etc..., which are quite similar to our target task of sentiment analysis. Thus, it is reasonable to choose one of the leading models of these benchmarks for our task. At the time of writing this blog post, the best performing models are very large models containing more than 10 billion parameters most of which are not open-sourced, *e.g.* *ST-MoE-32B*, *Turing NLR v5*, or *ERNIE 3.0*. One of the top-ranking models that is easily accessible is [DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta). Therefore, let's try out DeBERTa's newest base version - *i.e.* [`microsoft/deberta-v3-base`](https://huggingface.co/microsoft/deberta-v3-base). ## Training / Fine-tuning a model with 🤗 Transformers and 🤗 Datasets In this section, we will jump into the technical details of how to fine-tune a model end-to-end to be able to automatically filter out very unsatisfied customer feedback messages. Cool! Let's start by installing all necessary pip packages and setting up our code environment, then look into preprocessing the dataset, and finally start training the model. The following notebook can be run online in Google Colab Pro with the GPU runtime environment enabled. ### Install all necessary packages To begin with, let's install [`git-lfs`](https://git-lfs.github.com/) so that we can automatically upload our trained checkpoints to the Hub during training. ```bash apt install git-lfs ``` Also, we install the 🤗 Transformers and 🤗 Datasets libraries to run this notebook. Since we will be using [DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta-v2#debertav2) in this blog post, we also need to install the [`sentencepiece`](https://github.com/google/sentencepiece) library for its tokenizer. ```bash pip install datasets transformers[sentencepiece] ``` Next, let's log in to our [Hugging Face account](https://huggingface.co/join) so that models are uploaded correctly under your name tag. ```python from huggingface_hub import notebook_login notebook_login() ``` **Output:** ``` Login successful Your token has been saved to /root/.huggingface/token Authenticated through git-credential store but this isn't the helper defined on your machine. You might have to re-authenticate when pushing to the Hugging Face Hub. Run the following command in your terminal in case you want to set this credential helper as the default git config --global credential.helper store ``` ### Preprocess the dataset Before we can start training the model, we should bring the dataset into a format that is understandable by the model. Thankfully, the 🤗 Datasets library makes this extremely easy as you will see in the following cells.
The `load_dataset` function loads the dataset, nicely arranges it into predefined attributes, such as `review_body` and `stars`, and finally saves the newly arranged data using the [arrow format](https://arrow.apache.org/#:~:text=Format,data%20access%20without%20serialization%20overhead.) on disk. The arrow format allows for fast and memory-efficient data reading and writing. Let's load and prepare the English version of the `amazon_reviews_multi` dataset. ```python from datasets import load_dataset amazon_review = load_dataset("amazon_reviews_multi", "en") ``` **Output:** ``` Downloading and preparing dataset amazon_reviews_multi/en (download: 82.11 MiB, generated: 58.69 MiB, post-processed: Unknown size, total: 140.79 MiB) to /root/.cache/huggingface/datasets/amazon_reviews_multi/en/1.0.0/724e94f4b0c6c405ce7e476a6c5ef4f87db30799ad49f765094cf9770e0f7609... Dataset amazon_reviews_multi downloaded and prepared to /root/.cache/huggingface/datasets/amazon_reviews_multi/en/1.0.0/724e94f4b0c6c405ce7e476a6c5ef4f87db30799ad49f765094cf9770e0f7609. Subsequent calls will reuse this data. ``` Great, that was fast 🔥. Let's take a look at the structure of the dataset. ```python print(amazon_review) ``` **Output:** ``` DatasetDict({ train: Dataset({ features: ['review_id', 'product_id', 'reviewer_id', 'stars', 'review_body', 'review_title', 'language', 'product_category'], num_rows: 200000 }) validation: Dataset({ features: ['review_id', 'product_id', 'reviewer_id', 'stars', 'review_body', 'review_title', 'language', 'product_category'], num_rows: 5000 }) test: Dataset({ features: ['review_id', 'product_id', 'reviewer_id', 'stars', 'review_body', 'review_title', 'language', 'product_category'], num_rows: 5000 }) }) ``` We have 200,000 training examples as well as 5000 validation and test examples. This sounds reasonable for training! We're only really interested in the input being the `"review_body"` column and the target being the `"stars"` column. Let's check out a random example. ```python random_id = 34 print("Stars:", amazon_review["train"][random_id]["stars"]) print("Review:", amazon_review["train"][random_id]["review_body"]) ``` **Output:** ``` Stars: 1 Review: This product caused severe burning of my skin. I have used other brands with no problems ``` The dataset is in a human-readable format, but now we need to transform it into a "machine-readable" format. Let's define the model repository which includes all utils necessary to preprocess and fine-tune the checkpoint we decided on. ```python model_repository = "microsoft/deberta-v3-base" ``` Next, we load the tokenizer of the model repository, which is [DeBERTa's tokenizer](https://huggingface.co/docs/transformers/model_doc/deberta-v2#transformers.DebertaV2Tokenizer). ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained(model_repository) ``` As mentioned before, we will use the `"review_body"` as the model's input and `"stars"` as the model's target. Next, we make use of the tokenizer to transform the input into a sequence of token ids that can be understood by the model. The tokenizer does exactly this and can also help you limit your input data to a certain length so that you don't run into memory issues. Here, we limit the maximum length to 128 tokens which in the case of DeBERTa corresponds to roughly 100 words which in turn corresponds to *ca.* 5-7 sentences.
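As a quick sanity check that is not part of the original notebook, one can tokenize a random sample of reviews and look at the length distribution to confirm that 128 tokens is a reasonable cut-off:

```python
# Quick sanity check (not in the original post): how many reviews fit into 128 tokens?
import numpy as np

sample = amazon_review["train"].shuffle(seed=0).select(range(2000))
lengths = [len(tokenizer(text)["input_ids"]) for text in sample["review_body"]]
print(f"95th percentile: {np.percentile(lengths, 95):.0f} tokens")
print(f"Share of sampled reviews within 128 tokens: {np.mean(np.array(lengths) <= 128):.2%}")
```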
Looking at the [dataset viewer](https://huggingface.co/datasets/amazon_reviews_multi/viewer/en/test) again, we can see that this covers pretty much all training examples. **Important**: This doesn't mean that our model cannot handle longer input sequences, it just means that we use a maximum length of 128 for training since it covers 99% of our training examples and we don't want to waste memory. Transformer models have been shown to be very good at generalizing to longer sequences after training. If you want to learn more about tokenization in general, please have a look at [the Tokenizers docs](https://huggingface.co/course/chapter6/1?fw=pt). The labels are easy to transform as they already correspond to numbers in their raw form, *i.e.* the range from 1 to 5. Here we just shift the labels into the range 0 to 4 since indexes usually start at 0. Great, let's pour our thoughts into some code. We will define a `preprocess_function` that we'll apply to each data sample. ```python def preprocess_function(example): output_dict = tokenizer(example["review_body"], max_length=128, truncation=True) output_dict["labels"] = [e - 1 for e in example["stars"]] return output_dict ``` To apply this function to all data samples in our dataset, we use the [`map`](https://huggingface.co/docs/datasets/master/en/package_reference/main_classes#datasets.Dataset.map) method of the `amazon_review` object we created earlier. This will apply the function on all the elements of all the splits in `amazon_review`, so our training, validation, and testing data will be preprocessed in one single command. We run the mapping function in `batched=True` mode to speed up the process and also remove the original columns since we don't need them anymore for training. ```python tokenized_datasets = amazon_review.map(preprocess_function, batched=True, remove_columns=amazon_review["train"].column_names) ``` Let's take a look at the new structure. ```python tokenized_datasets ``` **Output:** ``` DatasetDict({ train: Dataset({ features: ['input_ids', 'token_type_ids', 'attention_mask', 'labels'], num_rows: 200000 }) validation: Dataset({ features: ['input_ids', 'token_type_ids', 'attention_mask', 'labels'], num_rows: 5000 }) test: Dataset({ features: ['input_ids', 'token_type_ids', 'attention_mask', 'labels'], num_rows: 5000 }) }) ``` We can see that the outer layer of the structure stayed the same but the naming of the columns has changed. Let's take a look at the same random example we looked at previously, only now it's preprocessed. ```python print("Input IDS:", tokenized_datasets["train"][random_id]["input_ids"]) print("Labels:", tokenized_datasets["train"][random_id]["labels"]) ``` **Output:** ``` Input IDS: [1, 329, 714, 2044, 3567, 5127, 265, 312, 1158, 260, 273, 286, 427, 340, 3006, 275, 363, 947, 2] Labels: 0 ``` Alright, the input text is transformed into a sequence of integers which can be transformed to word embeddings by the model, and the label index is simply shifted down by 1. ### Fine-tune the model Having preprocessed the dataset, next we can fine-tune the model. We will make use of the popular [Hugging Face Trainer](https://huggingface.co/docs/transformers/main/en/main_classes/trainer) which allows us to start training in just a couple of lines of code. The `Trainer` can be used for more or less all tasks in PyTorch and is extremely convenient by taking care of a lot of boilerplate code needed for training.
Let's start by loading the model checkpoint using the convenient [`AutoModelForSequenceClassification`](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.AutoModelForSequenceClassification). Since the checkpoint of the model repository is just a pretrained checkpoint we should define the size of the classification head by passing `num_labels=5` (since we have 5 sentiment classes). ```python from transformers import AutoModelForSequenceClassification model = AutoModelForSequenceClassification.from_pretrained(model_repository, num_labels=5) ``` ``` Some weights of the model checkpoint at microsoft/deberta-v3-base were not used when initializing DebertaV2ForSequenceClassification: ['mask_predictions.classifier.bias', 'mask_predictions.LayerNorm.bias', 'mask_predictions.dense.weight', 'mask_predictions.dense.bias', 'mask_predictions.LayerNorm.weight', 'lm_predictions.lm_head.dense.bias', 'lm_predictions.lm_head.bias', 'lm_predictions.lm_head.LayerNorm.weight', 'lm_predictions.lm_head.dense.weight', 'lm_predictions.lm_head.LayerNorm.bias', 'mask_predictions.classifier.weight'] - This IS expected if you are initializing DebertaV2ForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing DebertaV2ForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of DebertaV2ForSequenceClassification were not initialized from the model checkpoint at microsoft/deberta-v3-base and are newly initialized: ['pooler.dense.bias', 'classifier.weight', 'classifier.bias', 'pooler.dense.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. ``` Next, we load a data collator. A [data collator](https://huggingface.co/docs/transformers/main_classes/data_collator) is responsible for making sure each batch is correctly padded during training, which should happen dynamically since training samples are reshuffled before each epoch. ```python from transformers import DataCollatorWithPadding data_collator = DataCollatorWithPadding(tokenizer=tokenizer) ``` During training, it is important to monitor the performance of the model on a held-out validation set. To do so, we should define a `compute_metrics` function and pass it to the `Trainer`; it is then called at each validation step during training. The simplest metric for the text classification task is *accuracy*, which simply states what percentage of the samples were correctly classified. Using the *accuracy* metric might be problematic, however, if the validation or test data is very unbalanced. Let's verify quickly that this is not the case by counting the occurrences of each label. ```python from collections import Counter print("Validation:", Counter(tokenized_datasets["validation"]["labels"])) print("Test:", Counter(tokenized_datasets["test"]["labels"])) ``` **Output:** ``` Validation: Counter({0: 1000, 1: 1000, 2: 1000, 3: 1000, 4: 1000}) Test: Counter({0: 1000, 1: 1000, 2: 1000, 3: 1000, 4: 1000}) ``` The validation and test data sets are as balanced as they can be, so we can safely use accuracy here! Let's load the [accuracy metric](https://huggingface.co/metrics/accuracy) via the datasets library.
```python from datasets import load_metric accuracy = load_metric("accuracy") ``` Next, we define the `compute_metrics` function, which will be applied to the predicted outputs of the model. These outputs are of type [`EvalPrediction`](https://huggingface.co/docs/transformers/main/en/internal/trainer_utils#transformers.EvalPrediction) and therefore expose the model's predictions and the gold labels. We compute the predicted label class by taking the `argmax` of the model's prediction before passing it alongside the gold labels to the accuracy metric. ```python import numpy as np def compute_metrics(pred): pred_logits = pred.predictions pred_classes = np.argmax(pred_logits, axis=-1) labels = np.asarray(pred.label_ids) acc = accuracy.compute(predictions=pred_classes, references=labels) return {"accuracy": acc["accuracy"]} ``` Great, now all components required for training are ready and all that's left to do is to define the hyper-parameters of the `Trainer`. We need to make sure that the model checkpoints are uploaded to the Hugging Face Hub during training. By setting `push_to_hub=True`, this is done automatically at every `save_steps` via the convenient [`push_to_hub`](https://huggingface.co/docs/transformers/main/en/main_classes/trainer#transformers.Trainer.push_to_hub) method. In addition, we define some standard hyper-parameters such as learning rate, warm-up steps and training epochs. We will log the loss every 500 steps and run evaluation every 5000 steps. ```python from transformers import TrainingArguments training_args = TrainingArguments( output_dir="deberta_amazon_reviews_v1", num_train_epochs=2, learning_rate=2e-5, warmup_steps=200, logging_steps=500, save_steps=5000, eval_steps=5000, push_to_hub=True, evaluation_strategy="steps", ) ``` Putting it all together, we can finally instantiate the Trainer by passing all required components. We'll use the `"validation"` split as the held-out dataset during training. ```python from transformers import Trainer trainer = Trainer( args=training_args, compute_metrics=compute_metrics, model=model, tokenizer=tokenizer, data_collator=data_collator, train_dataset=tokenized_datasets["train"], eval_dataset=tokenized_datasets["validation"] ) ``` The trainer is ready to go 🚀 You can start training by calling `trainer.train()`. ```python train_metrics = trainer.train().metrics trainer.save_metrics("train", train_metrics) ``` **Output:** ``` ***** Running training ***** Num examples = 200000 Num Epochs = 2 Instantaneous batch size per device = 8 Total train batch size (w.
parallel, distributed & accumulation) = 8 Gradient Accumulation steps = 1 Total optimization steps = 50000 ``` **Output:** <div> <table><p> <tbody> <tr style="text-align: left;"> <td>Step</td> <td>Training Loss</td> <td>Validation Loss</td> <td>Accuracy</td> </tr> <tr> <td>5000</td> <td>0.931200</td> <td>0.979602</td> <td>0.585600</td> </tr> <tr> <td>10000</td> <td>0.931600</td> <td>0.933607</td> <td>0.597400</td> </tr> <tr> <td>15000</td> <td>0.907600</td> <td>0.917062</td> <td>0.602600</td> </tr> <tr> <td>20000</td> <td>0.902400</td> <td>0.919414</td> <td>0.604600</td> </tr> <tr> <td>25000</td> <td>0.879400</td> <td>0.910928</td> <td>0.608400</td> </tr> <tr> <td>30000</td> <td>0.806700</td> <td>0.933923</td> <td>0.609200</td> </tr> <tr> <td>35000</td> <td>0.826800</td> <td>0.907260</td> <td>0.616200</td> </tr> <tr> <td>40000</td> <td>0.820500</td> <td>0.904160</td> <td>0.615800</td> </tr> <tr> <td>45000</td> <td>0.795000</td> <td>0.918947</td> <td>0.616800</td> </tr> <tr> <td>50000</td> <td>0.783600</td> <td>0.907572</td> <td>0.618400</td> </tr> </tbody> </table><p> </div> **Output:** ``` ***** Running Evaluation ***** Num examples = 5000 Batch size = 8 Saving model checkpoint to deberta_amazon_reviews_v1/checkpoint-50000 Configuration saved in deberta_amazon_reviews_v1/checkpoint-50000/config.json Model weights saved in deberta_amazon_reviews_v1/checkpoint-50000/pytorch_model.bin tokenizer config file saved in deberta_amazon_reviews_v1/checkpoint-50000/tokenizer_config.json Special tokens file saved in deberta_amazon_reviews_v1/checkpoint-50000/special_tokens_map.json added tokens file saved in deberta_amazon_reviews_v1/checkpoint-50000/added_tokens.json Training completed. Do not forget to share your model on huggingface.co/models =) ``` Cool, we see that the model seems to learn something! Training loss and validation loss are going down and the accuracy also ends up being well over random chance (20%). Interestingly, we see an accuracy of around **58.6 %** after only 5000 steps which doesn't improve that much anymore afterward. Choosing a bigger model or training for longer would have probably given better results here, but that's good enough for our hypothetical use case! Alright, finally let's upload the model checkpoint to the Hub. ```python trainer.push_to_hub() ``` **Output:** ``` Saving model checkpoint to deberta_amazon_reviews_v1 Configuration saved in deberta_amazon_reviews_v1/config.json Model weights saved in deberta_amazon_reviews_v1/pytorch_model.bin tokenizer config file saved in deberta_amazon_reviews_v1/tokenizer_config.json Special tokens file saved in deberta_amazon_reviews_v1/special_tokens_map.json added tokens file saved in deberta_amazon_reviews_v1/added_tokens.json Several commits (2) will be pushed upstream. The progress bars may be unreliable. ``` ### Evaluate / Analyse the model Now that we have fine-tuned the model we need to be very careful about analyzing its performance. Note that canonical metrics, such as *accuracy*, are useful to get a general picture about your model's performance, but it might not be enough to evaluate how well the model performs on your actual use case. The better approach is to find a metric that best describes the actual use case of the model and measure exactly this metric during and after training. Let's dive into evaluating the model 🤿. 
The model has been uploaded to the Hub under [`deberta_v3_amazon_reviews`](https://huggingface.co/patrickvonplaten/deberta_v3_amazon_reviews) after training, so as a first step, let's download it from there again.

```python
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("patrickvonplaten/deberta_v3_amazon_reviews")
```

The Trainer is not only an excellent class for training a model, but also for evaluating a model on a dataset. Let's instantiate the trainer with the same instances and functions as before, but this time there is no need to pass a training dataset.

```python
trainer = Trainer(
    args=training_args,
    compute_metrics=compute_metrics,
    model=model,
    tokenizer=tokenizer,
    data_collator=data_collator,
)
```

We use the Trainer's [`predict`](https://huggingface.co/docs/transformers/main/en/main_classes/trainer#transformers.Trainer.predict) function to evaluate the model on the test dataset using the same metric.

```python
prediction_metrics = trainer.predict(tokenized_datasets["test"]).metrics
prediction_metrics
```

**Output:**
```
***** Running Prediction *****
  Num examples = 5000
  Batch size = 8
```

**Output:**
```
{'test_accuracy': 0.608,
 'test_loss': 0.9637690186500549,
 'test_runtime': 21.9574,
 'test_samples_per_second': 227.714,
 'test_steps_per_second': 28.464}
```

The results are very similar to the performance on the validation dataset, which is usually a good sign as it shows that the model didn't overfit to the validation data. However, 60% accuracy is far from perfect on a 5-class classification problem. Do we even need very high accuracy for all classes? Since we are mostly concerned with very negative customer feedback, let's just focus on how well the model performs at classifying reviews of the most unsatisfied customers. We also decide to help the model a bit - all feedback classified as either **very unsatisfied** or **unsatisfied** will be handled by us - so that we catch close to 99% of the **very unsatisfied** messages. At the same time, we also measure how many **unsatisfied** messages we can answer this way and how much unnecessary work we do by answering messages of neutral, satisfied, and very satisfied customers.

Great, let's write a new `compute_metrics` function.

```python
import numpy as np

def compute_metrics(pred):
    pred_logits = pred.predictions
    pred_classes = np.argmax(pred_logits, axis=-1)
    labels = np.asarray(pred.label_ids)

    # First, let's compute the % of very unsatisfied messages we can catch
    very_unsatisfied_label_idx = (labels == 0)
    very_unsatisfied_pred = pred_classes[very_unsatisfied_label_idx]

    # Map predictions 0 and 1 to 0; every other prediction becomes > 0
    very_unsatisfied_pred = very_unsatisfied_pred * (very_unsatisfied_pred - 1)

    # Count how many entries are 0 -> that's the "very unsatisfied"-accuracy
    true_positives = sum(very_unsatisfied_pred == 0) / len(very_unsatisfied_pred)

    # Second, let's compute how many satisfied messages we unnecessarily reply to
    satisfied_label_idx = (labels > 1)
    satisfied_pred = pred_classes[satisfied_label_idx]

    # How many predictions are labeled as unsatisfied over all satisfied messages?
    false_positives = sum(satisfied_pred <= 1) / len(satisfied_pred)

    return {"%_unsatisfied_replied": round(true_positives, 2), "%_satisfied_incorrectly_labels": round(false_positives, 2)}
```

We again instantiate the `Trainer` to easily run the evaluation.
```python
trainer = Trainer(
    args=training_args,
    compute_metrics=compute_metrics,
    model=model,
    tokenizer=tokenizer,
    data_collator=data_collator,
)
```

And let's run the evaluation again with our new metric computation, which is better suited for our use case.

```python
prediction_metrics = trainer.predict(tokenized_datasets["test"]).metrics
prediction_metrics
```

**Output:**
```
***** Running Prediction *****
  Num examples = 5000
  Batch size = 8
```

**Output:**
```
{'test_%_satisfied_incorrectly_labels': 0.11733333333333333,
 'test_%_unsatisfied_replied': 0.949,
 'test_loss': 0.9637690186500549,
 'test_runtime': 22.8964,
 'test_samples_per_second': 218.375,
 'test_steps_per_second': 27.297}
```

Cool! This already paints a pretty nice picture. We catch around 95% of **very unsatisfied** customers automatically, at the cost of wasting our efforts on roughly 12% of the satisfied messages.

Let's do some quick math. We receive around 10,000 messages daily, of which we expect ca. 500 to be very negative. Instead of having to answer all 10,000 messages, using this automatic filtering we would only need to look into 500 + 0.12 \* 10,000 = 1700 messages, and would reply to 475 of them while incorrectly missing only 5% of the very unsatisfied messages. Pretty nice - an 83% reduction in human effort while missing only 5% of very unsatisfied customers! Obviously, these numbers don't represent the exact value gained in an actual use case, but with enough high-quality training data from your real-world example you could come close to it!

Let's save the results

```python
trainer.save_metrics("prediction", prediction_metrics)
```

and again upload everything to the Hub.

```python
trainer.push_to_hub()
```

**Output:**
```
Saving model checkpoint to deberta_amazon_reviews_v1
Configuration saved in deberta_amazon_reviews_v1/config.json
Model weights saved in deberta_amazon_reviews_v1/pytorch_model.bin
tokenizer config file saved in deberta_amazon_reviews_v1/tokenizer_config.json
Special tokens file saved in deberta_amazon_reviews_v1/special_tokens_map.json
added tokens file saved in deberta_amazon_reviews_v1/added_tokens.json
To https://huggingface.co/patrickvonplaten/deberta_amazon_reviews_v1
   599b891..ad77e6d  main -> main
Dropping the following result as it does not have all the necessary fields:
{'task': {'name': 'Text Classification', 'type': 'text-classification'}}
To https://huggingface.co/patrickvonplaten/deberta_amazon_reviews_v1
   ad77e6d..13e5ddd  main -> main
```

The data is now saved [here](https://huggingface.co/patrickvonplaten/deberta_amazon_reviews_v1/blob/main/prediction_results.json).

That's it for today 😎. As a final step, it would also make a lot of sense to try the model out on actual real-world data. This can be done directly on the inference widget on [the model card](https://huggingface.co/patrickvonplaten/deberta_amazon_reviews_v1):

![example.png](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/classification_widget.png)

It does seem to generalize quite well to real-world data 🔥

## Optimization

As soon as you think the model's performance is good enough for production, it's all about making the model as memory-efficient and fast as possible. There are some obvious solutions to this, like choosing the best-suited accelerated hardware, *e.g.* better GPUs, making sure no gradients are computed during the forward pass, or lowering the precision, *e.g.* to float16.
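To make the last two points a bit more concrete, here is a minimal inference sketch that avoids gradient computation and casts the model to float16 on GPU. It uses the checkpoint we trained above, but the example review is made up and the exact speed-up depends on your model and hardware, so treat it as an illustration rather than a recipe.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "patrickvonplaten/deberta_v3_amazon_reviews"
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint).to(device)
if device == "cuda":
    model = model.half()  # float16 weights; usually only worthwhile on GPU
model.eval()

inputs = tokenizer("The product broke after two days and support never answered.", return_tensors="pt").to(device)
with torch.no_grad():  # skip gradient bookkeeping during the forward pass
    logits = model(**inputs).logits

print(logits.argmax(dim=-1).item())  # predicted class index (0 = very unsatisfied)
```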
More advanced optimization methods include using open-source accelerator libraries such as [ONNX Runtime](https://onnxruntime.ai/index.html), [quantization](https://pytorch.org/docs/stable/quantization.html), and inference servers like [Triton](https://developer.nvidia.com/nvidia-triton-inference-server).

At Hugging Face, we have been working a lot to facilitate the optimization of models, especially with our open-source [Optimum library](https://huggingface.co/hardware). Optimum makes it extremely simple to optimize most 🤗 Transformers models (a rough sketch of what that can look like is included below).

If you're looking for **highly optimized** solutions which don't require any technical knowledge, you might be interested in the [Inference API](https://huggingface.co/inference-api), a plug & play solution for serving a wide variety of machine learning tasks in production, including sentiment analysis.

Moreover, if you are searching for **support for your custom use cases**, Hugging Face's team of experts can help accelerate your ML projects! Our team answers questions and finds solutions as needed in your machine learning journey from research to production. Visit [hf.co/support](https://huggingface.co/support) to learn more and request a quote.
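As a small illustration of the Optimum route mentioned above, the snippet below sketches how one might export the classifier to ONNX and run it with ONNX Runtime. This is an assumption-laden example rather than part of the original workflow: depending on your Optimum version, the export flag may be `from_transformers=True` instead of `export=True`, and the actual gains vary by model and hardware.

```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

checkpoint = "patrickvonplaten/deberta_v3_amazon_reviews"

# Convert the PyTorch checkpoint to ONNX and load it with ONNX Runtime
ort_model = ORTModelForSequenceClassification.from_pretrained(checkpoint, export=True)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# The exported model plugs into the usual pipeline API
classifier = pipeline("text-classification", model=ort_model, tokenizer=tokenizer)
print(classifier("Delivery was late and the item arrived damaged."))
```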
Introducing Hugging Face for Education
Violette
April 25, 2022
education
community
https://huggingface.co/blog/education
# Introducing Hugging Face for Education 🤗 Given that machine learning will make up the overwhelming majority of software development and that non-technical people will be exposed to AI systems more and more, one of the main challenges of AI is adapting and enhancing employee skills. It is also becoming necessary to support teaching staff in proactively taking AI's ethical and critical issues into account. As an open-source company democratizing machine learning, [Hugging Face](https://huggingface.co/) believes it is essential to educate people from all backgrounds worldwide. We launched the [ML demo.cratization tour](https://www.notion.so/ML-Demo-cratization-tour-with-66847a294abd4e9785e85663f5239652) in March 2022, where experts from Hugging Face taught hands-on classes on Building Machine Learning Collaboratively to more than 1000 students from 16 countries. Our new goal: **to teach machine learning to 5 million people by the end of 2023**. *This blog post provides a high-level description of how we will reach our goals around education.* ## 🤗 **Education for All** 🗣️ Our goal is to make the potential and limitations of machine learning understandable to everyone. We believe that doing so will help evolve the field in a direction where the application of these technologies will lead to net benefits for society as a whole. Some examples of our existing efforts: - we describe in a very accessible way [different uses of ML models](https://huggingface.co/tasks) (summarization, text generation, object detection…), - we allow everyone to try out models directly in their browser through widgets in the model pages, hence lowering the need for technical skills to do so ([example](https://huggingface.co/cmarkea/distilcamembert-base-sentiment)), - we document and warn about harmful biases identified in systems (like [GPT-2](https://huggingface.co/gpt2#limitations-and-bias)). - we provide tools to create open-source [ML apps](https://huggingface.co/spaces) that allow anyone to understand the potential of ML in one click. ## 🤗 **Education for Beginners** 🗣️ We want to lower the barrier to becoming a machine learning engineer by providing online courses, hands-on workshops, and other innovative techniques. - We provide a free [course](https://huggingface.co/course/chapter1/1) about natural language processing (NLP) and more domains (soon) using free tools and libraries from the Hugging Face ecosystem. It’s completely free and without ads. The ultimate goal of this course is to learn how to apply Transformers to (almost) any machine learning problem! - We provide a free [course](https://github.com/huggingface/deep-rl-class) about Deep Reinforcement Learning. In this course, you can study Deep Reinforcement Learning in theory and practice, learn to use famous Deep RL libraries, train agents in unique environments, publish your trained agents in one line of code to the Hugging Face Hub, and more! - We provide a free [course](https://huggingface.co/course/chapter9/1) on how to build interactive demos for your machine learning models. The ultimate goal of this course is to allow ML developers to easily present their work to a wide audience including non-technical teams or customers, researchers to more easily reproduce machine learning models and behavior, end users to more easily identify and debug failure points of models, and more! - Experts at Hugging Face wrote a [book](https://transformersbook.com/) on Transformers and their applications to a wide range of NLP tasks. 
Apart from those efforts, many team members are involved in other educational efforts such as:

- Participating in meetups, conferences and workshops.
- Creating podcasts, YouTube videos, and blog posts.
- [Organizing events](https://github.com/huggingface/community-events/tree/main/huggan) in which free GPUs are provided for anyone to be able to train and share models and create demos for them.

## 🤗 **Education for Instructors** 🗣️

We want to empower educators with tools and offer collaborative spaces where students can build machine learning using open-source technologies and state-of-the-art machine learning models.

- We provide educators with free infrastructure and resources to quickly introduce real-world applications of ML to their students and make learning more fun and interesting. By creating a [classroom](https://huggingface.co/classrooms) for free from the hub, instructors can turn their classes into collaborative environments where students can learn and build ML-powered applications using free open-source technologies and state-of-the-art models.

- We’ve assembled [a free toolkit](https://github.com/huggingface/education-toolkit) translated into 8 languages that instructors of machine learning or Data Science can use to easily prepare labs, homework, or classes. The content is self-contained so that it can be easily incorporated into an existing curriculum. This content is free and uses well-known Open Source technologies (🤗 transformers, gradio, etc). Feel free to pick a tutorial and teach it!

  1️⃣ [A Tour through the Hugging Face Hub](https://github.com/huggingface/education-toolkit/blob/main/01_huggingface-hub-tour.md)

  2️⃣ [Build and Host Machine Learning Demos with Gradio & Hugging Face](https://colab.research.google.com/github/huggingface/education-toolkit/blob/main/02_ml-demos-with-gradio.ipynb)

  3️⃣ [Getting Started with Transformers](https://colab.research.google.com/github/huggingface/education-toolkit/blob/main/03_getting-started-with-transformers.ipynb)

- We're organizing a dedicated, free workshop (June 6) on how to teach our educational resources in your machine learning and data science classes. Do not hesitate to [register](https://www.eventbrite.com/e/how-to-teach-open-source-machine-learning-tools-tickets-310980931337).

- We are currently doing a worldwide tour in collaboration with university instructors to teach more than 10,000 students one of our core topics: How to build machine learning collaboratively? You can request someone on the Hugging Face team to run the session for your class via the [ML demo.cratization tour initiative](https://www.notion.so/ML-Demo-cratization-tour-with-66847a294abd4e9785e85663f5239652).

<img width="535" alt="image" src="https://user-images.githubusercontent.com/95622912/164271167-58ec0115-dda1-4217-a308-9d4b2fbf86f5.png">

## 🤗 **Education Events & News**

- **09/08** [EVENT]: ML Demo.cratization tour in Argentina at 2pm (GMT-3). [Link here](https://www.uade.edu.ar/agenda/clase-pr%C3%A1ctica-con-hugging-face-c%C3%B3mo-construir-machine-learning-de-forma-colaborativa/)

🔥 We are currently working on more content in the course, and more! Stay tuned!
Getting Started with Transformers on Habana Gaudi
juliensimon
April 26, 2022
getting-started-habana
partnerships, guide
https://huggingface.co/blog/getting-started-habana
# Getting Started with Transformers on Habana Gaudi

A couple of weeks ago, we had the pleasure of [announcing](https://huggingface.co/blog/habana) that [Habana Labs](https://habana.ai) and [Hugging Face](https://huggingface.co/) would partner to accelerate Transformer model training. Habana Gaudi accelerators deliver up to 40% better price performance for training machine learning models compared to the latest GPU-based Amazon EC2 instances. We are super excited to bring these price performance advantages to Transformers 🚀

In this hands-on post, I'll show you how to quickly set up a Habana Gaudi instance on Amazon Web Services, and then fine-tune a BERT model for text classification. As usual, all code is provided so that you may reuse it in your projects. Let's get started!

## Setting up a Habana Gaudi instance on AWS

The simplest way to work with Habana Gaudi accelerators is to launch an Amazon EC2 [DL1](https://aws.amazon.com/ec2/instance-types/dl1/) instance. These instances are equipped with 8 Habana Gaudi processors that can easily be put to work thanks to the [Habana Deep Learning Amazon Machine Image](https://aws.amazon.com/marketplace/server/procurement?productId=9a75c51a-a4d1-4470-884f-6be27933fcc8) (AMI). This AMI comes preinstalled with the [Habana SynapseAI® SDK](https://developer.habana.ai/), and the tools required to run Gaudi accelerated Docker containers. If you'd like to use other AMIs or containers, instructions are available in the [Habana documentation](https://docs.habana.ai/en/latest/AWS_Quick_Starts/index.html).

Starting from the [EC2 console](https://console.aws.amazon.com/ec2sp/v2/) in the us-east-1 region, I first click on **Launch an instance** and define a name for the instance ("habana-demo-julsimon"). Then, I search the Amazon Marketplace for Habana AMIs.

<kbd>
<img src="assets/61_getting_started_habana/habana01.png">
</kbd>

I pick the Habana Deep Learning Base AMI (Ubuntu 20.04).

<kbd>
<img src="assets/61_getting_started_habana/habana02.png">
</kbd>

Next, I pick the *dl1.24xlarge* instance size (the only size available).

<kbd>
<img src="assets/61_getting_started_habana/habana03.png">
</kbd>

Then, I select the keypair that I'll use to connect to the instance with ```ssh```. If you don't have a keypair, you can create one in place.

<kbd>
<img src="assets/61_getting_started_habana/habana04.png">
</kbd>

As a next step, I make sure that the instance allows incoming ```ssh``` traffic. I do not restrict the source address for simplicity, but you should definitely restrict it in your own account.

<kbd>
<img src="assets/61_getting_started_habana/habana05.png">
</kbd>

By default, this AMI will start an instance with 8GB of Amazon EBS storage, which won't be enough here. I bump storage to 50GB.

<kbd>
<img src="assets/61_getting_started_habana/habana08.png">
</kbd>

Next, I assign an Amazon IAM role to the instance. In real life, this role should have the minimum set of permissions required to run your training job, such as the ability to read data from one of your Amazon S3 buckets. This role is not needed here, as the dataset will be downloaded from the Hugging Face hub. If you're not familiar with IAM, I highly recommend reading the [Getting Started](https://docs.aws.amazon.com/IAM/latest/UserGuide/getting-started.html) documentation.

Then, I ask EC2 to provision my instance as a [Spot Instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-spot-instances.html), a great way to reduce the $13.11 per hour cost.
<kbd>
<img src="assets/61_getting_started_habana/habana06.png">
</kbd>

Finally, I launch the instance. A couple of minutes later, the instance is ready and I can connect to it with ```ssh```. Windows users can do the same with *PuTTY* by following the [documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/putty.html).

```
ssh -i ~/.ssh/julsimon-keypair.pem [email protected]
```

On this instance, the last setup step is to pull the Habana container for PyTorch, which is the framework I'll use to fine-tune my model. You can find information on other prebuilt containers and on how to build your own in the Habana [documentation](https://docs.habana.ai/en/latest/Installation_Guide/index.html).

```
docker pull \
vault.habana.ai/gaudi-docker/1.5.0/ubuntu20.04/habanalabs/pytorch-installer-1.11.0:1.5.0-610
```

Once the image has been pulled to the instance, I run it in interactive mode.

```
docker run -it \
--runtime=habana \
-e HABANA_VISIBLE_DEVICES=all \
-e OMPI_MCA_btl_vader_single_copy_mechanism=none \
--cap-add=sys_nice \
--net=host \
--ipc=host vault.habana.ai/gaudi-docker/1.5.0/ubuntu20.04/habanalabs/pytorch-installer-1.11.0:1.5.0-610
```

I'm now ready to fine-tune my model.

## Fine-tuning a text classification model on Habana Gaudi

I first clone the [Optimum Habana](https://github.com/huggingface/optimum-habana) repository inside the container I've just started.

```
git clone https://github.com/huggingface/optimum-habana.git
```

Then, I install the Optimum Habana package from source.

```
cd optimum-habana
pip install .
```

Next, I move to the subdirectory containing the text classification example and install the required Python packages.

```
cd examples/text-classification
pip install -r requirements.txt
```

I can now launch the training job, which downloads the [bert-large-uncased-whole-word-masking](https://huggingface.co/bert-large-uncased-whole-word-masking) model from the Hugging Face hub, and fine-tunes it on the [MRPC](https://www.microsoft.com/en-us/download/details.aspx?id=52398) task of the [GLUE](https://gluebenchmark.com/) benchmark. Please note that I'm fetching the Habana Gaudi configuration for BERT from the Hugging Face hub, and you could also use your own. In addition, other popular models are supported, and you can find their configuration file in the [Habana organization](https://huggingface.co/Habana).

```
python run_glue.py \
--model_name_or_path bert-large-uncased-whole-word-masking \
--gaudi_config_name Habana/bert-large-uncased-whole-word-masking \
--task_name mrpc \
--do_train \
--do_eval \
--per_device_train_batch_size 32 \
--learning_rate 3e-5 \
--num_train_epochs 3 \
--max_seq_length 128 \
--use_habana \
--use_lazy_mode \
--output_dir ./output/mrpc/
```

After 2 minutes and 12 seconds, the job is complete and has achieved an excellent F1 score of 0.8968, which could certainly improve with more epochs.

```
***** train metrics *****
  epoch                    = 3.0
  train_loss               = 0.371
  train_runtime            = 0:02:12.85
  train_samples            = 3668
  train_samples_per_second = 82.824
  train_steps_per_second   = 2.597

***** eval metrics *****
  epoch                     = 3.0
  eval_accuracy             = 0.8505
  eval_combined_score       = 0.8736
  eval_f1                   = 0.8968
  eval_loss                 = 0.385
  eval_runtime              = 0:00:06.45
  eval_samples              = 408
  eval_samples_per_second   = 63.206
  eval_steps_per_second     = 7.901
```

Last but not least, I terminate the EC2 instance to avoid unnecessary charges.
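If you prefer the AWS CLI over the console, the equivalent termination call is a one-liner (the instance id below is a placeholder, use your own):

```
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0
```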
Looking at the [Savings Summary](https://console.aws.amazon.com/ec2sp/v2/home/spot) in the EC2 console, I see that I saved 70% thanks to Spot Instances, paying only $3.93 per hour instead of $13.11. <kbd> <img src="assets/61_getting_started_habana/habana07.png"> </kbd> As you can see, the combination of Transformers, Habana Gaudi, and AWS instances is powerful, simple, and cost-effective. Give it a try and let us know what you think. We definitely welcome your questions and feedback on the [Hugging Face Forum](https://discuss.huggingface.co/). --- *Please [reach out to Habana](https://developer.habana.ai/accelerate-transformer-training-on-habana-gaudi-processors-with-hugging-face/) to learn more about training Hugging Face models on Gaudi processors.*
Director of Machine Learning Insights [Series]
britneymuller
April 27, 2022
ml-director-insights
community, research
https://huggingface.co/blog/ml-director-insights
# Director of Machine Learning Insights [Part 1]

Few seats at the Machine Learning table span technical skills, problem solving, and business acumen the way Directors of Machine Learning do. Directors of Machine Learning and/or Data Science are often expected to design ML systems, have deep knowledge of mathematics, familiarity with ML frameworks, rich data architecture understanding, experience applying ML to real-world applications, and solid communication skills, all while keeping on top of industry developments. A tall order!

<a href="https://huggingface.co/platform?utm_source=article&utm_medium=blog&utm_campaign=ml_director_insights_1"><img width="100%" style= "margin:auto" src="/blog/assets/61_ml_director_insights/private-model-hub.png"></a>

For these reasons, we’ve tapped into this unique group of ML Directors for a series of articles highlighting their thoughts on current ML insights and industry trends ranging from Healthcare to Finance, eCommerce, SaaS, Research, Media, and more. For example, one Director will note how using ML to reduce empty "deadheading" truck driving (which occurs ~20% of the time) down to just 19% would cut carbon emissions equivalent to those of ~100,000 Americans. Note: this is back-of-the-napkin math, but it was done by an ex-rocket scientist, so we’ll take it.

In this first installment, you’ll hear from a researcher (who’s using ground penetrating radar to detect buried landmines), an ex-Rocket Scientist, a Dzongkha fluent amateur gamer (Kuzu = Hello!), an ex-van living Scientist, a high-performance Data Science team coach who’s still very hands-on, and a data practitioner who values relationships, family, dogs, and pizza. All of them are currently Directors of Machine Learning with rich field insights.

🚀 Let’s meet some top Machine Learning Directors and hear what they have to say about Machine Learning’s impact on their respective industries:

<img class="mx-auto" style="float: left;" padding="5px" src="/blog/assets/61_ml_director_insights/Archi-Mitra.jpeg"></a>

### [Archi Mitra](https://www.linkedin.com/in/archimitra/) - Director of Machine Learning at [Buzzfeed](https://www.buzzfeed.com/)

**Background:** Bringing balance to the promise of ML for business. People over Process. Strategy over Hope. AI Ethics over AI Profits. Brown New Yorker.

**Fun Fact:** I can speak [Dzongkha](https://en.wikipedia.org/wiki/Dzongkha) (google it!) and am a supporter of [Youth for Seva](https://www.youthforseva.org/Donation).

**Buzzfeed:** An American Internet media, news and entertainment company with a focus on digital media.

#### **1. How has ML made a positive impact on Media?**

_Privacy first personalization for customers:_ Every user is unique and while their long-term interests are stable, their short-term interests are stochastic. They expect their relationship with the Media to reflect this. The combination of advancement in hardware acceleration and Deep Learning for recommendations has unlocked the ability to start deciphering this nuance and serve users with the right content at the right time at the right touchpoint.

_Assistive tools for makers:_ Makers are the limited assets in media, and preserving their creative bandwidth with ML-driven human-in-the-loop assistive tools has seen an outsized impact. Something as simple as automatically suggesting an appropriate title, image, video, and/or product that can go along with the content they are creating unlocks a collaborative machine-human flywheel.
_Tightened testing:_ In a capital intensive media venture, there is a need to shorten the time between collecting information on what resonates with users and immediately acting on it. With a wide variety of Bayesian techniques and advancements in reinforcement learning, we have been able to drastically reduce not only the time but the cost associated with it. #### **2. What are the biggest ML challenges within Media?** _Privacy, editorial voice, and equitable coverage:_ Media is a key pillar in the democratic world now more than ever. ML needs to respect that and operate within constraints that are not strictly considered table stakes in any other domain or industry. Finding a balance between editorially curated content & programming vs ML driven recommendations continues to be a challenge. Another unique challenge to BuzzFeed is we believe that the internet should be free which means we don't track our users like others can. #### **3. What’s a common mistake you see people make trying to integrate ML into Media?** Ignoring “the makers” of media: Media is prevalent because it houses a voice that has a deep influence on people. The editors, content creators, writers & makers are the larynx of that voice and the business and building ML that enables them, extends their impact and works in harmony with them is the key ingredient to success. #### **4. What excites you most about the future of ML?** Hopefully, small data-driven general-purpose multi-modal multi-task real-time ML systems that create step-function improvements in drug discovery, high precision surgery, climate control systems & immersive metaverse experiences. Realistically, more accessible, low-effort meta-learning techniques for highly accurate text and image generation. <img class="mx-auto" style="float: left;" padding="5px" width="200" src="/blog/assets/61_ml_director_insights/Li-Tan.jpeg"></a> ### [Li Tan](http://linkedin.com/in/iamtanli/) - Director of Machine Learning & AI at [Johnson & Johnson](https://www.jnj.com/) **Background:** Li is an AI/ML veteran with 15+ years of experience leading high-profile Data Science teams within industry leaders like Johnson & Johnson, Microsoft, and Amazon. **Fun Fact:** Li continues to be curious, is always learning, and enjoys hands-on programming. **Johnson & Johnson:** A Multinational corporation that develops medical devices, pharmaceuticals, and consumer packaged goods. #### **1. How has ML made a positive impact on Pharmaceuticals?** AI/ML applications have exploded in the pharmaceuticals space the past few years and are making many long-term positive impacts. Pharmaceuticals and healthcare have many use cases that can leverage AI/ML. Applications range from research, and real-world evidence, to smart manufacturing and quality assurance. The technologies used are also very broad: NLP/NLU, CV, AIIoT, Reinforcement Learning, etc. even things like AlphaFold. #### **2. What are the biggest ML challenges within Pharmaceuticals?** The biggest ML challenge within pharma and healthcare is how to ensure equality and diversity in AI applications. For example, how to make sure the training set has good representations of all ethnic groups. Due to the nature of healthcare and pharma, this problem can have a much greater impact compared to applications in some other fields. #### **3. 
What’s a common mistake you see people make trying to integrate ML into Pharmaceuticals?** Wouldn’t say this is necessarily a mistake, but I see many people gravitate toward extreme perspectives when it comes to AI applications in healthcare; either too conservative or too aggressive. Some people are resistant due to high regulatory requirements. We had to qualify many of our AI applications with strict GxP validation. It may require a fair amount of work, but we believe the effort is worthwhile. On the opposite end of the spectrum, there are many people who think AI/Deep Learning models can outperform humans in many applications and run completely autonomously. As practitioners, we know that currently, neither is true. ML models can be incredibly valuable but still make mistakes. So I recommend a more progressive approach. The key is to have a framework that can leverage the power of AI while having goalkeepers in place. [FDA has taken actions](https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device) to regulate how AI/ML should be used in software as a medical device and I believe that’s a positive step forward for our industry. #### **4. What excites you most about the future of ML?** The intersections between AI/ML and other hard sciences and technologies. I’m excited to see what’s to come. <img class="mx-auto" style="float: left;" padding="5px" width="200" src="/blog/assets/61_ml_director_insights/Alina-Zare.jpeg"></a> ### [Alina Zare](https://www.linkedin.com/in/alina-zare/) - Director of the Machine Learning & Sensing Laboratory at the [University of Florida](https://faculty.eng.ufl.edu/machine-learning/people/faculty/) **Background:** Alina Zare teaches and conducts research in the area of machine learning and artificial intelligence as a Professor in the Electrical and Computer Engineering Department at the University of Florida and Director of the Machine Learning and Sensing Lab. Dr. Zare’s research has focused primarily on developing new machine learning algorithms to automatically understand and process data and imagery. Her research work has included automated plant root phenotyping, sub-pixel hyperspectral image analysis, target detection, and underwater scene understanding using synthetic aperture sonar, LIDAR data analysis, Ground Penetrating Radar analysis, and buried landmine and explosive hazard detection. **Fun Fact:** Alina is a rower. She joined the crew team in high school, rowed throughout college and grad school, was head coach of the University of Missouri team while she was an assistant professor, and then rowed as a masters rower when she joined the faculty at UF. **Machine Learning & Sensing Laboratory:** A University of Florida laboratory that develops machine learning methods for autonomously analyzing and understanding sensor data. #### **1. How has ML made a positive impact on Science** ML has made a positive impact in a number of ways from helping to automate tedious and/or slow tasks or providing new ways to examine and look at various questions. One example from my work in ML for plant science is that we have developed ML approaches to automate plant root segmentation and characterization in imagery. This task was previously a bottleneck for plant scientists looking at root imagery. By automating this step through ML we can conduct these analyses at a much higher throughput and begin to use this data to investigate plant biology research questions at scale. #### **2. 
What are the biggest ML challenges within Scientific research?** There are many challenges. One example is when using ML for Science research, we have to think carefully through the data collection and curation protocols. In some cases, the protocols we used for non-ML analysis are not appropriate or effective. The quality of the data and how representative it is of what we expect to see in the application can make a huge impact on the performance, reliability, and trustworthiness of an ML-based system. #### **3. What’s a common mistake you see people make trying to integrate ML into Science?** Related to the question above, one common mistake is misinterpreting results or performance to be a function of just the ML system and not also considering the data collection, curation, calibration, and normalization protocols. #### **4. What excites you most about the future of ML?** There are a lot of really exciting directions. A lot of my research currently is in spaces where we have a huge amount of prior knowledge and empirically derived models. For example, I have ongoing work using ML for forest ecology research. The forestry community has a rich body of prior knowledge and current purely data-driven ML systems are not leveraging. I think hybrid methods that seamlessly blend prior knowledge with ML approaches will be an interesting and exciting path forward. An example may be understanding how likely two species are to co-occur in an area. Or what species distribution we could expect given certain environmental conditions. These could potentially be used w/ data-driven methods to make predictions in changing conditions. <img class="mx-auto" style="float: left;" padding="5px" width="200" src="/blog/assets/61_ml_director_insights/Nathan-Cahill.jpeg"></a> ### [Nathan Cahill](https://www.linkedin.com/in/nathan-m-cahill/) Ph.D. - Director of Machine Learning at [Xpress Technologies](https://xpresstechfreight.com/) **Background:** Nathan is a passionate machine learning leader with 7 years of experience in research and development, and three years experience creating business value by shipping ML models to prod. He specializes in finding and strategically prioritizing the business' biggest pain points: unlocking the power of data earlier on in the growth curve. **Fun Fact:** Before getting into transportation and logistics I was engineering rockets at Northrop Grumman. #RocketScience **Xpress Technologies:** A digital freight matching technology to connect Shippers, Brokers and Carriers to bring efficiency and automation to the Transportation Industry. #### **1. How has ML made a positive impact on Logistics/Transportation?** The transportation industry is incredibly fragmented. The top players in the game have less than 1% market share. As a result, there exist inefficiencies that can be solved by digital solutions. For example, when you see a semi-truck driving on the road, there is currently a 20% chance that the truck is driving with nothing in the back. Yes, 20% of the miles a tractor-trailer drives are from the last drop off of their previous load to their next pickup. The chances are that there is another truck driving empty (or "deadheading") in the other direction. With machine learning and optimization this deadhead percent can be reduced significantly, and just taking that number from 20% to 19% percent would cut the equivalent carbon emissions of 100,000 Americans. Note: the carbon emissions of 100k Americans were my own back of the napkin math. #### **2. 
What are the biggest ML challenges within Logistics?** The big challenge within logistics is due to the fact that the industry is so fragmented: there is no shared pool of data that would allow technology solutions to "see" the big picture. For example a large fraction of brokerage loads, maybe a majority, costs are negotiated on a load by load basis making them highly volatile. This makes pricing a very difficult problem to solve. If the industry became more transparent and shared data more freely, so much more would become possible. #### **3. What’s a common mistake you see people make trying to integrate ML into Logistics?** I think that the most common mistake I see is people doing ML and Data Science in a vacuum. Most ML applications within logistics will significantly change the dynamics of the problem if they are being used so it's important to develop models iteratively with the business and make sure that performance in reality matches what you expect in training. An example of this is in pricing where if you underprice a lane slightly, your prices may be too competitive which will create an influx of freight on that lane. This, in turn, may cause costs to go up as the brokers struggle to find capacity for those loads, exacerbating the issue. #### **4. What excites you the most about the future of ML?** I think the thing that excites me most about ML is the opportunity to make people better at their jobs. As ML begins to be ubiquitous in business, it will be able to help speed up decisions and automate redundant work. This will accelerate the pace of innovation and create immense economic value. I can’t wait to see what problems we solve in the next 10 years aided by data science and ML! <img class="mx-auto" style="float: left;" padding="5px" width="200" src="/blog/assets/61_ml_director_insights/Nicolas-Bertagnolli.png"></a> ### [Nicolas Bertagnolli](https://www.linkedin.com/in/nicolas-bertagnolli-058aba81/) - Director of Machine Learning at [BEN](https://ben.productplacement.com/) **Background:** Nic is a scientist and engineer working to improve human communication through machine learning. He’s spent the last decade applying ML/NLP to solve data problems in the medical space from uncovering novel patterns in cancer genomes to leveraging billions of clinical notes to reduce costs and improve outcomes. At BEN, Nic innovates intelligent technologies that scale human capabilities to reach people. See his [CV](http://www.nbertagnolli.com/assets/Bertagnolli_CV.pdf), [research](http://www.nbertagnolli.com/), and [Medium articles here](https://nbertagnolli.medium.com/). **Fun Fact:** Nic lived in a van and traveled around the western United States for three years before starting work at BEN. **BEN:** An entertainment AI company that places brands inside influencer, streaming, TV, and film content to connect brands with audiences in a way that advertisements cannot. #### **1. How has ML made a positive impact on Marketing?** In so many ways! It’s completely changing the landscape. Marketing is a field steeped in tradition based on gut feelings. In the past 20 years, there has been a move to more and more statistically informed marketing decisions but many brands are still relying on the gut instincts of their marketing departments. ML is revolutionizing this. With the ability to analyze data about which advertisements perform well we can make really informed decisions about how and who we market to. 
At BEN, ML has really helped us take the guesswork out of a lot of the process when dealing with influencer marketing. Data helps shine a light through the fog of bias and subjectivity so that we can make informed decisions. That’s just the obvious stuff! ML is also making it possible to make safer marketing decisions for brands. For example, it’s illegal to advertise alcohol to people under the age of 21. Using machine learning we can identify influencers whose audiences are mainly above 21. This scales our ability to help alcohol brands, and also brands who are worried about their image being associated with alcohol. #### **2. What are the biggest ML challenges within Marketing?** As with most things in Machine Learning the problems often aren’t really with the models themselves. With tools like [Hugging Face](http://huggingface.co), [torch hub](https://pytorch.org/docs/stable/hub.html), etc. so many great and flexible models are available to work with. The real challenges have to do with collecting, cleaning, and managing the data. If we want to talk about the hard ML-y bits of the job, some of it comes down to the fact that there is a lot of noise in what people view and enjoy. Understanding things like virality are really really hard. Understanding what makes a creator/influencer successful over time is really hard. There is a lot of weird preference information buried in some pretty noisy difficult-to-acquire data. These problems come down to having really solid communication between data, ML, and business teams, and building models which augment and collaborate with humans instead of fully automating away their roles. #### **3. What’s a common mistake you see people make trying to integrate ML into Marketing?** I don’t think this is exclusive to marketing but prioritizing machine learning and data science over good infrastructure is a big problem I see often. Organizations hear about ML and want to get a piece of the pie so they hire some data scientists only to find out that they don’t have any infrastructure to service their new fancy pants models. A ton of the value of ML is in the infrastructure around the models and if you’ve got trained models but no infrastructure you’re hosed. One of the really nice things about BEN is we invested heavily in our data infrastructure and built the horse before the cart. Now Data Scientists can build models that get served to our end users quickly instead of having to figure out every step of that pipeline themselves. Invest in data engineering before hiring lots of ML folks. #### **4. What excites you most about the future of ML?** There is so much exciting stuff going on. I think the pace and democratization of the field is perhaps what I find most exciting. I remember almost 10 years ago writing my first seq2seq model for language translation. It was hundreds of lines of code, took forever to train and was pretty challenging. Now you can basically build a system to translate any language to any other language in under 100 lines of python code. It’s insane! This trend is most likely to continue and as the ML infrastructure gets better and better it will be easier and easier for people without deep domain expertise to deploy and serve models to other people. Much like in the beginning of the internet, software developers were few and far between and you needed a skilled team to set up a website. Then things like Django, Rails, etc. came out making website building easy but serving it was hard. 
We’re kind of at this place where building the models is easy but serving them reliably, monitoring them reliably, etc. is still challenging. I think in the next few years the barrier to entry is going to come WAY down here and basically, any high schooler could deploy a deep transformer to some cloud infrastructure and start serving useful results to the general population. This is really exciting because it means we’ll start to see more and more tangible innovation, much like the explosion of online services. So many cool things! <img class="mx-auto" style="float: left;" padding="5px" width="200" src="/blog/assets/61_ml_director_insights/Eric-Golinko.jpeg"></a> ### [Eric Golinko](https://www.linkedin.com/in/eric-golinko/) - Director of Machine Learning at [E Source](https://www.esource.com/) **Background:** Experienced data practitioner and team builder. I’ve worked in many industries across companies of different sizes. I’m a problem solver, by training a mathematician and computer scientist. But, above all, I value relationships, family, dogs, travel and pizza. **Fun Fact:** Eric adores nachos! **E Source:** Provides independent market intelligence, consulting, and predictive data science to utilities, major energy users, and other key players in the retail energy marketplace. #### **1. How has ML made a positive impact on the Energy/Utility industry?** Access to business insight. Provided a pre-requisite is great data. Utilities have many data relationships within their data portfolio from customers to devices, more specifically, this speaks to monthly billing amounts and enrollment in energy savings programs. Data like that could be stored in a relational database, whereas device or asset data we can think of as the pieces of machinery that make our grid. Bridging those types of data is non-trivial. In addition, third-party data spatial/gis and weather are extremely important. Through the lens of machine learning, we are able to find and explore features and outcomes that have a real impact. #### **2. What are the biggest ML challenges within Utilities?** There is a demystification that needs to happen. What machine learning can do and where it needs to be monitored or could fall short. The utility industry has established ways of operating, machine learning can be perceived as a disruptor. Because of this, departments can be slow to adopt any new technology or paradigm. However, if the practitioner is able to prove results, then results create traction and a larger appetite to adopt. Additional challenges are on-premise data and access to the cloud and infrastructure. It’s a gradual process and has a learning curve that requires patience. #### **3. What’s a common mistake you see people make trying to integrate ML into Utilities?** Not unique to utilizes, but moving too fast and neglecting good data quality and simple quality checks. Aside from this machine learning is practiced among many groups in some direct or indirect way. A challenge is integrating best development practices across teams. This also means model tracking and being able to persist experiments and continuous discovery. #### **4. What excites you most about the future of ML?** I’ve been doing this for over a decade, and I somehow still feel like a novice. I feel fortunate to have been part of teams where I’d be lucky to be called the average member. My feeling is that the next ten years and beyond will be more focused on data engineering to see even a larger number of use cases covered by machine learning. 
--- 🤗 Thank you for joining us in this first installment of ML Director Insights. Stay tuned for more insights from ML Directors in SaaS, Finance, and e-Commerce. Big thanks to Eric Golinko, Nicolas Bertagnolli, Nathan Cahill, Alina Zare, Li Tan, and Archi Mitra for their brilliant insights and participation in this piece. We look forward to watching each of your continued successes and will be cheering you on each step of the way. 🎉 Lastly, if you or your team are interested in accelerating your ML roadmap with Hugging Face Experts please visit [hf.co/support](https://huggingface.co/support?utm_source=article&utm_medium=blog&utm_campaign=ml_director_insights_1) to learn more.
Opinion Classification with Kili and HuggingFace AutoTrain
alperiox
April 28, 2022
opinion-classification-with-kili
guide
https://huggingface.co/blog/opinion-classification-with-kili
# Opinion Classification with Kili and HuggingFace AutoTrain

## Introduction

Understanding your users’ needs is crucial in any user-related business. But it also requires a lot of hard work and analysis, which is quite expensive. So why not leverage Machine Learning, with much less coding, by using AutoML? In this article, we will leverage [HuggingFace AutoTrain](https://huggingface.co/autotrain) and [Kili](https://kili-technology.com/) to build an active learning pipeline for text classification. [Kili](https://kili-technology.com/) is a platform that empowers a data-centric approach to Machine Learning through quality training data creation. It provides collaborative data annotation tools and APIs that enable quick iterations between reliable dataset building and model training. Active learning is a process in which you iteratively add labeled data to the dataset and then retrain the model. It is therefore an ongoing process that requires humans to label the data.

As a concrete example use case for this article, we will build our pipeline by using user reviews of Medium from the Google Play Store. After that, we are going to categorize the reviews with the pipeline we built. Finally, we will apply sentiment analysis to the classified reviews. Then we will analyze the results; understanding the users’ needs and satisfaction will be much easier.

## AutoTrain with HuggingFace

Automated Machine Learning is a term for automating a Machine Learning pipeline. It includes data cleaning, model selection, and hyper-parameter optimization. We can use 🤗 transformers for automated hyper-parameter searching. Hyper-parameter optimization is a difficult and time-consuming process.

While we can build our pipeline ourselves by using transformers and other powerful APIs, it is also possible to fully automate this with [AutoTrain](https://huggingface.co/autotrain). AutoTrain is built on many powerful APIs like transformers, [datasets](https://github.com/huggingface/datasets) and [inference-api](https://huggingface.co/docs/transformers/main_classes/trainer). Cleaning the data, model selection, and hyper-parameter optimization steps are all fully automated in AutoTrain. One can fully utilize this framework to build production-ready SOTA transformer models for a specific task.

Currently, AutoTrain supports binary and multi-label text classification, token classification, extractive question answering, text summarization, and text scoring. It also supports many languages like English, German, French, Spanish, Finnish, Swedish, Hindi, Dutch, and [more](https://huggingface.co/autotrain). If your language is not supported by AutoTrain, it is also possible to use custom models with custom tokenizers.

## Kili

[Kili](https://kili-technology.com/) is an end-to-end AI training platform for data-centric businesses. Kili provides optimized labeling features and quality management tools to manage your data. You can quickly annotate image, video, text, pdf, and voice data while controlling the quality of the dataset. It also has powerful APIs for GraphQL and Python, which ease data management a lot.

It is available either online or on-premise, and it enables modern Machine Learning techniques for computer vision as well as NLP and OCR. It supports text classification, named entity recognition (NER), relation extraction, and more NLP/OCR tasks. It also supports computer vision tasks like object detection, image transcription, video classification, semantic segmentation, and many more!
Kili is a commercial tool, but you can also create a free developer account to try Kili’s tools. You can learn more on the [pricing](https://kili-technology.com/pricing/) page.

## Project

We will work on an example of review classification, along with sentiment analysis, to get insights about a mobile application.

We have extracted around 40 thousand reviews of Medium from the Google Play Store. We will [annotate the review texts](https://kili-technology.com/blog/text-annotation-in-machine-learning-an-overview/) in this dataset step by step. And then we’re going to build a pipeline for review classification. In the modeling, the first model will be prepared with AutoTrain. Then we will also build a model without using AutoTrain.

All the code and the dataset can be found in the [GitHub repository](https://github.com/alperiox/review-classification-kili-hf-automl) of the project.

## Dataset

Let’s start by taking a look at the raw dataset:

![](assets/59_opinion-classification-with-kili/1.png)

There are 10 columns and 40130 samples in this dataset. The only column we need is `content`, which is the user's review. Before starting, we need to define some categories.

We have defined 4 categories:

- Subscription: Since Medium has a subscription option, anything related to users' opinions about subscription features should belong here.
- Content: Medium is a sharing platform with lots of writing, from poetry to advanced artificial intelligence research. Users’ opinions about the variety of topics and the quality of the content should belong here.
- Interface: Thoughts about the UI, searching articles, the recommendation engine, and anything related to the interface should belong here. This also includes payment-related issues.
- User Experience: The user’s general thoughts and opinions about the application. These should be generally abstract, without indicating another category.

For the labeling part, we first need to create a project on Kili’s platform. We can use either the web interface of the platform or APIs. Let's see both.

**From the web interface:**

From the project list page, we create a multi-class text classification project.

![](assets/59_opinion-classification-with-kili/2.png)

After that, on the project’s page, you can add your data by clicking the Add assets button. Currently, you can add at most 25000 samples, but you can extend this limit if you contact the Kili sales team.

After we create our project, we need to add jobs. We can prepare a labeling interface from the Settings page. Although we have defined 4 categories, it is inevitable that we will come across reviews that should have multiple categories or are completely weird. To catch these cases too, we added two more labels (Other and Multi-label), which will not be used in modeling. We also added a named entity recognition (NER) job just to specify how we decided on a label while labeling. The final interface is shown below.

![](assets/59_opinion-classification-with-kili/3.png)

As you can see from the menu at the left, it is also possible to drop a link that describes your labels on the `Instructions` page. We can also add other members to our project from `Members` or add quality measures from the `Quality management` pages. More information can be found in the [documentation](https://cloud.kili-technology.com/docs/overview/introduction-to-kili-technology.html).
**Now, let’s create our project with Python API:** At first, we need to import needed libraries ([notebooks/kili_project_management.ipynb](https://github.com/alperiox/review-classification-kili-hf-automl/blob/master/notebooks/kili_project_management.ipynb)) ```python import os #we will process the data (which is a csv file) import pandas as pd #API client from kili.client import Kili #Why not use pretty progress bars? from tqdm import tqdm from dotenv import load_dotenv load_dotenv() ``` In order to access the platform, we need to authenticate our client ```python API_KEY = os.getenv('KILI_API_KEY') # initialize and authenticate the Kili client kili = Kili(api_key = API_KEY) ``` Now we can start to prepare our interface, the interface is just a dictionary in Python. We will define our jobs, then fill the labels up. Since all labels also could have children labels, we will pass labels as dictionaries too. ```python labels = ['User experience', 'Subscription', 'Content', 'Other', 'Multi label'] entity_dict = { 'User experience': '#cc4125', 'Subscription': '#4543e6', 'Content': '#3edeb6', } project_name = 'User review dataset for topic classification' project_description = "Medium's app reviews fetched from google play store for topic classification" interface = { 'jobs': { 'JOB_0': { 'mlTask': 'CLASSIFICATION', 'instruction': 'Labels', 'required': 1, 'content': { "categories": {}, "input": "radio", }, }, 'JOB_1': { 'mlTask': "NAMED_ENTITIES_RECOGNITION", 'instruction': 'Entities', 'required': 1, 'content': { 'categories': {}, "input": "radio" }, }, } } # fill the interface json with jobs for label in labels: # converts labels to uppercase and replaces whitespaces with underscores (_) # ex. User experience -> USER_EXPERIENCE # this is the preferred way to fill the interface label_upper = label.strip().upper().replace(' ', '_') # content_dict_0 = interface['jobs']['JOB_0']['content'] categories_0 = content_dict_0['categories'] category = {'name': label, 'children': []} categories_0[label_upper] = category for label, color in entity_dict.items(): label_upper = label.strip().upper().replace(' ', '_') content_dict_1 = interface['jobs']['JOB_1']['content'] categories_1 = content_dict_1['categories'] category = {'name': label, 'children': [], 'color': color} categories_1[label_upper] = category # now we can create our project # this method returns the created project’s id project_id = kili.create_project(json_interface=interface, input_type='TEXT', title=project_name, description=project_description)['id'] ``` We are ready to upload our data to the project. The `append_many_to_dataset` method can be used to import the data into the platform. By using the Python API, we can import the data by batch of 100 maximum. Here is a simple function to upload the data: ```python def import_dataframe(project_id:str, dataset:pd.DataFrame, text_data_column:str, external_id_column:str, subset_size:int=100) -> bool: """ Arguments: Inputs - project_id (str): specifies the project to load the data, this is also returned when we create our project - dataset (pandas DataFrame): Dataset that has proper columns for id and text inputs - text_data_column (str): specifies which column has the text input data - external_id_column (str): specifies which column has the ids - subset_size (int): specifies the number of samples to import at a time. 
Cannot be higher than 100 Outputs: None Returns: True or False regards to process succession """ assert subset_size <= 100, "Kili only allows to upload 100 assets at most at a time onto the app" L = len(dataset) # set 25000 as an upload limit, can be changed if L>25000: print('Kili Projects currently supports maximum 25000 samples as default. Importing first 25000 samples...') L=25000 i = 0 while i+subset_size < L: subset = dataset.iloc[i:i+subset_size] externalIds = subset[external_id_column].astype(str).to_list() contents = subset[text_data_column].astype(str).to_list() kili.append_many_to_dataset(project_id=project_id, content_array=contents, external_id_array=externalIds) i += subset_size return True ``` It simply imports the given `dataset` DataFrame to a project specified by project_id. We can see the arguments from docstring, we just need to pass our dataset along with the corresponding column names. We’ll just use the sample indices we get when we load the data. And then voila, uploading the data is done! ```python dataset_path = '../data/processed/lowercase_cleaned_dataset.csv' df = pd.read_csv(dataset_path).reset_index() # reset index to get the indices import_dataframe(project_id, df, 'content', 'index') ``` It wasn’t difficult to use the Python API, the helper methods we used covered many difficulties. We also used another script to check the new samples when we updated the dataset. Sometimes the model performance drop down after the dataset update. This is due to simple mistakes like mislabeling and introducing bias to the dataset. The script simply authenticates and then moves distinct samples of two given dataset versions to `To Review`. We can change the property of a sample through `update_properties_in_assets` method: ([scripts/move_diff_to_review.py](https://github.com/alperiox/review-classification-kili-hf-automl/blob/master/scripts/move_diff_to_review.py)) ```python # Set up the Kili client and arguments from kili.client import Kili from dotenv import load_dotenv import os import argparse import pandas as pd load_dotenv() parser = argparse.ArgumentParser() parser.add_argument('--first', required=True, type=str, help='Path to first dataframe') parser.add_argument('--second', required=True, type=str, help='Path to second dataframe') args = vars(parser.parse_args()) # set the kili connection up API_KEY = os.getenv('KILI_API_KEY') kili = Kili(API_KEY) # read dataframes df1 = pd.read_csv(args['first']) df2 = pd.read_csv(args['second']) # concating two of them should let us have duplicates of common elements # then we can drop the duplicated elements without keeping any duplicates to get the different elements across the two dataframes diff_df = pd.concat((df1, df2)).drop_duplicates(keep=False) diff_ids = diff_df['id'].to_list() # The changes should be given as an array that # contains the change for every single sample. # That’s why [‘TO_REVIEW’] * len(diff_df) is passed to status_array argument kili.update_properties_in_assets(diff_ids, status_array=['TO_REVIEW'] * len(diff_ids)) print('SET %d ENTRIES TO BE REVIEWED!' % len(diff_df)) ``` ## Labeling Now that we have the source data uploaded, the platform has a built-in labeling interface which is pretty easy to use. Available keyboard shortcuts helped while annotating the data. We used the interface without breaking a sweat, there are automatically defined shortcuts and it simplifies the labeling. 
We can see the shortcuts by clicking the keyboard icon at the upper-right part of the interface; they are also shown by underlined characters in the labeling interface on the right.

![](assets/59_opinion-classification-with-kili/4.png)

Some samples were very weird, so we decided to skip them while labeling. In general, the process was way easier thanks to Kili’s built-in platform.

![](assets/59_opinion-classification-with-kili/5.gif)

## Exporting the Labeled Data

The labeled data can be exported with ease by using the Python API. The script below exports the labeled and reviewed samples into a dataframe, then saves it with a given name as a CSV file.

([scripts/prepare_dataset.py](https://github.com/alperiox/review-classification-kili-hf-automl/blob/master/scripts/prepare_dataset.py))

```python
import argparse
import os

import pandas as pd
from dotenv import load_dotenv
from kili.client import Kili

load_dotenv()

parser = argparse.ArgumentParser()
parser.add_argument('--output_name', required=True, type=str, default='dataset.csv')
parser.add_argument('--remove', required=False, type=str)
args = vars(parser.parse_args())

API_KEY = os.getenv('KILI_API_KEY')
dataset_path = '../data/processed/lowercase_cleaned_dataset.csv'
output_path = os.path.join('../data/processed', args['output_name'])


def extract_labels(labels_dict):
    response = labels_dict[-1]  # pick the latest version of the sample
    label_job_dict = response['jsonResponse']['JOB_0']
    categories = label_job_dict['categories']
    # all samples have a label, we can just pick it by its index
    label = categories[0]['name']
    return label


kili = Kili(API_KEY)
print('Authenticated!')

# query will return a list that contains matched elements (projects in this case)
# since we have only one project with this name, we can just pick the first index
project = kili.projects(
    search_query='User review dataset for topic classification')[0]
project_id = project['id']

# we can customize the returned fields
# the fields below are pretty much enough,
# labels.jsonResponse carries the labeling data
returned_fields = [
    'id', 'externalId', 'labels.jsonResponse', 'skipped', 'status'
]

# I read the raw dataset too in order to match the samples with externalId
dataset = pd.read_csv(dataset_path)

# we can fetch the data as a dataframe
df = kili.assets(project_id=project_id,
                 status_in=['LABELED', 'REVIEWED'],
                 fields=returned_fields,
                 format='pandas')

print('Got the samples!')

# we will drop the skipped samples
df_ns = df[~df['skipped']].copy()

# extract the labeled samples
df_ns.loc[:, 'label'] = df_ns['labels'].apply(extract_labels)

# The externalId column is returned as string, let’s convert it to integer
# to use it as indices (.values avoids pandas index alignment issues when assigning)
df_ns.loc[:, 'content'] = dataset.loc[df_ns.externalId.astype(int), 'content'].values

# we can drop the `labels` column now
df_ns = df_ns.drop(columns=['labels'])

# we'll remove the multi-labeled samples
df_ns = df_ns[df_ns['label'] != 'MULTI_LABEL'].copy()

# also remove the samples with the label specified in the remove argument if it's given
if args['remove']:
    df_ns = df_ns.drop(index=df_ns[df_ns['label'] == args['remove']].index)

print('DATA FETCHING DONE')
print('DATASET HAS %d SAMPLES' % (len(df_ns)))
print('SAVING THE PROCESSED DATASET TO: %s' % os.path.abspath(output_path))
df_ns.to_csv(output_path, index=False)
print('DONE!')
```

Nice! We now have the labeled data as a CSV file. Let's create a dataset repository on the Hugging Face Hub and upload the data there! It's really simple: just click your profile picture and select the `New Dataset` option.
![](assets/59_opinion-classification-with-kili/19.png)

Then enter the repository name, pick a license if you want, and it's done!

![](assets/59_opinion-classification-with-kili/20.png)

Now we can upload the dataset from `Add file` in the `Files and versions` tab.

![](assets/59_opinion-classification-with-kili/22.png)

The dataset viewer is automatically available after you upload the data, so we can easily check the samples!

![](assets/59_opinion-classification-with-kili/24.png)

It is also possible to [upload the dataset to Hugging Face's dataset hub](https://huggingface.co/docs/datasets/upload_dataset#upload-from-python) by using the `datasets` package.

## Modeling

Let's use active learning: we iteratively label the data and fine-tune the model. In each iteration, we label 50 samples in the dataset. The number of samples is shown below:

![](assets/59_opinion-classification-with-kili/6.png)

Let’s try out AutoTrain first. Open [AutoTrain](https://ui.autonlp.huggingface.co/), then:

1. Create a project

![](assets/59_opinion-classification-with-kili/7.png)

2. We can select the dataset repository we created before or upload the dataset again. Then we need to choose the split type; I’ll leave it as Auto.

![](assets/59_opinion-classification-with-kili/8.png)

3. Train the models

![](assets/59_opinion-classification-with-kili/9.png)

AutoTrain will try different models, select the best ones, and then perform hyper-parameter optimization automatically. The dataset is also processed automatically. The price depends entirely on your use case; it can be as low as $10, or considerably higher for larger jobs.

The training is done after around 20 minutes, and the results are pretty good!

![](assets/59_opinion-classification-with-kili/10.png)

The best model’s accuracy is almost 89%.

![](assets/59_opinion-classification-with-kili/11.png)

Now we can use this [model](https://huggingface.co/alperiox/autonlp-user-review-classification-536415182) to perform the analysis; it only took about 30 minutes to set up the whole thing.

## Modeling without AutoTrain

We will use [Ray Tune](https://docs.ray.io/en/latest/tune/index.html) and Hugging Face’s Trainer API to search hyper-parameters and fine-tune a pre-trained deep learning model. We selected the [RoBERTa base sentiment classification model](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment), which is trained on tweets, for fine-tuning. We fine-tuned the model on Google Colaboratory, and the notebook can be found in the `notebooks` folder of the [GitHub repository](https://github.com/alperiox/user-review-classification-hf-kili).

Ray Tune is a popular library for hyper-parameter optimization which comes with many SOTA algorithms out of the box. It is also possible to use [Optuna](https://optuna.readthedocs.io/en/stable/index.html) and [SigOpt](https://sigopt.com/). We used the [Async Successive Halving Algorithm (ASHA)](https://docs.ray.io/en/latest/tune/api_docs/schedulers.html#asha-tune-schedulers-ashascheduler) as the scheduler and [HyperOpt](https://hyperopt.github.io/hyperopt/) as the search algorithm, which is pretty much a starting point. You can use different [schedulers](https://docs.ray.io/en/latest/tune/api_docs/schedulers.html) and [search algorithms](https://docs.ray.io/en/latest/tune/api_docs/suggestion.html).

What will we do?
- Import the necessary libraries (a dozen of them) and prepare a dataset class - Define needed functions and methods to process the data - Load the pre-trained model and tokenizer - Run hyper-parameter search - Use the best results for evaluation Let’s start with importing necessary libraries! (all the code is in [notebooks/modeling.ipynb](https://github.com/alperiox/review-classification-kili-hf-automl/blob/master/notebooks/modeling.ipynb) and [google collaboratory notebook](https://colab.research.google.com/drive/1YL-q3_JTEnOtoQdiDUnwSxLVn9Aqpzs8?usp=sharing)) ```python # general data science/utilization/visualization imports import json import os import random # progress bar from tqdm import tqdm # data manipulation / reading import numpy as np import pandas as pd # visualization import plotly.express as px import matplotlib.pyplot as plt # pre-defined evaluation metrics from sklearn.metrics import (accuracy_score, f1_score, precision_score, recall_score) from sklearn.model_selection import train_test_split # torch imports import torch import torch.nn as nn from torch.utils.data import DataLoader, Dataset, random_split # huggingface imports import transformers from datasets import load_metric from transformers import (AutoModelForSequenceClassification, AutoTokenizer, Trainer, TrainingArguments) # ray tune imports for hyperparameter optimization from ray.tune.schedulers import ASHAScheduler, PopulationBasedTraining from ray.tune.suggest.hyperopt import HyperOptSearch ``` We will set a seed for the libraries we use for reproducibility ```python def seed_all(seed): torch.manual_seed(seed) random.seed(seed) np.random.seed(seed) SEED=42 seed_all(SEED) ``` Now let’s define our dataset class! ```python class TextClassificationDataset(Dataset): def __init__(self, dataframe): self.labels = dataframe.label.to_list() self.inputs = dataframe.content.to_list() self.labels_to_idx = {k:v for k,v in labels_dict.items()} # copy the labels_dict dictionary def __len__(self): return len(self.inputs) def __getitem__(self, idx): if type(idx)==torch.Tensor: idx = list(idx) input_data = self.inputs[idx] target = self.labels[idx] target = self.labels_to_idx[target] return {'text': input_data, 'label':target} ``` We can download the model easily by specifying HuggingFace hub repository. It is also needed to import the tokenizer for the specified model. We have to provide a function to initialize the model during hyper-parameter optimization. The model will be defined there. The metric to optimize is accuracy, we want this value to be as high as possible. Because of that, we need to load the metric, then define a function to get the predictions and calculate the preferred metric. ```python model_name = 'cardiffnlp/twitter-roberta-base-sentiment' # we will perform the search to optimize the model accuracy, # we need to specify and load the accuracy metric as a first step metric = load_metric("accuracy") # since we already entered a model name, we can load the tokenizer # we can also load the model but i'll describe it in the model_init function. tokenizer = AutoTokenizer.from_pretrained(model_name) def model_init(): """ Hyperparameter optimization is performed by newly initialized models, therefore we will need to initialize the model again for every single search run. 
This function initializes and returns the pre-trained model selected with `model_name` """ return AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=4, return_dict=True, ignore_mismatched_sizes=True) # the function to calculate accuracy def compute_metrics(eval_pred): logits, labels = eval_pred predictions = np.argmax(logits, axis=-1) # just pick the indices that has the maximum values return metric.compute(predictions=predictions, references=labels) ``` After defining metric calculation and model initialization function, we can load the data: ```python file_name = "dataset-11.csv" dataset_path = os.path.join('data/processed', file_name) dataset = pd.read_csv(dataset_path) ``` I also defined two dictionaries for mapping labels to indices and indices to labels. ```python idx_to_label = dict(enumerate(dataset.label.unique())) labels_dict = {v:k for k,v in idx_to_label.items()} ``` Now we can define the search algorithm and the scheduler for the hyper-parameter-search. ```python scheduler = ASHAScheduler(metric='objective', mode='max') search_algorithm = HyperOptSearch(metric='objective', mode='max', random_state_seed=SEED) # number of runs for parameter searching n_trials = 40 ``` We also need to tokenize the text data before passing it to the model, we can easily do this by using the loaded tokenizer. Ray Tune works in a black-box setting so I used tokenizer as a default argument for a work-around. Otherwise, an error about tokenizer definition would arise. ```python def tokenize(sample, tokenizer=tokenizer): tokenized_sample = tokenizer(sample['text'], padding=True, truncation=True) tokenized_sample['label'] = sample['label'] return tokenized_sample ``` Another utility function that returns stratified and tokenized Torch dataset splits: ```python def prepare_datasets(dataset_df, test_size=.2, val_size=.2): train_set, test_set = train_test_split(dataset_df, test_size=test_size, stratify=dataset_df.label, random_state=SEED) train_set, val_set = train_test_split(train_set, test_size=val_size, stratify=train_set.label, random_state=SEED) # shuffle the dataframes beforehand train_set = train_set.sample(frac=1, random_state=SEED) val_set = val_set.sample(frac=1, random_state=SEED) test_set = test_set.sample(frac=1, random_state=SEED) # convert dataframes to torch datasets train_dataset = TextClassificationDataset(train_set) val_dataset = TextClassificationDataset(val_set) test_dataset = TextClassificationDataset(test_set) # tokenize the datasets tokenized_train_set = train_dataset.map(tokenize) tokenized_val_set = val_dataset.map(tokenize) tokenized_test_set = test_dataset.map(tokenize) # finally return the processed sets return tokenized_train_set, tokenized_val_set, tokenized_test_set ``` Now we can perform the search! Let’s start by processing the data: ```python tokenized_train_set, tokenized_val_set, tokenized_test_set = prepare_datasets(dataset) training_args = TrainingArguments( 'trial_results', evaluation_strategy="steps", disable_tqdm=True, skip_memory_metrics=True, ) trainer = Trainer( args=training_args, tokenizer=tokenizer, train_dataset=tokenized_train_set, eval_dataset=tokenized_val_set, model_init=model_init, compute_metrics=compute_metrics ) best_run = trainer.hyperparameter_search( direction="maximize", n_trials=n_trials, backend="ray", search_alg=search_algorithm, scheduler=scheduler ) ``` We performed the search with 20 and 40 trials respectively, the results are shown below. The weighted average of F1, Recall, and Precision scores for 20 runs. 
![](assets/59_opinion-classification-with-kili/12.png)

The weighted average of F1, Recall, and Precision scores for 40 runs.

![](assets/59_opinion-classification-with-kili/13.png)

The performance spiked at the third dataset version: at some point during labeling, I had mistakenly introduced too much bias into the dataset. As we can see, the performance becomes more reasonable later on, as the sample variance increases. The final model is saved on Google Drive and can be downloaded from [here](https://drive.google.com/drive/folders/1X_ci2Pwu0-1XbXsaCksQHZF0254TIHiD?usp=sharing); it is also possible to download it via the [download_models.py](https://github.com/alperiox/review-classification-kili-hf-automl/tree/master/scripts) script.

## Final Analysis

We can now use the fine-tuned model to conduct the final analysis. All we have to do is load the data, process it, and get the prediction results from the model. Then we can use a pre-trained model for sentiment analysis and hopefully get insights.

We used Google Colab for the inference ([here](https://colab.research.google.com/drive/1kGYl_YcMmA2gj6HnYFzkcxSDNPlHjYaZ?usp=sharing)) and then exported the results to [result.csv](https://github.com/alperiox/review-classification-kili-hf-automl/tree/master/results), which can be found in the `results` folder of the GitHub repository. We then analyzed the results in another [Google Colaboratory notebook](https://colab.research.google.com/drive/1TOX7tqJ7SGbUDWwA_6D1y-U0aNNXY04Q?usp=sharing) for an interactive experience, so you can also explore them easily and interactively.

Let’s check the results now!

We can see that the given scores are highly positive. In general, the application is liked by the users.

![](assets/59_opinion-classification-with-kili/14.png)

This also matches the sentiment analysis: most of the reviews are positive, and the fewest reviews are classified as negative.

![](assets/59_opinion-classification-with-kili/15.png)

As we can see from above, the model's performance is understandable. Positive scores are dominantly higher than the others, just as the sentiment analysis graph shows.

As for the categories defined before, it seems that the model predicts most of the reviews are about users' experiences (excluding experiences related to other categories):

![](assets/59_opinion-classification-with-kili/16.png)

We can also see the sentiment predictions over the defined categories below:

![](assets/59_opinion-classification-with-kili/17.png)

We won't do a detailed analysis of the reviews; a basic understanding of potential problems will suffice. Therefore, it is enough to draw simple conclusions from the final data:

- It is understandable that most of the reviews about the subscription are negative. Paid content generally is not welcomed in mobile applications.
- There are many negative reviews about the interface. This may be a clue for further analysis. Maybe there is a misconception about features, or a feature doesn't work as users thought.
- People have generally liked the articles and most of them had good experiences.

Important note about the plot: we haven't filtered the reviews by application version. When we look at the results of the latest version (4.5), it seems that the interface of the application confuses the users or has annoying bugs.

![](assets/59_opinion-classification-with-kili/18.png)

## Conclusion

Now we can use the fine-tuned model to try to understand the potential shortcomings of the mobile application.
From there, it would be easier to analyze a specific feature. In this example, we used Hugging Face’s powerful APIs and AutoTrain along with Kili’s easy-to-use interface. The modeling with AutoTrain took just 30 minutes: it chose the models and trained them for our use case. AutoTrain is definitely much more efficient, since I spent considerably more time developing the model by myself.

All the code, datasets, and scripts can be found on [GitHub](https://github.com/alperiox/review-classification-kili-hf-automl). You can also try the [AutoTrain model](https://huggingface.co/alperiox/autonlp-user-review-classification-536415182).

While this is a valid starting point, we should collect more data and keep building better pipelines, which would lead to more reliable improvements.
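If you want to try the AutoTrain model linked above directly, a minimal sketch using the `transformers` pipeline looks like this (the model ID is the one from this post; the example sentence is made up for illustration):

```python
# a minimal sketch for trying the AutoTrain model mentioned in this post
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="alperiox/autonlp-user-review-classification-536415182",
)

# returns a list with the predicted category label and its score
print(classifier("I love the articles but the subscription is too expensive."))
```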
Accelerate Large Model Training using PyTorch Fully Sharded Data Parallel
smangrul
May 2, 2022
pytorch-fsdp
guide
https://huggingface.co/blog/pytorch-fsdp
# Accelerate Large Model Training using PyTorch Fully Sharded Data Parallel In this post we will look at how we can leverage **[Accelerate](https://github.com/huggingface/accelerate)** Library for training large models which enables users to leverage the latest features of **[PyTorch FullyShardedDataParallel (FSDP)](https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/)**. # Motivation 🤗 **With the ever increasing scale, size and parameters of the Machine Learning (ML) models, ML practitioners are finding it difficult to train or even load such large models on their hardware.** On one hand, it has been found that large models learn quickly (data and compute efficient) and are significantly more performant when compared to smaller models [1]; on the other hand, it becomes prohibitive to train such models on most of the available hardware. Distributed training is the key to enable training such large ML models. There have been major recent advances in the field of **Distributed Training at Scale**. Few the most notable advances are given below: 1. Data Parallelism using ZeRO - Zero Redundancy Optimizer [2] 1. Stage 1: Shards optimizer states across data parallel workers/GPUs 2. Stage 2: Shards optimizer states + gradients across data parallel workers/GPUs 3. Stage 3: Shards optimizer states + gradients + model parameters across data parallel workers/GPUs 4. CPU Offload: Offloads the gradients + optimizer states to CPU building on top of ZERO Stage 2 [3] 2. Tensor Parallelism [4]: Form of model parallelism wherein sharding parameters of individual layers with huge number of parameters across accelerators/GPUs is done in a clever manner to achieve parallel computation while avoiding expensive communication synchronization overheads. 3. Pipeline Parallelism [5]: Form of model parallelism wherein different layers of the model are put across different accelerators/GPUs and pipelining is employed to keep all the accelerators running simultaneously. Here, for instance, the second accelerator/GPU computes on the first micro-batch while the first accelerator/GPU computes on the second micro-batch. 4. 3D parallelism [3]: Employs Data Parallelism using ZERO + Tensor Parallelism + Pipeline Parallelism to train humongous models in the order of 100s of Billions of parameters. For instance, BigScience 176B parameters Language Model employ this [6]. In this post we will look at Data Parallelism using ZeRO and more specifically the latest PyTorch feature **[FullyShardedDataParallel (FSDP)](https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/)**. **[DeepSpeed](https://github.com/microsoft/deepspeed)** and **[FairScale](https://github.com/facebookresearch/fairscale/)** have implemented the core ideas of the ZERO paper. These have already been integrated in `transformers` Trainer and accompanied by great blog [Fit More and Train Faster With ZeRO via DeepSpeed and FairScale](https://huggingface.co/blog/zero-deepspeed-fairscale) [10]. PyTorch recently upstreamed the Fairscale FSDP into PyTorch Distributed with additional optimizations. # Accelerate 🚀: Leverage PyTorch FSDP without any code changes We will look at the task of Causal Language Modelling using GPT-2 Large (762M) and XL (1.5B) model variants. Below is the code for pre-training GPT-2 model. 
It is similar to the official causal language modeling example [here](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm_no_trainer.py) with the addition of 2 arguments `n_train` (2000) and `n_val` (500) to prevent preprocessing/training on entire data in order to perform quick proof of concept benchmarks. <a href="./assets/62_pytorch_fsdp/run_clm_no_trainer.py" target="_parent">run_clm_no_trainer.py</a> Sample FSDP config after running the command `accelerate config`: ```bash compute_environment: LOCAL_MACHINE deepspeed_config: {} distributed_type: FSDP fsdp_config: min_num_params: 2000 offload_params: false sharding_strategy: 1 machine_rank: 0 main_process_ip: null main_process_port: null main_training_function: main mixed_precision: 'no' num_machines: 1 num_processes: 2 use_cpu: false ``` ## Multi-GPU FSDP Here, we experiment on the Single-Node Multi-GPU setting. We compare the performance of Distributed Data Parallel (DDP) and FSDP in various configurations. First, GPT-2 Large(762M) model is used wherein DDP works with certain batch sizes without throwing Out Of Memory (OOM) errors. Next, GPT-2 XL (1.5B) model is used wherein DDP fails with OOM error even on batch size of 1. We observe that FSDP enables larger batch sizes for GPT-2 Large model and it enables training the GPT-2 XL model with decent batch size unlike DDP. **Hardware setup**: 2X24GB NVIDIA Titan RTX GPUs. Command for training GPT-2 Large Model (762M parameters): ```bash export BS=#`try with different batch sizes till you don't get OOM error, #i.e., start with larger batch size and go on decreasing till it fits on GPU` time accelerate launch run_clm_no_trainer.py \ --model_name_or_path gpt2-large \ --dataset_name wikitext \ --dataset_config_name wikitext-2-raw-v1 \ --per_device_train_batch_size $BS --per_device_eval_batch_size $BS --num_train_epochs 1 --block_size 12 ``` Sample FSDP Run: ![Sample FSDP Run](./assets/62_pytorch_fsdp/sample_fsdp_run.png) | Method | Batch Size Max ($BS) | Approx Train Time (minutes) | Notes | | --- | --- | --- | --- | | DDP (Distributed Data Parallel) | 7 | 15 | | | DDP + FP16 | 7 | 8 | | | FSDP with SHARD_GRAD_OP | 11 | 11 | | | FSDP with min_num_params = 1M + FULL_SHARD | 15 | 12 | | | FSDP with min_num_params = 2K + FULL_SHARD | 15 | 13 | | | FSDP with min_num_params = 1M + FULL_SHARD + Offload to CPU | 20 | 23 | | | FSDP with min_num_params = 2K + FULL_SHARD + Offload to CPU | 22 | 24 | | Table 1: Benchmarking FSDP on GPT-2 Large (762M) model With respect to DDP, from Table 1 we can observe that FSDP **enables larger batch sizes**, up to **2X-3X** without and with CPU offload setting, respectively. In terms of train time, DDP with mixed precision is the fastest followed by FSDP using ZERO Stage 2 and Stage 3, respectively. As the task of causal language modelling always has fixed context sequence length (--block_size), the train time speedup with FSDP wasn’t that great. For applications with dynamic batching, FSDP which enables larger batch sizes will likely have considerable speed up in terms of train time. FSDP mixed precision support currently has few [issues](https://github.com/pytorch/pytorch/issues/75676) with transformer. Once this is supported, the training time speed up will further improve considerably. 
### CPU Offloading to enable training humongous models that won’t fit the GPU memory Command for training GPT-2 XL Model (1.5B parameters): ```bash export BS=#`try with different batch sizes till you don't get OOM error, #i.e., start with larger batch size and go on decreasing till it fits on GPU` time accelerate launch run_clm_no_trainer.py \ --model_name_or_path gpt2-xl \ --dataset_name wikitext \ --dataset_config_name wikitext-2-raw-v1 \ --per_device_train_batch_size $BS --per_device_eval_batch_size $BS --num_train_epochs 1 --block_size 12 ``` | Method | Batch Size Max ($BS) | Num GPUs | Approx Train Time (Hours) | Notes | | --- | --- | --- | --- | --- | | DDP | 1 | 1 | NA | OOM Error RuntimeError: CUDA out of memory. Tried to allocate 40.00 MiB (GPU 0; 23.65 GiB total capacity; 22.27 GiB already allocated; 20.31 MiB free; 22.76 GiB reserved in total by PyTorch) | | DDP | 1 | 2 | NA | OOM Error RuntimeError: CUDA out of memory. Tried to allocate 40.00 MiB (GPU 0; 23.65 GiB total capacity; 22.27 GiB already allocated; 20.31 MiB free; 22.76 GiB reserved in total by PyTorch) | | DDP + FP16 | 1 | 1 | NA | OOM Error RuntimeError: CUDA out of memory. Tried to allocate 40.00 MiB (GPU 0; 23.65 GiB total capacity; 22.27 GiB already allocated; 20.31 MiB free; 22.76 GiB reserved in total by PyTorch) | | FSDP with min_num_params = 2K | 5 | 2 | 0.6 | | | FSDP with min_num_params = 2K + Offload to CPU | 10 | 1 | 3 | | | FSDP with min_num_params = 2K + Offload to CPU | 14 | 2 | 1.16 | | Table 2: Benchmarking FSDP on GPT-2 XL (1.5B) model From Table 2, we can observe that DDP (w and w/o fp16) isn’t even able to run with batch size of 1 and results in CUDA OOM error. FSDP with Zero-Stage 3 is able to be run on 2 GPUs with batch size of 5 (effective batch size =10 (5 X 2)). FSDP with CPU offload can further increase the max batch size to 14 per GPU when using 2 GPUs. **FSDP with CPU offload enables training GPT-2 1.5B model on a single GPU with a batch size of 10**. This enables ML practitioners with minimal compute resources to train such large models, thereby democratizing large model training. ## Capabilities and limitations of the FSDP Integration Let’s dive into the current support that Accelerate provides for FSDP integration and the known limitations. **Required PyTorch version for FSDP support**: PyTorch Nightly (or 1.12.0 if you read this after it has been released) as the model saving with FSDP activated is only available with recent fixes. **Configuration through CLI:** 1. **Sharding Strategy**: [1] FULL_SHARD, [2] SHARD_GRAD_OP 2. **Min Num Params**: FSDP's minimum number of parameters for Default Auto Wrapping. 3. **Offload Params**: Decides Whether to offload parameters and gradients to CPU. For more control, users can leverage the `FullyShardedDataParallelPlugin` wherein they can specify `auto_wrap_policy`, `backward_prefetch` and `ignored_modules`. After creating an instance of this class, users can pass it when creating the Accelerator object. For more information on these options, please refer to the PyTorch [FullyShardedDataParallel](https://github.com/pytorch/pytorch/blob/0df2e863fbd5993a7b9e652910792bd21a516ff3/torch/distributed/fsdp/fully_sharded_data_parallel.py#L236) code. Next, we will see the importance of the `min_num_params` config. Below is an excerpt from [8] detailing the importance of FSDP Auto Wrap Policy. 
![Importance of FSDP Auto Wrap Policy](./assets/62_pytorch_fsdp/auto_wrap_importance.png) (Source: [link](https://pytorch.org/tutorials/intermediate/FSDP_tutorial.html)) When using the `default_auto_wrap_policy`, a layer is wrapped in FSDP module if the number of parameters in that layer is more than the min_num_params . The code for finetuning BERT-Large (330M) model on the GLUE MRPC task is the official complete NLP example outlining how to properly use FSDP feature with the addition of utilities for tracking peak memory usage. [fsdp_with_peak_mem_tracking.py](https://github.com/huggingface/accelerate/tree/main/examples/by_feature/fsdp_with_peak_mem_tracking.py) We leverage the tracking functionality support in Accelerate to log the train and evaluation peak memory usage along with evaluation metrics. Below is the snapshot of the plots from wandb [run](https://wandb.ai/smangrul/FSDP-Test?workspace=user-smangrul). ![Wandb Run](./assets/62_pytorch_fsdp/wandb_run.png) We can observe that the DDP takes twice as much memory as FSDP with auto wrap. FSDP without auto wrap takes more memory than FSDP with auto wrap but considerably less than that of DDP. FSDP with auto wrap with min_num_params=2k takes marginally less memory when compared to setting with min_num_params=1M. This highlights the importance of the FSDP Auto Wrap Policy and users should play around with the `min_num_params` to find the setting which considerably saves memory and isn’t resulting in lot of communication overhead. PyTorch team is working on auto tuning tool for this config as mentioned in [8]. ### **Few caveats to be aware of** - PyTorch FSDP auto wraps sub-modules, flattens the parameters and shards the parameters in place. Due to this, any optimizer created before model wrapping gets broken and occupies more memory. Hence, it is highly recommended and efficient to prepare model before creating optimizer. `Accelerate` will automatically wrap the model and create an optimizer for you in case of single model with a warning message. > FSDP Warning: When using FSDP, it is efficient and recommended to call prepare for the model before creating the optimizer > However, below is the recommended way to prepare model and optimizer while using FSDP: ```diff model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", return_dict=True) + model = accelerator.prepare(model) optimizer = torch.optim.AdamW(params=model.parameters(), lr=lr) - model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(model, - optimizer, train_dataloader, eval_dataloader, lr_scheduler - ) + optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare( + optimizer, train_dataloader, eval_dataloader, lr_scheduler + ) ``` - In case of a single model, if you have created optimizer with multiple parameter groups and called prepare with them together, then the parameter groups will be lost and the following warning is displayed: > FSDP Warning: When using FSDP, several parameter groups will be conflated into a single one due to nested module wrapping and parameter flattening. > This is because parameter groups created before wrapping will have no meaning post wrapping due parameter flattening of nested FSDP modules into 1D arrays (which can consume many layers). For instance, below are the named parameters of FSDP model on GPU 0 (When using 2 GPUs. Around 55M (110M/2) params in 1D arrays as this will have the 1st shard of the parameters). 
Here, if one has applied no weight decay for [bias, LayerNorm.weight] named parameters of unwrapped BERT-Base model, it can’t be applied to the below FSDP wrapped model as there are no named parameters with either of those strings and the parameters of those layers are concatenated with parameters of various other layers. More details mentioned in this [issue](https://github.com/pytorch/pytorch/issues/76501) (`The original model parameters' .grads are not set, meaning that they cannot be optimized separately (which is why we cannot support multiple parameter groups)`). ``` { '_fsdp_wrapped_module.flat_param': torch.Size([494209]), '_fsdp_wrapped_module._fpw_module.bert.embeddings.word_embeddings._fsdp_wrapped_module.flat_param': torch.Size([11720448]), '_fsdp_wrapped_module._fpw_module.bert.encoder._fsdp_wrapped_module.flat_param': torch.Size([42527232]) } ``` - In case of multiple models, it is necessary to prepare the models before creating optimizers else it will throw an error. - Mixed precision is currently not supported with FSDP as we wait for PyTorch to fix support for it. # How it works 📝 ![FSDP Workflow](./assets/62_pytorch_fsdp/FSDP_workflow.png) (Source: [link](https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/)) The above workflow gives an overview of what happens behind the scenes when FSDP is activated. Let's first understand how DDP works and how FSDP improves it. In DDP, each worker/accelerator/GPU has a replica of the entire model parameters, gradients and optimizer states. Each worker gets a different batch of data, it goes through the forwards pass, a loss is computed followed by the backward pass to generate gradients. Now, an all-reduce operation is performed wherein each worker gets the gradients from the remaining workers and averaging is done. In this way, each worker now has the same global gradients which are used by the optimizer to update the model parameters. We can see that having full replicas consume a lot of redundant memory on each GPU, which limits the batch size as well as the size of the models. FSDP precisely addresses this by sharding the optimizer states, gradients and model parameters across the data parallel workers. It further facilitates CPU offloading of all those tensors, thereby enabling loading large models which won't fit the available GPU memory. Similar to DDP, each worker gets a different batch of data. During the forward pass, if the CPU offload is enabled, the parameters of the local shard are first copied to the GPU/accelerator. Then, each worker performs all-gather operation for a given FSDP wrapped module/layer(s) to all get the needed parameters, computation is performed followed by releasing/emptying the parameter shards of other workers. This continues for all the FSDP modules. The loss gets computed after the forward pass and during the backward pass, again an all-gather operation is performed to get all the needed parameters for a given FSDP module, computation is performed to get local gradients followed by releasing the shards of other workers. Now, the local gradients are averaged and sharded to each relevant workers using reduce-scatter operation. This allows each worker to update the parameters of its local shard. If CPU offload is activated, the gradients are passed to CPU for updating parameters directly on CPU. Please refer [7, 8, 9] for all the in-depth details on the workings of the PyTorch FSDP and the extensive experimentation carried out using this feature. 
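As a closing note on the Accelerate integration, here is a minimal, hedged sketch of configuring FSDP programmatically via the `FullyShardedDataParallelPlugin` mentioned earlier, instead of through `accelerate config`. The field names follow the options discussed above (`auto_wrap_policy`, `backward_prefetch`, `ignored_modules`), but the exact signature may differ between `accelerate` versions, so check the installed version before relying on it:

```python
# sketch only: pass an FSDP plugin when creating the Accelerator
from accelerate import Accelerator, FullyShardedDataParallelPlugin

# the defaults are usually fine; the fields are shown here only to illustrate
# the knobs described in the capabilities section above
fsdp_plugin = FullyShardedDataParallelPlugin(
    auto_wrap_policy=None,    # custom wrapping policy, if any
    backward_prefetch=None,   # e.g. a torch.distributed.fsdp.BackwardPrefetch value
    ignored_modules=None,     # modules you do not want FSDP to wrap
)

accelerator = Accelerator(fsdp_plugin=fsdp_plugin)

# remember the caveat above: prepare the model *before* creating the optimizer
# model = accelerator.prepare(model)
# optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
```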
# Issues If you encounter any issues with the integration part of PyTorch FSDP, please open an Issue in [accelerate](https://github.com/huggingface/accelerate/issues). But if you have problems with PyTorch FSDP configuration, and deployment - you need to ask the experts in their domains, therefore, please, open a [PyTorch Issue](https://github.com/pytorch/pytorch/issues) instead. # References [1] [Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers](http://nlp.cs.berkeley.edu/pubs/Li-Wallace-Shen-Lin-Keutzer-Klein-Gonzalez_2020_Transformers_paper.pdf) [2] [ZeRO: Memory Optimizations Toward Training Trillion Parameter Models](https://arxiv.org/pdf/1910.02054v3.pdf) [3] [DeepSpeed: Extreme-scale model training for everyone - Microsoft Research](https://www.microsoft.com/en-us/research/blog/deepspeed-extreme-scale-model-training-for-everyone/) [4] [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/pdf/1909.08053.pdf) [5] [Introducing GPipe, an Open Source Library for Efficiently Training Large-scale Neural Network Models](https://ai.googleblog.com/2019/03/introducing-gpipe-open-source-library.html) [6] [Which hardware do you need to train a 176B parameters model?](https://bigscience.huggingface.co/blog/which-hardware-to-train-a-176b-parameters-model) [7] [Introducing PyTorch Fully Sharded Data Parallel (FSDP) API | PyTorch](https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/) [8] [Getting Started with Fully Sharded Data Parallel(FSDP) — PyTorch Tutorials 1.11.0+cu102 documentation](https://pytorch.org/tutorials/intermediate/FSDP_tutorial.html) [9] [Training a 1 Trillion Parameter Model With PyTorch Fully Sharded Data Parallel on AWS | by PyTorch | PyTorch | Mar, 2022 | Medium](https://medium.com/pytorch/training-a-1-trillion-parameter-model-with-pytorch-fully-sharded-data-parallel-on-aws-3ac13aa96cff) [10] [Fit More and Train Faster With ZeRO via DeepSpeed and FairScale](https://huggingface.co/blog/zero-deepspeed-fairscale)
An Introduction to Deep Reinforcement Learning
ThomasSimonini
May 4, 2022
deep-rl-intro
rl
https://huggingface.co/blog/deep-rl-intro
# An Introduction to Deep Reinforcement Learning <h2>Chapter 1 of the <a href="https://github.com/huggingface/deep-rl-class">Deep Reinforcement Learning Class with Hugging Face 🤗</a></h2> ⚠️ A **new updated version of this article is available here** 👉 [https://huggingface.co/deep-rl-course/unit1/introduction](https://huggingface.co/deep-rl-course/unit1/introduction) *This article is part of the Deep Reinforcement Learning Class. A free course from beginner to expert. Check the syllabus [here.](https://huggingface.co/deep-rl-course/unit0/introduction)* <img src="assets/63_deep_rl_intro/thumbnail.png" alt="Thumbnail"/> --- ⚠️ A **new updated version of this article is available here** 👉 [https://huggingface.co/deep-rl-course/unit1/introduction](https://huggingface.co/deep-rl-course/unit1/introduction) *This article is part of the Deep Reinforcement Learning Class. A free course from beginner to expert. Check the syllabus [here.](https://huggingface.co/deep-rl-course/unit0/introduction)* Welcome to the most fascinating topic in Artificial Intelligence: **Deep Reinforcement Learning.** Deep RL is a type of Machine Learning where an agent learns **how to behave** in an environment **by performing actions** and **seeing the results.** Since 2013 and the [Deep Q-Learning paper](https://www.cs.toronto.edu/~vmnih/docs/dqn.pdf), we’ve seen a lot of breakthroughs. From OpenAI [five that beat some of the best Dota2 players of the world,](https://www.twitch.tv/videos/293517383) to the [Dexterity project](https://openai.com/blog/learning-dexterity/), we **live in an exciting moment in Deep RL research.** <figure class="image table text-center m-0 w-full"> <img src="assets/63_deep_rl_intro/OpenAIFive.jpg" alt="OpenAI Five, an AI that beat some of the best Dota2 players in the world"/> <figcaption>OpenAI Five, an AI <a href="https://www.twitch.tv/videos/293517383">that beat some of the best Dota2 players in the world</a></figcaption> </figure> Moreover, since 2018, **you have now, access to so many amazing environments and libraries to build your agents.** That’s why this is the best moment to start learning, and with this course **you’re in the right place.** Yes, because this article is the first unit of [Deep Reinforcement Learning Class](https://github.com/huggingface/deep-rl-class), a **free class from beginner to expert** where you’ll learn the theory and practice using famous Deep RL libraries such as Stable Baselines3, RL Baselines3 Zoo and RLlib. In this free course, you will: - 📖 Study Deep Reinforcement Learning in **theory and practice**. - 🧑‍💻 Learn to **use famous Deep RL libraries** such as Stable Baselines3, RL Baselines3 Zoo, and RLlib. - 🤖 Train agents in **unique environments** such as [SnowballFight](https://huggingface.co/spaces/ThomasSimonini/SnowballFight), Huggy the Doggo 🐶, and classical ones such as Space Invaders and PyBullet. - 💾 Publish your trained agents **in one line of code to the Hub**. But also download powerful agents from the community. - 🏆 **Participate in challenges** where you will evaluate your agents against other teams. - 🖌️🎨 Learn to **share your environments** made with Unity and Godot. 
So in this first unit, **you’ll learn the foundations of Deep Reinforcement Learning.** And then, you'll train your first lander agent to **land correctly on the Moon 🌕 and upload it to the Hugging Face Hub, a free, open platform where people can share ML models, datasets and demos.** <figure class="image table text-center m-0 w-full"> <video alt="LunarLander" style="max-width: 70%; margin: auto;" autoplay loop autobuffer muted playsinline > <source src="assets/63_deep_rl_intro/lunarlander.mp4" type="video/mp4"> </video> </figure> It’s essential **to master these elements** before diving into implementing Deep Reinforcement Learning agents. The goal of this chapter is to give you solid foundations. If you prefer, you can watch the 📹 video version of this chapter : <iframe width="560" height="315" src="https://www.youtube.com/embed/q0BiUn5LiBc?start=127" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> So let’s get started! 🚀 - [What is Reinforcement Learning?](#what-is-reinforcement-learning) - [The big picture](#the-big-picture) - [A formal definition](#a-formal-definition) - [The Reinforcement Learning Framework](#the-reinforcement-learning-framework) - [The RL Process](#the-rl-process) - [The reward hypothesis: the central idea of Reinforcement Learning](#the-reward-hypothesis-the-central-idea-of-reinforcement-learning) - [Markov Property](#markov-property) - [Observations/States Space](#observationsstates-space) - [Action Space](#action-space) - [Rewards and the discounting](#rewards-and-the-discounting) - [Type of tasks](#type-of-tasks) - [Exploration/ Exploitation tradeoff](#exploration-exploitation-tradeoff) - [The two main approaches for solving RL problems](#the-two-main-approaches-for-solving-rl-problems) - [The Policy π: the agent’s brain](#the-policy-π-the-agents-brain) - [Policy-Based Methods](#policy-based-methods) - [Value-based methods](#value-based-methods) - [The “Deep” in Reinforcement Learning](#the-deep-in-reinforcement-learning) ## **What is Reinforcement Learning?** To understand Reinforcement Learning, let’s start with the big picture. ### **The big picture** The idea behind Reinforcement Learning is that an agent (an AI) will learn from the environment by **interacting with it** (through trial and error) and **receiving rewards** (negative or positive) as feedback for performing actions. Learning from interaction with the environment **comes from our natural experiences.** For instance, imagine putting your little brother in front of a video game he never played, a controller in his hands, and letting him alone. <figure class="image table text-center m-0 w-full"> <img src="assets/63_deep_rl_intro/Illustration_1.jpg" alt="Illustration_1"/> </figure> Your brother will interact with the environment (the video game) by pressing the right button (action). He got a coin, that’s a +1 reward. It’s positive, he just understood that in this game **he must get the coins.** <figure class="image table text-center m-0 w-full"> <img src="assets/63_deep_rl_intro/Illustration_2.jpg" alt="Illustration_2"/> </figure> But then, **he presses right again** and he touches an enemy, he just died -1 reward. 
<figure class="image table text-center m-0 w-full"> <img src="assets/63_deep_rl_intro/Illustration_3.jpg" alt="Illustration_3"/> </figure> By interacting with his environment through trial and error, your little brother understood that **he needed to get coins in this environment but avoid the enemies.** **Without any supervision**, the child will get better and better at playing the game. That’s how humans and animals learn, **through interaction.** Reinforcement Learning is just a **computational approach of learning from action.** ### **A formal definition** If we take now a formal definition: > Reinforcement learning is a framework for solving control tasks (also called decision problems) by building agents that learn from the environment by interacting with it through trial and error and receiving rewards (positive or negative) as unique feedback. > ⇒ But how Reinforcement Learning works? ## **The Reinforcement Learning Framework** ### **The RL Process** <figure class="image table text-center m-0 w-full"> <img src="assets/63_deep_rl_intro/RL_process.jpg" alt="The RL process"/> <figcaption>The RL Process: a loop of state, action, reward and next state</figcaption> <figcaption>Source: <a href="http://incompleteideas.net/book/RLbook2020.pdf">Reinforcement Learning: An Introduction, Richard Sutton and Andrew G. Barto</a></figcaption> </figure> To understand the RL process, let’s imagine an agent learning to play a platform game: <figure class="image table text-center m-0 w-full"> <img src="assets/63_deep_rl_intro/RL_process_game.jpg" alt="The RL process"/> </figure> - Our Agent receives **state \\(S_0\\)** from the **Environment** — we receive the first frame of our game (Environment). - Based on that **state \\(S_0\\),** the Agent takes **action \\(A_0\\)** — our Agent will move to the right. - Environment goes to a **new** **state \\(S_1\\)** — new frame. - The environment gives some **reward \\(R_1\\)** to the Agent — we’re not dead *(Positive Reward +1)*. This RL loop outputs a sequence of **state, action, reward and next state.** <figure class="image table text-center m-0 w-full"> <img src="assets/63_deep_rl_intro/sars.jpg" alt="State, Action, Reward, Next State"/> </figure> The agent's goal is to maximize its cumulative reward, **called the expected return.** ### **The reward hypothesis: the central idea of Reinforcement Learning** ⇒ Why is the goal of the agent to maximize the expected return? Because RL is based on the **reward hypothesis**, which is that all goals can be described as the **maximization of the expected return** (expected cumulative reward). That’s why in Reinforcement Learning, **to have the best behavior,** we need to **maximize the expected cumulative reward.** ### **Markov Property** In papers, you’ll see that the RL process is called the **Markov Decision Process** (MDP). We’ll talk again about the Markov Property in the following units. But if you need to remember something today about it, Markov Property implies that our agent needs **only the current state to decide** what action to take and **not the history of all the states** **and actions** they took before. ### **Observations/States Space** Observations/States are the **information our agent gets from the environment.** In the case of a video game, it can be a frame (a screenshot). In the case of the trading agent, it can be the value of a certain stock, etc. 
There is a differentiation to make between *observation* and *state*: - *State s*: is **a complete description of the state of the world** (there is no hidden information). In a fully observed environment. <figure class="image table text-center m-0 w-full"> <img class="center" src="assets/63_deep_rl_intro/chess.jpg" alt="Chess"/> <figcaption>In chess game, we receive a state from the environment since we have access to the whole check board information.</figcaption> </figure> In chess game, we receive a state from the environment since we have access to the whole check board information. With a chess game, we are in a fully observed environment, since we have access to the whole check board information. - *Observation o*: is a **partial description of the state.** In a partially observed environment. <figure class="image table text-center m-0 w-full"> <img class="center" src="assets/63_deep_rl_intro/mario.jpg" alt="Mario"/> <figcaption>In Super Mario Bros, we only see a part of the level close to the player, so we receive an observation.</figcaption> </figure> In Super Mario Bros, we only see a part of the level close to the player, so we receive an observation. In Super Mario Bros, we are in a partially observed environment. We receive an observation **since we only see a part of the level.** > In reality, we use the term state in this course but we will make the distinction in implementations. > To recap: <figure class="image table text-center m-0 w-full"> <img src="assets/63_deep_rl_intro/obs_space_recap.jpg" alt="Obs space recap"/> </figure> ### Action Space The Action space is the set of **all possible actions in an environment.** The actions can come from a *discrete* or *continuous space*: - *Discrete space*: the number of possible actions is **finite**. <figure class="image table image-center text-center m-0 w-full"> <img class="center" src="assets/63_deep_rl_intro/mario.jpg" alt="Mario"/> <figcaption>Again, in Super Mario Bros, we have only 4 directions and jump possible</figcaption> </figure> In Super Mario Bros, we have a finite set of actions since we have only 4 directions and jump. - *Continuous space*: the number of possible actions is **infinite**. <figure class="image table text-center m-0 w-full"> <img src="assets/63_deep_rl_intro/self_driving_car.jpg" alt="Self Driving Car"/> <figcaption>A Self Driving Car agent has an infinite number of possible actions since it can turn left 20°, 21,1°, 21,2°, honk, turn right 20°… </figcaption> </figure> To recap: <figure class="image table text-center m-0 w-full"> <img src="assets/63_deep_rl_intro/action_space.jpg" alt="Recap action space"/> </figcaption> </figure> Taking this information into consideration is crucial because it will **have importance when choosing the RL algorithm in the future.** ### **Rewards and the discounting** The reward is fundamental in RL because it’s **the only feedback** for the agent. Thanks to it, our agent knows **if the action taken was good or not.** The cumulative reward at each time step t can be written as: <figure class="image table text-center m-0 w-full"> <img src="assets/63_deep_rl_intro/rewards_1.jpg" alt="Rewards"/> <figcaption>The cumulative reward equals to the sum of all rewards of the sequence. </figcaption> </figure> Which is equivalent to: <figure class="image table text-center m-0 w-full"> <img src="assets/63_deep_rl_intro/rewards_2.jpg" alt="Rewards"/> <figcaption>The cumulative reward = rt+1 (rt+k+1 = rt+0+1 = rt+1)+ rt+2 (rt+k+1 = rt+1+1 = rt+2) + ... 
</figcaption>
</figure>

However, in reality, **we can’t just add them like that.** The rewards that come sooner (at the beginning of the game) **are more likely to happen** since they are more predictable than the long-term future reward.

Let’s say your agent is this tiny mouse that can move one tile each time step, and your opponent is the cat (that can move too). Your goal is **to eat the maximum amount of cheese before being eaten by the cat.**

<figure class="image table text-center m-0 w-full">
  <img src="assets/63_deep_rl_intro/rewards_3.jpg" alt="Rewards"/>
</figure>

As we can see in the diagram, **it’s more probable to eat the cheese near us than the cheese close to the cat** (the closer we are to the cat, the more dangerous it is).

Consequently, **the reward near the cat, even if it is bigger (more cheese), will be more discounted** since we’re not really sure we’ll be able to eat it.

To discount the rewards, we proceed like this:

1. We define a discount rate called gamma. **It must be between 0 and 1.** Most of the time it is between **0.95 and 0.99**.
  - The larger the gamma, the smaller the discount. This means our agent **cares more about the long-term reward.**
  - On the other hand, the smaller the gamma, the bigger the discount. This means our **agent cares more about the short-term reward (the nearest cheese).**
2. Then, each reward will be discounted by gamma to the exponent of the time step. As the time step increases, the cat gets closer to us, **so the future reward is less and less likely to happen.**

Our discounted expected cumulative reward is:

<figure class="image table text-center m-0 w-full">
  <img src="assets/63_deep_rl_intro/rewards_4.jpg" alt="Rewards"/>
</figure>

### Types of tasks

A task is an **instance** of a Reinforcement Learning problem. We can have two types of tasks: episodic and continuing.

#### Episodic task

In this case, we have a starting point and an ending point **(a terminal state). This creates an episode**: a list of States, Actions, Rewards, and new States.

For instance, think about Super Mario Bros: an episode begins at the launch of a new Mario level and ends **when you’re killed or when you reach the end of the level.**

<figure class="image table text-center m-0 w-full">
  <img class="center" src="assets/63_deep_rl_intro/mario.jpg" alt="Mario"/>
  <figcaption>Beginning of a new episode.</figcaption>
</figure>

#### Continuing tasks

These are tasks that continue forever (no terminal state). In this case, the agent must **learn how to choose the best actions and simultaneously interact with the environment.**

For instance, think of an agent that does automated stock trading. For this task, there is no starting point and no terminal state. **The agent keeps running until we decide to stop it.**

<figure class="image table text-center m-0 w-full">
  <img src="assets/63_deep_rl_intro/stock.jpg" alt="Stock Market"/>
</figure>

<figure class="image table text-center m-0 w-full">
  <img src="assets/63_deep_rl_intro/tasks.jpg" alt="Tasks recap"/>
</figure>

## **Exploration/Exploitation trade-off**

Finally, before looking at the different methods to solve Reinforcement Learning problems, we must cover one more very important topic: *the exploration/exploitation trade-off.*

- Exploration is exploring the environment by trying random actions in order to **find more information about the environment.**
- Exploitation is **exploiting known information to maximize the reward.**

Remember, the goal of our RL agent is to maximize the expected cumulative reward.
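To make the discounted return from the previous section concrete, here is a minimal sketch (not from the course material) of how it could be computed for a list of collected rewards:

```python
# G_t = R_{t+1} + gamma * R_{t+2} + gamma^2 * R_{t+3} + ...
def discounted_return(rewards, gamma=0.99):
    """Discounted sum of the rewards collected after time step t."""
    return sum(gamma**k * r for k, r in enumerate(rewards))

# The same +10 cheese reward is worth less the further away (and riskier) it is:
print(discounted_return([10, 0, 0], gamma=0.95))  # 10.0  -> reward right now
print(discounted_return([0, 0, 10], gamma=0.95))  # ~9.02 -> same reward two steps later
```

Whatever the exact rewards, the agent is always trying to act so that this quantity, the expected cumulative reward, is as large as possible.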
However, **we can fall into a common trap**. Let’s take an example:

<figure class="image table text-center m-0 w-full">
  <img src="assets/63_deep_rl_intro/exp_1.jpg" alt="Exploration"/>
</figure>

In this game, our mouse can have an **infinite amount of small cheese** (+1 each). But at the top of the maze, there is a gigantic sum of cheese (+1000).

However, if we only focus on exploitation, our agent will never reach the gigantic sum of cheese. Instead, it will only exploit **the nearest source of rewards,** even if this source is small (exploitation).

But if our agent does a little bit of exploration, it can **discover the big reward** (the pile of big cheese).

This is what we call the exploration/exploitation trade-off. We need to balance how much we **explore the environment** and how much we **exploit what we know about the environment.**

Therefore, we must **define a rule that helps to handle this trade-off**. We’ll see different ways to handle it in future chapters.

If it’s still confusing, **think of a real problem: choosing a restaurant:**

<figure class="image table text-center m-0 w-full">
  <img src="assets/63_deep_rl_intro/exp_2.jpg" alt="Exploration"/>
  <figcaption>Source: <a href="http://rail.eecs.berkeley.edu/deeprlcourse-fa17/f17docs/lecture_13_exploration.pdf">Berkeley AI Course</a></figcaption>
</figure>

- *Exploitation*: You go to the same restaurant every day because you know it is good, and **you risk missing out on a better one.**
- *Exploration*: You try restaurants you have never been to before, with the risk of a bad experience **but also the chance of a fantastic one.**

To recap:

<figure class="image table text-center m-0 w-full">
  <img src="assets/63_deep_rl_intro/expexpltradeoff.jpg" alt="Exploration Exploitation Tradeoff"/>
</figure>

## **The two main approaches for solving RL problems**

⇒ Now that we have learned the RL framework, how do we solve the RL problem? In other words, how do we build an RL agent that can **select the actions that maximize its expected cumulative reward?**

### **The Policy π: the agent’s brain**

The Policy **π** is the **brain of our Agent**: it’s the function that tells us what **action to take given the state we are in.** So it **defines the agent’s behavior** at a given time.

<figure class="image table text-center m-0 w-full">
  <img src="assets/63_deep_rl_intro/policy_1.jpg" alt="Policy"/>
  <figcaption>Think of the policy as the brain of our agent, the function that tells us the action to take given a state</figcaption>
</figure>

This Policy **is the function we want to learn.** Our goal is to find the optimal policy **π*, the policy that** maximizes the **expected return** when the agent acts according to it. We find this **π* through training.**

There are two approaches to train our agent to find this optimal policy π*:

- **Directly,** by teaching the agent to learn which **action to take,** given the state it is in: **Policy-Based Methods.**
- **Indirectly,** by teaching the agent to learn **which state is more valuable** and then take the action that **leads to the more valuable states**: **Value-Based Methods.**

### **Policy-Based Methods**

In Policy-Based Methods, **we learn a policy function directly.**

This function will map each state to the best corresponding action at that state.
**Or to a probability distribution over the set of possible actions at that state.**

<figure class="image table text-center m-0 w-full">
  <img src="assets/63_deep_rl_intro/policy_2.jpg" alt="Policy"/>
  <figcaption>As we can see here, the policy (deterministic) <b>directly indicates the action to take for each step.</b></figcaption>
</figure>

We have two types of policies:

- *Deterministic*: a policy at a given state **will always return the same action.**

<figure class="image table text-center m-0 w-full">
  <img src="assets/63_deep_rl_intro/policy_3.jpg" alt="Policy"/>
  <figcaption>action = policy(state)</figcaption>
</figure>

<figure class="image table text-center m-0 w-full">
  <img src="assets/63_deep_rl_intro/policy_4.jpg" alt="Policy"/>
</figure>

- *Stochastic*: outputs **a probability distribution over actions.**

<figure class="image table text-center m-0 w-full">
  <img src="assets/63_deep_rl_intro/policy_5.jpg" alt="Policy"/>
  <figcaption>policy(actions | state) = probability distribution over the set of actions given the current state</figcaption>
</figure>

<figure class="image table text-center m-0 w-full">
  <img class="center" src="assets/63_deep_rl_intro/mario.jpg" alt="Mario"/>
  <figcaption>Given an initial state, our stochastic policy will output probability distributions over the possible actions at that state.</figcaption>
</figure>

If we recap:

<figure class="image table text-center m-0 w-full">
  <img src="assets/63_deep_rl_intro/pbm_1.jpg" alt="Pbm recap"/>
</figure>

<figure class="image table text-center m-0 w-full">
  <img src="assets/63_deep_rl_intro/pbm_2.jpg" alt="Pbm recap"/>
</figure>

### **Value-based methods**

In Value-based methods, instead of training a policy function, we **train a value function** that maps a state to the expected value **of being at that state.**

The value of a state is the **expected discounted return** the agent can get if it **starts in that state and then acts according to our policy.**

“Act according to our policy” just means that our policy is **“going to the state with the highest value”.**

<figure class="image table text-center m-0 w-full">
  <img src="assets/63_deep_rl_intro/value_1.jpg" alt="Value based RL"/>
</figure>

Here we see that our value function **defines a value for each possible state.**

<figure class="image table text-center m-0 w-full">
  <img src="assets/63_deep_rl_intro/value_2.jpg" alt="Value based RL"/>
  <figcaption>Thanks to our value function, at each step our policy will select the state with the biggest value defined by the value function: -7, then -6, then -5 (and so on) to attain the goal.</figcaption>
</figure>

If we recap:

<figure class="image table text-center m-0 w-full">
  <img src="assets/63_deep_rl_intro/vbm_1.jpg" alt="Vbm recap"/>
</figure>

<figure class="image table text-center m-0 w-full">
  <img src="assets/63_deep_rl_intro/vbm_2.jpg" alt="Vbm recap"/>
</figure>
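To illustrate the value-based idea with a toy example (a sketch for this post, not course code), suppose we already know a value for each state of a small maze, like in the maze figure above. Acting then amounts to a greedy step: move to the reachable state with the highest value. The states, values, and layout below are made up for illustration:

```python
# Hypothetical values for the states of a tiny maze (higher = closer to the goal)
state_values = {"A": -7, "B": -6, "C": -5, "D": -4, "goal": 0}

# Which states are reachable from each state (made-up layout)
neighbors = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C", "goal"]}

def greedy_policy(state):
    """Value-based acting: move to the neighboring state with the highest value."""
    return max(neighbors[state], key=lambda s: state_values[s])

state = "A"
while state != "goal":
    state = greedy_policy(state)
    print(state)  # prints B, C, D, goal (values -6, -5, -4, 0)
```

In value-based methods, learning means estimating these state values; the policy itself is just this greedy step. In policy-based methods, we would instead learn the mapping from states to actions (or to distributions over actions) directly.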
## **The “Deep” in Reinforcement Learning**

⇒ What we’ve talked about so far is Reinforcement Learning. But where does the “Deep” come into play?

Deep Reinforcement Learning introduces **deep neural networks to solve Reinforcement Learning problems** — hence the name “deep”.

For instance, in the next article, we’ll work on Q-Learning (classic Reinforcement Learning) and then Deep Q-Learning. Both are value-based RL algorithms.

You’ll see the difference is that in the first approach, **we use a traditional algorithm** to create a Q table that helps us find what action to take for each state.

In the second approach, **we will use a Neural Network** (to approximate the Q-values).

<figure class="image table text-center m-0 w-full">
  <img src="assets/63_deep_rl_intro/deep.jpg" alt="Value based RL"/>
  <figcaption>Schema inspired by the Q learning notebook by Udacity</figcaption>
</figure>

If you are not familiar with Deep Learning, you should definitely watch <a href="https://course.fast.ai/">the fastai Practical Deep Learning for Coders course (free)</a>.

That was a lot of information. If we summarize:

- Reinforcement Learning is a computational approach to learning from actions. We build an agent that learns from the environment **by interacting with it through trial and error** and receiving rewards (negative or positive) as feedback.
- The goal of any RL agent is to maximize its expected cumulative reward (also called expected return) because RL is based on the **reward hypothesis**, which is that **all goals can be described as the maximization of the expected cumulative reward.**
- The RL process is a loop that outputs a sequence of **state, action, reward and next state.**
- To calculate the expected cumulative reward (expected return), we discount the rewards: the rewards that come sooner (at the beginning of the game) **are more likely to happen since they are more predictable than the long-term future reward.**
- To solve an RL problem, you want to **find an optimal policy**. The policy is the “brain” of your agent: it tells us **what action to take given a state.** The optimal policy is the one that **gives you the actions that maximize the expected return.**
- There are two ways to find your optimal policy:
  1. By training your policy directly: **policy-based methods.**
  2. By training a value function that tells us the expected return the agent will get at each state and using this function to define our policy: **value-based methods.**
- Finally, we speak about Deep RL because we introduce **deep neural networks to estimate the action to take (policy-based) or to estimate the value of a state (value-based)**, hence the name “deep.”

---
Now that you've studied the basics of Reinforcement Learning, you’re ready to train your first lander agent to **land correctly on the Moon 🌕 and share it with the community through the Hub** 🔥

<figure class="image table text-center m-0 w-full">
  <video alt="LunarLander" style="max-width: 70%; margin: auto;" autoplay loop autobuffer muted playsinline>
    <source src="assets/63_deep_rl_intro/lunarlander.mp4" type="video/mp4">
  </video>
</figure>

Start the tutorial here 👉 https://github.com/huggingface/deep-rl-class/blob/main/unit1/unit1.ipynb

And since the best way to learn and avoid the illusion of competence is **to test yourself**, we wrote a quiz to help you find where **you need to reinforce your study**.

Check your knowledge here 👉 https://github.com/huggingface/deep-rl-class/blob/main/unit1/quiz.md

Congrats on finishing this chapter! **That was the biggest one**, and there was a lot of information. And congrats on finishing the tutorial. You’ve just trained your first Deep RL agent and shared it on the Hub 🥳.

It’s **normal if you still feel confused** by all these elements. **This was the same for me and for everyone who has studied RL.**

Take time to really grasp the material before continuing.
It’s important to master these elements and have a solid foundation before entering the **fun part.**

We published additional readings in the syllabus if you want to go deeper 👉 https://github.com/huggingface/deep-rl-class/blob/main/unit1/README.md

Naturally, during the course, **we’re going to use and explain these terms again**, but it’s better to understand them before diving into the next chapters.

In the next chapter, [we’re going to learn about Q-Learning and dive deeper **into the value-based methods.**](https://huggingface.co/blog/deep-rl-q-part1)

And don't forget to share with your friends who want to learn 🤗!

Finally, we want **to improve and update the course iteratively with your feedback**. If you have some, please fill in this form 👉 https://forms.gle/3HgA7bEHwAmmLfwh9

### Keep learning, stay awesome,
Welcome fastai to the Hugging Face Hub
espejelomar
May 6, 2022
fastai
guide, open-source-collab, community
https://huggingface.co/blog/fastai
# Welcome fastai to the Hugging Face Hub ## Making neural nets uncool again... and sharing them <a target="_blank" href="https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/64_fastai_hub.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> Few have done as much as the [fast.ai](https://www.fast.ai/) ecosystem to make Deep Learning accessible. Our mission at Hugging Face is to democratize good Machine Learning. Let's make exclusivity in access to Machine Learning, including [pre-trained models](https://huggingface.co/models), a thing of the past and let's push this amazing field even further. fastai is an [open-source Deep Learning library](https://github.com/fastai/fastai) that leverages PyTorch and Python to provide high-level components to train fast and accurate neural networks with state-of-the-art outputs on text, vision, and tabular data. However, fast.ai, the company, is more than just a library; it has grown into a thriving ecosystem of open source contributors and people learning about neural networks. As some examples, check out their [book](https://github.com/fastai/fastbook) and [courses](https://course.fast.ai/). Join the fast.ai [Discord](https://discord.com/invite/YKrxeNn) and [forums](https://forums.fast.ai/). It is a guarantee that you will learn by being part of their community! Because of all this, and more (the writer of this post started his journey thanks to the fast.ai course), we are proud to announce that fastai practitioners can now share and upload models to Hugging Face Hub with a single line of Python. 👉 In this post, we will introduce the integration between fastai and the Hub. Additionally, you can open this tutorial as a [Colab notebook](https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/64_fastai_hub.ipynb). We want to thank the fast.ai community, notably [Jeremy Howard](https://twitter.com/jeremyphoward), [Wayde Gilliam](https://twitter.com/waydegilliam), and [Zach Mueller](https://twitter.com/TheZachMueller) for their feedback 🤗. This blog is heavily inspired by the [Hugging Face Hub section](https://docs.fast.ai/huggingface.html) in the fastai docs. ## Why share to the Hub? The Hub is a central platform where anyone can share and explore models, datasets, and ML demos. It has the most extensive collection of Open Source models, datasets, and demos. Sharing on the Hub amplifies the impact of your fastai models by making them available for others to download and explore. You can also use transfer learning with fastai models; load someone else's model as the basis for your task. Anyone can access all the fastai models in the Hub by filtering the [hf.co/models](https://huggingface.co/models?library=fastai&sort=downloads) webpage by the fastai library, as in the image below. ![Fastai Models in the Hub](assets/64_fastai/hf_hub_fastai.png) In addition to free model hosting and exposure to the broader community, the Hub has built-in [version control based on git](https://huggingface.co/docs/transformers/model_sharing#repository-features) (git-lfs, for large files) and [model cards](https://huggingface.co/docs/hub/models-cards) for discoverability and reproducibility. For more information on navigating the Hub, see [this introduction](https://github.com/huggingface/education-toolkit/blob/main/01_huggingface-hub-tour.md). ## Joining Hugging Face and installation To share models in the Hub, you will need to have a user. 
Create it on the [Hugging Face website](https://huggingface.co/join). The `huggingface_hub` library is a lightweight Python client with utility functions to interact with the Hugging Face Hub. To push fastai models to the hub, you need to have some libraries pre-installed (fastai>=2.4, fastcore>=1.3.27 and toml). You can install them automatically by specifying ["fastai"] when installing `huggingface_hub`, and your environment is good to go: ```bash pip install huggingface_hub["fastai"] ``` ## Creating a fastai `Learner` Here we train the [first model in the fastbook](https://github.com/fastai/fastbook/blob/master/01_intro.ipynb) to identify cats 🐱. We fully recommended reading the entire fastbook. ```py # Training of 6 lines in chapter 1 of the fastbook. from fastai.vision.all import * path = untar_data(URLs.PETS)/'images' def is_cat(x): return x[0].isupper() dls = ImageDataLoaders.from_name_func( path, get_image_files(path), valid_pct=0.2, seed=42, label_func=is_cat, item_tfms=Resize(224)) learn = vision_learner(dls, resnet34, metrics=error_rate) learn.fine_tune(1) ``` ## Sharing a `Learner` to the Hub A [`Learner` is a fastai object](https://docs.fast.ai/learner.html#Learner) that bundles a model, data loaders, and a loss function. We will use the words `Learner` and Model interchangeably throughout this post. First, log in to the Hugging Face Hub. You will need to create a `write` token in your [Account Settings](http://hf.co/settings/tokens). Then there are three options to log in: 1. Type `huggingface-cli login` in your terminal and enter your token. 2. If in a python notebook, you can use `notebook_login`. ```py from huggingface_hub import notebook_login notebook_login() ``` 3. Use the `token` argument of the `push_to_hub_fastai` function. You can input `push_to_hub_fastai` with the `Learner` you want to upload and the repository id for the Hub in the format of "namespace/repo_name". The namespace can be an individual account or an organization you have write access to (for example, 'fastai/stanza-de'). For more details, refer to the [Hub Client documentation](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/mixins#huggingface_hub.push_to_hub_fastai). ```py from huggingface_hub import push_to_hub_fastai # repo_id = "YOUR_USERNAME/YOUR_LEARNER_NAME" repo_id = "espejelomar/identify-my-cat" push_to_hub_fastai(learner=learn, repo_id=repo_id) ``` The `Learner` is now in the Hub in the repo named [`espejelomar/identify-my-cat`](https://huggingface.co/espejelomar/identify-my-cat). An automatic model card is created with some links and next steps. When uploading a fastai `Learner` (or any other model) to the Hub, it is helpful to edit its model card (image below) so that others better understand your work (refer to the [Hugging Face documentation](https://huggingface.co/docs/hub/models-cards)). ![Fastai Model Card](assets/64_fastai/hf_model_card.png) if you want to learn more about `push_to_hub_fastai` go to the [Hub Client Documentation](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/mixins#huggingface_hub.from_pretrained_fastai). There are some cool arguments you might be interested in 👀. Remember, your model is a [Git repository](https://huggingface.co/docs/transformers/model_sharing#repository-features) with all the advantages that this entails: version control, commits, branches... ## Loading a `Learner` from the Hugging Face Hub Loading a model from the Hub is even simpler. 
We will load our `Learner`, "espejelomar/identify-my-cat", and test it with a cat image (🦮?). This code is adapted from the [first chapter of the fastbook](https://github.com/fastai/fastbook/blob/master/01_intro.ipynb). First, upload an image of a cat (or possibly a dog?). The [Colab notebook with this tutorial](https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/64_fastai_hub.ipynb) uses `ipywidgets` to interactively upload a cat image (or not?). Here we will use this cute cat 🐅: ![Fastai Model Card](assets/64_fastai/cat.jpeg) Now let's load the `Learner` we just shared in the Hub and test it. ```py from huggingface_hub import from_pretrained_fastai # repo_id = "YOUR_USERNAME/YOUR_LEARNER_NAME" repo_id = "espejelomar/identify-my-cat" learner = from_pretrained_fastai(repo_id) ``` It works 👇! ```py _,_,probs = learner.predict(img) print(f"Probability it's a cat: {100*probs[1].item():.2f}%") Probability it's a cat: 100.00% ``` The [Hub Client documentation](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/mixins#huggingface_hub.from_pretrained_fastai) includes addtional details on `from_pretrained_fastai`. ## `Blurr` to mix fastai and Hugging Face Transformers (and share them)! > [Blurr is] a library designed for fastai developers who want to train and deploy Hugging Face transformers - [Blurr Docs](https://github.com/ohmeow/blurr). We will: 1. Train a `blurr` Learner with the [high-level Blurr API](https://github.com/ohmeow/blurr#using-the-high-level-blurr-api). It will load the `distilbert-base-uncased` model from the Hugging Face Hub and prepare a sequence classification model. 2. Share it to the Hub with the namespace `fastai/blurr_IMDB_distilbert_classification` using `push_to_hub_fastai`. 3. Load it with `from_pretrained_fastai` and try it with `learner_blurr.predict()`. Collaboration and open-source are fantastic! First, install `blurr` and train the Learner. ```bash git clone https://github.com/ohmeow/blurr.git cd blurr pip install -e ".[dev]" ``` ```python import torch import transformers from fastai.text.all import * from blurr.text.data.all import * from blurr.text.modeling.all import * path = untar_data(URLs.IMDB_SAMPLE) model_path = Path("models") imdb_df = pd.read_csv(path / "texts.csv") learn_blurr = BlearnerForSequenceClassification.from_data(imdb_df, "distilbert-base-uncased", dl_kwargs={"bs": 4}) learn_blurr.fit_one_cycle(1, lr_max=1e-3) ``` Use `push_to_hub_fastai` to share with the Hub. ```python from huggingface_hub import push_to_hub_fastai # repo_id = "YOUR_USERNAME/YOUR_LEARNER_NAME" repo_id = "fastai/blurr_IMDB_distilbert_classification" push_to_hub_fastai(learn_blurr, repo_id) ``` Use `from_pretrained_fastai` to load a `blurr` model from the Hub. ```python from huggingface_hub import from_pretrained_fastai # repo_id = "YOUR_USERNAME/YOUR_LEARNER_NAME" repo_id = "fastai/blurr_IMDB_distilbert_classification" learner_blurr = from_pretrained_fastai(repo_id) ``` Try it with a couple sentences and review their sentiment (negative or positive) with `learner_blurr.predict()`. ```python sentences = ["This integration is amazing!", "I hate this was not available before."] probs = learner_blurr.predict(sentences) print(f"Probability that sentence '{sentences[0]}' is negative is: {100*probs[0]['probs'][0]:.2f}%") print(f"Probability that sentence '{sentences[1]}' is negative is: {100*probs[1]['probs'][0]:.2f}%") ``` Again, it works! ```python Probability that sentence 'This integration is amazing!' 
is negative is: 29.46% Probability that sentence 'I hate this was not available before.' is negative is: 70.04% ``` ## What's next? Take the [fast.ai course](https://course.fast.ai/) (a new version is coming soon), follow [Jeremy Howard](https://twitter.com/jeremyphoward?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor) and [fast.ai](https://twitter.com/FastDotAI) on Twitter for updates, and start sharing your fastai models on the Hub 🤗. Or load one of the [models that are already in the Hub](https://huggingface.co/models?library=fastai&sort=downloads). 📧 Feel free to contact us via the [Hugging Face Discord](https://discord.gg/YRAq8fMnUG) and share if you have an idea for a project. We would love to hear your feedback 💖. ### Would you like to integrate your library to the Hub? This integration is made possible by the [`huggingface_hub`](https://github.com/huggingface/huggingface_hub) library. If you want to add your library to the Hub, we have a [guide](https://huggingface.co/docs/hub/models-adding-libraries) for you! Or simply tag someone from the Hugging Face team. A shout out to the Hugging Face team for all the work on this integration, in particular [@osanseviero](https://twitter.com/osanseviero) 🦙. Thank you fastlearners and hugging learners 🤗.
We Raised $100 Million for Open & Collaborative Machine Learning 🚀
The Hugging Face Team
May 9, 2022
series-c
news
https://huggingface.co/blog/series-c
# We Raised $100 Million for Open & Collaborative Machine Learning 🚀 Today we have some exciting news to share! Hugging Face has raised $100 Million in Series C funding 🔥🔥🔥 led by Lux Capital with major participations from Sequoia, Coatue and support of existing investors Addition, a_capital, SV Angel, Betaworks, AIX Ventures, Kevin Durant, Rich Kleiman from Thirty Five Ventures, Olivier Pomel (co-founder & CEO at Datadog) and more. <figure class="image table text-center m-0 w-full"> <img src="/blog/assets/65_series_c/thumbnail.jpg" alt="Series C"/> </figure> We've come a long way since we first open sourced [PyTorch BERT](https://twitter.com/Thom_Wolf/status/1068637731281088513) in 2018 and are just getting started! 🙌 Machine learning is becoming the default way to build technology. When you think about your average day, machine learning is everywhere: from your Zoom background, to searching on Google, to ordering an Uber or writing an email with auto-complete --it's all machine learning. Hugging Face is now the fastest growing community & most used platform for machine learning! With 100,000 pre-trained models & 10,000 datasets hosted on the platform for NLP, computer vision, speech, time-series, biology, reinforcement learning, chemistry and more, the [Hugging Face Hub](https://huggingface.co/models) has become the Home of Machine Learning to create, collaborate, and deploy state-of-the-art models. <figure class="image table text-center m-0 w-full"> <img src="assets/65_series_c/home-of-machine-learning.png" alt="The Home of Machine Learning"/> </figure> Over 10,000 companies are now using Hugging Face to build technology with machine learning. Their Machine Learning scientists, Data scientists and Machine Learning engineers have saved countless hours while accelerating their machine learning roadmaps with the help of our [products](https://huggingface.co/platform) and [services](https://huggingface.co/support). We want to have a positive impact on the AI field. We think the direction of more responsible AI is through openly sharing models, datasets, training procedures, evaluation metrics and working together to solve issues. We believe open source and open science bring trust, robustness, reproducibility, and continuous innovation. With this in mind, we are leading [BigScience](https://bigscience.huggingface.co/), a collaborative workshop around the study and creation of very large language models gathering more than 1,000 researchers of all backgrounds and disciplines. We are now training the [world's largest open source multilingual language model](https://twitter.com/BigScienceLLM) 🌸 ⚠️ But there’s still a huge amount of work left to do. At Hugging Face, we know that Machine Learning has some important limitations and challenges that need to be tackled now like biases, privacy, and energy consumption. With openness, transparency & collaboration, we can foster responsible & inclusive progress, understanding & accountability to mitigate these challenges. Thanks to the new funding, we’ll be doubling down on research, open-source, products and responsible democratization of AI. <figure class="image table text-center m-0 w-full"> <img src="assets/65_series_c/team.png" alt="The Home of Machine Learning"/> </figure> It's been a hell of a ride to grow from 30 to 120+ team members in the past 12 months. We were super lucky to have been joined by incredibly talented (and fun!) teammates like [Dr. 
Margaret Mitchell](https://www.bloomberg.com/news/articles/2021-08-24/fired-at-google-after-critical-work-ai-researcher-mitchell-to-join-hugging-face) and the [Gradio team](https://gradio.app/joining-huggingface/), and we don't plan to stop here. We're [hiring for every position](https://apply.workable.com/huggingface) you can think of for every level of seniority. We are a remote-friendly, decentralized organization with transparency and value-inspired decision making by default. Huge thanks to every contributor in our amazing community and team, our customers, partners, and investors for helping us reach this point. We couldn't have done it without you, and we can't wait to work together with you on what's next. Your contributions are key to helping build a better future where AI is founded on open source, open science, ethics and collaboration. --- *For press inquiries, please contact <a href="mailto:[email protected]">[email protected]</a>*
Accelerated Inference with Optimum and Transformers Pipelines
philschmid
May 10, 2022
optimum-inference
guide, community
https://huggingface.co/blog/optimum-inference
# Accelerated Inference with Optimum and Transformers Pipelines > Inference has landed in Optimum with support for Hugging Face Transformers pipelines, including text-generation using ONNX Runtime. The adoption of BERT and Transformers continues to grow. Transformer-based models are now not only achieving state-of-the-art performance in Natural Language Processing but also for Computer Vision, Speech, and Time-Series. 💬 🖼 🎤 ⏳ Companies are now moving from the experimentation and research phase to the production phase in order to use Transformer models for large-scale workloads. But by default BERT and its friends are relatively slow, big, and complex models compared to traditional Machine Learning algorithms. To solve this challenge, we created [Optimum](https://huggingface.co/blog/hardware-partners-program) – an extension of [Hugging Face Transformers](https://github.com/huggingface/transformers) to accelerate the training and inference of Transformer models like BERT. In this blog post, you'll learn: - [1. What is Optimum? An ELI5](#1-what-is-optimum-an-eli5) - [2. New Optimum inference and pipeline features](#2-new-optimum-inference-and-pipeline-features) - [3. End-to-End tutorial on accelerating RoBERTa for Question-Answering including quantization and optimization](#3-end-to-end-tutorial-on-accelerating-roberta-for-question-answering-including-quantization-and-optimization) - [4. Current Limitations](#4-current-limitations) - [5. Optimum Inference FAQ](#5-optimum-inference-faq) - [6. What’s next?](#6-whats-next) Let's get started! 🚀 ## 1. What is Optimum? An ELI5 [Hugging Face Optimum](https://github.com/huggingface/optimum) is an open-source library and an extension of [Hugging Face Transformers](https://github.com/huggingface/transformers), that provides a unified API of performance optimization tools to achieve maximum efficiency to train and run models on accelerated hardware, including toolkits for optimized performance on [Graphcore IPU](https://github.com/huggingface/optimum-graphcore) and [Habana Gaudi](https://github.com/huggingface/optimum-habana). Optimum can be used for accelerated training, quantization, graph optimization, and now inference as well with support for [transformers pipelines](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#pipelines). ## 2. New Optimum inference and pipeline features With [release](https://github.com/huggingface/optimum/releases/tag/v1.2.0) of Optimum 1.2, we are adding support for [inference](https://huggingface.co/docs/optimum/main/en/onnxruntime/modeling_ort) and [transformers pipelines](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#pipelines). This allows Optimum users to leverage the same API they are used to from transformers with the power of accelerated runtimes, like [ONNX Runtime](https://onnxruntime.ai/). **Switching from Transformers to Optimum Inference** The [Optimum Inference models](https://huggingface.co/docs/optimum/main/en/onnxruntime/modeling_ort) are API compatible with Hugging Face Transformers models. This means you can just replace your `AutoModelForXxx` class with the corresponding `ORTModelForXxx` class in Optimum. 
For example, this is how you can use a question answering model in Optimum: ```diff from transformers import AutoTokenizer, pipeline -from transformers import AutoModelForQuestionAnswering +from optimum.onnxruntime import ORTModelForQuestionAnswering -model = AutoModelForQuestionAnswering.from_pretrained("deepset/roberta-base-squad2") # pytorch checkpoint +model = ORTModelForQuestionAnswering.from_pretrained("optimum/roberta-base-squad2") # onnx checkpoint tokenizer = AutoTokenizer.from_pretrained("deepset/roberta-base-squad2") optimum_qa = pipeline("question-answering", model=model, tokenizer=tokenizer) question = "What's my name?" context = "My name is Philipp and I live in Nuremberg." pred = optimum_qa(question, context) ``` In the first release, we added [support for ONNX Runtime](https://huggingface.co/docs/optimum/main/en/onnxruntime/modeling_ort) but there is more to come! These new `ORTModelForXX` can now be used with the [transformers pipelines](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#pipelines). They are also fully integrated into the [Hugging Face Hub](https://huggingface.co/models) to push and pull optimized checkpoints from the community. In addition to this, you can use the [ORTQuantizer](https://huggingface.co/docs/optimum/main/en/onnxruntime/quantization) and [ORTOptimizer](https://huggingface.co/docs/optimum/main/en/onnxruntime/optimization) to first quantize and optimize your model and then run inference on it. Check out [End-to-End Tutorial on accelerating RoBERTa for question-answering including quantization and optimization](#3-end-to-end-tutorial-on-accelerating-roberta-for-question-answering-including-quantization-and-optimization) for more details. ## 3. End-to-End tutorial on accelerating RoBERTa for Question-Answering including quantization and optimization In this End-to-End tutorial on accelerating RoBERTa for question-answering, you will learn how to: 1. Install `Optimum` for ONNX Runtime 2. Convert a Hugging Face `Transformers` model to ONNX for inference 3. Use the `ORTOptimizer` to optimize the model 4. Use the `ORTQuantizer` to apply dynamic quantization 5. Run accelerated inference using Transformers pipelines 6. Evaluate the performance and speed Let’s get started 🚀 *This tutorial was created and run on an `m5.xlarge` AWS EC2 Instance.* ### 3.1 Install `Optimum` for Onnxruntime Our first step is to install `Optimum` with the `onnxruntime` utilities. ```bash pip install "optimum[onnxruntime]==1.2.0" ``` This will install all required packages for us including `transformers`, `torch`, and `onnxruntime`. If you are going to use a GPU you can install optimum with `pip install optimum[onnxruntime-gpu]`. ### 3.2 Convert a Hugging Face `Transformers` model to ONNX for inference** Before we can start optimizing we need to convert our vanilla `transformers` model to the `onnx` format. To do this we will use the new [ORTModelForQuestionAnswering](https://huggingface.co/docs/optimum/main/en/onnxruntime/modeling_ort#optimum.onnxruntime.ORTModelForQuestionAnswering) class calling the `from_pretrained()` method with the `from_transformers` attribute. The model we are using is the [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) a fine-tuned RoBERTa model on the SQUAD2 dataset achieving an F1 score of `82.91` and as the feature (task) `question-answering`. 
```python
from pathlib import Path
from transformers import AutoTokenizer, pipeline
from optimum.onnxruntime import ORTModelForQuestionAnswering

model_id = "deepset/roberta-base-squad2"
onnx_path = Path("onnx")
task = "question-answering"

# load vanilla transformers and convert to onnx
model = ORTModelForQuestionAnswering.from_pretrained(model_id, from_transformers=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# save onnx checkpoint and tokenizer
model.save_pretrained(onnx_path)
tokenizer.save_pretrained(onnx_path)

# test the model with a transformers pipeline, with handle_impossible_answer for squad_v2
optimum_qa = pipeline(task, model=model, tokenizer=tokenizer, handle_impossible_answer=True)
prediction = optimum_qa(question="What's my name?", context="My name is Philipp and I live in Nuremberg.")

print(prediction)
# {'score': 0.9041663408279419, 'start': 11, 'end': 18, 'answer': 'Philipp'}
```

We successfully converted our vanilla transformers model to `onnx` and used it with `transformers.pipelines` to run a first prediction. Now let's optimize it. 🏎

If you want to learn more about exporting transformers models, check out the documentation: [Export 🤗 Transformers Models](https://huggingface.co/docs/transformers/main/en/serialization)

### 3.3 Use the `ORTOptimizer` to optimize the model

After saving our onnx checkpoint to `onnx/`, we can now use the `ORTOptimizer` to apply graph optimizations, such as operator fusion and constant folding, to reduce latency and accelerate inference.

```python
from optimum.onnxruntime import ORTOptimizer
from optimum.onnxruntime.configuration import OptimizationConfig

# create ORTOptimizer and define optimization configuration
optimizer = ORTOptimizer.from_pretrained(model_id, feature=task)
optimization_config = OptimizationConfig(optimization_level=99) # enable all optimizations

# apply the optimization configuration to the model
optimizer.export(
    onnx_model_path=onnx_path / "model.onnx",
    onnx_optimized_model_output_path=onnx_path / "model-optimized.onnx",
    optimization_config=optimization_config,
)
```

To test performance, we can use the `ORTModelForQuestionAnswering` class again and provide an additional `file_name` parameter to load our optimized model. **(This also works for models available on the hub).**

```python
from optimum.onnxruntime import ORTModelForQuestionAnswering

# load optimized model
opt_model = ORTModelForQuestionAnswering.from_pretrained(onnx_path, file_name="model-optimized.onnx")

# test the optimized model with a transformers pipeline
opt_optimum_qa = pipeline(task, model=opt_model, tokenizer=tokenizer, handle_impossible_answer=True)
prediction = opt_optimum_qa(question="What's my name?", context="My name is Philipp and I live in Nuremberg.")
print(prediction)
# {'score': 0.9041663408279419, 'start': 11, 'end': 18, 'answer': 'Philipp'}
```

We will evaluate the performance changes in step [3.6 Evaluate the performance and speed](#36-evaluate-the-performance-and-speed) in detail.

### 3.4 Use the `ORTQuantizer` to apply dynamic quantization

After we have optimized our model, we can accelerate it even more by quantizing it using the `ORTQuantizer`. The `ORTQuantizer` can be used to apply dynamic quantization to decrease the model size and accelerate inference.
*We use the `avx512_vnni` since the instance is powered by an intel cascade-lake CPU supporting avx512.* ```python from optimum.onnxruntime import ORTQuantizer from optimum.onnxruntime.configuration import AutoQuantizationConfig # create ORTQuantizer and define quantization configuration quantizer = ORTQuantizer.from_pretrained(model_id, feature=task) qconfig = AutoQuantizationConfig.avx512_vnni(is_static=False, per_channel=True) # apply the quantization configuration to the model quantizer.export( onnx_model_path=onnx_path / "model-optimized.onnx", onnx_quantized_model_output_path=onnx_path / "model-quantized.onnx", quantization_config=qconfig, ) ``` We can now compare this model size as well as some latency performance ```python import os # get model file size size = os.path.getsize(onnx_path / "model.onnx")/(1024*1024) print(f"Vanilla Onnx Model file size: {size:.2f} MB") size = os.path.getsize(onnx_path / "model-quantized.onnx")/(1024*1024) print(f"Quantized Onnx Model file size: {size:.2f} MB") # Vanilla Onnx Model file size: 473.31 MB # Quantized Onnx Model file size: 291.77 MB ``` <figure class="image table text-center m-0 w-full"> <img src="assets/66_optimum_inference/model_size.png" alt="Model size comparison"/> </figure> We decreased the size of our model by almost 50% from 473MB to 291MB. To run inference we can use the `ORTModelForQuestionAnswering` class again and provide an additional `file_name` parameter to load our quantized model. **(This also works for models available on the hub).** ```python # load quantized model quantized_model = ORTModelForQuestionAnswering.from_pretrained(onnx_path, file_name="model-quantized.onnx") # test the quantized model with using transformers pipeline quantized_optimum_qa = pipeline(task, model=quantized_model, tokenizer=tokenizer, handle_impossible_answer=True) prediction = quantized_optimum_qa(question="What's my name?", context="My name is Philipp and I live in Nuremberg.") print(prediction) # {'score': 0.9246969819068909, 'start': 11, 'end': 18, 'answer': 'Philipp'} ``` Nice! The model predicted the same answer. ### 3.5 Run accelerated inference using Transformers pipelines [Optimum](https://huggingface.co/docs/optimum/main/en/pipelines#optimizing-with-ortoptimizer) has built-in support for [transformers pipelines](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#pipelines). This allows us to leverage the same API that we know from using PyTorch and TensorFlow models. We have already used this feature in steps 3.2,3.3 & 3.4 to test our converted and optimized models. At the time of writing this, we are supporting [ONNX Runtime](https://onnxruntime.ai/) with more to come in the future. An example of how to use the [transformers pipelines](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#pipelines) can be found below. ```python from transformers import AutoTokenizer, pipeline from optimum.onnxruntime import ORTModelForQuestionAnswering tokenizer = AutoTokenizer.from_pretrained(onnx_path) model = ORTModelForQuestionAnswering.from_pretrained(onnx_path) optimum_qa = pipeline("question-answering", model=model, tokenizer=tokenizer) prediction = optimum_qa(question="What's my name?", context="My name is Philipp and I live in Nuremberg.") print(prediction) # {'score': 0.9041663408279419, 'start': 11, 'end': 18, 'answer': 'Philipp'} ``` In addition to this we added a `pipelines` API to Optimum to guarantee more safety for your accelerated models. 
Meaning if you are trying to use `optimum.pipelines` with an unsupported model or task you will see an error. You can use `optimum.pipelines` as a replacement for `transformers.pipelines`. ```python from transformers import AutoTokenizer from optimum.onnxruntime import ORTModelForQuestionAnswering from optimum.pipelines import pipeline tokenizer = AutoTokenizer.from_pretrained(onnx_path) model = ORTModelForQuestionAnswering.from_pretrained(onnx_path) optimum_qa = pipeline("question-answering", model=model, tokenizer=tokenizer, handle_impossible_answer=True) prediction = optimum_qa(question="What's my name?", context="My name is Philipp and I live in Nuremberg.") print(prediction) # {'score': 0.9041663408279419, 'start': 11, 'end': 18, 'answer': 'Philipp'} ``` ### 3.6 Evaluate the performance and speed During this [End-to-End tutorial on accelerating RoBERTa for Question-Answering including quantization and optimization](#3-end-to-end-tutorial-on-accelerating-roberta-for-question-answering-including-quantization-and-optimization), we created 3 different models. A vanilla converted model, an optimized model, and a quantized model. As the last step of the tutorial, we want to take a detailed look at the performance and accuracy of our model. Applying optimization techniques, like graph optimizations or quantization not only impact performance (latency) those also might have an impact on the accuracy of the model. So accelerating your model comes with a trade-off. Let's evaluate our models. Our transformers model [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) was fine-tuned on the SQUAD2 dataset. This will be the dataset we use to evaluate our models. ```python from datasets import load_metric,load_dataset metric = load_metric("squad_v2") dataset = load_dataset("squad_v2")["validation"] print(f"length of dataset {len(dataset)}") #length of dataset 11873 ``` We can now leverage the [map](https://huggingface.co/docs/datasets/v2.1.0/en/process#map) function of [datasets](https://huggingface.co/docs/datasets/index) to iterate over the validation set of squad 2 and run prediction for each data point. 
Therefore we write a `evaluate` helper method which uses our pipelines and applies some transformation to work with the [squad v2 metric.](https://huggingface.co/metrics/squad_v2) *This can take quite a while (1.5h)* ```python def evaluate(example): default = optimum_qa(question=example["question"], context=example["context"]) optimized = opt_optimum_qa(question=example["question"], context=example["context"]) quantized = quantized_optimum_qa(question=example["question"], context=example["context"]) return { 'reference': {'id': example['id'], 'answers': example['answers']}, 'default': {'id': example['id'],'prediction_text': default['answer'], 'no_answer_probability': 0.}, 'optimized': {'id': example['id'],'prediction_text': optimized['answer'], 'no_answer_probability': 0.}, 'quantized': {'id': example['id'],'prediction_text': quantized['answer'], 'no_answer_probability': 0.}, } result = dataset.map(evaluate) # COMMENT IN to run evaluation on 2000 subset of the dataset # result = dataset.shuffle().select(range(2000)).map(evaluate) ``` Now lets compare the results ```python default_acc = metric.compute(predictions=result["default"], references=result["reference"]) optimized = metric.compute(predictions=result["optimized"], references=result["reference"]) quantized = metric.compute(predictions=result["quantized"], references=result["reference"]) print(f"vanilla model: exact={default_acc['exact']}% f1={default_acc['f1']}%") print(f"optimized model: exact={optimized['exact']}% f1={optimized['f1']}%") print(f"quantized model: exact={quantized['exact']}% f1={quantized['f1']}%") # vanilla model: exact=79.07858165585783% f1=82.14970024570314% # optimized model: exact=79.07858165585783% f1=82.14970024570314% # quantized model: exact=78.75010528088941% f1=81.82526107204629% ``` Our optimized & quantized model achieved an exact match of `78.75%` and an f1 score of `81.83%` which is `99.61%` of the original accuracy. Achieving `99%` of the original model is very good especially since we used dynamic quantization. Okay, let's test the performance (latency) of our optimized and quantized model. But first, let’s extend our context and question to a more meaningful sequence length of 128. ```python context="Hello, my name is Philipp and I live in Nuremberg, Germany. Currently I am working as a Technical Lead at Hugging Face to democratize artificial intelligence through open source and open science. In the past I designed and implemented cloud-native machine learning architectures for fin-tech and insurance companies. I found my passion for cloud concepts and machine learning 5 years ago. Since then I never stopped learning. Currently, I am focusing myself in the area NLP and how to leverage models like BERT, Roberta, T5, ViT, and GPT2 to generate business value." question="As what is Philipp working?" ``` To keep it simple, we are going to use a python loop and calculate the avg/mean latency for our vanilla model and for the optimized and quantized model. 
```python from time import perf_counter import numpy as np def measure_latency(pipe): latencies = [] # warm up for _ in range(10): _ = pipe(question=question, context=context) # Timed run for _ in range(100): start_time = perf_counter() _ = pipe(question=question, context=context) latency = perf_counter() - start_time latencies.append(latency) # Compute run statistics time_avg_ms = 1000 * np.mean(latencies) time_std_ms = 1000 * np.std(latencies) return f"Average latency (ms) - {time_avg_ms:.2f} +\- {time_std_ms:.2f}" print(f"Vanilla model {measure_latency(optimum_qa)}") print(f"Optimized & Quantized model {measure_latency(quantized_optimum_qa)}") # Vanilla model Average latency (ms) - 117.61 +\- 8.48 # Optimized & Quantized model Average latency (ms) - 64.94 +\- 3.65 ``` <figure class="image table text-center m-0 w-full"> <img src="assets/66_optimum_inference/results.png" alt="Latency & F1 results"/> </figure> We managed to accelerate our model latency from `117.61ms` to `64.94ms` or roughly 2x while keeping `99.61%` of the accuracy. Something we should keep in mind is that we used a mid-performant CPU instance with 2 physical cores. By switching to GPU or a more performant CPU instance, e.g. [ice-lake powered you can decrease the latency number down to a few milliseconds.](https://huggingface.co/blog/bert-cpu-scaling-part-2#more-efficient-ai-processing-on-latest-intel-ice-lake-cpus) ## 4. Current Limitations We just started supporting inference in [https://github.com/huggingface/optimum](https://github.com/huggingface/optimum) so we would like to share current limitations as well. All of those limitations are on the roadmap and will be resolved in the near future. - **Remote Models > 2GB:** Currently, only models smaller than 2GB can be loaded from the [Hugging Face Hub](https://hf.co/). We are working on adding support for models > 2GB / multi-file models. - **Seq2Seq tasks/model:** We don’t have support for seq2seq tasks, like summarization and models like T5 mostly due to the limitation of the single model support. But we are actively working to solve it, to provide you with the same experience you are familiar with in transformers. - **Past key values:** Generation models like GPT-2 use something called past key values which are precomputed key-value pairs of the attention blocks and can be used to speed up decoding. Currently the ORTModelForCausalLM is not using past key values. - **No cache:** Currently when loading an optimized model (*.onnx), it will not be cached locally. ## 5. Optimum Inference FAQ **Which tasks are supported?** You can find a list of all supported tasks in the [documentation](https://huggingface.co/docs/optimum/main/en/pipelines). Currently support pipelines tasks are `feature-extraction`, `text-classification`, `token-classification`, `question-answering`, `zero-shot-classification`, `text-generation` **Which models are supported?** Any model that can be exported with [transformers.onnx](https://huggingface.co/docs/transformers/serialization) and has a supported task can be used, this includes among others BERT, ALBERT, GPT2, RoBERTa, XLM-RoBERTa, DistilBERT .... **Which runtimes are supported?** Currently, ONNX Runtime is supported. We are working on adding more in the future. [Let us know](https://discuss.huggingface.co/c/optimum/59) if you are interested in a specific runtime. 
**How can I use Optimum with Transformers?**

You can find an example and instructions in our [documentation](https://huggingface.co/docs/optimum/main/en/pipelines#transformers-pipeline-usage).

**How can I use GPUs?**

To be able to use GPUs, you simply need to install `optimum[onnxruntime-gpu]`, which will install the required GPU providers and use them by default.

**How can I use a quantized and optimized model with pipelines?**

You can load the optimized or quantized model using the new [ORTModelForXXX](https://huggingface.co/docs/optimum/main/en/onnxruntime/modeling_ort) classes with the [from_pretrained](https://huggingface.co/docs/optimum/main/en/onnxruntime/modeling_ort#optimum.onnxruntime.ORTModelForQuestionAnswering.forward.example) method. You can learn more about it in our [documentation](https://huggingface.co/docs/optimum/main/en/onnxruntime/modeling_ort#optimum-inference-with-onnx-runtime).

## 6. What’s next?

What’s next for Optimum, you ask? A lot of things. We are focused on making Optimum the reference open-source toolkit to work with transformers for acceleration & optimization.

To achieve this, we will solve the current limitations, improve the documentation, create more content and examples, and push the limits for accelerating and optimizing transformers.

Some important features on the roadmap for Optimum, besides resolving the [current limitations](#4-current-limitations), are:

- Support for speech models (Wav2vec2) and speech tasks (automatic speech recognition)
- Support for vision models (ViT) and vision tasks (image classification)
- Improve performance by adding support for [OrtValue](https://onnxruntime.ai/docs/api/python/api_summary.html#ortvalue) and [IOBinding](https://onnxruntime.ai/docs/api/python/api_summary.html#iobinding)
- Easier ways to evaluate accelerated models
- Add support for other runtimes and providers like TensorRT and AWS-Neuron

---

Thanks for reading! If you are as excited as I am about accelerating Transformers, making them efficient, and scaling them to billions of requests, you should apply: [we are hiring](https://apply.workable.com/huggingface/#jobs). 🚀

If you have any questions, feel free to contact me through [Github](https://github.com/huggingface/optimum/issues) or on the [forum](https://discuss.huggingface.co/c/optimum/59). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/).
Student Ambassador Program's call for applications is open!
Violette
May 13, 2022
ambassadors
community
https://huggingface.co/blog/ambassadors
# Student Ambassador Program’s call for applications is open! As an open-source company democratizing machine learning, Hugging Face believes it is essential to **[teach](https://huggingface.co/blog/education)** open-source ML to people from all backgrounds worldwide. **We aim to teach machine learning to 5 million people by 2023**. Are you studying machine learning and/or already evangelizing your community with ML? Do you want to be a part of our ML democratization efforts and show your campus community how to build ML models with Hugging Face? **If yes, we want to support you in your journey by opening our first Student Ambassador Program 🤗 🥳** If you want to: * help your peers in their machine learning journey, * learn and use free, open-source technologies, * contribute to a thriving ecosystem, * and you're keen on fostering communities while sharing [our community values](https://huggingface2.notion.site/huggingface2/Hugging-Face-Code-of-Conduct-45eeeafa9ef44c5e888a2952619fdfa8), The Student Ambassador Program is an excellent opportunity for you. You have until June 13, 2022, to [apply](https://docs.google.com/forms/d/e/1FAIpQLScY9kTi-TjZipRFRviluRCwSjFf3CCsMbKedzO1tq2S0wtbNQ/viewform?usp=sf_link)! <br /> **What are the benefits of being part of the Program?** 🤩  Selected ambassadors will benefit from resources and support: 🎎 Network of peers with whom ambassadors can collaborate. 🧑🏻‍💻 Workshops and support from the Hugging Face team! 🤗 Insight into the latest projects, features, and more! 🎁 Merchandise and assets. ✨ Being officially recognized as a Hugging Face’s Ambassador <br /> **Eligibility Requirements for Students** - Validate your student status - Have taken at least one machine learning/data science course (online courses are considered as well) - Be enrolled in an accredited college or university - Be a user of the Hugging Face Hub and/or the Hugging Face’s libraries - Acknowledge the [Code of Conduct](https://huggingface2.notion.site/huggingface2/Hugging-Face-Code-of-Conduct-45eeeafa9ef44c5e888a2952619fdfa8). Community is at the center of the Hugging Face ecosystem. Because of that, we strictly adhere to our [Code of conduct](https://huggingface2.notion.site/huggingface2/Hugging-Face-Code-of-Conduct-45eeeafa9ef44c5e888a2952619fdfa8). If any ambassador infringes it or behaves inadequately, they will be excluded from the Program. **[Apply here](https://docs.google.com/forms/d/e/1FAIpQLScY9kTi-TjZipRFRviluRCwSjFf3CCsMbKedzO1tq2S0wtbNQ/viewform?usp=sf_link) to become an ambassador!** **Timeline:** - Deadline for the end of the [application](https://docs.google.com/forms/d/e/1FAIpQLScY9kTi-TjZipRFRviluRCwSjFf3CCsMbKedzO1tq2S0wtbNQ/viewform?usp=sf_link) is June 13. - The Program will start on June 30, 2022. - The Program will end on December 31, 2022.
Director of Machine Learning Insights [Part 2: SaaS Edition]
britneymuller
May 13, 2022
ml-director-insights-2
community, research
https://huggingface.co/blog/ml-director-insights-2
# Director of Machine Learning Insights [Part 2: SaaS Edition] _If you or your team are interested in building ML solutions faster visit [hf.co/support](https://huggingface.co/support?utm_source=article&utm_medium=blog&utm_campaign=ml_director_insights_2) today!_ 👋 Welcome to Part 2 of our Director of Machine Learning Insights [Series]. Check out [Part 1 here.](https://huggingface.co/blog/ml-director-insights) Directors of Machine Learning have a unique seat at the AI table spanning the perspective of various roles and responsibilities. Their rich knowledge of ML frameworks, engineering, architecture, real-world applications and problem-solving provides deep insights into the current state of ML. For example, one director will note how using new transformers speech technology decreased their team’s error rate by 30% and how simple thinking can help save _a lot_ of computational power. Ever wonder what directors at Salesforce or ZoomInfo currently think about the state of Machine Learning? What their biggest challenges are? And what they're most excited about? Well, you're about to find out! In this second SaaS focused installment, you’ll hear from a deep learning for healthcare textbook author who also founded a non-profit for mentoring ML talent, a chess fanatic cybersecurity expert, an entrepreneur whose business was inspired by Barbie’s need to monitor brand reputation after a lead recall, and a seasoned patent and academic paper author who enjoys watching his 4 kids make the same mistakes as his ML models. 🚀 Let’s meet some top Machine Learning Directors in SaaS and hear what they have to say about Machine Learning: <img class="mx-auto" style="float: left;" padding="5px" width="200" src="/blog/assets/67_ml_director_insights/Omar-Rahman.jpeg"></a> ### [Omar Rahman](https://www.linkedin.com/in/omar-rahman-4739713a/) - Director of Machine Learning at [Salesforce](https://www.salesforce.com/) **Background:** Omar leads a team of Machine Learning and Data Engineers in leveraging ML for defensive security purposes as part of the Cybersecurity team. Previously, Omar has led data science and machine learning engineering teams at Adobe and SAP focusing on bringing intelligent capabilities to marketing cloud and procurement applications. Omar holds a Master’s degree in Electrical Engineering from Arizona State University. **Fun Fact:** Omar loves to play chess and volunteers his free time to guide and mentor graduate students in AI. **Salesforce:** World's #1 customer relationship management software. #### **1. How has ML made a positive impact on SaaS?** ML has benefited SaaS offerings in many ways. **a. Improving automation within applications:** For example, a service ticket router using NLP (Natural Language Processing) to understand the context of the service request and routing it to the appropriate team within the organization. **b. Reduction in code complexity:** Rules-based systems tend to get unwieldy as new rules are added, thereby increasing maintenance costs. For example, An ML-based language translation system is more accurate and robust with much fewer lines of code as compared to previous rules-based systems. **c. Better forecasting results in cost savings.** Being able to forecast more accurately helps in reducing backorders in the supply chain as well as cost savings due to a reduction in storage costs. #### **2. What are the biggest ML challenges within SaaS?** a. Productizing ML applications require a lot more than having a model. 
Being able to leverage the model for serving results, detecting and adapting to changes in statistics of data, etc. creates significant overhead in deploying and maintaining ML systems. b. In most large organizations, data is often siloed and not well maintained resulting in significant time spent in consolidating data, pre-processing, data cleaning activities, etc., thereby resulting in a significant amount of time and effort needed to create ML-based applications. #### **3. What’s a common mistake you see people make trying to integrate ML into SaaS?** Not focussing enough on the business context and the problem being solved, rather trying to use the latest and greatest algorithms and newly open-sourced libraries. A lot can be achieved by simple traditional ML techniques. #### **4. What excites you most about the future of ML?** Generalized artificial intelligence capabilities, if built and managed well, have the capability to transform humanity in more ways than one can imagine. My hope is that we will see great progress in the areas of healthcare and transportation. We already see the benefits of AI in radiology resulting in significant savings in manpower thereby enabling humans to focus on more complex tasks. Self-driving cars and trucks are already transforming the transportation sector. <img class="mx-auto" style="float: left;" padding="5px" width="200" src="/blog/assets/67_ml_director_insights/Cao-Danica-Xiao.jpeg"></a> ### [Cao (Danica) Xiao](https://www.linkedin.com/in/caoxiao/) - Senior Director of Machine Learning at [Amplitude](https://amplitude.com/) **Background:** Cao (Danica) Xiao is the Senior Director and Head of Data Science and Machine Learning at Amplitude. Her team focuses on developing and deploying self-serving machine learning models and products based on multi-sourced user data to solve critical business challenges regarding digital production analytics and optimization. Besides, she is a passionate machine learning researcher with over 95+ papers published in leading CS venues. She is also a technology leader with extensive experience in machine learning roadmap creation, team building, and mentoring. Prior to Amplitude, Cao (Danica) was the Global Head of Machine Learning in the Analytics Center of Excellence of IQVIA. Before that, she was a research staff member at IBM Research and research lead at MIT-IBM Watson AI Lab. She got her Ph.D. degree in machine learning from the University of Washington, Seattle. Recently, she also co-authored a textbook on deep learning for healthcare and founded a non-profit organization for mentoring machine learning talents. **Fun Fact:** Cao is a cat-lover and is a mom to two cats: one Singapura girl and one British shorthair boy. **Amplitude:** A cloud-based product-analytics platform that helps customers build better products. #### **1. How has ML made a positive impact on SaaS?** ML plays a game-changing role in turning massive noisy machine-generated or user-generated data into answers to all kinds of business questions including personalization, prediction, recommendation, etc. It impacts a wide spectrum of industry verticals via SaaS. #### **2. What are the biggest ML challenges within SaaS?** Lack of data for ML model training that covers a broader range of industry use cases. While being a general solution for all industry verticals, still need to figure out how to handle the vertical-specific needs arising from business, or domain shift issue that affects ML model quality. #### **3. 
What’s a common mistake you see people make trying to integrate ML into a SaaS product?** Not giving users the flexibility to incorporate their business knowledge or other human factors that are critical to business success. For example, for a self-serve product recommendation, it would be great if users could control the diversity of recommended products. #### **4. What excites you most about the future of ML?** ML has seen tremendous success. It also evolves rapidly to address the current limitations (e.g., lack of data, domain shift, incorporation of domain knowledge). More ML technologies will be applied to solve business or customer needs. For example, interpretable ML for users to understand and trust the ML model outputs; counterfactual prediction for users to estimate the alternative outcome should they make a different business decision. <img class="mx-auto" style="float: left;" padding="5px" width="200" src="/blog/assets/67_ml_director_insights/Raphael-Cohen.jpeg"></a> ### [Raphael Cohen](https://www.linkedin.com/in/raphael-cohen-63a87779/) - Director of Machine Learning at [ZoomInfo](https://www.zoominfo.com/) **Background:** Raphael has a Ph.D. in the field of understanding health records and genetics, has authored 20 academic papers, and has 8 patents. Raphael is also a leader in Data Science and Research with a background in NLP, Speech, healthcare, sales, customer journeys, and IT. **Fun Fact:** Raphael has 4 kids and enjoys seeing them learn and make the same mistakes as some of his ML models. **ZoomInfo:** Intelligent sales and marketing technology backed by the world's most comprehensive business database. #### **1. How has ML made a positive impact on SaaS?** Machine Learning has facilitated the transcription of conversational data to help people unlock new insights and understandings. People can now easily view the things they talked about, summarized goals, takeaways, who spoke the most, who asked the best questions, what the next steps are, and more. This is incredibly useful for many interactions like email and video conferencing (which are more common now than ever). With [Chorus.ai](https://www.chorus.ai) we transcribe conversations as they are being recorded in real time. We use an algorithm called [Wav2Vec](https://arxiv.org/abs/1904.05862) to do this. 🤗 [Hugging Face recently released their own Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2) version created for training that we derived a lot of value from. This new generation of transformers speech technology is incredibly powerful; it has decreased our error rate by 30%. Once we transcribe a conversation we can look into the content. This is where NLP comes in, and we rely heavily on [Hugging Face Transformers](https://huggingface.co/docs/transformers/index) to detect around 20 categories of topics inside recordings and emails; for example, whether we are talking about pricing, signing a contract, or next steps. All of these topics are sent through email or discussed, and it’s now easy to extract that info without having to go back through all of your conversations. This helps make people much better at their jobs. #### **2. What are the biggest ML challenges within SaaS?** The biggest challenge is understanding when to make use of ML. What problems can we solve with ML and which shouldn’t we? A lot of times we have a breakthrough with an ML model, but a computationally lighter heuristic model is better suited to solve the problem we have. This is where a strong AI strategy comes into play.
—Understand how you want your final product to work and at what efficiency. We also have the question of how to get the ML models you’ve built into production with a low environmental/computational footprint. Everyone is struggling with this: how to keep models in production in an efficient way without burning too many resources. A great example of this was when we moved to the Wav2Vec framework, which required us to break down our conversational audio into 15-second segments that get fed into this huge model. During this, we discovered that we were feeding the model a lot of segments that were pure silence. This is common when someone doesn’t show up or one person is waiting for another to join a meeting. Just by adding another very light model to tell us when not to send the silent segments into this big complicated ML model, we are able to save a lot of computational power/energy. This is an example of where engineers can think of other, easier ways to speed up and save on model production. There’s an opportunity for more engineers to be savvier and better optimize models without burning too many resources. #### **3. What’s a common mistake you see people make trying to integrate ML into SaaS?** Is my solution the smartest solution? Is there a better way to break this down and solve it more efficiently? When we started identifying speakers we went directly with an ML method, and this wasn’t as accurate as the video conference provider data. Since then we learned that the best way to do this is to start with the metadata of who speaks from the conference provider and then overlay that with a smart embedding model. We lost precious time during this learning curve. We wouldn’t have used this large ML solution if we had stopped to understand that there are other data sources we should invest in that would help us accelerate more efficiently. Think outside the box and don’t just take something someone built and think, “I have an idea of how to make this better.” Where can we be smarter by understanding the problem better? #### **4. What excites you most about the future of ML?** I think we are in the middle of another revolution. For us, seeing our error rates drop by 30% with our Wav2Vec model was amazing. We had been working for years, only getting 1% drops each time, and then within 3 months we saw such a huge improvement, and we know that’s only the beginning. In academia, bigger and smarter things are happening. These pre-trained models are allowing us to do things we could never imagine before. This is very exciting! We are also seeing a lot of tech from NLP entering other domains like speech and vision and being able to power them. Another thing I’m really excited about is generative models! We recently worked with a company called [Bria.ai](https://bria.ai/) and they use these amazing GANs to create images. So you take a stock photo and you can turn it into a different photo by saying “remove glasses”, “add glasses” or “add hair” and it does so perfectly. The idea is that we can use this to generate data. We can take images of people from meetings not smiling and we can make them smile in order to build a data set for smile detection. This will be transformative. You can take 1 image and turn it into 100 images. This will also apply to speech generation, which could be a powerful application within the service industry. #### **Any final thoughts?** It’s challenging to put models into production. I believe data science teams need engineering embedded with them.
Engineers should be part of the AI team. This will be an important structural pivot in the future. <img class="mx-auto" style="float: left;" padding="5px" width="200" src="/blog/assets/67_ml_director_insights/Martin-Ostrovsky.jpeg"></a> ### [Martin Ostrovsky](https://www.linkedin.com/in/martinostrovsky/) Founder/CEO & Machine Learning Director at [Repustate Inc.](https://www.repustate.com/) **Background:** Martin is passionate about AI, ML, and NLP and is responsible for guiding the strategy and success of all Repustate products by leading the cross-functional team responsible for developing and improving them. He sets the strategy, roadmap, and feature definition for Repustate’s Global Text Analytics API, Sentiment Analysis, Deep Search, and Named Entity Recognition solutions. He has a Bachelor's degree in Computer Science from York University and earned his Master of Business Administration from the Schulich School of Business. **Fun Fact:** The first application of ML I used was for Barbie toys. My professor at Schulich Business School mentioned that Barbie needed to monitor their brand reputation due to a recall of the toys over concerns of excessive lead in them. Hiring people to manually go through each social post and online article seemed just so inefficient and ineffective to me. So I proposed to create a machine learning algorithm that would monitor what people thought of them from across all social media and online channels. The algorithm worked seamlessly. And that’s how I decided to name my company, Repustate - the “state” of your “repu”tation. 🤖 **Repustate:** A leading provider of text analytics services for enterprise companies. #### **1. Favorite ML business application?** My favorite ML application is cybersecurity. Cybersecurity remains the most critical part for any company (government or non-government) with regard to data. Machine Learning helps identify cyber threats, fight cyber-crime, including cyberbullying, and allows for a faster response to security breaches. ML algorithms quickly analyze the most likely vulnerabilities and potential malware and spyware applications based on user data. They can spot distortion in endpoint entry patterns and identify it as a potential data breach. #### **2. What is your biggest ML challenge?** The biggest ML challenge is audio to text transcription in the Arabic Language. There are quite a few systems that can decipher Arabic but they lack accuracy. Arabic is the official language of 26 countries and has 247 million native speakers and 29 million non-native speakers. It is a complex language with a rich vocabulary and many dialects. The sentiment mining tool needs to read data directly in Arabic if you want accurate insights from Arabic text because otherwise nuances are lost in translations. Translating text to English or any other language can completely change the meaning of words in Arabic, including even the root word. That’s why the algorithm needs to be trained on Arabic datasets and use a dedicated Arabic part-of-speech tagger. Because of these challenges, most companies fail to provide accurate Arabic audio to text translation to date. #### **3. What’s a common mistake you see people make trying to integrate ML?** The most common mistake that companies make while trying to integrate ML is insufficient data in their training datasets. Most ML models cannot distinguish between good data and insufficient data. 
Therefore, training datasets are considered relevant and used as a precedent to determine the results in most cases. This challenge isn’t limited to small- or medium-sized businesses; large enterprises have the same challenge. No matter what the ML processes are, companies need to ensure that the training datasets are reliable and exhaustive for their desired outcome by incorporating a human element into the early stages of machine learning. However, companies can create the required foundation for successful machine learning projects with a thorough review of accurate, comprehensive, and constant training data. #### **4. Where do you see ML having the biggest impact in the next 5-10 years?** In the next 5-10 years, ML will have the biggest impact on transforming the healthcare sector. **Networked hospitals and connected care:** With predictive care, command centers are all set to analyze clinical and location data to monitor supply and demand across healthcare networks in real-time. With ML, healthcare professionals will be able to spot high-risk patients more quickly and efficiently, thus removing bottlenecks in the system. You can check the spread of contractible diseases faster, take better measures to manage epidemics, identify at-risk patients more accurately, especially for genetic diseases, and more. **Better staff and patient experiences:** Predictive healthcare networks are expected to reduce wait times, improve staff workflows, and take on the ever-growing administrative burden. By learning from every patient, diagnosis, and procedure, ML is expected to create experiences that adapt to hospital staff as well as the patient. This improves health outcomes and reduces clinician shortages and burnout while enabling the system to be financially sustainable. --- 🤗 Thank you for joining us in this second installment of ML Director Insights. Stay tuned for more insights from ML Directors in Finance, Healthcare and e-Commerce. Big thanks to Omar Rahman, Cao (Danica) Xiao, Raphael Cohen, and Martin Ostrovsky for their brilliant insights and participation in this piece. We look forward to watching each of your continued successes and will be cheering you on each step of the way. 🎉 If you or your team are interested in accelerating your ML roadmap with Hugging Face Experts please visit [hf.co/support](https://huggingface.co/support?utm_source=article&utm_medium=blog&utm_campaign=ml_director_insights_2) to learn more.
Gradio 3.0 is Out!
abidlabs
May 16, 2022
gradio-blocks
community, open-source-collab
https://huggingface.co/blog/gradio-blocks
# Gradio 3.0 is Out! ### Machine Learning Demos Machine learning demos are an increasingly vital part of releasing a model. Demos allow anyone — not just ML engineers — to try out a model in the browser, give feedback on predictions, and build trust in the model if it performs well. More than 600,000 ML demos have been built with the Gradio library since its first version in 2019, and today, we are thrilled to announce **Gradio 3.0**: a ground-up redesign of the Gradio library 🥳 ### What's New in Gradio 3.0? 🔥 A complete redesign of the frontend, based on the feedback we're hearing from Gradio users: * We've switched to modern technologies (like <a href="https://svelte.dev/" target="_blank">Svelte</a>) to build the Gradio frontend. We're seeing much smaller payloads and much faster page loads as a result! * We've also embraced a much cleaner design that will allow Gradio demos to fit in visually in more settings (such as being <a href="https://discuss.huggingface.co/t/gradio-iframe-embedding/13021/9?u=abidlabs">embedded</a> in blog posts). <img class="max-w-full mx-auto my-6" style="width: 54rem" src="/blog/assets/68_gradio_blocks/lion.jpg"> * We've revamped our existing components, like `Dataframe`, to be more user-friendly (try dragging and dropping a CSV file into a Dataframe), and added new components, such as the `Gallery`, to allow you to build the right UI for your model. <img class="max-w-full mx-auto my-6" style="width: 54rem" src="/blog/assets/68_gradio_blocks/dalle.jpg"> * We've added a `TabbedInterface` class which allows you to group together related demos as multiple tabs in one web app. <img class="max-w-full mx-auto my-6" style="width: 54rem" src="/blog/assets/68_gradio_blocks/tts.png"> Check out all the components you can use [on our (redesigned) docs](http://www.gradio.app/docs) 🤗! 🔥 We've created a new low-level language called **Gradio Blocks** that lets you build complex custom web apps, right in Python: <img class="max-w-full mx-auto my-6" style="width: 54rem" src="/blog/assets/68_gradio_blocks/mindseye-lite.jpg"> Why did we create Blocks? Gradio demos are very easy to build, but what if you want more control over the layout of your demo, or more flexibility on how the data flows? For example, you might want to: * Change the layout of your demo instead of just having all of the inputs on the left and outputs on the right * Have multi-step interfaces, in which the output of one model becomes the input to the next model, or have more flexible data flows in general * Change a component's properties (for example, the choices in a Dropdown) or its visibility based on user input. The low-level Blocks API allows you to do all of this, right in Python.
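As a quick illustration of that last point, here is a minimal sketch of toggling a component's visibility from an event handler. It assumes the `gr.update` helper for returning property updates; if your version exposes a different helper, the pattern is the same:

```python
import gradio as gr

def toggle_advanced(show):
    # Return a property update instead of a value to change the component itself
    return gr.update(visible=show)

with gr.Blocks() as demo:
    show_box = gr.Checkbox(label="Show advanced options")
    advanced = gr.Textbox(label="Advanced options", visible=False)
    # Whenever the checkbox changes, update the textbox's visibility
    show_box.change(toggle_advanced, inputs=show_box, outputs=advanced)

demo.launch()
```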
Here's a more complete example of a Blocks demo that creates two simple demos and uses tabs to group them together:

```python
import numpy as np
import gradio as gr

def flip_text(x):
    return x[::-1]

def flip_image(x):
    return np.fliplr(x)

with gr.Blocks() as demo:
    gr.Markdown("Flip text or image files using this demo.")
    with gr.Tabs():
        with gr.TabItem("Flip Text"):
            text_input = gr.Textbox()
            text_output = gr.Textbox()
            # this demo runs whenever the input textbox changes
            text_input.change(flip_text, inputs=text_input, outputs=text_output)
        with gr.TabItem("Flip Image"):
            with gr.Row():
                image_input = gr.Image()
                image_output = gr.Image()
            button = gr.Button("Flip")
            # this demo runs whenever the button is clicked
            button.click(flip_image, inputs=image_input, outputs=image_output)

demo.launch()
```

Once you run `launch()`, the following demo will appear: <img class="max-w-full mx-auto my-6" style="width: 54rem" src="/blog/assets/68_gradio_blocks/flipper.png"> For a step-by-step introduction to Blocks, check out [the dedicated Blocks Guide](https://www.gradio.app/introduction_to_blocks/). ### The Gradio Blocks Party We're very excited about Gradio Blocks -- and we'd love for you to try it out -- so we are organizing a competition, **the Gradio Blocks Party** (😉), to see who can build the best demos with Blocks. By building these demos, we can make state-of-the-art machine learning accessible, not just to engineers, but to anyone who can use an Internet browser! Even if you've never used Gradio before, this is the perfect time to start, because the Blocks Party is running until the end of May. We'll be giving out 🤗 merch and other prizes at the end of the Party for demos built using Blocks. Learn more about the Blocks Party here: https://huggingface.co/spaces/Gradio-Blocks/README
Announcing the Hugging Face Fellowship Program
espejelomar
May 17, 2022
fellowship
community
https://huggingface.co/blog/fellowship
# Announcing the Hugging Face Fellowship Program The Fellowship is a network of exceptional people from different backgrounds who contribute to the Machine Learning open-source ecosystem 🚀. The goal of the program is to empower key contributors to enable them to scale their impact while inspiring others to contribute as well. ## How the Fellowship works 🙌🏻 This is Hugging Face supporting the amazing work of contributors! Being a Fellow works differently for everyone. The key question here is: ❓ **What would contributors need to have more impact? How can Hugging Face support them so they can do that project they have always wanted to do?** Fellows of all backgrounds are welcome! The progress of Machine Learning depends on grassroots contributions. Each person has a unique set of skills and knowledge that can be used to democratize the field in a variety of ways. Each Fellow achieves impact differently and that is perfect 🌈. Hugging Face supports them to continue creating and sharing the way that fits their needs the best. ## What are the benefits of being part of the Fellowship? 🤩 The benefits will be based on the interests of each individual. Some examples of how Hugging Face supports Fellows: 💾 Computing and resources 🎁 Merchandise and assets. ✨ Official recognition from Hugging Face. ## How to become a Fellow Fellows are currently nominated by members of the Hugging Face team or by another Fellow. How can prospects get noticed? The main criterion is that they have contributed to the democratization of open-source Machine Learning. How? In the ways that they prefer. Here are some examples of the first Fellows: - **María Grandury** - Created the [largest Spanish-speaking NLP community](https://somosnlp.org/) and organized a Hackathon that achieved 23 Spaces, 23 datasets, and 33 models that advanced the SOTA for Spanish ([see the Organization](https://huggingface.co/hackathon-pln-es) in the Hub). 👩🏼‍🎤 - **Manuel Romero** - Contributed [over 300 models](https://huggingface.co/mrm8488) to the Hugging Face Hub. He has trained multiple SOTA models in Spanish. 🤴🏻 - **Aritra Roy Gosthipathy**: Contributed new architectures for TensorFlow to the Transformers library, improved Keras tooling, and helped create the Keras working group (for example, see his [Vision Transformers tutorial](https://twitter.com/RisingSayak/status/1515918406171914240)). 🦹🏻 - **Vaibhav Srivastav** - Advocacy in the field of speech. He has led the [ML4Audio working group](https://github.com/Vaibhavs10/ml-with-audio) ([see the recordings](https://www.youtube.com/playlist?list=PLo2EIpI_JMQtOQK_B4G97yn1QWZ4Xi4Tu)) and paper discussion sessions. 🦹🏻 - **Bram Vanroy** - Helped many contributors and the Hugging Face team from the beginning. He has reported several [issues](https://github.com/huggingface/transformers/issues/1332) and merged [pull requests](https://github.com/huggingface/transformers/pull/1346) in the Transformers library since September 2019. 🦸🏼 - **Christopher Akiki** - Contributed to sprints, workshops, [Big Science](https://t.co/oIRne5fZYb), and cool demos! Check out some of his recent projects like his [TF-coder](https://t.co/NtTmO6ngHP) and the [income stats explorer](https://t.co/dNMO7lHAIR). 🦹🏻‍♀️ - **Ceyda Çınarel** - Contributed to many successful Hugging Face and Spaces models in various sprints. Check out her [ButterflyGAN Space](https://huggingface.co/spaces/huggan/butterfly-gan) or [search for reaction GIFs with CLIP](https://huggingface.co/spaces/flax-community/clip-reply-demo). 
👸🏻 Additionally, there are strategic areas where Hugging Face is looking for open-source contributions. These areas will be added and updated frequently on the [Fellowship Doc with specific projects](https://docs.google.com/document/d/11mh36a4fgBlj8sh3_KoP2TckuPcnD-_S_UAtsEWgs50/edit). Prospects should not hesitate to write in the #looking-for-collaborators channel in the [Hugging Face Discord](https://t.co/1n75wi976V?amp=1) if they want to undertake a project in these areas, support or be considered as a Fellow. Additionally, refer to the **Where and how can I contribute?** question below. If you are currently a student, consider applying to the [Student Ambassador Program](https://huggingface.co/blog/ambassadors). The application deadline is June 13, 2022. Hugging Face is actively working to build a culture that values ​​diversity, equity, and inclusion. Hugging Face intentionally creates a community where people feel respected and supported, regardless of who they are or where they come from. This is critical to building the future of open Machine Learning. The Fellowship will not discriminate based on race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. ## Frequently Asked Questions * **I am just starting to contribute. Can I be a fellow?** Fellows are nominated based on their open-source and community contributions. If you want to participate in the Fellowship, the best way to start is to begin contributing! If you are a student, the [Student Ambassador Program](https://huggingface.co/blog/ambassadors) might be more suitable for you (the application deadline is June 13, 2022). * **Where and how can I contribute?** It depends on your interests. Here are some ideas of areas where you can contribute, but you should work on things that get **you** excited! - Share exciting models with the community through the Hub. These can be for Computer Vision, Reinforcement Learning, and any other ML domain! - Create tutorials and projects using different open-source libraries—for example, Stable-Baselines 3, fastai, or Keras. - Organize local sprints to promote open source Machine Learning in different languages or niches. For example, the [Somos NLP Hackathon](https://huggingface.co/hackathon-pln-es) focused on Spanish speakers. The [HugGAN sprint](https://github.com/huggingface/community-events/tree/main/huggan) focused on generative models. - Translate the [Hugging Face Course](https://github.com/huggingface/course#-languages-and-translations), the [Transformers documentation](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md) or the [Educational Toolkit](https://github.com/huggingface/education-toolkit/blob/main/TRANSLATING.md). - [Doc with specific projects](https://docs.google.com/document/d/11mh36a4fgBlj8sh3_KoP2TckuPcnD-_S_UAtsEWgs50/edit) where contributions would be valuable. The Hugging Face team will frequently update the doc with new projects. Please share in the #looking-for-contributors channel on the [Hugging Face Discord](https://hf.co/join/discord) if you want to work on a particular project. * **Will I be an employee of Hugging Face?** No, the Fellowship does not mean you are an employee of Hugging Face. However, feel free to mention in any forum, including LinkedIn, that you are a Hugging Face Fellow. Hugging Face is growing and this could be a good path for a bigger relationship in the future 😎. Check the [Hugging Face job board](https://hf.co/jobs) for updated opportunities. 
* **Will I receive benefits during the Fellowship?** Yes, the benefits will depend on the particular needs and projects that each Fellow wants to undertake. * **Is there a deadline?** No. Admission to the program is ongoing and contingent on the nomination of a current Fellow or member of the Hugging Face team. Please note that being nominated may not be enough to be admitted as a Fellow.
Machine Learning Experts - Sasha Luccioni Interview
britneymuller
May 17, 2022
sasha-luccioni-interview
expert-acceleration-program, ml-experts
https://huggingface.co/blog/sasha-luccioni-interview
# Machine Learning Experts - Sasha Luccioni ## 🤗 Welcome to Machine Learning Experts - Sasha Luccioni 🚀 _If you're interested in learning how ML Experts, like Sasha, can help accelerate your ML roadmap visit: <a href="https://huggingface.co/support?utm_source=blog&utm_medium=blog&utm_campaign=ml_experts&utm_content=sasha_interview_article">hf.co/support.</a>_ Hey friends! Welcome to Machine Learning Experts. I'm your host, Britney Muller and today’s guest is [Sasha Luccioni](https://twitter.com/SashaMTL). Sasha is a Research Scientist at Hugging Face where she works on the ethical and societal impacts of Machine Learning models and datasets. Sasha is also a co-chair of the Carbon Footprint WG of the [Big Science Workshop](https://bigscience.huggingface.co), on the Board of [WiML](https://wimlworkshop.org), and a founding member of the [Climate Change AI (CCAI)](https://www.climatechange.ai) organization which catalyzes impactful work applying machine learning to the climate crisis. You’ll hear Sasha talk about how she measures the carbon footprint of an email, how she helped a local soup kitchen leverage the power of ML, and how meaning and creativity fuel her work. Very excited to introduce this brilliant episode to you! Here’s my conversation with Sasha Luccioni: <iframe width="100%" style="aspect-ratio: 16 / 9;"src="https://www.youtube.com/embed/AQRkcMr0Zk0" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> *Note: Transcription has been slightly modified/reformatted to deliver the highest-quality reading experience.* ### Thank you so much for joining us today, we are so excited to have you on! **Sasha:** I'm really excited to be here. ### Diving right in, can you speak to your background and what led you to Hugging Face? **Sasha:** Yeah, I mean if we go all the way back, I started studying linguistics. I was super into languages and both of my parents were mathematicians. But I thought, I don't want to do math, I want to do language. I started doing NLP, natural language processing, during my undergrad and got super into it. My Ph.D. was in computer science, but I maintained a linguistic angle. I started out in humanities and then got into computer science. Then after my Ph.D., I spent a couple of years working in applied AI research. My last job was in finance, and then one day I decided that I wanted to do good and socially positive AI research, so I quit my job. I decided that no amount of money was worth working on AI for AI's sake, I wanted to do more. So I spent a couple of years working with Yoshua Bengio, meanwhile working on AI for good projects, AI for climate change projects, and then I was looking for my next role. I wanted to be in a place that I trusted was doing the right things and going in the right direction. When I met Thom and Clem, I knew that Hugging Face was a place for me and that it would be exactly what I was looking for. ### Love that you wanted to something that felt meaningful! **Sasha:** Yeah, when I hear people on Sunday evening being like “Monday's tomorrow…” I'm like “Tomorrow's Monday! That's great!” And it's not that I'm a workaholic, I definitely do other stuff, and have a family and everything, but I'm literally excited to go to work to do really cool stuff. Think that's important. I know people can live without it, but I can't. ### What are you most excited about that you're working on now? 
**Sasha:** I think the [Big Science](https://bigscience.huggingface.co/) project is definitely super inspiring. For the last couple of years, I've been seeing these large language models, and I was always like, but how do they work? And where's the code, where's their data, and what's going on in there? How are they developed and who was involved? It was all like a black box thing, and I'm so happy that we're finally making it a glass box. And there are so many people involved and so many really interesting perspectives. And I'm chairing the carbon footprint working group, so we're working on different aspects of environmental impacts and above and beyond just counting CO2 emissions, but other things like the manufacturing costs. At some point, we even consider how much CO2 an email generates, things like that, so we're definitely thinking of different perspectives. Also about the data, I'm involved in a lot of the data working groups at Big Science, and it's really interesting because typically it’s been like we're gonna get the most data we can, stuff it in a language model and it's gonna be great. And it's gonna learn all this stuff, but what's actually in there, there's so much weird stuff on the internet, and things that you don't necessarily want your model to be seeing. So we're really looking into mindfulness, data curation, and multilingualism as well to make sure that it's not just a hundred percent English or 99% English. So it's such a great initiative, and it makes me excited to be involved. ### Love the idea of evaluating the carbon footprint of an email!? **Sasha:** Yeah, people did it, depending on the attachment or not, but it was just because we found this article of, I think it was a theoretical physics project and they did that, they did everything. They did video calls, travel commutes, emails, and the actual experiments as well. And they made this pie chart and it was cool because there were 37 categories in the pie chart, and we really wanted to do that. But I don't know if we want to go into that level of detail, but we were going to do a survey and ask participants on average, how many hours did they spend working on Big Science or training in language models and things like that. So we didn’t want just the number of GPU hours for training the model, but also people's implication and participation in the project. ### Can you speak a little bit more about the environmental impact of AI? **Sasha:** Yeah, it's a topic I got involved in three years ago now. The first article that came out was by [Emma Strubell and her colleagues](https://arxiv.org/pdf/1906.02243.pdf) and they essentially trained a large language model with hyperparameter tuning. So essentially looking at all the different configurations and then the figure they got was like that AI model emitted as much carbon as five cars in their lifetimes. Which includes gas and everything, like the average kind of consumption. And with my colleagues we were like, well that doesn't sound right, it can't be all models, right? And so we really went off the deep end into figuring out what has an impact on emissions, and how we can measure emissions. So first we just [created this online calculator](https://mlco2.github.io/impact/) where someone could enter what hardware they use, how long they trained for, and where on their location or a cloud computing instance. And then it would give them an estimate of the carbon involved that they admitted. 
Essentially that was our first attempt, a calculator, and then we helped create a package called code carbon which actually does that in real-time. So it's gonna run in parallel to whatever you're doing training a model and then at the end spit out an estimate of the carbon emissions. Lately we've been going further and further. I just had an article that I was a co-author on that got accepted, about how to proactively reduce emissions. For example, by anticipating times when servers are not as used as other times, like doing either time delaying or picking the right region because if you train in, I don't know, Australia, it's gonna be a coal-based grid, and so it's gonna be highly polluting. Whereas in Quebec or Montreal where I'm based, it's a hundred percent hydroelectricity. So just by making that choice, you can reduce your emissions by around a hundredfold. And so just small things like that, like above and beyond estimating, we also want people to start reducing their emissions. It's the next step. ### It’s never crossed my mind that geographically where you compute has a different emissions cost. **Sasha:** Oh yeah, and I'm so into energy grids now. Every time I go somewhere I'm like, so what's the energy coming from? How are you generating it? And so it's really interesting, there are a lot of historical factors and a lot of cultural factors. For example; France is mostly nuclear, mostly energy, and Canada has a lot of hydroelectric energy. Some places have a lot of wind or tidal, and so it's really interesting just to understand when you turn on a lamp, where that electricity is coming from and at what cost to the environment. Because when I was growing up, I would always turn off the lights, and unplug whatever but not anything more than that. It was just good best practices. You turn off the light when you're not in a room, but after that, you can really go deeper depending on where you live, your energy's coming from different sources. And there is more or less pollution, but we just don't see that we don't see how energy is produced, we just see the light and we're like oh, this is my lamp. So it's really important to start thinking about that. ### It's so easy not to think about that stuff, which I could see being a barrier for machine learning engineers who might not have that general awareness. **Sasha:** Yeah, exactly. And I mean usually, it's just by habit, right? I think there's a default option when you're using cloud instances, often it's like the closest one to you or the one with the most GPUs available or whatever. There's a default option, and people are like okay, fine, whatever and click the default. It's this nudge theory aspect. I did a master's in cognitive science and just by changing the default option, you can change people's behavior to an incredible degree. So whether you put apples or chocolate bars near the cash register, or small stuff like that. And so if the default option, all of a sudden was the low carbon one, we could save so many emissions just because people are just like okay, fine, I'm gonna train a model in Montreal, I don't care. It doesn't matter, as long as you have access to the hardware you need, you don't care where it is. But in the long run, it really adds up. ### What are some of the ways that machine learning teams and engineers could be a bit more proactive in aspects like that? **Sasha:** So I've noticed that a lot of people are really environmentally conscious. 
Like they'll bike to work or they'll eat less meat and things like that. They'll have this kind of environmental awareness, but then disassociate it from their work because we're not aware of our impact as machine learning researchers or engineers on the environment. And without sharing it necessarily, just starting to measure, for example, carbon emissions. And starting to look at what instances you're picking, if you have a choice. For example, I know that Google Cloud and AWS have started putting low carbon as a little tag so you can pick it because the information is there. And starting to make these little steps, and connecting the dots between environment and tech. These are dots that are not often connected because tech is so like the cloud, it's nice to be distributed, and you don't really see it. And so by grounding it more, you see the impact it can have on the environment. ### That's a great point. And I've listened to a couple talks and podcasts of yours, where you've mentioned how machine learning can be used to help offset the environmental impact of models. **Sasha:** Yeah, we wrote a paper a couple of years ago that was a cool experience. It's almost a hundred pages, it's called [Tackling Climate Change with Machine Learning](https://dl.acm.org/doi/10.1145/3485128). And there are like 25 authors, but there are all these different sections ranging from electricity to city planning to transportation to forestry and agriculture. We essentially have these chapters of the paper where we talk about the problems that exist. For example, renewable energy is variable in a lot of cases. So if you have solar panels, they won't produce energy at night. That's kind of like a given. And then wind power is dependent on the wind. And so a big challenge in implementing renewable energy is that you have to respond to the demand. You need to be able to give people power at night, even if you're on solar energy. And so typically you have either diesel generators or this backup system that often cancels out the environmental effect, like the emissions that you're saving, but what machine learning can do, you're essentially predicting how much energy will be needed. So based on previous days, based on the temperature, based on events that happen, you can start being like okay, well we're gonna be predicting half an hour out or an hour out or 6 hours or 24 hours. And you can start having different horizons and doing time series prediction. Then instead of powering up a diesel generator which is cool because you can just power them up, and in a couple of seconds they're up and running. What you can also do is have batteries, but batteries you need to start charging them ahead of time. So say you're six hours out, you start charging your batteries, knowing that either there's a cloud coming or that night's gonna fall, so you need that energy stored ahead. And so there are things that you could do that are proactive that can make a huge difference. And then machine learning is good at that, it’s good at predicting the future, it’s good at finding the right features, and things like that. So that's one of the go-to examples. Another one is remote sensing. So we have a lot of satellite data about the planet and see either deforestation or tracking wildfires. In a lot of cases, you can detect wildfires automatically based on satellite imagery and deploy people right away. Because they're often in remote places that you don't necessarily have people living in. 
And so there are all these different cases in which machine learning could be super useful. We have the data, we have the need, and so this paper is all about how to get involved and whatever you're good at, whatever you like doing, and how to apply machine learning and use it in the fight against climate change. ### For people listening that are interested in this effort, but perhaps work at an organization where it's not prioritized, what tips do you have to help incentivize teams to prioritize environmental impact? **Sasha:** So it's always a question of cost and benefit or time, you know, the time that you need to put in. And sometimes people just don't know that there are different tools that exist or approaches. And so if people are interested or even curious to learn about it. I think that's the first up because even when I first started thinking of what I can do, I didn't know that all these things existed. People have been working on this for like a fairly long time using different data science techniques. For example, we created a website called [climatechange.ai](http://climatechange.ai), and we have interactive summaries that you can read about how climate change can help and detect methane or whatever. And I think that just by sprinkling this knowledge can help trigger some interesting thought processes or discussions. I've participated in several round tables at companies that are not traditionally climate change-oriented but are starting to think about it. And they're like okay well we put a composting bin in the kitchen, or we did this and we did that. So then from the tech side, what can we do? It's really interesting because there are a lot of low-hanging fruits that you just need to learn about. And then it's like oh well, I can do that, I can by default use this cloud computing instance and that's not gonna cost me anything. And you need to change a parameter somewhere. ### What are some of the more common mistakes you see machine learning engineers or teams make when it comes to implementing these improvements? **Sasha:** Actually, machine learning people or AI people, in general, have this stereotype from other communities that we think AI's gonna solve everything. We just arrived and we're like oh, we're gonna do AI. And it's gonna solve all your problems no matter what you guys have been doing for 50 years, AI's gonna do it. And I haven't seen that attitude that much, but we know what AI can do, we know what machine learning can do, and we have a certain kind of worldview. It's like when you have a hammer, everything's a nail, so it’s kind of something like that. And I participated in a couple of hackathons and just like in general, people want to make stuff or do stuff to fight climate change. It's often like oh, this sounds like a great thing AI can do, and we're gonna do it without thinking of how it's gonna be used or how it's gonna be useful or how it's gonna be. Because it's like yeah, sure, AI can do all this stuff, but then at the end of the day, someone's gonna use it. For example, if you create something for scanning satellite imagery and detecting wildfire, the information that your model outputs has to be interpretable. Or you need to add that little extra step of sending a new email or whatever it is. Otherwise, we train a model, it's great, it's super accurate, but then at the end of the day, nobody's gonna use it just because it's missing a tiny little connection to the real world or the way that people will use it. 
And that's not sexy, people are like yeah, whatever, I don't even know how to write a script that sends an email. I don't either. But still, just doing that little extra step, that's so much less technologically complex than what you've done so far. Just adding that little thing will make a big difference and it can be in terms of UI, or it can be in terms of creating an app. It's like the machine learning stuff that's actually crucial for your project to be used. And I've participated in organizing workshops where people submit ideas that are super great on paper that have great accuracy rates, but then they just stagnate in paper form or article form because you still need to have that next step. I remember this one presentation of a machine learning algorithm that could reduce flight emissions of airplanes by 3 to 7% by calculating the wind speed, etc. Of course, that person should have done a startup or a product or pitched this to Boeing or whatever, otherwise it was just a paper that they published in this workshop that I was organizing, and then that was it. And scientists or engineers don't necessarily have those skills necessary to go see an airplane manufacturer with this thing, but it's frustrating. And at the end of the day, to see these great ideas, this great tech that just fizzles. ### So sad. That's such a great story though and how there are opportunities like that. **Sasha:** Yeah, and I think scientists, so often, don't necessarily want to make money, they just want to solve problems often. And so you don't necessarily even need to start a startup, you could just talk to someone or pitch this to someone, but you have to get out of your comfort zone. And the academic conferences you go to, you need to go to a networking event in the aviation industry and that's scary, right? And so there are often these barriers between disciplines that I find very sad. I actually like going to a business or random industry networking event because this is where connections can get made, that can make the biggest changes. It's not in the industry-specific conferences because everyone's talking about the same technical style that of course, they're making progress and making innovations. But then if you're the only machine learning expert in a room full of aviation experts, you can do so much. You can spark all these little sparks, and after you're gonna have people reducing emissions of flights. ### That's powerful. Wondering if you could add some more context as to why finding meaning in your work is so important? **Sasha:** Yeah, there's this concept that my mom read about in some magazine ages ago when I was a kid. It's called [Ikigai](https://en.wikipedia.org/wiki/Ikigai), and it's a Japanese concept, it's like how to find the reason or the meaning of life. It's kind of how to find your place in the universe. And it was like you need to find something that has these four elements. Like what you love doing, what you're good at, what the world needs and then what can be a career. I was always like this is my career, but she was always like no because even if you love doing this, but you can't get paid for it, then it's a hard life as well. And so she always asked me this when I was picking my courses at university or even my degree, she'll always be like okay, well is that aligned with things you love and things you're good at? And some things she'd be like yeah, but you're not good at that though. I mean you could really want to do this, but maybe this is not what you're good at. 
So I think that it's always been my driving factor in my career. And I feel that it helps feel that you're useful and you're like a positive force in the world. For example, when I was working at Morgan Stanley, I felt that there were interesting problems like I was doing really well, no questions asked, the salary was amazing. No complaints there, but there was missing this what the world needs aspect that was kind of like this itch I couldn’t scratch essentially. But given this framing, this itchy guy, I was like oh, that's what's missing in my life. And so I think that people in general, not only in machine learning, it's good to think about not only what you're good at, but also what you love doing, what motivates you, why you would get out of bed in the morning and of course having this aspect of what the world needs. And it doesn't have to be like solving world hunger, it can be on a much smaller scale or on a much more conceptual scale. For example, what I feel like we're doing at Hugging Face is really that machine learning needs more open source code, more model sharing, but not because it's gonna solve any one particular problem, because it can contribute to a spectrum of problems. Anything from reproducibility to compatibility to product, but there's like the world needs this to some extent. And so I think that really helped me converge on Hugging Face as being maybe the world doesn't necessarily need better social networks because a lot of people doing AI research in the context of social media or these big tech companies. Maybe the world doesn't necessarily need that, maybe not right now, maybe what the world needs is something different. And so this kind of four-part framing really helped me find meaning in my career and my life in general, trying to find all these four elements. ### What other examples or applications do you find and see potential meaning in AI machine learning? **Sasha:** I think that an often overlooked aspect is accessibility and I guess democratization, but like making AI easier for non-specialists. Because can you imagine if I don't know anyone like a journalist or a doctor or any profession you can think of could easily train or use an AI model. Because I feel like yeah, for sure we do AI in medicine and healthcare, but it's from a very AI machine learning angle. But if we had more doctors who were empowered to create more tools or any profession like a baker… I actually have a friend who has a bakery here in Montreal and he was like yeah, well can AI help me make better bread? And I was like probably, yeah. I'm sure that if you do some kind of experimentation and he's like oh, I can install a camera in my oven. And I was like oh yeah, you could do that I guess. I mean we were talking about it and you know, actually, bread is pretty fickle, you need the right humidity, and it actually takes a lot of experimentation and a lot of know-how from ‘boulangers’ [‘bakers’]. And the same thing for croissants, his croissants are so good and he's like yeah, well you need to really know the right butter, etc. And he was like I want to make an AI model that will help bake bread. And I was like I don't even know how to help you start, like where do you start doing that? So accessibility is such an important part. For example, the internet has become so accessible nowadays. Anyone can navigate, and initially, it was a lot less so I think that AI still has some road to travel in order to become a more accessible and democratic tool. 
### And you've talked before about the power of data and how it's not talked about enough. **Sasha:** Yeah, four or five years ago, I went to Costa Rica with my husband on a trip. We were just looking on a map and then I found this research center that was at the edge of the world. It was like being in the middle of nowhere. We had to take a car on a dirt road, then a first boat then a second boat to get there. And they're in the middle of the jungle and they essentially study the jungle and they have all these camera traps that are automatically activated, that are all over the jungle. And then every couple of days they have to hike from camera to camera to switch out the SD cards. And then they take these SD cards back to the station and then they have a laptop and they have to go through every picture it took. And of course, there are a lot of false positives because of wind or whatever, like an animal moving really fast, so there's literally maybe like 5% of actual good images. And I was like why aren't they using it to track biodiversity? And they'd no, we saw a Jaguar on blah, blah, blah at this location because they have a bunch of them. Then they would try to track if a Jaguar or another animal got killed, if it had babies, or if it looked injured; like all of these different things. And then I was like, I'm sure a part of that could be automated, at least the filtering process of taking out the images that are essentially not useful, but they had graduate students or whatever doing it. But still, there are so many examples like this domain in all areas. And just having these little tools, I'm not saying that because I think we're not there yet, completely replacing scientists in this kind of task, but just small components that are annoying and time-consuming, then machine learning can help bridge that gap. ### Wow. That is so interesting! **Sasha:** It's actually really, camera trap data is a really huge part of tracking biodiversity. It's used for birds and other animals. It's used in a lot of cases and actually, there's been Kaggle competitions for the last couple of years around camera trap data. And essentially during the year, they have camera traps in different places like Kenya has a bunch and Tanzania as well. And then at the end of the year, they have this big Kaggle competition of recognizing different species of animals. Then after that they deployed the models, and then they update them every year. So it's picking up, but there's just a lot of data, as you said. So each ecosystem is unique and so you need a model that's gonna be trained on exactly. You can't take a model from Kenya and make it work in Costa Rica, that's not gonna work. You need data, you need experts to train the model, and so there are a lot of elements that need to converge in order for you to be able to do this. Kind of like AutoTrain, Hugging Face has one, but even simpler where biodiversity researchers in Costa Rica could be like these are my images, help me figure out which ones are good quality and the types of animals that are on them. And they could just drag and drop the images like a web UI or something. And then they had this model that's like, here are the 12 images of Jaguars, this one is injured, this one has a baby, etc. ### Do you have insights for teams that are trying to solve for things like this with machine learning, but just lack the necessary data? 
**Sasha:** Yeah, I guess another anecdote, I have a lot of these anecdotes, but at some point we wanted to organize an AI for social good hackathon here in Montreal like three or three or four years ago. And then we were gonna contact all these NGOs, like soup kitchens, homeless shelters in Montreal. And we started going to these places and then we're like okay, where's your data? And they're like, “What data?” I'm like, “Well don't you keep track of how many people you have in your homeless shelter or if they come back,” and they're like “No.” And then they're like, “But on the other hand, we have these problems of either people disappearing and we don't know where they are or people staying for a long time. And then at a certain point we're supposed to not let them stand.” They had a lot of issues, for example, in the food kitchen, they had a lot of wasted food because they had trouble predicting how many people would arrive. And sometimes they're like yeah, we noticed that in October, usually there are fewer people, but we don't really have any data to support that. So we completely canceled the hackathon, then instead we did, I think we call them data literacy or digital literacy workshops. So essentially we went to these places if they were interested and we gave one or two-hour workshops about how to use a spreadsheet and figure out what they wanted to track. Because sometimes they didn't even know what kind of things they wanted to save or wanted to really have a trace of. So we did a couple of them in some places like we would come back every couple of months and check in. And then a year later we had a couple, especially a food kitchen, we actually managed to make a connection between them, and I don't remember what the company name was anymore, but they essentially did this supply chain management software thing. And so the kitchen was actually able to implement a system where they would track like we got 10 pounds of tomatoes, this many people showed up today, and this is the waste of food we have. Then a year later we were able to do a hackathon to help them reduce food waste. So that was really cool because we really saw a year and some before they had no trace of anything, they just had intuitions, which were useful, but weren't formal. And then a year later we were able to get data and integrate it into their app, and then they would have a thing saying be careful, your tomatoes are gonna go bad soon because it's been three days since you had them. Or in cases where it's like pasta, it would be six months or a year, and so we implemented a system that would actually give alerts to them. And it was super simple in terms of technology, there was not even much AI in there, but just something that would help them keep track of different categories of food. And so it was a really interesting experience because I realized that yeah, you can come in and be like we're gonna help you do whatever, but if you don't have much data, what are you gonna do? ### Exactly, that's so interesting. That's so amazing that you were able to jump in there and provide that first step; the educational piece of that puzzle to get them set up on something like that. **Sasha:** Yeah, it's been a while since I organized any hackathons. 
But I think these community involvement events are really important because they help people learn stuff like we learn that you can't just like barge in and use AI, digital literacy is so much more important and they just never really put the effort into collecting the data, even if they needed it. Or they didn't know what could be done and things like that. So taking this effort or five steps back and helping improve tech skills, generally speaking, is a really useful contribution that people don't really realize is an option, I guess. ### What industries are you most excited to see machine learning be applied to? **Sasha:** Climate change! Yeah, the environment is kind of my number one. Education has always been something that I've really been interested in and I've kind of always been waiting. I did my Ph.D. in education and AI, like how AI can be used in education. I keep waiting for it to finally hit a certain peak, but I guess there are a lot of contextual elements and stuff like that, but I think AI, machine learning, and education can be used in so many different ways. For example, what I was working on during my Ph.D. was how to help pick activities, like learning activities and exercises that are best suited for learners. Instead of giving all kids or adults or whatever the same exercise to help them focus on their weak knowledge points, weak skills, and focusing on those. So instead of like a one size fits all approach. And not replacing the teacher, but tutoring more, like okay, you learn a concept in school, and help you work on it. And you have someone figure this one out really fast and they don't need those exercises, but someone else could need more time to practice. And I think that there is so much that can be done, but I still don't see it really being used, but I think it's potentially really impactful. ### All right, so we're going to dive into rapid-fire questions. If you could go back and do one thing differently at the start of your machine learning career, what would it be? **Sasha:** I would spend more time focusing on math. So as I said, my parents are mathematicians and they would always give me extra math exercises. And they would always be like math is universal, math, math, math. So when you get force-fed things in your childhood, you don't necessarily appreciate them later, and so I was like no, languages. And so for a good part of my university studies, I was like no math, only humanities. And so I feel like if I had been a bit more open from the beginning and realized the potential of math, even in linguistics or a lot of things, I think I would've come to where I'm at much faster than spending three years being like no math, no math. I remember in grade 12, my final year of high school, my parents made me sign up for a math competition, like an Olympiad and I won it. Then I remember I had a medal and I put it on my mom and I'm like “Now leave me alone, I'm not gonna do any more math in my life.” And she was like “Yeah, yeah.” And then after that, when I was picking my Ph.D. program, she's like “Oh I see there are math classes, eh? because you're doing machine learning, eh?”, and I was like “No,” but yeah, I should have gotten over my initial distaste for math a lot quicker. ### That's so funny, and it’s interesting to hear that because I often hear people say you need to know less and less math, the more advanced some of these ML libraries and programs get. 
**Sasha:** Definitely, but I think having a good base, I'm not saying you have to be a super genius, but having this intuition. Like when I was working with Yoshua for example, he's a total math genius and just the facility of interpreting results or understanding behaviors of a machine learning model just because math is so second nature. Whereas for me I have to be like, okay, so I'm gonna write this equation with the loss function. I'm gonna try to understand the consequences, etc., and it's a bit less automatic, but it's a skill that you can develop. It's not necessarily theoretical, it could also be experimental knowledge. But just having this really solid math background helps you get there quicker, you couldn't really skip a few steps. ### That was brilliant. And you can ask your parents for help? **Sasha:** No, I refuse to ask my parents for help, no way. Plus since they're like theoretical mathematicians, they think machine learning is just for people who aren't good at math and who are lazy or whatever. And so depending on whatever area you're in, there's pure mathematicians, theoretical mathematics, applied mathematicians, there's like statisticians, and there are all these different camps. And so I remember my little brother also was thinking of going to machine learning, and my dad was like no, stay in theoretical math, that's where all the geniuses are. He was like “No, machine learning is where math goes to die,” and I was like “Dad, I’m here!” And he was like “Well I'd rather your brother stayed in something more refined,” and I was like “That's not fair.” So yeah, there are a lot of empirical aspects in machine learning, and a lot of trial and error, like you're tuning hyperparameters and you don't really know why. And so I think formal mathematicians, unless there's like a formula, they don't think ML is real or legit. ### So besides maybe a mathematical foundation, what advice would you give to someone looking to get into machine learning? **Sasha:** I think getting your hands dirty and starting out with I don't know, Jupyter Notebooks or coding exercises, things like that. Especially if you do have specific angles or problems you want to get into or just ideas in general, and so starting to try. I remember I did a summer school in machine learning when I was at the beginning of my Ph.D., I think. And then it was really interesting, but then all these examples were so disconnected. I don't remember what the data was, like cats versus dogs, I don't know, but like, why am I gonna use that? And then they're like part of the exercise was to find something that you want to use, like a classifier essentially to do. Then I remember I got pictures of flowers or something, and I got super into it. I was like yeah, see, it confuses this flower and that flower because they're kind of similar. I understand I need more images, and I got super into it and that's when it clicked in my head, it's not only this super abstract classification. Or like oh yeah, I remember we were using this data app called [MNIST](https://huggingface.co/datasets/mnist) which is super popular because it's like handwritten digits and they're really small, and the network goes fast. So people use it a lot in the beginning of machine learning courses. And I was like who cares, I don't want to classify digits, like whatever, right? And then when they let us pick our own images, all of a sudden it gets a lot more personal, interesting, and captivating. 
So I think that if people are stuck in a rut, they can really focus on things that interest them. For example, get some climate change data and start playing around with it and it really makes the process more pleasant. ### I love that, find something that you're interested in. **Sasha:** Exactly. And one of my favorite projects I worked on was classifying butterflies. We trained neural networks to classify butterflies based on pictures people took and it was so much fun. You learn so much, and then you're also solving a problem that you understand how it's gonna be used, and so it was such a great thing to be involved in. And I wish that everyone had found this kind of interest in the work they do because you really feel like you're making a difference, and it's cool, it's fun and it's interesting, and you want to do more. For example, this project was done in partnership with the Montreal insectarium, which is a museum for insects. And I kept in touch with a lot of these people and then they just renovated the insectarium and they're opening it after like three years of renovation this weekend. They also invited me and my family to the opening, and I'm so excited to go there. You could actually handle insects, they’re going to have stick bugs, and they're gonna have a big greenhouse where there are butterflies everywhere. And in that greenhouse, I mean you have to install the app, but you can take pictures of butterflies, then it uses our AI network to identify them. And I'm so excited to go there to use the app and to see my kids using it and to see this whole thing. Because of the old version, they would give you this little pamphlet with pictures of butterflies and you have to go find them. I just can't wait to see the difference between that static representation and this actual app that you could use to take pictures of butterflies. ### Oh my gosh. And how cool to see something that you created being used like that. **Sasha:** Exactly. And even if it's not like fighting climate change, I think it can make a big difference in helping people appreciate nature and biodiversity and taking things from something that's so abstract and two-dimensional to something that you can really get involved in and take pictures of. I think that makes a huge difference in terms of our perception and our connection. It helps you make a connection between yourself and nature, for example. ### So should people be afraid of AI taking over the world? **Sasha:** I think that we're really far from it. I guess it depends on what you mean by taking over the world, but I think that we should be a lot more mindful of what's going on right now. Instead of thinking to the future and being like oh terminator, whatever, and to kind of be aware of how AI's being used in our phones and our lives, and to be more cognizant of that. Technology or events in general, we have more influence on them than we think by using Alexa, for example, we're giving agency, we're giving not only material or funds to this technology. And we can also participate in it, for example, oh well I'm gonna opt out of my data being used for whatever if I am using this technology. Or I'm gonna read the fine print and figure out what it is that AI is doing in this case, and being more involved in general. So I think that people are really seeing AI as a very distant potential mega threat, but it's actually a current threat, but on a different scale. It's like a different perception. 
It's like instead of thinking of this AGI or whatever, start thinking about the small things in our lives that AI is being used for, and then engage with them. And then there's gonna be less chance that AGI is gonna take over the world if you make more mindful choices about data sharing, about consent, about using technology in certain ways. Like if you find out that your police force in your city is using facial recognition technology, you can speak up about that. That's part of your rights as a citizen in many places. And so it's by engaging yourself in the present that you can have an influence on the future. ### What are you interested in right now? It could be anything, a movie, a recipe, a podcast, etc.? **Sasha:** So during the pandemic, or the lockdowns and stuff like that, I got super into plants. I bought so many plants and now we're preparing a garden with my children. So this is the first time I've done this, we've planted seeds like tomatoes, peppers, and cucumbers. I usually just buy them at the groceries when they're already ready, but this time around I was like, no, I want to teach my kids. But I also want to learn what the whole process is. And so we planted them maybe 10 days ago and they're starting to grow. And we're watering them every day, and I think that this is also part of this process of learning more about nature and the conditions that can help plants thrive and stuff like that. Last summer we already built essentially just a square that we filled in with dirt, but this year we're trying to make it better. I want to have several levels and stuff like that, so I'm really looking forward to learning more about growing your own food. ### That is so cool. I feel like that's such a grounding activity. **Sasha:** Yeah, and it's like the polar opposite of what I do. It's great not doing something on my computer, but just going outside and having dirty fingernails. I remember being like who would want to do gardening, it's so boring, and now I'm super into gardening. I can't wait for the weekend to go gardening. ### Yeah, that's great. There's something so rewarding about creating something that you can see, touch, feel, and smell as opposed to pushing pixels. **Sasha:** Exactly, sometimes you spend a whole day grappling with this program that has bugs in it and it's not working. It's so frustrating, and then you go outside and you're like, but I have cherry tomatoes, it's all good. ### What are some of your favorite machine learning papers? **Sasha:** My current favorites are papers by [Abeba Birhane](https://twitter.com/Abebab), who's a researcher in AI ethics. It's like a completely different way of looking at things. So for example, she wrote [a paper](https://arxiv.org/abs/2106.15590) that just got accepted to [FAccT](https://facctconference.org/), the conference on Fairness, Accountability, and Transparency in AI. It was about values and how the way we do machine learning research is actually driven by the things that we value. For example, if I value a network that has high accuracy, i.e. performance, I might be less willing to focus on efficiency. So for example, I'll train a model for a long time, just because I want it to be really accurate. Or like if I want to have something new, like this novelty value, I'm not gonna read the literature and see what people have been doing for whatever 10 years, I'm gonna be like I'm gonna reinvent this. 
So she and her co-authors wrote this really interesting paper about the connection between values that are theoretical, kind of metaphysical, and the way that they're instantiated in machine learning. And I found that it was really interesting because typically we don't see it that way. Typically it's like oh, well we have to establish state-of-the-art, we have to establish accuracy and do this and that, and then cite related work, but it's like a checkbox, you just have to do it. And then they think a lot more in-depth about why we're doing this, and then what are some other ways of doing things. For example, doing a trade-off between efficiency and accuracy, like if you have a model that's slightly less accurate, but that's a lot more efficient and trains faster, that could be a good way of democratizing AI because people need less computational resources to train a model. And so there are all these different connections that they make that I find really cool. ### Wow, we'll definitely be linking to that paper as well, so people can check that out. Yeah, very cool. Anything else you'd like to share? Maybe things you're working on or that you would like people to know about? **Sasha:** Yeah, something I'm working on outside of Big Science is on evaluation and how we evaluate models. Well, kind of related to what Abeba talks about in her paper, but even from just a pure machine learning perspective, what are the different ways that we can evaluate models and compare them on different aspects, I guess. Not only accuracy but efficiency and carbon emissions and things like that. So there's a project that started a month or so ago on how to evaluate in a way that's not only performance-driven, but takes into account different aspects essentially. And I think that this has been a really overlooked aspect of machine learning, like people typically, once again, just check off like oh, you have to evaluate this and that and that, and then submit the paper. There are also these interesting trade-offs that we could be doing and things that we could be measuring that we're not. For example, if you have a data set and you have an average accuracy, is the accuracy the same in different subsets of the data set, like are there for example, patterns that you can pick up on that will help you improve your model, but also make it fairer? I guess the typical example is like image recognition, does it do the same in different… Well, the famous [Gender Shades](http://gendershades.org/) paper showed that the algorithm did better on white men than on African American women, but you could do that about anything. Not only gender and race, but you could do that for images, color or types of objects or angles. Like, is it good for images from above or images from street level? There are all these different ways of analyzing accuracy or performance that we haven't really looked at because it's typically more time-consuming. And so we want to make tools to help people delve deeper into the results and understand their models better. ### Where can people find you online? **Sasha:** I'm on [Twitter @SashaMTL](https://twitter.com/SashaMTL), and that's about it. I have a [website](https://www.sashaluccioni.com/), I don't update it enough, but Twitter I think is the best. ### Perfect. We can link to that too. Sasha, thank you so much for joining me today, this has been so insightful and amazing. I really appreciate it. **Sasha:** Thanks, Britney. ### Thank you for listening to Machine Learning Experts! 
_If you or someone you know is interested in direct access to leading ML experts like Sasha who are ready to help accelerate your ML project, go to <a href="https://huggingface.co/support?utm_source=blog&utm_medium=blog&utm_campaign=ml_experts&utm_content=sasha_interview_article">hf.co/support</a> to learn more._ ❤️
An Introduction to Q-Learning Part 1
ThomasSimonini
May 18, 2022
deep-rl-q-part1
rl
https://huggingface.co/blog/deep-rl-q-part1
# An Introduction to Q-Learning Part 1 <h2>Unit 2, part 1 of the <a href="https://github.com/huggingface/deep-rl-class">Deep Reinforcement Learning Class with Hugging Face 🤗</a></h2> ⚠️ A **new updated version of this article is available here** 👉 [https://huggingface.co/deep-rl-course/unit2/introduction](https://huggingface.co/deep-rl-course/unit2/introduction) *This article is part of the Deep Reinforcement Learning Class. A free course from beginner to expert. Check the syllabus [here.](https://huggingface.co/deep-rl-course/unit0/introduction)* <img src="assets/70_deep_rl_q_part1/thumbnail.gif" alt="Thumbnail"/> --- ⚠️ A **new updated version of this article is available here** 👉 [https://huggingface.co/deep-rl-course/unit2/introduction](https://huggingface.co/deep-rl-course/unit2/introduction) *This article is part of the Deep Reinforcement Learning Class. A free course from beginner to expert. Check the syllabus [here.](https://huggingface.co/deep-rl-course/unit0/introduction)* In the [first chapter of this class](https://huggingface.co/blog/deep-rl-intro), we learned about Reinforcement Learning (RL), the RL process, and the different methods to solve an RL problem. We also trained our first lander agent to **land correctly on the Moon 🌕 and uploaded it to the Hugging Face Hub.** So today, we're going to **dive deeper into one of the Reinforcement Learning methods: value-based methods**, and study our first RL algorithm: **Q-Learning.** We'll also **implement our first RL agent from scratch**: a Q-Learning agent, and train it in two environments: 1. Frozen-Lake-v1 (non-slippery version): where our agent will need to **go from the starting state (S) to the goal state (G)** by walking only on frozen tiles (F) and avoiding holes (H). 2. An autonomous taxi will need **to learn to navigate** a city to **transport its passengers from point A to point B.** <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/envs.gif" alt="Environments"/> </figure> This unit is divided into 2 parts: <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/two_parts.jpg" alt="Two Parts"/> </figure> In the first part, we'll **learn about the value-based methods and the difference between Monte Carlo and Temporal Difference Learning.** And in the second part, **we'll study our first RL algorithm: Q-Learning, and implement our first RL Agent.** This unit is fundamental **if you want to be able to work on Deep Q-Learning** (unit 3): the first Deep RL algorithm that was able to play Atari games and **beat the human level on some of them** (Breakout, Space Invaders…). So let's get started! - [What is RL? A short recap](#what-is-rl-a-short-recap) - [The two types of value-based methods](#the-two-types-of-value-based-methods) - [The State-Value function](#the-state-value-function) - [The Action-Value function](#the-action-value-function) - [The Bellman Equation: simplify our value estimation](#the-bellman-equation-simplify-our-value-estimation) - [Monte Carlo vs Temporal Difference Learning](#monte-carlo-vs-temporal-difference-learning) - [Monte Carlo: learning at the end of the episode](#monte-carlo-learning-at-the-end-of-the-episode) - [Temporal Difference Learning: learning at each step](#temporal-difference-learning-learning-at-each-step) ## **What is RL? A short recap** In RL, we build an agent that can **make smart decisions**. 
For instance, an agent that **learns to play a video game.** Or a trading agent that **learns to maximize its benefits** by making smart decisions on **what stocks to buy and when to sell.** <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/rl-process.jpg" alt="RL process"/> </figure> But, to make intelligent decisions, our agent will learn from the environment by **interacting with it through trial and error** and receiving rewards (positive or negative) **as unique feedback.** Its goal **is to maximize its expected cumulative reward** (because of the reward hypothesis). **The agent's decision-making process is called the policy π:** given a state, a policy will output an action or a probability distribution over actions. That is, given an observation of the environment, a policy will provide an action (or multiple probabilities for each action) that the agent should take. <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/policy.jpg" alt="Policy"/> </figure> **Our goal is to find an optimal policy π***, aka., a policy that leads to the best expected cumulative reward. And to find this optimal policy (hence solving the RL problem), there **are two main types of RL methods**: - *Policy-based methods*: **Train the policy directly** to learn which action to take given a state. - *Value-based methods*: **Train a value function** to learn **which state is more valuable** and use this value function **to take the action that leads to it.** <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/two-approaches.jpg" alt="Two RL approaches"/> </figure> And in this chapter, **we'll dive deeper into the Value-based methods.** ## **The two types of value-based methods** In value-based methods, **we learn a value function** that **maps a state to the expected value of being at that state.** <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/vbm-1.jpg" alt="Value Based Methods"/> </figure> The value of a state is the **expected discounted return** the agent can get if it **starts at that state and then acts according to our policy.** If you forgot what discounting is, you [can read this section](https://huggingface.co/blog/deep-rl-intro#rewards-and-the-discounting). > But what does it mean to act according to our policy? After all, we don't have a policy in value-based methods, since we train a value function and not a policy. > Remember that the goal of an **RL agent is to have an optimal policy π.** To find it, we learned that there are two different methods: - *Policy-based methods:* **Directly train the policy** to select what action to take given a state (or a probability distribution over actions at that state). In this case, we **don't have a value function.** <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/two-approaches-2.jpg" alt="Two RL approaches"/> </figure> The policy takes a state as input and outputs what action to take at that state (deterministic policy). And consequently, **we don't define by hand the behavior of our policy; it's the training that will define it.** - *Value-based methods:* **Indirectly, by training a value function** that outputs the value of a state or a state-action pair. 
Given this value function, our policy **will take action.** But, because we didn't train our policy, **we need to specify its behavior.** For instance, if we want a policy that, given the value function, will take actions that always lead to the biggest reward, **we'll create a Greedy Policy.** <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/two-approaches-3.jpg" alt="Two RL approaches"/> <figcaption>Given a state, our action-value function (that we train) outputs the value of each action at that state, then our greedy policy (that we defined) selects the action with the biggest state-action pair value.</figcaption> </figure> Consequently, whatever method you use to solve your problem, **you will have a policy**, but in the case of value-based methods you don't train it, your policy **is just a simple function that you specify** (for instance greedy policy) and this policy **uses the values given by the value-function to select its actions.** So the difference is: - In policy-based, **the optimal policy is found by training the policy directly.** - In value-based, **finding an optimal value function leads to having an optimal policy.** <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/link-value-policy.jpg" alt="Link between value and policy"/> </figure> In fact, most of the time, in value-based methods, you'll use **an Epsilon-Greedy Policy** that handles the exploration/exploitation trade-off; we'll talk about it when we talk about Q-Learning in the second part of this unit. So, we have two types of value-based functions: ### **The State-Value function** We write the state value function under a policy π like this: <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/state-value-function-1.jpg" alt="State value function"/> </figure> For each state, the state-value function outputs the expected return if the agent **starts at that state,** and then follow the policy forever after (for all future timesteps if you prefer). <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/state-value-function-2.jpg" alt="State value function"/> <figcaption>If we take the state with value -7: it's the expected return starting at that state and taking actions according to our policy (greedy policy), so right, right, right, down, down, right, right.</figcaption> </figure> ### **The Action-Value function** In the Action-value function, for each state and action pair, the action-value function **outputs the expected return** if the agent starts in that state and takes action, and then follows the policy forever after. 
The value of taking action a in state s under a policy π is: <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/action-state-value-function-1.jpg" alt="Action State value function"/> </figure> <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/action-state-value-function-2.jpg" alt="Action State value function"/> </figure> We see that the difference is: - In the state-value function, we calculate **the value of a state \\(S_t\\)** - In the action-value function, we calculate **the value of the state-action pair ( \\(S_t, A_t\\) ), hence the value of taking that action at that state.** <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/two-types.jpg" alt="Two types of value function"/> <figcaption> Note: We didn't fill in all the state-action pairs for the Action-value function example</figcaption> </figure> In either case, whatever value function we choose (state-value or action-value function), **the value is the expected return.** However, the problem is that this implies that **to calculate EACH value of a state or a state-action pair, we need to sum all the rewards an agent can get if it starts at that state.** This can be a tedious process, and that's **where the Bellman equation comes to help us.** ## **The Bellman Equation: simplify our value estimation** The Bellman equation **simplifies our state value or state-action value calculation.** <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/bellman.jpg" alt="Bellman equation"/> </figure> With what we have learned so far, we know that if we calculate \\(V(S_t)\\) (the value of a state), we need to calculate the return starting at that state and then follow the policy forever after. **(The policy we define in the following example is a Greedy Policy, and for simplification, we don't discount the reward.)** So to calculate \\(V(S_t)\\), we need to sum the expected rewards. Hence: <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/bellman2.jpg" alt="Bellman equation"/> <figcaption>To calculate the value of State 1: the sum of rewards if the agent started in that state and then followed the greedy policy (taking actions that lead to the best state values) for all the time steps.</figcaption> </figure> Then, to calculate \\(V(S_{t+1})\\), we need to calculate the return starting at that state \\(S_{t+1}\\). <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/bellman3.jpg" alt="Bellman equation"/> <figcaption>To calculate the value of State 2: the sum of rewards if the agent started in that state and then followed the policy for all the time steps.</figcaption> </figure> So you see, that's a pretty tedious process if you need to do it for each state value or state-action value. 
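To make this "sum all the rewards along the greedy path" idea concrete, here is a minimal sketch of the brute-force computation. This code is not from the original course: the chain of states, the rewards, and the greedy transitions are made up for illustration, and we don't discount (gamma = 1):

```python
# A minimal sketch of the "tedious" approach: for every state, replay the whole
# greedy trajectory starting from that state and sum the rewards until the episode ends.
# The environment is a made-up deterministic chain, purely for illustration.

# next_state[s] is the state the greedy policy leads to; None marks a terminal state.
next_state = {"S0": "S1", "S1": "S2", "S2": "S3", "S3": None}
# reward[s] is the reward collected when leaving state s (made-up numbers).
reward = {"S0": -1, "S1": -1, "S2": -1, "S3": 0}

def naive_state_value(state, gamma=1.0):
    """Sum the (discounted) rewards from `state` until the end of the episode."""
    total, discount = 0.0, 1.0
    while state is not None:
        total += discount * reward[state]
        discount *= gamma
        state = next_state[state]
    return total

# We have to redo the full rollout for EVERY state we care about.
values = {s: naive_state_value(s) for s in next_state}
print(values)  # {'S0': -3.0, 'S1': -2.0, 'S2': -1.0, 'S3': 0.0}
```

Every value requires replaying the rest of the episode from scratch, and that redundancy is exactly what the Bellman equation removes.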
Instead of calculating the expected return for each state or each state-action pair, **we can use the Bellman equation.** The Bellman equation is a recursive equation that works like this: instead of starting from the beginning for each state and calculating the return, we can consider the value of any state as: **the immediate reward \\(R_{t+1}\\) + the discounted value of the state that follows ( \\(gamma * V(S_{t+1})\\) ).** <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/bellman4.jpg" alt="Bellman equation"/> <figcaption>For simplification, here we don't discount, so gamma = 1.</figcaption> </figure> If we go back to our example, the value of State 1 = the expected cumulative return if we start at that state. <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/bellman2.jpg" alt="Bellman equation"/> </figure> To calculate the value of State 1: the sum of rewards if the agent started in that state and then followed the policy for all the time steps. Which is equivalent to \\(V(S_{t})\\) = Immediate reward \\(R_{t+1}\\) + Discounted value of the next state ( \\(gamma * V(S_{t+1})\\) ) <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/bellman6.jpg" alt="Bellman equation"/> </figure> For simplification, here we don't discount, so gamma = 1. - The value of \\(V(S_{t+1})\\) = Immediate reward \\(R_{t+2}\\) + Discounted value of the next state ( \\(gamma * V(S_{t+2})\\) ). - And so on. To recap, the idea of the Bellman equation is that instead of calculating each value as the sum of the expected return, **which is a long process**, we calculate the value as **the sum of the immediate reward + the discounted value of the state that follows.** ## **Monte Carlo vs Temporal Difference Learning** The last thing we need to talk about before diving into Q-Learning is the two ways of learning. Remember that an RL agent **learns by interacting with its environment.** The idea is that **given the experience and the rewards it receives, the agent will update its value function or its policy.** Monte Carlo and Temporal Difference Learning are two different **strategies for training our value function or our policy function.** Both of them **use experience to solve the RL problem.** On one hand, Monte Carlo uses **an entire episode of experience before learning.** On the other hand, Temporal Difference uses **only a step ( \\(S_t, A_t, R_{t+1}, S_{t+1}\\) ) to learn.** We'll explain both of them **using a value-based method example.** ### **Monte Carlo: learning at the end of the episode** Monte Carlo waits until the end of the episode, calculates \\(G_t\\) (the return) and uses it as **a target for updating \\(V(S_t)\\).** So it requires a **complete episode of interaction before updating our value function.** <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/monte-carlo-approach.jpg" alt="Monte Carlo"/> </figure> If we take an example: <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/MC-2.jpg" alt="Monte Carlo"/> </figure> - We always start the episode **at the same starting point.** - **The agent takes actions using the policy**. For instance, using an Epsilon Greedy Strategy, a policy that alternates between exploration (random actions) and exploitation. - We get **the reward and the next state.** - We terminate the episode if the cat eats the mouse or if the mouse moves > 10 steps. 
- At the end of the episode, **we have a list of State, Actions, Rewards, and Next States** - **The agent will sum the total rewards \\(G_t\\)** (to see how well it did). - It will then **update \\(V(s_t)\\) based on the formula** <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/MC-3.jpg" alt="Monte Carlo"/> </figure> - Then **start a new game with this new knowledge** By running more and more episodes, **the agent will learn to play better and better.** <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/MC-3p.jpg" alt="Monte Carlo"/> </figure> For instance, if we train a state-value function using Monte Carlo: - We just started to train our Value function, **so it returns 0 value for each state** - Our learning rate (lr) is 0.1 and our discount rate is 1 (= no discount) - Our mouse **explores the environment and takes random actions** <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/MC-4.jpg" alt="Monte Carlo"/> </figure> - The mouse made more than 10 steps, so the episode ends. <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/MC-4p.jpg" alt="Monte Carlo"/> </figure> - We have a list of state, action, rewards, next_state, **we need to calculate the return \\(G_t\\)** - \\(G_t = R_{t+1} + R_{t+2} + R_{t+3} + ...\\) (for simplicity we don't discount the rewards) - \\(G_t = 1 + 0 + 0 + 0 + 0 + 0 + 1 + 1 + 0 + 0\\) - \\(G_t = 3\\) - We can now update \\(V(S_0)\\): <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/MC-5.jpg" alt="Monte Carlo"/> </figure> - New \\(V(S_0) = V(S_0) + lr * [G_t - V(S_0)]\\) - New \\(V(S_0) = 0 + 0.1 * [3 - 0]\\) - New \\(V(S_0) = 0.3\\) <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/MC-5p.jpg" alt="Monte Carlo"/> </figure> 
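As a quick sanity check of the numbers above, here is a minimal sketch of that Monte Carlo update. The code is not from the original course; it just reuses the same episode rewards, a learning rate of 0.1, and no discounting:

```python
# Minimal sketch of the Monte Carlo update for V(S0), reproducing the numbers above.
V = {"S0": 0.0}           # the value function is initialized to 0 for every state
lr, gamma = 0.1, 1.0      # learning rate 0.1, no discounting

episode_rewards = [1, 0, 0, 0, 0, 0, 1, 1, 0, 0]   # rewards collected during the episode

# Return G_t = sum of the (discounted) rewards from the start of the episode
G = sum((gamma ** t) * r for t, r in enumerate(episode_rewards))   # = 3 here

# Monte Carlo update: move V(S0) toward the observed return
V["S0"] = V["S0"] + lr * (G - V["S0"])
print(G, round(V["S0"], 3))   # 3.0 0.3
```

Note that nothing is updated until the whole list of rewards is available; that is exactly what Temporal Difference Learning, described next, avoids.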
- Our mouse explores the environment and takes a random action: **going to the left** - It gets a reward \\(R_{t+1} = 1\\) since **it eats a piece of cheese** <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/TD-2p.jpg" alt="Temporal Difference"/> </figure> <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/TD-3.jpg" alt="Temporal Difference"/> </figure> We can now update \\(V(S_0)\\): New \\(V(S_0) = V(S_0) + lr * [R_1 + gamma * V(S_1) - V(S_0)]\\) New \\(V(S_0) = 0 + 0.1 * [1 + 1 * 0 - 0]\\) New \\(V(S_0) = 0.1\\) So we just updated our value function for State 0. Now we **continue to interact with this environment with our updated value function.** <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/TD-3p.jpg" alt="Temporal Difference"/> </figure> If we summarize: - With Monte Carlo, we update the value function from a complete episode, and so we **use the actual accurate discounted return of this episode.** - With TD learning, we update the value function from a step, so we replace \\(G_t\\), which we don't have, with **an estimated return called the TD target.** <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/Summary.jpg" alt="Summary"/> </figure> So now, before diving into Q-Learning, let's summarize what we just learned: We have two types of value-based functions: - State-Value function: outputs the expected return if **the agent starts at a given state and acts according to the policy forever after.** - Action-Value function: outputs the expected return if **the agent starts in a given state, takes a given action at that state** and then acts according to the policy forever after. - In value-based methods, **we define the policy by hand** because we don't train it; we train a value function. The idea is that if we have an optimal value function, we **will have an optimal policy.** There are two types of methods for learning this value function: - With *the Monte Carlo method*, we update the value function from a complete episode, and so we **use the actual accurate discounted return of this episode.** - With *the TD Learning method,* we update the value function from a step, so we replace \\(G_t\\), which we don't have, with **an estimated return called the TD target.** <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/summary-learning-mtds.jpg" alt="Summary"/> </figure> --- So that's all for today. Congrats on finishing this first part of the chapter! There was a lot of information. **It's normal if you still feel confused by all these elements**. This was the same for me and for everyone who has studied RL. **Take time to really grasp the material before continuing**. And since the best way to learn and avoid the illusion of competence is **to test yourself**, we wrote a quiz to help you find where **you need to reinforce your study**. Check your knowledge here 👉 https://github.com/huggingface/deep-rl-class/blob/main/unit2/quiz1.md <a href="https://huggingface.co/blog/deep-rl-q-part2">In the second part, we'll study our first RL algorithm: Q-Learning</a>, and implement our first RL Agent in two environments: 1. Frozen-Lake-v1 (non-slippery version): where our agent will need to **go from the starting state (S) to the goal state (G)** by walking only on frozen tiles (F) and avoiding holes (H). 2. 
An autonomous taxi will need **to learn to navigate** a city to **transport its passengers from point A to point B.** <figure class="image table text-center m-0 w-full"> <img src="assets/70_deep_rl_q_part1/envs.gif" alt="Environments"/> </figure> And don't forget to share with your friends who want to learn 🤗 ! Finally, we want **to improve and update the course iteratively with your feedback**. If you have some, please fill this form 👉 https://forms.gle/3HgA7bEHwAmmLfwh9 ### Keep learning, stay awesome,
Putting ethical principles at the core of research lifecycle
SaulLu
May 19, 2022
ethical-charter-multimodal
research, nlp, audio, cv
https://huggingface.co/blog/ethical-charter-multimodal
# Putting ethical principles at the core of the research lifecycle ## Ethical charter - Multimodal project ## Purpose of the ethical charter It has been well documented that machine learning research and applications can potentially lead to "data privacy issues, algorithmic biases, automation risks and malicious uses" (NeurIPS 2021 [ethics guidelines](https://nips.cc/public/EthicsGuidelines)). The purpose of this short document is to formalize the ethical principles that we (the multimodal learning group at Hugging Face) adopt for the project we are pursuing. By defining these ethical principles at the beginning of the project, we make them core to our machine learning lifecycle. By being transparent about the decisions we're making in the project, who is working on which aspects of the system, and how the team can be contacted, we hope to receive feedback early enough in the process to make meaningful changes, and ground discussions about choices in an awareness of the goals we aim to achieve and the values we hope to incorporate. This document is the result of discussions led by the multimodal learning group at Hugging Face (composed of machine learning researchers and engineers), with the contributions of multiple experts in ethics operationalization, data governance, and personal privacy. ## Limitations of this ethical charter This document is a work in progress and reflects a state of reflection as of May 2022. There is no consensus nor official definition of "ethical AI" and our considerations are very likely to change over time. In case of updates, we will reflect changes directly in this document while providing the rationale for changes and tracking the history of updates [through GitHub](https://github.com/huggingface/blog/commits/main/ethical-charter-multimodal.md). This document is not intended to be a source of truth about best practices for ethical AI. We believe that even though it is imperfect, thinking about the impact of our research, the potential harms we foresee, and strategies we can take to mitigate these harms is going in the right direction for the machine learning community. Throughout the project, we will document how we operationalize the values described in this document, along with the advantages and limitations we observe in the context of the project. ## Content policy Studying the current state-of-the-art multimodal systems, we foresee several misuses of the technologies we aim at as part of this project. We provide guidelines on some of the use cases we ultimately want to prevent: - Promotion of content and activities which are detrimental in nature, such as violence, harassment, bullying, harm, hate, and all forms of discrimination. Prejudice targeted at specific identity subpopulations based on gender, race, age, ability status, LGBTQA+ orientation, religion, education, socioeconomic status, and other sensitive categories (such as sexism/misogyny, casteism, racism, ableism, transphobia, homophobia). - Violation of regulations, privacy, copyrights, human rights, cultural rights, fundamental rights, laws, and any other form of binding documents. - Generating personally identifiable information. - Generating false information without any accountability and/or with the purpose of harming and triggering others. - Incautious usage of the model in high-risk domains - such as medical, legal, finance, and immigration - that can fundamentally damage people’s lives. 
## Values for the project - **Be transparent:** We are transparent and open about the intent, sources of data, tools, and decisions. By being transparent, we expose the weak points of our work to the community and thus are responsible and can be held accountable. - **Share open and reproducible work:** Openness touches on two aspects: the processes and the results. We believe it is good research practice to share precise descriptions of the data, tools, and experimental conditions. Research artifacts, including tools and model checkpoints, must be accessible - for use within the intended scope - to all without discrimination (e.g., religion, ethnicity, sexual orientation, gender, political orientation, age, ability). We define accessibility as ensuring that our research can be easily explained to an audience beyond the machine learning research community. - **Be fair:** We define fairness as the equal treatment of all human beings. Being fair implies monitoring and mitigating unwanted biases that are based on characteristics such as race, gender, disabilities, and sexual orientation. To limit as much as possible negative outcomes, especially outcomes that impact marginalized and vulnerable groups, reviews of unfair biases - such as racism for predictive policing algorithms - should be conducted on both the data and the model outputs. - **Be self-critical:** We are aware of our imperfections and we should constantly lookout for ways to better operationalize ethical values and other responsible AI decisions. For instance, this includes better strategies for curating and filtering training data. We should not overclaim or entertain spurious discourses and hype. - **Give credit:** We should respect and acknowledge people's work through proper licensing and credit attribution. We note that some of these values can sometimes be in conflict (for instance being fair and sharing open and reproducible work, or respecting individuals’ privacy and sharing datasets), and emphasize the need to consider risks and benefits of our decisions on a case by case basis.
How Sempre Health is leveraging the Expert Acceleration Program to accelerate their ML roadmap
federicopascual
May 19, 2022
sempre-health-eap-case-study
expert-acceleration-program, case-study, case-studies
https://huggingface.co/blog/sempre-health-eap-case-study
# How Sempre Health is leveraging the Expert Acceleration Program to accelerate their ML roadmap 👋 Hello, friends! We recently sat down with [Swaraj Banerjee](https://www.linkedin.com/in/swarajbanerjee/) and [Larry Zhang](https://www.linkedin.com/in/larry-zhang-b58642a3/) from [Sempre Health](https://www.semprehealth.com/), a startup that brings behavior-based, dynamic pricing to Healthcare. They are doing some exciting work with machine learning and are leveraging our [Expert Acceleration Program](https://huggingface.co/support) to accelerate their ML roadmap. An example of our collaboration is their new NLP pipeline to automatically classify and respond inbound messages. Since deploying it to production, they have seen more than 20% of incoming messages get automatically handled by this new system 🤯 having a massive impact on their business scalability and team workflow. In this short video, Swaraj and Larry walk us through some of their machine learning work and share their experience collaborating with our team via the [Expert Acceleration Program](https://huggingface.co/support). Check it out: <iframe width="100%" style="aspect-ratio: 16 / 9;"src="https://www.youtube.com/embed/QBOTlNJUtdk" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> If you'd like to accelerate your machine learning roadmap with the help of our experts, as Swaraj and Larry did, visit [hf.co/support](https://huggingface.co/support) to learn more about our Expert Acceleration Program and request a quote. ## Transcription: ### Introduction My name is Swaraj. I'm the CTO and co-founder at Sempre Health. I'm Larry, I'm a machine learning engineer at Sempre Health. We're working on medication adherence and affordability by combining SMS engagement and discounts for filling prescriptions. ### How do you apply Machine Learning at Sempre Health? Here at Sempre Health, we receive thousands of text messages from the patients on our platform every single day. A huge portion of these messages are messages that we can actually automatically respond to. So, for example, if a patient messages us a simple _“Thank you”_, we can automatically reply with _“You're welcome”_. Or if a patient says _“Can you refill my prescription?”_, we have systems in place to automatically call their pharmacy and submit a refill request on their behalf. We're using machine learning, specifically natural language processing (NLP), to help identify which of these thousands of text messages that we see daily are ones that we can automatically handle. ### What challenges were you facing before the Expert Acceleration Program? Our rule-based system caught about 80% of our inbound text messages, but we wanted to do much better. We knew that a statistical machine learning approach would be the only way to improve our parsing. When we looked around for what tools we could leverage, we found the language models on Hugging Face would be a great place to start. Even though Larry and I have backgrounds in machine learning and NLP, we were worried that we weren't formulating our problem perfectly, using the best model or neural network architecture for our particular use case and training data. ### How did you leverage the Expert Acceleration Program? The Hugging Face team really helped us in all aspects of implementing our NLP solution for this particular problem. 
They give us really good advice on how to get both representative as well as accurate labels for our text messages. They also saved us countless hours of research time by pointing us immediately to the right models and the right methods. I can definitely say with a lot of confidence that it would've taken us a lot longer to see the results that we see today without the Expert Acceleration Program. ### What surprised you about the Expert Acceleration Program? We knew what we wanted to get out of the program; we had this very concrete problem and we knew that if we used the Hugging Face libraries correctly, we could make a tremendous impact on our product. We were pleasantly surprised that we got the help that we wanted. The people that we worked with were really sharp, met us where we were, didn't require us to do a bunch of extra work, and so it was pleasantly surprising to get exactly what we wanted out of the program. ### What was the impact of collaborating with the Hugging Face team? The most important thing about this collaboration was making a tremendous impact on our business's scalability and our operations team's workflow. We launched our production NLP pipeline several weeks ago. Since then, we've consistently seen almost 20% of incoming messages get automatically handled by our new system. These are messages that would've created a ticket for our patient operations team before. So we've reduced a lot of low-value work from our team. ### For what type of AI problems should ML teams consider the Expert Acceleration Program? Here at Sempre Health, we're a pretty small team and we're just starting to explore how we can leverage ML to better our overall patient experience. The expertise of the Hugging Face team definitely expedited our development process for this project. So we'd recommend this program to any teams that are really looking to quickly add AI pipelines to their products without a lot of the hassle and development time that normally comes with machine learning development. --- With the Expert Acceleration Program, we've put together a world-class team to help customers build better ML solutions, faster. Our experts answer questions and find solutions as needed in your machine learning journey from research to production. Visit [hf.co/support](https://huggingface.co/support) to learn more and request a quote.
An Introduction to Q-Learning Part 2
ThomasSimonini
May 20, 2022
deep-rl-q-part2
rl
https://huggingface.co/blog/deep-rl-q-part2
# An Introduction to Q-Learning Part 2/2 <h2>Unit 2, part 2 of the <a href="https://github.com/huggingface/deep-rl-class">Deep Reinforcement Learning Class with Hugging Face 🤗</a></h2> ⚠️ A **new updated version of this article is available here** 👉 [https://huggingface.co/deep-rl-course/unit2/q-learning](https://huggingface.co/deep-rl-course/unit2/q-learning) *This article is part of the Deep Reinforcement Learning Class. A free course from beginner to expert. Check the syllabus [here.](https://huggingface.co/deep-rl-course/unit0/introduction)* <img src="assets/73_deep_rl_q_part2/thumbnail.gif" alt="Thumbnail"/> --- ⚠️ A **new updated version of this article is available here** 👉 [https://huggingface.co/deep-rl-course/unit2/q-learning](https://huggingface.co/deep-rl-course/unit2/q-learning) *This article is part of the Deep Reinforcement Learning Class. A free course from beginner to expert. Check the syllabus [here.](https://huggingface.co/deep-rl-course/unit0/introduction)* [In the first part of this unit](https://huggingface.co/blog/deep-rl-q-part1), **we learned about the value-based methods and the difference between Monte Carlo and Temporal Difference Learning**. So, in the second part, we’ll **study Q-Learning**, **and implement our first RL agent from scratch**, a Q-Learning agent, and train it in two environments: 1. Frozen Lake v1 ❄️: where our agent will need to **go from the starting state (S) to the goal state (G)** by walking only on frozen tiles (F) and avoiding holes (H). 2. An autonomous taxi 🚕: where the agent will need **to learn to navigate** a city to **transport its passengers from point A to point B.** <figure class="image table text-center m-0 w-full"> <img src="assets/73_deep_rl_q_part2/envs.gif" alt="Environments"/> </figure> This unit is fundamental if you want to be able to work on Deep Q-Learning (Unit 3). So let’s get started! 🚀 - [Introducing Q-Learning](#introducing-q-learning) - [What is Q-Learning?](#what-is-q-learning) - [The Q-Learning algorithm](#the-q-learning-algorithm) - [Off-policy vs. On-policy](#off-policy-vs-on-policy) - [A Q-Learning example](#a-q-learning-example) ## **Introducing Q-Learning** ### **What is Q-Learning?** Q-Learning is an **off-policy value-based method that uses a TD approach to train its action-value function:** - *Off-policy*: we'll talk about that at the end of this chapter. - *Value-based method*: finds the optimal policy indirectly by training a value or action-value function that will tell us **the value of each state or each state-action pair.** - *Uses a TD approach:* **updates its action-value function at each step instead of at the end of the episode.** **Q-Learning is the algorithm we use to train our Q-Function**, an **action-value function** that determines the value of being at a particular state and taking a specific action at that state. 
<figure class="image table text-center m-0 w-full"> <img src="assets/73_deep_rl_q_part2/Q-function.jpg" alt="Q-function"/> <figcaption>Given a state and action, our Q Function outputs a state-action value (also called Q-value)</figcaption> </figure> The **Q comes from "the Quality" of that action at that state.** Internally, our Q-function has **a Q-table, a table where each cell corresponds to a state-action pair value.** Think of this Q-table as **the memory or cheat sheet of our Q-function.** If we take this maze example: <figure class="image table text-center m-0 w-full"> <img src="assets/73_deep_rl_q_part2/Maze-1.jpg" alt="Maze example"/> </figure> The Q-Table has just been initialized. That's why all its values are 0. This table **contains, for each state, the four state-action values.** <figure class="image table text-center m-0 w-full"> <img src="assets/73_deep_rl_q_part2/Maze-2.jpg" alt="Maze example"/> </figure> Here we see that the **state-action value of the initial state and going up is 0:** <figure class="image table text-center m-0 w-full"> <img src="assets/73_deep_rl_q_part2/Maze-3.jpg" alt="Maze example"/> </figure> Therefore, our Q-function contains a Q-table **that has the value of each state-action pair.** And given a state and action, **our Q-Function will search inside its Q-table to output the value.** <figure class="image table text-center m-0 w-full"> <img src="assets/73_deep_rl_q_part2/Q-function-2.jpg" alt="Q-function"/> <figcaption>Given a state and action pair, our Q-function will search inside its Q-table to output the state-action pair value (the Q value).</figcaption> </figure> If we recap, *Q-Learning* **is the RL algorithm that:** - Trains a *Q-Function* (an **action-value function**) which internally is a *Q-table* **that contains all the state-action pair values.** - Given a state and action, our Q-Function **will look up the corresponding value in its Q-table.** - When the training is done, **we have an optimal Q-function, which means we have an optimal Q-Table.** - And if we **have an optimal Q-function**, we **have an optimal policy** since we **know for each state what is the best action to take.** <figure class="image table text-center m-0 w-full"> <img src="assets/73_deep_rl_q_part2/link-value-policy.jpg" alt="Link value policy"/> </figure> But, in the beginning, **our Q-Table is useless since it gives arbitrary values for each state-action pair** (most of the time, we initialize the Q-Table to 0 values). But, as we'll **explore the environment and update our Q-Table, it will give us better and better approximations.** <figure class="image table text-center m-0 w-full"> <img src="assets/73_deep_rl_q_part2/Q-learning-1.jpg" alt="Q-learning"/> <figcaption>We see here that with the training, our Q-Table is better since, thanks to it, we can know the value of each state-action pair.</figcaption> </figure> So now that we understand what Q-Learning, Q-Function, and Q-Table are, **let's dive deeper into the Q-Learning algorithm**. ### **The Q-Learning algorithm** This is the Q-Learning pseudocode; let's study each part and **see how it works with a simple example before implementing it.** Don't be intimidated by it, it's simpler than it looks! We'll go over each step. 
<figure class="image table text-center m-0 w-full"> <img src="assets/73_deep_rl_q_part2/Q-learning-2.jpg" alt="Q-learning"/> </figure> **Step 1: We initialize the Q-Table** <figure class="image table text-center m-0 w-full"> <img src="assets/73_deep_rl_q_part2/Q-learning-3.jpg" alt="Q-learning"/> </figure> We need to initialize the Q-Table for each state-action pair. **Most of the time, we initialize with values of 0.** **Step 2: Choose action using Epsilon Greedy Strategy** <figure class="image table text-center m-0 w-full"> <img src="assets/73_deep_rl_q_part2/Q-learning-4.jpg" alt="Q-learning"/> </figure> Epsilon Greedy Strategy is a policy that handles the exploration/exploitation trade-off. The idea is that we define epsilon ɛ = 1.0: - *With probability 1 - ɛ*: we do **exploitation** (aka our agent selects the action with the highest state-action pair value). - With probability ɛ: **we do exploration** (trying random action). At the beginning of the training, **the probability of doing exploration will be huge since ɛ is very high, so most of the time, we'll explore.** But as the training goes on, and consequently our **Q-Table gets better and better in its estimations, we progressively reduce the epsilon value** since we will need less and less exploration and more exploitation. <figure class="image table text-center m-0 w-full"> <img src="assets/73_deep_rl_q_part2/Q-learning-5.jpg" alt="Q-learning"/> </figure> **Step 3: Perform action At, get reward Rt+1 and next state St+1** <figure class="image table text-center m-0 w-full"> <img src="assets/73_deep_rl_q_part2/Q-learning-6.jpg" alt="Q-learning"/> </figure> **Step 4: Update Q(St, At)** Remember that in TD Learning, we update our policy or value function (depending on the RL method we choose) **after one step of the interaction.** To produce our TD target, **we use the immediate reward \\(R_{t+1}\\) plus the discounted value of the best state-action pair of the next state** (we call that bootstrapping). <figure class="image table text-center m-0 w-full"> <img src="assets/73_deep_rl_q_part2/Q-learning-7.jpg" alt="Q-learning"/> </figure> Therefore, our \\(Q(S_t, A_t)\\) **update formula goes like this:** <figure class="image table text-center m-0 w-full"> <img src="assets/73_deep_rl_q_part2/Q-learning-8.jpg" alt="Q-learning"/> </figure> It means that to update our \\(Q(S_t, A_t)\\): - We need \\(S_t, A_t, R_{t+1}, S_{t+1}\\). - To update our Q-value at a given state-action pair, we use the TD target. How do we form the TD target? 1. We obtain the reward \\(R_{t+1}\\) after taking the action. 2. To get the **best next-state-action pair value**, we use a greedy policy to select the next best action. Note that this is not an epsilon-greedy policy: it will always take the action with the highest state-action value. 
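To make these four steps concrete, here is a minimal sketch of tabular Q-Learning in plain Python/NumPy. It assumes the classic gym API (where `env.step` returns `observation, reward, done, info`) and the non-slippery FrozenLake-v1 environment we'll use in the hands-on; the hyperparameter values and the epsilon decay schedule are only illustrative, not the ones from the notebook.

```python
import numpy as np
import gym

env = gym.make("FrozenLake-v1", map_name="4x4", is_slippery=False)

# Step 1: initialize the Q-Table with zeros (one row per state, one column per action)
Qtable = np.zeros((env.observation_space.n, env.action_space.n))

learning_rate = 0.1   # alpha
gamma = 0.99          # discount rate
epsilon = 1.0         # exploration probability, reduced during training

def epsilon_greedy_policy(Qtable, state, epsilon):
    # Step 2: with probability epsilon we explore, otherwise we exploit
    if np.random.uniform(0, 1) < epsilon:
        return env.action_space.sample()      # exploration: random action
    return int(np.argmax(Qtable[state]))      # exploitation: best known action

for episode in range(1000):
    state = env.reset()
    done = False
    while not done:
        action = epsilon_greedy_policy(Qtable, state, epsilon)

        # Step 3: perform the action, get the reward and the next state
        new_state, reward, done, info = env.step(action)

        # Step 4: update Q(St, At) towards the TD target
        # TD target = R(t+1) + gamma * max_a Q(S(t+1), a)  (greedy w.r.t. the next state)
        td_target = reward + gamma * np.max(Qtable[new_state])
        Qtable[state][action] += learning_rate * (td_target - Qtable[state][action])

        state = new_state

    # progressively reduce exploration as the Q-Table gets better
    epsilon = max(0.05, epsilon * 0.995)
```

Note how the action actually taken comes from the epsilon-greedy policy, while the TD target uses a greedy `max` over the next state's Q-values; keep this distinction in mind for what follows.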
Once this Q-value update is done, we start in a new state and select our action **using our epsilon-greedy policy again.** **This is why we say that Q-Learning is an off-policy algorithm.** ### **Off-policy vs On-policy** The difference is subtle: - *Off-policy*: using **a different policy for acting and updating.** For instance, with Q-Learning, the Epsilon greedy policy (acting policy) is different from the greedy policy that is **used to select the best next-state action value to update our Q-value (updating policy).** <figure class="image table text-center m-0 w-full"> <img src="assets/73_deep_rl_q_part2/off-on-1.jpg" alt="Off-on policy"/> <figcaption>Acting Policy</figcaption> </figure> Is different from the policy we use during the training part: <figure class="image table text-center m-0 w-full"> <img src="assets/73_deep_rl_q_part2/off-on-2.jpg" alt="Off-on policy"/> <figcaption>Updating policy</figcaption> </figure> - *On-policy:* using the **same policy for acting and updating.** For instance, with Sarsa, another value-based algorithm, **the Epsilon-Greedy Policy selects the next_state-action pair, not a greedy policy.** <figure class="image table text-center m-0 w-full"> <img src="assets/73_deep_rl_q_part2/off-on-3.jpg" alt="Off-on policy"/> <figcaption>Sarsa</figcaption> </figure> <figure class="image table text-center m-0 w-full"> <img src="assets/73_deep_rl_q_part2/off-on-4.jpg" alt="Off-on policy"/> </figure> ## **A Q-Learning example** To better understand Q-Learning, let's take a simple example: <figure class="image table text-center m-0 w-full"> <img src="assets/73_deep_rl_q_part2/Maze-Example-2.jpg" alt="Maze-Example"/> </figure> - You're a mouse in this tiny maze. You always **start at the same starting point.** - The goal is **to eat the big pile of cheese at the bottom right-hand corner** and avoid the poison. After all, who doesn't like cheese? - The episode ends if we eat the poison, **eat the big pile of cheese or if we spend more than five steps.** - The learning rate is 0.1 - The gamma (discount rate) is 0.99 <figure class="image table text-center m-0 w-full"> <img src="assets/73_deep_rl_q_part2/q-ex-1.jpg" alt="Maze-Example"/> </figure> The reward function goes like this: - **+0:** Going to a state with no cheese in it. - **+1:** Going to a state with a small cheese in it. - **+10:** Going to the state with the big pile of cheese. - **-10:** Going to the state with the poison and thus dying. - **+0:** if we spend more than five steps. <figure class="image table text-center m-0 w-full"> <img src="assets/73_deep_rl_q_part2/q-ex-2.jpg" alt="Maze-Example"/> </figure> To train our agent to have an optimal policy (so a policy that goes right, right, down), **we will use the Q-Learning algorithm**. **Step 1: We initialize the Q-Table** <figure class="image table text-center m-0 w-full"> <img src="assets/73_deep_rl_q_part2/Example-1.jpg" alt="Maze-Example"/> </figure> So, for now, **our Q-Table is useless**; we need **to train our Q-function using the Q-Learning algorithm.** Let's do it for 2 training timesteps: Training timestep 1: **Step 2: Choose action using Epsilon Greedy Strategy** Because epsilon is big (= 1.0), I take a random action: in this case, I go right. <figure class="image table text-center m-0 w-full"> <img src="assets/73_deep_rl_q_part2/q-ex-3.jpg" alt="Maze-Example"/> </figure> **Step 3: Perform action At, get Rt+1 and St+1** By going right, I've got a small cheese, so \\(R_{t+1} = 1\\), and I'm in a new state. 
<figure class="image table text-center m-0 w-full"> <img src="assets/73_deep_rl_q_part2/q-ex-4.jpg" alt="Maze-Example"/> </figure> **Step 4: Update \\(Q(S_t, A_t)\\)** We can now update \\(Q(S_t, A_t)\\) using our formula. <figure class="image table text-center m-0 w-full"> <img src="assets/73_deep_rl_q_part2/q-ex-5.jpg" alt="Maze-Example"/> </figure> <figure class="image table text-center m-0 w-full"> <img src="assets/73_deep_rl_q_part2/Example-4.jpg" alt="Maze-Example"/> </figure> Training timestep 2: **Step 2: Choose action using Epsilon Greedy Strategy** **I take a random action again, since epsilon is still big (0.99)** (we decay it a little bit because, as the training progresses, we want less and less exploration). I took action down. **Not a good action since it leads me to the poison.** <figure class="image table text-center m-0 w-full"> <img src="assets/73_deep_rl_q_part2/q-ex-6.jpg" alt="Maze-Example"/> </figure> **Step 3: Perform action At, get \\(R_{t+1}\\) and St+1** Because I go to the poison state, **I get \\(R_{t+1} = -10\\), and I die.** <figure class="image table text-center m-0 w-full"> <img src="assets/73_deep_rl_q_part2/q-ex-7.jpg" alt="Maze-Example"/> </figure> **Step 4: Update \\(Q(S_t, A_t)\\)** <figure class="image table text-center m-0 w-full"> <img src="assets/73_deep_rl_q_part2/q-ex-8.jpg" alt="Maze-Example"/> </figure> Because we're dead, we start a new episode. But what we see here is that **with two exploration steps, my agent became smarter.** As we continue exploring and exploiting the environment and updating Q-values using the TD target, **our Q-Table will give us better and better approximations. And thus, at the end of the training, we'll get an estimate of the optimal Q-Function.** --- Now that we **studied the theory of Q-Learning**, let's **implement it from scratch**: a Q-Learning agent that we will train in two environments: 1. *Frozen-Lake-v1* ❄️ (non-slippery version): where our agent will need to **go from the starting state (S) to the goal state (G)** by walking only on frozen tiles (F) and avoiding holes (H). 2. *An autonomous taxi* 🚕: where the agent will need **to learn to navigate** a city to **transport its passengers from point A to point B.** <figure class="image table text-center m-0 w-full"> <img src="assets/73_deep_rl_q_part2/envs.gif" alt="Environments"/> </figure> Start the tutorial here 👉 https://colab.research.google.com/github/huggingface/deep-rl-class/blob/main/unit2/unit2.ipynb The leaderboard 👉 https://huggingface.co/spaces/chrisjay/Deep-Reinforcement-Learning-Leaderboard --- Congrats on finishing this chapter! There was a lot of information. And congrats on finishing the tutorials. You’ve just implemented your first RL agent from scratch and shared it on the Hub 🥳. Implementing an algorithm from scratch when you study it **is important to understand how it works.** It's **normal if you still feel confused** by all these elements. **This was the same for me and for all people who studied RL.** Take time to really grasp the material before continuing. And since the best way to learn and avoid the illusion of competence is **to test yourself**, we wrote a quiz to help you find where **you need to reinforce your study**. 
Check your knowledge here 👉 https://github.com/huggingface/deep-rl-class/blob/main/unit2/quiz2.md It’s essential to master these elements and have solid foundations before entering the **fun part.** Don't hesitate to modify the implementation, try ways to improve it, and change environments; **the best way to learn is to try things on your own!** We published additional readings in the syllabus if you want to go deeper 👉 https://github.com/huggingface/deep-rl-class/blob/main/unit2/README.md <a href="https://huggingface.co/blog/deep-rl-dqn">In the next unit, we’re going to learn about Deep-Q-Learning.</a> And don't forget to share with your friends who want to learn 🤗 ! Finally, we want **to improve and update the course iteratively with your feedback**. If you have some, please fill in this form 👉 https://forms.gle/3HgA7bEHwAmmLfwh9 ### Keep learning, stay awesome,
Efficient Table Pre-training without Real Data: An Introduction to TAPEX
SivilTaram
May 23, 2022
tapex
research, nlp, community
https://huggingface.co/blog/tapex
# Efficient Table Pre-training without Real Data: An Introduction to TAPEX In recent years, language model pre-training has achieved great success via leveraging large-scale textual data. By employing pre-training tasks such as [masked language modeling](https://arxiv.org/abs/1810.04805), these models have demonstrated surprising performance on several downstream tasks. However, the dramatic gap between the pre-training task (e.g., language modeling) and the downstream task (e.g., table question answering) makes existing pre-training not efficient enough. In practice, we often need an *extremely large amount* of pre-training data to obtain promising improvement, even for [domain-adaptive pretraining](https://arxiv.org/abs/2004.02349). How might we design a pre-training task to close the gap, and thus accelerate pre-training? ### Overview In "[TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://openreview.net/forum?id=O50443AsCP)", we explore **using synthetic data as a proxy for real data during pre-training**, and demonstrate its effectiveness with *TAPEX (Table Pre-training via Execution)* as an example. In TAPEX, we show that table pre-training can be achieved by learning a neural SQL executor over a synthetic corpus. ![snippet](assets/74_tapex/tapex-overview.png) > Note: [Table] is a placeholder for the user-provided table in the input. As shown in the figure above, by systematically sampling *executable SQL queries and their execution outputs* over tables, TAPEX first synthesizes a synthetic and non-natural pre-training corpus. Then, it continues to pre-train a language model (e.g., [BART](https://arxiv.org/abs/1910.13461)) to output the execution results of SQL queries, which mimics the process of a neural SQL executor. ### Pre-training The following figure illustrates the pre-training process. At each step, we first take a table from the web. The example table is about the Olympic Games. Then we can sample an executable SQL query `SELECT City WHERE Country = France ORDER BY Year ASC LIMIT 1`. Through an off-the-shelf SQL executor (e.g., MySQL), we can obtain the query’s execution result `Paris`. We then feed the concatenation of the SQL query and the flattened table to the model (e.g., the BART encoder) as input, and the execution result serves as the supervision for the model (e.g., the BART decoder) as output. ![corpus](assets/74_tapex/procedure.gif) Why use programs such as SQL queries rather than natural language sentences as a source for pre-training? The greatest advantage is that the diversity and scale of programs can be systematically guaranteed, compared to uncontrollable natural language sentences. Therefore, we can easily synthesize a diverse, large-scale, and high-quality pre-training corpus by sampling SQL queries. You can try the trained neural SQL executor in 🤗 Transformers as below:

```python
from transformers import TapexTokenizer, BartForConditionalGeneration
import pandas as pd

tokenizer = TapexTokenizer.from_pretrained("microsoft/tapex-large-sql-execution")
model = BartForConditionalGeneration.from_pretrained("microsoft/tapex-large-sql-execution")

data = {
    "year": [1896, 1900, 1904, 2004, 2008, 2012],
louis", "athens", "beijing", "london"] } table = pd.DataFrame.from_dict(data) # tapex accepts uncased input since it is pre-trained on the uncased corpus query = "select year where city = beijing" encoding = tokenizer(table=table, query=query, return_tensors="pt") outputs = model.generate(**encoding) print(tokenizer.batch_decode(outputs, skip_special_tokens=True)) # ['2008'] ``` ### Fine-tuning During fine-tuning, we feed the concatenation of the natural language question and the flattened table to the model as input, the answer labeled by annotators serves as the supervision for the model as output. Want to fine-tune TAPEX by yourself? You can look at the fine-tuning script [here](https://github.com/huggingface/transformers/tree/main/examples/research_projects/tapex), which has been officially integrated into 🤗 Transformers `4.19.0`! And by now, [all available TAPEX models](https://huggingface.co/models?sort=downloads&search=microsoft%2Ftapex) have interactive widgets officially supported by Huggingface! You can try to answer some questions as below. <div class="bg-white pb-1"><div class="SVELTE_HYDRATER contents" data-props="{&quot;apiUrl&quot;:&quot;https://api-inference.huggingface.co&quot;,&quot;model&quot;:{&quot;author&quot;:&quot;microsoft&quot;,&quot;cardData&quot;:{&quot;language&quot;:&quot;en&quot;,&quot;tags&quot;:[&quot;tapex&quot;,&quot;table-question-answering&quot;],&quot;license&quot;:&quot;mit&quot;},&quot;cardError&quot;:{&quot;errors&quot;:[],&quot;warnings&quot;:[]},&quot;cardExists&quot;:true,&quot;config&quot;:{&quot;architectures&quot;:[&quot;BartForConditionalGeneration&quot;],&quot;model_type&quot;:&quot;bart&quot;},&quot;discussionsDisabled&quot;:false,&quot;id&quot;:&quot;microsoft/tapex-large-finetuned-wtq&quot;,&quot;lastModified&quot;:&quot;2022-05-05T07:01:43.000Z&quot;,&quot;pipeline_tag&quot;:&quot;table-question-answering&quot;,&quot;library_name&quot;:&quot;transformers&quot;,&quot;mask_token&quot;:&quot;<mask>&quot;,&quot;model-index&quot;:null,&quot;private&quot;:false,&quot;gated&quot;:false,&quot;pwcLink&quot;:{&quot;error&quot;:&quot;Unknown error, can't generate link to Papers With Code.&quot;},&quot;tags&quot;:[&quot;pytorch&quot;,&quot;bart&quot;,&quot;text2text-generation&quot;,&quot;en&quot;,&quot;arxiv:2107.07653&quot;,&quot;transformers&quot;,&quot;tapex&quot;,&quot;table-question-answering&quot;,&quot;license:mit&quot;,&quot;autotrain_compatible&quot;],&quot;tag_objs&quot;:[{&quot;id&quot;:&quot;table-question-answering&quot;,&quot;label&quot;:&quot;Table Question 
lg:mr-1.5" type="button"><svg class="transform rotate-90 mr-1" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" viewBox="0 0 32 32"><path d="M3 11v2h26v-2H3zm0 8v2h26v-2H3z" fill="currentColor"></path></svg> Add col</button> <button class="btn-widget flex-1 mt-2 lg:flex-none lg:ml-auto" type="button">Reset table</button></div></div> <div class="mt-2"><div class="text-gray-400 text-xs">This model can be loaded on the Inference API on-demand.</div> </div> <div class="mt-auto pt-4 flex items-center text-xs text-gray-500"><button class="flex items-center cursor-not-allowed text-gray-300" disabled=""><svg class="mr-1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32" style="transform: rotate(360deg);"><path d="M31 16l-7 7l-1.41-1.41L28.17 16l-5.58-5.59L24 9l7 7z" fill="currentColor"></path><path d="M1 16l7-7l1.41 1.41L3.83 16l5.58 5.59L8 23l-7-7z" fill="currentColor"></path><path d="M12.419 25.484L17.639 6l1.932.518L14.35 26z" fill="currentColor"></path></svg> JSON Output</button> <button class="flex items-center ml-auto"><svg class="mr-1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M22 16h2V8h-8v2h6v6z" fill="currentColor"></path><path d="M8 24h8v-2h-6v-6H8v8z" fill="currentColor"></path><path d="M26 28H6a2.002 2.002 0 0 1-2-2V6a2.002 2.002 0 0 1 2-2h20a2.002 2.002 0 0 1 2 2v20a2.002 2.002 0 0 1-2 2zM6 6v20h20.001L26 6z" fill="currentColor"></path></svg> Maximize</button></div> </div></div></div> ### Experiments We evaluate TAPEX on four benchmark datasets, including [WikiSQL (Weak)](https://huggingface.co/datasets/wikisql), [WikiTableQuestions](https://huggingface.co/datasets/wikitablequestions), [SQA](https://huggingface.co/datasets/msr_sqa) and [TabFact](https://huggingface.co/datasets/tab_fact). The first three datasets are about table question answering, while the last one is about table fact verification, both requiring joint reasoning about tables and natural language. Below are some examples from the most challenging dataset, WikiTableQuestions: | Question | Answer | |:---: |:---:| | according to the table, what is the last title that spicy horse produced? | Akaneiro: Demon Hunters | | what is the difference in runners-up from coleraine academical institution and royal school dungannon? | 20 | | what were the first and last movies greenstreet acted in? | The Maltese Falcon, Malaya | | in which olympic games did arasay thondike not finish in the top 20? | 2012 | | which broadcaster hosted 3 titles but they had only 1 episode? | Channel 4 | Experimental results demonstrate that TAPEX outperforms previous table pre-training approaches by a large margin and ⭐achieves new state-of-the-art results on all of them⭐. This includes the improvements on the weakly-supervised WikiSQL denotation accuracy to **89.6%** (+2.3% over SOTA, +3.8% over BART), the TabFact accuracy to **84.2%** (+3.2% over SOTA, +3.0% over BART), the SQA denotation accuracy to **74.5%** (+3.5% over SOTA, +15.9% over BART), and the WikiTableQuestion denotation accuracy to **57.5%** (+4.8% over SOTA, +19.5% over BART). 
To our knowledge, this is the first work to exploit pre-training via synthetic executable programs and to achieve new state-of-the-art results on various downstream tasks. ![corpus](assets/74_tapex/tapex-performance.png) ### Comparison to Previous Table Pre-training The earliest works on table pre-training, [TAPAS](https://aclanthology.org/2020.acl-main.398/) from Google Research - also [available in 🤗 Transformers](https://huggingface.co/docs/transformers/model_doc/tapas) - and [TaBERT](https://aclanthology.org/2020.acl-main.745/) from Meta AI, revealed that collecting more *domain-adaptive* data can improve the downstream performance. However, these previous works mainly employ *general-purpose* pre-training tasks, e.g., language modeling or its variants. TAPEX explores a different path by sacrificing the naturalness of the pre-trained source in order to obtain a *domain-adaptive* pre-training task, i.e. SQL execution. A graphical comparison of BERT, TAPAS/TaBERT and our TAPEX can be seen below. ![comparsion](assets/74_tapex/comparsion-tapex.png) We believe the SQL execution task is closer to the downstream table question answering task, especially from the perspective of structural reasoning capabilities. Imagine you are faced with a SQL query `SELECT City ORDER BY Year` and a natural question `Sort all cities by year`. The reasoning paths required by the SQL query and the question are similar, except that SQL is a bit more rigid than natural language. If a language model can be pre-trained to faithfully “execute” SQL queries and produce correct results, it should have a deep understanding of natural language with similar intents. ![efficiency](assets/74_tapex/tapex-efficiency.png) What about the efficiency? How efficient is such a pre-training method compared to the previous pre-training? The answer is given in the above figure: compared with the previous table pre-training method TaBERT, TAPEX yields a 2% improvement while using only 2% of the pre-training corpus, achieving a speedup of nearly **50** times! With a larger pre-training corpus (e.g., 5 million <SQL, Table, Execution Result> pairs), the performance on downstream datasets would be better. ### Conclusion In this blog, we introduce TAPEX, a table pre-training approach whose corpus is automatically synthesized via sampling SQL queries and their execution results. TAPEX addresses the data scarcity challenge in table pre-training by learning a neural SQL executor on a diverse, large-scale, and high-quality synthetic corpus. Experimental results on four downstream datasets demonstrate that TAPEX outperforms previous table pre-training approaches by a large margin, with a higher pre-training efficiency. ### Take Away What can we learn from the success of TAPEX? I suggest that, especially if you want to perform efficient continual pre-training, you may try these options: 1. Synthesize an accurate and small corpus, instead of mining a large but noisy corpus from the Internet (see the sketch after this list). 2. Simulate domain-adaptive skills via programs, instead of general-purpose language modeling via natural language sentences.
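As a toy illustration of the first take-away, here is a minimal sketch of how one could synthesize (SQL, table, execution result) pairs with nothing more than pandas and a single query template. It is not the actual TAPEX data pipeline, which samples from a much richer set of SQL templates over web tables at scale; the flattening format and the template below are simplified assumptions that only show the idea of generating supervision by executing programs.

```python
import random
import pandas as pd

table = pd.DataFrame({
    "year": [1896, 1900, 1904, 2004, 2008, 2012],
    "city": ["athens", "paris", "st. louis", "athens", "beijing", "london"],
})

def flatten(table: pd.DataFrame) -> str:
    # flatten the table into a single string (similar in spirit to the flattening TAPEX uses)
    header = "col : " + " | ".join(table.columns)
    rows = [
        f"row {i + 1} : " + " | ".join(str(v) for v in row)
        for i, row in enumerate(table.values.tolist())
    ]
    return " ".join([header] + rows)

def sample_example(table: pd.DataFrame):
    # sample a toy "select <col> where <col2> = <value>" query and execute it with pandas
    target_col, filter_col = random.sample(list(table.columns), 2)
    value = random.choice(table[filter_col].tolist())
    sql = f"select {target_col} where {filter_col} = {value}"
    result = table.loc[table[filter_col] == value, target_col].tolist()
    # model input: SQL query + flattened table; supervision: the execution result
    return sql + " " + flatten(table), ", ".join(str(r) for r in result)

for _ in range(3):
    source, target = sample_example(table)
    print(repr(target), "<-", source[:60] + "...")
```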
Introducing Pull Requests and Discussions 🥳
victor
May 25, 2022
community-update
launch
https://huggingface.co/blog/community-update
# Introducing Pull Requests and Discussions 🥳 ![Pull requests and discussions on the hub](assets/76_community_update/community-update.png) We are thrilled to announce the release of our latest collaborative features: pull requests and discussions on the Hugging Face Hub! Pull requests and discussions are available today under the [community tab](https://huggingface.co/gpt2/discussions) for all repository types: models, datasets, and Spaces. Any member of the community can create and participate in discussions and pull requests, facilitating collaborations not only within teams, but also with everyone else in the community! It's the biggest update ever done to the Hub, and we can't wait to see the community members start collaborating with it 🤩. The new "Community" tab also aligns with proposals in ethical ML throughout the years. Feedback and iterations have a central place in the development of ethical machine learning software. We really believe having it in the community's toolset will unlock new kinds of positive patterns in ML, collaborations, and progress. Some example use cases for discussions and pull requests: - Propose suggestions in model cards to improve disclosures of ethical biases. - Let users flag concerning generations of a given Space demo. - Provide a venue through which model and dataset authors can have a direct discussion with community members. - Allow others to improve your repositories! For example, users might want to provide TensorFlow weights! ## Discussions ![Discussions on the Hugging Face Hub](assets/76_community_update/new-discussion.png) [Discussions](https://huggingface.co/gpt2/discussions?type=discussion) allow community members to ask and answer questions as well as share their ideas and suggestions directly with the repository owners and the community. Anyone can create and participate in discussions in the community tab of a repository. ## Pull requests ![Pull requests on the Hugging Face Hub](assets/76_community_update/new-pr.png) [Pull requests](https://huggingface.co/gpt2/discussions?type=pull_request) allow community members to open, comment on, merge, or close pull requests directly from the website. The easiest way to open a pull request is to use the "Collaborate" button in the "Files and versions" tab. It lets you make single-file contributions very easily. Under the hood, our pull requests do not use forks and branches; instead, they use custom "branches" called `refs` that are stored directly on the source repo. This approach avoids the need to create a fork for each new version of the model/dataset. ## How is this different from other git hosts At a high level, we aim to build a simpler version of other git hosts' (like GitHub's) PRs and Issues: - no forks are involved: contributors push to a special `ref` branch directly on the source repo - no hard distinction between issues and PRs: they are essentially the same so we display them in the same lists - streamlined for ML (i.e. models/datasets/Spaces repos), not arbitrary repos ## What's next Of course, it's only the beginning. We will listen to community feedback to add new features and improve the community tab in the future. If you have any feedback, you can [join the discussion here](https://huggingface.co/spaces/huggingface/HuggingDiscussions/discussions/1). Today is the best time to join your first discussion and open a PR! 🤗
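If you prefer to script these workflows instead of using the website, the `huggingface_hub` Python client can also open discussions and pull requests. The sketch below is not part of this announcement: it assumes a recent `huggingface_hub` release that exposes `create_discussion` and `create_commit(..., create_pr=True)`, and the second repo id is a placeholder you would replace with a real repository.

```python
from huggingface_hub import HfApi, CommitOperationAdd

api = HfApi()  # assumes you are already logged in (e.g. via `huggingface-cli login`)

# Open a discussion on a repo you do not own
api.create_discussion(
    repo_id="gpt2",
    title="Question about the training data",
    description="Hi! Which corpus exactly was used for this checkpoint?",
)

# Propose a change as a pull request: the commit is pushed to a `refs/pr/...` ref
# on the source repo itself, with no fork involved
api.create_commit(
    repo_id="your-username/your-model",  # placeholder repo id
    operations=[CommitOperationAdd(path_in_repo="README.md", path_or_fileobj=b"Updated model card")],
    commit_message="Improve the model card",
    create_pr=True,
)
```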
Graphcore and Hugging Face Launch New Lineup of IPU-Ready Transformers
sallydoherty
May 26, 2022
graphcore-update
graphcore, partnerships
https://huggingface.co/blog/graphcore-update
# Graphcore and Hugging Face Launch New Lineup of IPU-Ready Transformers [Graphcore](https://huggingface.co/hardware/graphcore/) and Hugging Face have significantly expanded the range of Machine Learning modalities and tasks available in [Hugging Face Optimum](https://github.com/huggingface/optimum), an open-source library for Transformers performance optimization. Developers now have convenient access to a wide range of off-the-shelf Hugging Face Transformer models, optimised to deliver the best possible performance on Graphcore’s IPU. Starting with the [BERT transformer model](https://www.graphcore.ai/posts/getting-started-with-hugging-face-transformers-for-ipus-with-optimum) made available shortly after [Optimum Graphcore launched](https://huggingface.co/blog/graphcore), developers can now access 10 models covering Natural Language Processing (NLP), Speech and Computer Vision, which come with IPU configuration files and ready-to-use pre-trained and fine-tuned model weights. ## New Optimum models ### Computer vision [ViT](https://huggingface.co/Graphcore/vit-base-ipu) (Vision Transformer) is a breakthrough in image recognition that uses the transformer mechanism as its main component. When images are input to ViT, they're divided into small patches similar to how words are processed in language systems. Each patch is encoded by the Transformer (Embedding) and then can be processed individually. ### NLP [GPT-2](https://huggingface.co/Graphcore/gpt2-medium-wikitext-103) (Generative Pre-trained Transformer 2) is a text generation transformer model pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it is trained to generate texts from a prompt by guessing the next word in sentences. [RoBERTa](https://huggingface.co/Graphcore/roberta-base-squad2) (Robustly optimized BERT approach) is a transformer model that (like GPT-2) is pretrained on a large corpus of English data in a self-supervised fashion. More precisely, RoBERTa was pretrained with the masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. RoBERTa can be used for masked language modeling, but is mostly intended to be fine-tuned on a downstream task. [DeBERTa](https://huggingface.co/Graphcore/deberta-base-ipu) (Decoding-enhanced BERT with disentangled attention) is a pretrained neural language model for NLP tasks. DeBERTa adapts the 2018 BERT and 2019 RoBERTa models using two novel techniques—a disentangled attention mechanism and an enhanced mask decoder—significantly improving the efficiency of model pretraining and performance of downstream tasks. [BART](https://huggingface.co/Graphcore/bart-base-ipu) is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering). 
[LXMERT](https://huggingface.co/Graphcore/lxmert-gqa-uncased) (Learning Cross-Modality Encoder Representations from Transformers) is a multimodal transformer model for learning vision and language representations. It has three encoders: object relationship encoder, a language encoder, and a cross-modality encoder. It is pretrained via a combination of masked language modeling, visual-language text alignment, ROI-feature regression, masked visual-attribute modeling, masked visual-object modeling, and visual-question answering objectives. It has achieved state-of-the-art results on the VQA and GQA visual-question-answering datasets. [T5](https://huggingface.co/Graphcore/t5-small-ipu) (Text-to-Text Transfer Transformer) is a revolutionary new model that can take any text and convert it into a machine learning format for translation, question answering or classification. It introduces a unified framework that converts all text-based language problems into a text-to-text format for transfer learning. By doing so, it has simplified a way to use the same model, objective function, hyperparameters, and decoding procedure across a diverse set of NLP tasks. ### Speech [HuBERT](https://huggingface.co/Graphcore/hubert-base-ipu) (Hidden-Unit BERT) is a self-supervised speech recognition model pretrained on audio, learning a combined acoustic and language model over continuous inputs. The HuBERT model either matches or improves upon the state-of-the-art wav2vec 2.0 performance on the Librispeech (960h) and Libri-light (60,000h) benchmarks with 10min, 1h, 10h, 100h, and 960h fine-tuning subsets. [Wav2Vec2](https://huggingface.co/Graphcore/wav2vec2-base-ipu) is a pretrained self-supervised model for automatic speech recognition. Using a novel contrastive pretraining objective, Wav2Vec2 learns powerful speech representations from large amounts of unlabelled speech data, followed by fine-tuning on a small amount of transcribed speech data, outperforming the best semi-supervised methods while being conceptually simpler. ## Hugging Face Optimum Graphcore: building on a solid partnership Graphcore joined the [Hugging Face Hardware Partner Program](https://huggingface.co/hardware) in 2021 as a founding member, with both companies sharing the common goal of lowering the barriers for innovators seeking to harness the power of machine intelligence. Since then, Graphcore and Hugging Face have worked together extensively to make training of transformer models on IPUs fast and easy, with the first Optimum Graphcore model (BERT) being made available last year. Transformers have proven to be extremely efficient for a wide range of functions, including feature extraction, text generation, sentiment analysis, translation and many more. Models like BERT are widely used by Graphcore customers in a huge array of applications including cybersecurity, voice call automation, drug discovery, and translation. Optimizing their performance in the real world requires considerable time, effort and skills that are beyond the reach of many companies and organizations. In providing an open-source library of transformer models, Hugging Face has directly addressed these issues. Integrating IPUs with HuggingFace also allows developers to leverage not just the models, but also datasets available in the HuggingFace Hub. Developers can now use Graphcore systems to train 10 different types of state-of-the-art transformer models and access thousands of datasets with minimal coding complexity. 
With this partnership, we are providing users with the tools and ecosystem to easily download and fine-tune state-of-the-art pretrained models to various domains and downstream tasks. ## Bringing Graphcore’s latest hardware and software to the table While members of Hugging Face’s ever-expanding user base have already been able to benefit from the speed, performance, and power- and cost-efficiency of IPU technology, a combination of recent hardware and software releases from Graphcore will unlock even more potential. On the hardware front, the [Bow IPU](https://www.graphcore.ai/bow-processors) — announced in March and now shipping to customers — is the first processor in the world to use Wafer-on-Wafer (WoW) 3D stacking technology, taking the well-documented benefits of the IPU to the next level. Featuring ground-breaking advances in compute architecture and silicon implementation, communication and memory, each Bow IPU delivers up to 350 teraFLOPS of AI compute—an impressive 40% increase in performance—and up to 16% more power efficiency compared to the previous generation IPU. Importantly, Hugging Face Optimum users can switch seamlessly from previous generation IPUs to Bow processors, as no code changes are required. Software also plays a vital role in unlocking the IPU’s capabilities, so naturally Optimum offers a plug-and-play experience with Graphcore’s easy-to-use Poplar SDK — which itself has received a major 2.5 update. Poplar makes it easy to train state-of-the-art models on state-of-the-art hardware, thanks to its full integration with standard machine learning frameworks, including PyTorch, PyTorch Lightning, and TensorFlow—as well as orchestration and deployment tools such as Docker and Kubernetes. Making Poplar compatible with these widely used, third-party systems allows developers to easily port their models from their other compute platforms and start taking advantage of the IPU’s advanced AI capabilities. ## Get started with Hugging Face’s Optimum Graphcore models If you’re interested in combining the benefits of IPU technology with the strengths of transformer models, you can download the latest range of Optimum Graphcore models from the [Graphcore organization on the Hub](https://huggingface.co/Graphcore), or access the code from the [Optimum GitHub repo](https://github.com/huggingface/optimum-graphcore). Our [Getting Started blog post](https://huggingface.co/blog/graphcore-getting-started) will guide you through each step to start experimenting with IPUs. Additionally, Graphcore has built an extensive page of [developer resources](https://www.graphcore.ai/developer), where you can find the IPU Model Garden—a repository of deployment-ready ML applications including computer vision, NLP, graph networks and more—alongside an array of documentation, tutorials, how-to-videos, webinars, and more. You can also access [Graphcore’s GitHub repo](https://github.com/graphcore) for more code references and tutorials. To learn more about using Hugging Face on Graphcore, head over to our [partner page](https://huggingface.co/hardware/graphcore)!
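To give a flavor of what using these checkpoints looks like, here is a minimal, untested sketch of fine-tuning with Optimum Graphcore. It assumes the `optimum-graphcore` package, access to IPU hardware, and the `IPUConfig`/`IPUTrainer`/`IPUTrainingArguments` API described in the Getting Started post linked above; depending on your version, extra arguments (such as the pod type) may be required, so treat the exact argument names as assumptions and refer to the official examples for a tested walkthrough.

```python
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from optimum.graphcore import IPUConfig, IPUTrainer, IPUTrainingArguments  # assumed API, see Getting Started post

# a Hub checkpoint plus the matching IPU configuration published by Graphcore
model_checkpoint = "bert-base-uncased"
ipu_config = IPUConfig.from_pretrained("Graphcore/bert-base-ipu")

tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(model_checkpoint, num_labels=2)

# any Hub dataset works; SST-2 is used here purely as an example
dataset = load_dataset("glue", "sst2")
encoded = dataset.map(
    lambda examples: tokenizer(examples["sentence"], truncation=True, padding="max_length", max_length=128),
    batched=True,
)

# extra arguments (e.g. the pod type) may be needed depending on the optimum-graphcore version
training_args = IPUTrainingArguments(output_dir="./outputs", num_train_epochs=1, per_device_train_batch_size=1)

trainer = IPUTrainer(
    model=model,
    ipu_config=ipu_config,
    args=training_args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
    tokenizer=tokenizer,
)
trainer.train()
```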
Deep Q-Learning with Atari
ThomasSimonini
June 7, 2022
deep-rl-dqn
rl
https://huggingface.co/blog/deep-rl-dqn
# Deep Q-Learning with Space Invaders <h2>Unit 3 of the <a href="https://github.com/huggingface/deep-rl-class">Deep Reinforcement Learning Class with Hugging Face 🤗</a></h2> ⚠️ A **new updated version of this article is available here** 👉 [https://huggingface.co/deep-rl-course/unit3/introduction](https://huggingface.co/deep-rl-course/unit3/introduction) *This article is part of the Deep Reinforcement Learning Class. A free course from beginner to expert. Check the syllabus [here.](https://huggingface.co/deep-rl-course/unit0/introduction)* <img src="assets/78_deep_rl_dqn/thumbnail.gif" alt="Thumbnail"/> --- ⚠️ A **new updated version of this article is available here** 👉 [https://huggingface.co/deep-rl-course/unit3/introduction](https://huggingface.co/deep-rl-course/unit3/introduction) *This article is part of the Deep Reinforcement Learning Class. A free course from beginner to expert. Check the syllabus [here.](https://huggingface.co/deep-rl-course/unit0/introduction)* [In the last unit](https://huggingface.co/blog/deep-rl-q-part2), we learned our first reinforcement learning algorithm: Q-Learning, **implemented it from scratch**, and trained it in two environments, FrozenLake-v1 ☃️ and Taxi-v3 🚕. We got excellent results with this simple algorithm. But these environments were relatively simple because the **State Space was discrete and small** (16 different states for FrozenLake-v1 and 500 for Taxi-v3). But as we'll see, producing and updating a **Q-table can become ineffective in large state space environments.** So today, **we'll study our first Deep Reinforcement Learning agent**: Deep Q-Learning. Instead of using a Q-table, Deep Q-Learning uses a Neural Network that takes a state and approximates Q-values for each action based on that state. And **we'll train it to play Space Invaders and other Atari environments using [RL-Zoo](https://github.com/DLR-RM/rl-baselines3-zoo)**, a training framework for RL using Stable-Baselines3 that provides scripts for training, evaluating agents, tuning hyperparameters, plotting results, and recording videos. <figure class="image table text-center m-0 w-full"> <img src="assets/78_deep_rl_dqn/atari-envs.gif" alt="Environments"/> </figure> So let’s get started! 🚀 To be able to understand this unit, **you need to understand [Q-Learning](https://huggingface.co/blog/deep-rl-q-part2) first.** - [From Q-Learning to Deep Q-Learning](#from-q-learning-to-deep-q-learning) - [The Deep Q Network](#the-deep-q-network-dqn) - [Preprocessing the input and temporal limitation](#preprocessing-the-input-and-temporal-limitation) - [The Deep Q-Learning Algorithm](#the-deep-q-learning-algorithm) - [Experience Replay to make more efficient use of experiences](#experience-replay-to-make-more-efficient-use-of-experiences) - [Fixed Q-Target to stabilize the training](#fixed-q-target-to-stabilize-the-training) - [Double DQN](#double-dqn) ## From Q-Learning to Deep Q-Learning We learned that **Q-Learning is an algorithm we use to train our Q-Function**, an **action-value function** that determines the value of being at a particular state and taking a specific action at that state. 
<figure class="image table text-center m-0 w-full"> <img src="assets/73_deep_rl_q_part2/Q-function.jpg" alt="Q-function"/> <figcaption>Given a state and action, our Q Function outputs a state-action value (also called Q-value)</figcaption> </figure> The **Q comes from "the Quality" of that action at that state.** Internally, our Q-function has **a Q-table, a table where each cell corresponds to a state-action pair value.** Think of this Q-table as **the memory or cheat sheet of our Q-function.** The problem is that Q-Learning is a *tabular method*, i.e., it only works when the state and action spaces **are small enough for the value functions to be represented as arrays and tables**. And this is **not scalable**. Q-Learning was working well with small state space environments like: - FrozenLake, where we had 16 states. - Taxi-v3, where we had 500 states. But think of what we're going to do today: we will train an agent to learn to play Space Invaders using the frames as input. As **[Nikita Melkozerov mentioned](https://twitter.com/meln1k), Atari environments** have an observation space with a shape of (210, 160, 3), containing values ranging from 0 to 255, so that gives us 256^(210x160x3) = 256^100800 (for comparison, we have approximately 10^80 atoms in the observable universe). <img src="assets/78_deep_rl_dqn/atari.jpg" alt="Atari State Space"/> Therefore, the state space is gigantic; hence creating and updating a Q-table for that environment would not be efficient. In this case, the best idea is to approximate the Q-values with a parametrized Q-function \\(Q_{\theta}(s,a)\\) instead of a Q-table. This neural network will approximate, given a state, the different Q-values for each possible action at that state. And that's exactly what Deep Q-Learning does. <img src="assets/63_deep_rl_intro/deep.jpg" alt="Deep Q Learning"/> Now that we understand Deep Q-Learning, let's dive deeper into the Deep Q-Network. ## The Deep Q-Network (DQN) This is the architecture of our Deep Q-Learning network: <img src="assets/78_deep_rl_dqn/deep-q-network.jpg" alt="Deep Q Network"/> As input, we take a **stack of 4 frames** passed through the network as a state and output a **vector of Q-values for each possible action at that state**. Then, like with Q-Learning, we just need to use our epsilon-greedy policy to select which action to take. When the Neural Network is initialized, **the Q-value estimation is terrible**. But during training, our Deep Q-Network agent will associate a situation with the appropriate action and **learn to play the game well**. ### Preprocessing the input and temporal limitation We mentioned that we **preprocess the input**. It’s an essential step since we want to reduce the complexity of our state to reduce the computation time needed for training. So what we do is **reduce the state space to 84x84 and grayscale it** (since the colors in Atari environments don't add important information). This is an essential saving since we **reduce our three color channels (RGB) to 1**. We can also **crop a part of the screen in some games** if it does not contain important information. Then we stack four frames together. <img src="assets/78_deep_rl_dqn/preprocessing.jpg" alt="Preprocessing"/> Why do we stack four frames together? We stack frames together because it helps us **handle the problem of temporal limitation**. Let’s take an example with the game of Pong. 
When you see this frame:

<img src="assets/78_deep_rl_dqn/temporal-limitation.jpg" alt="Temporal Limitation"/>

Can you tell me where the ball is going? No, because one frame is not enough to have a sense of motion!

But what if I add three more frames? **Here you can see that the ball is going to the right**.

<img src="assets/78_deep_rl_dqn/temporal-limitation-2.jpg" alt="Temporal Limitation"/>

That’s why, to capture temporal information, we stack four frames together.

Then, the stacked frames are processed by three convolutional layers. These layers **allow us to capture and exploit spatial relationships in images**. But also, because frames are stacked together, **you can exploit some temporal properties across those frames**.

Finally, we have a couple of fully connected layers that output a Q-value for each possible action at that state.

<img src="assets/78_deep_rl_dqn/deep-q-network.jpg" alt="Deep Q Network"/>

So, we see that Deep Q-Learning uses a neural network to approximate, given a state, the different Q-values for each possible action at that state. Let’s now study the Deep Q-Learning algorithm.

## The Deep Q-Learning Algorithm

We learned that Deep Q-Learning **uses a deep neural network to approximate the different Q-values for each possible action at a state** (value-function estimation).

The difference is that, during the training phase, instead of updating the Q-value of a state-action pair directly as we did with Q-Learning:

<img src="https://huggingface.co/blog/assets/73_deep_rl_q_part2/q-ex-5.jpg" alt="Q Loss"/>

in Deep Q-Learning, we create a **loss function between our Q-value prediction and the Q-target and use Gradient Descent to update the weights of our Deep Q-Network to approximate our Q-values better**.

<img src="assets/78_deep_rl_dqn/Q-target.jpg" alt="Q-target"/>

The Deep Q-Learning training algorithm has *two phases*:

- **Sampling**: we perform actions and **store the observed experience tuples in a replay memory**.
- **Training**: we select a **small batch of tuples randomly and learn from it using a gradient descent update step**.

<img src="assets/78_deep_rl_dqn/sampling-training.jpg" alt="Sampling Training"/>

But this is not the only change compared with Q-Learning. Deep Q-Learning training **might suffer from instability**, mainly because of combining a non-linear Q-value function (Neural Network) and bootstrapping (when we update targets with existing estimates and not an actual complete return).

To help us stabilize the training, we implement three different solutions:

1. *Experience Replay*, to make more **efficient use of experiences**.
2. *Fixed Q-Target* **to stabilize the training**.
3. *Double Deep Q-Learning*, to **handle the problem of the overestimation of Q-values**.

<!--- We'll see these three solutions in the pseudocode. --->

### Experience Replay to make more efficient use of experiences

Why do we create a replay memory?

Experience Replay in Deep Q-Learning has two functions:

1. **Make more efficient use of the experiences during the training**.
   - Experience replay helps us **make more efficient use of the experiences during the training.** Usually, in online reinforcement learning, we interact in the environment, get experiences (state, action, reward, and next state), learn from them (update the neural network) and discard them.
   - But with experience replay, we create a replay buffer that saves experience samples **that we can reuse during the training.**

<img src="assets/78_deep_rl_dqn/experience-replay.jpg" alt="Experience Replay"/>

⇒ This allows us to **learn from individual experiences multiple times**.

2. **Avoid forgetting previous experiences and reduce the correlation between experiences**.
   - The problem we get if we give sequential samples of experiences to our neural network is that it tends to forget **previous experiences as new experiences overwrite them.** For instance, if we are in the first level and then the second, which is different, our agent can forget how to behave and play in the first level.

The solution is to create a Replay Buffer that stores experience tuples while interacting with the environment and then sample a small batch of tuples. This prevents **the network from only learning about what it has immediately done.**

Experience replay also has other benefits. By randomly sampling the experiences, we remove correlation in the observation sequences and prevent **action values from oscillating or diverging catastrophically.**

In the Deep Q-Learning pseudocode, we see that we **initialize a replay memory buffer D with capacity N** (N is a hyperparameter that you can define). We then store experiences in the memory and sample a minibatch of experiences to feed the Deep Q-Network during the training phase.

<img src="assets/78_deep_rl_dqn/experience-replay-pseudocode.jpg" alt="Experience Replay Pseudocode"/>

### Fixed Q-Target to stabilize the training

When we want to calculate the TD error (aka the loss), we calculate the **difference between the TD target (Q-Target) and the current Q-value (estimation of Q)**.

But we **don’t have any idea of the real TD target**. We need to estimate it. Using the Bellman equation, we saw that the TD target is just the reward of taking that action at that state plus the discounted highest Q-value for the next state.

<img src="assets/78_deep_rl_dqn/Q-target.jpg" alt="Q-target"/>

However, the problem is that we are using the same parameters (weights) for estimating the TD target **and** the Q-value. Consequently, there is a significant correlation between the TD target and the parameters we are changing.

This means that at every step of training, **our Q-values shift, but so does the target value.** We’re getting closer to our target, but the target is also moving. It’s like chasing a moving target! This leads to significant oscillation in training.

It’s as if you were a cowboy (the Q estimation) trying to catch a cow (the Q-target): you must get closer (reduce the error).

<img src="assets/78_deep_rl_dqn/qtarget-1.jpg" alt="Q-target"/>

At each time step, you’re trying to approach the cow, which also moves at each time step (because you use the same parameters).

<img src="assets/78_deep_rl_dqn/qtarget-2.jpg" alt="Q-target"/>
<img src="assets/78_deep_rl_dqn/qtarget-3.jpg" alt="Q-target"/>

This leads to a bizarre path of chasing (a significant oscillation in training).

<img src="assets/78_deep_rl_dqn/qtarget-4.jpg" alt="Q-target"/>

Instead, what we see in the pseudo-code is that we:

- Use a **separate network with fixed parameters** for estimating the TD Target.
- **Copy the parameters from our Deep Q-Network every C steps** to update the target network (see the sketch below).
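To make these two ideas concrete, here is a minimal, hypothetical sketch in PyTorch (not the RL-Baselines3-Zoo implementation we use later in this unit): a replay buffer that stores and randomly samples experience tuples, and a frozen target network that computes the TD target and is synced every `C` steps. The network sizes, the value of `C`, and all names are illustrative assumptions.

```python
import random
from collections import deque

import torch
import torch.nn as nn
import torch.nn.functional as F

class ReplayBuffer:
    """Stores experience tuples so they can be reused and sampled randomly."""
    def __init__(self, capacity):
        self.memory = deque(maxlen=capacity)  # capacity N from the pseudocode

    def push(self, state, action, reward, next_state, done):
        self.memory.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # random minibatch -> breaks the correlation between consecutive experiences
        return random.sample(list(self.memory), batch_size)

# Two copies of the same architecture: the online network is trained,
# the target network stays frozen and provides the TD targets.
q_network = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
target_network = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
target_network.load_state_dict(q_network.state_dict())  # start identical

def dqn_loss(states, actions, rewards, next_states, dones, gamma=0.99):
    # TD target = r + gamma * max_a' Q_target(s', a'), computed with the frozen network
    with torch.no_grad():
        next_q = target_network(next_states).max(dim=1).values
        td_target = rewards + gamma * next_q * (1 - dones)
    # current estimate Q(s, a) from the online (trained) network
    q_pred = q_network(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    return F.mse_loss(q_pred, td_target)

# Every C gradient steps, copy the online weights into the target network
C = 1000
def maybe_sync_target(step):
    if step % C == 0:
        target_network.load_state_dict(q_network.state_dict())
```

In practice, the training framework handles all of this for us; the sketch only shows where the replay buffer and the fixed Q-target enter the update.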
<img src="assets/78_deep_rl_dqn/fixed-q-target-pseudocode.jpg" alt="Fixed Q-target Pseudocode"/>

### Double DQN

Double DQNs, or Double Learning, were introduced [by Hado van Hasselt](https://papers.nips.cc/paper/3964-double-q-learning). This method **handles the problem of the overestimation of Q-values.**

To understand this problem, remember how we calculate the TD target. When computing it, we face a simple question: how can we be sure that **the best action for the next state is the action with the highest Q-value?**

We know that the accuracy of Q-values depends on what action we tried **and** what neighboring states we explored.

Consequently, we don’t have enough information about the best action to take at the beginning of the training. Therefore, taking the maximum Q-value (which is noisy) as the best action to take can lead to false positives. If non-optimal actions are regularly **given a higher Q-value than the optimal best action, the learning will be complicated.**

The solution is: when we compute the Q target, we use two networks to decouple the action selection from the target Q-value generation. We:

<!---<img src="assets/78_deep_rl_dqn/double-dqn-pseudocode.jpg" alt="Double DQN Pseudocode"/>--->

- Use our **DQN network** to select the best action to take for the next state (the action with the highest Q-value).
- Use our **Target network** to calculate the target Q-value of taking that action at the next state.

Therefore, Double DQN helps us reduce the overestimation of Q-values and, as a consequence, helps us train faster and have more stable learning.

Since these three improvements to Deep Q-Learning, many more have been added, such as Prioritized Experience Replay and Dueling Deep Q-Learning. They’re outside the scope of this course, but if you’re interested, check the links we put in the reading list. 👉 **[https://github.com/huggingface/deep-rl-class/blob/main/unit3/README.md](https://github.com/huggingface/deep-rl-class/blob/main/unit3/README.md)**

Now that you've studied the theory behind Deep Q-Learning, **you’re ready to train your Deep Q-Learning agent to play Atari Games**. We'll start with Space Invaders, but you'll be able to use any Atari game you want 🔥

We're using the RL-Baselines-3 Zoo integration, a vanilla version of Deep Q-Learning with no extensions such as Double-DQN, Dueling-DQN, or Prioritized Experience Replay.

Start the tutorial here 👉 https://colab.research.google.com/github/huggingface/deep-rl-class/blob/main/unit3/unit3.ipynb

The leaderboard to compare your results with your classmates 🏆 👉 https://huggingface.co/spaces/chrisjay/Deep-Reinforcement-Learning-Leaderboard

<figure class="image table text-center m-0 w-full">
  <img src="assets/78_deep_rl_dqn/atari-envs.gif" alt="Environments"/>
</figure>

---

Congrats on finishing this chapter! There was a lot of information. And congrats on finishing the tutorial. You’ve just trained your first Deep Q-Learning agent and shared it on the Hub 🥳.

It’s **normal if you still feel confused** by all these elements. **This was the same for me and for all people who studied RL.** Take time to really grasp the material before continuing.

Don't hesitate to train your agent in other environments (Pong, Seaquest, QBert, Ms Pac Man).
The **best way to learn is to try things on your own!**

We published additional readings in the syllabus if you want to go deeper 👉 **[https://github.com/huggingface/deep-rl-class/blob/main/unit3/README.md](https://github.com/huggingface/deep-rl-class/blob/main/unit3/README.md)**

In the next unit, we’re going to learn about Policy Gradient methods.

And don't forget to share this with your friends who want to learn 🤗!

Finally, we want **to improve and update the course iteratively with your feedback**. If you have some, please fill out this form 👉 **[https://forms.gle/3HgA7bEHwAmmLfwh9](https://forms.gle/3HgA7bEHwAmmLfwh9)**

### **Keep learning, stay awesome,**
The Annotated Diffusion Model
nielsr
June 7, 2022
annotated-diffusion
guide, diffusion, stable-diffusion
https://huggingface.co/blog/annotated-diffusion
# The Annotated Diffusion Model <script async defer src="https://unpkg.com/medium-zoom-element@0/dist/medium-zoom-element.min.js"></script> <a target="_blank" href="https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/annotated_diffusion.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> In this blog post, we'll take a deeper look into **Denoising Diffusion Probabilistic Models** (also known as DDPMs, diffusion models, score-based generative models or simply [autoencoders](https://benanne.github.io/2022/01/31/diffusion.html)) as researchers have been able to achieve remarkable results with them for (un)conditional image/audio/video generation. Popular examples (at the time of writing) include [GLIDE](https://arxiv.org/abs/2112.10741) and [DALL-E 2](https://openai.com/dall-e-2/) by OpenAI, [Latent Diffusion](https://github.com/CompVis/latent-diffusion) by the University of Heidelberg and [ImageGen](https://imagen.research.google/) by Google Brain. We'll go over the original DDPM paper by ([Ho et al., 2020](https://arxiv.org/abs/2006.11239)), implementing it step-by-step in PyTorch, based on Phil Wang's [implementation](https://github.com/lucidrains/denoising-diffusion-pytorch) - which itself is based on the [original TensorFlow implementation](https://github.com/hojonathanho/diffusion). Note that the idea of diffusion for generative modeling was actually already introduced in ([Sohl-Dickstein et al., 2015](https://arxiv.org/abs/1503.03585)). However, it took until ([Song et al., 2019](https://arxiv.org/abs/1907.05600)) (at Stanford University), and then ([Ho et al., 2020](https://arxiv.org/abs/2006.11239)) (at Google Brain) who independently improved the approach. Note that there are [several perspectives](https://twitter.com/sedielem/status/1530894256168222722?s=20&t=mfv4afx1GcNQU5fZklpACw) on diffusion models. Here, we employ the discrete-time (latent variable model) perspective, but be sure to check out the other perspectives as well. Alright, let's dive in! ```python from IPython.display import Image Image(filename='assets/78_annotated-diffusion/ddpm_paper.png') ``` <p align="center"> <img src="assets/78_annotated-diffusion/ddpm_paper.png" width="500" /> </p> We'll install and import the required libraries first (assuming you have [PyTorch](https://pytorch.org/) installed). ```python !pip install -q -U einops datasets matplotlib tqdm import math from inspect import isfunction from functools import partial %matplotlib inline import matplotlib.pyplot as plt from tqdm.auto import tqdm from einops import rearrange, reduce from einops.layers.torch import Rearrange import torch from torch import nn, einsum import torch.nn.functional as F ``` ## What is a diffusion model? A (denoising) diffusion model isn't that complex if you compare it to other generative models such as Normalizing Flows, GANs or VAEs: they all convert noise from some simple distribution to a data sample. This is also the case here where **a neural network learns to gradually denoise data** starting from pure noise. In a bit more detail for images, the set-up consists of 2 processes: * a fixed (or predefined) forward diffusion process \\(q\\) of our choosing, that gradually adds Gaussian noise to an image, until you end up with pure noise * a learned reverse denoising diffusion process \\(p_\theta\\), where a neural network is trained to gradually denoise an image starting from pure noise, until you end up with an actual image. 
<p align="center"> <img src="assets/78_annotated-diffusion/diffusion_figure.png" width="600" /> </p> Both the forward and reverse process indexed by \\(t\\) happen for some number of finite time steps \\(T\\) (the DDPM authors use \\(T=1000\\)). You start with \\(t=0\\) where you sample a real image \\(\mathbf{x}_0\\) from your data distribution (let's say an image of a cat from ImageNet), and the forward process samples some noise from a Gaussian distribution at each time step \\(t\\), which is added to the image of the previous time step. Given a sufficiently large \\(T\\) and a well behaved schedule for adding noise at each time step, you end up with what is called an [isotropic Gaussian distribution](https://math.stackexchange.com/questions/1991961/gaussian-distribution-is-isotropic) at \\(t=T\\) via a gradual process. ## In more mathematical form Let's write this down more formally, as ultimately we need a tractable loss function which our neural network needs to optimize. Let \\(q(\mathbf{x}_0)\\) be the real data distribution, say of "real images". We can sample from this distribution to get an image, \\(\mathbf{x}_0 \sim q(\mathbf{x}_0)\\). We define the forward diffusion process \\(q(\mathbf{x}_t | \mathbf{x}_{t-1})\\) which adds Gaussian noise at each time step \\(t\\), according to a known variance schedule \\(0 < \beta_1 < \beta_2 < ... < \beta_T < 1\\) as $$ q(\mathbf{x}_t | \mathbf{x}_{t-1}) = \mathcal{N}(\mathbf{x}_t; \sqrt{1 - \beta_t} \mathbf{x}_{t-1}, \beta_t \mathbf{I}). $$ Recall that a normal distribution (also called Gaussian distribution) is defined by 2 parameters: a mean \\(\mu\\) and a variance \\(\sigma^2 \geq 0\\). Basically, each new (slightly noisier) image at time step \\(t\\) is drawn from a **conditional Gaussian distribution** with \\(\mathbf{\mu}_t = \sqrt{1 - \beta_t} \mathbf{x}_{t-1}\\) and \\(\sigma^2_t = \beta_t\\), which we can do by sampling \\(\mathbf{\epsilon} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})\\) and then setting \\(\mathbf{x}_t = \sqrt{1 - \beta_t} \mathbf{x}_{t-1} + \sqrt{\beta_t} \mathbf{\epsilon}\\). Note that the \\(\beta_t\\) aren't constant at each time step \\(t\\) (hence the subscript) --- in fact one defines a so-called **"variance schedule"**, which can be linear, quadratic, cosine, etc. as we will see further (a bit like a learning rate schedule). So starting from \\(\mathbf{x}_0\\), we end up with \\(\mathbf{x}_1, ..., \mathbf{x}_t, ..., \mathbf{x}_T\\), where \\(\mathbf{x}_T\\) is pure Gaussian noise if we set the schedule appropriately. Now, if we knew the conditional distribution \\(p(\mathbf{x}_{t-1} | \mathbf{x}_t)\\), then we could run the process in reverse: by sampling some random Gaussian noise \\(\mathbf{x}_T\\), and then gradually "denoise" it so that we end up with a sample from the real distribution \\(\mathbf{x}_0\\). However, we don't know \\(p(\mathbf{x}_{t-1} | \mathbf{x}_t)\\). It's intractable since it requires knowing the distribution of all possible images in order to calculate this conditional probability. Hence, we're going to leverage a neural network to **approximate (learn) this conditional probability distribution**, let's call it \\(p_\theta (\mathbf{x}_{t-1} | \mathbf{x}_t)\\), with \\(\theta\\) being the parameters of the neural network, updated by gradient descent. Ok, so we need a neural network to represent a (conditional) probability distribution of the backward process. 
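As a quick aside before we dive into the reverse process, here is a tiny numerical illustration of the forward step \\(\mathbf{x}_t = \sqrt{1 - \beta_t} \mathbf{x}_{t-1} + \sqrt{\beta_t} \mathbf{\epsilon}\\) defined above. This is only a toy sketch with an arbitrary \\(\beta_t\\) and random tensors standing in for an image; the actual `q_sample` function used in this post appears further below.

```python
import math
import torch

# one forward diffusion step: x_t = sqrt(1 - beta_t) * x_{t-1} + sqrt(beta_t) * eps
beta_t = 0.02                      # arbitrary value taken from a variance schedule
x_prev = torch.randn(3, 64, 64)    # stand-in for x_{t-1} (a slightly noisy image)
eps = torch.randn_like(x_prev)     # eps ~ N(0, I)

x_t = math.sqrt(1.0 - beta_t) * x_prev + math.sqrt(beta_t) * eps

# the signal is slightly shrunk and noise of variance beta_t is added;
# repeated over many steps, this drives x_t towards an isotropic Gaussian
print(x_t.shape)  # torch.Size([3, 64, 64])
```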
If we assume this reverse process is Gaussian as well, then recall that any Gaussian distribution is defined by 2 parameters: * a mean parametrized by \\(\mu_\theta\\); * a variance parametrized by \\(\Sigma_\theta\\); so we can parametrize the process as $$ p_\theta (\mathbf{x}_{t-1} | \mathbf{x}_t) = \mathcal{N}(\mathbf{x}_{t-1}; \mu_\theta(\mathbf{x}_{t},t), \Sigma_\theta (\mathbf{x}_{t},t))$$ where the mean and variance are also conditioned on the noise level \\(t\\). Hence, our neural network needs to learn/represent the mean and variance. However, the DDPM authors decided to **keep the variance fixed, and let the neural network only learn (represent) the mean \\(\mu_\theta\\) of this conditional probability distribution**. From the paper: > First, we set \\(\Sigma_\theta ( \mathbf{x}_t, t) = \sigma^2_t \mathbf{I}\\) to untrained time dependent constants. Experimentally, both \\(\sigma^2_t = \beta_t\\) and \\(\sigma^2_t = \tilde{\beta}_t\\) (see paper) had similar results. This was then later improved in the [Improved diffusion models](https://openreview.net/pdf?id=-NEXDKk8gZ) paper, where a neural network also learns the variance of this backwards process, besides the mean. So we continue, assuming that our neural network only needs to learn/represent the mean of this conditional probability distribution. ## Defining an objective function (by reparametrizing the mean) To derive an objective function to learn the mean of the backward process, the authors observe that the combination of \\(q\\) and \\(p_\theta\\) can be seen as a variational auto-encoder (VAE) [(Kingma et al., 2013)](https://arxiv.org/abs/1312.6114). Hence, the **variational lower bound** (also called ELBO) can be used to minimize the negative log-likelihood with respect to ground truth data sample \\(\mathbf{x}_0\\) (we refer to the VAE paper for details regarding ELBO). It turns out that the ELBO for this process is a sum of losses at each time step \\(t\\), \\(L = L_0 + L_1 + ... + L_T\\). By construction of the forward \\(q\\) process and backward process, each term (except for \\(L_0\\)) of the loss is actually the **KL divergence between 2 Gaussian distributions** which can be written explicitly as an L2-loss with respect to the means! A direct consequence of the constructed forward process \\(q\\), as shown by Sohl-Dickstein et al., is that we can sample \\(\mathbf{x}_t\\) at any arbitrary noise level conditioned on \\(\mathbf{x}_0\\) (since sums of Gaussians is also Gaussian). This is very convenient: we don't need to apply \\(q\\) repeatedly in order to sample \\(\mathbf{x}_t\\). We have that $$q(\mathbf{x}_t | \mathbf{x}_0) = \cal{N}(\mathbf{x}_t; \sqrt{\bar{\alpha}_t} \mathbf{x}_0, (1- \bar{\alpha}_t) \mathbf{I})$$ with \\(\alpha_t := 1 - \beta_t\\) and \\(\bar{\alpha}_t := \Pi_{s=1}^{t} \alpha_s\\). Let's refer to this equation as the "nice property". This means we can sample Gaussian noise and scale it appropriatly and add it to \\(\mathbf{x}_0\\) to get \\(\mathbf{x}_t\\) directly. Note that the \\(\bar{\alpha}_t\\) are functions of the known \\(\beta_t\\) variance schedule and thus are also known and can be precomputed. This then allows us, during training, to **optimize random terms of the loss function \\(L\\)** (or in other words, to randomly sample \\(t\\) during training and optimize \\(L_t\\)). Another beauty of this property, as shown in Ho et al. 
is that one can (after some math, for which we refer the reader to [this excellent blog post](https://lilianweng.github.io/posts/2021-07-11-diffusion-models/)) instead **reparametrize the mean to make the neural network learn (predict) the added noise (via a network \\(\mathbf{\epsilon}_\theta(\mathbf{x}_t, t)\\)) for noise level \\(t\\)** in the KL terms which constitute the losses. This means that our neural network becomes a noise predictor, rather than a (direct) mean predictor. The mean can be computed as follows: $$ \mathbf{\mu}_\theta(\mathbf{x}_t, t) = \frac{1}{\sqrt{\alpha_t}} \left( \mathbf{x}_t - \frac{\beta_t}{\sqrt{1- \bar{\alpha}_t}} \mathbf{\epsilon}_\theta(\mathbf{x}_t, t) \right)$$ The final objective function \\(L_t\\) then looks as follows (for a random time step \\(t\\) given \\(\mathbf{\epsilon} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})\\) ): $$ \| \mathbf{\epsilon} - \mathbf{\epsilon}_\theta(\mathbf{x}_t, t) \|^2 = \| \mathbf{\epsilon} - \mathbf{\epsilon}_\theta( \sqrt{\bar{\alpha}_t} \mathbf{x}_0 + \sqrt{(1- \bar{\alpha}_t) } \mathbf{\epsilon}, t) \|^2.$$ Here, \\(\mathbf{x}_0\\) is the initial (real, uncorrupted) image, and we see the direct noise level \\(t\\) sample given by the fixed forward process. \\(\mathbf{\epsilon}\\) is the pure noise sampled at time step \\(t\\), and \\(\mathbf{\epsilon}_\theta (\mathbf{x}_t, t)\\) is our neural network. The neural network is optimized using a simple mean squared error (MSE) between the true and the predicted Gaussian noise. The training algorithm now looks as follows: <p align="center"> <img src="assets/78_annotated-diffusion/training.png" width="400" /> </p> In other words: * we take a random sample \\(\mathbf{x}_0\\) from the real unknown and possibily complex data distribution \\(q(\mathbf{x}_0)\\) * we sample a noise level \\(t\\) uniformally between \\(1\\) and \\(T\\) (i.e., a random time step) * we sample some noise from a Gaussian distribution and corrupt the input by this noise at level \\(t\\) (using the nice property defined above) * the neural network is trained to predict this noise based on the corrupted image \\(\mathbf{x}_t\\) (i.e. noise applied on \\(\mathbf{x}_0\\) based on known schedule \\(\beta_t\\)) In reality, all of this is done on batches of data, as one uses stochastic gradient descent to optimize neural networks. ## The neural network The neural network needs to take in a noised image at a particular time step and return the predicted noise. Note that the predicted noise is a tensor that has the same size/resolution as the input image. So technically, the network takes in and outputs tensors of the same shape. What type of neural network can we use for this? What is typically used here is very similar to that of an [Autoencoder](https://en.wikipedia.org/wiki/Autoencoder), which you may remember from typical "intro to deep learning" tutorials. Autoencoders have a so-called "bottleneck" layer in between the encoder and decoder. The encoder first encodes an image into a smaller hidden representation called the "bottleneck", and the decoder then decodes that hidden representation back into an actual image. This forces the network to only keep the most important information in the bottleneck layer. In terms of architecture, the DDPM authors went for a **U-Net**, introduced by ([Ronneberger et al., 2015](https://arxiv.org/abs/1505.04597)) (which, at the time, achieved state-of-the-art results for medical image segmentation). 
This network, like any autoencoder, consists of a bottleneck in the middle that makes sure the network learns only the most important information. Importantly, it introduced residual connections between the encoder and decoder, greatly improving gradient flow (inspired by ResNet in [He et al., 2015](https://arxiv.org/abs/1512.03385)). <p align="center"> <img src="assets/78_annotated-diffusion/unet_architecture.jpg" width="400" /> </p> As can be seen, a U-Net model first downsamples the input (i.e. makes the input smaller in terms of spatial resolution), after which upsampling is performed. Below, we implement this network, step-by-step. ### Network helpers First, we define some helper functions and classes which will be used when implementing the neural network. Importantly, we define a `Residual` module, which simply adds the input to the output of a particular function (in other words, adds a residual connection to a particular function). We also define aliases for the up- and downsampling operations. ```python def exists(x): return x is not None def default(val, d): if exists(val): return val return d() if isfunction(d) else d def num_to_groups(num, divisor): groups = num // divisor remainder = num % divisor arr = [divisor] * groups if remainder > 0: arr.append(remainder) return arr class Residual(nn.Module): def __init__(self, fn): super().__init__() self.fn = fn def forward(self, x, *args, **kwargs): return self.fn(x, *args, **kwargs) + x def Upsample(dim, dim_out=None): return nn.Sequential( nn.Upsample(scale_factor=2, mode="nearest"), nn.Conv2d(dim, default(dim_out, dim), 3, padding=1), ) def Downsample(dim, dim_out=None): # No More Strided Convolutions or Pooling return nn.Sequential( Rearrange("b c (h p1) (w p2) -> b (c p1 p2) h w", p1=2, p2=2), nn.Conv2d(dim * 4, default(dim_out, dim), 1), ) ``` ### Position embeddings As the parameters of the neural network are shared across time (noise level), the authors employ sinusoidal position embeddings to encode \\(t\\), inspired by the Transformer ([Vaswani et al., 2017](https://arxiv.org/abs/1706.03762)). This makes the neural network "know" at which particular time step (noise level) it is operating, for every image in a batch. The `SinusoidalPositionEmbeddings` module takes a tensor of shape `(batch_size, 1)` as input (i.e. the noise levels of several noisy images in a batch), and turns this into a tensor of shape `(batch_size, dim)`, with `dim` being the dimensionality of the position embeddings. This is then added to each residual block, as we will see further. ```python class SinusoidalPositionEmbeddings(nn.Module): def __init__(self, dim): super().__init__() self.dim = dim def forward(self, time): device = time.device half_dim = self.dim // 2 embeddings = math.log(10000) / (half_dim - 1) embeddings = torch.exp(torch.arange(half_dim, device=device) * -embeddings) embeddings = time[:, None] * embeddings[None, :] embeddings = torch.cat((embeddings.sin(), embeddings.cos()), dim=-1) return embeddings ``` ### ResNet block Next, we define the core building block of the U-Net model. The DDPM authors employed a Wide ResNet block ([Zagoruyko et al., 2016](https://arxiv.org/abs/1605.07146)), but Phil Wang has replaced the standard convolutional layer by a "weight standardized" version, which works better in combination with group normalization (see ([Kolesnikov et al., 2019](https://arxiv.org/abs/1912.11370)) for details). 
```python class WeightStandardizedConv2d(nn.Conv2d): """ https://arxiv.org/abs/1903.10520 weight standardization purportedly works synergistically with group normalization """ def forward(self, x): eps = 1e-5 if x.dtype == torch.float32 else 1e-3 weight = self.weight mean = reduce(weight, "o ... -> o 1 1 1", "mean") var = reduce(weight, "o ... -> o 1 1 1", partial(torch.var, unbiased=False)) normalized_weight = (weight - mean) / (var + eps).rsqrt() return F.conv2d( x, normalized_weight, self.bias, self.stride, self.padding, self.dilation, self.groups, ) class Block(nn.Module): def __init__(self, dim, dim_out, groups=8): super().__init__() self.proj = WeightStandardizedConv2d(dim, dim_out, 3, padding=1) self.norm = nn.GroupNorm(groups, dim_out) self.act = nn.SiLU() def forward(self, x, scale_shift=None): x = self.proj(x) x = self.norm(x) if exists(scale_shift): scale, shift = scale_shift x = x * (scale + 1) + shift x = self.act(x) return x class ResnetBlock(nn.Module): """https://arxiv.org/abs/1512.03385""" def __init__(self, dim, dim_out, *, time_emb_dim=None, groups=8): super().__init__() self.mlp = ( nn.Sequential(nn.SiLU(), nn.Linear(time_emb_dim, dim_out * 2)) if exists(time_emb_dim) else None ) self.block1 = Block(dim, dim_out, groups=groups) self.block2 = Block(dim_out, dim_out, groups=groups) self.res_conv = nn.Conv2d(dim, dim_out, 1) if dim != dim_out else nn.Identity() def forward(self, x, time_emb=None): scale_shift = None if exists(self.mlp) and exists(time_emb): time_emb = self.mlp(time_emb) time_emb = rearrange(time_emb, "b c -> b c 1 1") scale_shift = time_emb.chunk(2, dim=1) h = self.block1(x, scale_shift=scale_shift) h = self.block2(h) return h + self.res_conv(x) ``` ### Attention module Next, we define the attention module, which the DDPM authors added in between the convolutional blocks. Attention is the building block of the famous Transformer architecture ([Vaswani et al., 2017](https://arxiv.org/abs/1706.03762)), which has shown great success in various domains of AI, from NLP and vision to [protein folding](https://www.deepmind.com/blog/alphafold-a-solution-to-a-50-year-old-grand-challenge-in-biology). Phil Wang employs 2 variants of attention: one is regular multi-head self-attention (as used in the Transformer), the other one is a [linear attention variant](https://github.com/lucidrains/linear-attention-transformer) ([Shen et al., 2018](https://arxiv.org/abs/1812.01243)), whose time- and memory requirements scale linear in the sequence length, as opposed to quadratic for regular attention. For an extensive explanation of the attention mechanism, we refer the reader to Jay Allamar's [wonderful blog post](https://jalammar.github.io/illustrated-transformer/). 
```python class Attention(nn.Module): def __init__(self, dim, heads=4, dim_head=32): super().__init__() self.scale = dim_head**-0.5 self.heads = heads hidden_dim = dim_head * heads self.to_qkv = nn.Conv2d(dim, hidden_dim * 3, 1, bias=False) self.to_out = nn.Conv2d(hidden_dim, dim, 1) def forward(self, x): b, c, h, w = x.shape qkv = self.to_qkv(x).chunk(3, dim=1) q, k, v = map( lambda t: rearrange(t, "b (h c) x y -> b h c (x y)", h=self.heads), qkv ) q = q * self.scale sim = einsum("b h d i, b h d j -> b h i j", q, k) sim = sim - sim.amax(dim=-1, keepdim=True).detach() attn = sim.softmax(dim=-1) out = einsum("b h i j, b h d j -> b h i d", attn, v) out = rearrange(out, "b h (x y) d -> b (h d) x y", x=h, y=w) return self.to_out(out) class LinearAttention(nn.Module): def __init__(self, dim, heads=4, dim_head=32): super().__init__() self.scale = dim_head**-0.5 self.heads = heads hidden_dim = dim_head * heads self.to_qkv = nn.Conv2d(dim, hidden_dim * 3, 1, bias=False) self.to_out = nn.Sequential(nn.Conv2d(hidden_dim, dim, 1), nn.GroupNorm(1, dim)) def forward(self, x): b, c, h, w = x.shape qkv = self.to_qkv(x).chunk(3, dim=1) q, k, v = map( lambda t: rearrange(t, "b (h c) x y -> b h c (x y)", h=self.heads), qkv ) q = q.softmax(dim=-2) k = k.softmax(dim=-1) q = q * self.scale context = torch.einsum("b h d n, b h e n -> b h d e", k, v) out = torch.einsum("b h d e, b h d n -> b h e n", context, q) out = rearrange(out, "b h c (x y) -> b (h c) x y", h=self.heads, x=h, y=w) return self.to_out(out) ``` ### Group normalization The DDPM authors interleave the convolutional/attention layers of the U-Net with group normalization ([Wu et al., 2018](https://arxiv.org/abs/1803.08494)). Below, we define a `PreNorm` class, which will be used to apply groupnorm before the attention layer, as we'll see further. Note that there's been a [debate](https://tnq177.github.io/data/transformers_without_tears.pdf) about whether to apply normalization before or after attention in Transformers. ```python class PreNorm(nn.Module): def __init__(self, dim, fn): super().__init__() self.fn = fn self.norm = nn.GroupNorm(1, dim) def forward(self, x): x = self.norm(x) return self.fn(x) ``` ### Conditional U-Net Now that we've defined all building blocks (position embeddings, ResNet blocks, attention and group normalization), it's time to define the entire neural network. Recall that the job of the network \\(\mathbf{\epsilon}_\theta(\mathbf{x}_t, t)\\) is to take in a batch of noisy images and their respective noise levels, and output the noise added to the input. More formally: - the network takes a batch of noisy images of shape `(batch_size, num_channels, height, width)` and a batch of noise levels of shape `(batch_size, 1)` as input, and returns a tensor of shape `(batch_size, num_channels, height, width)` The network is built up as follows: * first, a convolutional layer is applied on the batch of noisy images, and position embeddings are computed for the noise levels * next, a sequence of downsampling stages are applied. Each downsampling stage consists of 2 ResNet blocks + groupnorm + attention + residual connection + a downsample operation * at the middle of the network, again ResNet blocks are applied, interleaved with attention * next, a sequence of upsampling stages are applied. Each upsampling stage consists of 2 ResNet blocks + groupnorm + attention + residual connection + an upsample operation * finally, a ResNet block followed by a convolutional layer is applied. 
Ultimately, neural networks stack up layers as if they were lego blocks (but it's important to [understand how they work](http://karpathy.github.io/2019/04/25/recipe/)). ```python class Unet(nn.Module): def __init__( self, dim, init_dim=None, out_dim=None, dim_mults=(1, 2, 4, 8), channels=3, self_condition=False, resnet_block_groups=4, ): super().__init__() # determine dimensions self.channels = channels self.self_condition = self_condition input_channels = channels * (2 if self_condition else 1) init_dim = default(init_dim, dim) self.init_conv = nn.Conv2d(input_channels, init_dim, 1, padding=0) # changed to 1 and 0 from 7,3 dims = [init_dim, *map(lambda m: dim * m, dim_mults)] in_out = list(zip(dims[:-1], dims[1:])) block_klass = partial(ResnetBlock, groups=resnet_block_groups) # time embeddings time_dim = dim * 4 self.time_mlp = nn.Sequential( SinusoidalPositionEmbeddings(dim), nn.Linear(dim, time_dim), nn.GELU(), nn.Linear(time_dim, time_dim), ) # layers self.downs = nn.ModuleList([]) self.ups = nn.ModuleList([]) num_resolutions = len(in_out) for ind, (dim_in, dim_out) in enumerate(in_out): is_last = ind >= (num_resolutions - 1) self.downs.append( nn.ModuleList( [ block_klass(dim_in, dim_in, time_emb_dim=time_dim), block_klass(dim_in, dim_in, time_emb_dim=time_dim), Residual(PreNorm(dim_in, LinearAttention(dim_in))), Downsample(dim_in, dim_out) if not is_last else nn.Conv2d(dim_in, dim_out, 3, padding=1), ] ) ) mid_dim = dims[-1] self.mid_block1 = block_klass(mid_dim, mid_dim, time_emb_dim=time_dim) self.mid_attn = Residual(PreNorm(mid_dim, Attention(mid_dim))) self.mid_block2 = block_klass(mid_dim, mid_dim, time_emb_dim=time_dim) for ind, (dim_in, dim_out) in enumerate(reversed(in_out)): is_last = ind == (len(in_out) - 1) self.ups.append( nn.ModuleList( [ block_klass(dim_out + dim_in, dim_out, time_emb_dim=time_dim), block_klass(dim_out + dim_in, dim_out, time_emb_dim=time_dim), Residual(PreNorm(dim_out, LinearAttention(dim_out))), Upsample(dim_out, dim_in) if not is_last else nn.Conv2d(dim_out, dim_in, 3, padding=1), ] ) ) self.out_dim = default(out_dim, channels) self.final_res_block = block_klass(dim * 2, dim, time_emb_dim=time_dim) self.final_conv = nn.Conv2d(dim, self.out_dim, 1) def forward(self, x, time, x_self_cond=None): if self.self_condition: x_self_cond = default(x_self_cond, lambda: torch.zeros_like(x)) x = torch.cat((x_self_cond, x), dim=1) x = self.init_conv(x) r = x.clone() t = self.time_mlp(time) h = [] for block1, block2, attn, downsample in self.downs: x = block1(x, t) h.append(x) x = block2(x, t) x = attn(x) h.append(x) x = downsample(x) x = self.mid_block1(x, t) x = self.mid_attn(x) x = self.mid_block2(x, t) for block1, block2, attn, upsample in self.ups: x = torch.cat((x, h.pop()), dim=1) x = block1(x, t) x = torch.cat((x, h.pop()), dim=1) x = block2(x, t) x = attn(x) x = upsample(x) x = torch.cat((x, r), dim=1) x = self.final_res_block(x, t) return self.final_conv(x) ``` ## Defining the forward diffusion process The forward diffusion process gradually adds noise to an image from the real distribution, in a number of time steps \\(T\\). This happens according to a **variance schedule**. The original DDPM authors employed a linear schedule: > We set the forward process variances to constants increasing linearly from \\(\beta_1 = 10^{−4}\\) to \\(\beta_T = 0.02\\). However, it was shown in ([Nichol et al., 2021](https://arxiv.org/abs/2102.09672)) that better results can be achieved when employing a cosine schedule. 
Below, we define various schedules for the \\(T\\) timesteps (we'll choose one later on). ```python def cosine_beta_schedule(timesteps, s=0.008): """ cosine schedule as proposed in https://arxiv.org/abs/2102.09672 """ steps = timesteps + 1 x = torch.linspace(0, timesteps, steps) alphas_cumprod = torch.cos(((x / timesteps) + s) / (1 + s) * torch.pi * 0.5) ** 2 alphas_cumprod = alphas_cumprod / alphas_cumprod[0] betas = 1 - (alphas_cumprod[1:] / alphas_cumprod[:-1]) return torch.clip(betas, 0.0001, 0.9999) def linear_beta_schedule(timesteps): beta_start = 0.0001 beta_end = 0.02 return torch.linspace(beta_start, beta_end, timesteps) def quadratic_beta_schedule(timesteps): beta_start = 0.0001 beta_end = 0.02 return torch.linspace(beta_start**0.5, beta_end**0.5, timesteps) ** 2 def sigmoid_beta_schedule(timesteps): beta_start = 0.0001 beta_end = 0.02 betas = torch.linspace(-6, 6, timesteps) return torch.sigmoid(betas) * (beta_end - beta_start) + beta_start ``` To start with, let's use the linear schedule for \\(T=300\\) time steps and define the various variables from the \\(\beta_t\\) which we will need, such as the cumulative product of the variances \\(\bar{\alpha}_t\\). Each of the variables below are just 1-dimensional tensors, storing values from \\(t\\) to \\(T\\). Importantly, we also define an `extract` function, which will allow us to extract the appropriate \\(t\\) index for a batch of indices. ```python timesteps = 300 # define beta schedule betas = linear_beta_schedule(timesteps=timesteps) # define alphas alphas = 1. - betas alphas_cumprod = torch.cumprod(alphas, axis=0) alphas_cumprod_prev = F.pad(alphas_cumprod[:-1], (1, 0), value=1.0) sqrt_recip_alphas = torch.sqrt(1.0 / alphas) # calculations for diffusion q(x_t | x_{t-1}) and others sqrt_alphas_cumprod = torch.sqrt(alphas_cumprod) sqrt_one_minus_alphas_cumprod = torch.sqrt(1. - alphas_cumprod) # calculations for posterior q(x_{t-1} | x_t, x_0) posterior_variance = betas * (1. - alphas_cumprod_prev) / (1. - alphas_cumprod) def extract(a, t, x_shape): batch_size = t.shape[0] out = a.gather(-1, t.cpu()) return out.reshape(batch_size, *((1,) * (len(x_shape) - 1))).to(t.device) ``` We'll illustrate with a cats image how noise is added at each time step of the diffusion process. ```python from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) # PIL image of shape HWC image ``` <img src="assets/78_annotated-diffusion/output_cats.jpeg" width="400" /> Noise is added to PyTorch tensors, rather than Pillow Images. We'll first define image transformations that allow us to go from a PIL image to a PyTorch tensor (on which we can add the noise), and vice versa. These transformations are fairly simple: we first normalize images by dividing by \\(255\\) (such that they are in the \\([0,1]\\) range), and then make sure they are in the \\([-1, 1]\\) range. From the DPPM paper: > We assume that image data consists of integers in \\(\{0, 1, ... , 255\}\\) scaled linearly to \\([−1, 1]\\). This ensures that the neural network reverse process operates on consistently scaled inputs starting from the standard normal prior \\(p(\mathbf{x}_T )\\). 
```python from torchvision.transforms import Compose, ToTensor, Lambda, ToPILImage, CenterCrop, Resize image_size = 128 transform = Compose([ Resize(image_size), CenterCrop(image_size), ToTensor(), # turn into torch Tensor of shape CHW, divide by 255 Lambda(lambda t: (t * 2) - 1), ]) x_start = transform(image).unsqueeze(0) x_start.shape ``` <div class="output stream stdout"> Output: ---------------------------------------------------------------------------------------------------- torch.Size([1, 3, 128, 128]) </div> We also define the reverse transform, which takes in a PyTorch tensor containing values in \\([-1, 1]\\) and turn them back into a PIL image: ```python import numpy as np reverse_transform = Compose([ Lambda(lambda t: (t + 1) / 2), Lambda(lambda t: t.permute(1, 2, 0)), # CHW to HWC Lambda(lambda t: t * 255.), Lambda(lambda t: t.numpy().astype(np.uint8)), ToPILImage(), ]) ``` Let's verify this: ```python reverse_transform(x_start.squeeze()) ``` <img src="assets/78_annotated-diffusion/output_cats_verify.png" width="100" /> We can now define the forward diffusion process as in the paper: ```python # forward diffusion (using the nice property) def q_sample(x_start, t, noise=None): if noise is None: noise = torch.randn_like(x_start) sqrt_alphas_cumprod_t = extract(sqrt_alphas_cumprod, t, x_start.shape) sqrt_one_minus_alphas_cumprod_t = extract( sqrt_one_minus_alphas_cumprod, t, x_start.shape ) return sqrt_alphas_cumprod_t * x_start + sqrt_one_minus_alphas_cumprod_t * noise ``` Let's test it on a particular time step: ```python def get_noisy_image(x_start, t): # add noise x_noisy = q_sample(x_start, t=t) # turn back into PIL image noisy_image = reverse_transform(x_noisy.squeeze()) return noisy_image ``` ```python # take time step t = torch.tensor([40]) get_noisy_image(x_start, t) ``` <img src="assets/78_annotated-diffusion/output_cats_noisy.png" width="100" /> Let's visualize this for various time steps: ```python import matplotlib.pyplot as plt # use seed for reproducability torch.manual_seed(0) # source: https://pytorch.org/vision/stable/auto_examples/plot_transforms.html#sphx-glr-auto-examples-plot-transforms-py def plot(imgs, with_orig=False, row_title=None, **imshow_kwargs): if not isinstance(imgs[0], list): # Make a 2d grid even if there's just 1 row imgs = [imgs] num_rows = len(imgs) num_cols = len(imgs[0]) + with_orig fig, axs = plt.subplots(figsize=(200,200), nrows=num_rows, ncols=num_cols, squeeze=False) for row_idx, row in enumerate(imgs): row = [image] + row if with_orig else row for col_idx, img in enumerate(row): ax = axs[row_idx, col_idx] ax.imshow(np.asarray(img), **imshow_kwargs) ax.set(xticklabels=[], yticklabels=[], xticks=[], yticks=[]) if with_orig: axs[0, 0].set(title='Original image') axs[0, 0].title.set_size(8) if row_title is not None: for row_idx in range(num_rows): axs[row_idx, 0].set(ylabel=row_title[row_idx]) plt.tight_layout() ``` ```python plot([get_noisy_image(x_start, torch.tensor([t])) for t in [0, 50, 100, 150, 199]]) ``` <img src="assets/78_annotated-diffusion/output_cats_noisy_multiple.png" width="800" /> This means that we can now define the loss function given the model as follows: ```python def p_losses(denoise_model, x_start, t, noise=None, loss_type="l1"): if noise is None: noise = torch.randn_like(x_start) x_noisy = q_sample(x_start=x_start, t=t, noise=noise) predicted_noise = denoise_model(x_noisy, t) if loss_type == 'l1': loss = F.l1_loss(noise, predicted_noise) elif loss_type == 'l2': loss = F.mse_loss(noise, predicted_noise) elif 
loss_type == "huber": loss = F.smooth_l1_loss(noise, predicted_noise) else: raise NotImplementedError() return loss ``` The `denoise_model` will be our U-Net defined above. We'll employ the Huber loss between the true and the predicted noise. ## Define a PyTorch Dataset + DataLoader Here we define a regular [PyTorch Dataset](https://pytorch.org/tutorials/beginner/basics/data_tutorial.html). The dataset simply consists of images from a real dataset, like Fashion-MNIST, CIFAR-10 or ImageNet, scaled linearly to \\([−1, 1]\\). Each image is resized to the same size. Interesting to note is that images are also randomly horizontally flipped. From the paper: > We used random horizontal flips during training for CIFAR10; we tried training both with and without flips, and found flips to improve sample quality slightly. Here we use the 🤗 [Datasets library](https://huggingface.co/docs/datasets/index) to easily load the Fashion MNIST dataset from the [hub](https://huggingface.co/datasets/fashion_mnist). This dataset consists of images which already have the same resolution, namely 28x28. ```python from datasets import load_dataset # load dataset from the hub dataset = load_dataset("fashion_mnist") image_size = 28 channels = 1 batch_size = 128 ``` Next, we define a function which we'll apply on-the-fly on the entire dataset. We use the `with_transform` [functionality](https://huggingface.co/docs/datasets/v2.2.1/en/package_reference/main_classes#datasets.Dataset.with_transform) for that. The function just applies some basic image preprocessing: random horizontal flips, rescaling and finally make them have values in the \\([-1,1]\\) range. ```python from torchvision import transforms from torch.utils.data import DataLoader # define image transformations (e.g. using torchvision) transform = Compose([ transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Lambda(lambda t: (t * 2) - 1) ]) # define function def transforms(examples): examples["pixel_values"] = [transform(image.convert("L")) for image in examples["image"]] del examples["image"] return examples transformed_dataset = dataset.with_transform(transforms).remove_columns("label") # create dataloader dataloader = DataLoader(transformed_dataset["train"], batch_size=batch_size, shuffle=True) ``` ```python batch = next(iter(dataloader)) print(batch.keys()) ``` <div class="output stream stdout"> Output: ---------------------------------------------------------------------------------------------------- dict_keys(['pixel_values']) </div> ## Sampling As we'll sample from the model during training (in order to track progress), we define the code for that below. Sampling is summarized in the paper as Algorithm 2: <img src="assets/78_annotated-diffusion/sampling.png" width="500" /> Generating new images from a diffusion model happens by reversing the diffusion process: we start from \\(T\\), where we sample pure noise from a Gaussian distribution, and then use our neural network to gradually denoise it (using the conditional probability it has learned), until we end up at time step \\(t = 0\\). As shown above, we can derive a slighly less denoised image \\(\mathbf{x}_{t-1 }\\) by plugging in the reparametrization of the mean, using our noise predictor. Remember that the variance is known ahead of time. Ideally, we end up with an image that looks like it came from the real data distribution. The code below implements this. 
```python @torch.no_grad() def p_sample(model, x, t, t_index): betas_t = extract(betas, t, x.shape) sqrt_one_minus_alphas_cumprod_t = extract( sqrt_one_minus_alphas_cumprod, t, x.shape ) sqrt_recip_alphas_t = extract(sqrt_recip_alphas, t, x.shape) # Equation 11 in the paper # Use our model (noise predictor) to predict the mean model_mean = sqrt_recip_alphas_t * ( x - betas_t * model(x, t) / sqrt_one_minus_alphas_cumprod_t ) if t_index == 0: return model_mean else: posterior_variance_t = extract(posterior_variance, t, x.shape) noise = torch.randn_like(x) # Algorithm 2 line 4: return model_mean + torch.sqrt(posterior_variance_t) * noise # Algorithm 2 (including returning all images) @torch.no_grad() def p_sample_loop(model, shape): device = next(model.parameters()).device b = shape[0] # start from pure noise (for each example in the batch) img = torch.randn(shape, device=device) imgs = [] for i in tqdm(reversed(range(0, timesteps)), desc='sampling loop time step', total=timesteps): img = p_sample(model, img, torch.full((b,), i, device=device, dtype=torch.long), i) imgs.append(img.cpu().numpy()) return imgs @torch.no_grad() def sample(model, image_size, batch_size=16, channels=3): return p_sample_loop(model, shape=(batch_size, channels, image_size, image_size)) ``` Note that the code above is a simplified version of the original implementation. We found our simplification (which is in line with Algorithm 2 in the paper) to work just as well as the [original, more complex implementation](https://github.com/hojonathanho/diffusion/blob/master/diffusion_tf/diffusion_utils.py), which employs [clipping](https://github.com/hojonathanho/diffusion/issues/5). ## Train the model Next, we train the model in regular PyTorch fashion. We also define some logic to periodically save generated images, using the `sample` method defined above. ```python from pathlib import Path def num_to_groups(num, divisor): groups = num // divisor remainder = num % divisor arr = [divisor] * groups if remainder > 0: arr.append(remainder) return arr results_folder = Path("./results") results_folder.mkdir(exist_ok = True) save_and_sample_every = 1000 ``` Below, we define the model, and move it to the GPU. We also define a standard optimizer (Adam). ```python from torch.optim import Adam device = "cuda" if torch.cuda.is_available() else "cpu" model = Unet( dim=image_size, channels=channels, dim_mults=(1, 2, 4,) ) model.to(device) optimizer = Adam(model.parameters(), lr=1e-3) ``` Let's start training! 
```python from torchvision.utils import save_image epochs = 6 for epoch in range(epochs): for step, batch in enumerate(dataloader): optimizer.zero_grad() batch_size = batch["pixel_values"].shape[0] batch = batch["pixel_values"].to(device) # Algorithm 1 line 3: sample t uniformally for every example in the batch t = torch.randint(0, timesteps, (batch_size,), device=device).long() loss = p_losses(model, batch, t, loss_type="huber") if step % 100 == 0: print("Loss:", loss.item()) loss.backward() optimizer.step() # save generated images if step != 0 and step % save_and_sample_every == 0: milestone = step // save_and_sample_every batches = num_to_groups(4, batch_size) all_images_list = list(map(lambda n: sample(model, batch_size=n, channels=channels), batches)) all_images = torch.cat(all_images_list, dim=0) all_images = (all_images + 1) * 0.5 save_image(all_images, str(results_folder / f'sample-{milestone}.png'), nrow = 6) ``` <div class="output stream stdout"> Output: ---------------------------------------------------------------------------------------------------- Loss: 0.46477368474006653 Loss: 0.12143351882696152 Loss: 0.08106148988008499 Loss: 0.0801810547709465 Loss: 0.06122320517897606 Loss: 0.06310459971427917 Loss: 0.05681884288787842 Loss: 0.05729678273200989 Loss: 0.05497899278998375 Loss: 0.04439849033951759 Loss: 0.05415581166744232 Loss: 0.06020551547408104 Loss: 0.046830907464027405 Loss: 0.051029372960329056 Loss: 0.0478244312107563 Loss: 0.046767622232437134 Loss: 0.04305662214756012 Loss: 0.05216279625892639 Loss: 0.04748568311333656 Loss: 0.05107741802930832 Loss: 0.04588869959115982 Loss: 0.043014321476221085 Loss: 0.046371955424547195 Loss: 0.04952816292643547 Loss: 0.04472338408231735 </div> ## Sampling (inference) To sample from the model, we can just use our sample function defined above: ```python # sample 64 images samples = sample(model, image_size=image_size, batch_size=64, channels=channels) # show a random one random_index = 5 plt.imshow(samples[-1][random_index].reshape(image_size, image_size, channels), cmap="gray") ``` <img src="assets/78_annotated-diffusion/output.png" width="300" /> Seems like the model is capable of generating a nice T-shirt! Keep in mind that the dataset we trained on is pretty low-resolution (28x28). We can also create a gif of the denoising process: ```python import matplotlib.animation as animation random_index = 53 fig = plt.figure() ims = [] for i in range(timesteps): im = plt.imshow(samples[i][random_index].reshape(image_size, image_size, channels), cmap="gray", animated=True) ims.append([im]) animate = animation.ArtistAnimation(fig, ims, interval=50, blit=True, repeat_delay=1000) animate.save('diffusion.gif') plt.show() ``` <img src=" assets/78_annotated-diffusion/diffusion-sweater.gif" width="300" /> # Follow-up reads Note that the DDPM paper showed that diffusion models are a promising direction for (un)conditional image generation. This has since then (immensely) been improved, most notably for text-conditional image generation. 
Below, we list some important (but far from exhaustive) follow-up works: - Improved Denoising Diffusion Probabilistic Models ([Nichol et al., 2021](https://arxiv.org/abs/2102.09672)): finds that learning the variance of the conditional distribution (besides the mean) helps in improving performance - Cascaded Diffusion Models for High Fidelity Image Generation ([Ho et al., 2021](https://arxiv.org/abs/2106.15282)): introduces cascaded diffusion, which comprises a pipeline of multiple diffusion models that generate images of increasing resolution for high-fidelity image synthesis - Diffusion Models Beat GANs on Image Synthesis ([Dhariwal et al., 2021](https://arxiv.org/abs/2105.05233)): show that diffusion models can achieve image sample quality superior to the current state-of-the-art generative models by improving the U-Net architecture, as well as introducing classifier guidance - Classifier-Free Diffusion Guidance ([Ho et al., 2021](https://openreview.net/pdf?id=qw8AKxfYbI)): shows that you don't need a classifier for guiding a diffusion model by jointly training a conditional and an unconditional diffusion model with a single neural network - Hierarchical Text-Conditional Image Generation with CLIP Latents (DALL-E 2) ([Ramesh et al., 2022](https://cdn.openai.com/papers/dall-e-2.pdf)): uses a prior to turn a text caption into a CLIP image embedding, after which a diffusion model decodes it into an image - Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding (ImageGen) ([Saharia et al., 2022](https://arxiv.org/abs/2205.11487)): shows that combining a large pre-trained language model (e.g. T5) with cascaded diffusion works well for text-to-image synthesis Note that this list only includes important works until the time of writing, which is June 7th, 2022. For now, it seems that the main (perhaps only) disadvantage of diffusion models is that they require multiple forward passes to generate an image (which is not the case for generative models like GANs). However, there's [research going on](https://arxiv.org/abs/2204.13902) that enables high-fidelity generation in as few as 10 denoising steps.
Director of Machine Learning Insights [Part 3: Finance Edition]
britneymuller
June 14, 2022
ml-director-insights-3
community, research
https://huggingface.co/blog/ml-director-insights-3
# Director of Machine Learning Insights [Part 3: Finance Edition] _If you're interested in building ML solutions faster visit [hf.co/support](https://huggingface.co/support?utm_source=article&utm_medium=blog&utm_campaign=ml_director_insights_3) today!_ 👋 Welcome back to our Director of ML Insights Series, Finance Edition! If you missed earlier Editions you can find them here: - [Director of Machine Learning Insights [Part 1]](https://huggingface.co/blog/ml-director-insights) - [Director of Machine Learning Insights [Part 2 : SaaS Edition]](https://huggingface.co/blog/ml-director-insights-2) Machine Learning Directors within finance face the unique challenges of navigating legacy systems, deploying interpretable models, and maintaining customer trust, all while being highly regulated (with lots of government oversight). Each of these challenges requires deep industry knowledge and technical expertise to pilot effectively. The following experts from U.S. Bank, the Royal Bank of Canada, Moody's Analytics and ex Research Scientist at Bloomberg AI all help uncover unique gems within the Machine Learning x Finance sector. You’ll hear from a juniors Greek National Tennis Champion, a published author with over 100+ patents, and a cycle polo player who regularly played at the world’s oldest polo club (the Calcutta Polo Club). All turned financial ML experts. 🚀 Buckle up Goose, here are the top insights from financial ML Mavericks: _Disclaimer: All views are from individuals and not from any past or current employers._ <img class="mx-auto" style="float: left;" padding="5px" width="200" src="/blog/assets/78_ml_director_insights/Ioannis-Bakagiannis.jpeg"></a> ### [Ioannis Bakagiannis](https://www.linkedin.com/in/bakagiannisioannis//) - Director of Machine Learning, Marketing Science at [RBC](https://www.rbcroyalbank.com/personal.html) **Background:** Passionate Machine Learning Expert with experience in delivering scalable, production-grade, and state-of-the-art Machine Learning solutions. Ioannis is also the Host of [Bak Up Podcast](https://www.youtube.com/channel/UCHK-YMcyzw2TwKonKoFtiug) and seeks to make an impact on the world through AI. **Fun Fact:** Ioannis was a juniors Greek national tennis champion.🏆 **RBC:** The world’s leading organizations look to RBC Capital Markets as an innovative, trusted partner in capital markets, banking and finance. #### **1. How has ML made a positive impact on finance?** We all know that ML is a disrupting force in all industries while continuously creating new business opportunities. Many financial products have been created or altered due to ML such as personalized insurance and targeted marketing. Disruptions and profit are great but my favorite financial impact has been the ML-initiated conversation around trust in financial decision making. In the past, financial decisions like loan approval, rate determination, portfolio management, etc. have all been done by humans with relevant expertise. Essentially, people trusted “other people” or “experts” for financial decisions (and often without question). When ML attempted to automate that decision-making process, people asked, “Why should we trust a model?”. Models appeared to be black boxes of doom coming to replace honest working people. But that argument has initiated the conversation of trust in financial decision-making and ethics, regardless of who or what is involved. As an industry, we are still defining this conversation but with more transparency, thanks to ML in finance. #### **2. 
What are the biggest ML challenges within finance?** I can’t speak for companies but established financial institutions experience one continuous struggle, like all long-lived organizations: Legacy Systems. Financial organizations have been around for a while and they have evolved over time but today they have found themselves somehow as ‘tech companies’. Such organizations need to be part of cutting-edge technologies so they can compete with newcomer rivals but at the same time maintain the robustness that makes our financial world work. This internal battle is skewed by the risk appetite of the institutions. Financial risk increases linearly (usually) with the scale of the solution you provide since we are talking about money. But on top of that, there are other forms of risk that a system failure will incur such as Regulatory and Reputational risk. This compounded risk along with the complexity of migrating a huge, mature system to a new tech stack is, at least in my opinion, the biggest challenge in adopting cutting-edge technologies such as ML. #### **3. What’s a common mistake you see people make trying to integrate ML into financial applications?** ML, even with all its recent attention, is still a relatively new field in software engineering. The deployment of ML applications is often not a well-defined process. The artist/engineer can deliver an ML application but the world around it is still not familiar with the technical process. At that intersection of technical and non-technical worlds, I have seen the most “mistakes”. It is hard to optimize for the right Business and ML KPIs and define the right objective function or the desired labels. I have seen applications go to waste due to undesired prediction windows or because they predict the wrong labels. The worst outcome comes when the misalignment is not uncovered in the development step and makes it into production. Then applications can create unwanted user behavior or simply measure/predict the wrong thing. Unfortunately, we tend to equip the ML teams with tools and computing but not with solid processes and communication buffers. And mistakes at the beginning of an ill-defined process grow with every step. #### **4. What excites you most about the future of ML?** It is difficult not to get excited with everything new that comes out of ML. The field changes so frequently that it’s refreshing. Currently, we are good at solving individual problems: computer vision, the next word prediction, data point generation, etc, but we haven’t been able to address multiple problems at the same time. I’m excited to see how we can model such behaviors in mathematical expressions that currently seem to contradict each other. Hope we get there soon! <img class="mx-auto" style="float: left;" padding="5px" width="200" src="/blog/assets/78_ml_director_insights/Debanjan-Mahata.jpeg"></a> ### [Debanjan Mahata](https://www.linkedin.com/in/debanjanmahata/) - Director of AI & ML at [Moody's Analytics](https://www.moodysanalytics.com/) / Ex Research Scientist @ Bloomberg AI **Background:** Debanjan is Director of Machine Learning in the AI Team at Moody's Analytics and also serves as an Adjunct Faculty at IIIT-Delhi, India. He is an active researcher and is currently interested in various information extraction problems and domain adaptation techniques in NLP. He has a track record of formulating and applying machine learning to various use cases. 
He actively participates in the program committee of different top tier conference venues in machine learning. **Fun Fact:** Debanjan played cycle polo at the world's oldest polo club (the Calcutta Polo Club) when he was a kid. **Moody's Analytics:** Provides financial intelligence and analytical tools supporting our clients’ growth, efficiency and risk management objectives. #### **1. How has ML made a positive impact on finance?** Machine learning (ML) has made a significant positive impact in the finance industry in many ways. For example, it has helped in combating financial crimes and identifying fraudulent transactions. Machine learning has been a crucial tool in applications such as Know Your Customer (KYC) screening and Anti Money Laundering (AML). With an increase in AML fines by financial institutions worldwide, ever changing realm of sanctions, and greater complexity in money laundering, banks are increasing their investments in KYC and AML technologies, many of which are powered by ML. ML is revolutionizing multiple facets of this sector, especially bringing huge efficiency gains by automating various processes and assisting analysts to do their jobs more efficiently and accurately. One of the key useful traits of ML is that it can learn from and find hidden patterns in large volumes of data. With a focus on digitization, the financial sector is producing digital data more than ever, which makes it challenging for humans to comprehend, process and make decisions. ML is enabling humans in making sense of the data, glean information from them, and make well-informed decisions. At Moody's Analytics, we are using ML and helping our clients to better manage risk and meet business and industry demands. #### **2. What are the biggest ML challenges within finance?** 1. Reducing the False Positives without impacting the True Positives - A number of applications using ML in the regtech space rely on alerts. With strict regulatory measures and big financial implications of a wrong decision, human investigations can be time consuming and demanding. ML certainly helps in these scenarios in assisting human analysts to arrive at the right decisions. But if a ML system results in a lot of False Positives, it makes an analysts' job harder. Coming up with the right balance is an important challenge for ML in finance. 2. Gap between ML in basic research and education and ML in finance - Due to the regulated nature of the finance industry, we see limited exchange of ideas, data, and resources between the basic research and the finance sector, in the area of ML. There are few exceptions of course. This has led to scarcity of developing ML research that cater to the needs of the finance industry. I think more efforts must be made to decrease this gap. Otherwise, it will be increasingly challenging for the finance industry to leverage the latest ML advances. 3. Legacy infrastructure and databases - Many financial institutions still carry legacy infrastructure with them which makes it challenging for applying modern ML technologies and especially to integrate them. The finance industry would benefit from borrowing key ideas, culture and best practices from the tech industry when it comes to developing new infrastructure and enabling the ML professionals to innovate and make more impact. There are certainly challenges related to operationalizing ML across the industry. 4. Data and model governance - More data and model governance efforts need to be made in this sector. 
As we collect more and more data there should be more increase in the efforts to collect high quality data and the right data. Extra precautions need to be taken when ML models are involved in decisioning. Proper model governance measures and frameworks needs to be developed for different financial applications. A big challenge in this space is the lack of tools and technologies to operationalize data and model governance that are often needed for ML systems operating in this sector. More efforts should also be made in understanding bias in the data that train the models and how to make it a common practice to mitigate them in the overall process. Ensuring auditability, model and data lineage has been challenging for ML teams. 5. Explainability and Interpretability - Developing models which are highly accurate as well as interpretable and explainable is a big challenge. Modern deep learning models often outperform more traditional models; however, they lack explainability and interpretability. Most of the applications in finance demands explainability. Adopting the latest developments in this area and ensuring the development of interpretable models with explainable predictions have been a challenge. #### **3. What’s a common mistake you see people make trying to integrate ML into financial applications?** - Not understanding the data well and the raw predictions made by the ML models trained on them. - Not analyzing failed efforts and learning from them. - Not understanding the end application and how it will be used. - Trying complex techniques when simpler solutions might suffice. #### **4. What excites you most about the future of ML?** I am really blown away by how modern ML models have been learning rich representations of text, audio, images, videos, code and so on using self-supervised learning on large amounts of data. The future is certainly multi-modal and there has been consistent progress in understanding multi-modal content through the lens of ML. I think this is going to play a crucial role in the near future and I am excited by it and looking forward to being a part of these advances. <img class="mx-auto" style="float: left;" padding="5px" width="200" src="/blog/assets/78_ml_director_insights/Soumitri-Kolavennu.jpeg"></a> ### [Soumitri Kolavennu](https://www.linkedin.com/in/soumitri-kolavennu-2b47376/) - Artificial Intelligence Leader - Enterprise Analytics & AI at [U.S. Bank](https://www.usbank.com/index.html) **Background:** Soumitri Kolavennu is a SVP and head of AI research in U.S. Bank’s enterprise analytics and AI organization. He is currently focused on deep learning based NLP, vision & audio analytics, graph neural networks, sensor/knowledge fusion, time-series data with application to automation, information extraction, fraud detection and anti-money laundering in financial systems. Previously, he held the position of Fellows Leader & Senior Fellow, while working at Honeywell International Inc. where he had worked on IoT and control systems applied to smart home, smart cities, industrial and automotive systems. **Fun Fact:** Soumitri is a prolific inventor with 100+ issued U.S. patents in varied fields including control systems, Internet of Things, wireless networking, optimization, turbocharging, speech recognition, machine learning and AI. He also has around 30 publications, [authored a book](https://www.elsevier.com/books/industrial-wireless-sensor-networks/budampati/978-1-78242-230-3), book chapters and was elected member of NIST’s smart grid committee. **U.S. 
Bank:** The largest regional bank in the United States, U.S. Bank blends its relationship teams, branches and ATM networks with digital tools that allow customers to bank when, where and how they prefer. #### **1. How has ML made a positive impact on finance?** Machine learning and artificial intelligence have made a profound and positive impact on finance in general and banking in particular. There are many applications in banking where many factors (features) are to be considered when making a decision and ML has traditionally helped in this respect. For example, the credit score we all universally rely on is derived from a machine learning algorithm. Over the years ML has interestingly also helped remove human bias from decisions and provided a consistent algorithmic approach to decisions. For example, in credit card/loan underwriting and mortgages, modern AI techniques can take more factors (free form text, behavioral trends, social and financial interactions) into account for decisions while also detecting fraud. #### **2. What are the biggest ML challenges within finance?** The finance and banking industry brings a lot of challenges due to the nature of the industry. First of all, it is a highly regulated industry with government oversight in many aspects. The data that is often used is very personal and identifiable data (social security numbers, bank statements, tax records, etc). Hence there is a lot of care taken to create machine learning and AI models that are private and unbiased. Many government regulations require any models to be explainable. For example, if a loan is denied, there is a fundamental need to explain why it is denied. The data, on the other hand, which may be scarce in other industries, is abundant in the financial industry. (Mortgage records have to be kept for 30 years for example). The current trend for digitization of data and the explosion of more sophisticated AI/ML techniques has created a unique opportunity for the application of these advances. #### **3. What’s a common mistake you see people make trying to integrate ML into financial applications?** One of the most common mistakes people make is to use a model or a technique without understanding the underlying working principles, advantages, and shortcomings of the model. People tend to think of AI/ML models as a ‘black box’. In finance, it is especially important to understand the model and to be able to explain its output. Another mistake is not comprehensively testing the model on a representative input space. Model performance, validation, inference capacities, and model monitoring (retraining intervals) are all important to consider when choosing a model. #### **4. What excites you most about the future of ML?** Now is a great time to be in applied ML and AI. The techniques in AI/ML are certainly refining if not redefining many scientific disciplines. I am very excited about how all the developments that are currently underway will reshape the future. When I first started working in NLP, I was in awe of the ability of neural networks/language models to generate a number or vector (which we now call embeddings) that represents a word, a sentence with the associated grammar, or even a paragraph. We are constantly in search of more and more appropriate and contextual embeddings. We have advanced far beyond a “simple” embedding for a text to “multimodal” embeddings that are even more awe-inspiring to me.
I am most excited and look forward to generating and playing with these new embeddings, enabling more exciting applications in the future. --- 🤗 Thank you for joining us in this third installment of ML Director Insights. Stay tuned for more insights from ML Directors. Big thanks to Soumitri Kolavennu, Debanjan Mahata, and Ioannis Bakagiannis for their brilliant insights and participation in this piece. We look forward to watching your continued success and will be cheering you on each step of the way. 🎉 If you're interested in accelerating your ML roadmap with Hugging Face Experts, please visit [hf.co/support](https://huggingface.co/support?utm_source=article&utm_medium=blog&utm_campaign=ml_director_insights_3) to learn more.
Intel and Hugging Face Partner to Democratize Machine Learning Hardware Acceleration
juliensimon
June 15, 2022
intel
hardware, intel, guide
https://huggingface.co/blog/intel
# Intel and Hugging Face Partner to Democratize Machine Learning Hardware Acceleration ![image](assets/80_intel/01.png) The mission of Hugging Face is to democratize good machine learning and maximize its positive impact across industries and society. Not only do we strive to advance Transformer models, but we also work hard on simplifying their adoption. Today, we're excited to announce that Intel has officially joined our [Hardware Partner Program](https://huggingface.co/hardware). Thanks to the [Optimum](https://github.com/huggingface/optimum-intel) open-source library, Intel and Hugging Face will collaborate to build state-of-the-art hardware acceleration to train, fine-tune and predict with Transformers. Transformer models are increasingly large and complex, which can cause production challenges for latency-sensitive applications like search or chatbots. Unfortunately, latency optimization has long been a hard problem for Machine Learning (ML) practitioners. Even with deep knowledge of the underlying framework and hardware platform, it takes a lot of trial and error to figure out which knobs and features to leverage. Intel provides a complete foundation for accelerated AI with the Intel Xeon Scalable CPU platform and a wide range of hardware-optimized AI software tools, frameworks, and libraries. Thus, it made perfect sense for Hugging Face and Intel to join forces and collaborate on building powerful model optimization tools that let users achieve the best performance, scale, and productivity on Intel platforms. “*We’re excited to work with Hugging Face to bring the latest innovations of Intel Xeon hardware and Intel AI software to the Transformers community, through open source integration and integrated developer experiences.*”, says Wei Li, Intel Vice President & General Manager, AI and Analytics. In recent months, Intel and Hugging Face collaborated on scaling Transformer workloads. We published detailed tuning guides and benchmarks on inference ([part 1](https://huggingface.co/blog/bert-cpu-scaling-part-1), [part 2](https://huggingface.co/blog/bert-cpu-scaling-part-2)) and achieved [single-digit millisecond latency](https://huggingface.co/blog/infinity-cpu-performance) for DistilBERT on the latest Intel Xeon Ice Lake CPUs. On the training side, we added support for [Habana Gaudi](https://huggingface.co/blog/getting-started-habana) accelerators, which deliver up to 40% better price-performance than GPUs. The next logical step was to expand on this work and share it with the ML community. Enter the [Optimum Intel](https://github.com/huggingface/optimum-intel) open source library! Let’s take a deeper look at it. ## Get Peak Transformers Performance with Optimum Intel [Optimum](https://github.com/huggingface/optimum) is an open-source library created by Hugging Face to simplify Transformer acceleration across a growing range of training and inference devices. Thanks to built-in optimization techniques, you can start accelerating your workloads in minutes, using ready-made scripts, or applying minimal changes to your existing code. Beginners can use Optimum out of the box with excellent results. Experts can keep tweaking for maximum performance. [Optimum Intel](https://github.com/huggingface/optimum-intel) is part of Optimum and builds on top of the [Intel Neural Compressor](https://www.intel.com/content/www/us/en/developer/tools/oneapi/neural-compressor.html) (INC). 
INC is an [open-source library](https://github.com/intel/neural-compressor) that delivers unified interfaces across multiple deep learning frameworks for popular network compression technologies, such as quantization, pruning, and knowledge distillation. This tool supports automatic accuracy-driven tuning strategies to help users quickly build the best quantized model. With Optimum Intel, you can apply state-of-the-art optimization techniques to your Transformers with minimal effort. Let’s look at a complete example. ## Case study: Quantizing DistilBERT with Optimum Intel In this example, we will run post-training quantization on a DistilBERT model fine-tuned for classification. Quantization is a process that shrinks memory and compute requirements by reducing the bit width of model parameters. For example, you can often replace 32-bit floating-point parameters with 8-bit integers at the expense of a small drop in prediction accuracy. We have already fine-tuned the original model to classify product reviews for shoes according to their star rating (from 1 to 5 stars). You can view this [model](https://huggingface.co/juliensimon/distilbert-amazon-shoe-reviews) and its [quantized](https://huggingface.co/juliensimon/distilbert-amazon-shoe-reviews-quantized?) version on the Hugging Face hub. You can also test the original model in this [Space](https://huggingface.co/spaces/juliensimon/amazon-shoe-reviews-spaces). Let’s get started! All code is available in this [notebook](https://gitlab.com/juliensimon/huggingface-demos/-/blob/main/amazon-shoes/03_optimize_inc_quantize.ipynb). As usual, the first step is to install all required libraries. It’s worth mentioning that we have to work with a CPU-only version of PyTorch for the quantization process to work correctly. ``` pip -q uninstall torch -y pip -q install torch==1.11.0+cpu --extra-index-url https://download.pytorch.org/whl/cpu pip -q install transformers datasets optimum[neural-compressor] evaluate --upgrade ``` Then, we prepare an evaluation dataset to assess model performance during quantization. Starting from the dataset we used to fine-tune the original model, we only keep a few thousand reviews and their labels and save them to local storage. Next, we load the original model, its tokenizer, and the evaluation dataset from the Hugging Face hub. ``` from datasets import load_dataset from transformers import AutoModelForSequenceClassification, AutoTokenizer model_name = "juliensimon/distilbert-amazon-shoe-reviews" model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=5) tokenizer = AutoTokenizer.from_pretrained(model_name) eval_dataset = load_dataset("prashantgrao/amazon-shoe-reviews", split="test").select(range(300)) ``` Next, we define an evaluation function that computes model metrics on the evaluation dataset. This allows the Optimum Intel library to compare these metrics before and after quantization. For this purpose, the Hugging Face [evaluate](https://github.com/huggingface/evaluate/) library is very convenient! ``` import evaluate def eval_func(model): task_evaluator = evaluate.evaluator("text-classification") results = task_evaluator.compute( model_or_pipeline=model, tokenizer=tokenizer, data=eval_dataset, metric=evaluate.load("accuracy"), label_column="labels", label_mapping=model.config.label2id, ) return results["accuracy"] ``` We then set up the quantization job using a [configuration]. 
You can find details on this configuration on the Neural Compressor [documentation](https://github.com/intel/neural-compressor/blob/master/docs/source/quantization.md). Here, we go for post-training dynamic quantization with an acceptable accuracy drop of 5%. If accuracy drops by more than the allowed 5%, different parts of the model will be quantized until the accuracy drop falls back within the allowed range, or until the maximum number of trials, here set to 10, is reached. ``` from neural_compressor.config import AccuracyCriterion, PostTrainingQuantConfig, TuningCriterion tuning_criterion = TuningCriterion(max_trials=10) accuracy_criterion = AccuracyCriterion(tolerable_loss=0.05) # Load the quantization configuration detailing the quantization we wish to apply quantization_config = PostTrainingQuantConfig( approach="dynamic", accuracy_criterion=accuracy_criterion, tuning_criterion=tuning_criterion, ) ``` We can now launch the quantization job and save the resulting model and its configuration file to local storage. ``` from neural_compressor.config import PostTrainingQuantConfig from optimum.intel.neural_compressor import INCQuantizer # The directory where the quantized model will be saved save_dir = "./model_inc" quantizer = INCQuantizer.from_pretrained(model=model, eval_fn=eval_func) quantizer.quantize(quantization_config=quantization_config, save_directory=save_dir) ``` The log tells us that Optimum Intel has quantized 38 ```Linear``` and 2 ```Embedding``` operators. ``` [INFO] |******Mixed Precision Statistics*****| [INFO] +----------------+----------+---------+ [INFO] | Op Type | Total | INT8 | [INFO] +----------------+----------+---------+ [INFO] | Embedding | 2 | 2 | [INFO] | Linear | 38 | 38 | [INFO] +----------------+----------+---------+ ``` Comparing the first layer of the original model (```model.distilbert.transformer.layer[0]```) and its quantized version (```inc_model.distilbert.transformer.layer[0]```), we see that ```Linear``` has indeed been replaced by ```DynamicQuantizedLinear```, its quantized equivalent.
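To reproduce that comparison yourself, you can simply print the first transformer block of each model. This is an added illustrative snippet rather than part of the original walkthrough; `inc_model` refers to the quantized model, loaded as shown at the end of this example.

```
# Print the first transformer block of the original and quantized models
# to compare their layer types side by side.
print(model.distilbert.transformer.layer[0])
print(inc_model.distilbert.transformer.layer[0])
```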
``` # Original model TransformerBlock( (attention): MultiHeadSelfAttention( (dropout): Dropout(p=0.1, inplace=False) (q_lin): Linear(in_features=768, out_features=768, bias=True) (k_lin): Linear(in_features=768, out_features=768, bias=True) (v_lin): Linear(in_features=768, out_features=768, bias=True) (out_lin): Linear(in_features=768, out_features=768, bias=True) ) (sa_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (ffn): FFN( (dropout): Dropout(p=0.1, inplace=False) (lin1): Linear(in_features=768, out_features=3072, bias=True) (lin2): Linear(in_features=3072, out_features=768, bias=True) ) (output_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) ) ``` ``` # Quantized model TransformerBlock( (attention): MultiHeadSelfAttention( (dropout): Dropout(p=0.1, inplace=False) (q_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_channel_affine) (k_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_channel_affine) (v_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_channel_affine) (out_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_channel_affine) ) (sa_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (ffn): FFN( (dropout): Dropout(p=0.1, inplace=False) (lin1): DynamicQuantizedLinear(in_features=768, out_features=3072, dtype=torch.qint8, qscheme=torch.per_channel_affine) (lin2): DynamicQuantizedLinear(in_features=3072, out_features=768, dtype=torch.qint8, qscheme=torch.per_channel_affine) ) (output_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) ) ``` Very well, but how does this impact accuracy and prediction time? Before and after each quantization step, Optimum Intel runs the evaluation function on the current model. The accuracy of the quantized model is now a bit lower (``` 0.546```) than the original model (```0.574```). We also see that the evaluation step of the quantized model was 1.34x faster than the original model. Not bad for a few lines of code! ``` [INFO] |**********************Tune Result Statistics**********************| [INFO] +--------------------+----------+---------------+------------------+ [INFO] | Info Type | Baseline | Tune 1 result | Best tune result | [INFO] +--------------------+----------+---------------+------------------+ [INFO] | Accuracy | 0.5740 | 0.5460 | 0.5460 | [INFO] | Duration (seconds) | 13.1534 | 9.7695 | 9.7695 | [INFO] +--------------------+----------+---------------+------------------+ ``` You can find the resulting [model](https://huggingface.co/juliensimon/distilbert-amazon-shoe-reviews-quantized) hosted on the Hugging Face hub. To load a quantized model hosted locally or on the 🤗 hub, you can do as follows : ``` from optimum.intel.neural_compressor import INCModelForSequenceClassification inc_model = INCModelForSequenceClassification.from_pretrained(save_dir) ``` ## We’re only getting started In this example, we showed you how to easily quantize models post-training with Optimum Intel, and that’s just the beginning. The library supports other types of quantization as well as pruning, a technique that zeroes or removes model parameters that have little or no impact on the predicted outcome. We are excited to partner with Intel to bring Hugging Face users peak efficiency on the latest Intel Xeon CPUs and Intel AI libraries. 
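Before wrapping up, here is one more added sketch (not from the original post) that you could use to check the speed-up on your own machine. It assumes the `model`, `inc_model`, and `eval_func` objects defined above; absolute numbers will vary with your hardware.

```
# Time the evaluation function on the original and the quantized model.
import time

for name, m in [("original", model), ("quantized", inc_model)]:
    start = time.time()
    acc = eval_func(m)
    print(f"{name}: accuracy={acc:.4f}, eval time={time.time() - start:.2f}s")
```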
Please [give Optimum Intel a star](https://github.com/huggingface/optimum-intel) to get updates, and stay tuned for many upcoming features! *Many thanks to [Ella Charlaix](https://github.com/echarlaix) for her help on this post.*
Convert Transformers to ONNX with Hugging Face Optimum
philschmid
June 22, 2022
convert-transformers-to-onnx
guide, community, hardware
https://huggingface.co/blog/convert-transformers-to-onnx
# Convert Transformers to ONNX with Hugging Face Optimum Hundreds of Transformers experiments and models are uploaded to the [Hugging Face Hub](https://huggingface.co/) every single day. Machine learning engineers and students conducting those experiments use a variety of frameworks like PyTorch, TensorFlow/Keras, or others. These models are already used by thousands of companies and form the foundation of AI-powered products. If you deploy Transformers models in production environments, we recommend exporting them first into a serialized format that can be loaded, optimized, and executed on specialized runtimes and hardware. In this guide, you'll learn about: 1. [What is ONNX?](#1-what-is-onnx) 2. [What is Hugging Face Optimum?](#2-what-is-hugging-face-optimum) 3. [What Transformers architectures are supported?](#3-what-transformers-architectures-are-supported) 4. [How can I convert a Transformers model (BERT) to ONNX?](#4-how-can-i-convert-a-transformers-model-bert-to-onnx) 5. [What's next?](#5-whats-next) Let's get started! 🚀 --- If you are interested in optimizing your models to run with maximum efficiency, check out the [🤗 Optimum library](https://github.com/huggingface/optimum). <html itemscope itemtype="https://schema.org/FAQPage"> <div itemscope itemprop="mainEntity" itemtype="https://schema.org/Question"> <a id="1-what-is-onnx"><h2 itemprop="name"> 1. What is ONNX?</h2></a> <div itemscope itemprop="acceptedAnswer" itemtype="https://schema.org/Answer"> <div itemprop="text"> The [ONNX or Open Neural Network eXchange](https://onnx.ai) is an open standard and format to represent machine learning models. ONNX defines a common set of operators and a common file format to represent deep learning models in a wide variety of frameworks, including PyTorch and TensorFlow. <figure class="image table text-center m-0 w-full"> <img src="assets/81_convert_transformers_to_onnx/graph.png" alt="Netron ONNX Graph"/> <figcaption>pseudo ONNX graph, visualized with NETRON</figcaption> </figure> When a model is exported to the ONNX format, these operators are used to construct a computational graph (often called an `intermediate representation`) which represents the flow of data through the neural network. > **Important:** ONNX Is not a Runtime ONNX is only the representation that can be used with runtimes like ONNX Runtime. You can find a list of supported accelerators [here](https://onnx.ai/supported-tools.html#deployModel). > ➡️[Learn more about ONNX.](https://onnx.ai/about.html) </div> </div> </div> <html itemscope itemtype="https://schema.org/FAQPage"> <div itemscope itemprop="mainEntity" itemtype="https://schema.org/Question"> <a id="2-what-is-hugging-face-optimum"><h2 itemprop="name"> 2. What is Hugging Face Optimum?</h2></a> <div itemscope itemprop="acceptedAnswer" itemtype="https://schema.org/Answer"> <div itemprop="text"> [Hugging Face Optimum](https://github.com/huggingface/optimum) is an open-source library and an extension of [Hugging Face Transformers](https://github.com/huggingface/transformers), that provides a unified API of performance optimization tools to achieve maximum efficiency to train and run models on accelerated hardware, including toolkits for optimized performance on [Graphcore IPU](https://github.com/huggingface/optimum-graphcore) and [Habana Gaudi](https://github.com/huggingface/optimum-habana). 
Optimum can be used for converting, quantization, graph optimization, accelerated training & inference with support for [transformers pipelines](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#pipelines). Below you can see a typical developer journey of how you can leverage Optimum with ONNX. <figure class="image table text-center m-0 w-full"> <img src="assets/81_convert_transformers_to_onnx/user-journey.png" alt="developer journey optimum"/> </figure> [➡️ Learn more about Optimum](https://huggingface.co/blog/hardware-partners-program) </div> </div> </div> <html itemscope itemtype="https://schema.org/FAQPage"> <div itemscope itemprop="mainEntity" itemtype="https://schema.org/Question"> <a id="3-what-transformers-architectures-are-supported"><h2 itemprop="name"> 3. What Transformers architectures are supported?</h2></a> <div itemscope itemprop="acceptedAnswer" itemtype="https://schema.org/Answer"> <div itemprop="text"> A list of all supported Transformers architectures can be found in the [ONNX section of the Transformers documentation](https://huggingface.co/docs/transformers/serialization#onnx). Below is an excerpt of the most commonly used architectures which can be converted to ONNX and optimized with [Hugging Face Optimum](https://huggingface.co/docs/optimum/index) - ALBERT - BART - BERT - DistilBERT - ELECTRA - GPT Neo - GPT-J - GPT-2 - RoBERTa - T5 - ViT - XLM - … [➡️ All supported architectures](https://huggingface.co/docs/transformers/serialization#onnx) </div> </div> </div> <html itemscope itemtype="https://schema.org/FAQPage"> <div itemscope itemprop="mainEntity" itemtype="https://schema.org/Question"> <a id="4-how-can-i-convert-a-transformers-model-bert-to-onnx"><h2 itemprop="name">4. How can I convert a Transformers model (BERT) to ONNX?</h2></a> <div itemscope itemprop="acceptedAnswer" itemtype="https://schema.org/Answer"> <div itemprop="text"> There are currently three ways to convert your Hugging Face Transformers models to ONNX. In this section, you will learn how to export [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) for `text-classification` using all three methods going from the low-level `torch` API to the most user-friendly high-level API of `optimum`. Each method will do exactly the same ### Export with `torch.onnx` (low-level) [torch.onnx](https://pytorch.org/docs/stable/onnx.html) enables you to convert model checkpoints to an ONNX graph by the `export` method. But you have to provide a lot of values like `input_names`, `dynamic_axes`, etc. 
You’ll first need to install some dependencies: ```python pip install transformers torch ``` exporting our checkpoint with `export` ```python import torch from transformers import AutoModelForSequenceClassification, AutoTokenizer # load model and tokenizer model_id = "distilbert-base-uncased-finetuned-sst-2-english" model = AutoModelForSequenceClassification.from_pretrained(model_id) tokenizer = AutoTokenizer.from_pretrained(model_id) dummy_model_input = tokenizer("This is a sample", return_tensors="pt") # export torch.onnx.export( model, tuple(dummy_model_input.values()), f="torch-model.onnx", input_names=['input_ids', 'attention_mask'], output_names=['logits'], dynamic_axes={'input_ids': {0: 'batch_size', 1: 'sequence'}, 'attention_mask': {0: 'batch_size', 1: 'sequence'}, 'logits': {0: 'batch_size', 1: 'sequence'}}, do_constant_folding=True, opset_version=13, ) ``` ### Export with `transformers.onnx` (mid-level) [transformers.onnx](https://huggingface.co/docs/transformers/serialization#exporting-a-model-to-onnx) enables you to convert model checkpoints to an ONNX graph by leveraging configuration objects. That way you don’t have to provide the complex configuration for `dynamic_axes` etc. You’ll first need to install some dependencies: ```python pip install transformers[onnx] torch ``` Exporting our checkpoint with the `transformers.onnx`. ```python from pathlib import Path import transformers from transformers.onnx import FeaturesManager from transformers import AutoConfig, AutoTokenizer, AutoModelForSequenceClassification # load model and tokenizer model_id = "distilbert-base-uncased-finetuned-sst-2-english" feature = "sequence-classification" model = AutoModelForSequenceClassification.from_pretrained(model_id) tokenizer = AutoTokenizer.from_pretrained(model_id) # load config model_kind, model_onnx_config = FeaturesManager.check_supported_model_or_raise(model, feature=feature) onnx_config = model_onnx_config(model.config) # export onnx_inputs, onnx_outputs = transformers.onnx.export( preprocessor=tokenizer, model=model, config=onnx_config, opset=13, output=Path("trfs-model.onnx") ) ``` ### Export with Optimum (high-level) [Optimum](https://huggingface.co/docs/optimum/onnxruntime/modeling_ort#switching-from-transformers-to-optimum-inference) Inference includes methods to convert vanilla Transformers models to ONNX using the `ORTModelForXxx` classes. To convert your Transformers model to ONNX you simply have to pass `from_transformers=True` to the `from_pretrained()` method and your model will be loaded and converted to ONNX leveraging the [transformers.onnx](https://huggingface.co/docs/transformers/serialization#exporting-a-model-to-onnx) package under the hood. You’ll first need to install some dependencies: ```python pip install optimum[onnxruntime] ``` Exporting our checkpoint with `ORTModelForSequenceClassification` ```python from optimum.onnxruntime import ORTModelForSequenceClassification model = ORTModelForSequenceClassification.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english",from_transformers=True) ``` The best part about the conversion with Optimum is that you can immediately use the `model` to run predictions or load it [inside a pipeline.](https://huggingface.co/docs/optimum/onnxruntime/modeling_ort#switching-from-transformers-to-optimum-inference) </div> </div> </div> ## 5. What's next? Since you successfully convert your Transformers model to ONNX the whole set of optimization and quantization tools is now open to use. 
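As a quick sanity check before diving into those tools, you can run the exported graph directly with ONNX Runtime. This is an added sketch rather than part of the original guide; it assumes `pip install onnxruntime` and uses the `torch-model.onnx` file produced by the `torch.onnx` export above.

```python
# Run the exported ONNX graph with ONNX Runtime and inspect the logits.
import onnxruntime as ort
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
inputs = tokenizer("This is a sample", return_tensors="np")

session = ort.InferenceSession("torch-model.onnx")
logits = session.run(
    output_names=["logits"],
    input_feed={"input_ids": inputs["input_ids"], "attention_mask": inputs["attention_mask"]},
)[0]
print(logits.shape)  # expected: (1, 2) for this two-label sentiment model
```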
Potential next steps can be: - Use the onnx model for [Accelerated Inference with Optimum and Transformers Pipelines](https://huggingface.co/blog/optimum-inference) - Apply [static quantization to your model](https://www.philschmid.de/static-quantization-optimum) for ~3x latency improvements - Use ONNX runtime for [training](https://github.com/huggingface/optimum/tree/main/examples/onnxruntime/training) - Convert your ONNX model to [TensorRT](https://docs.nvidia.com/deeplearning/tensorrt/api/python_api/) to improve GPU performance - … If you are interested in optimizing your models to run with maximum efficiency, check out the [🤗 Optimum library](https://github.com/huggingface/optimum). --- Thanks for reading! If you have any questions, feel free to contact me, through [Github](https://github.com/huggingface/transformers), or on the [forum](https://discuss.huggingface.co/c/optimum/59). You can also connect with me on [Twitter](https://twitter.com/_philschmid) or [LinkedIn](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/). </html>
Getting Started With Embeddings
espejelomar
June 23, 2022
getting-started-with-embeddings
guide, nlp
https://huggingface.co/blog/getting-started-with-embeddings
# Getting Started With Embeddings Check out this tutorial with the Notebook Companion: <a target="_blank" href="https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/80_getting_started_with_embeddings.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> ## Understanding embeddings An embedding is a numerical representation of a piece of information, for example, text, documents, images, audio, etc. The representation captures the semantic meaning of what is being embedded, making it robust for many industry applications. Given the text "What is the main benefit of voting?", an embedding of the sentence could be represented in a vector space, for example, with a list of 384 numbers (for example, [0.84, 0.42, ..., 0.02]). Since this list captures the meaning, we can do exciting things, like calculating the distance between different embeddings to determine how well the meaning of two sentences matches. Embeddings are not limited to text! You can also create an embedding of an image (for example, a list of 384 numbers) and compare it with a text embedding to determine if a sentence describes the image. This concept is under powerful systems for image search, classification, description, and more! How are embeddings generated? The open-source library called [Sentence Transformers](https://www.sbert.net/index.html) allows you to create state-of-the-art embeddings from images and text for free. This blog shows an example with this library. ## What are embeddings for? > "[...] once you understand this ML multitool (embedding), you'll be able to build everything from search engines to recommendation systems to chatbots and a whole lot more. You don't have to be a data scientist with ML expertise to use them, nor do you need a huge labeled dataset." - [Dale Markowitz, Google Cloud](https://cloud.google.com/blog/topics/developers-practitioners/meet-ais-multitool-vector-embeddings). Once a piece of information (a sentence, a document, an image) is embedded, the creativity starts; several interesting industrial applications use embeddings. E.g., Google Search uses embeddings to [match text to text and text to images](https://cloud.google.com/blog/topics/developers-practitioners/meet-ais-multitool-vector-embeddings); Snapchat uses them to "[serve the right ad to the right user at the right time](https://eng.snap.com/machine-learning-snap-ad-ranking)"; and Meta (Facebook) uses them for [their social search](https://research.facebook.com/publications/embedding-based-retrieval-in-facebook-search/). Before they could get intelligence from embeddings, these companies had to embed their pieces of information. An embedded dataset allows algorithms to search quickly, sort, group, and more. However, it can be expensive and technically complicated. In this post, we use simple open-source tools to show how easy it can be to embed and analyze a dataset. ## Getting started with embeddings We will create a small Frequently Asked Questions (FAQs) engine: receive a query from a user and identify which FAQ is the most similar. We will use the [US Social Security Medicare FAQs](https://faq.ssa.gov/en-US/topic/?id=CAT-01092). But first, we need to embed our dataset (other texts use the terms encode and embed interchangeably). The Hugging Face Inference API allows us to embed a dataset using a quick POST call easily. 
Since the embeddings capture the semantic meaning of the questions, it is possible to compare different embeddings and see how different or similar they are. Thanks to this, you can get the most similar embedding to a query, which is equivalent to finding the most similar FAQ. Check out our [semantic search tutorial](https://huggingface.co/spaces/sentence-transformers/embeddings-semantic-search) for a more detailed explanation of how this mechanism works. In a nutshell, we will: 1. Embed Medicare's FAQs using the Inference API. 2. Upload the embedded questions to the Hub for free hosting. 3. Compare a customer's query to the embedded dataset to identify which is the most similar FAQ. ## 1. Embedding a dataset The first step is selecting an existing pre-trained model for creating the embeddings. We can choose a model from the [Sentence Transformers library](https://huggingface.co/sentence-transformers). In this case, let's use the ["sentence-transformers/all-MiniLM-L6-v2"](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) because it's a small but powerful model. In a future post, we will examine other models and their trade-offs. Log in to the Hub. You must create a write token in your [Account Settings](http://hf.co/settings/tokens). We will store the write token in `hf_token`. ```py model_id = "sentence-transformers/all-MiniLM-L6-v2" hf_token = "get your token in http://hf.co/settings/tokens" ``` To generate the embeddings you can use the `https://api-inference.huggingface.co/pipeline/feature-extraction/{model_id}` endpoint with the headers `{"Authorization": f"Bearer {hf_token}"}`. Here is a function that receives a dictionary with the texts and returns a list with embeddings. ```py import requests api_url = f"https://api-inference.huggingface.co/pipeline/feature-extraction/{model_id}" headers = {"Authorization": f"Bearer {hf_token}"} ``` The first time you generate the embeddings, it may take a while (approximately 20 seconds) for the API to return them. We use the `retry` decorator (install with `pip install retry`) so that if on the first try, `output = query(dict(inputs = texts))` doesn't work, wait 10 seconds and try three times again. This happens because, on the first request, the model needs to be downloaded and installed on the server, but subsequent calls are much faster. ```py def query(texts): response = requests.post(api_url, headers=headers, json={"inputs": texts, "options":{"wait_for_model":True}}) return response.json() ``` The current API does not enforce strict rate limitations. Instead, Hugging Face balances the loads evenly between all our available resources and favors steady flows of requests. If you need to embed several texts or images, the [Hugging Face Accelerated Inference API](https://huggingface.co/docs/api-inference/index) would speed the inference and let you choose between using a CPU or GPU. 
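The snippet above does not show the `retry` decorator mentioned earlier, so here is an added sketch of how you might wrap the call yourself (assuming `pip install retry`; the helper name is our own):

```py
# Optional: retry the Inference API call while the model is still loading.
# Waits 10 seconds between attempts and gives up after 3 tries.
from retry import retry

@retry(tries=3, delay=10)
def query_with_retry(texts):
    response = requests.post(api_url, headers=headers, json={"inputs": texts, "options": {"wait_for_model": True}})
    output = response.json()
    if isinstance(output, dict) and "error" in output:
        # Raising here makes the decorator retry the request.
        raise RuntimeError(output["error"])
    return output
```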
```py texts = ["How do I get a replacement Medicare card?", "What is the monthly premium for Medicare Part B?", "How do I terminate my Medicare Part B (medical insurance)?", "How do I sign up for Medicare?", "Can I sign up for Medicare Part B if I am working and have health insurance through an employer?", "How do I sign up for Medicare Part B if I already have Part A?", "What are Medicare late enrollment penalties?", "What is Medicare and who can get it?", "How can I get help with my Medicare Part A and Part B premiums?", "What are the different parts of Medicare?", "Will my Medicare premiums be higher because of my higher income?", "What is TRICARE ?", "Should I sign up for Medicare Part B if I have Veterans' Benefits?"] output = query(texts) ``` As a response, you get back a list of lists. Each list contains the embedding of a FAQ. The model, ["sentence-transformers/all-MiniLM-L6-v2"](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2), is encoding the input questions to 13 embeddings of size 384 each. Let's convert the list to a Pandas `DataFrame` of shape (13x384). ```py import pandas as pd embeddings = pd.DataFrame(output) ``` It looks similar to this matrix: ```py [[-0.02388945 0.05525852 -0.01165488 ... 0.00577787 0.03409787 -0.0068891 ] [-0.0126876 0.04687412 -0.01050217 ... -0.02310316 -0.00278466 0.01047371] [ 0.00049438 0.11941205 0.00522949 ... 0.01687654 -0.02386115 0.00526433] ... [-0.03900796 -0.01060951 -0.00738271 ... -0.08390449 0.03768405 0.00231361] [-0.09598278 -0.06301168 -0.11690582 ... 0.00549841 0.1528919 0.02472013] [-0.01162949 0.05961934 0.01650903 ... -0.02821241 -0.00116556 0.0010672 ]] ``` ## 2. Host embeddings for free on the Hugging Face Hub 🤗 Datasets is a library for quickly accessing and sharing datasets. Let's host the embeddings dataset in the Hub using the user interface (UI). Then, anyone can load it with a single line of code. You can also use the terminal to share datasets; see [the documentation](https://huggingface.co/docs/datasets/share#share) for the steps. In the [notebook companion](https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/80_getting_started_with_embeddings.ipynb) of this entry, you will be able to use the terminal to share the dataset. If you want to skip this section, check out the [`ITESM/embedded_faqs_medicare` repo](https://huggingface.co/datasets/ITESM/embedded_faqs_medicare) with the embedded FAQs. First, we export our embeddings from a Pandas `DataFrame` to a CSV. You can save your dataset in any way you prefer, e.g., zip or pickle; you don't need to use Pandas or CSV. Since our embeddings file is not large, we can store it in a CSV, which is easily inferred by the `datasets.load_dataset()` function we will employ in the next section (see the [Datasets documentation](https://huggingface.co/docs/datasets/about_dataset_load#build-and-load)), i.e., we don't need to create a loading script. We will save the embeddings with the name `embeddings.csv`. ```py embeddings.to_csv("embeddings.csv", index=False) ``` Follow the next steps to host `embeddings.csv` in the Hub. * Click on your user in the top right corner of the [Hub UI](https://huggingface.co/). * Create a dataset with "New dataset." ![](assets/80_getting_started_with_embeddings/SelectDataset.png) * Choose the Owner (organization or individual), name, and license of the dataset. Select if you want it to be private or public. Create the dataset. 
![](assets/80_getting_started_with_embeddings/createDataset.png) * Go to the "Files" tab (screenshot below) and click "Add file" and "Upload file." ![](assets/80_getting_started_with_embeddings/AddFile.png) * Finally, drag or upload the dataset, and commit the changes. ![](assets/80_getting_started_with_embeddings/UploadFile.png) Now the dataset is hosted on the Hub for free. You (or whoever you want to share the embeddings with) can quickly load them. Let's see how. ## 3. Get the most similar Frequently Asked Questions to a query Suppose a Medicare customer asks, "How can Medicare help me?". We will **find** which of our FAQs could best answer our user query. We will create an embedding of the query that can represent its semantic meaning. We then compare it to each embedding in our FAQ dataset to identify which is closest to the query in vector space. Install the 🤗 Datasets library with `pip install datasets`. Then, load the embedded dataset from the Hub and convert it to a PyTorch `FloatTensor`. Note that this is not the only way to operate on a `Dataset`; for example, you could use NumPy, Tensorflow, or SciPy (refer to the [Documentation](https://huggingface.co/docs/datasets/loading)). If you want to practice with a real dataset, the [`ITESM/embedded_faqs_medicare`](https://huggingface.co/datasets/ITESM/embedded_faqs_medicare) repo contains the embedded FAQs, or you can use the [companion notebook](https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/80_getting_started_with_embeddings.ipynb) to this blog. ```py import torch from datasets import load_dataset faqs_embeddings = load_dataset('namespace/repo_name') dataset_embeddings = torch.from_numpy(faqs_embeddings["train"].to_pandas().to_numpy()).to(torch.float) ``` We use the query function we defined before to embed the customer's question and convert it to a PyTorch `FloatTensor` to operate over it efficiently. Note that after the embedded dataset is loaded, we could use the `add_faiss_index` and `search` methods of a `Dataset` to identify the closest FAQ to an embedded query using the [faiss library](https://github.com/facebookresearch/faiss). Here is a [nice tutorial of the alternative](https://huggingface.co/docs/datasets/faiss_es). ```py question = ["How can Medicare help me?"] output = query(question) query_embeddings = torch.FloatTensor(output) ``` You can use the `util.semantic_search` function in the Sentence Transformers library to identify which of the FAQs are closest (most similar) to the user's query. This function uses cosine similarity as the default function to determine the proximity of the embeddings. However, you could also use other functions that measure the distance between two points in a vector space, for example, the dot product. Install `sentence-transformers` with `pip install -U sentence-transformers`, and search for the five most similar FAQs to the query. ```py from sentence_transformers.util import semantic_search hits = semantic_search(query_embeddings, dataset_embeddings, top_k=5) ``` `util.semantic_search` identifies how close each of the 13 FAQs is to the customer query and returns a list of dictionaries with the top `top_k` FAQs. 
`hits` looks like this: ```py [{'corpus_id': 8, 'score': 0.75653076171875}, {'corpus_id': 7, 'score': 0.7418993711471558}, {'corpus_id': 3, 'score': 0.7252674102783203}, {'corpus_id': 9, 'score': 0.6735571622848511}, {'corpus_id': 10, 'score': 0.6505177617073059}] ``` The values in `corpus_id` allow us to index the list of `texts` we defined in the first section and get the five most similar FAQs: ```py print([texts[hits[0][i]['corpus_id']] for i in range(len(hits[0]))]) ``` Here are the 5 FAQs that come closest to the customer's query: ```py ['How can I get help with my Medicare Part A and Part B premiums?', 'What is Medicare and who can get it?', 'How do I sign up for Medicare?', 'What are the different parts of Medicare?', 'Will my Medicare premiums be higher because of my higher income?'] ``` This list represents the 5 FAQs closest to the customer's query. Nice! Here we used PyTorch and Sentence Transformers as our main numerical tools. However, we could have defined the cosine similarity and ranking functions ourselves using tools such as NumPy and SciPy (see the short sketch at the end of this post). ## Additional resources to keep learning If you want to know more about the Sentence Transformers library: - The [Hub Organization](https://huggingface.co/sentence-transformers) for all the new models and instructions on how to download models. - The [Nils Reimers tweet](https://twitter.com/Nils_Reimers/status/1487014195568775173) comparing Sentence Transformer models with GPT-3 Embeddings. Spoiler alert: the Sentence Transformers are awesome! - The [Sentence Transformers documentation](https://www.sbert.net/). - [Nima's thread](https://twitter.com/NimaBoscarino/status/1535331680805801984) on recent research. Thanks for reading!
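As promised above, here is the added do-it-yourself sketch of the ranking step using only NumPy (illustrative, not part of the original tutorial); it reuses the `embeddings` DataFrame, the `texts` list, and the `output` of the query defined earlier:

```py
# Rank FAQs by cosine similarity between the query embedding and each FAQ embedding.
import numpy as np

corpus = embeddings.to_numpy()   # shape (13, 384): one row per FAQ
q = np.array(output[0])          # shape (384,): embedding of the customer query

scores = corpus @ q / (np.linalg.norm(corpus, axis=1) * np.linalg.norm(q))
top5 = np.argsort(-scores)[:5]
print([texts[i] for i in top5])
```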
Announcing Evaluation on the Hub
douwekiela
June 28, 2022
eval-on-the-hub
community, launch, guide
https://huggingface.co/blog/eval-on-the-hub
# Announcing Evaluation on the Hub <br> <div style="background-color: #e6f9e6; padding: 16px 32px; outline: 2px solid; border-radius: 10px;"> November 2023 Update: This project has been archived. If you want to evaluate LLMs on the Hub, check out [this collection of leaderboards](https://huggingface.co/collections/clefourrier/llm-leaderboards-and-benchmarks-✨-64f99d2e11e92ca5568a7cce). </div> <em>TL;DR</em>: Today we introduce [Evaluation on the Hub](https://huggingface.co/spaces/autoevaluate/model-evaluator), a new tool powered by [AutoTrain](https://huggingface.co/autotrain) that lets you evaluate any model on any dataset on the Hub without writing a single line of code! <figure class="image table text-center m-0"> <video alt="Evaluating models from the Hugging Face Hub" style="max-width: 70%; margin: auto;" autoplay loop autobuffer muted playsinline > <source src="/blog/assets/82_eval_on_the_hub/autoeval-demo.mp4" type="video/mp4"> </video> <figcaption>Evaluate all the models 🔥🔥🔥!</figcaption> </figure> Progress in AI has been nothing short of amazing, to the point where some people are now seriously debating whether AI models may be better than humans at certain tasks. However, that progress has not at all been even: to a machine learner from several decades ago, modern hardware and algorithms might look incredible, as might the sheer quantity of data and compute at our disposal, but the way we evaluate these models has stayed roughly the same. However, it is no exaggeration to say that modern AI is in an evaluation crisis. Proper evaluation these days involves measuring many models, often on many datasets and with multiple metrics. But doing so is unnecessarily cumbersome. This is especially the case if we care about reproducibility, since self-reported results may have suffered from inadvertent bugs, subtle differences in implementation, or worse. We believe that better evaluation can happen, if we - the community - establish a better set of best practices and try to remove the hurdles. Over the past few months, we've been hard at work on [Evaluation on the Hub](https://huggingface.co/spaces/autoevaluate/model-evaluator): evaluate any model on any dataset using any metric, at the click of a button. To get started, we evaluated hundreds models on several key datasets, and using the nifty new [Pull Request feature](https://huggingface.co/blog/community-update) on the Hub, opened up loads of PRs on model cards to display their verified performance. Evaluation results are encoded directly in the model card metadata, following [a format](https://huggingface.co/docs/hub/models-cards) for all models on the Hub. Check out the model card for [DistilBERT](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english/blob/main/README.md#L7-L42) to see how it looks! ## On the Hub Evaluation on the Hub opens the door to so many interesting use cases. From the data scientist or executive who needs to decide which model to deploy, to the academic trying to reproduce a paper’s results on a new dataset, to the ethicist who wants to better understand risks of deployment. If we have to single out three primary initial use case scenarios, they are these: **Finding the best model for your task**<br/> Suppose you know exactly what your task is and you want to find the right model for the job. You can check out the leaderboard for a dataset representative of your task, which aggregates all the results. That’s great! 
And what if that fancy new model you’re interested in isn’t on the [leaderboard](https://huggingface.co/spaces/autoevaluate/leaderboards) yet for that dataset? Simply run an evaluation for it, without leaving the Hub. **Evaluating models on your brand new dataset**<br/> Now what if you have a brand spanking new dataset that you want to run baselines on? You can upload it to the Hub and evaluate as many models on it as you like. No code required. What’s more, you can be sure that the way you are evaluating these models on your dataset is exactly the same as how they’ve been evaluated on other datasets. **Evaluating your model on many other related datasets**<br/> Or suppose you have a brand new question answering model, trained on SQuAD? There are hundreds of different question answering datasets to evaluate on :scream: You can pick the ones you are interested in and evaluate your model, directly from the Hub. ## Ecosystem ![The Hugging Face Ecosystem and Evaluation on the Hub](/blog/assets/82_eval_on_the_hub/ecosystem.png) <figcaption><center><i>Evaluation on the Hub fits neatly into the Hugging Face ecosystem.</i></center></figcaption> Evaluation on the Hub is meant to make your life easier. But of course, there’s a lot happening in the background. What we really like about Evaluation on the Hub: it fits so neatly into the existing Hugging Face ecosystem, we almost had to do it. Users start on dataset pages, from where they can launch evaluations or see leaderboards. The model evaluation submission interface and the leaderboards are regular Hugging Face Spaces. The evaluation backend is powered by AutoTrain, which opens up a PR on the Hub for the given model’s model card. ## DogFood - Distinguishing Dogs, Muffins and Fried Chicken So what does it look like in practice? Let’s run through an example. Suppose you are in the business of telling apart dogs, muffins and fried chicken (a.k.a. dogfooding!). ![Dog Food Examples](/blog/assets/82_eval_on_the_hub/dogfood-example.png) <figcaption><center><i>Example images of dogs and food (muffins and fried chicken). <a href="https://github.com/qw2243c/Image-Recognition-Dogs-Fried-Chicken-or-Blueberry-Muffins-/">Source</a> / <a href="https://twitter.com/teenybiscuit/status/667777205397680129?s=20&t=wPgYJMp-JPwRsNAOMvEbxg">Original source</a>.</i></center></figcaption> As the above image shows, to solve this problem, you’ll need: * A dataset of dog, muffin, and fried chicken images * Image classifiers that have been trained on these images Fortunately, your data science team has uploaded [a dataset](https://huggingface.co/datasets/lewtun/dog_food) to the Hugging Face Hub and trained [a few different models on it](https://huggingface.co/models?datasets=lewtun/dog_food). So now you just need to pick the best one - let’s use Evaluation on the Hub to see how well they perform on the test set! ### Configuring an evaluation job To get started, head over to the [`model-evaluator` Space](https://huggingface.co/spaces/autoevaluate/model-evaluator) and select the dataset you want to evaluate models on. For our dataset of dog and food images, you’ll see something like the image below: ![Model Evaluator](/blog/assets/82_eval_on_the_hub/model-evaluator.png) Now, many datasets on the Hub contain metadata that specifies how an evaluation should be configured (check out [acronym_identification](https://huggingface.co/datasets/acronym_identification/blob/main/README.md#L22-L30) for an example). 
This allows you to evaluate models with a single click, but in our case we’ll show you how to configure the evaluation manually.

Clicking on the <em>Advanced configuration</em> button will show you the various settings to choose from:

* The task, dataset, and split configuration
* The mapping of the dataset columns to a standard format
* The choice of metrics

As shown in the image below, configuring the task, dataset, and split to evaluate on is straightforward:

![Advanced Configuration](/blog/assets/82_eval_on_the_hub/config.png)

The next step is to define which dataset columns contain the images, and which ones contain the labels:

![Dataset Mapping](/blog/assets/82_eval_on_the_hub/mapping.png)

Now that the task and dataset are configured, the final (optional) step is to select the metrics to evaluate with. Each task is associated with a set of default metrics. For example, the image below shows that F1 score, accuracy, etc. will be computed automatically. To spice things up, we’ll also calculate the [Matthews correlation coefficient](https://huggingface.co/spaces/evaluate-metric/matthews_correlation), which provides a balanced measure of classifier performance:

![Selecting Metrics](/blog/assets/82_eval_on_the_hub/select-metrics.png)

And that’s all it takes to configure an evaluation job! Now we just need to pick some models to evaluate - let’s take a look.

### Selecting models to evaluate

Evaluation on the Hub links datasets and models via tags in the model card metadata. In our example, we have three models to choose from, so let’s select them all!

![Selecting Models](/blog/assets/82_eval_on_the_hub/select-model.png)

Once the models are selected, simply enter your Hugging Face Hub username (to be notified when the evaluation is complete) and hit the big <em>Evaluate models</em> button:

![Launching the Evaluation](/blog/assets/82_eval_on_the_hub/evaluate.png)

Once a job is submitted, the models will be automatically evaluated and a Hub pull request will be opened with the evaluation results:

![Pull Request](/blog/assets/82_eval_on_the_hub/pr.png)

You can also copy-paste the evaluation metadata into the dataset card so that you and the community can skip the manual configuration next time!

![Metadata Pull Request](/blog/assets/82_eval_on_the_hub/metadata.png)

### Check out the leaderboard

To facilitate the comparison of models, Evaluation on the Hub also provides leaderboards that allow you to examine which models perform best on which split and metric:

![Leaderboard](/blog/assets/82_eval_on_the_hub/leaderboard.png)

Looks like the Swin Transformer came out on top!

### Try it yourself!

If you’d like to evaluate your own choice of models, give Evaluation on the Hub a spin by checking out these popular datasets:

* [Emotion](https://huggingface.co/spaces/autoevaluate/model-evaluator?dataset=emotion) for text classification
* [MasakhaNER](https://huggingface.co/spaces/autoevaluate/model-evaluator?dataset=masakhaner) for named entity recognition
* [SAMSum](https://huggingface.co/spaces/autoevaluate/model-evaluator?dataset=samsum) for text summarization

## The Bigger Picture

Since the dawn of machine learning, we've evaluated models by computing some form of accuracy on a held-out test set that is assumed to be independent and identically distributed. Under the pressures of modern AI, that paradigm is now starting to show serious cracks. Benchmarks are saturating (meaning that machines outperform humans on certain test sets) almost faster than we can come up with new ones.
Yet AI systems are known to be brittle and to suffer from, or even amplify, severe harmful biases. Reproducibility is lacking. Openness is an afterthought. While people fixate on leaderboards, practical considerations for deploying models, such as efficiency and fairness, are often glossed over. The hugely important role data plays in model development is still not taken seriously enough. What is more, the practices of pretraining and prompt-based in-context learning have blurred what it means to be “in distribution” in the first place. Machine learning is slowly catching up on these issues, and we hope to help the field move forward with our work.

## Next Steps

A few weeks ago, we launched the Hugging Face [Evaluate library](https://github.com/huggingface/evaluate), aimed at lowering barriers to the best practices of machine learning evaluation (a minimal usage sketch is included at the end of this post). We have also been hosting benchmarks, like [RAFT](https://huggingface.co/spaces/ought/raft-leaderboard) and [GEM](https://huggingface.co/spaces/GEM/submission-form). Evaluation on the Hub is a logical next step in our efforts to enable a future where models are evaluated in a more holistic fashion, along many axes of evaluation, in a trustworthy and verifiably reproducible manner. Stay tuned for more launches soon, including more tasks and a new and improved [data measurements tool](https://huggingface.co/spaces/huggingface/data-measurements-tool)!

We’re excited to see where the community will take this! If you'd like to help out, evaluate as many models on as many datasets as you like. And as always, please give us lots of feedback, either on the [Community tabs](https://huggingface.co/spaces/autoevaluate/model-evaluator/discussions) or the [forums](https://discuss.huggingface.co/)!
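As a small taste of the Evaluate library mentioned above, here is a minimal sketch of computing metrics locally with its `load`/`compute` API, including the Matthews correlation coefficient used in the example earlier. This illustrates only the library's basic usage (not the Hub evaluation backend), and the toy predictions are made up purely for illustration:

```python
# A minimal sketch of the 🤗 Evaluate library's basic API (not the Hub backend):
# load a few metrics and compute them on toy predictions.
import evaluate

accuracy = evaluate.load("accuracy")
f1 = evaluate.load("f1")
mcc = evaluate.load("matthews_correlation")

predictions = [0, 1, 1, 0, 1]  # toy model outputs
references = [0, 1, 0, 0, 1]   # toy ground-truth labels

print(accuracy.compute(predictions=predictions, references=references))
print(f1.compute(predictions=predictions, references=references))
print(mcc.compute(predictions=predictions, references=references))
```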
Accelerate Large Model Training using DeepSpeed
smangrul
June 28, 2022
accelerate-deepspeed
guide
https://huggingface.co/blog/accelerate-deepspeed
# Accelerate Large Model Training using DeepSpeed

In this post we will look at how we can leverage the **[Accelerate](https://github.com/huggingface/accelerate)** library for training large models, which enables users to leverage the ZeRO features of **[DeepSpeed](https://www.deepspeed.ai)**.

# Motivation 🤗

**Tired of Out of Memory (OOM) errors while trying to train large models? We've got you covered. Large models are very performant [1] but difficult to train with the available hardware. To get the most out of the available hardware for training large models, one can leverage Data Parallelism using ZeRO - Zero Redundancy Optimizer [2]**.

Below is a short description of Data Parallelism using ZeRO with a diagram from this [blog post](https://www.microsoft.com/en-us/research/blog/zero-deepspeed-new-system-optimizations-enable-training-models-with-over-100-billion-parameters/):

![ZeRO Data Parallelism](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/parallelism-zero.png)

(Source: [link](https://www.microsoft.com/en-us/research/blog/zero-deepspeed-new-system-optimizations-enable-training-models-with-over-100-billion-parameters/))

a. **Stage 1**: Shards optimizer states across data parallel workers/GPUs

b. **Stage 2**: Shards optimizer states + gradients across data parallel workers/GPUs

c. **Stage 3**: Shards optimizer states + gradients + model parameters across data parallel workers/GPUs

d. **Optimizer Offload**: Offloads the gradients + optimizer states to CPU/Disk building on top of ZeRO Stage 2

e. **Param Offload**: Offloads the model parameters to CPU/Disk building on top of ZeRO Stage 3

In this blog post we will look at how to leverage Data Parallelism using ZeRO with Accelerate.

**[DeepSpeed](https://github.com/microsoft/deepspeed)**, **[FairScale](https://github.com/facebookresearch/fairscale/)** and **[PyTorch FullyShardedDataParallel (FSDP)](https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/)** have implemented the core ideas of the ZeRO paper. These have already been integrated into the 🤗 `transformers` Trainer and 🤗 `accelerate`, accompanied by great blog posts [Fit More and Train Faster With ZeRO via DeepSpeed and FairScale](https://huggingface.co/blog/zero-deepspeed-fairscale) [4] and [Accelerate Large Model Training using PyTorch Fully Sharded Data Parallel](https://huggingface.co/blog/pytorch-fsdp) [5]. We defer the explanation of what goes on behind the scenes to those blog posts and mainly focus on leveraging DeepSpeed ZeRO using Accelerate.

# Accelerate 🚀: Leverage DeepSpeed ZeRO without any code changes

**Hardware setup**: 2X24GB NVIDIA Titan RTX GPUs. 60GB RAM.

We will look at the task of finetuning an encoder-only model for text classification. We will use the pretrained `microsoft/deberta-v2-xlarge-mnli` (900M params) and finetune it on the MRPC GLUE dataset.

The code is available here [run_cls_no_trainer.py](https://github.com/pacman100/accelerate-deepspeed-test/blob/main/src/modeling/run_cls_no_trainer.py). It is similar to the official text-classification example [here](https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_glue_no_trainer.py) with the addition of logic to measure train and eval time. Let's compare performance between Distributed Data Parallel (DDP) and DeepSpeed ZeRO Stage-2 in a Multi-GPU Setup.
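For context, the training script follows the standard 🤗 Accelerate pattern, and nothing in the loop itself is DeepSpeed-specific. Below is a heavily condensed sketch of that pattern (using a small stand-in model and a toy batch rather than the actual `run_cls_no_trainer.py` logic); whether it runs under DDP or ZeRO is decided entirely by the `accelerate config` you create next.

```python
# A condensed sketch of a 🤗 Accelerate training loop (not the actual benchmark script).
# Nothing here is DeepSpeed-specific: the same loop runs under DDP or ZeRO depending
# solely on what `accelerate config` was told. A small stand-in model and a toy batch
# are used here; the benchmark uses microsoft/deberta-v2-xlarge-mnli on MRPC.
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator
from transformers import AutoModelForSequenceClassification, AutoTokenizer

accelerator = Accelerator()

model_name = "distilbert-base-uncased"  # stand-in checkpoint, for illustration only
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Toy MRPC-style sentence pairs in place of the real GLUE dataloader
pairs = [("He said hi.", "He greeted them."), ("It rained.", "The sun was out.")]
enc = tokenizer([a for a, _ in pairs], [b for _, b in pairs],
                padding=True, truncation=True, max_length=128, return_tensors="pt")
labels = torch.tensor([1, 0])
loader = DataLoader(TensorDataset(enc["input_ids"], enc["attention_mask"], labels), batch_size=2)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# prepare() wraps model/optimizer/dataloader for whichever distributed backend was configured
model, optimizer, loader = accelerator.prepare(model, optimizer, loader)

model.train()
for input_ids, attention_mask, batch_labels in loader:
    outputs = model(input_ids=input_ids, attention_mask=attention_mask, labels=batch_labels)
    accelerator.backward(outputs.loss)  # replaces loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```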
To enable DeepSpeed ZeRO Stage-2 without any code changes, please run `accelerate config` and leverage the [Accelerate DeepSpeed Plugin](https://huggingface.co/docs/accelerate/deepspeed#accelerate-deepspeed-plugin).

**ZeRO Stage-2 DeepSpeed Plugin Example**
```yaml
compute_environment: LOCAL_MACHINE
deepspeed_config:
 gradient_accumulation_steps: 1
 gradient_clipping: 1.0
 offload_optimizer_device: none
 offload_param_device: none
 zero3_init_flag: false
 zero_stage: 2
distributed_type: DEEPSPEED
fsdp_config: {}
machine_rank: 0
main_process_ip: null
main_process_port: null
main_training_function: main
mixed_precision: fp16
num_machines: 1
num_processes: 2
use_cpu: false
```

Now, run the command below for training:

```bash
accelerate launch run_cls_no_trainer.py \
  --model_name_or_path "microsoft/deberta-v2-xlarge-mnli" \
  --task_name "mrpc" \
  --ignore_mismatched_sizes \
  --max_length 128 \
  --per_device_train_batch_size 40 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir "/tmp/mrpc/deepspeed_stage2/" \
  --with_tracking \
  --report_to "wandb"
```

In our Single-Node Multi-GPU setup, the maximum batch size that DDP supports without an OOM error is 8. In contrast, DeepSpeed ZeRO Stage-2 enables a batch size of 40 without running into OOM errors. Therefore, DeepSpeed enables us to fit **5X** more data per GPU compared to DDP. Below is a snapshot of the plots from the wandb [run](https://wandb.ai/smangrul/DDP_vs_DeepSpeed_cls_task?workspace=user-smangrul) along with a benchmarking table comparing DDP vs DeepSpeed.

![Wandb Run](./assets/83_accelerate_deepspeed/cls_run.png)

---

| Method | Batch Size Max | Train time per epoch (seconds) | Eval time per epoch (seconds) | F1 score | Accuracy |
| --- | --- | --- | --- | --- | --- |
| DDP (Distributed Data Parallel) | 8 | 103.57 | 2.04 | 0.931 | 0.904 |
| DeepSpeed ZeRO Stage 2 | **40** | **28.98** | **1.79** | **0.936** | **0.912** |

Table 1: Benchmarking DeepSpeed ZeRO Stage-2 on DeBERTa-XL (900M) model

---

With this bigger batch size, we observe a ~**3.5X** speedup in total training time without any drop in performance metrics, all this without changing any code. Yay! 🤗

To be able to tweak more options, you will need to use a DeepSpeed config file and minimal code changes. Let's see how to do this.

# Accelerate 🚀: Leverage a DeepSpeed Config file to tweak more options

First, we will look at the task of finetuning a sequence-to-sequence model for training our own Chatbot. Specifically, we will finetune `facebook/blenderbot-400M-distill` on the [smangrul/MuDoConv](https://huggingface.co/datasets/smangrul/MuDoConv) (Multi-Domain Conversation) dataset. The dataset contains conversations from 10 different data sources covering personas, grounding in specific emotional contexts, goal-oriented settings (e.g., restaurant reservation) and general Wikipedia topics (e.g., Cricket).

The code is available here [run_seq2seq_no_trainer.py](https://github.com/pacman100/accelerate-deepspeed-test/blob/main/src/modeling/run_seq2seq_no_trainer.py). The current practice for effectively measuring the `Engagingness` and `Humanness` of chatbots is via human evaluations, which are expensive [6]. As such, for this example the metric being tracked is the BLEU score (which isn't ideal but is the conventional metric for such tasks). You can adapt the code to train larger T5 models if you have access to GPUs that support `bfloat16` precision; otherwise, you will run into `NaN` loss values. We will run a quick benchmark on `10000` train samples and `1000` eval samples as we are interested in DeepSpeed vs DDP.
We will leverage the DeepSpeed ZeRO Stage-2 config [zero2_config_accelerate.json](https://github.com/pacman100/accelerate-deepspeed-test/blob/main/src/modeling/configs/zero2_config_accelerate.json) (given below) for training. For detailed information on the various config features, please refer to the [DeepSpeed](https://www.deepspeed.ai) documentation.

```json
{
    "fp16": {
        "enabled": "true",
        "loss_scale": 0,
        "loss_scale_window": 1000,
        "initial_scale_power": 15,
        "hysteresis": 2,
        "min_loss_scale": 1
    },
    "optimizer": {
        "type": "AdamW",
        "params": {
            "lr": "auto",
            "weight_decay": "auto",
            "torch_adam": true,
            "adam_w_mode": true
        }
    },
    "scheduler": {
        "type": "WarmupDecayLR",
        "params": {
            "warmup_min_lr": "auto",
            "warmup_max_lr": "auto",
            "warmup_num_steps": "auto",
            "total_num_steps": "auto"
        }
    },
    "zero_optimization": {
        "stage": 2,
        "allgather_partitions": true,
        "allgather_bucket_size": 2e8,
        "overlap_comm": true,
        "reduce_scatter": true,
        "reduce_bucket_size": 2e8,
        "contiguous_gradients": true
    },
    "gradient_accumulation_steps": 1,
    "gradient_clipping": "auto",
    "steps_per_print": 2000,
    "train_batch_size": "auto",
    "train_micro_batch_size_per_gpu": "auto",
    "wall_clock_breakdown": false
}
```

To enable DeepSpeed ZeRO Stage-2 with the above config, please run `accelerate config` and provide the config file path when asked. For more details, refer to the 🤗 `accelerate` official documentation for the [DeepSpeed Config File](https://huggingface.co/docs/accelerate/deepspeed#deepspeed-config-file).

**ZeRO Stage-2 DeepSpeed Config File Example**
```yaml
compute_environment: LOCAL_MACHINE
deepspeed_config:
 deepspeed_config_file: /path/to/zero2_config_accelerate.json
 zero3_init_flag: false
distributed_type: DEEPSPEED
fsdp_config: {}
machine_rank: 0
main_process_ip: null
main_process_port: null
main_training_function: main
mixed_precision: fp16
num_machines: 1
num_processes: 2
use_cpu: false
```

Now, run the command below for training:

```bash
accelerate launch run_seq2seq_no_trainer.py \
    --dataset_name "smangrul/MuDoConv" \
    --max_source_length 128 \
    --source_prefix "chatbot: " \
    --max_target_length 64 \
    --val_max_target_length 64 \
    --val_min_target_length 20 \
    --n_val_batch_generations 5 \
    --n_train 10000 \
    --n_val 1000 \
    --pad_to_max_length \
    --num_beams 10 \
    --model_name_or_path "facebook/blenderbot-400M-distill" \
    --per_device_train_batch_size 200 \
    --per_device_eval_batch_size 100 \
    --learning_rate 1e-6 \
    --weight_decay 0.0 \
    --num_train_epochs 1 \
    --gradient_accumulation_steps 1 \
    --num_warmup_steps 100 \
    --output_dir "/tmp/deepspeed_zero_stage2_accelerate_test" \
    --seed 25 \
    --logging_steps 100 \
    --with_tracking \
    --report_to "wandb" \
    --report_name "blenderbot_400M_finetuning"
```

When using a DeepSpeed config file, if the user has specified `optimizer` and `scheduler` in the config, the user will have to use `accelerate.utils.DummyOptim` and `accelerate.utils.DummyScheduler`. Those are the only minor changes required.
Below we show an example of the minimal changes required when using the DeepSpeed config:

```diff
- optimizer = torch.optim.Adam(optimizer_grouped_parameters, lr=args.learning_rate)
+ optimizer = accelerate.utils.DummyOptim(optimizer_grouped_parameters, lr=args.learning_rate)

- lr_scheduler = get_scheduler(
-     name=args.lr_scheduler_type,
-     optimizer=optimizer,
-     num_warmup_steps=args.num_warmup_steps,
-     num_training_steps=args.max_train_steps,
- )

+ lr_scheduler = accelerate.utils.DummyScheduler(
+     optimizer, total_num_steps=args.max_train_steps, warmup_num_steps=args.num_warmup_steps
+ )
```

---

| Method | Batch Size Max | Eval Size Max | Train time per epoch (seconds) | Eval time per epoch (seconds) |
| --- | --- | --- | --- | --- |
| DDP (Distributed Data Parallel) | 100 | 50 | 27.36 | 48.41 |
| DeepSpeed ZeRO Stage 2 | **200** | **100** | **19.06** | **39.27** |

Table 2: Benchmarking DeepSpeed ZeRO Stage-2 on BlenderBot (400M) model

---

In our Single-Node Multi-GPU setup, the maximum batch size that DDP supports without an OOM error is 100. In contrast, DeepSpeed ZeRO Stage-2 enables a batch size of 200 without running into OOM errors. Therefore, DeepSpeed enables us to fit **2X** more data per GPU compared to DDP. We observe a ~**1.44X** speedup in training and a ~**1.23X** speedup in evaluation as we are able to fit more data on the same available hardware. As this model is of medium size, the speedup isn't that exciting, but this will improve with bigger models.

You can chat with the chatbot trained on the entire dataset at the 🤗 Space [smangrul/Chat-E](https://huggingface.co/spaces/smangrul/Chat-E). You can give the bot a persona, ground the conversation in a particular emotion, use it in goal-oriented tasks, or chat in a free-flow manner. Below is a fun conversation with the chatbot 💬. You can find snapshots of more conversations using different contexts [here](https://github.com/pacman100/accelerate-deepspeed-test/tree/main/src/chatbot_snapshots).

![Chatbot](./assets/83_accelerate_deepspeed/chatbot.png)

---

## CPU/Disk Offloading to enable training humongous models that won’t fit in GPU memory

On a single 24GB NVIDIA Titan RTX GPU, one cannot train the GPT-XL model (1.5B parameters) even with a batch size of 1. We will look at how we can use DeepSpeed ZeRO Stage-3 with CPU offloading of optimizer states, gradients and parameters to train the GPT-XL model.

We will leverage the DeepSpeed ZeRO Stage-3 CPU offload config [zero3_offload_config_accelerate.json](https://github.com/pacman100/accelerate-deepspeed-test/blob/main/src/modeling/configs/zero3_offload_config_accelerate.json) (given below) for training. The rest of the process of using the config with 🤗 `accelerate` is similar to the above experiment.
```json
{
    "fp16": {
        "enabled": true,
        "loss_scale": 0,
        "loss_scale_window": 1000,
        "initial_scale_power": 16,
        "hysteresis": 2,
        "min_loss_scale": 1
    },
    "optimizer": {
        "type": "AdamW",
        "params": {
            "lr": "auto",
            "weight_decay": "auto"
        }
    },
    "scheduler": {
        "type": "WarmupDecayLR",
        "params": {
            "warmup_min_lr": "auto",
            "warmup_max_lr": "auto",
            "warmup_num_steps": "auto",
            "total_num_steps": "auto"
        }
    },
    "zero_optimization": {
        "stage": 3,
        "offload_optimizer": {
            "device": "cpu",
            "pin_memory": true
        },
        "offload_param": {
            "device": "cpu",
            "pin_memory": true
        },
        "overlap_comm": true,
        "contiguous_gradients": true,
        "reduce_bucket_size": "auto",
        "stage3_prefetch_bucket_size": "auto",
        "stage3_param_persistence_threshold": "auto",
        "sub_group_size": 1e9,
        "stage3_max_live_parameters": 1e9,
        "stage3_max_reuse_distance": 1e9,
        "stage3_gather_16bit_weights_on_model_save": true
    },
    "gradient_accumulation_steps": 1,
    "gradient_clipping": "auto",
    "steps_per_print": 2000,
    "train_batch_size": "auto",
    "train_micro_batch_size_per_gpu": "auto",
    "wall_clock_breakdown": false
}
```

**ZeRO Stage-3 CPU Offload DeepSpeed Config File Example**
```yaml
compute_environment: LOCAL_MACHINE
deepspeed_config:
 deepspeed_config_file: /path/to/zero3_offload_config_accelerate.json
 zero3_init_flag: true
distributed_type: DEEPSPEED
fsdp_config: {}
machine_rank: 0
main_process_ip: null
main_process_port: null
main_training_function: main
mixed_precision: fp16
num_machines: 1
num_processes: 2
use_cpu: false
```

Now, run the command below for training:

```bash
accelerate launch run_clm_no_trainer.py \
--config_name "gpt2-xl" \
--tokenizer_name "gpt2-xl" \
--dataset_name "wikitext" \
--dataset_config_name "wikitext-2-raw-v1" \
--block_size 128 \
--output_dir "/tmp/clm_deepspeed_stage3_offload__accelerate" \
--learning_rate 5e-4 \
--per_device_train_batch_size 16 \
--per_device_eval_batch_size 1 \
--num_train_epochs 1 \
--with_tracking \
--report_to "wandb"
```

---

| Method | Batch Size Max | Train time per epoch (seconds) | Notes |
| --- | --- | --- | --- |
| DDP (Distributed Data Parallel) | - | - | OOM Error |
| DeepSpeed ZeRO Stage 3 | **16** | 6608.35 | |

Table 3: Benchmarking DeepSpeed ZeRO Stage-3 CPU Offload on GPT-XL (1.5B) model

---

DDP results in an OOM error even with a batch size of 1. On the other hand, with DeepSpeed ZeRO Stage-3 CPU offload, we can train with a batch size of 16.

Finally, please remember that 🤗 `Accelerate` only integrates DeepSpeed; therefore, if you have any problems or questions regarding DeepSpeed usage, please file an issue on the [DeepSpeed GitHub](https://github.com/microsoft/DeepSpeed/issues).

# References

[1] [Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers](http://nlp.cs.berkeley.edu/pubs/Li-Wallace-Shen-Lin-Keutzer-Klein-Gonzalez_2020_Transformers_paper.pdf)

[2] [ZeRO: Memory Optimizations Toward Training Trillion Parameter Models](https://arxiv.org/pdf/1910.02054v3.pdf)

[3] [DeepSpeed: Extreme-scale model training for everyone - Microsoft Research](https://www.microsoft.com/en-us/research/blog/deepspeed-extreme-scale-model-training-for-everyone/)

[4] [Fit More and Train Faster With ZeRO via DeepSpeed and FairScale](https://huggingface.co/blog/zero-deepspeed-fairscale)

[5] [Accelerate Large Model Training using PyTorch Fully Sharded Data Parallel](https://huggingface.co/blog/pytorch-fsdp)

[6] [Recipes for building an open-domain chatbot](https://arxiv.org/pdf/2004.13637.pdf)
Liftoff! How to get started with your first ML project 🚀
nimaboscarino
June 29, 2022
your-first-ml-project
guide
https://huggingface.co/blog/your-first-ml-project
# Liftoff! How to get started with your first ML project 🚀 People who are new to the Machine Learning world often run into two recurring stumbling blocks. The first is choosing the right library to learn, which can be daunting when there are so many to pick from. Even once you’ve settled on a library and gone through some tutorials, the next issue is coming up with your first big project and scoping it properly to maximize your learning. If you’ve run into those problems, and if you're looking for a new ML library to add to your toolkit, you're in the right place! In this post I’ll take you through some tips for going from 0 to 100 with a new library by using [Sentence Transformers](https://www.sbert.net) (ST) as an example. We'll start by understanding the basics of what ST can do, and highlight some things that make it a great library to learn. Then, I'll share my battle-tested strategy for tackling your first self-driven project. We’ll also talk about how I built my first ST-powered project, and what I learned along the way 🥳 ## What is Sentence Transformers? Sentence embeddings? Semantic search? Cosine similarity?!?! 😱 Just a few short weeks ago, these terms were so confusing to me that they made my head spin. I’d heard that [Sentence Transformers](https://www.sbert.net) was a powerful and versatile library for working with language and image data and I was eager to play around with it, but I was worried that I would be out of my depth. As it turns out, I couldn’t have been more wrong! Sentence Transformers is [among the libraries that Hugging Face integrates with](https://huggingface.co/docs/hub/models-libraries), where it’s described with the following: > Compute dense vector representations for sentences, paragraphs, and images In a nutshell, Sentence Transformers answers one question: What if we could treat sentences as points in a multi-dimensional vector space? This means that ST lets you give it an arbitrary string of text (e.g., “I’m so glad I learned to code with Python!”), and it’ll transform it into a vector, such as `[0.2, 0.5, 1.3, 0.9]`. Another sentence, such as “Python is a great programming language.”, would be transformed into a different vector. These vectors are called “embeddings,” and [they play an essential role in Machine Learning](https://medium.com/@b.terryjack/nlp-everything-about-word-embeddings-9ea21f51ccfe). If these two sentences were embedded with the same model, then both would coexist in the same vector space, allowing for many interesting possibilities. What makes ST particularly useful is that, once you’ve generated some embeddings, you can use the built-in utility functions to compare how similar one sentence is to another, ***including synonyms!*** 🤯 One way to do this is with the [“Cosine Similarity”](https://www.machinelearningplus.com/nlp/cosine-similarity/) function. With ST, you can skip all the pesky math and call the *very* handy `util.cos_sim` function to get a score from -1 to 1 that signifies how “similar” the embedded sentences are in the vector space they share – the bigger the score is, the more similar the sentences are! 
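Here is what that looks like in practice. This is a minimal sketch: `all-MiniLM-L6-v2` is just a small, popular checkpoint chosen for illustration, and any Sentence Transformers model would work the same way.

```python
# A minimal sketch: embed a few sentences and compare them with util.cos_sim.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small checkpoint, for illustration

sentences = [
    "I'm so glad I learned to code with Python!",
    "Python is a great programming language.",
    "The weather is lovely today.",
]
embeddings = model.encode(sentences, convert_to_tensor=True)

# Cosine similarity between the first sentence and the other two:
# the related sentence scores noticeably higher than the unrelated one.
scores = util.cos_sim(embeddings[0], embeddings[1:])
print(scores)
```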
<figure class="image table text-center m-0 w-full"> <img style="border:none;" alt="A flowchart showing sentences being embedded with Sentence Transformers, and then compared with Cosine Similarity" src="assets/84_first_ml_project/sentence-transformers-explained.svg" /> <figcaption>After embedding sentences, we can compare them with Cosine Similarity.</figcaption> </figure> Comparing sentences by similarity means that if we have a collection of sentences or paragraphs, we can quickly find the ones that match a particular search query with a process called *[semantic search](https://www.sbert.net/examples/applications/semantic-search/README.html)*. For some specific applications of this, see [this tutorial for making a GitHub code-searcher](https://huggingface.co/spaces/sentence-transformers/Sentence_Transformers_for_semantic_search) or this other tutorial on [building an FAQ engine](https://huggingface.co/blog/getting-started-with-embeddings) using Sentence Transformers. ## Why learn to use Sentence Transformers? First, it offers a low-barrier way to get hands-on experience with state-of-the-art models to generate [embeddings](https://daleonai.com/embeddings-explained). I found that creating my own sentence embeddings was a powerful learning tool that helped strengthen my understanding of how modern models work with text, and it also got the creative juices flowing for ideation! Within a few minutes of loading up the [msmarco-MiniLM-L-6-v3 model](https://huggingface.co/sentence-transformers/msmarco-MiniLM-L-6-v3) in a Jupyter notebook I’d come up with a bunch of fun project ideas just from embedding some sentences and running some of ST’s utility functions on them. Second, Sentence Transformers is an accessible entry-point to many important ML concepts that you can branch off into. For example, you can use it to learn about [clustering](https://www.sbert.net/examples/applications/clustering/README.html), [model distillation](https://www.sbert.net/examples/training/distillation/README.html), and even launch into text-to-image work with [CLIP](https://www.sbert.net/examples/applications/image-search/README.html). In fact, Sentence Transformers is so versatile that it’s skyrocketed to almost 8,000 stars on GitHub, with [more than 3,000 projects and packages depending on it](https://github.com/UKPLab/sentence-transformers/network/dependents?dependent_type=REPOSITORY&package_id=UGFja2FnZS00ODgyNDAwNzQ%3D). On top of the official docs, there’s an abundance of community-created content (look for some links at the end of this post 👀), and the library’s ubiquity has made it [popular in research](https://twitter.com/NimaBoscarino/status/1535331680805801984?s=20&t=gd0BycVE-H4_10G9w30DcQ). Third, embeddings are key for several industrial applications. Google searches use embeddings to [match text to text and text to images](https://cloud.google.com/blog/topics/developers-practitioners/meet-ais-multitool-vector-embeddings); Snapchat uses them to "[serve the right ad to the right user at the right time](https://eng.snap.com/machine-learning-snap-ad-ranking)"; and Meta (Facebook) uses them for [their social search](https://research.facebook.com/publications/embedding-based-retrieval-in-facebook-search/). In other words, embeddings allow you to build things like chatbots, recommendation systems, zero-shot classifiers, image search, FAQ systems, and more. On top of it all, it’s also supported with a ton of [Hugging Face integrations](https://huggingface.co/docs/hub/sentence-transformers) 🤗. 
## Tackling your first project

So you’ve decided to check out Sentence Transformers and worked through some examples in the docs… now what? Your first self-driven project (I call these Rocket Launch projects 🚀) is a big step in your learning journey, and you’ll want to make the most of it! Here’s a little recipe that I like to follow when I’m trying out a new tool:

1. **Do a brain dump of everything you know the tool’s capable of**: For Sentence Transformers this includes generating sentence embeddings, comparing sentences, [retrieving and re-ranking for complex search tasks](https://www.sbert.net/examples/applications/retrieve_rerank/README.html), clustering, and searching for similar documents with [semantic search](https://www.sbert.net/examples/applications/semantic-search/README.html).
2. **Reflect on some interesting data sources:** There’s a huge collection of datasets on the [Hugging Face Hub](https://huggingface.co/datasets), or you can also consult lists like [awesome-public-datasets](https://github.com/awesomedata/awesome-public-datasets) for some inspiration. You can often find interesting data in unexpected places – your municipality, for example, may have an [open data portal](https://opendata.vancouver.ca/pages/home/). You’re going to spend a decent amount of time working with your data, so you may as well pick datasets that excite you!
3. **Pick a *secondary* tool that you’re somewhat comfortable with:** Why limit your experience to learning one tool at a time? [“Distributed practice”](https://senecalearning.com/en-GB/blog/top-10-most-effective-learning-strategies/) (a.k.a. “spaced repetition”) means spreading your learning across multiple sessions, and it’s been proven to be an effective strategy for learning new material. One way to actively do this is by practicing new skills even in situations where they’re not the main learning focus. If you’ve recently picked up a new tool, this is a great opportunity to multiply your learning potential by battle-testing your skills. I recommend only including one secondary tool in your Rocket Launch projects.
4. **Ideate:** Spend some time brainstorming on what different combinations of the elements from the first 3 steps could look like! No idea is a bad idea, and I usually try to aim for quantity instead of stressing over quality. Before long you’ll find a few ideas that light that special spark of curiosity for you ✨

For my first Sentence Transformers project, I remembered that I had a little dataset of popular song lyrics kicking around, which I realized I could combine with ST’s semantic search functionality to create a fun playlist generator. I imagined that if I could ask a user for a text prompt (e.g. “I’m feeling wild and free!”), maybe I could find songs that had lyrics that matched the prompt! I’d also been making demos with [Gradio](https://gradio.app/) and had recently been working on scaling up my skills with the newly-released [Gradio Blocks](https://gradio.app/introduction_to_blocks/?utm_campaign=Gradio&utm_medium=web&utm_source=Gradio_4), so for my secondary tool I decided I would make a cool Blocks-based Gradio app to showcase my project.
Never pass up a chance to feed two birds with one scone 🦆🐓 [Here’s what I ended up making!](https://huggingface.co/spaces/NimaBoscarino/playlist-generator) Keep an eye out for a future blog post where we'll break down how this was built 👀 <div class="hidden xl:block"> <div style="display: flex; flex-direction: column; align-items: center;"> <iframe src="https://nimaboscarino-playlist-generator.hf.space" frameBorder="0" width="1400" height="690" title="Gradio app" class="p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe> </div> </div> ## What can you expect to learn from your first project? Since every project is unique, your learning journey will also be unique! According to the [“constructivism” theory of learning](https://www.wgu.edu/blog/what-constructivism2005.html), knowledge is deeply personal and constructed by actively making connections to other knowledge we already possess. Through my Playlist Generator project, for example, I had to learn about the various pre-trained models that Sentence Transformers supports so that I could find one that matched my use-case. Since I was working with Gradio on [Hugging Face Spaces](https://huggingface.co/spaces), I learned about hosting my embeddings on the Hugging Face Hub and loading them into my app. To top it off, since I had a lot of lyrics to embed, I looked for ways to speed up the embedding process and even got to learn about [Sentence Transformers’ Multi-Processor support](https://www.sbert.net/examples/applications/computing-embeddings/README.html#multi-process-multi-gpu-encoding). --- Once you’ve gone through your first project, you’ll find that you’ll have even more ideas for things to work on! Have fun, and don’t forget to share your projects and everything you’ve learned with us over at [hf.co/join/discord](http://hf.co/join/discord) 🤗 Further reading: - [Getting Started with Embeddings](https://huggingface.co/blog/getting-started-with-embeddings) - [Sentence Transformers and Hugging Face](https://huggingface.co/docs/hub/sentence-transformers) - [Sentence_Transformers for Semantic Search - by Omar Espejel](https://huggingface.co/spaces/sentence-transformers/Sentence_Transformers_for_semantic_search) - [Pinecone.io - Sentence Embeddings](https://www.pinecone.io/learn/sentence-embeddings/#some-context) - [Sentence embeddings - by John Brandt](https://johnbrandt.org/blog/sentence-similarity/)
Policy Gradient with PyTorch
ThomasSimonini
June 30, 2022
deep-rl-pg
rl
https://huggingface.co/blog/deep-rl-pg
# Policy Gradient with PyTorch

<h2>Unit 5 of the <a href="https://github.com/huggingface/deep-rl-class">Deep Reinforcement Learning Class with Hugging Face 🤗</a></h2>

⚠️ A **new updated version of this article is available here** 👉 [https://huggingface.co/deep-rl-course/unit4/introduction](https://huggingface.co/deep-rl-course/unit4/introduction)

*This article is part of the Deep Reinforcement Learning Class. A free course from beginner to expert. Check the syllabus [here.](https://huggingface.co/deep-rl-course/unit0/introduction)*

<img src="assets/85_policy_gradient/thumbnail.gif" alt="Thumbnail"/>

---

[In the last unit](https://huggingface.co/blog/deep-rl-dqn), we learned about Deep Q-Learning. In this value-based Deep Reinforcement Learning algorithm, we **used a deep neural network to approximate the different Q-values for each possible action at a state.**

Indeed, since the beginning of the course, we have only studied value-based methods, **where we estimate a value function as an intermediate step towards finding an optimal policy.**

<img src="https://huggingface.co/blog/assets/70_deep_rl_q_part1/link-value-policy.jpg" alt="Link value policy" />

That's because, in value-based methods, **π exists only because of the action value estimates, since the policy is just a function** (for instance, a greedy policy) that selects the action with the highest value given a state.

But with policy-based methods, we want to optimize the policy directly **without having an intermediate step of learning a value function.**

So today, **we'll study our first Policy-Based method**: Reinforce. And we'll implement it from scratch using PyTorch, before testing its robustness on CartPole-v1, PixelCopter, and Pong.

<figure class="image table text-center m-0 w-full">
  <img src="assets/85_policy_gradient/envs.gif" alt="Environments"/>
</figure>

Let's get started:

- [What are Policy-Gradient Methods?](#what-are-policy-gradient-methods)
- [An Overview of Policy Gradients](#an-overview-of-policy-gradients)
- [The Advantages of Policy-Gradient Methods](#the-advantages-of-policy-gradient-methods)
- [The Disadvantages of Policy-Gradient Methods](#the-disadvantages-of-policy-gradient-methods)
- [Reinforce (Monte Carlo Policy Gradient)](#reinforce-monte-carlo-policy-gradient)

## What are Policy-Gradient Methods?

Policy-Gradient is a subclass of Policy-Based Methods, a category of algorithms that **aim to optimize the policy directly, without using a value function, using different techniques.** The difference with Policy-Based Methods is that Policy-Gradient methods are a series of algorithms that aim to optimize the policy directly **by estimating the weights of the optimal policy using Gradient Ascent.**

### An Overview of Policy Gradients

Why do we optimize the policy directly by estimating the weights of an optimal policy using Gradient Ascent in Policy-Gradient methods?

Remember that reinforcement learning aims **to find an optimal behavior strategy (policy) to maximize its expected cumulative reward.**

We also need to remember that a policy is a function that, **given a state, outputs a distribution over actions** (in our case using a stochastic policy).
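To make this concrete before diving into the math, here is a minimal sketch (not the course's final implementation) of such a stochastic policy in PyTorch: a small network that maps a state to action probabilities and samples from them. The layer sizes are arbitrary choices for illustration.

```python
# A minimal sketch of a stochastic policy network in PyTorch (not the course's
# final implementation): the network maps a state to a probability distribution
# over actions, and act() samples an action and keeps its log-probability.
import torch
import torch.nn as nn
from torch.distributions import Categorical

class Policy(nn.Module):
    def __init__(self, state_size: int, action_size: int, hidden_size: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_size, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, action_size),
            nn.Softmax(dim=-1),  # output a distribution over actions
        )

    def act(self, state):
        probs = self.net(torch.as_tensor(state, dtype=torch.float32))
        dist = Categorical(probs)   # stochastic policy π(a|s)
        action = dist.sample()      # sample instead of taking an argmax
        return action.item(), dist.log_prob(action)
```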
<figure class="image table text-center m-0 w-full"> <img src="https://huggingface.co/blog/assets/63_deep_rl_intro/pbm_2.jpg" alt="Stochastic Policy"/> </figure> Our goal with Policy-Gradients is to control the probability distribution of actions by tuning the policy such that **good actions (that maximize the return) are sampled more frequently in the future.** Let’s take a simple example: - We collect an episode by letting our policy interact with its environment. - We then look at the sum of rewards of the episode (expected return). If this sum is positive, we **consider that the actions taken during the episodes were good:** Therefore, we want to increase the P(a|s) (probability of taking that action at that state) for each state-action pair. The Policy Gradient algorithm (simplified) looks like this: <figure class="image table text-center m-0 w-full"> <img src="assets/85_policy_gradient/pg_bigpicture.jpg" alt="Policy Gradient Big Picture"/> </figure> But Deep Q-Learning is excellent! Why use policy gradient methods? ### The Advantages of Policy-Gradient Methods There are multiple advantages over Deep Q-Learning methods. Let's see some of them: 1. The simplicity of the integration: **we can estimate the policy directly without storing additional data (action values).** 2. Policy gradient methods can **learn a stochastic policy while value functions can't**. This has two consequences: a. We **don't need to implement an exploration/exploitation trade-off by hand**. Since we output a probability distribution over actions, the agent explores **the state space without always taking the same trajectory.** b. We also get rid of the problem of **perceptual aliasing**. Perceptual aliasing is when two states seem (or are) the same but need different actions. Let's take an example: we have an intelligent vacuum cleaner whose goal is to suck the dust and avoid killing the hamsters. <figure class="image table text-center m-0 w-full"> <img src="assets/85_policy_gradient/hamster1.jpg" alt="Hamster 1"/> </figure> Our vacuum cleaner can only perceive where the walls are. The problem is that the two red cases are aliased states because the agent perceives an upper and lower wall for each. <figure class="image table text-center m-0 w-full"> <img src="assets/85_policy_gradient/hamster2.jpg" alt="Hamster 1"/> </figure> Under a deterministic policy, the policy will either move right when in a red state or move left. **Either case will cause our agent to get stuck and never suck the dust**. Under a value-based RL algorithm, we learn a quasi-deterministic policy ("greedy epsilon strategy"). Consequently, our agent can spend a lot of time before finding the dust. On the other hand, an optimal stochastic policy will randomly move left or right in grey states. Consequently, **it will not be stuck and will reach the goal state with a high probability**. <figure class="image table text-center m-0 w-full"> <img src="assets/85_policy_gradient/hamster3.jpg" alt="Hamster 1"/> </figure> 3. Policy gradients are **more effective in high-dimensional action spaces and continuous actions spaces** Indeed, the problem with Deep Q-learning is that their **predictions assign a score (maximum expected future reward) for each possible action**, at each time step, given the current state. But what if we have an infinite possibility of actions? For instance, with a self-driving car, at each state, you can have a (near) infinite choice of actions (turning the wheel at 15°, 17.2°, 19,4°, honking, etc.). 
We'll need to output a Q-value for each possible action! And taking the max over a continuous action output is an optimization problem in itself!

Instead, with a policy gradient, we output a **probability distribution over actions.**

### The Disadvantages of Policy-Gradient Methods

Naturally, Policy-Gradient methods also have some disadvantages:

- **Policy gradients frequently converge to a local maximum instead of the global optimum.**
- Policy gradient goes slower, **step by step: it can take longer to train (inefficient).**
- Policy gradient can have high variance (the solution is to use a baseline).

👉 If you want to go deeper into why Policy-Gradient methods have these advantages and disadvantages, [you can check this video](https://youtu.be/y3oqOjHilio).

Now that we have seen the big picture of Policy-Gradient methods and their advantages and disadvantages, **let's study and implement one of them**: Reinforce.

## Reinforce (Monte Carlo Policy Gradient)

Reinforce, also called Monte-Carlo Policy Gradient, **uses an estimated return from an entire episode to update the policy parameter** \\(\theta\\).

We have our policy π which has a parameter θ. This π, given a state, **outputs a probability distribution over actions**.

<figure class="image table text-center m-0 w-full">
  <img src="assets/85_policy_gradient/policy.jpg" alt="Policy"/>
</figure>

Where \\(\pi_\theta(a_t|s_t)\\) is the probability of the agent selecting action \\(a_t\\) from state \\(s_t\\), given our policy.

**But how do we know if our policy is good?** We need to have a way to measure it. To measure that, we define a score/objective function called \\(J(\theta)\\).

The score function J is the expected return:

<figure class="image table text-center m-0 w-full">
  <img src="assets/85_policy_gradient/objective.jpg" alt="Return"/>
</figure>

Remember that policy gradient can be seen as an optimization problem. So we must find the best parameters (θ) to maximize the score function, J(θ).

To do that we’re going to use the [Policy Gradient Theorem](https://www.youtube.com/watch?v=AKbX1Zvo7r8). I’m not going to dive into the mathematical details, but if you’re interested, check [this video](https://www.youtube.com/watch?v=AKbX1Zvo7r8).

The Reinforce algorithm works like this:

Loop:
- Use the policy \\(\pi_\theta\\) to collect an episode \\(\tau\\)
- Use the episode to estimate the gradient \\(\hat{g} = \nabla_\theta J(\theta)\\)

<figure class="image table text-center m-0 w-full">
  <img src="assets/85_policy_gradient/pg.jpg" alt="Policy Gradient"/>
</figure>

- Update the weights of the policy: \\(\theta \leftarrow \theta + \alpha \hat{g}\\)

The interpretation we can make is this:

- \\(\nabla_\theta \log \pi_\theta(a_t|s_t)\\) is the direction of **steepest increase of the (log) probability** of selecting action \\(a_t\\) from state \\(s_t\\). => This tells us **how we should change the weights of the policy** if we want to increase/decrease the log probability of selecting action \\(a_t\\) at state \\(s_t\\).
- \\(R(\tau)\\) is the scoring function:
  - If the return is high, it will push up the probabilities of the (state, action) combinations.
  - Else, if the return is low, it will push down the probabilities of the (state, action) combinations.

Now that we studied the theory behind Reinforce, **you’re ready to code your Reinforce agent with PyTorch**. And you'll test its robustness using CartPole-v1, PixelCopter, and Pong.
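To see how the pieces fit together before you open the notebook, here is a condensed sketch of the Reinforce loop. It reuses the `Policy` class sketched earlier, assumes the classic Gym API (where `env.reset()` returns an observation and `env.step()` returns a 4-tuple), and the full tutorial refines it further; it is not the tutorial's solution.

```python
# A condensed sketch of the Reinforce update (not the full tutorial solution),
# reusing the Policy class sketched earlier. Assumes the classic Gym API.
import gym
import torch

env = gym.make("CartPole-v1")
policy = Policy(env.observation_space.shape[0], env.action_space.n)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)
gamma = 0.99

for episode in range(1000):
    log_probs, rewards = [], []
    state, done = env.reset(), False

    # 1. Use the policy to collect an episode tau
    while not done:
        action, log_prob = policy.act(state)
        state, reward, done, _ = env.step(action)
        log_probs.append(log_prob)
        rewards.append(reward)

    # 2. Estimate the gradient: the discounted episode return R(tau) weights
    #    the log-probabilities of every action taken during the episode.
    episode_return = sum(gamma**t * r for t, r in enumerate(rewards))
    loss = -episode_return * torch.stack(log_probs).sum()

    # 3. Gradient ascent on J(theta), done as gradient descent on -J(theta)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```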
Start the tutorial here 👉 https://colab.research.google.com/github/huggingface/deep-rl-class/blob/main/unit5/unit5.ipynb

The leaderboard to compare your results with your classmates 🏆 👉 https://huggingface.co/spaces/chrisjay/Deep-Reinforcement-Learning-Leaderboard

<figure class="image table text-center m-0 w-full">
  <img src="assets/85_policy_gradient/envs.gif" alt="Environments"/>
</figure>

---

Congrats on finishing this chapter! There was a lot of information. And congrats on finishing the tutorial. You’ve just coded your first Deep Reinforcement Learning agent from scratch using PyTorch and shared it on the Hub 🥳.

It's **normal if you still feel confused** by all these elements. **It was the same for me and for everyone who has studied RL.**

Take time to really grasp the material before continuing. Don't hesitate to train your agent in other environments. The **best way to learn is to try things on your own!**

We published additional readings in the syllabus if you want to go deeper 👉 **[https://github.com/huggingface/deep-rl-class/blob/main/unit5/README.md](https://github.com/huggingface/deep-rl-class/blob/main/unit5/README.md)**

In the next unit, we’re going to learn about a combination of Policy-Based and Value-Based methods called Actor-Critic methods.

And don't forget to share this with your friends who want to learn 🤗!

Finally, we want **to improve and update the course iteratively with your feedback**. If you have some, please fill out this form 👉 **[https://forms.gle/3HgA7bEHwAmmLfwh9](https://forms.gle/3HgA7bEHwAmmLfwh9)**

### **Keep learning, stay awesome 🤗,**