Advanced Interface features[[advanced-interface-features]] <CourseFloatingBanner chapter={9} classNames="absolute z-10 right-0 top-0" notebooks={[ {label: "Google Colab", value: "https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/en/chapter9/section6.ipynb"}, {label: "Aws Studio", value: "https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/master/course/en/chapter9/section6.ipynb"}, ]} /> Now that we can build and share a basic interface, let's explore some more advanced features such as state, and interpretation. ### Using state to persist data[[using-state-to-persist-data]] Gradio supports *session state*, where data persists across multiple submits within a page load. Session state is useful for building demos of, for example, chatbots where you want to persist data as the user interacts with the model. Note that session state does not share data between different users of your model. To store data in a session state, you need to do three things: 1. Pass in an *extra parameter* into your function, which represents the state of the interface. 1. At the end of the function, return the updated value of the state as an *extra return value*. 1. Add the 'state' input and 'state' output components when creating your `Interface`. See the chatbot example below: ```py import random import gradio as gr def chat(message, history): history = history or [] if message.startswith("How many"): response = random.randint(1, 10) elif message.startswith("How"): response = random.choice(["Great", "Good", "Okay", "Bad"]) elif message.startswith("Where"): response = random.choice(["Here", "There", "Somewhere"]) else: response = "I don't know" history.append((message, response)) return history, history iface = gr.Interface( chat, ["text", "state"], ["chatbot", "state"], allow_screenshot=False, allow_flagging="never", ) iface.launch() ``` <iframe src="https://course-demos-Chatbot-Demo.hf.space" frameBorder="0" height="350" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe> Notice how the state of the output component persists across submits. Note: you can pass in a default value to the state parameter, which is used as the initial value of the state. ### Using interpretation to understand predictions[[using-interpretation-to-understand-predictions]] Most machine learning models are black boxes and the internal logic of the function is hidden from the end user. To encourage transparency, we've made it very easy to add interpretation to your model by simply setting the interpretation keyword in the Interface class to default. This allows your users to understand what parts of the input are responsible for the output. Take a look at the simple interface below which shows an image classifier that also includes interpretation: ```py import requests import tensorflow as tf import gradio as gr inception_net = tf.keras.applications.MobileNetV2() # load the model # Download human-readable labels for ImageNet. 
response = requests.get("https://git.io/JJkYN") labels = response.text.split("\n") def classify_image(inp): inp = inp.reshape((-1, 224, 224, 3)) inp = tf.keras.applications.mobilenet_v2.preprocess_input(inp) prediction = inception_net.predict(inp).flatten() return {labels[i]: float(prediction[i]) for i in range(1000)} image = gr.Image(shape=(224, 224)) label = gr.Label(num_top_classes=3) title = "Gradio Image Classification + Interpretation Example" gr.Interface( fn=classify_image, inputs=image, outputs=label, interpretation="default", title=title ).launch() ``` Test the interpretation function by submitting an input and then clicking Interpret under the output component. <iframe src="https://course-demos-gradio-image-interpretation.hf.space" frameBorder="0" height="570" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe> Besides the default interpretation method Gradio provides, you can also specify `shap` for the `interpretation` parameter and set the `num_shap` parameter. This uses Shapley-based interpretation, which you can read more about [here](https://christophm.github.io/interpretable-ml-book/shap.html). Lastly, you can also pass your own interpretation function into the `interpretation` parameter. See an example on Gradio's getting started page [here](https://gradio.app/getting_started/). This wraps up our deep dive into the `Interface` class of Gradio. As we've seen, this class makes it simple to create machine learning demos in a few lines of Python code. However, sometimes you'll want to customize your demo by changing the layout or chaining multiple prediction functions together. Wouldn't it be nice if we could somehow split the `Interface` into customizable "blocks"? Fortunately, we can! That's the topic of the final section.
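As a small illustration of the note on default state values, here is a minimal sketch (assuming a Gradio 3.x release where `gr.State` accepts an initial `value`; the greeting seeded into the history and the echo response are just examples) of starting each session with a pre-filled chat history:

```py
import gradio as gr


def echo_chat(message, history):
    history = history or []
    response = "You said: " + message  # placeholder for real model logic
    history.append((message, response))
    return history, history


iface = gr.Interface(
    echo_chat,
    # gr.State(value=...) seeds the per-session history instead of the bare "state" shortcut
    inputs=["text", gr.State(value=[("Hi", "Hello! Ask me something.")])],
    outputs=["chatbot", gr.State()],
    allow_flagging="never",
)
iface.launch()
```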
huggingface/course/blob/main/chapters/en/chapter9/6.mdx
Gradio Demo: bokeh_plot ``` !pip install -q gradio bokeh>=3.0 xyzservices ``` ``` import gradio as gr import xyzservices.providers as xyz from bokeh.models import ColumnDataSource, Whisker from bokeh.plotting import figure from bokeh.sampledata.autompg2 import autompg2 as df from bokeh.sampledata.penguins import data from bokeh.transform import factor_cmap, jitter, factor_mark def get_plot(plot_type): if plot_type == "map": plot = figure( x_range=(-2000000, 6000000), y_range=(-1000000, 7000000), x_axis_type="mercator", y_axis_type="mercator", ) plot.add_tile(xyz.OpenStreetMap.Mapnik) return plot elif plot_type == "whisker": classes = list(sorted(df["class"].unique())) p = figure( height=400, x_range=classes, background_fill_color="#efefef", title="Car class vs HWY mpg with quintile ranges", ) p.xgrid.grid_line_color = None g = df.groupby("class") upper = g.hwy.quantile(0.80) lower = g.hwy.quantile(0.20) source = ColumnDataSource(data=dict(base=classes, upper=upper, lower=lower)) error = Whisker( base="base", upper="upper", lower="lower", source=source, level="annotation", line_width=2, ) error.upper_head.size = 20 error.lower_head.size = 20 p.add_layout(error) p.circle( jitter("class", 0.3, range=p.x_range), "hwy", source=df, alpha=0.5, size=13, line_color="white", color=factor_cmap("class", "Light6", classes), ) return p elif plot_type == "scatter": SPECIES = sorted(data.species.unique()) MARKERS = ["hex", "circle_x", "triangle"] p = figure(title="Penguin size", background_fill_color="#fafafa") p.xaxis.axis_label = "Flipper Length (mm)" p.yaxis.axis_label = "Body Mass (g)" p.scatter( "flipper_length_mm", "body_mass_g", source=data, legend_group="species", fill_alpha=0.4, size=12, marker=factor_mark("species", MARKERS, SPECIES), color=factor_cmap("species", "Category10_3", SPECIES), ) p.legend.location = "top_left" p.legend.title = "Species" return p with gr.Blocks() as demo: with gr.Row(): plot_type = gr.Radio(value="scatter", choices=["scatter", "whisker", "map"]) plot = gr.Plot() plot_type.change(get_plot, inputs=[plot_type], outputs=[plot]) demo.load(get_plot, inputs=[plot_type], outputs=[plot]) if __name__ == "__main__": demo.launch() ```
gradio-app/gradio/blob/main/demo/bokeh_plot/run.ipynb
p align="center"> <br/> <img alt="huggingface_hub library logo" src="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/huggingface_hub.svg" width="376" height="59" style="max-width: 100%;"> <br/> </p> <p align="center"> <i>The official Python client for the Huggingface Hub.</i> </p> <p align="center"> <a href="https://huggingface.co/docs/huggingface_hub/en/index"><img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/huggingface_hub/index.svg?down_color=red&down_message=offline&up_message=online&label=doc"></a> <a href="https://github.com/huggingface/huggingface_hub/releases"><img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/huggingface_hub.svg"></a> <a href="https://github.com/huggingface/huggingface_hub"><img alt="PyPi version" src="https://img.shields.io/pypi/pyversions/huggingface_hub.svg"></a> <a href="https://pypi.org/project/huggingface-hub"><img alt="downloads" src="https://static.pepy.tech/badge/huggingface_hub/month"></a> <a href="https://codecov.io/gh/huggingface/huggingface_hub"><img alt="Code coverage" src="https://codecov.io/gh/huggingface/huggingface_hub/branch/main/graph/badge.svg?token=RXP95LE2XL"></a> </p> <h4 align="center"> <p> <b>English</b> | <a href="https://github.com/huggingface/huggingface_hub/blob/main/README_de.md">Deutsch</a> | <a href="https://github.com/huggingface/huggingface_hub/blob/main/README_hi.md">हिंदी</a> | <a href="https://github.com/huggingface/huggingface_hub/blob/main/README_ko.md">한국어</a> | <a href="https://github.com/huggingface/huggingface_hub/blob/main/README_cn.md">中文(简体)</a> <p> </h4> --- **Documentation**: <a href="https://hf.co/docs/huggingface_hub" target="_blank">https://hf.co/docs/huggingface_hub</a> **Source Code**: <a href="https://github.com/huggingface/huggingface_hub" target="_blank">https://github.com/huggingface/huggingface_hub</a> --- ## Welcome to the huggingface_hub library The `huggingface_hub` library allows you to interact with the [Hugging Face Hub](https://huggingface.co/), a platform democratizing open-source Machine Learning for creators and collaborators. Discover pre-trained models and datasets for your projects or play with the thousands of machine learning apps hosted on the Hub. You can also create and share your own models, datasets and demos with the community. The `huggingface_hub` library provides a simple way to do all these things with Python. ## Key features - [Download files](https://huggingface.co/docs/huggingface_hub/en/guides/download) from the Hub. - [Upload files](https://huggingface.co/docs/huggingface_hub/en/guides/upload) to the Hub. - [Manage your repositories](https://huggingface.co/docs/huggingface_hub/en/guides/repository). - [Run Inference](https://huggingface.co/docs/huggingface_hub/en/guides/inference) on deployed models. - [Search](https://huggingface.co/docs/huggingface_hub/en/guides/search) for models, datasets and Spaces. - [Share Model Cards](https://huggingface.co/docs/huggingface_hub/en/guides/model-cards) to document your models. - [Engage with the community](https://huggingface.co/docs/huggingface_hub/en/guides/community) through PRs and comments. ## Installation Install the `huggingface_hub` package with [pip](https://pypi.org/project/huggingface-hub/): ```bash pip install huggingface_hub ``` If you prefer, you can also install it with [conda](https://huggingface.co/docs/huggingface_hub/en/installation#install-with-conda). 
In order to keep the package minimal by default, `huggingface_hub` comes with optional dependencies useful for some use cases. For example, if you want to have the complete experience for Inference, run: ```bash pip install huggingface_hub[inference] ``` To learn more about installation and optional dependencies, check out the [installation guide](https://huggingface.co/docs/huggingface_hub/en/installation). ## Quick start ### Download files Download a single file ```py from huggingface_hub import hf_hub_download hf_hub_download(repo_id="tiiuae/falcon-7b-instruct", filename="config.json") ``` Or an entire repository ```py from huggingface_hub import snapshot_download snapshot_download("stabilityai/stable-diffusion-2-1") ``` Files will be downloaded to a local cache folder. More details in [this guide](https://huggingface.co/docs/huggingface_hub/en/guides/manage-cache). ### Login The Hugging Face Hub uses tokens to authenticate applications (see [docs](https://huggingface.co/docs/hub/security-tokens)). To log in on your machine, run the following CLI command: ```bash huggingface-cli login # or using an environment variable huggingface-cli login --token $HUGGINGFACE_TOKEN ``` ### Create a repository ```py from huggingface_hub import create_repo create_repo(repo_id="super-cool-model") ``` ### Upload files Upload a single file ```py from huggingface_hub import upload_file upload_file( path_or_fileobj="/home/lysandre/dummy-test/README.md", path_in_repo="README.md", repo_id="lysandre/test-model", ) ``` Or an entire folder ```py from huggingface_hub import upload_folder upload_folder( folder_path="/path/to/local/space", repo_id="username/my-cool-space", repo_type="space", ) ``` For more details, check out the [upload guide](https://huggingface.co/docs/huggingface_hub/en/guides/upload). ## Integrating with the Hub We're partnering with cool open source ML libraries to provide free model hosting and versioning. You can find the existing integrations [here](https://huggingface.co/docs/hub/libraries). The advantages are: - Free model or dataset hosting for libraries and their users. - Built-in file versioning, even with very large files, thanks to a git-based approach. - Hosted inference API for all models publicly available. - In-browser widgets to play with the uploaded models. - Anyone can upload a new model for your library; they just need to add the corresponding tag for the model to be discoverable. - Fast downloads! We use Cloudfront (a CDN) to geo-replicate downloads so they're blazing fast from anywhere on the globe. - Usage stats and more features to come. If you would like to integrate your library, feel free to open an issue to begin the discussion. We wrote a [step-by-step guide](https://huggingface.co/docs/hub/adding-a-library) with ❤️ showing how to do this integration. ## Contributions (feature requests, bugs, etc.) are super welcome 💙💚💛💜🧡❤️ Everyone is welcome to contribute, and we value everybody's contribution. Code is not the only way to help the community. Answering questions, helping others, reaching out and improving the documentation are immensely valuable to the community. We wrote a [contribution guide](https://github.com/huggingface/huggingface_hub/blob/main/CONTRIBUTING.md) to summarize how to get started contributing to this repository.
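Tying the quick start snippets above together, here is a small end-to-end sketch (the repository name and local file path are made up for illustration): it logs in, creates a repo, uploads a file, and downloads it back through the cache.

```py
from huggingface_hub import create_repo, hf_hub_download, login, upload_file

login(token="hf_...")  # or rely on a prior `huggingface-cli login`

repo_id = "username/my-test-model"  # hypothetical repository name
create_repo(repo_id=repo_id, exist_ok=True)  # no-op if the repo already exists

upload_file(
    path_or_fileobj="./README.md",  # any local file
    path_in_repo="README.md",
    repo_id=repo_id,
)

# The file lands in the local cache folder mentioned above
local_path = hf_hub_download(repo_id=repo_id, filename="README.md")
print(local_path)
```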
huggingface/huggingface_hub/blob/main/README.md
# JavaScript Client Library A JavaScript (and TypeScript) client to call Gradio APIs. ## Installation The Gradio JavaScript client is available on npm as `@gradio/client`. You can install it as below: ```sh npm i @gradio/client ``` ## Usage The JavaScript Gradio Client exposes two named imports, `client` and `duplicate`. ### `client` The client function connects to the API of a hosted Gradio space and returns an object that allows you to make calls to that API. The simplest example looks like this: ```ts import { client } from "@gradio/client"; const app = await client("user/space-name"); const result = await app.predict("/predict"); ``` This function accepts two arguments: `source` and `options`: #### `source` This is the url or name of the gradio app whose API you wish to connect to. This parameter is required and should always be a string. For example: ```ts client("user/space-name"); ``` #### `options` An options object can optionally be passed as the second parameter. This object has two properties, `hf_token` and `status_callback`. ##### `hf_token` This should be a Hugging Face personal access token and is required if you wish to make calls to a private gradio api. This option is optional and should be a string starting with `"hf_"`. Example: ```ts import { client } from "@gradio/client"; const app = await client("user/space-name", { hf_token: "hf_..." }); ``` ##### `status_callback` This should be a function which will notify you of the status of a space if it is not running. If the gradio API you are connecting to is awake and running or is not hosted on a Hugging Face Space then this function will do nothing. **Additional context** Applications hosted on Hugging Face spaces can be in a number of different states. As spaces are a GitOps tool and will rebuild when new changes are pushed to the repository, they have various building, running and error states. If a space is not 'running' then the function passed as the `status_callback` will notify you of the current state of the space and the status of the space as it changes. Spaces that are building or sleeping can take longer than usual to respond, so you can use this information to give users feedback about the progress of their action. ```ts import { client, type SpaceStatus } from "@gradio/client"; const app = await client("user/space-name", { // The space_status parameter does not need to be manually annotated, this is just for illustration. space_status: (space_status: SpaceStatus) => console.log(space_status) }); ``` ```ts interface SpaceStatusNormal { status: "sleeping" | "running" | "building" | "error" | "stopped"; detail: | "SLEEPING" | "RUNNING" | "RUNNING_BUILDING" | "BUILDING" | "NOT_FOUND"; load_status: "pending" | "error" | "complete" | "generating"; message: string; } interface SpaceStatusError { status: "space_error"; detail: "NO_APP_FILE" | "CONFIG_ERROR" | "BUILD_ERROR" | "RUNTIME_ERROR"; load_status: "error"; message: string; discussions_enabled: boolean; } type SpaceStatus = SpaceStatusNormal | SpaceStatusError; ``` The gradio client returns an object with a number of methods and properties: #### `predict` The `predict` method allows you to call an api endpoint and get a prediction result: ```ts import { client } from "@gradio/client"; const app = await client("user/space-name"); const result = await app.predict("/predict"); ``` `predict` accepts two parameters, `endpoint` and `payload`. It returns a promise that resolves to the prediction result.
##### `endpoint` This is the endpoint for an api request and is required. The default endpoint for a `gradio.Interface` is `"/predict"`. Explicitly named endpoints have a custom name. The endpoint names can be found on the "View API" page of a space. ```ts import { client } from "@gradio/client"; const app = await client("user/space-name"); const result = await app.predict("/predict"); ``` ##### `payload` The `payload` argument is generally optional but this depends on the API itself. If the API endpoint depends on values being passed in then it is required for the API request to succeed. The data that should be passed in is detailed on the "View API" page of a space, or accessible via the `view_api()` method of the client. ```ts import { client } from "@gradio/client"; const app = await client("user/space-name"); const result = await app.predict("/predict", [1, "Hello", "friends"]); ``` #### `submit` The `submit` method provides a more flexible way to call an API endpoint, providing you with status updates about the current progress of the prediction as well as supporting more complex endpoint types. ```ts import { client } from "@gradio/client"; const app = await client("user/space-name"); const submission = app.submit("/predict", payload); ``` The `submit` method accepts the same [`endpoint`](#endpoint) and [`payload`](#payload) arguments as `predict`. The `submit` method does not return a promise and should not be awaited; instead it returns an object with `on`, `off`, and `cancel` methods. ##### `on` The `on` method allows you to subscribe to events related to the submitted API request. There are two types of events that can be subscribed to: `"data"` updates and `"status"` updates. `"data"` updates are issued when the API computes a value; the callback provided as the second argument will be called when such a value is sent to the client. The shape of the data depends on the way the API itself is constructed. This event may fire more than once if that endpoint supports emitting new values over time. `"status"` updates are issued when the status of a request changes. This information allows you to offer feedback to users when the queue position of the request changes, or when the request changes from queued to processing.
The status payload looks like this: ```ts interface Status { queue: boolean; code?: string; success?: boolean; stage: "pending" | "error" | "complete" | "generating"; size?: number; position?: number; eta?: number; message?: string; progress_data?: Array<{ progress: number | null; index: number | null; length: number | null; unit: string | null; desc: string | null; }>; time?: Date; } ``` Usage of these subscription callbacks looks like this: ```ts import { client } from "@gradio/client"; const app = await client("user/space-name"); const submission = app .submit("/predict", payload) .on("data", (data) => console.log(data)) .on("status", (status: Status) => console.log(status)); ``` ##### `off` The `off` method unsubscribes from a specific event of the submitted job and works similarly to `document.removeEventListener`; both the event name and the original callback must be passed in to successfully unsubscribe: ```ts import { client } from "@gradio/client"; const app = await client("user/space-name"); const handle_data = (data) => console.log(data); const submission = app.submit("/predict", payload).on("data", handle_data); // later submission.off("data", handle_data); ``` ##### `destroy` The `destroy` method will remove all subscriptions to a job, regardless of whether or not they are `"data"` or `"status"` events. This is a convenience method for when you do not want to unsubscribe from each event individually with the `off` method. ```js import { client } from "@gradio/client"; const app = await client("user/space-name"); const handle_data = (data) => console.log(data); const submission = app.submit("/predict", payload).on("data", handle_data); // later submission.destroy(); ``` ##### `cancel` Certain types of gradio functions can run repeatedly and in some cases indefinitely. The `cancel` method will stop such endpoints and prevent the API from issuing additional updates. ```ts import { client } from "@gradio/client"; const app = await client("user/space-name"); const submission = app .submit("/predict", payload) .on("data", (data) => console.log(data)); // later submission.cancel(); ``` #### `view_api` The `view_api` method provides details about the API you are connected to. It returns a JavaScript object of all named endpoints, unnamed endpoints and what values they accept and return. This method does not accept arguments. ```ts import { client } from "@gradio/client"; const app = await client("user/space-name"); const api_info = await app.view_api(); console.log(api_info); ``` #### `config` The `config` property contains the configuration for the gradio application you are connected to. This object may contain useful meta information about the application. ```ts import { client } from "@gradio/client"; const app = await client("user/space-name"); console.log(app.config); ``` ### `duplicate` The duplicate function will attempt to duplicate the space that is referenced and return an instance of `client` connected to that space. If the space has already been duplicated then it will not create a new duplicate and will instead connect to the existing duplicated space. The Hugging Face token that is passed in will dictate the user under which the space is created. `duplicate` accepts the same arguments as `client` with the addition of a `private` options property dictating whether the duplicated space should be private or public. A Hugging Face token is required for duplication to work. ```ts import { duplicate } from "@gradio/client"; const app = await duplicate("user/space-name", { hf_token: "hf_..."
}); ``` This function accepts two arguments: `source` and `options`: #### `source` The space to duplicate and connect to. [See `client`'s `source` parameter](#source). #### `options` Accepts all options that `client` accepts, except `hf_token` is required. [See `client`'s `options` parameter](#source). `duplicate` also accepts one additional `options` property. ##### `private` This is an optional property specific to `duplicate`'s options object and will determine whether the space should be public or private. Spaces duplicated via the `duplicate` method are public by default. ```ts import { duplicate } from "@gradio/client"; const app = await duplicate("user/space-name", { hf_token: "hf_...", private: true }); ``` ##### `timeout` This is an optional property specific to `duplicate`'s options object and will set the timeout in minutes before the duplicated space will go to sleep. ```ts import { duplicate } from "@gradio/client"; const app = await duplicate("user/space-name", { hf_token: "hf_...", private: true, timeout: 5 }); ``` ##### `hardware` This is an optional property specific to `duplicate`'s options object and will set the hardware for the duplicated space. By default the hardware used will match that of the original space. If this cannot be obtained it will default to `"cpu-basic"`. For hardware upgrades (beyond the basic CPU tier), you may be required to provide [billing information on Hugging Face](https://huggingface.co/settings/billing). Possible hardware options are: - `"cpu-basic"` - `"cpu-upgrade"` - `"t4-small"` - `"t4-medium"` - `"a10g-small"` - `"a10g-large"` - `"a100-large"` ```ts import { duplicate } from "@gradio/client"; const app = await duplicate("user/space-name", { hf_token: "hf_...", private: true, hardware: "a10g-small" }); ```
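Putting the client pieces together, here is a rough end-to-end sketch (the space name, payload, and the 60-second cutoff are placeholders): it inspects the available endpoints, submits a job, listens for updates, and cancels the job if it runs too long.

```ts
import { client } from "@gradio/client";

const app = await client("user/space-name", { hf_token: "hf_..." });

// Inspect the named endpoints and the payload shapes they expect
console.log(await app.view_api());

const submission = app
  .submit("/predict", ["some input"])
  .on("data", (data) => console.log("result:", data))
  .on("status", (status) => console.log("stage:", status.stage));

// Give up after 60 seconds
setTimeout(() => submission.cancel(), 60_000);
```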
gradio-app/gradio/blob/main/client/js/README.md
!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Use tokenizers from 🤗 Tokenizers The [`PreTrainedTokenizerFast`] depends on the [🤗 Tokenizers](https://huggingface.co/docs/tokenizers) library. The tokenizers obtained from the 🤗 Tokenizers library can be loaded very simply into 🤗 Transformers. Before getting in the specifics, let's first start by creating a dummy tokenizer in a few lines: ```python >>> from tokenizers import Tokenizer >>> from tokenizers.models import BPE >>> from tokenizers.trainers import BpeTrainer >>> from tokenizers.pre_tokenizers import Whitespace >>> tokenizer = Tokenizer(BPE(unk_token="[UNK]")) >>> trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"]) >>> tokenizer.pre_tokenizer = Whitespace() >>> files = [...] >>> tokenizer.train(files, trainer) ``` We now have a tokenizer trained on the files we defined. We can either continue using it in that runtime, or save it to a JSON file for future re-use. ## Loading directly from the tokenizer object Let's see how to leverage this tokenizer object in the 🤗 Transformers library. The [`PreTrainedTokenizerFast`] class allows for easy instantiation, by accepting the instantiated *tokenizer* object as an argument: ```python >>> from transformers import PreTrainedTokenizerFast >>> fast_tokenizer = PreTrainedTokenizerFast(tokenizer_object=tokenizer) ``` This object can now be used with all the methods shared by the 🤗 Transformers tokenizers! Head to [the tokenizer page](main_classes/tokenizer) for more information. ## Loading from a JSON file In order to load a tokenizer from a JSON file, let's first start by saving our tokenizer: ```python >>> tokenizer.save("tokenizer.json") ``` The path to which we saved this file can be passed to the [`PreTrainedTokenizerFast`] initialization method using the `tokenizer_file` parameter: ```python >>> from transformers import PreTrainedTokenizerFast >>> fast_tokenizer = PreTrainedTokenizerFast(tokenizer_file="tokenizer.json") ``` This object can now be used with all the methods shared by the 🤗 Transformers tokenizers! Head to [the tokenizer page](main_classes/tokenizer) for more information.
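As a quick sanity check (the sentence is arbitrary and the resulting ids depend entirely on the files the tokenizer was trained on above), the wrapped tokenizer now behaves like any other 🤗 Transformers tokenizer:

```python
>>> encoding = fast_tokenizer("Using tokenizers from 🤗 Tokenizers is easy")
>>> print(encoding.input_ids)  # ids assigned by the BPE model trained above
>>> print(fast_tokenizer.convert_ids_to_tokens(encoding.input_ids))
```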
huggingface/transformers/blob/main/docs/source/en/fast_tokenizers.md
SK-ResNet **SK ResNet** is a variant of a [ResNet](https://www.paperswithcode.com/method/resnet) that employs a [Selective Kernel](https://paperswithcode.com/method/selective-kernel) unit. In general, all the large kernel convolutions in the original bottleneck blocks in ResNet are replaced by the proposed [SK convolutions](https://paperswithcode.com/method/selective-kernel-convolution), enabling the network to choose appropriate receptive field sizes in an adaptive manner. ## How do I use this model on an image? To load a pretrained model: ```py >>> import timm >>> model = timm.create_model('skresnet18', pretrained=True) >>> model.eval() ``` To load and preprocess the image: ```py >>> import urllib >>> from PIL import Image >>> from timm.data import resolve_data_config >>> from timm.data.transforms_factory import create_transform >>> config = resolve_data_config({}, model=model) >>> transform = create_transform(**config) >>> url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg") >>> urllib.request.urlretrieve(url, filename) >>> img = Image.open(filename).convert('RGB') >>> tensor = transform(img).unsqueeze(0) # transform and add batch dimension ``` To get the model predictions: ```py >>> import torch >>> with torch.no_grad(): ... out = model(tensor) >>> probabilities = torch.nn.functional.softmax(out[0], dim=0) >>> print(probabilities.shape) >>> # prints: torch.Size([1000]) ``` To get the top-5 predictions class names: ```py >>> # Get imagenet class mappings >>> url, filename = ("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt", "imagenet_classes.txt") >>> urllib.request.urlretrieve(url, filename) >>> with open("imagenet_classes.txt", "r") as f: ... categories = [s.strip() for s in f.readlines()] >>> # Print top categories per image >>> top5_prob, top5_catid = torch.topk(probabilities, 5) >>> for i in range(top5_prob.size(0)): ... print(categories[top5_catid[i]], top5_prob[i].item()) >>> # prints class names and probabilities like: >>> # [('Samoyed', 0.6425196528434753), ('Pomeranian', 0.04062102362513542), ('keeshond', 0.03186424449086189), ('white wolf', 0.01739676296710968), ('Eskimo dog', 0.011717947199940681)] ``` Replace the model name with the variant you want to use, e.g. `skresnet18`. You can find the IDs in the model summaries at the top of this page. To extract image features with this model, follow the [timm feature extraction examples](../feature_extraction), just change the name of the model you want to use. ## How do I finetune this model? You can finetune any of the pre-trained models just by changing the classifier (the last layer). ```py >>> model = timm.create_model('skresnet18', pretrained=True, num_classes=NUM_FINETUNE_CLASSES) ``` To finetune on your own dataset, you have to write a training loop or adapt [timm's training script](https://github.com/rwightman/pytorch-image-models/blob/master/train.py) to use your dataset. ## How do I train this model? You can follow the [timm recipe scripts](../scripts) for training a new model afresh. 
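Coming back to the feature extraction mentioned above, a minimal sketch reusing the `tensor` prepared earlier might look like this; `num_classes=0` (pooled features) and `features_only=True` (per-stage feature maps) are generic `timm.create_model` options, so double-check them against your installed `timm` version:

```py
>>> import torch
>>> import timm
>>> # Pooled, pre-classifier features
>>> feature_model = timm.create_model('skresnet18', pretrained=True, num_classes=0)
>>> with torch.no_grad():
...     pooled = feature_model(tensor)
>>> print(pooled.shape)  # e.g. torch.Size([1, 512])
>>> # Intermediate feature maps from each network stage
>>> backbone = timm.create_model('skresnet18', pretrained=True, features_only=True)
>>> with torch.no_grad():
...     feature_maps = backbone(tensor)
>>> print([f.shape for f in feature_maps])
```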
## Citation ```BibTeX @misc{li2019selective, title={Selective Kernel Networks}, author={Xiang Li and Wenhai Wang and Xiaolin Hu and Jian Yang}, year={2019}, eprint={1903.06586}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` <!-- Type: model-index Collections: - Name: SKResNet Paper: Title: Selective Kernel Networks URL: https://paperswithcode.com/paper/selective-kernel-networks Models: - Name: skresnet18 In Collection: SKResNet Metadata: FLOPs: 2333467136 Parameters: 11960000 File Size: 47923238 Architecture: - Convolution - Dense Connections - Global Average Pooling - Max Pooling - Residual Connection - Selective Kernel - Softmax Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 8x GPUs ID: skresnet18 LR: 0.1 Epochs: 100 Layers: 18 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 256 Image Size: '224' Weight Decay: 4.0e-05 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/sknet.py#L148 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/skresnet18_ra-4eec2804.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 73.03% Top 5 Accuracy: 91.17% - Name: skresnet34 In Collection: SKResNet Metadata: FLOPs: 4711849952 Parameters: 22280000 File Size: 89299314 Architecture: - Convolution - Dense Connections - Global Average Pooling - Max Pooling - Residual Connection - Selective Kernel - Softmax Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 8x GPUs ID: skresnet34 LR: 0.1 Epochs: 100 Layers: 34 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 256 Image Size: '224' Weight Decay: 4.0e-05 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/sknet.py#L165 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/skresnet34_ra-bdc0ccde.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 76.93% Top 5 Accuracy: 93.32% -->
huggingface/pytorch-image-models/blob/main/hfdocs/source/models/skresnet.mdx
# Stable Diffusion text-to-image fine-tuning The `train_text_to_image.py` script shows how to fine-tune the Stable Diffusion model on your own dataset. ___Note___: ___This script is experimental. The script fine-tunes the whole model and often the model overfits and runs into issues like catastrophic forgetting. It's recommended to try different hyperparameters to get the best result on your dataset.___ ## Running locally with PyTorch ### Installing the dependencies Before running the scripts, make sure to install the library's training dependencies: **Important** To make sure you can successfully run the latest versions of the example scripts, we highly recommend **installing from source** and keeping the install up to date as we update the example scripts frequently and install some example-specific requirements. To do this, execute the following steps in a new virtual environment: ```bash git clone https://github.com/huggingface/diffusers cd diffusers pip install . ``` Then cd into the example folder and run ```bash pip install -r requirements.txt ``` And initialize an [🤗Accelerate](https://github.com/huggingface/accelerate/) environment with: ```bash accelerate config ``` ### Pokemon example You need to accept the model license before downloading or using the weights. In this example we'll use model version `v1-4`, so you'll need to visit [its card](https://huggingface.co/CompVis/stable-diffusion-v1-4), read the license and tick the checkbox if you agree. You have to be a registered user in 🤗 Hugging Face Hub, and you'll also need to use an access token for the code to work. For more information on access tokens, please refer to [this section of the documentation](https://huggingface.co/docs/hub/security-tokens). Run the following command to authenticate your token ```bash huggingface-cli login ``` If you have already cloned the repo, then you won't need to go through these steps. <br> ## Use ONNX Runtime to accelerate training In order to leverage ONNX Runtime to accelerate training, please use `train_text_to_image.py`. The command to train a DDPM UNetCondition model on the Pokemon dataset with ONNX Runtime: ```bash export MODEL_NAME="CompVis/stable-diffusion-v1-4" export dataset_name="lambdalabs/pokemon-blip-captions" accelerate launch --mixed_precision="fp16" train_text_to_image.py \ --pretrained_model_name_or_path=$MODEL_NAME \ --dataset_name=$dataset_name \ --use_ema \ --resolution=512 --center_crop --random_flip \ --train_batch_size=1 \ --gradient_accumulation_steps=4 \ --gradient_checkpointing \ --max_train_steps=15000 \ --learning_rate=1e-05 \ --max_grad_norm=1 \ --lr_scheduler="constant" --lr_warmup_steps=0 \ --output_dir="sd-pokemon-model" ``` Please contact Prathik Rao (prathikr), Sunghoon Choi (hanbitmyths), Ashwini Khade (askhade), or Peng Wang (pengwa) on github with any questions.
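Once training finishes, the `--output_dir` above contains a regular diffusers pipeline, so inference is the usual `from_pretrained` call; a minimal sketch (the prompt and file name are just examples):

```python
import torch
from diffusers import StableDiffusionPipeline

model_path = "sd-pokemon-model"  # the --output_dir used above
pipe = StableDiffusionPipeline.from_pretrained(model_path, torch_dtype=torch.float16)
pipe.to("cuda")

image = pipe(prompt="yoda").images[0]
image.save("yoda-pokemon.png")
```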
huggingface/diffusers/blob/main/examples/research_projects/onnxruntime/text_to_image/README.md
<!--- Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # Token classification Fine-tuning the library models for token classification tasks such as Named Entity Recognition (NER), Part-of-speech tagging (POS) or phrase extraction (CHUNKS). The main script `run_ner.py` leverages the [🤗 Datasets](https://github.com/huggingface/datasets) library. You can easily customize it to your needs if you need extra processing on your datasets. It will either run on a dataset hosted on our [hub](https://huggingface.co/datasets) or with your own text files for training and validation; you might just need to add some tweaks in the data preprocessing. The following example fine-tunes BERT on CoNLL-2003: ```bash python run_ner.py \ --model_name_or_path bert-base-uncased \ --dataset_name conll2003 \ --output_dir /tmp/test-ner ``` To run on your own training and validation files, use the following command: ```bash python run_ner.py \ --model_name_or_path bert-base-uncased \ --train_file path_to_train_file \ --validation_file path_to_validation_file \ --output_dir /tmp/test-ner ``` **Note:** This script only works with models that have a fast tokenizer (backed by the [🤗 Tokenizers](https://github.com/huggingface/tokenizers) library) as it uses special features of those tokenizers. You can check if your favorite model has a fast tokenizer in [this table](https://huggingface.co/transformers/index.html#supported-frameworks).
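After training, the checkpoint written to `--output_dir` can be used directly with the `pipeline` API; here is a small sketch (the example sentence is arbitrary, and you may need to pass `framework="tf"` if the checkpoint was saved by the TensorFlow example script):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="/tmp/test-ner",  # the --output_dir used above
    aggregation_strategy="simple",  # merge sub-tokens into whole entities
)
print(ner("Hugging Face is based in New York City."))
```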
huggingface/transformers/blob/main/examples/tensorflow/token-classification/README.md
Files in this directory are used in: - tests for the gradio library - example inputs in the view API documentation Warning: please be careful when renaming / deleting files
gradio-app/gradio/blob/main/test/test_files/README.md
How to Use the 3D Model Component Related spaces: https://huggingface.co/spaces/gradio/Model3D, https://huggingface.co/spaces/gradio/PIFu-Clothed-Human-Digitization, https://huggingface.co/spaces/gradio/dpt-depth-estimation-3d-obj Tags: VISION, IMAGE ## Introduction 3D models are becoming more popular in machine learning and make for some of the most fun demos to experiment with. Using `gradio`, you can easily build a demo of your 3D image model and share it with anyone. The Gradio 3D Model component accepts 3 file types including: _.obj_, _.glb_, & _.gltf_. This guide will show you how to build a demo for your 3D image model in a few lines of code; like the one below. Play around with 3D object by clicking around, dragging and zooming: <gradio-app space="gradio/Model3D"> </gradio-app> ### Prerequisites Make sure you have the `gradio` Python package already [installed](https://gradio.app/guides/quickstart). ## Taking a Look at the Code Let's take a look at how to create the minimal interface above. The prediction function in this case will just return the original 3D model mesh, but you can change this function to run inference on your machine learning model. We'll take a look at more complex examples below. ```python import gradio as gr import os def load_mesh(mesh_file_name): return mesh_file_name demo = gr.Interface( fn=load_mesh, inputs=gr.Model3D(), outputs=gr.Model3D( clear_color=[0.0, 0.0, 0.0, 0.0], label="3D Model"), examples=[ [os.path.join(os.path.dirname(__file__), "files/Bunny.obj")], [os.path.join(os.path.dirname(__file__), "files/Duck.glb")], [os.path.join(os.path.dirname(__file__), "files/Fox.gltf")], [os.path.join(os.path.dirname(__file__), "files/face.obj")], ], ) if __name__ == "__main__": demo.launch() ``` Let's break down the code above: `load_mesh`: This is our 'prediction' function and for simplicity, this function will take in the 3D model mesh and return it. Creating the Interface: - `fn`: the prediction function that is used when the user clicks submit. In our case this is the `load_mesh` function. - `inputs`: create a model3D input component. The input expects an uploaded file as a {str} filepath. - `outputs`: create a model3D output component. The output component also expects a file as a {str} filepath. - `clear_color`: this is the background color of the 3D model canvas. Expects RGBa values. - `label`: the label that appears on the top left of the component. - `examples`: list of 3D model files. The 3D model component can accept _.obj_, _.glb_, & _.gltf_ file types. - `cache_examples`: saves the predicted output for the examples, to save time on inference. ## Exploring a more complex Model3D Demo: Below is a demo that uses the DPT model to predict the depth of an image and then uses 3D Point Cloud to create a 3D object. Take a look at the [app.py](https://huggingface.co/spaces/gradio/dpt-depth-estimation-3d-obj/blob/main/app.py) file for a peek into the code and the model prediction function. <gradio-app space="gradio/dpt-depth-estimation-3d-obj"> </gradio-app> --- And you're done! That's all the code you need to build an interface for your Model3D model. Here are some references that you may find useful: - Gradio's ["Getting Started" guide](https://gradio.app/getting_started/) - The first [3D Model Demo](https://huggingface.co/spaces/gradio/Model3D) and [complete code](https://huggingface.co/spaces/gradio/Model3D/tree/main) (on Hugging Face Spaces)
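To show where the `cache_examples` flag from the interface breakdown above fits, here is the same minimal interface with example caching turned on (a trimmed sketch; file paths as in the snippet above):

```python
import gradio as gr

def load_mesh(mesh_file_name):
    return mesh_file_name

demo = gr.Interface(
    fn=load_mesh,
    inputs=gr.Model3D(),
    outputs=gr.Model3D(clear_color=[0.0, 0.0, 0.0, 0.0], label="3D Model"),
    examples=[["files/Bunny.obj"], ["files/Duck.glb"]],
    cache_examples=True,  # pre-computes and stores the outputs for the example files
)
demo.launch()
```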
gradio-app/gradio/blob/main/guides/09_other-tutorials/how-to-use-3D-model-component.md
!--- Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # Audio classification examples The following examples showcase how to fine-tune `Wav2Vec2` for audio classification using PyTorch. Speech recognition models that have been pretrained in unsupervised fashion on audio data alone, *e.g.* [Wav2Vec2](https://huggingface.co/transformers/main/model_doc/wav2vec2.html), [HuBERT](https://huggingface.co/transformers/main/model_doc/hubert.html), [XLSR-Wav2Vec2](https://huggingface.co/transformers/main/model_doc/xlsr_wav2vec2.html), have shown to require only very little annotated data to yield good performance on speech classification datasets. ## Single-GPU The following command shows how to fine-tune [wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the 🗣️ [Keyword Spotting subset](https://huggingface.co/datasets/superb#ks) of the SUPERB dataset. ```bash python run_audio_classification.py \ --model_name_or_path facebook/wav2vec2-base \ --dataset_name superb \ --dataset_config_name ks \ --output_dir wav2vec2-base-ft-keyword-spotting \ --overwrite_output_dir \ --remove_unused_columns False \ --do_train \ --do_eval \ --fp16 \ --learning_rate 3e-5 \ --max_length_seconds 1 \ --attention_mask False \ --warmup_ratio 0.1 \ --num_train_epochs 5 \ --per_device_train_batch_size 32 \ --gradient_accumulation_steps 4 \ --per_device_eval_batch_size 32 \ --dataloader_num_workers 4 \ --logging_strategy steps \ --logging_steps 10 \ --evaluation_strategy epoch \ --save_strategy epoch \ --load_best_model_at_end True \ --metric_for_best_model accuracy \ --save_total_limit 3 \ --seed 0 \ --push_to_hub ``` On a single V100 GPU (16GB), this script should run in ~14 minutes and yield accuracy of **98.26%**. 👀 See the results here: [anton-l/wav2vec2-base-ft-keyword-spotting](https://huggingface.co/anton-l/wav2vec2-base-ft-keyword-spotting) > If your model classification head dimensions do not fit the number of labels in the dataset, you can specify `--ignore_mismatched_sizes` to adapt it. ## Multi-GPU The following command shows how to fine-tune [wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) for 🌎 **Language Identification** on the [CommonLanguage dataset](https://huggingface.co/datasets/anton-l/common_language). 
```bash python run_audio_classification.py \ --model_name_or_path facebook/wav2vec2-base \ --dataset_name common_language \ --audio_column_name audio \ --label_column_name language \ --output_dir wav2vec2-base-lang-id \ --overwrite_output_dir \ --remove_unused_columns False \ --do_train \ --do_eval \ --fp16 \ --learning_rate 3e-4 \ --max_length_seconds 16 \ --attention_mask False \ --warmup_ratio 0.1 \ --num_train_epochs 10 \ --per_device_train_batch_size 8 \ --gradient_accumulation_steps 4 \ --per_device_eval_batch_size 1 \ --dataloader_num_workers 8 \ --logging_strategy steps \ --logging_steps 10 \ --evaluation_strategy epoch \ --save_strategy epoch \ --load_best_model_at_end True \ --metric_for_best_model accuracy \ --save_total_limit 3 \ --seed 0 \ --push_to_hub ``` On 4 V100 GPUs (16GB), this script should run in ~1 hour and yield accuracy of **79.45%**. 👀 See the results here: [anton-l/wav2vec2-base-lang-id](https://huggingface.co/anton-l/wav2vec2-base-lang-id) ## Sharing your model on 🤗 Hub 0. If you haven't already, [sign up](https://huggingface.co/join) for a 🤗 account 1. Make sure you have `git-lfs` installed and git set up. ```bash $ apt install git-lfs ``` 2. Log in with your HuggingFace account credentials using `huggingface-cli` ```bash $ huggingface-cli login # ...follow the prompts ``` 3. When running the script, pass the following arguments: ```bash python run_audio_classification.py \ --push_to_hub \ --hub_model_id <username/model_id> \ ... ``` ### Examples The following table shows a couple of demonstration fine-tuning runs. It has been verified that the script works for the following datasets: - [SUPERB Keyword Spotting](https://huggingface.co/datasets/superb#ks) - [Common Language](https://huggingface.co/datasets/common_language) | Dataset | Pretrained Model | # transformer layers | Accuracy on eval | GPU setup | Training time | Fine-tuned Model & Logs | |---------|------------------|----------------------|------------------|-----------|---------------|--------------------------| | Keyword Spotting | [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) | 2 | 0.9706 | 1 V100 GPU | 11min | [here](https://huggingface.co/anton-l/distilhubert-ft-keyword-spotting) | | Keyword Spotting | [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) | 12 | 0.9826 | 1 V100 GPU | 14min | [here](https://huggingface.co/anton-l/wav2vec2-base-ft-keyword-spotting) | | Keyword Spotting | [facebook/hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960) | 12 | 0.9819 | 1 V100 GPU | 14min | [here](https://huggingface.co/anton-l/hubert-base-ft-keyword-spotting) | | Keyword Spotting | [asapp/sew-mid-100k](https://huggingface.co/asapp/sew-mid-100k) | 24 | 0.9757 | 1 V100 GPU | 15min | [here](https://huggingface.co/anton-l/sew-mid-100k-ft-keyword-spotting) | | Common Language | [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) | 12 | 0.7945 | 4 V100 GPUs | 1h10m | [here](https://huggingface.co/anton-l/wav2vec2-base-lang-id) |
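Once one of these runs has been pushed to the Hub, inference is a one-liner with the `pipeline` API; for example, with the keyword-spotting checkpoint linked above (the audio file name is a placeholder):

```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="anton-l/wav2vec2-base-ft-keyword-spotting",  # checkpoint linked above
)
print(classifier("speech_sample.wav", top_k=3))  # path to a local audio file
```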
huggingface/transformers/blob/main/examples/pytorch/audio-classification/README.md
!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # Prompt weighting [[open-in-colab]] Prompt weighting provides a way to emphasize or de-emphasize certain parts of a prompt, allowing for more control over the generated image. A prompt can include several concepts, which gets turned into contextualized text embeddings. The embeddings are used by the model to condition its cross-attention layers to generate an image (read the Stable Diffusion [blog post](https://huggingface.co/blog/stable_diffusion) to learn more about how it works). Prompt weighting works by increasing or decreasing the scale of the text embedding vector that corresponds to its concept in the prompt because you may not necessarily want the model to focus on all concepts equally. The easiest way to prepare the prompt-weighted embeddings is to use [Compel](https://github.com/damian0815/compel), a text prompt-weighting and blending library. Once you have the prompt-weighted embeddings, you can pass them to any pipeline that has a [`prompt_embeds`](https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/text2img#diffusers.StableDiffusionPipeline.__call__.prompt_embeds) (and optionally [`negative_prompt_embeds`](https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/text2img#diffusers.StableDiffusionPipeline.__call__.negative_prompt_embeds)) parameter, such as [`StableDiffusionPipeline`], [`StableDiffusionControlNetPipeline`], and [`StableDiffusionXLPipeline`]. <Tip> If your favorite pipeline doesn't have a `prompt_embeds` parameter, please open an [issue](https://github.com/huggingface/diffusers/issues/new/choose) so we can add it! </Tip> This guide will show you how to weight and blend your prompts with Compel in 🤗 Diffusers. Before you begin, make sure you have the latest version of Compel installed: ```py # uncomment to install in Colab #!pip install compel --upgrade ``` For this guide, let's generate an image with the prompt `"a red cat playing with a ball"` using the [`StableDiffusionPipeline`]: ```py from diffusers import StableDiffusionPipeline, UniPCMultistepScheduler import torch pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", use_safetensors=True) pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) pipe.to("cuda") prompt = "a red cat playing with a ball" generator = torch.Generator(device="cpu").manual_seed(33) image = pipe(prompt, generator=generator, num_inference_steps=20).images[0] image ``` <div class="flex justify-center"> <img class="rounded-xl" src="https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/compel/forest_0.png"/> </div> ## Weighting You'll notice there is no "ball" in the image! Let's use compel to upweight the concept of "ball" in the prompt. 
Create a [`Compel`](https://github.com/damian0815/compel/blob/main/doc/compel.md#compel-objects) object, and pass it a tokenizer and text encoder: ```py from compel import Compel compel_proc = Compel(tokenizer=pipe.tokenizer, text_encoder=pipe.text_encoder) ``` compel uses `+` or `-` to increase or decrease the weight of a word in the prompt. To increase the weight of "ball": <Tip> `+` corresponds to the value `1.1`, `++` corresponds to `1.1^2`, and so on. Similarly, `-` corresponds to `0.9` and `--` corresponds to `0.9^2`. Feel free to experiment with adding more `+` or `-` in your prompt! </Tip> ```py prompt = "a red cat playing with a ball++" ``` Pass the prompt to `compel_proc` to create the new prompt embeddings which are passed to the pipeline: ```py prompt_embeds = compel_proc(prompt) generator = torch.manual_seed(33) image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0] image ``` <div class="flex justify-center"> <img class="rounded-xl" src="https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/compel/forest_1.png"/> </div> To downweight parts of the prompt, use the `-` suffix: ```py prompt = "a red------- cat playing with a ball" prompt_embeds = compel_proc(prompt) generator = torch.manual_seed(33) image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0] image ``` <div class="flex justify-center"> <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/compel-neg.png"/> </div> You can even up or downweight multiple concepts in the same prompt: ```py prompt = "a red cat++ playing with a ball----" prompt_embeds = compel_proc(prompt) generator = torch.manual_seed(33) image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0] image ``` <div class="flex justify-center"> <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/compel-pos-neg.png"/> </div> ## Blending You can also create a weighted *blend* of prompts by adding `.blend()` to a list of prompts and passing it some weights. Your blend may not always produce the result you expect because it breaks some assumptions about how the text encoder functions, so just have fun and experiment with it! ```py prompt_embeds = compel_proc('("a red cat playing with a ball", "jungle").blend(0.7, 0.8)') generator = torch.Generator(device="cuda").manual_seed(33) image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0] image ``` <div class="flex justify-center"> <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/compel-blend.png"/> </div> ## Conjunction A conjunction diffuses each prompt independently and concatenates their results by their weighted sum. 
Add `.and()` to the end of a list of prompts to create a conjunction: ```py prompt_embeds = compel_proc('["a red cat", "playing with a", "ball"].and()') generator = torch.Generator(device="cuda").manual_seed(55) image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0] image ``` <div class="flex justify-center"> <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/compel-conj.png"/> </div> ## Textual inversion [Textual inversion](../training/text_inversion) is a technique for learning a specific concept from some images which you can use to generate new images conditioned on that concept. Create a pipeline and use the [`~loaders.TextualInversionLoaderMixin.load_textual_inversion`] function to load the textual inversion embeddings (feel free to browse the [Stable Diffusion Conceptualizer](https://huggingface.co/spaces/sd-concepts-library/stable-diffusion-conceptualizer) for 100+ trained concepts): ```py import torch from diffusers import StableDiffusionPipeline from compel import Compel, DiffusersTextualInversionManager pipe = StableDiffusionPipeline.from_pretrained( "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True, variant="fp16").to("cuda") pipe.load_textual_inversion("sd-concepts-library/midjourney-style") ``` Compel provides a `DiffusersTextualInversionManager` class to simplify prompt weighting with textual inversion. Instantiate `DiffusersTextualInversionManager` and pass it to the `Compel` class: ```py textual_inversion_manager = DiffusersTextualInversionManager(pipe) compel_proc = Compel( tokenizer=pipe.tokenizer, text_encoder=pipe.text_encoder, textual_inversion_manager=textual_inversion_manager) ``` Incorporate the concept to condition a prompt with using the `<concept>` syntax: ```py prompt_embeds = compel_proc('("A red cat++ playing with a ball <midjourney-style>")') image = pipe(prompt_embeds=prompt_embeds).images[0] image ``` <div class="flex justify-center"> <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/compel-text-inversion.png"/> </div> ## DreamBooth [DreamBooth](../training/dreambooth) is a technique for generating contextualized images of a subject given just a few images of the subject to train on. It is similar to textual inversion, but DreamBooth trains the full model whereas textual inversion only fine-tunes the text embeddings. This means you should use [`~DiffusionPipeline.from_pretrained`] to load the DreamBooth model (feel free to browse the [Stable Diffusion Dreambooth Concepts Library](https://huggingface.co/sd-dreambooth-library) for 100+ trained models): ```py import torch from diffusers import DiffusionPipeline, UniPCMultistepScheduler from compel import Compel pipe = DiffusionPipeline.from_pretrained("sd-dreambooth-library/dndcoverart-v1", torch_dtype=torch.float16).to("cuda") pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) ``` Create a `Compel` class with a tokenizer and text encoder, and pass your prompt to it. Depending on the model you use, you'll need to incorporate the model's unique identifier into your prompt. 
For example, the `dndcoverart-v1` model uses the identifier `dndcoverart`: ```py compel_proc = Compel(tokenizer=pipe.tokenizer, text_encoder=pipe.text_encoder) prompt_embeds = compel_proc('("magazine cover of a dndcoverart dragon, high quality, intricate details, larry elmore art style").and()') image = pipe(prompt_embeds=prompt_embeds).images[0] image ``` <div class="flex justify-center"> <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/compel-dreambooth.png"/> </div> ## Stable Diffusion XL Stable Diffusion XL (SDXL) has two tokenizers and text encoders so it's usage is a bit different. To address this, you should pass both tokenizers and encoders to the `Compel` class: ```py from compel import Compel, ReturnedEmbeddingsType from diffusers import DiffusionPipeline from diffusers.utils import make_image_grid import torch pipeline = DiffusionPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0", variant="fp16", use_safetensors=True, torch_dtype=torch.float16 ).to("cuda") compel = Compel( tokenizer=[pipeline.tokenizer, pipeline.tokenizer_2] , text_encoder=[pipeline.text_encoder, pipeline.text_encoder_2], returned_embeddings_type=ReturnedEmbeddingsType.PENULTIMATE_HIDDEN_STATES_NON_NORMALIZED, requires_pooled=[False, True] ) ``` This time, let's upweight "ball" by a factor of 1.5 for the first prompt, and downweight "ball" by 0.6 for the second prompt. The [`StableDiffusionXLPipeline`] also requires [`pooled_prompt_embeds`](https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/stable_diffusion_xl#diffusers.StableDiffusionXLInpaintPipeline.__call__.pooled_prompt_embeds) (and optionally [`negative_pooled_prompt_embeds`](https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/stable_diffusion_xl#diffusers.StableDiffusionXLInpaintPipeline.__call__.negative_pooled_prompt_embeds)) so you should pass those to the pipeline along with the conditioning tensors: ```py # apply weights prompt = ["a red cat playing with a (ball)1.5", "a red cat playing with a (ball)0.6"] conditioning, pooled = compel(prompt) # generate image generator = [torch.Generator().manual_seed(33) for _ in range(len(prompt))] images = pipeline(prompt_embeds=conditioning, pooled_prompt_embeds=pooled, generator=generator, num_inference_steps=30).images make_image_grid(images, rows=1, cols=2) ``` <div class="flex gap-4"> <div> <img class="rounded-xl" src="https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/compel/sdxl_ball1.png"/> <figcaption class="mt-2 text-center text-sm text-gray-500">"a red cat playing with a (ball)1.5"</figcaption> </div> <div> <img class="rounded-xl" src="https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/compel/sdxl_ball2.png"/> <figcaption class="mt-2 text-center text-sm text-gray-500">"a red cat playing with a (ball)0.6"</figcaption> </div> </div>
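The same approach extends to weighted *negative* prompts. Here is a rough sketch, assuming the `pipeline` and `compel` objects created above: you can encode a negative prompt with Compel too and pass the resulting tensors through the pipeline's `negative_prompt_embeds` and `negative_pooled_prompt_embeds` arguments:

```py
# sketch: weighted positive prompt plus a plain negative prompt for SDXL,
# reusing the `pipeline` and `compel` objects from the example above
prompt = "a red cat playing with a (ball)1.5"
negative_prompt = "blurry, low quality"

conditioning, pooled = compel(prompt)
negative_conditioning, negative_pooled = compel(negative_prompt)

generator = torch.Generator().manual_seed(33)
image = pipeline(
    prompt_embeds=conditioning,
    pooled_prompt_embeds=pooled,
    negative_prompt_embeds=negative_conditioning,
    negative_pooled_prompt_embeds=negative_pooled,
    generator=generator,
    num_inference_steps=30,
).images[0]
image
```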
huggingface/diffusers/blob/main/docs/source/en/using-diffusers/weighted_prompts.md
Hugging Face Hub documentation The Hugging Face Hub is a platform with over 350k models, 75k datasets, and 150k demo apps (Spaces), all open source and publicly available, in an online platform where people can easily collaborate and build ML together. The Hub works as a central place where anyone can explore, experiment, collaborate, and build technology with Machine Learning. Are you ready to join the path towards open source Machine Learning? 🤗 <div class="grid grid-cols-1 gap-4 sm:grid-cols-2 lg:grid-cols-3 md:mt-10"> <div class="group flex flex-col space-y-2 rounded-xl border border-orange-100 bg-gradient-to-br from-orange-50 dark:bg-none px-6 py-4 transition-colors hover:shadow dark:border-orange-700"> <div class="flex items-center py-0.5 text-lg font-semibold text-orange-600 dark:text-gray-400 mb-1"> <svg class="shrink-0 mr-1.5 text-orange-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path fill="currentColor" d="M2.6 10.59L8.38 4.8l1.69 1.7c-.24.85.15 1.78.93 2.23v5.54c-.6.34-1 .99-1 1.73a2 2 0 0 0 2 2a2 2 0 0 0 2-2c0-.74-.4-1.39-1-1.73V9.41l2.07 2.09c-.07.15-.07.32-.07.5a2 2 0 0 0 2 2a2 2 0 0 0 2-2a2 2 0 0 0-2-2c-.18 0-.35 0-.5.07L13.93 7.5a1.98 1.98 0 0 0-1.15-2.34c-.43-.16-.88-.2-1.28-.09L9.8 3.38l.79-.78c.78-.79 2.04-.79 2.82 0l7.99 7.99c.79.78.79 2.04 0 2.82l-7.99 7.99c-.78.79-2.04.79-2.82 0L2.6 13.41c-.79-.78-.79-2.04 0-2.82Z"></path></svg>Repositories</div> <a class="transform !no-underline transition-colors hover:translate-x-px hover:text-gray-700" href="./repositories">Introduction</a> <a class="transform !no-underline transition-colors hover:translate-x-px hover:text-gray-700" href="./repositories-getting-started">Getting Started</a> <a class="transform !no-underline transition-colors hover:translate-x-px hover:text-gray-700" href="./repositories-settings">Repository Settings</a> <a class="transform !no-underline transition-colors hover:translate-x-px hover:text-gray-700" href="./repositories-pull-requests-discussions">Pull requests and Discussions</a> <a class="transform !no-underline transition-colors hover:translate-x-px hover:text-gray-700" href="./notifications">Notifications</a> <a class="transform !no-underline transition-colors hover:translate-x-px hover:text-gray-700" href="./collections">Collections</a> <a class="transform !no-underline transition-colors hover:translate-x-px hover:text-gray-700" href="./webhooks">Webhooks</a> <a class="transform !no-underline transition-colors hover:translate-x-px hover:text-gray-700" href="./repositories-next-steps">Next Steps</a> <a class="transform !no-underline transition-colors hover:translate-x-px hover:text-gray-700" href="./repositories-licenses">Licenses</a> </div> <div class="group flex flex-col space-y-2 rounded-xl border border-indigo-100 bg-gradient-to-br from-indigo-50 dark:bg-none px-6 py-4 transition-colors hover:shadow dark:border-indigo-700"> <div class="flex items-center py-0.5 text-lg font-semibold text-indigo-600 dark:text-gray-400 mb-1"> <svg class="shrink-0 mr-1.5 text-indigo-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><path class="uim-quaternary" d="M20.23 7.24L12 12L3.77 7.24a1.98 1.98 0 0 1 .7-.71L11 2.76c.62-.35 1.38-.35 2 0l6.53 3.77c.29.173.531.418.7.71z" opacity=".25" 
fill="currentColor"></path><path class="uim-tertiary" d="M12 12v9.5a2.09 2.09 0 0 1-.91-.21L4.5 17.48a2.003 2.003 0 0 1-1-1.73v-7.5a2.06 2.06 0 0 1 .27-1.01L12 12z" opacity=".5" fill="currentColor"></path><path class="uim-primary" d="M20.5 8.25v7.5a2.003 2.003 0 0 1-1 1.73l-6.62 3.82c-.275.13-.576.198-.88.2V12l8.23-4.76c.175.308.268.656.27 1.01z" fill="currentColor"></path></svg> Models</div> <a class="!no-underline hover:opacity-60 transform transition-colors hover:translate-x-px" href="./models">Introduction</a> <a class="!no-underline hover:opacity-60 transform transition-colors hover:translate-x-px" href="./models-the-hub">The Model Hub</a> <a class="!no-underline hover:opacity-60 transform transition-colors hover:translate-x-px" href="./model-cards">Model Cards</a> <a class="!no-underline hover:opacity-60 transform transition-colors hover:translate-x-px" href="./models-gated">Gated Models</a> <a class="!no-underline hover:opacity-60 transform transition-colors hover:translate-x-px" href="./models-uploading">Uploading Models</a> <a class="!no-underline hover:opacity-60 transform transition-colors hover:translate-x-px" href="./models-downloading">Downloading Models</a> <a class="!no-underline hover:opacity-60 transform transition-colors hover:translate-x-px" href="./models-libraries">Libraries</a> <a class="!no-underline hover:opacity-60 transform transition-colors hover:translate-x-px" href="./models-tasks">Tasks</a> <a class="!no-underline hover:opacity-60 transform transition-colors hover:translate-x-px" href="./models-widgets">Widgets</a> <a class="!no-underline hover:opacity-60 transform transition-colors hover:translate-x-px" href="./models-inference">Inference API</a> <a class="!no-underline hover:opacity-60 transform transition-colors hover:translate-x-px" href="./models-download-stats">Download Stats</a> </div> <div class="group flex flex-col space-y-2 rounded-xl border border-red-100 bg-gradient-to-br from-red-50 dark:bg-none px-6 py-4 transition-colors hover:shadow dark:border-red-700"> <div class="flex items-center py-0.5 text-lg font-semibold text-red-600 dark:text-gray-400 mb-1"> <svg class="shrink-0 mr-1.5 text-red-400" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 25 25"><ellipse cx="12.5" cy="5" fill="currentColor" fill-opacity="0.25" rx="7.5" ry="2"></ellipse><path d="M12.5 15C16.6421 15 20 14.1046 20 13V20C20 21.1046 16.6421 22 12.5 22C8.35786 22 5 21.1046 5 20V13C5 14.1046 8.35786 15 12.5 15Z" fill="currentColor" opacity="0.5"></path><path d="M12.5 7C16.6421 7 20 6.10457 20 5V11.5C20 12.6046 16.6421 13.5 12.5 13.5C8.35786 13.5 5 12.6046 5 11.5V5C5 6.10457 8.35786 7 12.5 7Z" fill="currentColor" opacity="0.5"></path><path d="M5.23628 12C5.08204 12.1598 5 12.8273 5 13C5 14.1046 8.35786 15 12.5 15C16.6421 15 20 14.1046 20 13C20 12.8273 19.918 12.1598 19.7637 12C18.9311 12.8626 15.9947 13.5 12.5 13.5C9.0053 13.5 6.06886 12.8626 5.23628 12Z" fill="currentColor"></path></svg> Datasets</div> <a class="!no-underline hover:opacity-60 transform transition-colors hover:translate-x-px" href="./datasets">Introduction</a> <a class="!no-underline hover:opacity-60 transform transition-colors hover:translate-x-px" href="./datasets-overview">Datasets Overview</a> <a class="!no-underline hover:opacity-60 transform transition-colors hover:translate-x-px" href="./datasets-cards">Dataset Cards</a> <a class="!no-underline hover:opacity-60 
transform transition-colors hover:translate-x-px" href="./datasets-gated">Gated Datasets</a> <a class="!no-underline hover:opacity-60 transform transition-colors hover:translate-x-px" href="./datasets-adding">Uploading Datasets</a> <a class="!no-underline hover:opacity-60 transform transition-colors hover:translate-x-px" href="./datasets-downloading">Downloading Datasets</a> <a class="!no-underline hover:opacity-60 transform transition-colors hover:translate-x-px" href="./datasets-libraries">Libraries</a> <a class="!no-underline hover:opacity-60 transform transition-colors hover:translate-x-px" href="./datasets-viewer">Dataset Viewer</a> <a class="!no-underline hover:opacity-60 transform transition-colors hover:translate-x-px" href="./datasets-download-stats">Download Stats</a> <a class="!no-underline hover:opacity-60 transform transition-colors hover:translate-x-px" href="./datasets-data-files-configuration">Data files Configuration</a> </div> <div class="group flex flex-col space-y-2 rounded-xl border border-blue-100 bg-gradient-to-br from-blue-50 dark:bg-none px-6 py-4 transition-colors hover:shadow dark:border-blue-700"> <div class="flex items-center py-0.5 text-lg font-semibold text-blue-600 dark:text-gray-400 mb-1"> <svg class="shrink-0 mr-1.5 text-blue-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" viewBox="0 0 25 25"><path opacity=".5" d="M6.016 14.674v4.31h4.31v-4.31h-4.31ZM14.674 14.674v4.31h4.31v-4.31h-4.31ZM6.016 6.016v4.31h4.31v-4.31h-4.31Z" fill="currentColor"></path><path opacity=".75" fill-rule="evenodd" clip-rule="evenodd" d="M3 4.914C3 3.857 3.857 3 4.914 3h6.514c.884 0 1.628.6 1.848 1.414a5.171 5.171 0 0 1 7.31 7.31c.815.22 1.414.964 1.414 1.848v6.514A1.914 1.914 0 0 1 20.086 22H4.914A1.914 1.914 0 0 1 3 20.086V4.914Zm3.016 1.102v4.31h4.31v-4.31h-4.31Zm0 12.968v-4.31h4.31v4.31h-4.31Zm8.658 0v-4.31h4.31v4.31h-4.31Zm0-10.813a2.155 2.155 0 1 1 4.31 0 2.155 2.155 0 0 1-4.31 0Z" fill="currentColor"></path><path opacity=".25" d="M16.829 6.016a2.155 2.155 0 1 0 0 4.31 2.155 2.155 0 0 0 0-4.31Z" fill="currentColor"></path></svg> Spaces</div> <a class="!no-underline hover:opacity-60 transform transition-colors hover:translate-x-px" href="./spaces">Introduction</a> <a class="!no-underline hover:opacity-60 transform transition-colors hover:translate-x-px" href="./spaces-overview">Spaces Overview</a> <a class="!no-underline hover:opacity-60 transform transition-colors hover:translate-x-px" href="./spaces-sdks-gradio">Gradio Spaces</a> <a class="!no-underline hover:opacity-60 transform transition-colors hover:translate-x-px" href="./spaces-sdks-streamlit">Streamlit Spaces</a> <a class="!no-underline hover:opacity-60 transform transition-colors hover:translate-x-px" href="./spaces-sdks-static">Static HTML Spaces</a> <a class="!no-underline hover:opacity-60 transform transition-colors hover:translate-x-px" href="./spaces-sdks-docker">Docker Spaces</a> <a class="!no-underline hover:opacity-60 transform transition-colors hover:translate-x-px" href="./spaces-embed">Embed your Space</a> <a class="!no-underline hover:opacity-60 transform transition-colors hover:translate-x-px" href="./spaces-run-with-docker">Run with Docker</a> <a class="!no-underline hover:opacity-60 transform transition-colors hover:translate-x-px" href="./spaces-config-reference">Reference</a> <a class="!no-underline hover:opacity-60 transform transition-colors hover:translate-x-px" 
href="./spaces-changelog">Changelog</a> <a class="!no-underline hover:opacity-60 transform transition-colors hover:translate-x-px" href="./spaces-advanced">Advanced Topics</a> <a class="!no-underline hover:opacity-60 transform transition-colors hover:translate-x-px" href="./spaces-oauth">Sign in with HF</a> </div> <div class="group flex flex-col space-y-2 rounded-xl border border-green-100 bg-gradient-to-br from-green-50 dark:bg-none px-6 py-4 transition-colors hover:shadow dark:border-green-700"> <div class="flex items-center py-0.5 text-lg font-semibold text-green-600 dark:text-gray-400 mb-1"> <svg class="shrink-0 mr-1.5 text-green-500" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" viewBox="0 0 24 24"><path fill="currentColor" stroke="currentColor" d="M8.892 21.854a6.25 6.25 0 0 1-4.42-10.67l7.955-7.955a4.5 4.5 0 0 1 6.364 6.364l-6.895 6.894a2.816 2.816 0 0 1-3.89 0a2.75 2.75 0 0 1 .002-3.888l5.126-5.127a1 1 0 1 1 1.414 1.414l-5.126 5.127a.75.75 0 0 0 0 1.06a.768.768 0 0 0 1.06 0l6.895-6.894a2.503 2.503 0 0 0 0-3.535a2.56 2.56 0 0 0-3.536 0l-7.955 7.955a4.25 4.25 0 1 0 6.01 6.01l6.188-6.187a1 1 0 1 1 1.414 1.414l-6.187 6.186a6.206 6.206 0 0 1-4.42 1.832z"></path></svg> Other</div> <a class="!no-underline hover:opacity-60 transform transition-colors hover:translate-x-px" href="./organizations">Organizations</a> <a class="!no-underline hover:opacity-60 transform transition-colors hover:translate-x-px" href="./enterprise-hub">Enterprise Hub</a> <a class="!no-underline hover:opacity-60 transform transition-colors hover:translate-x-px" href="./billing">Billing</a> <a class="!no-underline hover:opacity-60 transform transition-colors hover:translate-x-px" href="./security">Security</a> <a class="!no-underline hover:opacity-60 transform transition-colors hover:translate-x-px" href="./moderation">Moderation</a> <a class="!no-underline hover:opacity-60 transform transition-colors hover:translate-x-px" href="./paper-pages">Paper Pages</a> <a class="!no-underline hover:opacity-60 transform transition-colors hover:translate-x-px" href="./search">Search</a> <a class="!no-underline hover:opacity-60 transform transition-colors hover:translate-x-px" href="./doi">Digital Object Identifier (DOI)</a> <a class="!no-underline hover:opacity-60 transform transition-colors hover:translate-x-px" href="./api">Hub API Endpoints</a> <a class="!no-underline hover:opacity-60 transform transition-colors hover:translate-x-px" href="./oauth">Sign in with HF</a> <a class="!no-underline hover:opacity-60 transform transition-colors hover:translate-x-px" href="https://huggingface.co/code-of-conduct">Contributor Code of Conduct</a> <a class="!no-underline hover:opacity-60 transform transition-colors hover:translate-x-px" href="https://huggingface.co/content-guidelines">Content Guidelines</a> </div> </div> ## What's the Hugging Face Hub? We are helping the community work together towards the goal of advancing Machine Learning 🔥. The Hugging Face Hub is a platform with over 120k models, 20k datasets, and 50k demos in which people can easily collaborate in their ML workflows. The Hub works as a central place where anyone can share, explore, discover, and experiment with open-source Machine Learning. No single company, including the Tech Titans, will be able to “solve AI” by themselves – the only way we'll achieve this is by sharing knowledge and resources in a community-centric approach. 
We are building the largest open-source collection of models, datasets, demos and metrics on the Hugging Face Hub to democratize and advance ML for everyone 🚀. We encourage you to read the [Code of Conduct](https://huggingface.co/code-of-conduct) and the [Content Guidelines](https://huggingface.co/content-guidelines) to familiarize yourself with the values that we expect our community members to uphold 🤗. ## What can you find on the Hub? The Hugging Face Hub hosts Git-based repositories, which are version-controlled buckets that can contain all your files. 💾 On it, you'll be able to upload and discover... - Models, _hosting the latest state-of-the-art models for NLP, vision, and audio tasks_ - Datasets, _featuring a wide variety of data for different domains and modalities_.. - Spaces, _interactive apps for demonstrating ML models directly in your browser_. The Hub offers **versioning, commit history, diffs, branches, and over a dozen library integrations**! You can learn more about the features that all repositories share in the [**Repositories documentation**](./repositories). ## Models You can discover and use dozens of thousands of open-source ML models shared by the community. To promote responsible model usage and development, model repos are equipped with [Model Cards](./model-cards) to inform users of each model's limitations and biases. Additional [metadata](./model-cards#model-card-metadata) about info such as their tasks, languages, and metrics can be included, with training metrics charts even added if the repository contains [TensorBoard traces](./tensorboard). It's also easy to add an [**inference widget**](./models-widgets) to your model, allowing anyone to play with the model directly in the browser! For programmatic access, an API is provided to [**instantly serve your model**](./models-inference). To upload models to the Hub, or download models and integrate them into your work, explore the [**Models documentation**](./models). You can also choose from [**over a dozen libraries**](./models-libraries) such as 🤗 Transformers, Asteroid, and ESPnet that support the Hub. ## Datasets The Hub is home to over 5,000 datasets in more than 100 languages that can be used for a broad range of tasks across NLP, Computer Vision, and Audio. The Hub makes it simple to find, download, and upload datasets. Datasets are accompanied by extensive documentation in the form of [**Dataset Cards**](./model-cards) and [**Dataset Preview**](./datasets-overview#datasets-on-the-hub) to let you explore the data directly in your browser. While many datasets are public, [**organizations**](./organizations) and individuals can create private datasets to comply with licensing or privacy issues. You can learn more about [**Datasets here on Hugging Face Hub documentation**](./datasets-overview). The [🤗 `datasets`](https://huggingface.co/docs/datasets/index) library allows you to programmatically interact with the datasets, so you can easily use datasets from the Hub in your projects. With a single line of code, you can access the datasets; even if they are so large they don't fit in your computer, you can use streaming to efficiently access the data. ## Spaces [Spaces](https://huggingface.co/spaces) is a simple way to host ML demo apps on the Hub. They allow you to build your ML portfolio, showcase your projects at conferences or to stakeholders, and work collaboratively with other people in the ML ecosystem. 
We currently support two awesome Python SDKs (**[Gradio](https://gradio.app/)** and **[Streamlit](https://streamlit.io/)**) that let you build cool apps in a matter of minutes. Users can also create static Spaces which are simple HTML/CSS/JavaScript page within a Space. After you've explored a few Spaces (take a look at our [Space of the Week!](https://huggingface.co/spaces)), dive into the [**Spaces documentation**](./spaces-overview) to learn all about how you can create your own Space. You'll also be able to upgrade your Space to run on a GPU or other accelerated hardware. ⚡️ ## Organizations Companies, universities and non-profits are an essential part of the Hugging Face community! The Hub offers [**Organizations**](./organizations), which can be used to group accounts and manage datasets, models, and Spaces. Educators can also create collaborative organizations for students using [Hugging Face for Classrooms](https://huggingface.co/classrooms). An organization's repositories will be featured on the organization’s page and every member of the organization will have the ability to contribute to the repository. In addition to conveniently grouping all of an organization's work, the Hub allows admins to set roles to [**control access to repositories**](./organizations-security), and manage their organization's [payment method and billing info](https://huggingface.co/pricing). Machine Learning is more fun when collaborating! 🔥 [Explore existing organizations](https://huggingface.co/organizations), create a new organization [here](https://huggingface.co/organizations/new), and then visit the [**Organizations documentation**](./organizations) to learn more. ## Security The Hugging Face Hub supports security and access control features to give you the peace of mind that your code, models, and data are safe. Visit the [**Security**](./security) section in these docs to learn about: - User Access Tokens - Access Control for Organizations - Signing commits with GPG - Malware scanning <img width="150" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/security-soc-1.jpg">
huggingface/hub-docs/blob/main/docs/hub/index.md
p align="center"> <br> <img src="https://huggingface.co/datasets/evaluate/media/resolve/main/evaluate-banner.png" width="400"/> <br> </p> <p align="center"> <a href="https://github.com/huggingface/evaluate/actions/workflows/ci.yml?query=branch%3Amain"> <img alt="Build" src="https://github.com/huggingface/evaluate/actions/workflows/ci.yml/badge.svg?branch=main"> </a> <a href="https://github.com/huggingface/evaluate/blob/master/LICENSE"> <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/evaluate.svg?color=blue"> </a> <a href="https://huggingface.co/docs/evaluate/index"> <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/evaluate/index.svg?down_color=red&down_message=offline&up_message=online"> </a> <a href="https://github.com/huggingface/evaluate/releases"> <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/evaluate.svg"> </a> <a href="CODE_OF_CONDUCT.md"> <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-2.0-4baaaa.svg"> </a> </p> 🤗 Evaluate is a library that makes evaluating and comparing models and reporting their performance easier and more standardized. It currently contains: - **implementations of dozens of popular metrics**: the existing metrics cover a variety of tasks spanning from NLP to Computer Vision, and include dataset-specific metrics for datasets. With a simple command like `accuracy = load("accuracy")`, get any of these metrics ready to use for evaluating a ML model in any framework (Numpy/Pandas/PyTorch/TensorFlow/JAX). - **comparisons and measurements**: comparisons are used to measure the difference between models and measurements are tools to evaluate datasets. - **an easy way of adding new evaluation modules to the 🤗 Hub**: you can create new evaluation modules and push them to a dedicated Space in the 🤗 Hub with `evaluate-cli create [metric name]`, which allows you to see easily compare different metrics and their outputs for the same sets of references and predictions. [🎓 **Documentation**](https://huggingface.co/docs/evaluate/) 🔎 **Find a [metric](https://huggingface.co/evaluate-metric), [comparison](https://huggingface.co/evaluate-comparison), [measurement](https://huggingface.co/evaluate-measurement) on the Hub** [🌟 **Add a new evaluation module**](https://huggingface.co/docs/evaluate/) 🤗 Evaluate also has lots of useful features like: - **Type checking**: the input types are checked to make sure that you are using the right input formats for each metric - **Metric cards**: each metrics comes with a card that describes the values, limitations and their ranges, as well as providing examples of their usage and usefulness. - **Community metrics:** Metrics live on the Hugging Face Hub and you can easily add your own metrics for your project or to collaborate with others. 
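To make the list above concrete, here is a minimal, illustrative example of loading a metric from the Hub and computing a score (the toy predictions and references are just placeholders):

```python
import evaluate

# load the accuracy metric from the Hub (downloaded on first use)
accuracy = evaluate.load("accuracy")

# compute the score from predictions and references
results = accuracy.compute(predictions=[0, 1, 1, 0], references=[0, 1, 0, 0])
print(results)  # {'accuracy': 0.75}
```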
# Installation

## With pip

🤗 Evaluate can be installed from PyPI and should be installed in a virtual environment (e.g. venv or conda):

```bash
pip install evaluate
```

# Usage

🤗 Evaluate's main methods are:

- `evaluate.list_evaluation_modules()` to list the available metrics, comparisons and measurements
- `evaluate.load(module_name, **kwargs)` to instantiate an evaluation module
- `results = module.compute(**kwargs)` to compute the result of an evaluation module

# Adding a new evaluation module

First install the necessary dependencies to create a new metric with the following command:

```bash
pip install evaluate[template]
```

Then you can get started with the following command, which will create a new folder for your metric and display the necessary steps:

```bash
evaluate-cli create "Awesome Metric"
```

See this [step-by-step guide](https://huggingface.co/docs/evaluate/creating_and_sharing) in the documentation for detailed instructions.

## Credits

Thanks to [@marella](https://github.com/marella) for letting us use the `evaluate` namespace on PyPI, previously used by his [library](https://github.com/marella/evaluate).
huggingface/evaluate/blob/main/README.md
SelecSLS **SelecSLS** uses novel selective long and short range skip connections to improve the information flow allowing for a drastically faster network without compromising accuracy. ## How do I use this model on an image? To load a pretrained model: ```python import timm model = timm.create_model('selecsls42b', pretrained=True) model.eval() ``` To load and preprocess the image: ```python import urllib from PIL import Image from timm.data import resolve_data_config from timm.data.transforms_factory import create_transform config = resolve_data_config({}, model=model) transform = create_transform(**config) url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg") urllib.request.urlretrieve(url, filename) img = Image.open(filename).convert('RGB') tensor = transform(img).unsqueeze(0) # transform and add batch dimension ``` To get the model predictions: ```python import torch with torch.no_grad(): out = model(tensor) probabilities = torch.nn.functional.softmax(out[0], dim=0) print(probabilities.shape) # prints: torch.Size([1000]) ``` To get the top-5 predictions class names: ```python # Get imagenet class mappings url, filename = ("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt", "imagenet_classes.txt") urllib.request.urlretrieve(url, filename) with open("imagenet_classes.txt", "r") as f: categories = [s.strip() for s in f.readlines()] # Print top categories per image top5_prob, top5_catid = torch.topk(probabilities, 5) for i in range(top5_prob.size(0)): print(categories[top5_catid[i]], top5_prob[i].item()) # prints class names and probabilities like: # [('Samoyed', 0.6425196528434753), ('Pomeranian', 0.04062102362513542), ('keeshond', 0.03186424449086189), ('white wolf', 0.01739676296710968), ('Eskimo dog', 0.011717947199940681)] ``` Replace the model name with the variant you want to use, e.g. `selecsls42b`. You can find the IDs in the model summaries at the top of this page. To extract image features with this model, follow the [timm feature extraction examples](https://rwightman.github.io/pytorch-image-models/feature_extraction/), just change the name of the model you want to use. ## How do I finetune this model? You can finetune any of the pre-trained models just by changing the classifier (the last layer). ```python model = timm.create_model('selecsls42b', pretrained=True, num_classes=NUM_FINETUNE_CLASSES) ``` To finetune on your own dataset, you have to write a training loop or adapt [timm's training script](https://github.com/rwightman/pytorch-image-models/blob/master/train.py) to use your dataset. ## How do I train this model? You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh. 
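As a concrete illustration of the feature extraction workflow mentioned earlier, here is a short sketch using `timm`'s `features_only` mode (the exact feature map shapes depend on the model variant):

```python
import timm
import torch

# create the model as a feature backbone instead of a classifier
backbone = timm.create_model('selecsls42b', pretrained=True, features_only=True)
backbone.eval()

with torch.no_grad():
    # dummy batch of one 224x224 RGB image
    feature_maps = backbone(torch.randn(1, 3, 224, 224))

for i, fmap in enumerate(feature_maps):
    print(f"feature map {i}: {tuple(fmap.shape)}")
```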
## Citation ```BibTeX @article{Mehta_2020, title={XNect}, volume={39}, ISSN={1557-7368}, url={http://dx.doi.org/10.1145/3386569.3392410}, DOI={10.1145/3386569.3392410}, number={4}, journal={ACM Transactions on Graphics}, publisher={Association for Computing Machinery (ACM)}, author={Mehta, Dushyant and Sotnychenko, Oleksandr and Mueller, Franziska and Xu, Weipeng and Elgharib, Mohamed and Fua, Pascal and Seidel, Hans-Peter and Rhodin, Helge and Pons-Moll, Gerard and Theobalt, Christian}, year={2020}, month={Jul} } ``` <!-- Type: model-index Collections: - Name: SelecSLS Paper: Title: 'XNect: Real-time Multi-Person 3D Motion Capture with a Single RGB Camera' URL: https://paperswithcode.com/paper/xnect-real-time-multi-person-3d-human-pose Models: - Name: selecsls42b In Collection: SelecSLS Metadata: FLOPs: 3824022528 Parameters: 32460000 File Size: 129948954 Architecture: - Batch Normalization - Convolution - Dense Connections - Dropout - Global Average Pooling - ReLU - SelecSLS Block Tasks: - Image Classification Training Techniques: - Cosine Annealing - Random Erasing Training Data: - ImageNet ID: selecsls42b Crop Pct: '0.875' Image Size: '224' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/b9843f954b0457af2db4f9dea41a8538f51f5d78/timm/models/selecsls.py#L335 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-selecsls/selecsls42b-8af30141.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 77.18% Top 5 Accuracy: 93.39% - Name: selecsls60 In Collection: SelecSLS Metadata: FLOPs: 4610472600 Parameters: 30670000 File Size: 122839714 Architecture: - Batch Normalization - Convolution - Dense Connections - Dropout - Global Average Pooling - ReLU - SelecSLS Block Tasks: - Image Classification Training Techniques: - Cosine Annealing - Random Erasing Training Data: - ImageNet ID: selecsls60 Crop Pct: '0.875' Image Size: '224' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/b9843f954b0457af2db4f9dea41a8538f51f5d78/timm/models/selecsls.py#L342 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-selecsls/selecsls60-bbf87526.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 77.99% Top 5 Accuracy: 93.83% - Name: selecsls60b In Collection: SelecSLS Metadata: FLOPs: 4657653144 Parameters: 32770000 File Size: 131252898 Architecture: - Batch Normalization - Convolution - Dense Connections - Dropout - Global Average Pooling - ReLU - SelecSLS Block Tasks: - Image Classification Training Techniques: - Cosine Annealing - Random Erasing Training Data: - ImageNet ID: selecsls60b Crop Pct: '0.875' Image Size: '224' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/b9843f954b0457af2db4f9dea41a8538f51f5d78/timm/models/selecsls.py#L349 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-selecsls/selecsls60b-94e619b5.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 78.41% Top 5 Accuracy: 94.18% -->
huggingface/pytorch-image-models/blob/main/docs/models/selecsls.md
!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # Textual inversion [[open-in-colab]] The [`StableDiffusionPipeline`] supports textual inversion, a technique that enables a model like Stable Diffusion to learn a new concept from just a few sample images. This gives you more control over the generated images and allows you to tailor the model towards specific concepts. You can get started quickly with a collection of community created concepts in the [Stable Diffusion Conceptualizer](https://huggingface.co/spaces/sd-concepts-library/stable-diffusion-conceptualizer). This guide will show you how to run inference with textual inversion using a pre-learned concept from the Stable Diffusion Conceptualizer. If you're interested in teaching a model new concepts with textual inversion, take a look at the [Textual Inversion](../training/text_inversion) training guide. Import the necessary libraries: ```py import torch from diffusers import StableDiffusionPipeline from diffusers.utils import make_image_grid ``` ## Stable Diffusion 1 and 2 Pick a Stable Diffusion checkpoint and a pre-learned concept from the [Stable Diffusion Conceptualizer](https://huggingface.co/spaces/sd-concepts-library/stable-diffusion-conceptualizer): ```py pretrained_model_name_or_path = "runwayml/stable-diffusion-v1-5" repo_id_embeds = "sd-concepts-library/cat-toy" ``` Now you can load a pipeline, and pass the pre-learned concept to it: ```py pipeline = StableDiffusionPipeline.from_pretrained( pretrained_model_name_or_path, torch_dtype=torch.float16, use_safetensors=True ).to("cuda") pipeline.load_textual_inversion(repo_id_embeds) ``` Create a prompt with the pre-learned concept by using the special placeholder token `<cat-toy>`, and choose the number of samples and rows of images you'd like to generate: ```py prompt = "a grafitti in a favela wall with a <cat-toy> on it" num_samples_per_row = 2 num_rows = 2 ``` Then run the pipeline (feel free to adjust the parameters like `num_inference_steps` and `guidance_scale` to see how they affect image quality), save the generated images and visualize them with the helper function you created at the beginning: ```py all_images = [] for _ in range(num_rows): images = pipeline(prompt, num_images_per_prompt=num_samples_per_row, num_inference_steps=50, guidance_scale=7.5).images all_images.extend(images) grid = make_image_grid(all_images, num_rows, num_samples_per_row) grid ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/textual_inversion_inference.png"> </div> ## Stable Diffusion XL Stable Diffusion XL (SDXL) can also use textual inversion vectors for inference. In contrast to Stable Diffusion 1 and 2, SDXL has two text encoders so you'll need two textual inversion embeddings - one for each text encoder model. 
Let's download the SDXL textual inversion embeddings and have a closer look at its structure:

```py
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

file = hf_hub_download("dn118/unaestheticXL", filename="unaestheticXLv31.safetensors")
state_dict = load_file(file)
state_dict
```

```
{'clip_g': tensor([[ 0.0077, -0.0112,  0.0065,  ...,  0.0195,  0.0159,  0.0275],
        ...,
        [-0.0170,  0.0213,  0.0143,  ..., -0.0302, -0.0240, -0.0362]],
 'clip_l': tensor([[ 0.0023,  0.0192,  0.0213,  ..., -0.0385,  0.0048, -0.0011],
        ...,
        [ 0.0475, -0.0508, -0.0145,  ...,  0.0070, -0.0089, -0.0163]],
```

There are two tensors, `"clip_g"` and `"clip_l"`. `"clip_g"` corresponds to the larger text encoder in SDXL and refers to `pipe.text_encoder_2`, while `"clip_l"` refers to `pipe.text_encoder`.

Now you can load each tensor separately by passing them along with the correct text encoder and tokenizer to [`~loaders.TextualInversionLoaderMixin.load_textual_inversion`]:

```py
from diffusers import AutoPipelineForText2Image
import torch

pipe = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", variant="fp16", torch_dtype=torch.float16)
pipe.to("cuda")

pipe.load_textual_inversion(state_dict["clip_g"], token="unaestheticXLv31", text_encoder=pipe.text_encoder_2, tokenizer=pipe.tokenizer_2)
pipe.load_textual_inversion(state_dict["clip_l"], token="unaestheticXLv31", text_encoder=pipe.text_encoder, tokenizer=pipe.tokenizer)

# the embedding should be used as a negative embedding, so we pass it as a negative prompt
generator = torch.Generator().manual_seed(33)
image = pipe("a woman standing in front of a mountain", negative_prompt="unaestheticXLv31", generator=generator).images[0]
image
```
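As an optional sanity check, a possible way to see the effect of the embedding is to render the same prompt with and without the negative embedding, using the same seed, and compare the results side by side:

```py
from diffusers.utils import make_image_grid

prompt = "a woman standing in front of a mountain"

# without the negative embedding
generator = torch.Generator().manual_seed(33)
image_plain = pipe(prompt, generator=generator).images[0]

# with the negative embedding passed as a negative prompt
generator = torch.Generator().manual_seed(33)
image_negative = pipe(prompt, negative_prompt="unaestheticXLv31", generator=generator).images[0]

make_image_grid([image_plain, image_negative], rows=1, cols=2)
```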
huggingface/diffusers/blob/main/docs/source/en/using-diffusers/textual_inversion_inference.md
Using ESPnet at Hugging Face

`espnet` is an end-to-end toolkit for speech processing, including automatic speech recognition, text to speech, speech enhancement, diarization and other tasks.

## Exploring ESPnet in the Hub

You can find hundreds of `espnet` models by filtering at the left of the [models page](https://huggingface.co/models?library=espnet&sort=downloads).

All models on the Hub come with the following useful features:
1. An automatically generated model card with a description, a training configuration, licenses and more.
2. Metadata tags that help for discoverability and contain information such as license, language and datasets.
3. An interactive widget you can use to play with the model directly in the browser.
4. An Inference API that allows you to make inference requests.

<div class="flex justify-center">
<img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/libraries-espnet_widget.png"/>
<img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/libraries-espnet_widget-dark.png"/>
</div>

## Using existing models

For a full guide on loading pre-trained models, we recommend checking out the [official guide](https://github.com/espnet/espnet_model_zoo).

If you're interested in doing inference, different classes for different tasks have a `from_pretrained` method that allows loading models from the Hub. For example:

* `Speech2Text` for Automatic Speech Recognition.
* `Text2Speech` for Text to Speech.
* `SeparateSpeech` for Audio Source Separation.

Here is an inference example:

```py
import soundfile
from espnet2.bin.tts_inference import Text2Speech

text2speech = Text2Speech.from_pretrained("model_name")
speech = text2speech("foobar")["wav"]
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```

If you want to see how to load a specific model, you can click `Use in ESPnet` and you will be given a working snippet that you can use to load it!

<div class="flex justify-center">
<img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/libraries-espnet_snippet.png"/>
<img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/libraries-espnet_snippet-dark.png"/>
</div>

## Sharing your models

`ESPnet` outputs a `zip` file that can be uploaded to Hugging Face easily. For a full guide on sharing models, we recommend checking out the [official guide](https://github.com/espnet/espnet_model_zoo#register-your-model).

The `run.sh` script allows you to upload a given model to a Hugging Face repository.

```bash
./run.sh --stage 15 --skip_upload_hf false --hf_repo username/model_repo
```

## Additional resources

* ESPnet [docs](https://espnet.github.io/espnet/index.html).
* ESPnet model zoo [repository](https://github.com/espnet/espnet_model_zoo).
* Integration [docs](https://github.com/asteroid-team/asteroid/blob/master/docs/source/readmes/pretrained_models.md).
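As a complement to the text-to-speech snippet above, here is a hedged sketch of what ASR inference with `Speech2Text` typically looks like; the model name and audio file are placeholders, and the exact API may vary with your ESPnet version:

```py
import soundfile
from espnet2.bin.asr_inference import Speech2Text

speech2text = Speech2Text.from_pretrained("model_name")

# read a mono 16 kHz waveform and transcribe it
speech, rate = soundfile.read("speech.wav")
nbests = speech2text(speech)
text, *_ = nbests[0]
print(text)
```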
huggingface/hub-docs/blob/main/docs/hub/espnet.md
!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # SeamlessM4T-v2 ## Overview The SeamlessM4T-v2 model was proposed in [Seamless: Multilingual Expressive and Streaming Speech Translation](https://ai.meta.com/research/publications/seamless-multilingual-expressive-and-streaming-speech-translation/) by the Seamless Communication team from Meta AI. SeamlessM4T-v2 is a collection of models designed to provide high quality translation, allowing people from different linguistic communities to communicate effortlessly through speech and text. It is an improvement on the [previous version](https://huggingface.co/docs/transformers/main/model_doc/seamless_m4t). For more details on the differences between v1 and v2, refer to section [Difference with SeamlessM4T-v1](#difference-with-seamlessm4t-v1). SeamlessM4T-v2 enables multiple tasks without relying on separate models: - Speech-to-speech translation (S2ST) - Speech-to-text translation (S2TT) - Text-to-speech translation (T2ST) - Text-to-text translation (T2TT) - Automatic speech recognition (ASR) [`SeamlessM4Tv2Model`] can perform all the above tasks, but each task also has its own dedicated sub-model. The abstract from the paper is the following: *Recent advancements in automatic speech translation have dramatically expanded language coverage, improved multimodal capabilities, and enabled a wide range of tasks and functionalities. That said, large-scale automatic speech translation systems today lack key features that help machine-mediated communication feel seamless when compared to human-to-human dialogue. In this work, we introduce a family of models that enable end-to-end expressive and multilingual translations in a streaming fashion. First, we contribute an improved version of the massively multilingual and multimodal SeamlessM4T model—SeamlessM4T v2. This newer model, incorporating an updated UnitY2 framework, was trained on more low-resource language data. The expanded version of SeamlessAlign adds 114,800 hours of automatically aligned data for a total of 76 languages. SeamlessM4T v2 provides the foundation on which our two newest models, SeamlessExpressive and SeamlessStreaming, are initiated. SeamlessExpressive enables translation that preserves vocal styles and prosody. Compared to previous efforts in expressive speech research, our work addresses certain underexplored aspects of prosody, such as speech rate and pauses, while also preserving the style of one’s voice. As for SeamlessStreaming, our model leverages the Efficient Monotonic Multihead Attention (EMMA) mechanism to generate low-latency target translations without waiting for complete source utterances. As the first of its kind, SeamlessStreaming enables simultaneous speech-to-speech/text translation for multiple source and target languages. To understand the performance of these models, we combined novel and modified versions of existing automatic metrics to evaluate prosody, latency, and robustness. 
For human evaluations, we adapted existing protocols tailored for measuring the most relevant attributes in the preservation of meaning, naturalness, and expressivity. To ensure that our models can be used safely and responsibly, we implemented the first known red-teaming effort for multimodal machine translation, a system for the detection and mitigation of added toxicity, a systematic evaluation of gender bias, and an inaudible localized watermarking mechanism designed to dampen the impact of deepfakes. Consequently, we bring major components from SeamlessExpressive and SeamlessStreaming together to form Seamless, the first publicly available system that unlocks expressive cross-lingual communication in real-time. In sum, Seamless gives us a pivotal look at the technical foundation needed to turn the Universal Speech Translator from a science fiction concept into a real-world technology. Finally, contributions in this work—including models, code, and a watermark detector—are publicly released and accessible at the link below.* ## Usage In the following example, we'll load an Arabic audio sample and an English text sample and convert them into Russian speech and French text. First, load the processor and a checkpoint of the model: ```python >>> from transformers import AutoProcessor, SeamlessM4Tv2Model >>> processor = AutoProcessor.from_pretrained("facebook/seamless-m4t-v2-large") >>> model = SeamlessM4Tv2Model.from_pretrained("facebook/seamless-m4t-v2-large") ``` You can seamlessly use this model on text or on audio, to generated either translated text or translated audio. Here is how to use the processor to process text and audio: ```python >>> # let's load an audio sample from an Arabic speech corpus >>> from datasets import load_dataset >>> dataset = load_dataset("arabic_speech_corpus", split="test", streaming=True) >>> audio_sample = next(iter(dataset))["audio"] >>> # now, process it >>> audio_inputs = processor(audios=audio_sample["array"], return_tensors="pt") >>> # now, process some English text as well >>> text_inputs = processor(text = "Hello, my dog is cute", src_lang="eng", return_tensors="pt") ``` ### Speech [`SeamlessM4Tv2Model`] can *seamlessly* generate text or speech with few or no changes. Let's target Russian voice translation: ```python >>> audio_array_from_text = model.generate(**text_inputs, tgt_lang="rus")[0].cpu().numpy().squeeze() >>> audio_array_from_audio = model.generate(**audio_inputs, tgt_lang="rus")[0].cpu().numpy().squeeze() ``` With basically the same code, I've translated English text and Arabic speech to Russian speech samples. ### Text Similarly, you can generate translated text from audio files or from text with the same model. You only have to pass `generate_speech=False` to [`SeamlessM4Tv2Model.generate`]. This time, let's translate to French. ```python >>> # from audio >>> output_tokens = model.generate(**audio_inputs, tgt_lang="fra", generate_speech=False) >>> translated_text_from_audio = processor.decode(output_tokens[0].tolist()[0], skip_special_tokens=True) >>> # from text >>> output_tokens = model.generate(**text_inputs, tgt_lang="fra", generate_speech=False) >>> translated_text_from_text = processor.decode(output_tokens[0].tolist()[0], skip_special_tokens=True) ``` ### Tips #### 1. Use dedicated models [`SeamlessM4Tv2Model`] is transformers top level model to generate speech and text, but you can also use dedicated models that perform the task without additional components, thus reducing the memory footprint. 
For example, you can replace the audio-to-audio generation snippet with the model dedicated to the S2ST task; the rest of the code is exactly the same:

```python
>>> from transformers import SeamlessM4Tv2ForSpeechToSpeech
>>> model = SeamlessM4Tv2ForSpeechToSpeech.from_pretrained("facebook/seamless-m4t-v2-large")
```

Or you can replace the text-to-text generation snippet with the model dedicated to the T2TT task; you only have to remove `generate_speech=False`.

```python
>>> from transformers import SeamlessM4Tv2ForTextToText
>>> model = SeamlessM4Tv2ForTextToText.from_pretrained("facebook/seamless-m4t-v2-large")
```

Feel free to try out [`SeamlessM4Tv2ForSpeechToText`] and [`SeamlessM4Tv2ForTextToSpeech`] as well.

#### 2. Change the speaker identity

You can change the speaker used for speech synthesis with the `speaker_id` argument. Some `speaker_id` values work better than others for some languages!

#### 3. Change the generation strategy

You can use different [generation strategies](../generation_strategies) for text generation, e.g. `.generate(input_ids=input_ids, text_num_beams=4, text_do_sample=True)`, which will perform multinomial beam-search decoding on the text model. Note that speech generation only supports greedy (the default) or multinomial sampling, which can be used with e.g. `.generate(..., speech_do_sample=True, speech_temperature=0.6)`.

#### 4. Generate speech and text at the same time

Use `return_intermediate_token_ids=True` with [`SeamlessM4Tv2Model`] to return both speech and text!

## Model architecture

SeamlessM4T-v2 features a versatile architecture that smoothly handles the sequential generation of text and speech. This setup comprises two sequence-to-sequence (seq2seq) models. The first model translates the input modality into translated text, while the second model generates speech tokens, known as "unit tokens," from the translated text.

Each modality has its own dedicated encoder with a unique architecture. Additionally, for speech output, a vocoder inspired by the [HiFi-GAN](https://arxiv.org/abs/2010.05646) architecture is placed on top of the second seq2seq model.

### Difference with SeamlessM4T-v1

The architecture of this new version differs from the first in a few aspects:

#### Improvements on the second-pass model

The second seq2seq model, called the text-to-unit model, is now non-autoregressive, meaning that it computes units in a **single forward pass**. This achievement is made possible by:
- the use of **character-level embeddings**, meaning that each character of the predicted translated text has its own embeddings, which are then used to predict the unit tokens.
- the use of an intermediate duration predictor, which predicts speech duration at the **character-level** on the predicted translated text.
- the use of a new text-to-unit decoder mixing convolutions and self-attention to handle longer context.

#### Difference in the speech encoder

The speech encoder, which is used during the first-pass generation process to predict the translated text, differs mainly from the previous speech encoder through these mechanisms:
- the use of a chunked attention mask to prevent attention across chunks, ensuring that each position attends only to positions within its own chunk and a fixed number of previous chunks.
- the use of relative position embeddings which only consider the distance between sequence elements rather than absolute positions.
Please refer to [Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155) for more details.
- the use of a causal depth-wise convolution instead of a non-causal one.

### Generation process

Here's how the generation process works:

- Input text or speech is processed through its specific encoder.
- A decoder creates text tokens in the desired language.
- If speech generation is required, the second seq2seq model generates unit tokens in a non-autoregressive way.
- These unit tokens are then passed through the final vocoder to produce the actual speech.

This model was contributed by [ylacombe](https://huggingface.co/ylacombe). The original code can be found [here](https://github.com/facebookresearch/seamless_communication).

## SeamlessM4Tv2Model

[[autodoc]] SeamlessM4Tv2Model
    - generate

## SeamlessM4Tv2ForTextToSpeech

[[autodoc]] SeamlessM4Tv2ForTextToSpeech
    - generate

## SeamlessM4Tv2ForSpeechToSpeech

[[autodoc]] SeamlessM4Tv2ForSpeechToSpeech
    - generate

## SeamlessM4Tv2ForTextToText

[[autodoc]] transformers.SeamlessM4Tv2ForTextToText
    - forward
    - generate

## SeamlessM4Tv2ForSpeechToText

[[autodoc]] transformers.SeamlessM4Tv2ForSpeechToText
    - forward
    - generate

## SeamlessM4Tv2Config

[[autodoc]] SeamlessM4Tv2Config
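As a practical complement to the usage examples above, here is a small sketch of saving one of the generated waveforms to disk. It assumes the `model` and `audio_array_from_text` objects from the speech example earlier and reads the output sampling rate from the model config:

```python
>>> from scipy.io import wavfile

>>> sample_rate = model.config.sampling_rate
>>> wavfile.write("out_from_text.wav", rate=sample_rate, data=audio_array_from_text)
```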
huggingface/transformers/blob/main/docs/source/en/model_doc/seamless_m4t_v2.md
Model Card components

**Model Card Components** are special elements that you can inject directly into your Model Card markdown to display powerful custom components in your model page. These components are authored by us; feel free to share ideas about new Model Card components in [this discussion](https://huggingface.co/spaces/huggingface/HuggingDiscussions/discussions/17).

## The Gallery component

Add the `<Gallery />` component to your text-to-image model card to showcase your image generations.

For example,

```md
<Gallery />

## Model description

TintinIA is a fine-tuned version of Stable-Diffusion-xl trained on 125 comic panels from Tintin albums.
```

<div class="flex justify-center">
<img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/models-gallery.png"/>
<img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/models-gallery-dark.png"/>
</div>

The `<Gallery />` component will use your Model Card [widget metadata](/docs/hub/models-widgets-examples#text-to-image) to display the images with each associated prompt.

```yaml
widget:
  - text: "drawing of tintin in a shop"
    output:
      url: "images/shop.png"
  - text: "drawing of tintin watching rugby"
    output:
      url: "images/rugby.png"
    parameters:
      negative_prompt: "blurry"
  - text: "tintin working at the office"
    output:
      url: "images/office.png"
```

> Hint: Support for Card Components through the GUI editor is coming soon...
huggingface/hub-docs/blob/main/docs/hub/model-cards-components.md
Discord 101 [[discord-101]]

Hey there! My name is Huggy, the dog 🐕, and I'm looking forward to training with you during this RL Course! Although I don't know much about fetching sticks (yet), I know one or two things about Discord. So I wrote this guide to help you learn about it!

<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit0/huggy-logo.jpg" alt="Huggy Logo"/>

Discord is a free chat platform. If you've used Slack, **it's quite similar**. There is a Hugging Face Community Discord server with 50,000 members that you can <a href="https://discord.gg/ydHrjt3WP5">join with a single click here</a>. So many humans to play with!

Starting in Discord can be a bit intimidating, so let me take you through it.

When you [sign up to our Discord server](http://hf.co/join/discord), you'll choose your interests. Make sure to **click "Reinforcement Learning,"** and you'll get access to the Reinforcement Learning Category containing all the course-related channels. If you feel like joining even more channels, go for it! 🚀

Then click next, and you'll get to **introduce yourself in the `#introduce-yourself` channel**.

<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit0/discord2.jpg" alt="Discord"/>

The course channels are in the reinforcement learning category. **Don't forget to sign up to these channels** by clicking on 🤖 Reinforcement Learning in `role-assigment`.

- `rl-announcements`: where we give the **latest information about the course**.
- `rl-discussions`: where you can **discuss RL and share information**.
- `rl-study-group`: where you can **ask questions and exchange with your classmates**.
- `rl-i-made-this`: where you can **share your projects and models**.

The HF Community Server has a thriving community of human beings interested in many areas, so you can also learn from them. There are paper discussions, events, and many other things.

Was this useful? There are a couple of tips I can share with you:

- There are **voice channels** you can use as well, although most people prefer text chat.
- You can **use markdown style** for text chats. So if you're writing code, you can use that style. Sadly this does not work as well for links.
- You can open threads as well! It's a good idea when **it's a long conversation**.

I hope this is useful! And if you have questions, just ask!

See you later!

Huggy 🐶
huggingface/deep-rl-class/blob/main/units/en/unit0/discord101.mdx
!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Wav2Vec2 ## Overview The Wav2Vec2 model was proposed in [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli. The abstract from the paper is the following: *We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.* This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). ## Usage tips - Wav2Vec2 is a speech model that accepts a float array corresponding to the raw waveform of the speech signal. - Wav2Vec2 model was trained using connectionist temporal classification (CTC) so the model output has to be decoded using [`Wav2Vec2CTCTokenizer`]. ## Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Wav2Vec2. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. <PipelineTag pipeline="audio-classification"/> - A notebook on how to [leverage a pretrained Wav2Vec2 model for emotion classification](https://colab.research.google.com/github/m3hrdadfi/soxan/blob/main/notebooks/Emotion_recognition_in_Greek_speech_using_Wav2Vec2.ipynb). 🌎 - [`Wav2Vec2ForCTC`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/audio-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/audio_classification.ipynb). - [Audio classification task guide](../tasks/audio_classification) <PipelineTag pipeline="automatic-speech-recognition"/> - A blog post on [boosting Wav2Vec2 with n-grams in 🤗 Transformers](https://huggingface.co/blog/wav2vec2-with-ngram). - A blog post on how to [finetune Wav2Vec2 for English ASR with 🤗 Transformers](https://huggingface.co/blog/fine-tune-wav2vec2-english). 
- A blog post on [finetuning XLS-R for Multi-Lingual ASR with 🤗 Transformers](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2). - A notebook on how to [create YouTube captions from any video by transcribing audio with Wav2Vec2](https://colab.research.google.com/github/Muennighoff/ytclipcc/blob/main/wav2vec_youtube_captions.ipynb). 🌎 - [`Wav2Vec2ForCTC`] is supported by a notebook on [how to finetune a speech recognition model in English](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/speech_recognition.ipynb), and [how to finetune a speech recognition model in any language](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multi_lingual_speech_recognition.ipynb). - [Automatic speech recognition task guide](../tasks/asr) 🚀 Deploy - A blog post on how to deploy Wav2Vec2 for [Automatic Speech Recogntion with Hugging Face's Transformers & Amazon SageMaker](https://www.philschmid.de/automatic-speech-recognition-sagemaker). ## Wav2Vec2Config [[autodoc]] Wav2Vec2Config ## Wav2Vec2CTCTokenizer [[autodoc]] Wav2Vec2CTCTokenizer - __call__ - save_vocabulary - decode - batch_decode - set_target_lang ## Wav2Vec2FeatureExtractor [[autodoc]] Wav2Vec2FeatureExtractor - __call__ ## Wav2Vec2Processor [[autodoc]] Wav2Vec2Processor - __call__ - pad - from_pretrained - save_pretrained - batch_decode - decode ## Wav2Vec2ProcessorWithLM [[autodoc]] Wav2Vec2ProcessorWithLM - __call__ - pad - from_pretrained - save_pretrained - batch_decode - decode ### Decoding multiple audios If you are planning to decode multiple batches of audios, you should consider using [`~Wav2Vec2ProcessorWithLM.batch_decode`] and passing an instantiated `multiprocessing.Pool`. Otherwise, [`~Wav2Vec2ProcessorWithLM.batch_decode`] performance will be slower than calling [`~Wav2Vec2ProcessorWithLM.decode`] for each audio individually, as it internally instantiates a new `Pool` for every call. See the example below: ```python >>> # Let's see how to use a user-managed pool for batch decoding multiple audios >>> from multiprocessing import get_context >>> from transformers import AutoTokenizer, AutoProcessor, AutoModelForCTC >>> from datasets import load_dataset >>> import datasets >>> import torch >>> # import model, feature extractor, tokenizer >>> model = AutoModelForCTC.from_pretrained("patrickvonplaten/wav2vec2-base-100h-with-lm").to("cuda") >>> processor = AutoProcessor.from_pretrained("patrickvonplaten/wav2vec2-base-100h-with-lm") >>> # load example dataset >>> dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") >>> dataset = dataset.cast_column("audio", datasets.Audio(sampling_rate=16_000)) >>> def map_to_array(batch): ... batch["speech"] = batch["audio"]["array"] ... return batch >>> # prepare speech data for batch inference >>> dataset = dataset.map(map_to_array, remove_columns=["audio"]) >>> def map_to_pred(batch, pool): ... inputs = processor(batch["speech"], sampling_rate=16_000, padding=True, return_tensors="pt") ... inputs = {k: v.to("cuda") for k, v in inputs.items()} ... with torch.no_grad(): ... logits = model(**inputs).logits ... transcription = processor.batch_decode(logits.cpu().numpy(), pool).text ... batch["transcription"] = transcription ... return batch >>> # note: pool should be instantiated *after* `Wav2Vec2ProcessorWithLM`. 
>>> # otherwise, the LM won't be available to the pool's sub-processes >>> # select number of processes and batch_size based on number of CPU cores available and on dataset size >>> with get_context("fork").Pool(processes=2) as pool: ... result = dataset.map( ... map_to_pred, batched=True, batch_size=2, fn_kwargs={"pool": pool}, remove_columns=["speech"] ... ) >>> result["transcription"][:2] ['MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL', "NOR IS MISTER COULTER'S MANNER LESS INTERESTING THAN HIS MATTER"] ``` ## Wav2Vec2 specific outputs [[autodoc]] models.wav2vec2_with_lm.processing_wav2vec2_with_lm.Wav2Vec2DecoderWithLMOutput [[autodoc]] models.wav2vec2.modeling_wav2vec2.Wav2Vec2BaseModelOutput [[autodoc]] models.wav2vec2.modeling_wav2vec2.Wav2Vec2ForPreTrainingOutput [[autodoc]] models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2BaseModelOutput [[autodoc]] models.wav2vec2.modeling_flax_wav2vec2.FlaxWav2Vec2ForPreTrainingOutput <frameworkcontent> <pt> ## Wav2Vec2Model [[autodoc]] Wav2Vec2Model - forward ## Wav2Vec2ForCTC [[autodoc]] Wav2Vec2ForCTC - forward - load_adapter ## Wav2Vec2ForSequenceClassification [[autodoc]] Wav2Vec2ForSequenceClassification - forward ## Wav2Vec2ForAudioFrameClassification [[autodoc]] Wav2Vec2ForAudioFrameClassification - forward ## Wav2Vec2ForXVector [[autodoc]] Wav2Vec2ForXVector - forward ## Wav2Vec2ForPreTraining [[autodoc]] Wav2Vec2ForPreTraining - forward </pt> <tf> ## TFWav2Vec2Model [[autodoc]] TFWav2Vec2Model - call ## TFWav2Vec2ForSequenceClassification [[autodoc]] TFWav2Vec2ForSequenceClassification - call ## TFWav2Vec2ForCTC [[autodoc]] TFWav2Vec2ForCTC - call </tf> <jax> ## FlaxWav2Vec2Model [[autodoc]] FlaxWav2Vec2Model - __call__ ## FlaxWav2Vec2ForCTC [[autodoc]] FlaxWav2Vec2ForCTC - __call__ ## FlaxWav2Vec2ForPreTraining [[autodoc]] FlaxWav2Vec2ForPreTraining - __call__ </jax> </frameworkcontent>
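To make the CTC decoding tip above concrete, here is a minimal greedy-decoding sketch. It assumes the `facebook/wav2vec2-base-960h` checkpoint and the same dummy LibriSpeech split used in the batch-decoding example; it is an illustration, not part of the API reference above.

```python
import torch
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# load a pretrained checkpoint and its processor (feature extractor + CTC tokenizer)
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

# grab a 16 kHz audio example
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio = dataset[0]["audio"]["array"]

# raw waveform in, CTC logits out
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# greedy CTC decoding: argmax per frame, then collapse repeats and blanks in `batch_decode`
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)[0]
print(transcription)
```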
huggingface/transformers/blob/main/docs/source/en/model_doc/wav2vec2.md
Semantic segmentation Semantic segmentation datasets are used to train a model to classify every pixel in an image. There are a wide variety of applications enabled by these datasets such as background removal from images, stylizing images, or scene understanding for autonomous driving. This guide will show you how to apply transformations to an image segmentation dataset. Before you start, make sure you have up-to-date versions of `albumentations` and `cv2` installed: ```bash pip install -U albumentations opencv-python ``` [Albumentations](https://albumentations.ai/) is a Python library for performing data augmentation for computer vision. It supports various computer vision tasks such as image classification, object detection, segmentation, and keypoint estimation. This guide uses the [Scene Parsing](https://huggingface.co/datasets/scene_parse_150) dataset for segmenting and parsing an image into different image regions associated with semantic categories, such as sky, road, person, and bed. Load the `train` split of the dataset and take a look at an example: ```py >>> from datasets import load_dataset >>> dataset = load_dataset("scene_parse_150", split="train") >>> index = 10 >>> dataset[index] {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=683x512 at 0x7FB37B0EC810>, 'annotation': <PIL.PngImagePlugin.PngImageFile image mode=L size=683x512 at 0x7FB37B0EC9D0>, 'scene_category': 927} ``` The dataset has three fields: * `image`: a PIL image object. * `annotation`: segmentation mask of the image. * `scene_category`: the label or scene category of the image (like “kitchen” or “office”). Next, check out an image with: ```py >>> dataset[index]["image"] ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets/image_seg.png"> </div> Similarly, you can check out the respective segmentation mask: ```py >>> dataset[index]["annotation"] ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets/seg_mask.png"> </div> We can also add a [color palette](https://github.com/tensorflow/models/blob/3f1ca33afe3c1631b733ea7e40c294273b9e406d/research/deeplab/utils/get_dataset_colormap.py#L51) on the segmentation mask and overlay it on top of the original image to visualize the dataset: After defining the color palette, you should be ready to visualize some overlays. ```py >>> import matplotlib.pyplot as plt >>> def visualize_seg_mask(image: np.ndarray, mask: np.ndarray): ... color_seg = np.zeros((mask.shape[0], mask.shape[1], 3), dtype=np.uint8) ... palette = np.array(create_ade20k_label_colormap()) ... for label, color in enumerate(palette): ... color_seg[mask == label, :] = color ... color_seg = color_seg[..., ::-1] # convert to BGR ... img = np.array(image) * 0.5 + color_seg * 0.5 # plot the image with the segmentation map ... img = img.astype(np.uint8) ... plt.figure(figsize=(15, 10)) ... plt.imshow(img) ... plt.axis("off") ... plt.show() >>> visualize_seg_mask( ... np.array(dataset[index]["image"]), ... np.array(dataset[index]["annotation"]) ... ) ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets/seg_overlay.png"> </div> Now apply some augmentations with `albumentations`. You’ll first resize the image and adjust its brightness. ```py >>> import albumentations >>> transform = albumentations.Compose( ... [ ... albumentations.Resize(256, 256), ... 
albumentations.RandomBrightnessContrast(brightness_limit=0.3, contrast_limit=0.3, p=0.5), ... ] ... ) ``` Create a function to apply the transformation to the images: ```py >>> def transforms(examples): ... transformed_images, transformed_masks = [], [] ... ... for image, seg_mask in zip(examples["image"], examples["annotation"]): ... image, seg_mask = np.array(image), np.array(seg_mask) ... transformed = transform(image=image, mask=seg_mask) ... transformed_images.append(transformed["image"]) ... transformed_masks.append(transformed["mask"]) ... ... examples["pixel_values"] = transformed_images ... examples["label"] = transformed_masks ... return examples ``` Use the [`~Dataset.set_transform`] function to apply the transformation on-the-fly to batches of the dataset to consume less disk space: ```py >>> dataset.set_transform(transforms) ``` You can verify the transformation worked by indexing into the `pixel_values` and `label` of an example: ```py >>> image = np.array(dataset[index]["pixel_values"]) >>> mask = np.array(dataset[index]["label"]) >>> visualize_seg_mask(image, mask) ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets/albumentations_seg.png"> </div> In this guide, you have used `albumentations` for augmenting the dataset. It's also possible to use `torchvision` to apply some similar transforms. ```py >>> from torchvision.transforms import Resize, ColorJitter, Compose >>> transformation_chain = Compose([ ... Resize((256, 256)), ... ColorJitter(brightness=0.25, contrast=0.25, saturation=0.25, hue=0.1) ... ]) >>> resize = Resize((256, 256)) >>> def train_transforms(example_batch): ... example_batch["pixel_values"] = [transformation_chain(x) for x in example_batch["image"]] ... example_batch["label"] = [resize(x) for x in example_batch["annotation"]] ... return example_batch >>> dataset.set_transform(train_transforms) >>> image = np.array(dataset[index]["pixel_values"]) >>> mask = np.array(dataset[index]["label"]) >>> visualize_seg_mask(image, mask) ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets/torchvision_seg.png"> </div> <Tip> Now that you know how to process a dataset for semantic segmentation, learn [how to train a semantic segmentation model](https://huggingface.co/docs/transformers/tasks/semantic_segmentation) and use it for inference. </Tip>
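If you plan to feed these on-the-fly transforms into a training loop, a minimal sketch of wrapping the dataset in a PyTorch `DataLoader` could look like the following; the collate function and batch size are illustrative choices rather than part of the dataset API:

```py
>>> import numpy as np
>>> import torch
>>> from torch.utils.data import DataLoader

>>> def collate_fn(examples):
...     # stack (H, W, C) images into a (N, C, H, W) float tensor and masks into a (N, H, W) long tensor
...     images = torch.stack(
...         [torch.from_numpy(np.array(example["pixel_values"])).permute(2, 0, 1) for example in examples]
...     ).float()
...     masks = torch.stack([torch.from_numpy(np.array(example["label"])).long() for example in examples])
...     return {"pixel_values": images, "labels": masks}

>>> dataloader = DataLoader(dataset, batch_size=4, shuffle=True, collate_fn=collate_fn)
>>> batch = next(iter(dataloader))
>>> batch["pixel_values"].shape, batch["labels"].shape
(torch.Size([4, 3, 256, 256]), torch.Size([4, 256, 256]))
```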
huggingface/datasets/blob/main/docs/source/semantic_segmentation.mdx
Introduction [[introduction]]

One of the most critical tasks in Deep Reinforcement Learning is to **find a good set of training hyperparameters**.

<img src="https://raw.githubusercontent.com/optuna/optuna/master/docs/image/optuna-logo.png" alt="Optuna Logo"/>

[Optuna](https://optuna.org/) is a library that helps you automate this search. In this Unit, we'll study a **little bit of the theory behind automatic hyperparameter tuning**. We'll first try to manually optimize the parameters of the DQN studied in the last unit. We'll then **learn how to automate the search using Optuna**.
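To give you a first feel for what that automation looks like, here is a minimal Optuna sketch. The `train_and_evaluate_dqn` function is a hypothetical placeholder for your own training and evaluation loop, and the search ranges are purely illustrative:

```python
import optuna


def objective(trial):
    # sample candidate hyperparameters (ranges are illustrative)
    learning_rate = trial.suggest_float("learning_rate", 1e-5, 1e-2, log=True)
    gamma = trial.suggest_float("gamma", 0.95, 0.9999)
    batch_size = trial.suggest_categorical("batch_size", [32, 64, 128])

    # hypothetical helper: trains a DQN with these hyperparameters and
    # returns the mean episodic reward of the trained agent
    return train_and_evaluate_dqn(learning_rate, gamma, batch_size)


study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
print(study.best_params)
```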
huggingface/deep-rl-class/blob/main/units/en/unitbonus2/introduction.mdx
-- title: "AudioLDM 2, but faster ⚡️" thumbnail: /blog/assets/161_audioldm2/thumbnail.png authors: - user: sanchit-gandhi --- # AudioLDM 2, but faster ⚡️ <a target="_blank" href="https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/AudioLDM-2.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> AudioLDM 2 was proposed in [AudioLDM 2: Learning Holistic Audio Generation with Self-supervised Pretraining](https://arxiv.org/abs/2308.05734) by Haohe Liu et al. AudioLDM 2 takes a text prompt as input and predicts the corresponding audio. It can generate realistic sound effects, human speech and music. While the generated audios are of high quality, running inference with the original implementation is very slow: a 10 second audio sample takes upwards of 30 seconds to generate. This is due to a combination of factors, including a deep multi-stage modelling approach, large checkpoint sizes, and un-optimised code. In this blog post, we showcase how to use AudioLDM 2 in the Hugging Face 🧨 Diffusers library, exploring a range of code optimisations such as half-precision, flash attention, and compilation, and model optimisations such as scheduler choice and negative prompting, to reduce the inference time by over **10 times**, with minimal degradation in quality of the output audio. The blog post is also accompanied by a more streamlined [Colab notebook](https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/AudioLDM-2.ipynb), that contains all the code but fewer explanations. Read to the end to find out how to generate a 10 second audio sample in just 1 second! ## Model overview Inspired by [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/overview), AudioLDM 2 is a text-to-audio _latent diffusion model (LDM)_ that learns continuous audio representations from text embeddings. The overall generation process is summarised as follows: 1. Given a text input \\(\boldsymbol{x}\\), two text encoder models are used to compute the text embeddings: the text-branch of [CLAP](https://huggingface.co/docs/transformers/main/en/model_doc/clap), and the text-encoder of [Flan-T5](https://huggingface.co/docs/transformers/main/en/model_doc/flan-t5) $$ \boldsymbol{E}_{1} = \text{CLAP}\left(\boldsymbol{x} \right); \quad \boldsymbol{E}_{2} = \text{T5}\left(\boldsymbol{x}\right) $$ The CLAP text embeddings are trained to be aligned with the embeddings of the corresponding audio sample, whereas the Flan-T5 embeddings give a better representation of the semantics of the text. 2. These text embeddings are projected to a shared embedding space through individual linear projections: $$ \boldsymbol{P}_{1} = \boldsymbol{W}_{\text{CLAP}} \boldsymbol{E}_{1}; \quad \boldsymbol{P}_{2} = \boldsymbol{W}_{\text{T5}}\boldsymbol{E}_{2} $$ In the `diffusers` implementation, these projections are defined by the [AudioLDM2ProjectionModel](https://huggingface.co/docs/diffusers/api/pipelines/audioldm2/AudioLDM2ProjectionModel). 3. A [GPT2](https://huggingface.co/docs/transformers/main/en/model_doc/gpt2) language model (LM) is used to auto-regressively generate a sequence of \\(N\\) new embedding vectors, conditional on the projected CLAP and Flan-T5 embeddings: $$ \tilde{\boldsymbol{E}}_{i} = \text{GPT2}\left(\boldsymbol{P}_{1}, \boldsymbol{P}_{2}, \tilde{\boldsymbol{E}}_{1:i-1}\right) \qquad \text{for } i=1,\dots,N $$ 4. 
The generated embedding vectors \\(\tilde{\boldsymbol{E}}_{1:N}\\) and Flan-T5 text embeddings \\(\boldsymbol{E}_{2}\\) are used as cross-attention conditioning in the LDM, which *de-noises* a random latent via a reverse diffusion process. The LDM is run in the reverse diffusion process for a total of \\(T\\) inference steps: $$ \boldsymbol{z}_{t} = \text{LDM}\left(\boldsymbol{z}_{t-1} | \tilde{\boldsymbol{E}}_{1:N}, \boldsymbol{E}_{2}\right) \qquad \text{for } t = 1, \dots, T $$ where the initial latent variable \\(\boldsymbol{z}_{0}\\) is drawn from a normal distribution \\(\mathcal{N} \left(\boldsymbol{0}, \boldsymbol{I} \right)\\). The [UNet](https://huggingface.co/docs/diffusers/api/pipelines/audioldm2/AudioLDM2UNet2DConditionModel) of the LDM is unique in the sense that it takes **two** sets of cross-attention embeddings, \\(\tilde{\boldsymbol{E}}_{1:N}\\) from the GPT2 language model and \\(\boldsymbol{E}_{2}\\) from Flan-T5, as opposed to one cross-attention conditioning as in most other LDMs. 5. The final de-noised latents \\(\boldsymbol{z}_{T}\\) are passed to the VAE decoder to recover the Mel spectrogram \\(\boldsymbol{s}\\): $$ \boldsymbol{s} = \text{VAE}_{\text{dec}} \left(\boldsymbol{z}_{T}\right) $$ 6. The Mel spectrogram is passed to the vocoder to obtain the output audio waveform \\(\mathbf{y}\\): $$ \boldsymbol{y} = \text{Vocoder}\left(\boldsymbol{s}\right) $$ The diagram below demonstrates how a text input is passed through the text conditioning models, with the two prompt embeddings used as cross-conditioning in the LDM: <p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/161_audioldm2/audioldm2.png?raw=true" width="600"/> </p> For full details on how the AudioLDM 2 model is trained, the reader is referred to the [AudioLDM 2 paper](https://arxiv.org/abs/2308.05734). Hugging Face 🧨 Diffusers provides an end-to-end inference pipeline class [`AudioLDM2Pipeline`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/audioldm2) that wraps this multi-stage generation process into a single callable object, enabling you to generate audio samples from text in just a few lines of code. AudioLDM 2 comes in three variants. Two of these checkpoints are applicable to the general task of text-to-audio generation. The third checkpoint is trained exclusively on text-to-music generation. See the table below for details on the three official checkpoints, which can all be found on the [Hugging Face Hub](https://huggingface.co/models?search=cvssp/audioldm2): | Checkpoint | Task | Model Size | Training Data / h | |-----------------------------------------------------------------------|---------------|------------|-------------------| | [cvssp/audioldm2](https://huggingface.co/cvssp/audioldm2) | Text-to-audio | 1.1B | 1150k | | [cvssp/audioldm2-music](https://huggingface.co/cvssp/audioldm2-music) | Text-to-music | 1.1B | 665k | | [cvssp/audioldm2-large](https://huggingface.co/cvssp/audioldm2-large) | Text-to-audio | 1.5B | 1150k | Now that we've covered a high-level overview of how the AudioLDM 2 generation process works, let's put this theory into practice! ## Load the pipeline For the purposes of this tutorial, we'll initialise the pipeline with the pre-trained weights from the base checkpoint, [cvssp/audioldm2](https://huggingface.co/cvssp/audioldm2). 
We can load the entirety of the pipeline using the [`.from_pretrained`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained) method, which will instantiate the pipeline and load the pre-trained weights: ```python from diffusers import AudioLDM2Pipeline model_id = "cvssp/audioldm2" pipe = AudioLDM2Pipeline.from_pretrained(model_id) ``` **Output:** ``` Loading pipeline components...: 100%|███████████████████████████████████████████| 11/11 [00:01<00:00, 7.62it/s] ``` The pipeline can be moved to the GPU in much the same way as a standard PyTorch nn module: ```python pipe.to("cuda"); ``` Great! We'll define a Generator and set a seed for reproducibility. This will allow us to tweak our prompts and observe the effect that they have on the generations by fixing the starting latents in the LDM model: ```python import torch generator = torch.Generator("cuda").manual_seed(0) ``` Now we're ready to perform our first generation! We'll use the same running example throughout this notebook, where we'll condition the audio generations on a fixed text prompt and use the same seed throughout. The [`audio_length_in_s`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/audioldm2#diffusers.AudioLDM2Pipeline.__call__.audio_length_in_s) argument controls the length of the generated audio. It defaults to the audio length that the LDM was trained on (10.24 seconds): ```python prompt = "The sound of Brazilian samba drums with waves gently crashing in the background" audio = pipe(prompt, audio_length_in_s=10.24, generator=generator).audios[0] ``` **Output:** ``` 100%|███████████████████████████████████████████| 200/200 [00:13<00:00, 15.27it/s] ``` Cool! That run took about 13 seconds to generate. Let's have a listen to the output audio: ```python from IPython.display import Audio Audio(audio, rate=16000) ``` <audio controls> <source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/161_audioldm2/sample_1.wav" type="audio/wav"> Your browser does not support the audio element. </audio> Sounds much like our text prompt! The quality is good, but still has artefacts of background noise. We can provide the pipeline with a [*negative prompt*](https://huggingface.co/docs/diffusers/main/en/api/pipelines/audioldm2#diffusers.AudioLDM2Pipeline.__call__.negative_prompt) to discourage the pipeline from generating certain features. In this case, we'll pass a negative prompt that discourages the model from generating low quality audio in the outputs. We'll omit the `audio_length_in_s` argument and leave it to take its default value: ```python negative_prompt = "Low quality, average quality." audio = pipe(prompt, negative_prompt=negative_prompt, generator=generator.manual_seed(0)).audios[0] ``` **Output:** ``` 100%|███████████████████████████████████████████| 200/200 [00:12<00:00, 16.50it/s] ``` The inference time is un-changed when using a negative prompt \\({}^1\\); we simply replace the unconditional input to the LDM with the negative input. That means any gains we get in audio quality we get for free. Let's take a listen to the resulting audio: ```python Audio(audio, rate=16000) ``` <audio controls> <source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/161_audioldm2/sample_2.wav" type="audio/wav"> Your browser does not support the audio element. 
</audio>

There's definitely an improvement in the overall audio quality - there are fewer noise artefacts and the audio generally sounds sharper.

\\({}^1\\) Note that in practice, we typically see a reduction in inference time going from our first generation to our second. This is due to a CUDA "warm-up" that occurs the first time we run the computation. The second generation is a better benchmark for our actual inference time.

## Optimisation 1: Flash Attention

PyTorch 2.0 and later include an optimised and memory-efficient implementation of the attention operation through the [`torch.nn.functional.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention) (SDPA) function. This function automatically applies several in-built optimisations depending on the inputs, and runs faster and more memory-efficiently than the vanilla attention implementation. Overall, the SDPA function gives similar behaviour to *flash attention*, as proposed in the paper [Fast and Memory-Efficient Exact Attention with IO-Awareness](https://arxiv.org/abs/2205.14135) by Dao et al.

These optimisations are enabled by default in Diffusers if PyTorch 2.0 is installed and if `torch.nn.functional.scaled_dot_product_attention` is available. To use it, just install torch 2.0 or higher as per the [official instructions](https://pytorch.org/get-started/locally/), and then use the pipeline as is 🚀

```python
audio = pipe(prompt, negative_prompt=negative_prompt, generator=generator.manual_seed(0)).audios[0]
```

**Output:**
```
100%|███████████████████████████████████████████| 200/200 [00:12<00:00, 16.60it/s]
```

For more details on the use of SDPA in `diffusers`, refer to the corresponding [documentation](https://huggingface.co/docs/diffusers/optimization/torch2.0).

## Optimisation 2: Half-Precision

By default, the `AudioLDM2Pipeline` loads the model weights in float32 (full) precision. All the model computations are also performed in float32 precision. For inference, we can safely convert the model weights and computations to float16 (half) precision, which improves inference time and GPU memory usage, with an imperceptible change to generation quality.

We can load the weights in float16 precision by passing the [`torch_dtype`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained.torch_dtype) argument to `.from_pretrained`:

```python
pipe = AudioLDM2Pipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe.to("cuda");
```

Let's run generation in float16 precision and listen to the audio outputs:

```python
audio = pipe(prompt, negative_prompt=negative_prompt, generator=generator.manual_seed(0)).audios[0]

Audio(audio, rate=16000)
```

**Output:**
```
100%|███████████████████████████████████████████| 200/200 [00:09<00:00, 20.94it/s]
```

<audio controls>
  <source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/161_audioldm2/sample_3.wav" type="audio/wav">
Your browser does not support the audio element.
</audio>

The audio quality is largely unchanged from the full precision generation, with an inference speed-up of about 2 seconds. In our experience, we've not seen any significant audio degradation using `diffusers` pipelines with float16 precision, but we consistently reap a substantial inference speed-up. Thus, we recommend using float16 precision by default.
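If you'd like to verify the memory saving on your own hardware, one optional check (not part of the original benchmark) is to read PyTorch's peak-memory counters around a generation, once with the float32 pipeline and once with the float16 one:

```python
torch.cuda.reset_peak_memory_stats()

audio = pipe(prompt, negative_prompt=negative_prompt, generator=generator.manual_seed(0)).audios[0]

# peak GPU memory allocated during the generation, in GiB
print(f"{torch.cuda.max_memory_allocated() / 1024**3:.2f} GiB")
```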
## Optimisation 3: Torch Compile To get an additional speed-up, we can use the new `torch.compile` feature. Since the UNet of the pipeline is usually the most computationally expensive, we wrap the unet with `torch.compile`, leaving the rest of the sub-models (text encoders and VAE) as is: ```python pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) ``` After wrapping the UNet with `torch.compile` the first inference step we run is typically going to be slow, due to the overhead of compiling the forward pass of the UNet. Let's run the pipeline forward with the compilation step get this longer run out of the way. Note that the first inference step might take up to 2 minutes to compile, so be patient! ```python audio = pipe(prompt, negative_prompt=negative_prompt, generator=generator.manual_seed(0)).audios[0] ``` **Output:** ``` 100%|███████████████████████████████████████████| 200/200 [01:23<00:00, 2.39it/s] ``` Great! Now that the UNet is compiled, we can now run the full diffusion process and reap the benefits of faster inference: ```python audio = pipe(prompt, negative_prompt=negative_prompt, generator=generator.manual_seed(0)).audios[0] ``` **Output:** ``` 100%|███████████████████████████████████████████| 200/200 [00:04<00:00, 48.98it/s] ``` Only 4 seconds to generate! In practice, you will only have to compile the UNet once, and then get faster inference for all successive generations. This means that the time taken to compile the model is amortised by the gains in subsequent inference time. For more information and options regarding `torch.compile`, refer to the [torch compile](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html) docs. ## Optimisation 4: Scheduler Another option is to reduce the number of inference steps. Choosing a more efficient scheduler can help decrease the number of steps without sacrificing the output audio quality. You can find which schedulers are compatible with the `AudioLDM2Pipeline` by calling the [`schedulers.compatibles`](https://huggingface.co/docs/diffusers/v0.20.0/en/api/schedulers/overview#diffusers.SchedulerMixin) attribute: ```python pipe.scheduler.compatibles ``` **Output:** ``` [diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler, diffusers.schedulers.scheduling_k_dpm_2_discrete.KDPM2DiscreteScheduler, diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler, diffusers.schedulers.scheduling_unipc_multistep.UniPCMultistepScheduler, diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler, diffusers.schedulers.scheduling_pndm.PNDMScheduler, diffusers.schedulers.scheduling_dpmsolver_singlestep.DPMSolverSinglestepScheduler, diffusers.schedulers.scheduling_heun_discrete.HeunDiscreteScheduler, diffusers.schedulers.scheduling_ddpm.DDPMScheduler, diffusers.schedulers.scheduling_deis_multistep.DEISMultistepScheduler, diffusers.utils.dummy_torch_and_torchsde_objects.DPMSolverSDEScheduler, diffusers.schedulers.scheduling_ddim.DDIMScheduler, diffusers.schedulers.scheduling_k_dpm_2_ancestral_discrete.KDPM2AncestralDiscreteScheduler, diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler] ``` Alright! We've got a long list of schedulers to choose from 📝. By default, AudioLDM 2 uses the [`DDIMScheduler`](https://huggingface.co/docs/diffusers/api/schedulers/ddim), and requires 200 inference steps to get good quality audio generations. 
However, more performant schedulers, like [`DPMSolverMultistepScheduler`](https://huggingface.co/docs/diffusers/main/en/api/schedulers/multistep_dpm_solver#diffusers.DPMSolverMultistepScheduler), require only **20-25 inference steps** to achieve similar results. Let's see how we can switch the AudioLDM 2 scheduler from DDIM to DPM Multistep. We'll use the [`ConfigMixin.from_config()`](https://huggingface.co/docs/diffusers/main/en/api/configuration#diffusers.ConfigMixin.from_config) method to load a [`DPMSolverMultistepScheduler`](https://huggingface.co/docs/diffusers/main/en/api/schedulers/multistep_dpm_solver#diffusers.DPMSolverMultistepScheduler) from the configuration of our original [`DDIMScheduler`](https://huggingface.co/docs/diffusers/api/schedulers/ddim):

```python
from diffusers import DPMSolverMultistepScheduler

pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
```

Let's set the number of inference steps to 20 and re-run the generation with the new scheduler. Since the shape of the LDM latents is unchanged, we don't have to repeat the compilation step:

```python
audio = pipe(prompt, negative_prompt=negative_prompt, num_inference_steps=20, generator=generator.manual_seed(0)).audios[0]
```

**Output:**
```
100%|███████████████████████████████████████████| 20/20 [00:00<00:00, 49.14it/s]
```

That took less than **1 second** to generate the audio! Let's have a listen to the resulting generation:

```python
Audio(audio, rate=16000)
```

<audio controls>
  <source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/161_audioldm2/sample_4.wav" type="audio/wav">
Your browser does not support the audio element.
</audio>

More or less the same as our original audio sample, but only a fraction of the generation time! 🧨 Diffusers pipelines are designed to be *composable*, allowing you to swap out schedulers and other components for more performant counterparts with ease.

## What about memory?

The length of the audio sample we want to generate dictates the *width* of the latent variables we de-noise in the LDM. Since the memory of the cross-attention layers in the UNet scales with sequence length (width) squared, generating very long audio samples might lead to out-of-memory errors. Our batch size also governs our memory usage, controlling the number of samples that we generate.

We've already mentioned that loading the model in float16 (half) precision gives strong memory savings. Using PyTorch 2.0 SDPA also gives a memory improvement, but this might not be sufficient for extremely large sequence lengths.

Let's try generating an audio sample 2.5 minutes (150 seconds) in duration. We'll also generate 4 candidate audios by setting [`num_waveforms_per_prompt`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/audioldm2#diffusers.AudioLDM2Pipeline.__call__.num_waveforms_per_prompt)`=4`. Once [`num_waveforms_per_prompt`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/audioldm2#diffusers.AudioLDM2Pipeline.__call__.num_waveforms_per_prompt)`>1`, automatic scoring is performed between the generated audios and the text prompt: the audios and text prompts are embedded in the CLAP audio-text embedding space, and then ranked based on their cosine similarity scores. We can access the 'best' waveform as the one in position `0`.

Since we've changed the width of the latent variables in the UNet, we'll have to perform another torch compilation step with the new latent variable shapes.
In the interest of time, we'll re-load the pipeline without torch compile, such that we're not hit with a lengthy compilation step first up: ```python pipe = AudioLDM2Pipeline.from_pretrained(model_id, torch_dtype=torch.float16) pipe.to("cuda") audio = pipe(prompt, negative_prompt=negative_prompt, num_waveforms_per_prompt=4, audio_length_in_s=150, num_inference_steps=20, generator=generator.manual_seed(0)).audios[0] ``` **Output:** ``` --------------------------------------------------------------------------- OutOfMemoryError Traceback (most recent call last) <ipython-input-33-c4cae6410ff5> in <cell line: 5>() 3 pipe.to("cuda") 4 ----> 5 audio = pipe(prompt, negative_prompt=negative_prompt, num_waveforms_per_prompt=4, audio_length_in_s=150, num_inference_steps=20, generator=generator.manual_seed(0)).audios[0] 23 frames /usr/local/lib/python3.10/dist-packages/torch/nn/modules/linear.py in forward(self, input) 112 113 def forward(self, input: Tensor) -> Tensor: --> 114 return F.linear(input, self.weight, self.bias) 115 116 def extra_repr(self) -> str: OutOfMemoryError: CUDA out of memory. Tried to allocate 1.95 GiB. GPU 0 has a total capacty of 14.75 GiB of which 1.66 GiB is free. Process 414660 has 13.09 GiB memory in use. Of the allocated memory 10.09 GiB is allocated by PyTorch, and 1.92 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF ``` Unless you have a GPU with high RAM, the code above probably returned an OOM error. While the AudioLDM 2 pipeline involves several components, only the model being used has to be on the GPU at any one time. The remainder of the modules can be offloaded to the CPU. This technique, called *CPU offload*, can reduce memory usage, with a very low penalty to inference time. We can enable CPU offload on our pipeline with the function [enable_model_cpu_offload()](https://huggingface.co/docs/diffusers/main/en/api/pipelines/audioldm2#diffusers.AudioLDM2Pipeline.enable_model_cpu_offload): ```python pipe.enable_model_cpu_offload() ``` Running generation with CPU offload is then the same as before: ```python audio = pipe(prompt, negative_prompt=negative_prompt, num_waveforms_per_prompt=4, audio_length_in_s=150, num_inference_steps=20, generator=generator.manual_seed(0)).audios[0] ``` **Output:** ``` 100%|███████████████████████████████████████████| 20/20 [00:36<00:00, 1.82s/it] ``` And with that, we can generate four samples, each of 150 seconds in duration, all in one call to the pipeline! Using the large AudioLDM 2 checkpoint will result in higher overall memory usage than the base checkpoint, since the UNet is over twice the size (750M parameters compared to 350M), so this memory saving trick is particularly beneficial here. ## Conclusion In this blog post, we showcased four optimisation methods that are available out of the box with 🧨 Diffusers, taking the generation time of AudioLDM 2 from 14 seconds down to less than 1 second. We also highlighted how to employ memory saving tricks, such as half-precision and CPU offload, to reduce peak memory usage for long audio samples or large checkpoint sizes. Blog post by [Sanchit Gandhi](https://huggingface.co/sanchit-gandhi). Many thanks to [Vaibhav Srivastav](https://huggingface.co/reach-vb) and [Sayak Paul](https://huggingface.co/sayakpaul) for their constructive comments. 
Spectrogram image source: [Getting to Know the Mel Spectrogram](https://towardsdatascience.com/getting-to-know-the-mel-spectrogram-31bca3e2d9d0). Waveform image source: [Aalto Speech Processing](https://speechprocessingbook.aalto.fi/Representations/Waveform.html).
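As a recap, here is a condensed sketch that combines the main optimisations covered in this post (float16 weights, SDPA attention under PyTorch 2.0, the DPM-Solver multistep scheduler with 20 inference steps, and an optionally compiled UNet). It simply stitches together the code shown above and is not an additional benchmark:

```python
import torch
from diffusers import AudioLDM2Pipeline, DPMSolverMultistepScheduler

pipe = AudioLDM2Pipeline.from_pretrained("cvssp/audioldm2", torch_dtype=torch.float16).to("cuda")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

# optional: compile the UNet (the first call will be slow while compilation happens)
# pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)

generator = torch.Generator("cuda").manual_seed(0)
prompt = "The sound of Brazilian samba drums with waves gently crashing in the background"
negative_prompt = "Low quality, average quality."

audio = pipe(
    prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=20,
    generator=generator,
).audios[0]
```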
huggingface/blog/blob/main/audioldm2.md
@gradio/dataset ## 0.1.13 ### Patch Changes - Updated dependencies [[`828fb9e`](https://github.com/gradio-app/gradio/commit/828fb9e6ce15b6ea08318675a2361117596a1b5d), [`73268ee`](https://github.com/gradio-app/gradio/commit/73268ee2e39f23ebdd1e927cb49b8d79c4b9a144)]: - @gradio/[email protected] - @gradio/[email protected] - @gradio/[email protected] ## 0.1.12 ### Patch Changes - Updated dependencies [[`245d58e`](https://github.com/gradio-app/gradio/commit/245d58eff788e8d44a59d37a2d9b26d0f08a62b4)]: - @gradio/[email protected] - @gradio/[email protected] ## 0.1.11 ### Patch Changes - Updated dependencies [[`5d51fbc`](https://github.com/gradio-app/gradio/commit/5d51fbce7826da840a2fd4940feb5d9ad6f1bc5a), [`34f9431`](https://github.com/gradio-app/gradio/commit/34f943101bf7dd6b8a8974a6131c1ed7c4a0dac0)]: - @gradio/[email protected] - @gradio/[email protected] ## 0.1.10 ### Patch Changes - Updated dependencies [[`6a9151d`](https://github.com/gradio-app/gradio/commit/6a9151d5c9432c724098da7d88a539aaaf5ffe88), [`d76bcaa`](https://github.com/gradio-app/gradio/commit/d76bcaaaf0734aaf49a680f94ea9d4d22a602e70), [`67ddd40`](https://github.com/gradio-app/gradio/commit/67ddd40b4b70d3a37cb1637c33620f8d197dbee0), [`4d1cbbc`](https://github.com/gradio-app/gradio/commit/4d1cbbcf30833ef1de2d2d2710c7492a379a9a00)]: - @gradio/[email protected] - @gradio/[email protected] - @gradio/[email protected] ## 0.1.9 ### Patch Changes - Updated dependencies []: - @gradio/[email protected] - @gradio/[email protected] ## 0.1.8 ### Patch Changes - Updated dependencies [[`71f1a1f99`](https://github.com/gradio-app/gradio/commit/71f1a1f9931489d465c2c1302a5c8d768a3cd23a)]: - @gradio/[email protected] - @gradio/[email protected] ## 0.1.7 ### Patch Changes - Updated dependencies [[`9caddc17b`](https://github.com/gradio-app/gradio/commit/9caddc17b1dea8da1af8ba724c6a5eab04ce0ed8)]: - @gradio/[email protected] - @gradio/[email protected] ## 0.1.6 ### Patch Changes - Updated dependencies [[`2f805a7dd`](https://github.com/gradio-app/gradio/commit/2f805a7dd3d2b64b098f659dadd5d01258290521), [`f816136a0`](https://github.com/gradio-app/gradio/commit/f816136a039fa6011be9c4fb14f573e4050a681a)]: - @gradio/[email protected] - @gradio/[email protected] ## 0.1.5 ### Patch Changes - Updated dependencies [[`324867f63`](https://github.com/gradio-app/gradio/commit/324867f63c920113d89a565892aa596cf8b1e486)]: - @gradio/[email protected] - @gradio/[email protected] ## 0.1.4 ### Patch Changes - Updated dependencies [[`854b482f5`](https://github.com/gradio-app/gradio/commit/854b482f598e0dc47673846631643c079576da9c), [`f1409f95e`](https://github.com/gradio-app/gradio/commit/f1409f95ed39c5565bed6a601e41f94e30196a57)]: - @gradio/[email protected] - @gradio/[email protected] ## 0.1.3 ### Patch Changes - Updated dependencies [[`bca6c2c80`](https://github.com/gradio-app/gradio/commit/bca6c2c80f7e5062427019de45c282238388af95), [`3cdeabc68`](https://github.com/gradio-app/gradio/commit/3cdeabc6843000310e1a9e1d17190ecbf3bbc780), [`fad92c29d`](https://github.com/gradio-app/gradio/commit/fad92c29dc1f5cd84341aae417c495b33e01245f)]: - @gradio/[email protected] - @gradio/[email protected] - @gradio/[email protected] ## 0.1.2 ### Patch Changes - Updated dependencies [[`aaa55ce85`](https://github.com/gradio-app/gradio/commit/aaa55ce85e12f95aba9299445e9c5e59824da18e)]: - @gradio/[email protected] ## 0.1.1 ### Patch Changes - Updated dependencies [[`2ba14b284`](https://github.com/gradio-app/gradio/commit/2ba14b284f908aa13859f4337167a157075a68eb)]: - @gradio/[email 
protected] - @gradio/[email protected] ## 0.1.0 ### Features - [#5498](https://github.com/gradio-app/gradio/pull/5498) [`287fe6782`](https://github.com/gradio-app/gradio/commit/287fe6782825479513e79a5cf0ba0fbfe51443d7) - fix circular dependency with client + upload. Thanks [@pngwn](https://github.com/pngwn)! - [#5498](https://github.com/gradio-app/gradio/pull/5498) [`287fe6782`](https://github.com/gradio-app/gradio/commit/287fe6782825479513e79a5cf0ba0fbfe51443d7) - pass props to example components and to example outputs. Thanks [@pngwn](https://github.com/pngwn)! - [#5498](https://github.com/gradio-app/gradio/pull/5498) [`287fe6782`](https://github.com/gradio-app/gradio/commit/287fe6782825479513e79a5cf0ba0fbfe51443d7) - Clean root url. Thanks [@pngwn](https://github.com/pngwn)! - [#5498](https://github.com/gradio-app/gradio/pull/5498) [`287fe6782`](https://github.com/gradio-app/gradio/commit/287fe6782825479513e79a5cf0ba0fbfe51443d7) - Adds the ability to build the frontend and backend of custom components in preparation for publishing to pypi using `gradio_component build`. Thanks [@pngwn](https://github.com/pngwn)! ## 0.1.0-beta.2 ### Features - [#6143](https://github.com/gradio-app/gradio/pull/6143) [`e4f7b4b40`](https://github.com/gradio-app/gradio/commit/e4f7b4b409323b01aa01b39e15ce6139e29aa073) - fix circular dependency with client + upload. Thanks [@pngwn](https://github.com/pngwn)! ## 0.1.0-beta.1 ### Features - [#6016](https://github.com/gradio-app/gradio/pull/6016) [`83e947676`](https://github.com/gradio-app/gradio/commit/83e947676d327ca2ab6ae2a2d710c78961c771a0) - Format js in v4 branch. Thanks [@freddyaboulton](https://github.com/freddyaboulton)! - [#6014](https://github.com/gradio-app/gradio/pull/6014) [`cad537aac`](https://github.com/gradio-app/gradio/commit/cad537aac57998560c9f44a37499be734de66349) - pass props to example components and to example outputs. Thanks [@pngwn](https://github.com/pngwn)! ## 0.1.0-beta.0 ### Features - [#5960](https://github.com/gradio-app/gradio/pull/5960) [`319c30f3f`](https://github.com/gradio-app/gradio/commit/319c30f3fccf23bfe1da6c9b132a6a99d59652f7) - rererefactor frontend files. Thanks [@pngwn](https://github.com/pngwn)! - [#5498](https://github.com/gradio-app/gradio/pull/5498) [`85ba6de13`](https://github.com/gradio-app/gradio/commit/85ba6de136a45b3e92c74e410bb27e3cbe7138d7) - Adds the ability to build the frontend and backend of custom components in preparation for publishing to pypi using `gradio_component build`. Thanks [@pngwn](https://github.com/pngwn)! # @gradio/box ## 0.0.5-beta.4 ### Features - [#5648](https://github.com/gradio-app/gradio/pull/5648) [`c573e2339`](https://github.com/gradio-app/gradio/commit/c573e2339b86c85b378dc349de5e9223a3c3b04a) - Publish all components to npm. Thanks [@freddyaboulton](https://github.com/freddyaboulton)! 
## 0.0.5-beta.3 ### Patch Changes - Updated dependencies []: - @gradio/[email protected] ## 0.0.5-beta.2 ### Patch Changes - Updated dependencies []: - @gradio/[email protected] ## 0.0.5-beta.1 ### Patch Changes - Updated dependencies []: - @gradio/[email protected] ## 0.0.5-beta.0 ### Patch Changes - Updated dependencies [[`681f10c31`](https://github.com/gradio-app/gradio/commit/681f10c315a75cc8cd0473c9a0167961af7696db), [`1385dc688`](https://github.com/gradio-app/gradio/commit/1385dc6881f2d8ae7a41106ec21d33e2ef04d6a9)]: - @gradio/[email protected] ## 0.0.4 ### Patch Changes - Updated dependencies []: - @gradio/[email protected] ## 0.0.3 ### Patch Changes - Updated dependencies []: - @gradio/[email protected] ## 0.0.2 ### Features - [#5215](https://github.com/gradio-app/gradio/pull/5215) [`fbdad78a`](https://github.com/gradio-app/gradio/commit/fbdad78af4c47454cbb570f88cc14bf4479bbceb) - Lazy load interactive or static variants of a component individually, rather than loading both variants regardless. This change will improve performance for many applications. Thanks [@pngwn](https://github.com/pngwn)!
gradio-app/gradio/blob/main/js/dataset/CHANGELOG.md
Stable Diffusion Text-to-Image Fine-Tuning

This example shows how to leverage ONNX Runtime Training to fine-tune a Stable Diffusion model on your own dataset.

Our team has tested fine-tuning the `CompVis/stable-diffusion-v1-4` model on the `lambdalabs/pokemon-blip-captions` dataset and achieved the following speedup:

![image](https://github.com/microsoft/onnxruntime-training-examples/assets/31260940/00f199b1-3a84-4369-924d-fd6c613bd3b4)

___Note___: ___This script is experimental. The script fine-tunes the whole model, and oftentimes the model overfits and runs into issues like catastrophic forgetting. It's recommended to try different hyperparameters to get the best result on your dataset.___

## Running locally with PyTorch

### Installing the dependencies

___Note___: This example requires PyTorch nightly and [ONNX Runtime](https://github.com/Microsoft/onnxruntime) nightly

```bash
pip install --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/cu118
pip install onnxruntime-training --pre -f https://download.onnxruntime.ai/onnxruntime_nightly_cu118.html
python -m onnxruntime.training.ortmodule.torch_cpp_extensions.install
```

Or get your environment ready via Docker: [examples/onnxruntime/training/docker/Dockerfile-ort-nightly-cu118](https://github.com/huggingface/optimum/blob/main/examples/onnxruntime/training/docker/Dockerfile-ort-nightly-cu118)

Then, cd into the example folder and run

```bash
pip install -r requirements.txt
```

And initialize an [🤗Accelerate](https://github.com/huggingface/accelerate/) environment with:

```bash
accelerate config
```

### Pokemon example

You have to be a registered user on the 🤗 Hugging Face Hub, and you'll also need to use an access token for the code to work. For more information on access tokens, please refer to [this section of the documentation](https://huggingface.co/docs/hub/security-tokens).

Run the following command to authenticate your token:

```bash
huggingface-cli login
```

If you have already cloned the repo, then you won't need to go through these steps.

<br>

#### Hardware

The performance metrics cited above were measured on an NVIDIA V100, 8-GPU cluster.

**___Note: Change the `resolution` to 768 if you are using the [stable-diffusion-2](https://huggingface.co/stabilityai/stable-diffusion-2) 768x768 model.___**

<!-- accelerate_snippet_start -->
```bash
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export dataset_name="lambdalabs/pokemon-blip-captions"

accelerate launch --mixed_precision="fp16" train_text_to_image.py --ort \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --dataset_name=$dataset_name \
  --use_ema \
  --resolution=512 --center_crop --random_flip \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --gradient_checkpointing \
  --max_train_steps=15000 \
  --learning_rate=1e-05 \
  --max_grad_norm=1 \
  --lr_scheduler="constant" --lr_warmup_steps=0 \
  --output_dir="sd-pokemon-model"
```
<!-- accelerate_snippet_end -->

To run on your own training files, prepare the dataset according to the format required by `datasets`. You can find the instructions for how to do that in this [document](https://huggingface.co/docs/datasets/v2.4.0/en/image_load#imagefolder-with-metadata). If you wish to use custom loading logic, you should modify the script; we have left pointers for that in the training script.
```bash export MODEL_NAME="CompVis/stable-diffusion-v1-4" export TRAIN_DIR="path_to_your_dataset" accelerate launch --mixed_precision="fp16" train_text_to_image.py --ort \ --pretrained_model_name_or_path=$MODEL_NAME \ --train_data_dir=$TRAIN_DIR \ --use_ema \ --resolution=512 --center_crop --random_flip \ --train_batch_size=1 \ --gradient_accumulation_steps=4 \ --gradient_checkpointing \ --max_train_steps=15000 \ --learning_rate=1e-05 \ --max_grad_norm=1 \ --lr_scheduler="constant" --lr_warmup_steps=0 \ --output_dir="sd-pokemon-model" ``` Once the training is finished the model will be saved in the `output_dir` specified in the command. In this example it's `sd-pokemon-model`. To load the fine-tuned model for inference just pass that path to `ORTStableDiffusionPipeline` ```python from optimum.onnxruntime import ORTStableDiffusionPipeline model_path = "path_to_saved_model" pipe = ORTStableDiffusionPipeline.from_pretrained(model_path, torch_dtype=torch.float16) pipe.to("cuda") image = pipe(prompt="yoda").images[0] image.save("yoda-pokemon.png") ``` #### Training with multiple GPUs `accelerate` allows for seamless multi-GPU training. Follow the instructions [here](https://huggingface.co/docs/accelerate/basic_tutorials/launch) for running distributed training with `accelerate`. Here is an example command: ```bash export MODEL_NAME="CompVis/stable-diffusion-v1-4" export dataset_name="lambdalabs/pokemon-blip-captions" accelerate launch --mixed_precision="fp16" --multi_gpu train_text_to_image.py --ort \ --pretrained_model_name_or_path=$MODEL_NAME \ --dataset_name=$dataset_name \ --use_ema \ --resolution=512 --center_crop --random_flip \ --train_batch_size=1 \ --gradient_accumulation_steps=4 \ --gradient_checkpointing \ --max_train_steps=15000 \ --learning_rate=1e-05 \ --max_grad_norm=1 \ --lr_scheduler="constant" --lr_warmup_steps=0 \ --output_dir="sd-pokemon-model" ```
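If you point `--train_data_dir` at your own image folder, an optional sanity check before launching a long training run is to load it with `datasets` directly, the same way the script will; this snippet is illustrative and assumes your captions are stored in a `metadata.jsonl` file as described in the document linked above:

```python
from datasets import load_dataset

# loads the images plus the caption column defined in metadata.jsonl
dataset = load_dataset("imagefolder", data_dir="path_to_your_dataset", split="train")
print(dataset[0])  # expects an "image" column plus the caption column
```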
huggingface/optimum/blob/main/examples/onnxruntime/training/stable-diffusion/text-to-image/README.md
-- title: "Accelerating Stable Diffusion Inference on Intel CPUs" thumbnail: /blog/assets/136_stable_diffusion_inference_intel/01.png authors: - user: juliensimon - user: echarlaix --- # Accelerating Stable Diffusion Inference on Intel CPUs Recently, we introduced the latest generation of [Intel Xeon](https://www.intel.com/content/www/us/en/products/details/processors/xeon/scalable.html) CPUs (code name Sapphire Rapids), its new hardware features for deep learning acceleration, and how to use them to accelerate [distributed fine-tuning](https://huggingface.co/blog/intel-sapphire-rapids) and [inference](https://huggingface.co/blog/intel-sapphire-rapids-inference) for natural language processing Transformers. In this post, we're going to show you different techniques to accelerate Stable Diffusion models on Sapphire Rapids CPUs. A follow-up post will do the same for distributed fine-tuning. At the time of writing, the simplest way to get your hands on a Sapphire Rapids server is to use the Amazon EC2 [R7iz](https://aws.amazon.com/ec2/instance-types/r7iz/) instance family. As it's still in preview, you have to [sign up](https://pages.awscloud.com/R7iz-Preview.html) to get access. Like in previous posts, I'm using an `r7iz.metal-16xl` instance (64 vCPU, 512GB RAM) with an Ubuntu 20.04 AMI (`ami-07cd3e6c4915b2d18`). Let's get started! Code samples are available on [Gitlab](https://gitlab.com/juliensimon/huggingface-demos/-/tree/main/optimum/stable_diffusion_intel). ## The Diffusers library The [Diffusers](https://huggingface.co/docs/diffusers/index) library makes it extremely simple to generate images with Stable Diffusion models. If you're not familiar with these models, here's a great [illustrated introduction](https://jalammar.github.io/illustrated-stable-diffusion/). First, let's create a virtual environment with the required libraries: Transformers, Diffusers, Accelerate, and PyTorch. ``` virtualenv sd_inference source sd_inference/bin/activate pip install pip --upgrade pip install transformers diffusers accelerate torch==1.13.1 ``` Then, we write a simple benchmarking function that repeatedly runs inference, and returns the average latency for a single-image generation. ```python import time def elapsed_time(pipeline, prompt, nb_pass=10, num_inference_steps=20): # warmup images = pipeline(prompt, num_inference_steps=10).images start = time.time() for _ in range(nb_pass): _ = pipeline(prompt, num_inference_steps=num_inference_steps, output_type="np") end = time.time() return (end - start) / nb_pass ``` Now, let's build a `StableDiffusionPipeline` with the default `float32` data type, and measure its inference latency. ```python from diffusers import StableDiffusionPipeline model_id = "runwayml/stable-diffusion-v1-5" pipe = StableDiffusionPipeline.from_pretrained(model_id) prompt = "sailing ship in storm by Rembrandt" latency = elapsed_time(pipe, prompt) print(latency) ``` The average latency is **32.3 seconds**. As demonstrated by this [Intel Space](https://huggingface.co/spaces/Intel/Stable-Diffusion-Side-by-Side), the same code runs on a previous generation Intel Xeon (code name Ice Lake) in about 45 seconds. Out of the box, we can see that Sapphire Rapids CPUs are quite faster without any code change! Now, let's accelerate! ## Optimum Intel and OpenVINO [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) accelerates end-to-end pipelines on Intel architectures. 
Its API is extremely similar to the vanilla [Diffusers](https://huggingface.co/docs/diffusers/index) API, making it trivial to adapt existing code. Optimum Intel supports [OpenVINO](https://docs.openvino.ai/latest/index.html), an Intel open-source toolkit for high-performance inference. Optimum Intel and OpenVINO can be installed as follows: ``` pip install optimum[openvino] ``` Starting from the code above, we only need to replace `StableDiffusionPipeline` with `OVStableDiffusionPipeline`. To load a PyTorch model and convert it to the OpenVINO format on-the-fly, you can set `export=True` when loading your model. ```python from optimum.intel.openvino import OVStableDiffusionPipeline ... ov_pipe = OVStableDiffusionPipeline.from_pretrained(model_id, export=True) latency = elapsed_time(ov_pipe, prompt) print(latency) # Don't forget to save the exported model ov_pipe.save_pretrained("./openvino") ``` OpenVINO automatically optimizes the model for the `bfloat16` format. Thanks to this, the average latency is now **16.7 seconds**, a sweet 2x speedup. The pipeline above support dynamic input shapes, with no restriction on the number of images or their resolution. With Stable Diffusion, your application is usually restricted to one (or a few) different output resolutions, such as 512x512, or 256x256. Thus, it makes a lot of sense to unlock significant acceleration by reshaping the pipeline to a fixed resolution. If you need more than one output resolution, you can simply maintain a few pipeline instances, one for each resolution. ```python ov_pipe.reshape(batch_size=1, height=512, width=512, num_images_per_prompt=1) latency = elapsed_time(ov_pipe, prompt) ``` With a static shape, average latency is slashed to **4.7 seconds**, an additional 3.5x speedup. As you can see, OpenVINO is a simple and efficient way to accelerate Stable Diffusion inference. When combined with a Sapphire Rapids CPU, it delivers almost 10x speedup compared to vanilla inference on Ice Lake Xeons. If you can't or don't want to use OpenVINO, the rest of this post will show you a series of other optimization techniques. Fasten your seatbelt! ## System-level optimization Diffuser models are large multi-gigabyte models, and image generation is a memory-intensive operation. By installing a high-performance memory allocation library, we should be able to speed up memory operations and parallelize them across the Xeon cores. Please note that this will change the default memory allocation library on your system. Of course, you can go back to the default library by uninstalling the new one. [jemalloc](https://jemalloc.net/) and [tcmalloc](https://github.com/gperftools/gperftools) are equally interesting. Here, I'm installing `jemalloc` as my tests give it a slight performance edge. It can also be tweaked for a particular workload, for example to maximize CPU utilization. You can refer to the [tuning guide](https://github.com/jemalloc/jemalloc/blob/dev/TUNING.md) for details. ``` sudo apt-get install -y libjemalloc-dev export LD_PRELOAD=$LD_PRELOAD:/usr/lib/x86_64-linux-gnu/libjemalloc.so export MALLOC_CONF="oversize_threshold:1,background_thread:true,metadata_thp:auto,dirty_decay_ms: 60000,muzzy_decay_ms:60000" ``` Next, we install the `libiomp` library to optimize parallel processing. It's part of [Intel OpenMP* Runtime](https://www.intel.com/content/www/us/en/docs/cpp-compiler/developer-guide-reference/2021-8/openmp-run-time-library-routines.html). 
```
sudo apt-get install intel-mkl
export LD_PRELOAD=$LD_PRELOAD:/usr/lib/x86_64-linux-gnu/libiomp5.so
export OMP_NUM_THREADS=32
```

Finally, we install the [numactl](https://github.com/numactl/numactl) command line tool. This lets us pin our Python process to specific cores, and avoid some of the overhead related to context switching.

```
numactl -C 0-31 python sd_blog_1.py
```

Thanks to these optimizations, our original Diffusers code now predicts in **11.8 seconds**. That's almost 3x faster, without any code change. These tools are certainly working great on our 32-core Xeon.

We're far from done. Let's add the Intel Extension for PyTorch to the mix.

## IPEX and BF16

The [Intel Extension for PyTorch](https://intel.github.io/intel-extension-for-pytorch/) (IPEX) extends PyTorch and takes advantage of hardware acceleration features present on Intel CPUs, such as [AVX-512](https://en.wikipedia.org/wiki/AVX-512) Vector Neural Network Instructions (AVX512 VNNI) and [Advanced Matrix Extensions](https://en.wikipedia.org/wiki/Advanced_Matrix_Extensions) (AMX).

Let's install it.

```
pip install intel_extension_for_pytorch==1.13.100
```

We then update our code to optimize each pipeline element with IPEX (you can list them by printing the `pipe` object). This requires converting them to the channels-last format.

```python
import torch
import intel_extension_for_pytorch as ipex

...

pipe = StableDiffusionPipeline.from_pretrained(model_id)

# to channels last
pipe.unet = pipe.unet.to(memory_format=torch.channels_last)
pipe.vae = pipe.vae.to(memory_format=torch.channels_last)
pipe.text_encoder = pipe.text_encoder.to(memory_format=torch.channels_last)
pipe.safety_checker = pipe.safety_checker.to(memory_format=torch.channels_last)

# Create random input to enable JIT compilation
sample = torch.randn(2, 4, 64, 64)
timestep = torch.rand(1) * 999
encoder_hidden_states = torch.randn(2, 77, 768)
input_example = (sample, timestep, encoder_hidden_states)

# optimize with IPEX
pipe.unet = ipex.optimize(pipe.unet.eval(), dtype=torch.bfloat16, inplace=True, sample_input=input_example)
pipe.vae = ipex.optimize(pipe.vae.eval(), dtype=torch.bfloat16, inplace=True)
pipe.text_encoder = ipex.optimize(pipe.text_encoder.eval(), dtype=torch.bfloat16, inplace=True)
pipe.safety_checker = ipex.optimize(pipe.safety_checker.eval(), dtype=torch.bfloat16, inplace=True)
```

We also enable the `bfloat16` data format to leverage the AMX tile matrix multiply unit (TMMU) accelerator present on Sapphire Rapids CPUs.

```python
with torch.cpu.amp.autocast(enabled=True, dtype=torch.bfloat16):
    latency = elapsed_time(pipe, prompt)
    print(latency)
```

With this updated version, inference latency is further reduced from 11.8 seconds to **5.4 seconds**. That's more than 2x acceleration thanks to IPEX and AMX.

Can we extract a bit more performance? Yes, with schedulers!

## Schedulers

The Diffusers library lets us attach a [scheduler](https://huggingface.co/docs/diffusers/using-diffusers/schedulers) to a Stable Diffusion pipeline. Schedulers try to find the best trade-off between denoising speed and denoising quality.

According to the documentation: "*At the time of writing this doc DPMSolverMultistepScheduler gives arguably the best speed/quality trade-off and can be run with as little as 20 steps.*"

Let's try it.

```python
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

...
dpm = DPMSolverMultistepScheduler.from_pretrained(model_id, subfolder="scheduler")

pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=dpm)
```

With this final version, inference latency is now down to **5.05 seconds**. Compared to our initial Sapphire Rapids baseline (32.3 seconds), this is almost 6.5x faster!

<kbd>
  <img src="assets/136_stable_diffusion_inference_intel/01.png">
</kbd>

*Environment: Amazon EC2 r7iz.metal-16xl, Ubuntu 20.04, Linux 5.15.0-1031-aws, libjemalloc-dev 5.2.1-1, intel-mkl 2020.0.166-1, PyTorch 1.13.1, Intel Extension for PyTorch 1.13.1, transformers 4.27.2, diffusers 0.14, accelerate 0.17.1, openvino 2023.0.0.dev20230217, optimum 1.7.1, optimum-intel 1.7*

## Conclusion

The ability to generate high-quality images in seconds should work well for a lot of use cases, such as customer apps, content generation for marketing and media, or synthetic data for dataset augmentation.

Here are some resources to help you get started:

* Diffusers [documentation](https://huggingface.co/docs/diffusers)
* Optimum Intel [documentation](https://huggingface.co/docs/optimum/main/en/intel/inference)
* [Intel IPEX](https://github.com/intel/intel-extension-for-pytorch) on GitHub
* [Developer resources](https://www.intel.com/content/www/us/en/developer/partner/hugging-face.html) from Intel and Hugging Face.

If you have questions or feedback, we'd love to read them on the [Hugging Face forum](https://discuss.huggingface.co/).

Thanks for reading!
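For reference, here is a consolidated sketch that stitches the scheduler, channels-last, IPEX and bfloat16 steps from this post into one script. It is a recap of the snippets shown earlier rather than additional optimizations, and it assumes the same model, prompt and library versions as above; run it with the jemalloc/numactl setup described earlier to approach the reported latencies.

```python
import time

import torch
import intel_extension_for_pytorch as ipex
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

model_id = "runwayml/stable-diffusion-v1-5"
prompt = "sailing ship in storm by Rembrandt"

# DPM++ scheduler, so 20 denoising steps are enough
dpm = DPMSolverMultistepScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=dpm)

# Channels-last memory format for every pipeline component
pipe.unet = pipe.unet.to(memory_format=torch.channels_last)
pipe.vae = pipe.vae.to(memory_format=torch.channels_last)
pipe.text_encoder = pipe.text_encoder.to(memory_format=torch.channels_last)
pipe.safety_checker = pipe.safety_checker.to(memory_format=torch.channels_last)

# Sample input so IPEX can JIT-compile the UNet
sample = torch.randn(2, 4, 64, 64)
timestep = torch.rand(1) * 999
encoder_hidden_states = torch.randn(2, 77, 768)
input_example = (sample, timestep, encoder_hidden_states)

# IPEX optimization in bfloat16
pipe.unet = ipex.optimize(pipe.unet.eval(), dtype=torch.bfloat16, inplace=True, sample_input=input_example)
pipe.vae = ipex.optimize(pipe.vae.eval(), dtype=torch.bfloat16, inplace=True)
pipe.text_encoder = ipex.optimize(pipe.text_encoder.eval(), dtype=torch.bfloat16, inplace=True)
pipe.safety_checker = ipex.optimize(pipe.safety_checker.eval(), dtype=torch.bfloat16, inplace=True)

# Generate one image under bfloat16 autocast and time it
with torch.cpu.amp.autocast(enabled=True, dtype=torch.bfloat16):
    start = time.time()
    image = pipe(prompt, num_inference_steps=20).images[0]
    print(f"{time.time() - start:.1f} seconds")

image.save("sailing_ship.png")
```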
huggingface/blog/blob/main/stable-diffusion-inference-intel.md
Gradio Demo: blocks_kinematics ``` !pip install -q gradio ``` ``` import pandas as pd import numpy as np import gradio as gr def plot(v, a): g = 9.81 theta = a / 180 * 3.14 tmax = ((2 * v) * np.sin(theta)) / g timemat = tmax * np.linspace(0, 1, 40) x = (v * timemat) * np.cos(theta) y = ((v * timemat) * np.sin(theta)) - ((0.5 * g) * (timemat**2)) df = pd.DataFrame({"x": x, "y": y}) return df demo = gr.Blocks() with demo: gr.Markdown( r"Let's do some kinematics! Choose the speed and angle to see the trajectory. Remember that the range $R = v_0^2 \cdot \frac{\sin(2\theta)}{g}$" ) with gr.Row(): speed = gr.Slider(1, 30, 25, label="Speed") angle = gr.Slider(0, 90, 45, label="Angle") output = gr.LinePlot( x="x", y="y", overlay_point=True, tooltip=["x", "y"], x_lim=[0, 100], y_lim=[0, 60], width=350, height=300, ) btn = gr.Button(value="Run") btn.click(plot, [speed, angle], output) if __name__ == "__main__": demo.launch() ```
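As a quick standalone check of the physics quoted in the demo's Markdown (not part of the original notebook), the endpoint of the plotted trajectory should match the closed-form range R = v0² · sin(2θ)/g. The sketch below uses the default slider values and `np.radians` rather than the demo's 3.14 approximation of pi, so the numbers differ from the plot by a hair:

```python
import numpy as np

# Sanity check: the horizontal distance travelled at t_max should match
# the closed-form range R = v0^2 * sin(2*theta) / g from the Markdown above.
g = 9.81
v, a = 25.0, 45.0                      # default slider values: speed (m/s), angle (deg)
theta = np.radians(a)

tmax = 2 * v * np.sin(theta) / g       # time of flight, as computed in plot()
x_end = v * tmax * np.cos(theta)       # last x value of the plotted trajectory
R = v**2 * np.sin(2 * theta) / g       # closed-form range

print(round(x_end, 2), round(R, 2))    # both ~63.71 m
```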
gradio-app/gradio/blob/main/demo/blocks_kinematics/run.ipynb
PNASNet **Progressive Neural Architecture Search**, or **PNAS**, is a method for learning the structure of convolutional neural networks (CNNs). It uses a sequential model-based optimization (SMBO) strategy, where we search the space of cell structures, starting with simple (shallow) models and progressing to complex ones, pruning out unpromising structures as we go. ## How do I use this model on an image? To load a pretrained model: ```python import timm model = timm.create_model('pnasnet5large', pretrained=True) model.eval() ``` To load and preprocess the image: ```python import urllib from PIL import Image from timm.data import resolve_data_config from timm.data.transforms_factory import create_transform config = resolve_data_config({}, model=model) transform = create_transform(**config) url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg") urllib.request.urlretrieve(url, filename) img = Image.open(filename).convert('RGB') tensor = transform(img).unsqueeze(0) # transform and add batch dimension ``` To get the model predictions: ```python import torch with torch.no_grad(): out = model(tensor) probabilities = torch.nn.functional.softmax(out[0], dim=0) print(probabilities.shape) # prints: torch.Size([1000]) ``` To get the top-5 predictions class names: ```python # Get imagenet class mappings url, filename = ("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt", "imagenet_classes.txt") urllib.request.urlretrieve(url, filename) with open("imagenet_classes.txt", "r") as f: categories = [s.strip() for s in f.readlines()] # Print top categories per image top5_prob, top5_catid = torch.topk(probabilities, 5) for i in range(top5_prob.size(0)): print(categories[top5_catid[i]], top5_prob[i].item()) # prints class names and probabilities like: # [('Samoyed', 0.6425196528434753), ('Pomeranian', 0.04062102362513542), ('keeshond', 0.03186424449086189), ('white wolf', 0.01739676296710968), ('Eskimo dog', 0.011717947199940681)] ``` Replace the model name with the variant you want to use, e.g. `pnasnet5large`. You can find the IDs in the model summaries at the top of this page. To extract image features with this model, follow the [timm feature extraction examples](https://rwightman.github.io/pytorch-image-models/feature_extraction/), just change the name of the model you want to use. ## How do I finetune this model? You can finetune any of the pre-trained models just by changing the classifier (the last layer). ```python model = timm.create_model('pnasnet5large', pretrained=True, num_classes=NUM_FINETUNE_CLASSES) ``` To finetune on your own dataset, you have to write a training loop or adapt [timm's training script](https://github.com/rwightman/pytorch-image-models/blob/master/train.py) to use your dataset. ## How do I train this model? You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh. 
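To make the "write a training loop" suggestion above concrete, here is a minimal fine-tuning sketch for a timm model. The dataset path, class count and hyperparameters are placeholders you would adapt to your own data; treat it as a starting point rather than a tuned recipe.

```python
import timm
import torch
from timm.data import resolve_data_config
from timm.data.transforms_factory import create_transform
from torch.utils.data import DataLoader
from torchvision import datasets

# Model with a fresh classifier head for your number of classes (placeholder: 10)
model = timm.create_model('pnasnet5large', pretrained=True, num_classes=10)
model.train()

# Reuse the model's expected preprocessing for the training transform
config = resolve_data_config({}, model=model)
transform = create_transform(**config, is_training=True)

# Placeholder dataset path: one sub-folder per class
train_ds = datasets.ImageFolder("path/to/your/train_images", transform=transform)
train_loader = DataLoader(train_ds, batch_size=8, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = torch.nn.CrossEntropyLoss()

for epoch in range(3):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch} done, last loss {loss.item():.3f}")
```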
## Citation ```BibTeX @misc{liu2018progressive, title={Progressive Neural Architecture Search}, author={Chenxi Liu and Barret Zoph and Maxim Neumann and Jonathon Shlens and Wei Hua and Li-Jia Li and Li Fei-Fei and Alan Yuille and Jonathan Huang and Kevin Murphy}, year={2018}, eprint={1712.00559}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` <!-- Type: model-index Collections: - Name: PNASNet Paper: Title: Progressive Neural Architecture Search URL: https://paperswithcode.com/paper/progressive-neural-architecture-search Models: - Name: pnasnet5large In Collection: PNASNet Metadata: FLOPs: 31458865950 Parameters: 86060000 File Size: 345153926 Architecture: - Average Pooling - Batch Normalization - Convolution - Depthwise Separable Convolution - Dropout - ReLU Tasks: - Image Classification Training Techniques: - Label Smoothing - RMSProp - Weight Decay Training Data: - ImageNet Training Resources: 100x NVIDIA P100 GPUs ID: pnasnet5large LR: 0.015 Dropout: 0.5 Crop Pct: '0.911' Momentum: 0.9 Batch Size: 1600 Image Size: '331' Interpolation: bicubic Label Smoothing: 0.1 Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/pnasnet.py#L343 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-cadene/pnasnet5large-bf079911.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 0.98% Top 5 Accuracy: 18.58% -->
huggingface/pytorch-image-models/blob/main/docs/models/pnasnet.md
Annotated Model Card Template ## Template [modelcard_template.md file](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md) ## Directions Fully filling out a model card requires input from a few different roles. (One person may have more than one role.) We’ll refer to these roles as the **developer**, who writes the code and runs training; the **sociotechnic**, who is skilled at analyzing the interaction of technology and society long-term (this includes lawyers, ethicists, sociologists, or rights advocates); and the **project organizer**, who understands the overall scope and reach of the model, can roughly fill out each part of the card, and who serves as a contact person for model card updates. * The **developer** is necessary for filling out [Training Procedure](#training-procedure-optional) and [Technical Specifications](#technical-specifications-optional). They are also particularly useful for the “Limitations” section of [Bias, Risks, and Limitations](#bias-risks-and-limitations). They are responsible for providing [Results](#results) for the Evaluation, and ideally work with the other roles to define the rest of the Evaluation: [Testing Data, Factors & Metrics](#testing-data-factors--metrics). * The **sociotechnic** is necessary for filling out “Bias” and “Risks” within [Bias, Risks, and Limitations](#bias-risks-and-limitations), and particularly useful for “Out of Scope Use” within [Uses](#uses). * The **project organizer** is necessary for filling out [Model Details](#model-details) and [Uses](#uses). They might also fill out [Training Data](#training-data). Project organizers could also be in charge of [Citation](#citation-optional), [Glossary](#glossary-optional), [Model Card Contact](#model-card-contact), [Model Card Authors](#model-card-authors-optional), and [More Information](#more-information-optional). _Instructions are provided below, in italics._ Template variable names appear in `monospace`. # Model Name **Section Overview:** Provide the model name and a 1-2 sentence summary of what the model is. `model_id` `model_summary` # Table of Contents **Section Overview:** Provide this with links to each section, to enable people to easily jump around/use the file in other locations with the preserved TOC/print out the content/etc. # Model Details **Section Overview:** This section provides basic information about what the model is, its current status, and where it came from. It should be useful for anyone who wants to reference the model. ## Model Description `model_description` _Provide basic details about the model. This includes the architecture, version, if it was introduced in a paper, if an original implementation is available, and the creators. Any copyright should be attributed here. General information about training procedures, parameters, and important disclaimers can also be mentioned in this section._ * **Developed by:** `developers` _List (and ideally link to) the people who built the model._ * **Funded by:** `funded_by` _List (and ideally link to) the funding sources that financially, computationally, or otherwise supported or enabled this model._ * **Shared by [optional]:** `shared_by` _List (and ideally link to) the people/organization making the model available online._ * **Model type:** `model_type` _You can name the “type” as:_ _1. Supervision/Learning Method_ _2. Machine Learning Type_ _3. 
Modality_ * **Language(s)** [NLP]: `language` _Use this field when the system uses or processes natural (human) language._ * **License:** `license` _Name and link to the license being used._ * **Finetuned From Model [optional]:** `base_model` _If this model has another model as its base, link to that model here._ ## Model Sources [optional] * **Repository:** `repo` * **Paper [optional]:** `paper` * **Demo [optional]:** `demo` _Provide sources for the user to directly see the model and its details. Additional kinds of resources – training logs, lessons learned, etc. – belong in the [More Information](#more-information-optional) section. If you include one thing for this section, link to the repository._ # Uses **Section Overview:** This section addresses questions around how the model is intended to be used in different applied contexts, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model. Note this section is not intended to include the license usage details. For that, link directly to the license. ## Direct Use `direct_use` _Explain how the model can be used without fine-tuning, post-processing, or plugging into a pipeline. An example code snippet is recommended._ ## Downstream Use [optional] `downstream_use` _Explain how this model can be used when fine-tuned for a task or when plugged into a larger ecosystem or app. An example code snippet is recommended._ ## Out-of-Scope Use `out_of_scope_use` _List how the model may foreseeably be misused and address what users ought not do with the model._ # Bias, Risks, and Limitations **Section Overview:** This section identifies foreseeable harms, misunderstandings, and technical and sociotechnical limitations. It also provides information on warnings and potential mitigations. `bias_risks_limitations` _What are the known or foreseeable issues stemming from this model?_ ## Recommendations `bias_recommendations` _What are recommendations with respect to the foreseeable issues? This can include everything from “downsample your image” to filtering explicit content._ # Training Details **Section Overview:** This section provides information to describe and replicate training, including the training data, the speed and size of training elements, and the environmental impact of training. This relates heavily to the [Technical Specifications](#technical-specifications-optional) as well, and content here should link to that section when it is relevant to the training procedure. It is useful for people who want to learn more about the model inputs and training footprint. It is relevant for anyone who wants to know the basics of what the model is learning. ## Training Data `training_data` _Write 1-2 sentences on what the training data is. Ideally this links to a Dataset Card for further information. Links to documentation related to data pre-processing or additional filtering may go here as well as in [More Information](#more-information-optional)._ ## Training Procedure [optional] ### Preprocessing `preprocessing` _Detail tokenization, resizing/rewriting (depending on the modality), etc._ ### Speeds, Sizes, Times `speeds_sizes_times` _Detail throughput, start/end time, checkpoint sizes, etc._ # Evaluation **Section Overview:** This section describes the evaluation protocols, what is being measured in the evaluation, and provides the results. 
Evaluation is ideally constructed with factors, such as domain and demographic subgroup, and metrics, such as accuracy, which are prioritized in light of foreseeable error contexts and groups. Target fairness metrics should be decided based on which errors are more likely to be problematic in light of the model use. ## Testing Data, Factors & Metrics ### Testing Data `testing_data` _Ideally this links to a Dataset Card for the testing data._ ### Factors `testing_factors` _What are the foreseeable characteristics that will influence how the model behaves? This includes domain and context, as well as population subgroups. Evaluation should ideally be **disaggregated** across factors in order to uncover disparities in performance._ ### Metrics `testing_metrics` _What metrics will be used for evaluation in light of tradeoffs between different errors?_ ## Results `results` _Results should be based on the Factors and Metrics defined above._ ### Summary `results_summary` _What do the results say? This can function as a kind of tl;dr for general audiences._ # Model Examination [optional] **Section Overview:** This is an experimental section some developers are beginning to add, where work on explainability/interpretability may go. `model_examination` # Environmental Impact **Section Overview:** Summarizes the information necessary to calculate environmental impacts such as electricity usage and carbon emissions. * **Hardware Type:** `hardware_type` * **Hours used:** `hours_used` * **Cloud Provider:** `cloud_provider` * **Compute Region:** `cloud_region` * **Carbon Emitted:** `co2_emitted` _Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700)._ # Technical Specifications [optional] **Section Overview:** This section includes details about the model objective and architecture, and the compute infrastructure. It is useful for people interested in model development. Writing this section usually requires the model developer to be directly involved. ## Model Architecture and Objective `model_specs` ## Compute Infrastructure `compute_infrastructure` ### Hardware `hardware_requirements` _What are the minimum hardware requirements, e.g. processing, storage, and memory requirements?_ ### Software `software` # Citation [optional] **Section Overview:** The developers’ preferred citation for this model. This is often a paper. ### BibTeX `citation_bibtex` ### APA `citation_apa` # Glossary [optional] **Section Overview:** This section defines common terms and how metrics are calculated. `glossary` _Clearly define terms in order to be accessible across audiences._ # More Information [optional] **Section Overview:** This section provides links to writing on dataset creation, technical specifications, lessons learned, and initial results. `more_information` # Model Card Authors [optional] **Section Overview:** This section lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction. `model_card_authors` # Model Card Contact **Section Overview:** Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors `model_card_contact` # How to Get Started with the Model **Section Overview:** Provides a code snippet to show how to use the model. `get_started_code`
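If you prefer to fill the template programmatically rather than by hand, `huggingface_hub` exposes it through `ModelCard.from_template`. A minimal sketch is shown below; the metadata and variable values are placeholders for illustration, and the accepted keyword arguments correspond to the template variables listed above.

```python
from huggingface_hub import ModelCard, ModelCardData

# Structured metadata that ends up in the card's YAML header
card_data = ModelCardData(language="en", license="apache-2.0")

# Template variables (model_id, model_summary, developers, ...) are passed as
# keyword arguments; the values below are placeholders only.
card = ModelCard.from_template(
    card_data,
    model_id="my-org/my-model",
    model_summary="A short one- or two-sentence summary of the model.",
    model_description="Longer description: architecture, version, creators.",
    developers="Jane Doe, John Doe",
)

card.save("README.md")  # write the rendered card next to your model files
```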
huggingface/hub-docs/blob/main/docs/hub/model-card-annotated.md
!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Perplexity of fixed-length models [[open-in-colab]] Perplexity (PPL) is one of the most common metrics for evaluating language models. Before diving in, we should note that the metric applies specifically to classical language models (sometimes called autoregressive or causal language models) and is not well defined for masked language models like BERT (see [summary of the models](model_summary)). Perplexity is defined as the exponentiated average negative log-likelihood of a sequence. If we have a tokenized sequence \\(X = (x_0, x_1, \dots, x_t)\\), then the perplexity of \\(X\\) is, $$\text{PPL}(X) = \exp \left\{ {-\frac{1}{t}\sum_i^t \log p_\theta (x_i|x_{<i}) } \right\}$$ where \\(\log p_\theta (x_i|x_{<i})\\) is the log-likelihood of the ith token conditioned on the preceding tokens \\(x_{<i}\\) according to our model. Intuitively, it can be thought of as an evaluation of the model's ability to predict uniformly among the set of specified tokens in a corpus. Importantly, this means that the tokenization procedure has a direct impact on a model's perplexity which should always be taken into consideration when comparing different models. This is also equivalent to the exponentiation of the cross-entropy between the data and model predictions. For more intuition about perplexity and its relationship to Bits Per Character (BPC) and data compression, check out this [fantastic blog post on The Gradient](https://thegradient.pub/understanding-evaluation-metrics-for-language-models/). ## Calculating PPL with fixed-length models If we weren't limited by a model's context size, we would evaluate the model's perplexity by autoregressively factorizing a sequence and conditioning on the entire preceding subsequence at each step, as shown below. <img width="600" alt="Full decomposition of a sequence with unlimited context length" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/ppl_full.gif"/> When working with approximate models, however, we typically have a constraint on the number of tokens the model can process. The largest version of [GPT-2](model_doc/gpt2), for example, has a fixed length of 1024 tokens, so we cannot calculate \\(p_\theta(x_t|x_{<t})\\) directly when \\(t\\) is greater than 1024. Instead, the sequence is typically broken into subsequences equal to the model's maximum input size. If a model's max input size is \\(k\\), we then approximate the likelihood of a token \\(x_t\\) by conditioning only on the \\(k-1\\) tokens that precede it rather than the entire context. When evaluating the model's perplexity of a sequence, a tempting but suboptimal approach is to break the sequence into disjoint chunks and add up the decomposed log-likelihoods of each segment independently. 
<img width="600" alt="Suboptimal PPL not taking advantage of full available context" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/ppl_chunked.gif"/> This is quick to compute since the perplexity of each segment can be computed in one forward pass, but serves as a poor approximation of the fully-factorized perplexity and will typically yield a higher (worse) PPL because the model will have less context at most of the prediction steps. Instead, the PPL of fixed-length models should be evaluated with a sliding-window strategy. This involves repeatedly sliding the context window so that the model has more context when making each prediction. <img width="600" alt="Sliding window PPL taking advantage of all available context" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/ppl_sliding.gif"/> This is a closer approximation to the true decomposition of the sequence probability and will typically yield a more favorable score. The downside is that it requires a separate forward pass for each token in the corpus. A good practical compromise is to employ a strided sliding window, moving the context by larger strides rather than sliding by 1 token a time. This allows computation to proceed much faster while still giving the model a large context to make predictions at each step. ## Example: Calculating perplexity with GPT-2 in 🤗 Transformers Let's demonstrate this process with GPT-2. ```python from transformers import GPT2LMHeadModel, GPT2TokenizerFast device = "cuda" model_id = "gpt2-large" model = GPT2LMHeadModel.from_pretrained(model_id).to(device) tokenizer = GPT2TokenizerFast.from_pretrained(model_id) ``` We'll load in the WikiText-2 dataset and evaluate the perplexity using a few different sliding-window strategies. Since this dataset is small and we're just doing one forward pass over the set, we can just load and encode the entire dataset in memory. ```python from datasets import load_dataset test = load_dataset("wikitext", "wikitext-2-raw-v1", split="test") encodings = tokenizer("\n\n".join(test["text"]), return_tensors="pt") ``` With 🤗 Transformers, we can simply pass the `input_ids` as the `labels` to our model, and the average negative log-likelihood for each token is returned as the loss. With our sliding window approach, however, there is overlap in the tokens we pass to the model at each iteration. We don't want the log-likelihood for the tokens we're just treating as context to be included in our loss, so we can set these targets to `-100` so that they are ignored. The following is an example of how we could do this with a stride of `512`. This means that the model will have at least 512 tokens for context when calculating the conditional likelihood of any one token (provided there are 512 preceding tokens available to condition on). ```python import torch from tqdm import tqdm max_length = model.config.n_positions stride = 512 seq_len = encodings.input_ids.size(1) nlls = [] prev_end_loc = 0 for begin_loc in tqdm(range(0, seq_len, stride)): end_loc = min(begin_loc + max_length, seq_len) trg_len = end_loc - prev_end_loc # may be different from stride on last loop input_ids = encodings.input_ids[:, begin_loc:end_loc].to(device) target_ids = input_ids.clone() target_ids[:, :-trg_len] = -100 with torch.no_grad(): outputs = model(input_ids, labels=target_ids) # loss is calculated using CrossEntropyLoss which averages over valid labels # N.B. 
the model only calculates loss over trg_len - 1 labels, because it internally shifts the labels # to the left by 1. neg_log_likelihood = outputs.loss nlls.append(neg_log_likelihood) prev_end_loc = end_loc if end_loc == seq_len: break ppl = torch.exp(torch.stack(nlls).mean()) ``` Running this with the stride length equal to the max input length is equivalent to the suboptimal, non-sliding-window strategy we discussed above. The smaller the stride, the more context the model will have in making each prediction, and the better the reported perplexity will typically be. When we run the above with `stride = 1024`, i.e. no overlap, the resulting PPL is `19.44`, which is about the same as the `19.93` reported in the GPT-2 paper. By using `stride = 512` and thereby employing our striding window strategy, this jumps down to `16.45`. This is not only a more favorable score, but is calculated in a way that is closer to the true autoregressive decomposition of a sequence likelihood.
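To spell out what the sliding-window evaluation approximates: with a maximum input size of \\(k\\), each token is conditioned on at most the \\(k-1\\) tokens that precede it, so the quantity being estimated is roughly

$$\text{PPL}_{\text{sliding}}(X) = \exp \left\{ -\frac{1}{t}\sum_i^t \log p_\theta \big(x_i \mid x_{\max(0,\, i-k+1)}, \dots, x_{i-1}\big) \right\}$$

with the strided variant in the code above trading a little of that context for far fewer forward passes.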
huggingface/transformers/blob/main/docs/source/en/perplexity.md
!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # 🤗 Optimum notebooks You can find here a list of the notebooks associated with each accelerator in 🤗 Optimum. ## Optimum Habana | Notebook | Description | Colab | Studio Lab | |:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:| | [How to use DeepSpeed to train models with billions of parameters on Habana Gaudi](https://github.com/huggingface/optimum-habana/blob/main/notebooks/AI_HW_Summit_2022.ipynb) | Show how to use DeepSpeed to pre-train/fine-tune the 1.6B-parameter GPT2-XL for causal language modeling on Habana Gaudi. 
| [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/optimum-habana/blob/main/notebooks/AI_HW_Summit_2022.ipynb) | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/optimum-habana/blob/main/notebooks/AI_HW_Summit_2022.ipynb) | ## Optimum Intel ### OpenVINO | Notebook | Description | Colab | Studio Lab | |:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:| | [How to run inference with OpenVINO](https://github.com/huggingface/optimum-intel/blob/main/notebooks/openvino/optimum_openvino_inference.ipynb) | Explains how to export your model to OpenVINO and run inference with OpenVINO Runtime on various tasks| [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/optimum-intel/blob/main/notebooks/openvino/optimum_openvino_inference.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/optimum-intel/blob/main/notebooks/openvino/optimum_openvino_inference.ipynb)| | [How to quantize a question answering model with NNCF](https://github.com/huggingface/optimum-intel/blob/main/notebooks/openvino/question_answering_quantization.ipynb) | Show how to apply post-training quantization on a question answering model using [NNCF](https://github.com/openvinotoolkit/nncf) and to accelerate inference with OpenVINO| [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/optimum-intel/blob/main/notebooks/openvino/question_answering_quantization.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/optimum-intel/blob/main/notebooks/openvino/question_answering_quantization.ipynb)| | [Compare outputs of a quantized Stable Diffusion model with its full-precision counterpart](https://github.com/huggingface/optimum-intel/blob/main/notebooks/openvino/stable_diffusion_quantization.ipynb) | Show how to load and compare outputs from two Stable Diffusion models with different precision| [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/optimum-intel/blob/main/notebooks/openvino/stable_diffusion_quantization.ipynb)| [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/optimum-intel/blob/main/notebooks/openvino/stable_diffusion_quantization.ipynb)| ### Neural Compressor | Notebook | Description | Colab | Studio Lab | 
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:| | [How to quantize a model with Intel Neural Compressor for text classification](https://github.com/huggingface/notebooks/blob/main/examples/text_classification_quantization_inc.ipynb) | Show how to apply quantization while training your model using Intel [Neural Compressor](https://github.com/intel/neural-compressor) for any GLUE task. | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification_quantization_inc.ipynb) | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/text_classification_quantization_inc.ipynb) | ## Optimum ONNX Runtime | Notebook | Description | Colab | Studio Lab | |:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:| | [How to quantize a model with ONNX Runtime for text classification](https://github.com/huggingface/notebooks/blob/main/examples/text_classification_quantization_ort.ipynb) | Show how to apply static and dynamic quantization on a model using [ONNX Runtime](https://github.com/microsoft/onnxruntime) for any GLUE task. | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification_quantization_ort.ipynb) | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/text_classification_quantization_ort.ipynb) | | [How to fine-tune a model for text classification with ONNX Runtime](https://github.com/huggingface/notebooks/blob/main/examples/text_classification_ort.ipynb) | Show how to DistilBERT model on GLUE tasks using [ONNX Runtime](https://github.com/microsoft/onnxruntime). 
| [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification_ort.ipynb) | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/text_classification_ort.ipynb) | | [How to fine-tune a model for summarization with ONNX Runtime](https://github.com/huggingface/notebooks/blob/main/examples/summarization_ort.ipynb) | Show how to fine-tune a T5 model on the BBC news corpus. | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/summarization_ort.ipynb) | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/summarization_ort.ipynb) | | [How to fine-tune DeBERTa for question-answering with ONNX Runtime](https://github.com/huggingface/notebooks/blob/main/examples/question_answering_ort.ipynb) | Show how to fine-tune a DeBERTa model on the squad. | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering_ort.ipynb) | [![Open in AWS Studio](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/question_answering_ort.ipynb) |
huggingface/optimum/blob/main/notebooks/README.md
-- title: "A Complete Guide to Audio Datasets" thumbnail: /blog/assets/116_audio_datasets/thumbnail.jpg authors: - user: sanchit-gandhi --- # A Complete Guide to Audio Datasets <!--- Note to reviewer: comments and TODOs are included in this format. ---> <a target="_blank" href="https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/audio_datasets_colab.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> ## Introduction 🤗 Datasets is an open-source library for downloading and preparing datasets from all domains. Its minimalistic API allows users to download and prepare datasets in just one line of Python code, with a suite of functions that enable efficient pre-processing. The number of datasets available is unparalleled, with all the most popular machine learning datasets available to download. Not only this, but 🤗 Datasets comes prepared with multiple audio-specific features that make working with audio datasets easy for researchers and practitioners alike. In this blog, we'll demonstrate these features, showcasing why 🤗 Datasets is the go-to place for downloading and preparing audio datasets. ## Contents 1. [The Hub](#the-hub) 2. [Load an Audio Dataset](#load-an-audio-dataset) 3. [Easy to Load, Easy to Process](#easy-to-load-easy-to-process) 4. [Streaming Mode: The Silver Bullet](#streaming-mode-the-silver-bullet) 5. [A Tour of Audio Datasets on the Hub](#a-tour-of-audio-datasets-on-the-hub) 6. [Closing Remarks](#closing-remarks) ## The Hub The Hugging Face Hub is a platform for hosting models, datasets and demos, all open source and publicly available. It is home to a growing collection of audio datasets that span a variety of domains, tasks and languages. Through tight integrations with 🤗 Datasets, all the datasets on the Hub can be downloaded in one line of code. Let's head to the Hub and filter the datasets by task: * [Speech Recognition Datasets on the Hub](https://huggingface.co/datasets?task_categories=task_categories:automatic-speech-recognition&sort=downloads) * [Audio Classification Datasets on the Hub](https://huggingface.co/datasets?task_categories=task_categories:audio-classification&sort=downloads) <figure> <img src="assets/116_audio_datasets/hub_asr_datasets.jpg" alt="Trulli" style="width:100%"> </figure> At the time of writing, there are 77 speech recognition datasets and 28 audio classification datasets on the Hub, with these numbers ever-increasing. You can select any one of these datasets to suit your needs. Let's check out the first speech recognition result. Clicking on [`common_voice`](https://huggingface.co/datasets/common_voice) brings up the dataset card: <figure> <img src="assets/116_audio_datasets/common_voice.jpg" alt="Trulli" style="width:100%"> </figure> Here, we can find additional information about the dataset, see what models are trained on the dataset and, most excitingly, listen to actual audio samples. The Dataset Preview is presented in the middle of the dataset card. It shows us the first 100 samples for each subset and split. What's more, it's loaded up the audio samples ready for us to listen to in real-time. If we hit the play button on the first sample, we can listen to the audio and see the corresponding text. The Dataset Preview is a brilliant way of experiencing audio datasets before committing to using them. 
You can pick any dataset on the Hub, scroll through the samples and listen to the audio for the different subsets and splits, gauging whether it's the right dataset for your needs. Once you've selected a dataset, it's trivial to load the data so that you can start using it. ## Load an Audio Dataset One of the key defining features of 🤗 Datasets is the ability to download and prepare a dataset in just one line of Python code. This is made possible through the [`load_dataset`](https://huggingface.co/docs/datasets/loading#load) function. Conventionally, loading a dataset involves: i) downloading the raw data, ii) extracting it from its compressed format, and iii) preparing individual samples and splits. Using `load_dataset`, all of the heavy lifting is done under the hood. Let's take the example of loading the [GigaSpeech](https://huggingface.co/datasets/speechcolab/gigaspeech) dataset from Speech Colab. GigaSpeech is a relatively recent speech recognition dataset for benchmarking academic speech systems and is one of many audio datasets available on the Hugging Face Hub. To load the GigaSpeech dataset, we simply take the dataset's identifier on the Hub (`speechcolab/gigaspeech`) and specify it to the [`load_dataset`](https://huggingface.co/docs/datasets/loading#load) function. GigaSpeech comes in five configurations of increasing size, ranging from `xs` (10 hours) to `xl`(10,000 hours). For the purpose of this tutorial, we'll load the smallest of these configurations. The dataset's identifier and the desired configuration are all that we require to download the dataset: ```python from datasets import load_dataset gigaspeech = load_dataset("speechcolab/gigaspeech", "xs") print(gigaspeech) ``` **Print Output:** ```python DatasetDict({ train: Dataset({ features: ['segment_id', 'speaker', 'text', 'audio', 'begin_time', 'end_time', 'audio_id', 'title', 'url', 'source', 'category', 'original_full_path'], num_rows: 9389 }) validation: Dataset({ features: ['segment_id', 'speaker', 'text', 'audio', 'begin_time', 'end_time', 'audio_id', 'title', 'url', 'source', 'category', 'original_full_path'], num_rows: 6750 }) test: Dataset({ features: ['segment_id', 'speaker', 'text', 'audio', 'begin_time', 'end_time', 'audio_id', 'title', 'url', 'source', 'category', 'original_full_path'], num_rows: 25619 }) }) ``` And just like that, we have the GigaSpeech dataset ready! There simply is no easier way of loading an audio dataset. We can see that we have the training, validation and test splits pre-partitioned, with the corresponding information for each. The object `gigaspeech` returned by the `load_dataset` function is a [`DatasetDict`](https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.DatasetDict). We can treat it in much the same way as an ordinary Python dictionary. To get the train split, we pass the corresponding key to the `gigaspeech` dictionary: ```python print(gigaspeech["train"]) ``` **Print Output:** ```python Dataset({ features: ['segment_id', 'speaker', 'text', 'audio', 'begin_time', 'end_time', 'audio_id', 'title', 'url', 'source', 'category', 'original_full_path'], num_rows: 9389 }) ``` This returns a [`Dataset`](https://huggingface.co/docs/datasets/v2.7.1/en/package_reference/main_classes#datasets.Dataset) object, which contains the data for the training split. We can go one level deeper and get the first item of the split. 
Again, this is possible through standard Python indexing: ```python print(gigaspeech["train"][0]) ``` **Print Output:** ```python {'segment_id': 'YOU0000000315_S0000660', 'speaker': 'N/A', 'text': "AS THEY'RE LEAVING <COMMA> CAN KASH PULL ZAHRA ASIDE REALLY QUICKLY <QUESTIONMARK>", 'audio': {'path': '/home/sanchit_huggingface_co/.cache/huggingface/datasets/downloads/extracted/7f8541f130925e9b2af7d37256f2f61f9d6ff21bf4a94f7c1a3803ec648d7d79/xs_chunks_0000/YOU0000000315_S0000660.wav', 'array': array([0.0005188 , 0.00085449, 0.00012207, ..., 0.00125122, 0.00076294, 0.00036621], dtype=float32), 'sampling_rate': 16000 }, 'begin_time': 2941.889892578125, 'end_time': 2945.070068359375, 'audio_id': 'YOU0000000315', 'title': 'Return to Vasselheim | Critical Role: VOX MACHINA | Episode 43', 'url': 'https://www.youtube.com/watch?v=zr2n1fLVasU', 'source': 2, 'category': 24, 'original_full_path': 'audio/youtube/P0004/YOU0000000315.opus', } ``` We can see that there are a number of features returned by the training split, including `segment_id`, `speaker`, `text`, `audio` and more. For speech recognition, we'll be concerned with the `text` and `audio` columns. Using 🤗 Datasets' [`remove_columns`](https://huggingface.co/docs/datasets/process#remove) method, we can remove the dataset features not required for speech recognition: ```python COLUMNS_TO_KEEP = ["text", "audio"] all_columns = gigaspeech["train"].column_names columns_to_remove = set(all_columns) - set(COLUMNS_TO_KEEP) gigaspeech = gigaspeech.remove_columns(columns_to_remove) ``` Let's check that we've successfully retained the `text` and `audio` columns: ```python print(gigaspeech["train"][0]) ``` **Print Output:** ```python {'text': "AS THEY'RE LEAVING <COMMA> CAN KASH PULL ZAHRA ASIDE REALLY QUICKLY <QUESTIONMARK>", 'audio': {'path': '/home/sanchit_huggingface_co/.cache/huggingface/datasets/downloads/extracted/7f8541f130925e9b2af7d37256f2f61f9d6ff21bf4a94f7c1a3803ec648d7d79/xs_chunks_0000/YOU0000000315_S0000660.wav', 'array': array([0.0005188 , 0.00085449, 0.00012207, ..., 0.00125122, 0.00076294, 0.00036621], dtype=float32), 'sampling_rate': 16000}} ``` Great! We can see that we've got the two required columns `text` and `audio`. The `text` is a string with the sample transcription and the `audio` a 1-dimensional array of amplitude values at a sampling rate of 16KHz. That's our dataset loaded! ## Easy to Load, Easy to Process Loading a dataset with 🤗 Datasets is just half of the fun. We can now use the suite of tools available to efficiently pre-process our data ready for model training or inference. In this Section, we'll perform three stages of data pre-processing: 1. [Resampling the Audio Data](#1-resampling-the-audio-data) 2. [Pre-Processing Function](#2-pre-processing-function) 3. [Filtering Function](#3-filtering-function) ### 1. Resampling the Audio Data The `load_dataset` function prepares audio samples with the sampling rate that they were published with. This is not always the sampling rate expected by our model. In this case, we need to _resample_ the audio to the correct sampling rate. We can set the audio inputs to our desired sampling rate using 🤗 Datasets' [`cast_column`](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=cast_column#datasets.DatasetDict.cast_column) method. This operation does not change the audio in-place, but rather signals to `datasets` to resample the audio samples _on the fly_ when they are loaded. 
The following code cell will set the sampling rate to 8kHz: ```python from datasets import Audio gigaspeech = gigaspeech.cast_column("audio", Audio(sampling_rate=8000)) ``` Re-loading the first audio sample in the GigaSpeech dataset will resample it to the desired sampling rate: ```python print(gigaspeech["train"][0]) ``` **Print Output:** ```python {'text': "AS THEY'RE LEAVING <COMMA> CAN KASH PULL ZAHRA ASIDE REALLY QUICKLY <QUESTIONMARK>", 'audio': {'path': '/home/sanchit_huggingface_co/.cache/huggingface/datasets/downloads/extracted/7f8541f130925e9b2af7d37256f2f61f9d6ff21bf4a94f7c1a3803ec648d7d79/xs_chunks_0000/YOU0000000315_S0000660.wav', 'array': array([ 0.00046338, 0.00034808, -0.00086153, ..., 0.00099299, 0.00083484, 0.00080221], dtype=float32), 'sampling_rate': 8000} } ``` We can see that the sampling rate has been downsampled to 8kHz. The array values are also different, as we've now only got approximately one amplitude value for every two that we had before. Let's set the dataset sampling rate back to 16kHz, the sampling rate expected by most speech recognition models: ```python gigaspeech = gigaspeech.cast_column("audio", Audio(sampling_rate=16000)) print(gigaspeech["train"][0]) ``` **Print Output:** ```python {'text': "AS THEY'RE LEAVING <COMMA> CAN KASH PULL ZAHRA ASIDE REALLY QUICKLY <QUESTIONMARK>", 'audio': {'path': '/home/sanchit_huggingface_co/.cache/huggingface/datasets/downloads/extracted/7f8541f130925e9b2af7d37256f2f61f9d6ff21bf4a94f7c1a3803ec648d7d79/xs_chunks_0000/YOU0000000315_S0000660.wav', 'array': array([0.0005188 , 0.00085449, 0.00012207, ..., 0.00125122, 0.00076294, 0.00036621], dtype=float32), 'sampling_rate': 16000} } ``` Easy! `cast_column` provides a straightforward mechanism for resampling audio datasets as and when required. ### 2. Pre-Processing Function One of the most challenging aspects of working with audio datasets is preparing the data in the right format for our model. Using 🤗 Datasets' [`map`](https://huggingface.co/docs/datasets/v2.6.1/en/process#map) method, we can write a function to pre-process a single sample of the dataset, and then apply it to every sample without any code changes. First, let's load a processor object from 🤗 Transformers. This processor pre-processes the audio to input features and tokenises the target text to labels. The `AutoProcessor` class is used to load a processor from a given model checkpoint. In the example, we load the processor from OpenAI's [Whisper medium.en](https://huggingface.co/openai/whisper-medium.en) checkpoint, but you can change this to any model identifier on the Hugging Face Hub: ```python from transformers import AutoProcessor processor = AutoProcessor.from_pretrained("openai/whisper-medium.en") ``` Great! Now we can write a function that takes a single training sample and passes it through the `processor` to prepare it for our model. We'll also compute the input length of each audio sample, information that we'll need for the next data preparation step: ```python def prepare_dataset(batch): audio = batch["audio"] batch = processor(audio["array"], sampling_rate=audio["sampling_rate"], text=batch["text"]) batch["input_length"] = len(audio["array"]) / audio["sampling_rate"] return batch ``` We can apply the data preparation function to all of our training examples using 🤗 Datasets' `map` method. 
Here, we also remove the `text` and `audio` columns, since we have pre-processed the audio to input features and tokenised the text to labels: ```python gigaspeech = gigaspeech.map(prepare_dataset, remove_columns=gigaspeech["train"].column_names) ``` ### 3. Filtering Function Prior to training, we might have a heuristic for filtering our training data. For instance, we might want to filter any audio samples longer than 30s to prevent truncating the audio samples or risking out-of-memory errors. We can do this in much the same way that we prepared the data for our model in the previous step. We start by writing a function that indicates which samples to keep and which to discard. This function, `is_audio_length_in_range`, returns a boolean: samples that are shorter than 30s return True, and those that are longer False. ```python MAX_DURATION_IN_SECONDS = 30.0 def is_audio_length_in_range(input_length): return input_length < MAX_DURATION_IN_SECONDS ``` We can apply this filtering function to all of our training examples using 🤗 Datasets' [`filter`](https://huggingface.co/docs/datasets/process#select-and-filter) method, keeping all samples that are shorter than 30s (True) and discarding those that are longer (False): ```python gigaspeech["train"] = gigaspeech["train"].filter(is_audio_length_in_range, input_columns=["input_length"]) ``` And with that, we have the GigaSpeech dataset fully prepared for our model! In total, this process required 13 lines of Python code, right from loading the dataset to the final filtering step. Keeping the notebook as general as possible, we only performed the fundamental data preparation steps. However, there is no restriction to the functions you can apply to your audio dataset. You can extend the function `prepare_dataset` to perform much more involved operations, such as data augmentation, voice activity detection or noise reduction. With 🤗 Datasets, if you can write it in a Python function, you can apply it to your dataset! ## Streaming Mode: The Silver Bullet One of the biggest challenges faced with audio datasets is their sheer size. The `xs` configuration of GigaSpeech contained just 10 hours of training data, but amassed over 13GB of storage space for download and preparation. So what happens when we want to train on a larger split? The full `xl` configuration contains 10,000 hours of training data, requiring over 1TB of storage space. For most speech researchers, this well exceeds the specifications of a typical hard drive disk. Do we need to fork out and buy additional storage? Or is there a way we can train on these datasets with **no disk space constraints**? 🤗 Datasets allow us to do just this. It is made possible through the use of [_streaming_](https://huggingface.co/docs/datasets/stream) mode, depicted graphically in Figure 1. Streaming allows us to load the data progressively as we iterate over the dataset. Rather than downloading the whole dataset at once, we load the dataset sample by sample. We iterate over the dataset, loading and preparing samples _on the fly_ when they are needed. This way, we only ever load the samples that we're using, and not the ones that we're not! Once we're done with a sample, we continue iterating over the dataset and load the next one. This is analogous to _downloading_ a TV show versus _streaming_ it. When we download a TV show, we download the entire video offline and save it to our disk. 
We have to wait for the entire video to download before we can watch it and require as much disk space as size of the video file. Compare this to streaming a TV show. Here, we don’t download any part of the video to disk, but rather iterate over the remote video file and load each part in real-time as required. We don't have to wait for the full video to buffer before we can start watching, we can start as soon as the first portion of the video is ready! This is the same _streaming_ principle that we apply to loading datasets. <figure> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets/streaming.gif" alt="Trulli" style="width:100%"> <figcaption align = "center"><b>Figure 1:</b> Streaming mode. The dataset is loaded progressively as we iterate over the dataset.</figcaption> </figure> Streaming mode has three primary advantages over downloading the entire dataset at once: 1. **Disk space:** samples are loaded to memory one-by-one as we iterate over the dataset. Since the data is not downloaded locally, there are no disk space requirements, so you can use datasets of arbitrary size. 2. **Download and processing time:** audio datasets are large and need a significant amount of time to download and process. With streaming, loading and processing is done on the fly, meaning you can start using the dataset as soon as the first sample is ready. 3. **Easy experimentation:** you can experiment on a handful samples to check that your script works without having to download the entire dataset. There is one caveat to streaming mode. When downloading a dataset, both the raw data and processed data are saved locally to disk. If we want to re-use this dataset, we can directly load the processed data from disk, skipping the download and processing steps. Consequently, we only have to perform the downloading and processing operations once, after which we can re-use the prepared data. With streaming mode, the data is not downloaded to disk. Thus, neither the downloaded nor pre-processed data are cached. If we want to re-use the dataset, the streaming steps must be repeated, with the audio files loaded and processed on the fly again. For this reason, it is advised to download datasets that you are likely to use multiple times. How can you enable streaming mode? Easy! Just set `streaming=True` when you load your dataset. The rest will be taken care for you: ```python gigaspeech = load_dataset("speechcolab/gigaspeech", "xs", streaming=True) ``` All the steps covered so far in this tutorial can be applied to the streaming dataset without any code changes. The only change is that you can no longer access individual samples using Python indexing (i.e. `gigaspeech["train"][sample_idx]`). Instead, you have to iterate over the dataset, using a `for` loop for example. Streaming mode can take your research to the next level: not only are the biggest datasets accessible to you, but you can easily evaluate systems over multiple datasets in one go without worrying about your disk space. Compared to evaluating on a single dataset, multi-dataset evaluation gives a better metric for the generalisation abilities of a speech recognition system (_c.f._ [End-to-end Speech Benchmark (ESB)](https://arxiv.org/abs/2210.13352)). 
The accompanying [Google Colab](https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/audio_datasets_colab.ipynb) provides an example for evaluating the Whisper model on eight English speech recognition datasets in one script using streaming mode.

## A Tour of Audio Datasets on The Hub

This section serves as a reference guide for the most popular speech recognition, speech translation and audio classification datasets on the Hugging Face Hub. We can apply everything that we've covered for the GigaSpeech dataset to any of the datasets on the Hub. All we have to do is switch the dataset identifier in the `load_dataset` function. It's that easy!

1. [English Speech Recognition](#english-speech-recognition)
2. [Multilingual Speech Recognition](#multilingual-speech-recognition)
3. [Speech Translation](#speech-translation)
4. [Audio Classification](#audio-classification)

### English Speech Recognition

Speech recognition, or speech-to-text, is the task of mapping from spoken speech to written text, where both the speech and text are in the same language. We provide a summary of the most popular English speech recognition datasets on the Hub:

| Dataset | Domain | Speaking Style | Train Hours | Casing | Punctuation | License | Recommended Use |
|---------|--------|----------------|-------------|--------|-------------|---------|-----------------|
| [LibriSpeech](https://huggingface.co/datasets/librispeech_asr) | Audiobook | Narrated | 960 | ❌ | ❌ | CC-BY-4.0 | Academic benchmarks |
| [Common Voice 11](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0) | Wikipedia | Narrated | 2300 | ✅ | ✅ | CC0-1.0 | Non-native speakers |
| [VoxPopuli](https://huggingface.co/datasets/facebook/voxpopuli) | European Parliament | Oratory | 540 | ❌ | ✅ | CC0 | Non-native speakers |
| [TED-LIUM](https://huggingface.co/datasets/LIUM/tedlium) | TED talks | Oratory | 450 | ❌ | ❌ | CC-BY-NC-ND 3.0 | Technical topics |
| [GigaSpeech](https://huggingface.co/datasets/speechcolab/gigaspeech) | Audiobook, podcast, YouTube | Narrated, spontaneous | 10000 | ❌ | ✅ | apache-2.0 | Robustness over multiple domains |
| [SPGISpeech](https://huggingface.co/datasets/kensho/spgispeech) | Financial meetings | Oratory, spontaneous | 5000 | ✅ | ✅ | User Agreement | Fully formatted transcriptions |
| [Earnings-22](https://huggingface.co/datasets/revdotcom/earnings22) | Financial meetings | Oratory, spontaneous | 119 | ✅ | ✅ | CC-BY-SA-4.0 | Diversity of accents |
| [AMI](https://huggingface.co/datasets/edinburghcstr/ami) | Meetings | Spontaneous | 100 | ✅ | ✅ | CC-BY-4.0 | Noisy speech conditions |

Refer to the [Google Colab](https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/audio_datasets_colab.ipynb) for a guide on evaluating a system on all eight English speech recognition datasets in one script. The following dataset descriptions are largely taken from the [ESB Benchmark](https://arxiv.org/abs/2210.13352) paper.

#### [LibriSpeech ASR](https://huggingface.co/datasets/librispeech_asr)

LibriSpeech is a standard large-scale dataset for evaluating ASR systems. It consists of approximately 1,000 hours of narrated audiobooks collected from the [LibriVox](https://librivox.org/) project. LibriSpeech has been instrumental in facilitating researchers to leverage a large body of pre-existing transcribed speech data.
As such, it has become one of the most popular datasets for benchmarking academic speech systems.

```python
librispeech = load_dataset("librispeech_asr", "all")
```

#### [Common Voice](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0)

Common Voice is a series of crowd-sourced open-licensed speech datasets where speakers record text from Wikipedia in various languages. Since anyone can contribute recordings, there is significant variation in both audio quality and speakers. The audio conditions are challenging, with recording artefacts, accented speech, hesitations, and the presence of foreign words. The transcriptions are both cased and punctuated. The English subset of version 11.0 contains approximately 2,300 hours of validated data. Use of the dataset requires you to agree to the Common Voice terms of use, which can be found on the Hugging Face Hub: [mozilla-foundation/common_voice_11_0](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0). Once you have agreed to the terms of use, you will be granted access to the dataset. You will then need to provide an [authentication token](https://huggingface.co/settings/tokens) from the Hub when you load the dataset.

```python
common_voice = load_dataset("mozilla-foundation/common_voice_11_0", "en", use_auth_token=True)
```

#### [VoxPopuli](https://huggingface.co/datasets/facebook/voxpopuli)

VoxPopuli is a large-scale multilingual speech corpus consisting of data sourced from 2009-2020 European Parliament event recordings. Consequently, it occupies the unique domain of oratory, political speech, largely sourced from non-native speakers. The English subset contains approximately 550 hours of labelled speech.

```python
voxpopuli = load_dataset("facebook/voxpopuli", "en")
```

#### [TED-LIUM](https://huggingface.co/datasets/LIUM/tedlium)

TED-LIUM is a dataset based on English-language TED Talk conference videos. The speaking style is oratory educational talks. The transcribed talks cover a range of different cultural, political, and academic topics, resulting in a technical vocabulary. The Release 3 (latest) edition of the dataset contains approximately 450 hours of training data. The validation and test data are from the legacy set, consistent with earlier releases.

```python
tedlium = load_dataset("LIUM/tedlium", "release3")
```

#### [GigaSpeech](https://huggingface.co/datasets/speechcolab/gigaspeech)

GigaSpeech is a multi-domain English speech recognition corpus curated from audiobooks, podcasts and YouTube. It covers both narrated and spontaneous speech over a variety of topics, such as arts, science and sports. It contains training splits varying from 10 hours to 10,000 hours and standardised validation and test splits.

```python
gigaspeech = load_dataset("speechcolab/gigaspeech", "xs", use_auth_token=True)
```

#### [SPGISpeech](https://huggingface.co/datasets/kensho/spgispeech)

SPGISpeech is an English speech recognition corpus composed of company earnings calls that have been manually transcribed by S&P Global, Inc. The transcriptions are fully-formatted according to a professional style guide for oratory and spontaneous speech. It contains training splits ranging from 200 hours to 5,000 hours, with canonical validation and test splits.

```python
spgispeech = load_dataset("kensho/spgispeech", "s", use_auth_token=True)
```

#### [Earnings-22](https://huggingface.co/datasets/revdotcom/earnings22)

Earnings-22 is a 119-hour corpus of English-language earnings calls collected from global companies.
The dataset was developed with the goal of aggregating a broad range of speakers and accents covering a range of real-world financial topics. There is large diversity in the speakers and accents, with speakers taken from seven different language regions. Earnings-22 was published primarily as a test-only dataset. The Hub contains a version of the dataset that has been partitioned into train-validation-test splits. ```python earnings22 = load_dataset("revdotcom/earnings22") ``` #### [AMI](https://huggingface.co/datasets/edinburghcstr/ami) AMI comprises 100 hours of meeting recordings captured using different recording streams. The corpus contains manually annotated orthographic transcriptions of the meetings aligned at the word level. Individual samples of the AMI dataset contain very large audio files (between 10 and 60 minutes), which are segmented to lengths feasible for training most speech recognition systems. AMI contains two splits: IHM and SDM. IHM (individual headset microphone) contains easier near-field speech, and SDM (single distant microphone) harder far-field speech. ```python ami = load_dataset("edinburghcstr/ami", "ihm") ``` ### Multilingual Speech Recognition Multilingual speech recognition refers to speech recognition (speech-to-text) for all languages except English. #### [Multilingual LibriSpeech](https://huggingface.co/datasets/facebook/multilingual_librispeech) Multilingual LibriSpeech is the multilingual equivalent of the [LibriSpeech ASR](https://huggingface.co/datasets/librispeech_asr) corpus. It comprises a large corpus of read audiobooks taken from the [LibriVox](https://librivox.org/) project, making it a suitable dataset for academic research. It contains data split into eight high-resource languages - English, German, Dutch, Spanish, French, Italian, Portuguese and Polish. #### [Common Voice](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0) Common Voice is a series of crowd-sourced open-licensed speech datasets where speakers record text from Wikipedia in various languages. Since anyone can contribute recordings, there is significant variation in both audio quality and speakers. The audio conditions are challenging, with recording artefacts, accented speech, hesitations, and the presence of foreign words. The transcriptions are both cased and punctuated. As of version 11, there are over 100 languages available, both low and high-resource. #### [VoxPopuli](https://huggingface.co/datasets/facebook/voxpopuli) VoxPopuli is a large-scale multilingual speech corpus consisting of data sourced from 2009-2020 European Parliament event recordings. Consequently, it occupies the unique domain of oratory, political speech, largely sourced from non-native speakers. It contains labelled audio-transcription data for 15 European languages. #### [FLEURS](https://huggingface.co/datasets/google/fleurs) FLEURS (Few-shot Learning Evaluation of Universal Representations of Speech) is a dataset for evaluating speech recognition systems in 102 languages, including many that are classified as 'low-resource'. The data is derived from the [FLoRes-101](https://arxiv.org/abs/2106.03193) dataset, a machine translation corpus with 3001 sentence translations from English to 101 other languages. Native speakers are recorded narrating the sentence transcriptions in their native language. The recorded audio data is paired with the sentence transcriptions to yield multilingual speech recognition over all 101 languages. 
The training sets contain approximately 10 hours of supervised audio-transcription data per language. ### Speech Translation Speech translation is the task of mapping from spoken speech to written text, where the speech and text are in different languages (e.g. English speech to French text). #### [CoVoST 2](https://huggingface.co/datasets/covost2) CoVoST 2 is a large-scale multilingual speech translation corpus covering translations from 21 languages into English and from English into 15 languages. The dataset is created using Mozilla's open-source Common Voice database of crowd-sourced voice recordings. There are 2,900 hours of speech represented in the corpus. #### [FLEURS](https://huggingface.co/datasets/google/fleurs) FLEURS (Few-shot Learning Evaluation of Universal Representations of Speech) is a dataset for evaluating speech recognition systems in 102 languages, including many that are classified as 'low-resource'. The data is derived from the [FLoRes-101](https://arxiv.org/abs/2106.03193) dataset, a machine translation corpus with 3001 sentence translations from English to 101 other languages. Native speakers are recorded narrating the sentence transcriptions in their native languages. An \\(n\\)-way parallel corpus of speech translation data is constructed by pairing the recorded audio data with the sentence transcriptions for each of the 101 languages. The training sets contain approximately 10 hours of supervised audio-transcription data per source-target language combination. ### Audio Classification Audio classification is the task of mapping a raw audio input to a class label output. Practical applications of audio classification include keyword spotting, speaker intent and language identification. #### [SpeechCommands](https://huggingface.co/datasets/speech_commands) SpeechCommands is a dataset comprised of one-second audio files, each containing either a single spoken word in English or background noise. The words are taken from a small set of commands and are spoken by a number of different speakers. The dataset is designed to help train and evaluate small on-device keyword spotting systems. #### [Multilingual Spoken Words](https://huggingface.co/datasets/MLCommons/ml_spoken_words) Multilingual Spoken Words is a large-scale corpus of one-second audio samples, each containing a single spoken word. The dataset consists of 50 languages and more than 340,000 keywords, totalling 23.4 million one-second spoken examples or over 6,000 hours of audio. The audio-transcription data is sourced from the Mozilla Common Voice project. Time stamps are generated for every utterance on the word-level and used to extract individual spoken words and their corresponding transcriptions, thus forming a new corpus of single spoken words. The dataset's intended use is academic research and commercial applications in multilingual keyword spotting and spoken term search. #### [FLEURS](https://huggingface.co/datasets/google/fleurs) FLEURS (Few-shot Learning Evaluation of Universal Representations of Speech) is a dataset for evaluating speech recognition systems in 102 languages, including many that are classified as 'low-resource'. The data is derived from the [FLoRes-101](https://arxiv.org/abs/2106.03193) dataset, a machine translation corpus with 3001 sentence translations from English to 101 other languages. Native speakers are recorded narrating the sentence transcriptions in their native languages. The recorded audio data is paired with a label for the language in which it is spoken. 
The dataset can be used as an audio classification dataset for _language identification_: systems are trained to predict the language of each utterance in the corpus. ## Closing Remarks In this blog post, we explored the Hugging Face Hub and experienced the Dataset Preview, an effective means of listening to audio datasets before downloading them. We loaded an audio dataset with one line of Python code and performed a series of generic pre-processing steps to prepare it for a machine learning model. In total, this required just 13 lines of code, relying on simple Python functions to perform the necessary operations. We introduced streaming mode, a method for loading and preparing samples of audio data on the fly. We concluded by summarising the most popular speech recognition, speech translation and audio classification datasets on the Hub. Having read this blog, we hope you agree that 🤗 Datasets is the number one place for downloading and preparing audio datasets. 🤗 Datasets is made possible through the work of the community. If you would like to contribute a dataset, refer to the [Guide for Adding a New Dataset](https://huggingface.co/docs/datasets/share#share). *Thank you to the following individuals who help contribute to the blog post: Vaibhav Srivastav, Polina Kazakova, Patrick von Platen, Omar Sanseviero and Quentin Lhoest.*
huggingface/blog/blob/main/audio-datasets.md
# Types of Evaluations in 🤗 Evaluate

The goal of the 🤗 Evaluate library is to support different types of evaluation, depending on different goals, datasets and models.

Here are the types of evaluations that are currently supported, with a few examples for each:

## Metrics

A metric measures the performance of a model on a given dataset. This is often based on an existing ground truth (i.e. a set of references), but there are also *referenceless metrics* which allow evaluating generated text by leveraging a pretrained model such as [GPT-2](https://huggingface.co/gpt2).

Examples of metrics include:

- [Accuracy](https://huggingface.co/metrics/accuracy): the proportion of correct predictions among the total number of cases processed.
- [Exact Match](https://huggingface.co/metrics/exact_match): the rate at which the input predicted strings exactly match their references.
- [Mean Intersection over Union (IoU)](https://huggingface.co/metrics/mean_iou): the area of overlap between the predicted segmentation of an image and the ground truth divided by the area of union between the predicted segmentation and the ground truth.

Metrics are often used to track model performance on benchmark datasets, and to report progress on tasks such as [machine translation](https://huggingface.co/tasks/translation) and [image classification](https://huggingface.co/tasks/image-classification).

## Comparisons

Comparisons can be useful to compare the performance of two or more models on a single test dataset. For instance, the [McNemar Test](https://github.com/huggingface/evaluate/tree/main/comparisons/mcnemar) is a paired nonparametric statistical hypothesis test that takes the predictions of two models and compares them, aiming to measure whether the models' predictions diverge or not. The p value it outputs, which ranges from `0.0` to `1.0`, indicates the difference between the two models' predictions, with a lower p value indicating a more significant difference.

Comparisons have yet to be systematically used when comparing and reporting model performance; however, they are useful tools to go beyond simply comparing leaderboard scores and to get more information on the way model predictions differ.

## Measurements

In the 🤗 Evaluate library, measurements are tools for gaining more insights on datasets and model predictions.

For instance, in the case of datasets, it can be useful to calculate the [average word length](https://github.com/huggingface/evaluate/tree/main/measurements/word_length) of a dataset's entries, and how it is distributed -- this can help when choosing the maximum input length for a [Tokenizer](https://huggingface.co/docs/transformers/main_classes/tokenizer).

In the case of model predictions, it can help to calculate the average [perplexity](https://huggingface.co/metrics/perplexity) of model predictions using different models such as [GPT-2](https://huggingface.co/gpt2) and [BERT](https://huggingface.co/bert-base-uncased), which can indicate the quality of generated text when no reference is available.

All three types of evaluation supported by the 🤗 Evaluate library are meant to be mutually complementary, and help our community carry out more mindful and responsible evaluation.
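To make the three types concrete, here is a small usage sketch. It is not taken verbatim from the library documentation: `evaluate.load` and the accuracy metric work as shown, while the keyword arguments for the McNemar comparison and the word-length measurement are recalled from their module cards and may differ slightly.

```python
import evaluate

# Metric: accuracy of predictions against a set of references
accuracy = evaluate.load("accuracy")
print(accuracy.compute(references=[0, 1, 1, 0], predictions=[0, 1, 0, 0]))

# Comparison: McNemar test between the predictions of two models
mcnemar = evaluate.load("mcnemar", module_type="comparison")
print(mcnemar.compute(predictions1=[0, 1, 1], predictions2=[1, 1, 1], references=[0, 1, 0]))

# Measurement: average word length of a dataset's entries
word_length = evaluate.load("word_length", module_type="measurement")
print(word_length.compute(data=["hello world", "the quick brown fox"]))
```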
We will continue adding more types of metrics, measurements and comparisons in coming months, and are counting on community involvement (via [PRs](https://github.com/huggingface/evaluate/compare) and [issues](https://github.com/huggingface/evaluate/issues/new/choose)) to make the library as extensive and inclusive as possible!
huggingface/evaluate/blob/main/docs/source/types_of_evaluations.mdx
-- title: The Annotated Diffusion Model thumbnail: /blog/assets/78_annotated-diffusion/thumbnail.png authors: - user: nielsr - user: kashif --- # The Annotated Diffusion Model <script async defer src="https://unpkg.com/medium-zoom-element@0/dist/medium-zoom-element.min.js"></script> <a target="_blank" href="https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/annotated_diffusion.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> In this blog post, we'll take a deeper look into **Denoising Diffusion Probabilistic Models** (also known as DDPMs, diffusion models, score-based generative models or simply [autoencoders](https://benanne.github.io/2022/01/31/diffusion.html)) as researchers have been able to achieve remarkable results with them for (un)conditional image/audio/video generation. Popular examples (at the time of writing) include [GLIDE](https://arxiv.org/abs/2112.10741) and [DALL-E 2](https://openai.com/dall-e-2/) by OpenAI, [Latent Diffusion](https://github.com/CompVis/latent-diffusion) by the University of Heidelberg and [ImageGen](https://imagen.research.google/) by Google Brain. We'll go over the original DDPM paper by ([Ho et al., 2020](https://arxiv.org/abs/2006.11239)), implementing it step-by-step in PyTorch, based on Phil Wang's [implementation](https://github.com/lucidrains/denoising-diffusion-pytorch) - which itself is based on the [original TensorFlow implementation](https://github.com/hojonathanho/diffusion). Note that the idea of diffusion for generative modeling was actually already introduced in ([Sohl-Dickstein et al., 2015](https://arxiv.org/abs/1503.03585)). However, it took until ([Song et al., 2019](https://arxiv.org/abs/1907.05600)) (at Stanford University), and then ([Ho et al., 2020](https://arxiv.org/abs/2006.11239)) (at Google Brain) who independently improved the approach. Note that there are [several perspectives](https://twitter.com/sedielem/status/1530894256168222722?s=20&t=mfv4afx1GcNQU5fZklpACw) on diffusion models. Here, we employ the discrete-time (latent variable model) perspective, but be sure to check out the other perspectives as well. Alright, let's dive in! ```python from IPython.display import Image Image(filename='assets/78_annotated-diffusion/ddpm_paper.png') ``` <p align="center"> <img src="assets/78_annotated-diffusion/ddpm_paper.png" width="500" /> </p> We'll install and import the required libraries first (assuming you have [PyTorch](https://pytorch.org/) installed). ```python !pip install -q -U einops datasets matplotlib tqdm import math from inspect import isfunction from functools import partial %matplotlib inline import matplotlib.pyplot as plt from tqdm.auto import tqdm from einops import rearrange, reduce from einops.layers.torch import Rearrange import torch from torch import nn, einsum import torch.nn.functional as F ``` ## What is a diffusion model? A (denoising) diffusion model isn't that complex if you compare it to other generative models such as Normalizing Flows, GANs or VAEs: they all convert noise from some simple distribution to a data sample. This is also the case here where **a neural network learns to gradually denoise data** starting from pure noise. 
In a bit more detail for images, the set-up consists of 2 processes: * a fixed (or predefined) forward diffusion process \\(q\\) of our choosing, that gradually adds Gaussian noise to an image, until you end up with pure noise * a learned reverse denoising diffusion process \\(p_\theta\\), where a neural network is trained to gradually denoise an image starting from pure noise, until you end up with an actual image. <p align="center"> <img src="assets/78_annotated-diffusion/diffusion_figure.png" width="600" /> </p> Both the forward and reverse process indexed by \\(t\\) happen for some number of finite time steps \\(T\\) (the DDPM authors use \\(T=1000\\)). You start with \\(t=0\\) where you sample a real image \\(\mathbf{x}_0\\) from your data distribution (let's say an image of a cat from ImageNet), and the forward process samples some noise from a Gaussian distribution at each time step \\(t\\), which is added to the image of the previous time step. Given a sufficiently large \\(T\\) and a well behaved schedule for adding noise at each time step, you end up with what is called an [isotropic Gaussian distribution](https://math.stackexchange.com/questions/1991961/gaussian-distribution-is-isotropic) at \\(t=T\\) via a gradual process. ## In more mathematical form Let's write this down more formally, as ultimately we need a tractable loss function which our neural network needs to optimize. Let \\(q(\mathbf{x}_0)\\) be the real data distribution, say of "real images". We can sample from this distribution to get an image, \\(\mathbf{x}_0 \sim q(\mathbf{x}_0)\\). We define the forward diffusion process \\(q(\mathbf{x}_t | \mathbf{x}_{t-1})\\) which adds Gaussian noise at each time step \\(t\\), according to a known variance schedule \\(0 < \beta_1 < \beta_2 < ... < \beta_T < 1\\) as $$ q(\mathbf{x}_t | \mathbf{x}_{t-1}) = \mathcal{N}(\mathbf{x}_t; \sqrt{1 - \beta_t} \mathbf{x}_{t-1}, \beta_t \mathbf{I}). $$ Recall that a normal distribution (also called Gaussian distribution) is defined by 2 parameters: a mean \\(\mu\\) and a variance \\(\sigma^2 \geq 0\\). Basically, each new (slightly noisier) image at time step \\(t\\) is drawn from a **conditional Gaussian distribution** with \\(\mathbf{\mu}_t = \sqrt{1 - \beta_t} \mathbf{x}_{t-1}\\) and \\(\sigma^2_t = \beta_t\\), which we can do by sampling \\(\mathbf{\epsilon} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})\\) and then setting \\(\mathbf{x}_t = \sqrt{1 - \beta_t} \mathbf{x}_{t-1} + \sqrt{\beta_t} \mathbf{\epsilon}\\). Note that the \\(\beta_t\\) aren't constant at each time step \\(t\\) (hence the subscript) --- in fact one defines a so-called **"variance schedule"**, which can be linear, quadratic, cosine, etc. as we will see further (a bit like a learning rate schedule). So starting from \\(\mathbf{x}_0\\), we end up with \\(\mathbf{x}_1, ..., \mathbf{x}_t, ..., \mathbf{x}_T\\), where \\(\mathbf{x}_T\\) is pure Gaussian noise if we set the schedule appropriately. Now, if we knew the conditional distribution \\(p(\mathbf{x}_{t-1} | \mathbf{x}_t)\\), then we could run the process in reverse: by sampling some random Gaussian noise \\(\mathbf{x}_T\\), and then gradually "denoise" it so that we end up with a sample from the real distribution \\(\mathbf{x}_0\\). However, we don't know \\(p(\mathbf{x}_{t-1} | \mathbf{x}_t)\\). It's intractable since it requires knowing the distribution of all possible images in order to calculate this conditional probability. 
Hence, we're going to leverage a neural network to **approximate (learn) this conditional probability distribution**, let's call it \\(p_\theta (\mathbf{x}_{t-1} | \mathbf{x}_t)\\), with \\(\theta\\) being the parameters of the neural network, updated by gradient descent. Ok, so we need a neural network to represent a (conditional) probability distribution of the backward process. If we assume this reverse process is Gaussian as well, then recall that any Gaussian distribution is defined by 2 parameters: * a mean parametrized by \\(\mu_\theta\\); * a variance parametrized by \\(\Sigma_\theta\\); so we can parametrize the process as $$ p_\theta (\mathbf{x}_{t-1} | \mathbf{x}_t) = \mathcal{N}(\mathbf{x}_{t-1}; \mu_\theta(\mathbf{x}_{t},t), \Sigma_\theta (\mathbf{x}_{t},t))$$ where the mean and variance are also conditioned on the noise level \\(t\\). Hence, our neural network needs to learn/represent the mean and variance. However, the DDPM authors decided to **keep the variance fixed, and let the neural network only learn (represent) the mean \\(\mu_\theta\\) of this conditional probability distribution**. From the paper: > First, we set \\(\Sigma_\theta ( \mathbf{x}_t, t) = \sigma^2_t \mathbf{I}\\) to untrained time dependent constants. Experimentally, both \\(\sigma^2_t = \beta_t\\) and \\(\sigma^2_t = \tilde{\beta}_t\\) (see paper) had similar results. This was then later improved in the [Improved diffusion models](https://openreview.net/pdf?id=-NEXDKk8gZ) paper, where a neural network also learns the variance of this backwards process, besides the mean. So we continue, assuming that our neural network only needs to learn/represent the mean of this conditional probability distribution. ## Defining an objective function (by reparametrizing the mean) To derive an objective function to learn the mean of the backward process, the authors observe that the combination of \\(q\\) and \\(p_\theta\\) can be seen as a variational auto-encoder (VAE) [(Kingma et al., 2013)](https://arxiv.org/abs/1312.6114). Hence, the **variational lower bound** (also called ELBO) can be used to minimize the negative log-likelihood with respect to ground truth data sample \\(\mathbf{x}_0\\) (we refer to the VAE paper for details regarding ELBO). It turns out that the ELBO for this process is a sum of losses at each time step \\(t\\), \\(L = L_0 + L_1 + ... + L_T\\). By construction of the forward \\(q\\) process and backward process, each term (except for \\(L_0\\)) of the loss is actually the **KL divergence between 2 Gaussian distributions** which can be written explicitly as an L2-loss with respect to the means! A direct consequence of the constructed forward process \\(q\\), as shown by Sohl-Dickstein et al., is that we can sample \\(\mathbf{x}_t\\) at any arbitrary noise level conditioned on \\(\mathbf{x}_0\\) (since sums of Gaussians is also Gaussian). This is very convenient: we don't need to apply \\(q\\) repeatedly in order to sample \\(\mathbf{x}_t\\). We have that $$q(\mathbf{x}_t | \mathbf{x}_0) = \cal{N}(\mathbf{x}_t; \sqrt{\bar{\alpha}_t} \mathbf{x}_0, (1- \bar{\alpha}_t) \mathbf{I})$$ with \\(\alpha_t := 1 - \beta_t\\) and \\(\bar{\alpha}_t := \Pi_{s=1}^{t} \alpha_s\\). Let's refer to this equation as the "nice property". This means we can sample Gaussian noise and scale it appropriatly and add it to \\(\mathbf{x}_0\\) to get \\(\mathbf{x}_t\\) directly. Note that the \\(\bar{\alpha}_t\\) are functions of the known \\(\beta_t\\) variance schedule and thus are also known and can be precomputed. 
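To make the "nice property" concrete, here is a tiny numerical sketch (not part of the original post; the schedule values mirror the ones used later on) that precomputes \\(\bar{\alpha}_t\\) from a linear \\(\beta_t\\) schedule and samples \\(\mathbf{x}_t\\) in one shot from \\(\mathbf{x}_0\\):

```python
import torch

T = 300
betas = torch.linspace(0.0001, 0.02, T)             # linear variance schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)  # \bar{\alpha}_t, precomputed once

x_0 = torch.randn(3, 128, 128)                      # stand-in for a real image
t = 199                                             # an arbitrary noise level
noise = torch.randn_like(x_0)                       # \epsilon ~ N(0, I)

# sample x_t directly from x_0, without applying q step by step
x_t = alphas_cumprod[t].sqrt() * x_0 + (1 - alphas_cumprod[t]).sqrt() * noise
```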
This then allows us, during training, to **optimize random terms of the loss function \\(L\\)** (or in other words, to randomly sample \\(t\\) during training and optimize \\(L_t\\)).

Another beauty of this property, as shown in Ho et al., is that one can (after some math, for which we refer the reader to [this excellent blog post](https://lilianweng.github.io/posts/2021-07-11-diffusion-models/)) instead **reparametrize the mean to make the neural network learn (predict) the added noise (via a network \\(\mathbf{\epsilon}_\theta(\mathbf{x}_t, t)\\)) for noise level \\(t\\)** in the KL terms which constitute the losses. This means that our neural network becomes a noise predictor, rather than a (direct) mean predictor. The mean can be computed as follows:

$$ \mathbf{\mu}_\theta(\mathbf{x}_t, t) = \frac{1}{\sqrt{\alpha_t}} \left(  \mathbf{x}_t - \frac{\beta_t}{\sqrt{1- \bar{\alpha}_t}} \mathbf{\epsilon}_\theta(\mathbf{x}_t, t) \right)$$

The final objective function \\(L_t\\) then looks as follows (for a random time step \\(t\\) given \\(\mathbf{\epsilon} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})\\) ):

$$ \| \mathbf{\epsilon} - \mathbf{\epsilon}_\theta(\mathbf{x}_t, t) \|^2 = \| \mathbf{\epsilon} - \mathbf{\epsilon}_\theta( \sqrt{\bar{\alpha}_t} \mathbf{x}_0 + \sqrt{(1- \bar{\alpha}_t)  } \mathbf{\epsilon}, t) \|^2.$$

Here, \\(\mathbf{x}_0\\) is the initial (real, uncorrupted) image, and we see the direct noise level \\(t\\) sample given by the fixed forward process. \\(\mathbf{\epsilon}\\) is the pure noise sampled at time step \\(t\\), and \\(\mathbf{\epsilon}_\theta (\mathbf{x}_t, t)\\) is our neural network. The neural network is optimized using a simple mean squared error (MSE) between the true and the predicted Gaussian noise.

The training algorithm now looks as follows:

<p align="center">
    <img src="assets/78_annotated-diffusion/training.png" width="400" />
</p>

In other words:
* we take a random sample \\(\mathbf{x}_0\\) from the real unknown and possibly complex data distribution \\(q(\mathbf{x}_0)\\)
* we sample a noise level \\(t\\) uniformly between \\(1\\) and \\(T\\) (i.e., a random time step)
* we sample some noise from a Gaussian distribution and corrupt the input by this noise at level \\(t\\) (using the nice property defined above)
* the neural network is trained to predict this noise based on the corrupted image \\(\mathbf{x}_t\\) (i.e. noise applied on \\(\mathbf{x}_0\\) based on known schedule \\(\beta_t\\))

In reality, all of this is done on batches of data, as one uses stochastic gradient descent to optimize neural networks.

## The neural network

The neural network needs to take in a noised image at a particular time step and return the predicted noise. Note that the predicted noise is a tensor that has the same size/resolution as the input image. So technically, the network takes in and outputs tensors of the same shape. What type of neural network can we use for this?

What is typically used here is very similar to that of an [Autoencoder](https://en.wikipedia.org/wiki/Autoencoder), which you may remember from typical "intro to deep learning" tutorials. Autoencoders have a so-called "bottleneck" layer in between the encoder and decoder. The encoder first encodes an image into a smaller hidden representation called the "bottleneck", and the decoder then decodes that hidden representation back into an actual image. This forces the network to only keep the most important information in the bottleneck layer.
In terms of architecture, the DDPM authors went for a **U-Net**, introduced by ([Ronneberger et al., 2015](https://arxiv.org/abs/1505.04597)) (which, at the time, achieved state-of-the-art results for medical image segmentation). This network, like any autoencoder, consists of a bottleneck in the middle that makes sure the network learns only the most important information. Importantly, it introduced residual connections between the encoder and decoder, greatly improving gradient flow (inspired by ResNet in [He et al., 2015](https://arxiv.org/abs/1512.03385)). <p align="center"> <img src="assets/78_annotated-diffusion/unet_architecture.jpg" width="400" /> </p> As can be seen, a U-Net model first downsamples the input (i.e. makes the input smaller in terms of spatial resolution), after which upsampling is performed. Below, we implement this network, step-by-step. ### Network helpers First, we define some helper functions and classes which will be used when implementing the neural network. Importantly, we define a `Residual` module, which simply adds the input to the output of a particular function (in other words, adds a residual connection to a particular function). We also define aliases for the up- and downsampling operations. ```python def exists(x): return x is not None def default(val, d): if exists(val): return val return d() if isfunction(d) else d def num_to_groups(num, divisor): groups = num // divisor remainder = num % divisor arr = [divisor] * groups if remainder > 0: arr.append(remainder) return arr class Residual(nn.Module): def __init__(self, fn): super().__init__() self.fn = fn def forward(self, x, *args, **kwargs): return self.fn(x, *args, **kwargs) + x def Upsample(dim, dim_out=None): return nn.Sequential( nn.Upsample(scale_factor=2, mode="nearest"), nn.Conv2d(dim, default(dim_out, dim), 3, padding=1), ) def Downsample(dim, dim_out=None): # No More Strided Convolutions or Pooling return nn.Sequential( Rearrange("b c (h p1) (w p2) -> b (c p1 p2) h w", p1=2, p2=2), nn.Conv2d(dim * 4, default(dim_out, dim), 1), ) ``` ### Position embeddings As the parameters of the neural network are shared across time (noise level), the authors employ sinusoidal position embeddings to encode \\(t\\), inspired by the Transformer ([Vaswani et al., 2017](https://arxiv.org/abs/1706.03762)). This makes the neural network "know" at which particular time step (noise level) it is operating, for every image in a batch. The `SinusoidalPositionEmbeddings` module takes a tensor of shape `(batch_size, 1)` as input (i.e. the noise levels of several noisy images in a batch), and turns this into a tensor of shape `(batch_size, dim)`, with `dim` being the dimensionality of the position embeddings. This is then added to each residual block, as we will see further. ```python class SinusoidalPositionEmbeddings(nn.Module): def __init__(self, dim): super().__init__() self.dim = dim def forward(self, time): device = time.device half_dim = self.dim // 2 embeddings = math.log(10000) / (half_dim - 1) embeddings = torch.exp(torch.arange(half_dim, device=device) * -embeddings) embeddings = time[:, None] * embeddings[None, :] embeddings = torch.cat((embeddings.sin(), embeddings.cos()), dim=-1) return embeddings ``` ### ResNet block Next, we define the core building block of the U-Net model. 
The DDPM authors employed a Wide ResNet block ([Zagoruyko et al., 2016](https://arxiv.org/abs/1605.07146)), but Phil Wang has replaced the standard convolutional layer by a "weight standardized" version, which works better in combination with group normalization (see ([Kolesnikov et al., 2019](https://arxiv.org/abs/1912.11370)) for details). ```python class WeightStandardizedConv2d(nn.Conv2d): """ https://arxiv.org/abs/1903.10520 weight standardization purportedly works synergistically with group normalization """ def forward(self, x): eps = 1e-5 if x.dtype == torch.float32 else 1e-3 weight = self.weight mean = reduce(weight, "o ... -> o 1 1 1", "mean") var = reduce(weight, "o ... -> o 1 1 1", partial(torch.var, unbiased=False)) normalized_weight = (weight - mean) / (var + eps).rsqrt() return F.conv2d( x, normalized_weight, self.bias, self.stride, self.padding, self.dilation, self.groups, ) class Block(nn.Module): def __init__(self, dim, dim_out, groups=8): super().__init__() self.proj = WeightStandardizedConv2d(dim, dim_out, 3, padding=1) self.norm = nn.GroupNorm(groups, dim_out) self.act = nn.SiLU() def forward(self, x, scale_shift=None): x = self.proj(x) x = self.norm(x) if exists(scale_shift): scale, shift = scale_shift x = x * (scale + 1) + shift x = self.act(x) return x class ResnetBlock(nn.Module): """https://arxiv.org/abs/1512.03385""" def __init__(self, dim, dim_out, *, time_emb_dim=None, groups=8): super().__init__() self.mlp = ( nn.Sequential(nn.SiLU(), nn.Linear(time_emb_dim, dim_out * 2)) if exists(time_emb_dim) else None ) self.block1 = Block(dim, dim_out, groups=groups) self.block2 = Block(dim_out, dim_out, groups=groups) self.res_conv = nn.Conv2d(dim, dim_out, 1) if dim != dim_out else nn.Identity() def forward(self, x, time_emb=None): scale_shift = None if exists(self.mlp) and exists(time_emb): time_emb = self.mlp(time_emb) time_emb = rearrange(time_emb, "b c -> b c 1 1") scale_shift = time_emb.chunk(2, dim=1) h = self.block1(x, scale_shift=scale_shift) h = self.block2(h) return h + self.res_conv(x) ``` ### Attention module Next, we define the attention module, which the DDPM authors added in between the convolutional blocks. Attention is the building block of the famous Transformer architecture ([Vaswani et al., 2017](https://arxiv.org/abs/1706.03762)), which has shown great success in various domains of AI, from NLP and vision to [protein folding](https://www.deepmind.com/blog/alphafold-a-solution-to-a-50-year-old-grand-challenge-in-biology). Phil Wang employs 2 variants of attention: one is regular multi-head self-attention (as used in the Transformer), the other one is a [linear attention variant](https://github.com/lucidrains/linear-attention-transformer) ([Shen et al., 2018](https://arxiv.org/abs/1812.01243)), whose time- and memory requirements scale linear in the sequence length, as opposed to quadratic for regular attention. For an extensive explanation of the attention mechanism, we refer the reader to Jay Allamar's [wonderful blog post](https://jalammar.github.io/illustrated-transformer/). 
```python class Attention(nn.Module): def __init__(self, dim, heads=4, dim_head=32): super().__init__() self.scale = dim_head**-0.5 self.heads = heads hidden_dim = dim_head * heads self.to_qkv = nn.Conv2d(dim, hidden_dim * 3, 1, bias=False) self.to_out = nn.Conv2d(hidden_dim, dim, 1) def forward(self, x): b, c, h, w = x.shape qkv = self.to_qkv(x).chunk(3, dim=1) q, k, v = map( lambda t: rearrange(t, "b (h c) x y -> b h c (x y)", h=self.heads), qkv ) q = q * self.scale sim = einsum("b h d i, b h d j -> b h i j", q, k) sim = sim - sim.amax(dim=-1, keepdim=True).detach() attn = sim.softmax(dim=-1) out = einsum("b h i j, b h d j -> b h i d", attn, v) out = rearrange(out, "b h (x y) d -> b (h d) x y", x=h, y=w) return self.to_out(out) class LinearAttention(nn.Module): def __init__(self, dim, heads=4, dim_head=32): super().__init__() self.scale = dim_head**-0.5 self.heads = heads hidden_dim = dim_head * heads self.to_qkv = nn.Conv2d(dim, hidden_dim * 3, 1, bias=False) self.to_out = nn.Sequential(nn.Conv2d(hidden_dim, dim, 1), nn.GroupNorm(1, dim)) def forward(self, x): b, c, h, w = x.shape qkv = self.to_qkv(x).chunk(3, dim=1) q, k, v = map( lambda t: rearrange(t, "b (h c) x y -> b h c (x y)", h=self.heads), qkv ) q = q.softmax(dim=-2) k = k.softmax(dim=-1) q = q * self.scale context = torch.einsum("b h d n, b h e n -> b h d e", k, v) out = torch.einsum("b h d e, b h d n -> b h e n", context, q) out = rearrange(out, "b h c (x y) -> b (h c) x y", h=self.heads, x=h, y=w) return self.to_out(out) ``` ### Group normalization The DDPM authors interleave the convolutional/attention layers of the U-Net with group normalization ([Wu et al., 2018](https://arxiv.org/abs/1803.08494)). Below, we define a `PreNorm` class, which will be used to apply groupnorm before the attention layer, as we'll see further. Note that there's been a [debate](https://tnq177.github.io/data/transformers_without_tears.pdf) about whether to apply normalization before or after attention in Transformers. ```python class PreNorm(nn.Module): def __init__(self, dim, fn): super().__init__() self.fn = fn self.norm = nn.GroupNorm(1, dim) def forward(self, x): x = self.norm(x) return self.fn(x) ``` ### Conditional U-Net Now that we've defined all building blocks (position embeddings, ResNet blocks, attention and group normalization), it's time to define the entire neural network. Recall that the job of the network \\(\mathbf{\epsilon}_\theta(\mathbf{x}_t, t)\\) is to take in a batch of noisy images and their respective noise levels, and output the noise added to the input. More formally: - the network takes a batch of noisy images of shape `(batch_size, num_channels, height, width)` and a batch of noise levels of shape `(batch_size, 1)` as input, and returns a tensor of shape `(batch_size, num_channels, height, width)` The network is built up as follows: * first, a convolutional layer is applied on the batch of noisy images, and position embeddings are computed for the noise levels * next, a sequence of downsampling stages are applied. Each downsampling stage consists of 2 ResNet blocks + groupnorm + attention + residual connection + a downsample operation * at the middle of the network, again ResNet blocks are applied, interleaved with attention * next, a sequence of upsampling stages are applied. Each upsampling stage consists of 2 ResNet blocks + groupnorm + attention + residual connection + an upsample operation * finally, a ResNet block followed by a convolutional layer is applied. 
Ultimately, neural networks stack up layers as if they were lego blocks (but it's important to [understand how they work](http://karpathy.github.io/2019/04/25/recipe/)). ```python class Unet(nn.Module): def __init__( self, dim, init_dim=None, out_dim=None, dim_mults=(1, 2, 4, 8), channels=3, self_condition=False, resnet_block_groups=4, ): super().__init__() # determine dimensions self.channels = channels self.self_condition = self_condition input_channels = channels * (2 if self_condition else 1) init_dim = default(init_dim, dim) self.init_conv = nn.Conv2d(input_channels, init_dim, 1, padding=0) # changed to 1 and 0 from 7,3 dims = [init_dim, *map(lambda m: dim * m, dim_mults)] in_out = list(zip(dims[:-1], dims[1:])) block_klass = partial(ResnetBlock, groups=resnet_block_groups) # time embeddings time_dim = dim * 4 self.time_mlp = nn.Sequential( SinusoidalPositionEmbeddings(dim), nn.Linear(dim, time_dim), nn.GELU(), nn.Linear(time_dim, time_dim), ) # layers self.downs = nn.ModuleList([]) self.ups = nn.ModuleList([]) num_resolutions = len(in_out) for ind, (dim_in, dim_out) in enumerate(in_out): is_last = ind >= (num_resolutions - 1) self.downs.append( nn.ModuleList( [ block_klass(dim_in, dim_in, time_emb_dim=time_dim), block_klass(dim_in, dim_in, time_emb_dim=time_dim), Residual(PreNorm(dim_in, LinearAttention(dim_in))), Downsample(dim_in, dim_out) if not is_last else nn.Conv2d(dim_in, dim_out, 3, padding=1), ] ) ) mid_dim = dims[-1] self.mid_block1 = block_klass(mid_dim, mid_dim, time_emb_dim=time_dim) self.mid_attn = Residual(PreNorm(mid_dim, Attention(mid_dim))) self.mid_block2 = block_klass(mid_dim, mid_dim, time_emb_dim=time_dim) for ind, (dim_in, dim_out) in enumerate(reversed(in_out)): is_last = ind == (len(in_out) - 1) self.ups.append( nn.ModuleList( [ block_klass(dim_out + dim_in, dim_out, time_emb_dim=time_dim), block_klass(dim_out + dim_in, dim_out, time_emb_dim=time_dim), Residual(PreNorm(dim_out, LinearAttention(dim_out))), Upsample(dim_out, dim_in) if not is_last else nn.Conv2d(dim_out, dim_in, 3, padding=1), ] ) ) self.out_dim = default(out_dim, channels) self.final_res_block = block_klass(dim * 2, dim, time_emb_dim=time_dim) self.final_conv = nn.Conv2d(dim, self.out_dim, 1) def forward(self, x, time, x_self_cond=None): if self.self_condition: x_self_cond = default(x_self_cond, lambda: torch.zeros_like(x)) x = torch.cat((x_self_cond, x), dim=1) x = self.init_conv(x) r = x.clone() t = self.time_mlp(time) h = [] for block1, block2, attn, downsample in self.downs: x = block1(x, t) h.append(x) x = block2(x, t) x = attn(x) h.append(x) x = downsample(x) x = self.mid_block1(x, t) x = self.mid_attn(x) x = self.mid_block2(x, t) for block1, block2, attn, upsample in self.ups: x = torch.cat((x, h.pop()), dim=1) x = block1(x, t) x = torch.cat((x, h.pop()), dim=1) x = block2(x, t) x = attn(x) x = upsample(x) x = torch.cat((x, r), dim=1) x = self.final_res_block(x, t) return self.final_conv(x) ``` ## Defining the forward diffusion process The forward diffusion process gradually adds noise to an image from the real distribution, in a number of time steps \\(T\\). This happens according to a **variance schedule**. The original DDPM authors employed a linear schedule: > We set the forward process variances to constants increasing linearly from \\(\beta_1 = 10^{−4}\\) to \\(\beta_T = 0.02\\). However, it was shown in ([Nichol et al., 2021](https://arxiv.org/abs/2102.09672)) that better results can be achieved when employing a cosine schedule. 
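For reference, the cosine schedule proposed in that paper defines the cumulative products \\(\bar{\alpha}_t\\) directly as

$$\bar{\alpha}_t = \frac{f(t)}{f(0)}, \qquad f(t) = \cos \left( \frac{t/T + s}{1 + s} \cdot \frac{\pi}{2} \right)^2,$$

with a small offset \\(s\\), and then recovers the individual variances as \\(\beta_t = 1 - \frac{\bar{\alpha}_t}{\bar{\alpha}_{t-1}}\\), clipped to avoid values too close to \\(0\\) or \\(1\\). This is exactly what the `cosine_beta_schedule` function below implements.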
Below, we define various schedules for the \\(T\\) timesteps (we'll choose one later on). ```python def cosine_beta_schedule(timesteps, s=0.008): """ cosine schedule as proposed in https://arxiv.org/abs/2102.09672 """ steps = timesteps + 1 x = torch.linspace(0, timesteps, steps) alphas_cumprod = torch.cos(((x / timesteps) + s) / (1 + s) * torch.pi * 0.5) ** 2 alphas_cumprod = alphas_cumprod / alphas_cumprod[0] betas = 1 - (alphas_cumprod[1:] / alphas_cumprod[:-1]) return torch.clip(betas, 0.0001, 0.9999) def linear_beta_schedule(timesteps): beta_start = 0.0001 beta_end = 0.02 return torch.linspace(beta_start, beta_end, timesteps) def quadratic_beta_schedule(timesteps): beta_start = 0.0001 beta_end = 0.02 return torch.linspace(beta_start**0.5, beta_end**0.5, timesteps) ** 2 def sigmoid_beta_schedule(timesteps): beta_start = 0.0001 beta_end = 0.02 betas = torch.linspace(-6, 6, timesteps) return torch.sigmoid(betas) * (beta_end - beta_start) + beta_start ``` To start with, let's use the linear schedule for \\(T=300\\) time steps and define the various variables from the \\(\beta_t\\) which we will need, such as the cumulative product of the variances \\(\bar{\alpha}_t\\). Each of the variables below are just 1-dimensional tensors, storing values from \\(t\\) to \\(T\\). Importantly, we also define an `extract` function, which will allow us to extract the appropriate \\(t\\) index for a batch of indices. ```python timesteps = 300 # define beta schedule betas = linear_beta_schedule(timesteps=timesteps) # define alphas alphas = 1. - betas alphas_cumprod = torch.cumprod(alphas, axis=0) alphas_cumprod_prev = F.pad(alphas_cumprod[:-1], (1, 0), value=1.0) sqrt_recip_alphas = torch.sqrt(1.0 / alphas) # calculations for diffusion q(x_t | x_{t-1}) and others sqrt_alphas_cumprod = torch.sqrt(alphas_cumprod) sqrt_one_minus_alphas_cumprod = torch.sqrt(1. - alphas_cumprod) # calculations for posterior q(x_{t-1} | x_t, x_0) posterior_variance = betas * (1. - alphas_cumprod_prev) / (1. - alphas_cumprod) def extract(a, t, x_shape): batch_size = t.shape[0] out = a.gather(-1, t.cpu()) return out.reshape(batch_size, *((1,) * (len(x_shape) - 1))).to(t.device) ``` We'll illustrate with a cats image how noise is added at each time step of the diffusion process. ```python from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) # PIL image of shape HWC image ``` <img src="assets/78_annotated-diffusion/output_cats.jpeg" width="400" /> Noise is added to PyTorch tensors, rather than Pillow Images. We'll first define image transformations that allow us to go from a PIL image to a PyTorch tensor (on which we can add the noise), and vice versa. These transformations are fairly simple: we first normalize images by dividing by \\(255\\) (such that they are in the \\([0,1]\\) range), and then make sure they are in the \\([-1, 1]\\) range. From the DPPM paper: > We assume that image data consists of integers in \\(\{0, 1, ... , 255\}\\) scaled linearly to \\([−1, 1]\\). This ensures that the neural network reverse process operates on consistently scaled inputs starting from the standard normal prior \\(p(\mathbf{x}_T )\\). 
```python from torchvision.transforms import Compose, ToTensor, Lambda, ToPILImage, CenterCrop, Resize image_size = 128 transform = Compose([ Resize(image_size), CenterCrop(image_size), ToTensor(), # turn into torch Tensor of shape CHW, divide by 255 Lambda(lambda t: (t * 2) - 1), ]) x_start = transform(image).unsqueeze(0) x_start.shape ``` <div class="output stream stdout"> Output: ---------------------------------------------------------------------------------------------------- torch.Size([1, 3, 128, 128]) </div> We also define the reverse transform, which takes in a PyTorch tensor containing values in \\([-1, 1]\\) and turn them back into a PIL image: ```python import numpy as np reverse_transform = Compose([ Lambda(lambda t: (t + 1) / 2), Lambda(lambda t: t.permute(1, 2, 0)), # CHW to HWC Lambda(lambda t: t * 255.), Lambda(lambda t: t.numpy().astype(np.uint8)), ToPILImage(), ]) ``` Let's verify this: ```python reverse_transform(x_start.squeeze()) ``` <img src="assets/78_annotated-diffusion/output_cats_verify.png" width="100" /> We can now define the forward diffusion process as in the paper: ```python # forward diffusion (using the nice property) def q_sample(x_start, t, noise=None): if noise is None: noise = torch.randn_like(x_start) sqrt_alphas_cumprod_t = extract(sqrt_alphas_cumprod, t, x_start.shape) sqrt_one_minus_alphas_cumprod_t = extract( sqrt_one_minus_alphas_cumprod, t, x_start.shape ) return sqrt_alphas_cumprod_t * x_start + sqrt_one_minus_alphas_cumprod_t * noise ``` Let's test it on a particular time step: ```python def get_noisy_image(x_start, t): # add noise x_noisy = q_sample(x_start, t=t) # turn back into PIL image noisy_image = reverse_transform(x_noisy.squeeze()) return noisy_image ``` ```python # take time step t = torch.tensor([40]) get_noisy_image(x_start, t) ``` <img src="assets/78_annotated-diffusion/output_cats_noisy.png" width="100" /> Let's visualize this for various time steps: ```python import matplotlib.pyplot as plt # use seed for reproducability torch.manual_seed(0) # source: https://pytorch.org/vision/stable/auto_examples/plot_transforms.html#sphx-glr-auto-examples-plot-transforms-py def plot(imgs, with_orig=False, row_title=None, **imshow_kwargs): if not isinstance(imgs[0], list): # Make a 2d grid even if there's just 1 row imgs = [imgs] num_rows = len(imgs) num_cols = len(imgs[0]) + with_orig fig, axs = plt.subplots(figsize=(200,200), nrows=num_rows, ncols=num_cols, squeeze=False) for row_idx, row in enumerate(imgs): row = [image] + row if with_orig else row for col_idx, img in enumerate(row): ax = axs[row_idx, col_idx] ax.imshow(np.asarray(img), **imshow_kwargs) ax.set(xticklabels=[], yticklabels=[], xticks=[], yticks=[]) if with_orig: axs[0, 0].set(title='Original image') axs[0, 0].title.set_size(8) if row_title is not None: for row_idx in range(num_rows): axs[row_idx, 0].set(ylabel=row_title[row_idx]) plt.tight_layout() ``` ```python plot([get_noisy_image(x_start, torch.tensor([t])) for t in [0, 50, 100, 150, 199]]) ``` <img src="assets/78_annotated-diffusion/output_cats_noisy_multiple.png" width="800" /> This means that we can now define the loss function given the model as follows: ```python def p_losses(denoise_model, x_start, t, noise=None, loss_type="l1"): if noise is None: noise = torch.randn_like(x_start) x_noisy = q_sample(x_start=x_start, t=t, noise=noise) predicted_noise = denoise_model(x_noisy, t) if loss_type == 'l1': loss = F.l1_loss(noise, predicted_noise) elif loss_type == 'l2': loss = F.mse_loss(noise, predicted_noise) elif 
loss_type == "huber": loss = F.smooth_l1_loss(noise, predicted_noise) else: raise NotImplementedError() return loss ``` The `denoise_model` will be our U-Net defined above. We'll employ the Huber loss between the true and the predicted noise. ## Define a PyTorch Dataset + DataLoader Here we define a regular [PyTorch Dataset](https://pytorch.org/tutorials/beginner/basics/data_tutorial.html). The dataset simply consists of images from a real dataset, like Fashion-MNIST, CIFAR-10 or ImageNet, scaled linearly to \\([−1, 1]\\). Each image is resized to the same size. Interesting to note is that images are also randomly horizontally flipped. From the paper: > We used random horizontal flips during training for CIFAR10; we tried training both with and without flips, and found flips to improve sample quality slightly. Here we use the 🤗 [Datasets library](https://huggingface.co/docs/datasets/index) to easily load the Fashion MNIST dataset from the [hub](https://huggingface.co/datasets/fashion_mnist). This dataset consists of images which already have the same resolution, namely 28x28. ```python from datasets import load_dataset # load dataset from the hub dataset = load_dataset("fashion_mnist") image_size = 28 channels = 1 batch_size = 128 ``` Next, we define a function which we'll apply on-the-fly on the entire dataset. We use the `with_transform` [functionality](https://huggingface.co/docs/datasets/v2.2.1/en/package_reference/main_classes#datasets.Dataset.with_transform) for that. The function just applies some basic image preprocessing: random horizontal flips, rescaling and finally make them have values in the \\([-1,1]\\) range. ```python from torchvision import transforms from torch.utils.data import DataLoader # define image transformations (e.g. using torchvision) transform = Compose([ transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Lambda(lambda t: (t * 2) - 1) ]) # define function def transforms(examples): examples["pixel_values"] = [transform(image.convert("L")) for image in examples["image"]] del examples["image"] return examples transformed_dataset = dataset.with_transform(transforms).remove_columns("label") # create dataloader dataloader = DataLoader(transformed_dataset["train"], batch_size=batch_size, shuffle=True) ``` ```python batch = next(iter(dataloader)) print(batch.keys()) ``` <div class="output stream stdout"> Output: ---------------------------------------------------------------------------------------------------- dict_keys(['pixel_values']) </div> ## Sampling As we'll sample from the model during training (in order to track progress), we define the code for that below. Sampling is summarized in the paper as Algorithm 2: <img src="assets/78_annotated-diffusion/sampling.png" width="500" /> Generating new images from a diffusion model happens by reversing the diffusion process: we start from \\(T\\), where we sample pure noise from a Gaussian distribution, and then use our neural network to gradually denoise it (using the conditional probability it has learned), until we end up at time step \\(t = 0\\). As shown above, we can derive a slighly less denoised image \\(\mathbf{x}_{t-1 }\\) by plugging in the reparametrization of the mean, using our noise predictor. Remember that the variance is known ahead of time. Ideally, we end up with an image that looks like it came from the real data distribution. The code below implements this. 
```python @torch.no_grad() def p_sample(model, x, t, t_index): betas_t = extract(betas, t, x.shape) sqrt_one_minus_alphas_cumprod_t = extract( sqrt_one_minus_alphas_cumprod, t, x.shape ) sqrt_recip_alphas_t = extract(sqrt_recip_alphas, t, x.shape) # Equation 11 in the paper # Use our model (noise predictor) to predict the mean model_mean = sqrt_recip_alphas_t * ( x - betas_t * model(x, t) / sqrt_one_minus_alphas_cumprod_t ) if t_index == 0: return model_mean else: posterior_variance_t = extract(posterior_variance, t, x.shape) noise = torch.randn_like(x) # Algorithm 2 line 4: return model_mean + torch.sqrt(posterior_variance_t) * noise # Algorithm 2 (including returning all images) @torch.no_grad() def p_sample_loop(model, shape): device = next(model.parameters()).device b = shape[0] # start from pure noise (for each example in the batch) img = torch.randn(shape, device=device) imgs = [] for i in tqdm(reversed(range(0, timesteps)), desc='sampling loop time step', total=timesteps): img = p_sample(model, img, torch.full((b,), i, device=device, dtype=torch.long), i) imgs.append(img.cpu().numpy()) return imgs @torch.no_grad() def sample(model, image_size, batch_size=16, channels=3): return p_sample_loop(model, shape=(batch_size, channels, image_size, image_size)) ``` Note that the code above is a simplified version of the original implementation. We found our simplification (which is in line with Algorithm 2 in the paper) to work just as well as the [original, more complex implementation](https://github.com/hojonathanho/diffusion/blob/master/diffusion_tf/diffusion_utils.py), which employs [clipping](https://github.com/hojonathanho/diffusion/issues/5). ## Train the model Next, we train the model in regular PyTorch fashion. We also define some logic to periodically save generated images, using the `sample` method defined above. ```python from pathlib import Path def num_to_groups(num, divisor): groups = num // divisor remainder = num % divisor arr = [divisor] * groups if remainder > 0: arr.append(remainder) return arr results_folder = Path("./results") results_folder.mkdir(exist_ok = True) save_and_sample_every = 1000 ``` Below, we define the model, and move it to the GPU. We also define a standard optimizer (Adam). ```python from torch.optim import Adam device = "cuda" if torch.cuda.is_available() else "cpu" model = Unet( dim=image_size, channels=channels, dim_mults=(1, 2, 4,) ) model.to(device) optimizer = Adam(model.parameters(), lr=1e-3) ``` Let's start training! 
```python from torchvision.utils import save_image epochs = 6 for epoch in range(epochs): for step, batch in enumerate(dataloader): optimizer.zero_grad() batch_size = batch["pixel_values"].shape[0] batch = batch["pixel_values"].to(device) # Algorithm 1 line 3: sample t uniformally for every example in the batch t = torch.randint(0, timesteps, (batch_size,), device=device).long() loss = p_losses(model, batch, t, loss_type="huber") if step % 100 == 0: print("Loss:", loss.item()) loss.backward() optimizer.step() # save generated images if step != 0 and step % save_and_sample_every == 0: milestone = step // save_and_sample_every batches = num_to_groups(4, batch_size) all_images_list = list(map(lambda n: sample(model, batch_size=n, channels=channels), batches)) all_images = torch.cat(all_images_list, dim=0) all_images = (all_images + 1) * 0.5 save_image(all_images, str(results_folder / f'sample-{milestone}.png'), nrow = 6) ``` <div class="output stream stdout"> Output: ---------------------------------------------------------------------------------------------------- Loss: 0.46477368474006653 Loss: 0.12143351882696152 Loss: 0.08106148988008499 Loss: 0.0801810547709465 Loss: 0.06122320517897606 Loss: 0.06310459971427917 Loss: 0.05681884288787842 Loss: 0.05729678273200989 Loss: 0.05497899278998375 Loss: 0.04439849033951759 Loss: 0.05415581166744232 Loss: 0.06020551547408104 Loss: 0.046830907464027405 Loss: 0.051029372960329056 Loss: 0.0478244312107563 Loss: 0.046767622232437134 Loss: 0.04305662214756012 Loss: 0.05216279625892639 Loss: 0.04748568311333656 Loss: 0.05107741802930832 Loss: 0.04588869959115982 Loss: 0.043014321476221085 Loss: 0.046371955424547195 Loss: 0.04952816292643547 Loss: 0.04472338408231735 </div> ## Sampling (inference) To sample from the model, we can just use our sample function defined above: ```python # sample 64 images samples = sample(model, image_size=image_size, batch_size=64, channels=channels) # show a random one random_index = 5 plt.imshow(samples[-1][random_index].reshape(image_size, image_size, channels), cmap="gray") ``` <img src="assets/78_annotated-diffusion/output.png" width="300" /> Seems like the model is capable of generating a nice T-shirt! Keep in mind that the dataset we trained on is pretty low-resolution (28x28). We can also create a gif of the denoising process: ```python import matplotlib.animation as animation random_index = 53 fig = plt.figure() ims = [] for i in range(timesteps): im = plt.imshow(samples[i][random_index].reshape(image_size, image_size, channels), cmap="gray", animated=True) ims.append([im]) animate = animation.ArtistAnimation(fig, ims, interval=50, blit=True, repeat_delay=1000) animate.save('diffusion.gif') plt.show() ``` <img src=" assets/78_annotated-diffusion/diffusion-sweater.gif" width="300" /> # Follow-up reads Note that the DDPM paper showed that diffusion models are a promising direction for (un)conditional image generation. This has since then (immensely) been improved, most notably for text-conditional image generation. 
Below, we list some important (but far from exhaustive) follow-up works:

- Improved Denoising Diffusion Probabilistic Models ([Nichol et al., 2021](https://arxiv.org/abs/2102.09672)): finds that learning the variance of the conditional distribution (besides the mean) helps in improving performance
- Cascaded Diffusion Models for High Fidelity Image Generation ([Ho et al., 2021](https://arxiv.org/abs/2106.15282)): introduces cascaded diffusion, which comprises a pipeline of multiple diffusion models that generate images of increasing resolution for high-fidelity image synthesis
- Diffusion Models Beat GANs on Image Synthesis ([Dhariwal et al., 2021](https://arxiv.org/abs/2105.05233)): shows that diffusion models can achieve image sample quality superior to the current state-of-the-art generative models by improving the U-Net architecture, as well as introducing classifier guidance
- Classifier-Free Diffusion Guidance ([Ho et al., 2021](https://openreview.net/pdf?id=qw8AKxfYbI)): shows that you don't need a classifier for guiding a diffusion model by jointly training a conditional and an unconditional diffusion model with a single neural network
- Hierarchical Text-Conditional Image Generation with CLIP Latents (DALL-E 2) ([Ramesh et al., 2022](https://cdn.openai.com/papers/dall-e-2.pdf)): uses a prior to turn a text caption into a CLIP image embedding, after which a diffusion model decodes it into an image
- Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding (Imagen) ([Saharia et al., 2022](https://arxiv.org/abs/2205.11487)): shows that combining a large pre-trained language model (e.g. T5) with cascaded diffusion works well for text-to-image synthesis

Note that this list only includes important works up to the time of writing, June 7th, 2022.

For now, it seems that the main (perhaps only) disadvantage of diffusion models is that they require multiple forward passes to generate an image (which is not the case for generative models like GANs). However, there's [research going on](https://arxiv.org/abs/2204.13902) that enables high-fidelity generation in as few as 10 denoising steps.
huggingface/blog/blob/main/annotated-diffusion.md
Gradio Demo: blocks_neural_instrument_coding ``` !pip install -q gradio ``` ``` # Downloading files from the demo repo import os !wget -q https://github.com/gradio-app/gradio/raw/main/demo/blocks_neural_instrument_coding/flute.wav !wget -q https://github.com/gradio-app/gradio/raw/main/demo/blocks_neural_instrument_coding/new-sax-1.mp3 !wget -q https://github.com/gradio-app/gradio/raw/main/demo/blocks_neural_instrument_coding/new-sax-1.wav !wget -q https://github.com/gradio-app/gradio/raw/main/demo/blocks_neural_instrument_coding/new-sax.wav !wget -q https://github.com/gradio-app/gradio/raw/main/demo/blocks_neural_instrument_coding/sax.wav !wget -q https://github.com/gradio-app/gradio/raw/main/demo/blocks_neural_instrument_coding/sax2.wav !wget -q https://github.com/gradio-app/gradio/raw/main/demo/blocks_neural_instrument_coding/trombone.wav ``` ``` # A Blocks implementation of https://erlj.notion.site/Neural-Instrument-Cloning-from-very-few-samples-2cf41d8b630842ee8c7eb55036a1bfd6 import datetime import os import random import gradio as gr from gradio.components import Markdown as m def get_time(): now = datetime.datetime.now() return now.strftime("%m/%d/%Y, %H:%M:%S") def generate_recording(): return random.choice(["new-sax-1.mp3", "new-sax-1.wav"]) def reconstruct(audio): return random.choice(["new-sax-1.mp3", "new-sax-1.wav"]) io1 = gr.Interface( lambda x, y, z: os.path.join(os.path.abspath(''),"sax.wav"), [ gr.Slider(label="pitch"), gr.Slider(label="loudness"), gr.Audio(label="base audio file (optional)"), ], gr.Audio(), ) io2 = gr.Interface( lambda x, y, z: os.path.join(os.path.abspath(''),"flute.wav"), [ gr.Slider(label="pitch"), gr.Slider(label="loudness"), gr.Audio(label="base audio file (optional)"), ], gr.Audio(), ) io3 = gr.Interface( lambda x, y, z: os.path.join(os.path.abspath(''),"trombone.wav"), [ gr.Slider(label="pitch"), gr.Slider(label="loudness"), gr.Audio(label="base audio file (optional)"), ], gr.Audio(), ) io4 = gr.Interface( lambda x, y, z: os.path.join(os.path.abspath(''),"sax2.wav"), [ gr.Slider(label="pitch"), gr.Slider(label="loudness"), gr.Audio(label="base audio file (optional)"), ], gr.Audio(), ) demo = gr.Blocks(title="Neural Instrument Cloning") with demo.clear(): m( """ ## Neural Instrument Cloning from Very Few Samples <center><img src="https://media.istockphoto.com/photos/brass-trombone-picture-id490455809?k=20&m=490455809&s=612x612&w=0&h=l9KJvH_25z0QTLggHrcH_MsR4gPLH7uXwDPUAZ_C5zk=" width="400px"></center>""" ) m( """ This Blocks implementation is an adaptation [a report written](https://erlj.notion.site/Neural-Instrument-Cloning-from-very-few-samples-2cf41d8b630842ee8c7eb55036a1bfd6) by Nicolas Jonason and Bob L.T. Sturm. I've implemented it in Blocks to show off some cool features, such as embedding live ML demos. More on that ahead... ### What does this machine learning model do? It combines techniques from neural voice cloning with musical instrument synthesis. This makes it possible to produce neural instrument synthesisers from just seconds of target instrument audio. 
### Audio Examples Here are some **real** 16 second saxophone recordings: """ ) gr.Audio(os.path.join(os.path.abspath(''),"sax.wav"), label="Here is a real 16 second saxophone recording:") gr.Audio(os.path.join(os.path.abspath(''),"sax.wav")) m( """\n Here is a **generated** saxophone recordings:""" ) a = gr.Audio(os.path.join(os.path.abspath(''),"new-sax.wav")) gr.Button("Generate a new saxophone recording") m( """ ### Inputs to the model The inputs to the model are: * pitch * loudness * base audio file """ ) m( """ Try the model live! """ ) gr.TabbedInterface( [io1, io2, io3, io4], ["Saxophone", "Flute", "Trombone", "Another Saxophone"] ) m( """ ### Using the model for cloning You can also use this model a different way, to simply clone the audio file and reconstruct it using machine learning. Here, we'll show a demo of that below: """ ) a2 = gr.Audio() a2.change(reconstruct, a2, a2) m( """ Thanks for reading this! As you may have realized, all of the "models" in this demo are fake. They are just designed to show you what is possible using Blocks 🤗. For details of the model, read the [original report here](https://erlj.notion.site/Neural-Instrument-Cloning-from-very-few-samples-2cf41d8b630842ee8c7eb55036a1bfd6). *Details for nerds*: this report was "launched" on: """ ) t = gr.Textbox(label="timestamp") demo.load(get_time, [], t) if __name__ == "__main__": demo.launch() ```
gradio-app/gradio/blob/main/demo/blocks_neural_instrument_coding/run.ipynb
!--- Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # JAX/Flax Examples This folder contains actively maintained examples of 🤗 Transformers using the JAX/Flax backend. Porting models and examples to JAX/Flax is an ongoing effort, and more will be added in the coming months. In particular, these examples are all designed to run fast on Cloud TPUs, and we include step-by-step guides to getting started with Cloud TPU. *NOTE*: Currently, there is no "Trainer" abstraction for JAX/Flax -- all examples contain an explicit training loop. The following table lists all of our examples on how to use 🤗 Transformers with the JAX/Flax backend: - with information about the model and dataset used, - whether or not they leverage the [🤗 Datasets](https://github.com/huggingface/datasets) library, - links to **Colab notebooks** to walk through the scripts and run them easily. | Task | Example model | Example dataset | 🤗 Datasets | Colab |---|---|---|:---:|:---:| | [**`causal-language-modeling`**](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling) | GPT2 | OSCAR | ✅ | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/causal_language_modeling_flax.ipynb) | [**`masked-language-modeling`**](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling) | RoBERTa | OSCAR | ✅ | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/masked_language_modeling_flax.ipynb) | [**`text-classification`**](https://github.com/huggingface/transformers/tree/main/examples/flax/text-classification) | BERT | GLUE | ✅ | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification_flax.ipynb) ## Intro: JAX and Flax [JAX](https://github.com/google/jax) is a numerical computation library that exposes a NumPy-like API with tracing capabilities. With JAX's `jit`, you can trace pure functions and compile them into efficient, fused accelerator code on both GPU and TPU. JAX supports additional transformations such as `grad` (for arbitrary gradients), `pmap` (for parallelizing computation on multiple devices), `remat` (for gradient checkpointing), `vmap` (automatic efficient vectorization), and `pjit` (for automatically sharded model parallelism). All JAX transformations compose arbitrarily with each other -- e.g., efficiently computing per-example gradients is simply `vmap(grad(f))`. [Flax](https://github.com/google/flax) builds on top of JAX with an ergonomic module abstraction using Python dataclasses that leads to concise and explicit code. Flax's "lifted" JAX transformations (e.g. `vmap`, `remat`) allow you to nest JAX transformation and modules in any way you wish. 
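To make that composability concrete, here is a minimal, self-contained sketch (not taken from the examples above; the toy loss function is purely illustrative) showing per-example gradients as `vmap(grad(f))`, compiled with `jit`:

```python
# Minimal sketch: composing JAX transformations (illustrative toy model).
import jax
import jax.numpy as jnp

def loss(w, x, y):
    # squared error of a linear model on a single example
    return jnp.mean((jnp.dot(x, w) - y) ** 2)

# grad(loss) differentiates w.r.t. the first argument (w);
# vmap maps that gradient over the batch axis of x and y (per-example gradients);
# jit compiles the whole composition with XLA, on CPU, GPU or TPU.
per_example_grads = jax.jit(jax.vmap(jax.grad(loss), in_axes=(None, 0, 0)))

w = jnp.ones(3)
x = jnp.arange(12.0).reshape(4, 3)  # a batch of 4 examples with 3 features each
y = jnp.ones(4)

print(per_example_grads(w, x, y).shape)  # (4, 3): one gradient per example
```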
Flax is the most widely used JAX library, with [129 dependent projects](https://github.com/google/flax/network/dependents?package_id=UGFja2FnZS01MjEyMjA2MA%3D%3D) as of May 2021. It is also the library underlying all of the official Cloud TPU JAX examples. ## Running on Cloud TPU All of our JAX/Flax models are designed to run efficiently on Google Cloud TPUs. Here is [a guide for running JAX on Google Cloud TPU](https://cloud.google.com/tpu/docs/jax-quickstart-tpu-vm). Consider applying for the [Google TPU Research Cloud project](https://sites.research.google/trc/) for free TPU compute. Each example README contains more details on the specific model and training procedure. ## Running on single or multiple GPUs All of our JAX/Flax examples also run efficiently on single and multiple GPUs. You can use the same instructions in the README to launch training on GPU. Distributed training is supported out-of-the box and scripts will use all the GPUs that are detected. You should follow this [guide for installing JAX on GPUs](https://github.com/google/jax/#pip-installation-gpu-cuda) since the installation depends on your CUDA and CuDNN version. ## Supported models Porting models from PyTorch to JAX/Flax is an ongoing effort. Feel free to reach out if you are interested in contributing a model in JAX/Flax -- we'll be adding a guide for porting models from PyTorch in the upcoming few weeks. For a complete overview of models that are supported in JAX/Flax, please have a look at [this](https://huggingface.co/transformers/main/index.html#supported-frameworks) table. Over 3000 pretrained checkpoints are supported in JAX/Flax as of May 2021. Click [here](https://huggingface.co/models?filter=jax) to see the full list on the 🤗 hub. ## Upload the trained/fine-tuned model to the Hub All the example scripts support automatic upload of your final model to the [Model Hub](https://huggingface.co/models) by adding a `--push_to_hub` argument. It will then create a repository with your username slash the name of the folder you are using as `output_dir`. For instance, `"sgugger/test-mrpc"` if your username is `sgugger` and you are working in the folder `~/tmp/test-mrpc`. To specify a given repository name, use the `--hub_model_id` argument. You will need to specify the whole repository name (including your username), for instance `--hub_model_id sgugger/finetuned-bert-mrpc`. To upload to an organization you are a member of, just use the name of that organization instead of your username: `--hub_model_id huggingface/finetuned-bert-mrpc`. A few notes on this integration: - you will need to be logged in to the Hugging Face website locally for it to work, the easiest way to achieve this is to run `huggingface-cli login` and then type your username and password when prompted. You can also pass along your authentication token with the `--hub_token` argument. - the `output_dir` you pick will either need to be a new folder or a local clone of the distant repository you are using.
huggingface/transformers/blob/main/examples/flax/README.md
Gradio Demo: image_classifier ``` !pip install -q gradio numpy tensorflow ``` ``` # Downloading files from the demo repo import os os.mkdir('files') !wget -q -O files/imagenet_labels.json https://github.com/gradio-app/gradio/raw/main/demo/image_classifier/files/imagenet_labels.json os.mkdir('images') !wget -q -O images/cheetah1.jpg https://github.com/gradio-app/gradio/raw/main/demo/image_classifier/images/cheetah1.jpg !wget -q -O images/lion.jpg https://github.com/gradio-app/gradio/raw/main/demo/image_classifier/images/lion.jpg ``` ``` import os import requests import tensorflow as tf import gradio as gr inception_net = tf.keras.applications.MobileNetV2() # load the model # Download human-readable labels for ImageNet. response = requests.get("https://git.io/JJkYN") labels = response.text.split("\n") def classify_image(inp): inp = inp.reshape((-1, 224, 224, 3)) inp = tf.keras.applications.mobilenet_v2.preprocess_input(inp) prediction = inception_net.predict(inp).flatten() return {labels[i]: float(prediction[i]) for i in range(1000)} image = gr.Image(shape=(224, 224)) label = gr.Label(num_top_classes=3) demo = gr.Interface( fn=classify_image, inputs=image, outputs=label, examples=[ os.path.join(os.path.abspath(''), "images/cheetah1.jpg"), os.path.join(os.path.abspath(''), "images/lion.jpg") ] ) if __name__ == "__main__": demo.launch() ```
gradio-app/gradio/blob/main/demo/image_classifier/run.ipynb
Overview 🤗 Optimum provides an integration with Torch FX, a library for PyTorch that allows developers to implement custom transformations of their models that can be optimized for performance. <div class="mt-10"> <div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-3 md:gap-y-4 md:gap-x-5"> <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./usage_guides/optimization" ><div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">How-to guides</div> <p class="text-gray-700">Practical guides to help you achieve a specific goal. Take a look at these guides to learn how to use 🤗 Optimum to solve real-world problems.</p> </a> <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./concept_guides/symbolic_tracer" ><div class="w-full text-center bg-gradient-to-br from-pink-400 to-pink-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Conceptual guides</div> <p class="text-gray-700">High-level explanations for building a better understanding about important topics such as quantization and graph optimization.</p> </a> <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./package_reference/optimization" ><div class="w-full text-center bg-gradient-to-br from-purple-400 to-purple-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Reference</div> <p class="text-gray-700">Technical descriptions of how the Torch FX classes and methods of 🤗 Optimum work.</p> </a> </div> </div>
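As a quick illustration of what Torch FX itself provides (a plain `torch.fx` sketch, independent of the Optimum-specific classes documented in the pages above), symbolically tracing a module yields a graph you can inspect and transform:

```python
# Minimal sketch of Torch FX tracing (plain PyTorch, not Optimum-specific).
import torch
import torch.nn as nn
from torch.fx import symbolic_trace

class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 4)

    def forward(self, x):
        return torch.relu(self.linear(x) + x)

traced = symbolic_trace(TinyModel())  # returns a GraphModule
print(traced.graph)  # the intermediate representation: placeholder, call_module, call_function, output nodes
print(traced.code)   # the Python code regenerated from that graph
```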
huggingface/optimum/blob/main/docs/source/torch_fx/overview.mdx
Using Gradio Blocks Like Functions

Tags: TRANSLATION, HUB, SPACES

**Prerequisite**: This Guide builds on the Blocks Introduction. Make sure to [read that guide first](https://gradio.app/guides/quickstart/#blocks-more-flexibility-and-control).

## Introduction

Did you know that apart from being a full-stack machine learning demo, a Gradio Blocks app is also a regular-old python function!?

This means that if you have a gradio Blocks (or Interface) app called `demo`, you can use `demo` like you would any python function.

So doing something like `output = demo("Hello", "friend")` will run the first event defined in `demo` on the inputs "Hello" and "friend" and store it in the variable `output`.

If I put you to sleep 🥱, please bear with me! By using apps like functions, you can seamlessly compose Gradio apps. The following section will show how.

## Treating Blocks like functions

Let's say we have the following demo that translates english text to german text.

$code_english_translator

I already went ahead and hosted it in Hugging Face spaces at [gradio/english_translator](https://huggingface.co/spaces/gradio/english_translator).

You can see the demo below as well:

$demo_english_translator

Now, let's say you have an app that generates english text, but you wanted to additionally generate german text.

You could either:

1. Copy the source code of my english-to-german translation and paste it in your app.

2. Load my english-to-german translation in your app and treat it like a normal python function.

Option 1 technically always works, but it often introduces unwanted complexity.

Option 2 lets you borrow the functionality you want without tightly coupling our apps.

All you have to do is call the `Blocks.load` class method in your source file. After that, you can use my translation app like a regular python function!

The following code snippet and demo show how to use `Blocks.load`. Note that the variable `english_translator` is my english-to-german app, but it's used in `generate_text` like a regular function.

$code_generate_english_german

$demo_generate_english_german

## How to control which function in the app to use

If the app you are loading defines more than one function, you can specify which function to use with the `fn_index` and `api_name` parameters.

In the code for our english to german demo, you'll see the following line:

```python
translate_btn.click(translate, inputs=english, outputs=german, api_name="translate-to-german")
```

The `api_name` gives this function a unique name in our app. You can use this name to tell gradio which function in the upstream space you want to use:

```python
english_generator(text, api_name="translate-to-german")[0]["generated_text"]
```

You can also use the `fn_index` parameter. Imagine my app also defined an english to spanish translation function. In order to use it in our text generation app, we would use the following code:

```python
english_generator(text, fn_index=1)[0]["generated_text"]
```

Functions in gradio spaces are zero-indexed, so since the spanish translator would be the second function in my space, you would use index 1.

## Parting Remarks

We showed how treating a Blocks app like a regular python function helps you compose functionality across different apps. Any Blocks app can be treated like a function, but a powerful pattern is to `load` an app hosted on [Hugging Face Spaces](https://huggingface.co/spaces) prior to treating it like a function in your own app.
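As a compact recap of that pattern, here is a sketch (the exact loading call depends on your Gradio version; the Space name is the one used throughout this guide):

```python
import gradio as gr

# Load the hosted Space so it can be called like a regular Python function.
# (Sketch: "spaces/gradio/english_translator" follows the name convention for
# loading Spaces with Blocks.load; adjust to the Gradio version you are using.)
english_translator = gr.Blocks.load(name="spaces/gradio/english_translator")

def generate_german(text):
    # Pick the upstream endpoint by the api_name it was given in the Space;
    # the return value follows the upstream app's output format.
    return english_translator(text, api_name="translate-to-german")
```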
You can also load models hosted on the [Hugging Face Model Hub](https://huggingface.co/models) - see the [Using Hugging Face Integrations](/using_hugging_face_integrations) guide for an example. ### Happy building! ⚒️
gradio-app/gradio/blob/main/guides/03_building-with-blocks/05_using-blocks-like-functions.md
-- title: "How Sempre Health is leveraging the Expert Acceleration Program to accelerate their ML roadmap" thumbnail: /blog/assets/70_sempre_health/thumbnail.jpg authors: - user: huggingface --- # How Sempre Health is leveraging the Expert Acceleration Program to accelerate their ML roadmap 👋 Hello, friends! We recently sat down with [Swaraj Banerjee](https://www.linkedin.com/in/swarajbanerjee/) and [Larry Zhang](https://www.linkedin.com/in/larry-zhang-b58642a3/) from [Sempre Health](https://www.semprehealth.com/), a startup that brings behavior-based, dynamic pricing to Healthcare. They are doing some exciting work with machine learning and are leveraging our [Expert Acceleration Program](https://huggingface.co/support) to accelerate their ML roadmap. An example of our collaboration is their new NLP pipeline to automatically classify and respond inbound messages. Since deploying it to production, they have seen more than 20% of incoming messages get automatically handled by this new system 🤯 having a massive impact on their business scalability and team workflow. In this short video, Swaraj and Larry walk us through some of their machine learning work and share their experience collaborating with our team via the [Expert Acceleration Program](https://huggingface.co/support). Check it out: <iframe width="100%" style="aspect-ratio: 16 / 9;"src="https://www.youtube.com/embed/QBOTlNJUtdk" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> If you'd like to accelerate your machine learning roadmap with the help of our experts, as Swaraj and Larry did, visit [hf.co/support](https://huggingface.co/support) to learn more about our Expert Acceleration Program and request a quote. ## Transcription: ### Introduction My name is Swaraj. I'm the CTO and co-founder at Sempre Health. I'm Larry, I'm a machine learning engineer at Sempre Health. We're working on medication adherence and affordability by combining SMS engagement and discounts for filling prescriptions. ### How do you apply Machine Learning at Sempre Health? Here at Sempre Health, we receive thousands of text messages from the patients on our platform every single day. A huge portion of these messages are messages that we can actually automatically respond to. So, for example, if a patient messages us a simple _“Thank you”_, we can automatically reply with _“You're welcome”_. Or if a patient says _“Can you refill my prescription?”_, we have systems in place to automatically call their pharmacy and submit a refill request on their behalf. We're using machine learning, specifically natural language processing (NLP), to help identify which of these thousands of text messages that we see daily are ones that we can automatically handle. ### What challenges were you facing before the Expert Acceleration Program? Our rule-based system caught about 80% of our inbound text messages, but we wanted to do much better. We knew that a statistical machine learning approach would be the only way to improve our parsing. When we looked around for what tools we could leverage, we found the language models on Hugging Face would be a great place to start. Even though Larry and I have backgrounds in machine learning and NLP, we were worried that we weren't formulating our problem perfectly, using the best model or neural network architecture for our particular use case and training data. ### How did you leverage the Expert Acceleration Program? 
The Hugging Face team really helped us in all aspects of implementing our NLP solution for this particular problem. They give us really good advice on how to get both representative as well as accurate labels for our text messages. They also saved us countless hours of research time by pointing us immediately to the right models and the right methods. I can definitely say with a lot of confidence that it would've taken us a lot longer to see the results that we see today without the Expert Acceleration Program. ### What surprised you about the Expert Acceleration Program? We knew what we wanted to get out of the program; we had this very concrete problem and we knew that if we used the Hugging Face libraries correctly, we could make a tremendous impact on our product. We were pleasantly surprised that we got the help that we wanted. The people that we worked with were really sharp, met us where we were, didn't require us to do a bunch of extra work, and so it was pleasantly surprising to get exactly what we wanted out of the program. ### What was the impact of collaborating with the Hugging Face team? The most important thing about this collaboration was making a tremendous impact on our business's scalability and our operations team's workflow. We launched our production NLP pipeline several weeks ago. Since then, we've consistently seen almost 20% of incoming messages get automatically handled by our new system. These are messages that would've created a ticket for our patient operations team before. So we've reduced a lot of low-value work from our team. ### For what type of AI problems should ML teams consider the Expert Acceleration Program? Here at Sempre Health, we're a pretty small team and we're just starting to explore how we can leverage ML to better our overall patient experience. The expertise of the Hugging Face team definitely expedited our development process for this project. So we'd recommend this program to any teams that are really looking to quickly add AI pipelines to their products without a lot of the hassle and development time that normally comes with machine learning development. --- With the Expert Acceleration Program, we've put together a world-class team to help customers build better ML solutions, faster. Our experts answer questions and find solutions as needed in your machine learning journey from research to production. Visit [hf.co/support](https://huggingface.co/support) to learn more and request a quote.
huggingface/blog/blob/main/sempre-health-eap-case-study.md
# 🔥 Model cards now live inside each huggingface.co model repo 🔥 For consistency, ease of use and scalability, `README.md` model cards now live directly inside each model repo on the HuggingFace model hub. ### How to update a model card You can directly update a model card inside any model repo you have **write access** to, i.e.: - a model under your username namespace - a model under any organization you are a part of. You can either: - update it, commit and push using your usual git workflow (command line, GUI, etc.) - or edit it directly from the website's UI. **What if you want to create or update a model card for a model you don't have write access to?** In that case, you can open a [Hub pull request](https://huggingface.co/docs/hub/repositories-pull-requests-discussions)! Check out the [announcement](https://huggingface.co/blog/community-update) of this feature for more details 🤗. ### What happened to the model cards here? We migrated every model card from the repo to its corresponding huggingface.co model repo. Individual commits were preserved, and they link back to the original commit on GitHub.
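For the pull-request route mentioned above, one possible programmatic approach (a sketch using `huggingface_hub`; the repo id and commit message are placeholders) is to upload your edited `README.md` with `create_pr=True`:

```python
# Sketch: propose a model card change as a Hub pull request (placeholder repo id).
from huggingface_hub import upload_file

upload_file(
    path_or_fileobj="README.md",       # your locally edited model card
    path_in_repo="README.md",
    repo_id="some-org/some-model",     # placeholder: a model you don't have write access to
    create_pr=True,                    # opens a pull request instead of pushing directly
    commit_message="Improve model card",
)
```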
huggingface/transformers/blob/main/model_cards/README.md
!--⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Webhooks Server Webhooks are a foundation for MLOps-related features. They allow you to listen for new changes on specific repos or to all repos belonging to particular users/organizations you're interested in following. To learn more about webhooks on the Huggingface Hub, you can read the Webhooks [guide](https://huggingface.co/docs/hub/webhooks). <Tip> Check out this [guide](../guides/webhooks_server) for a step-by-step tutorial on how to setup your webhooks server and deploy it as a Space. </Tip> <Tip warning={true}> This is an experimental feature. This means that we are still working on improving the API. Breaking changes might be introduced in the future without prior notice. Make sure to pin the version of `huggingface_hub` in your requirements. A warning is triggered when you use an experimental feature. You can disable it by setting `HF_HUB_DISABLE_EXPERIMENTAL_WARNING=1` as an environment variable. </Tip> ## Server The server is a [Gradio](https://gradio.app/) app. It has a UI to display instructions for you or your users and an API to listen to webhooks. Implementing a webhook endpoint is as simple as decorating a function. You can then debug it by redirecting the Webhooks to your machine (using a Gradio tunnel) before deploying it to a Space. ### WebhooksServer [[autodoc]] huggingface_hub.WebhooksServer ### @webhook_endpoint [[autodoc]] huggingface_hub.webhook_endpoint ## Payload [`WebhookPayload`] is the main data structure that contains the payload from Webhooks. This is a `pydantic` class which makes it very easy to use with FastAPI. If you pass it as a parameter to a webhook endpoint, it will be automatically validated and parsed as a Python object. For more information about webhooks payload, you can refer to the Webhooks Payload [guide](https://huggingface.co/docs/hub/webhooks#webhook-payloads). [[autodoc]] huggingface_hub.WebhookPayload ### WebhookPayload [[autodoc]] huggingface_hub.WebhookPayload ### WebhookPayloadComment [[autodoc]] huggingface_hub.WebhookPayloadComment ### WebhookPayloadDiscussion [[autodoc]] huggingface_hub.WebhookPayloadDiscussion ### WebhookPayloadDiscussionChanges [[autodoc]] huggingface_hub.WebhookPayloadDiscussionChanges ### WebhookPayloadEvent [[autodoc]] huggingface_hub.WebhookPayloadEvent ### WebhookPayloadMovedTo [[autodoc]] huggingface_hub.WebhookPayloadMovedTo ### WebhookPayloadRepo [[autodoc]] huggingface_hub.WebhookPayloadRepo ### WebhookPayloadUrl [[autodoc]] huggingface_hub.WebhookPayloadUrl ### WebhookPayloadWebhook [[autodoc]] huggingface_hub.WebhookPayloadWebhook
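To make the decorator workflow above concrete, here is a minimal sketch (the endpoint name and the filtering condition are illustrative; see the server guide linked at the top for a full walkthrough):

```python
# Minimal sketch of a webhook endpoint (illustrative filtering logic).
from huggingface_hub import WebhookPayload, webhook_endpoint

@webhook_endpoint
async def trigger_training(payload: WebhookPayload) -> None:
    # Only react to updates on dataset repos; ignore everything else.
    if payload.repo.type == "dataset" and payload.event.action == "update":
        # Kick off your own logic here (e.g. schedule a training job).
        ...
```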
huggingface/huggingface_hub/blob/main/docs/source/en/package_reference/webhooks_server.md
!--- Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <!--- A useful guide for English-Hindi translation of Hugging Face documentation - Add space around English words and numbers when they appear between Hindi characters. E.g., कुल मिलाकर 100 से अधिक भाषाएँ; ट्रांसफॉर्मर लाइब्रेरी का उपयोग करता है। - वर्गाकार उद्धरणों का प्रयोग करें, जैसे, "उद्धरण" Dictionary Hugging Face: गले लगाओ चेहरा token: शब्द (और मूल अंग्रेजी को कोष्ठक में चिह्नित करें) tokenize: टोकननाइज़ करें (और मूल अंग्रेज़ी को चिह्नित करने के लिए कोष्ठक का उपयोग करें) tokenizer: Tokenizer (मूल अंग्रेजी में कोष्ठक के साथ) transformer: transformer pipeline: समनुक्रम API: API (अनुवाद के बिना) inference: विचार Trainer: प्रशिक्षक। कक्षा के नाम के रूप में प्रस्तुत किए जाने पर अनुवादित नहीं किया गया। pretrained/pretrain: पूर्व प्रशिक्षण finetune: फ़ाइन ट्यूनिंग community: समुदाय example: जब विशिष्ट गोदाम example कैटलॉग करते समय "केस केस" के रूप में अनुवादित Python data structures (e.g., list, set, dict): मूल अंग्रेजी को चिह्नित करने के लिए सूचियों, सेटों, शब्दकोशों में अनुवाद करें और कोष्ठक का उपयोग करें NLP/Natural Language Processing: द्वारा NLP अनुवाद के बिना प्रकट होते हैं Natural Language Processing प्रस्तुत किए जाने पर प्राकृतिक भाषा संसाधन में अनुवाद करें checkpoint: जाँच बिंदु --> <p align="center"> <br> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" width="400"/> <br> </p> <p align="center"> <a href="https://circleci.com/gh/huggingface/transformers"> <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main"> </a> <a href="https://github.com/huggingface/transformers/blob/main/LICENSE"> <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue"> </a> <a href="https://huggingface.co/docs/transformers/index"> <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online"> </a> <a href="https://github.com/huggingface/transformers/releases"> <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg"> </a> <a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md"> <img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg"> </a> <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a> </p> <h4 align="center"> <p> <a href="https://github.com/huggingface/transformers/">English</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">简体中文</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">繁體中文</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">한국어</a> | <a href="https://github.com/huggingface/transformers/blob/main/README_es.md">Español</a> | <a 
href="https://github.com/huggingface/transformers/blob/main/README_ja.md">日本語</a> | <b>हिन्दी</b> | <a href="https://github.com/huggingface/transformers//blob/main/README_te.md">తెలుగు</a> | </p> </h4> <h3 align="center"> <p>Jax, PyTorch और TensorFlow के लिए उन्नत मशीन लर्निंग</p> </h3> <h3 align="center"> <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a> </h3> 🤗 Transformers 100 से अधिक भाषाओं में पाठ वर्गीकरण, सूचना निष्कर्षण, प्रश्न उत्तर, सारांशीकरण, अनुवाद, पाठ निर्माण का समर्थन करने के लिए हजारों पूर्व-प्रशिक्षित मॉडल प्रदान करता है। इसका उद्देश्य सबसे उन्नत एनएलपी तकनीक को सभी के लिए सुलभ बनाना है। 🤗 Transformers त्वरित डाउनलोड और उपयोग के लिए एक एपीआई प्रदान करता है, जिससे आप किसी दिए गए पाठ पर एक पूर्व-प्रशिक्षित मॉडल ले सकते हैं, इसे अपने डेटासेट पर ठीक कर सकते हैं और इसे [मॉडल हब](https://huggingface.co/models) के माध्यम से समुदाय के साथ साझा कर सकते हैं। इसी समय, प्रत्येक परिभाषित पायथन मॉड्यूल पूरी तरह से स्वतंत्र है, जो संशोधन और तेजी से अनुसंधान प्रयोगों के लिए सुविधाजनक है। 🤗 Transformers तीन सबसे लोकप्रिय गहन शिक्षण पुस्तकालयों का समर्थन करता है: [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) and [TensorFlow](https://www.tensorflow.org/) — और इसके साथ निर्बाध रूप से एकीकृत होता है। आप अपने मॉडल को सीधे एक ढांचे के साथ प्रशिक्षित कर सकते हैं और दूसरे के साथ लोड और अनुमान लगा सकते हैं। ## ऑनलाइन डेमो आप सबसे सीधे मॉडल पृष्ठ पर परीक्षण कर सकते हैं [model hub](https://huggingface.co/models) मॉडल पर। हम [निजी मॉडल होस्टिंग, मॉडल संस्करण, और अनुमान एपीआई](https://huggingface.co/pricing) भी प्रदान करते हैं।。 यहाँ कुछ उदाहरण हैं: - [शब्द को भरने के लिए मास्क के रूप में BERT का प्रयोग करें](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France) - [इलेक्ट्रा के साथ नामित इकाई पहचान](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city) - [जीपीटी-2 के साथ टेक्स्ट जनरेशन](https://huggingface.co/gpt2?text=A+long+time+ago%2C+) - [रॉबर्टा के साथ प्राकृतिक भाषा निष्कर्ष](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal) - [बार्ट के साथ पाठ सारांश](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct) - [डिस्टिलबर्ट के साथ 
प्रश्नोत्तर](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species) - [अनुवाद के लिए T5 का प्रयोग करें](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin) **[Write With Transformer](https://transformer.huggingface.co)**,हगिंग फेस टीम द्वारा बनाया गया, यह एक आधिकारिक पाठ पीढ़ी है demo。 ## यदि आप हगिंग फेस टीम से बीस्पोक समर्थन की तलाश कर रहे हैं <a target="_blank" href="https://huggingface.co/support"> <img alt="HuggingFace Expert Acceleration Program" src="https://huggingface.co/front/thumbnails/support.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);"> </a><br> ## जल्दी शुरू करें हम त्वरित उपयोग के लिए मॉडल प्रदान करते हैं `pipeline` (पाइपलाइन) एपीआई। पाइपलाइन पूर्व-प्रशिक्षित मॉडल और संबंधित पाठ प्रीप्रोसेसिंग को एकत्रित करती है। सकारात्मक और नकारात्मक भावना को निर्धारित करने के लिए पाइपलाइनों का उपयोग करने का एक त्वरित उदाहरण यहां दिया गया है: ```python >>> from transformers import pipeline # भावना विश्लेषण पाइपलाइन का उपयोग करना >>> classifier = pipeline('sentiment-analysis') >>> classifier('We are very happy to introduce pipeline to the transformers repository.') [{'label': 'POSITIVE', 'score': 0.9996980428695679}] ``` कोड की दूसरी पंक्ति पाइपलाइन द्वारा उपयोग किए गए पूर्व-प्रशिक्षित मॉडल को डाउनलोड और कैश करती है, जबकि कोड की तीसरी पंक्ति दिए गए पाठ पर मूल्यांकन करती है। यहां उत्तर 99 आत्मविश्वास के स्तर के साथ "सकारात्मक" है। कई एनएलपी कार्यों में आउट ऑफ़ द बॉक्स पाइपलाइनों का पूर्व-प्रशिक्षण होता है। उदाहरण के लिए, हम किसी दिए गए पाठ से किसी प्रश्न का उत्तर आसानी से निकाल सकते हैं: ``` python >>> from transformers import pipeline # प्रश्नोत्तर पाइपलाइन का उपयोग करना >>> question_answerer = pipeline('question-answering') >>> question_answerer({ ... 'question': 'What is the name of the repository ?', ... 'context': 'Pipeline has been included in the huggingface/transformers repository' ... 
}) {'score': 0.30970096588134766, 'start': 34, 'end': 58, 'answer': 'huggingface/transformers'} ``` उत्तर देने के अलावा, पूर्व-प्रशिक्षित मॉडल संगत आत्मविश्वास स्कोर भी देता है, जहां उत्तर टोकनयुक्त पाठ में शुरू और समाप्त होता है। आप [इस ट्यूटोरियल](https://huggingface.co/docs/transformers/task_summary) से पाइपलाइन एपीआई द्वारा समर्थित कार्यों के बारे में अधिक जान सकते हैं। अपने कार्य पर किसी भी पूर्व-प्रशिक्षित मॉडल को डाउनलोड करना और उसका उपयोग करना भी कोड की तीन पंक्तियों की तरह सरल है। यहाँ PyTorch संस्करण के लिए एक उदाहरण दिया गया है: ```python >>> from transformers import AutoTokenizer, AutoModel >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") >>> model = AutoModel.from_pretrained("bert-base-uncased") >>> inputs = tokenizer("Hello world!", return_tensors="pt") >>> outputs = model(**inputs) ``` यहाँ समकक्ष है TensorFlow कोड: ```python >>> from transformers import AutoTokenizer, TFAutoModel >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") >>> model = TFAutoModel.from_pretrained("bert-base-uncased") >>> inputs = tokenizer("Hello world!", return_tensors="tf") >>> outputs = model(**inputs) ``` टोकननाइज़र सभी पूर्व-प्रशिक्षित मॉडलों के लिए प्रीप्रोसेसिंग प्रदान करता है और इसे सीधे एक स्ट्रिंग (जैसे ऊपर दिए गए उदाहरण) या किसी सूची पर बुलाया जा सकता है। यह एक डिक्शनरी (तानाशाही) को आउटपुट करता है जिसे आप डाउनस्ट्रीम कोड में उपयोग कर सकते हैं या `**` अनपैकिंग एक्सप्रेशन के माध्यम से सीधे मॉडल को पास कर सकते हैं। मॉडल स्वयं एक नियमित [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) या [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) (आपके बैकएंड के आधार पर), जो हो सकता है सामान्य तरीके से उपयोग किया जाता है। [यह ट्यूटोरियल](https://huggingface.co/transformers/training.html) बताता है कि इस तरह के मॉडल को क्लासिक PyTorch या TensorFlow प्रशिक्षण लूप में कैसे एकीकृत किया जाए, या हमारे `ट्रेनर` एपीआई का उपयोग कैसे करें ताकि इसे जल्दी से फ़ाइन ट्यून किया जा सके।एक नया डेटासेट पे। ## ट्रांसफार्मर का उपयोग क्यों करें? 1. उपयोग में आसानी के लिए उन्नत मॉडल: - एनएलयू और एनएलजी पर बेहतर प्रदर्शन - प्रवेश के लिए कम बाधाओं के साथ शिक्षण और अभ्यास के अनुकूल - उपयोगकर्ता-सामना करने वाले सार तत्व, केवल तीन वर्गों को जानने की जरूरत है - सभी मॉडलों के लिए एकीकृत एपीआई 1. कम कम्प्यूटेशनल ओवरहेड और कम कार्बन उत्सर्जन: - शोधकर्ता हर बार नए सिरे से प्रशिक्षण देने के बजाय प्रशिक्षित मॉडल साझा कर सकते हैं - इंजीनियर गणना समय और उत्पादन ओवरहेड को कम कर सकते हैं - दर्जनों मॉडल आर्किटेक्चर, 2,000 से अधिक पूर्व-प्रशिक्षित मॉडल, 100 से अधिक भाषाओं का समर्थन 1.मॉडल जीवनचक्र के हर हिस्से को शामिल करता है: - कोड की केवल 3 पंक्तियों में उन्नत मॉडलों को प्रशिक्षित करें - मॉडल को मनमाने ढंग से विभिन्न डीप लर्निंग फ्रेमवर्क के बीच स्थानांतरित किया जा सकता है, जैसा आप चाहते हैं - निर्बाध रूप से प्रशिक्षण, मूल्यांकन और उत्पादन के लिए सबसे उपयुक्त ढांचा चुनें 1. आसानी से अनन्य मॉडल को अनुकूलित करें और अपनी आवश्यकताओं के लिए मामलों का उपयोग करें: - हम मूल पेपर परिणामों को पुन: पेश करने के लिए प्रत्येक मॉडल आर्किटेक्चर के लिए कई उपयोग के मामले प्रदान करते हैं - मॉडल की आंतरिक संरचना पारदर्शी और सुसंगत रहती है - मॉडल फ़ाइल को अलग से इस्तेमाल किया जा सकता है, जो संशोधन और त्वरित प्रयोग के लिए सुविधाजनक है ## मुझे ट्रांसफॉर्मर का उपयोग कब नहीं करना चाहिए? 
- यह लाइब्रेरी मॉड्यूलर न्यूरल नेटवर्क टूलबॉक्स नहीं है। मॉडल फ़ाइल में कोड जानबूझकर अल्पविकसित है, बिना अतिरिक्त सार इनकैप्सुलेशन के, ताकि शोधकर्ता अमूर्तता और फ़ाइल जंपिंग में शामिल हुए जल्दी से पुनरावृति कर सकें। - `ट्रेनर` एपीआई किसी भी मॉडल के साथ संगत नहीं है, यह केवल इस पुस्तकालय के मॉडल के लिए अनुकूलित है। यदि आप सामान्य मशीन लर्निंग के लिए उपयुक्त प्रशिक्षण लूप कार्यान्वयन की तलाश में हैं, तो कहीं और देखें। - हमारे सर्वोत्तम प्रयासों के बावजूद, [उदाहरण निर्देशिका](https://github.com/huggingface/transformers/tree/main/examples) में स्क्रिप्ट केवल उपयोग के मामले हैं। आपकी विशिष्ट समस्या के लिए, वे जरूरी नहीं कि बॉक्स से बाहर काम करें, और आपको कोड की कुछ पंक्तियों को सूट करने की आवश्यकता हो सकती है। ## स्थापित करना ### पिप का उपयोग करना इस रिपॉजिटरी का परीक्षण Python 3.8+, Flax 0.4.1+, PyTorch 1.10+ और TensorFlow 2.6+ के तहत किया गया है। आप [वर्चुअल एनवायरनमेंट](https://docs.python.org/3/library/venv.html) में 🤗 ट्रांसफॉर्मर इंस्टॉल कर सकते हैं। यदि आप अभी तक पायथन के वर्चुअल एनवायरनमेंट से परिचित नहीं हैं, तो कृपया इसे [उपयोगकर्ता निर्देश](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/) पढ़ें। सबसे पहले, पायथन के उस संस्करण के साथ एक आभासी वातावरण बनाएं जिसका आप उपयोग करने और उसे सक्रिय करने की योजना बना रहे हैं। फिर, आपको Flax, PyTorch या TensorFlow में से किसी एक को स्थापित करने की आवश्यकता है। अपने प्लेटफ़ॉर्म पर इन फ़्रेमवर्क को स्थापित करने के लिए, [TensorFlow स्थापना पृष्ठ](https://www.tensorflow.org/install/), [PyTorch स्थापना पृष्ठ](https://pytorch.org/get-started/locally) देखें start-locally या [Flax स्थापना पृष्ठ](https://github.com/google/flax#quick-install). जब इनमें से कोई एक बैकएंड सफलतापूर्वक स्थापित हो जाता है, तो ट्रांसफॉर्मर निम्नानुसार स्थापित किए जा सकते हैं: ```bash pip install transformers ``` यदि आप उपयोग के मामलों को आज़माना चाहते हैं या आधिकारिक रिलीज़ से पहले नवीनतम इन-डेवलपमेंट कोड का उपयोग करना चाहते हैं, तो आपको [सोर्स से इंस्टॉल करना होगा](https://huggingface.co/docs/transformers/installation#installing-from-) स्रोत। ### कोंडा का उपयोग करना ट्रांसफॉर्मर संस्करण 4.0.0 के बाद से, हमारे पास एक कोंडा चैनल है: `हगिंगफेस`। ट्रांसफॉर्मर कोंडा के माध्यम से निम्नानुसार स्थापित किया जा सकता है: ```shell script conda install -c huggingface transformers ``` कोंडा के माध्यम से Flax, PyTorch, या TensorFlow में से किसी एक को स्थापित करने के लिए, निर्देशों के लिए उनके संबंधित स्थापना पृष्ठ देखें। ## मॉडल आर्किटेक्चर [उपयोगकर्ता](https://huggingface.co/users) और [organization](https://huggingface.co) द्वारा ट्रांसफॉर्मर समर्थित [**सभी मॉडल चौकियों**](https://huggingface.co/models/users) हगिंगफेस.को/ऑर्गनाइजेशन), सभी को बिना किसी बाधा के हगिंगफेस.को [मॉडल हब](https://huggingface.co) के साथ एकीकृत किया गया है। चौकियों की वर्तमान संख्या: ![](https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/models&color=brightgreen) 🤗 ट्रांसफॉर्मर वर्तमान में निम्नलिखित आर्किटेक्चर का समर्थन करते हैं (मॉडल के अवलोकन के लिए [यहां] देखें (https://huggingface.co/docs/transformers/model_summary)): 1. **[ALBERT](https://huggingface.co/docs/transformers/model_doc/albert)** (Google Research and the Toyota Technological Institute at Chicago) साथ थीसिस [ALBERT: A Lite BERT for Self-supervised भाषा प्रतिनिधित्व सीखना](https://arxiv.org/abs/1909.11942), झेंझोंग लैन, मिंगदा चेन, सेबेस्टियन गुडमैन, केविन गिम्पेल, पीयूष शर्मा, राडू सोरिकट 1. **[ALIGN](https://huggingface.co/docs/transformers/model_doc/align)** (Google Research से) Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yunhsuan Sung, Zhen Li, Tom Duerig. 
द्वाराअनुसंधान पत्र [Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision](https://arxiv.org/abs/2102.05918) के साथ जारी किया गया 1. **[AltCLIP](https://huggingface.co/docs/transformers/model_doc/altclip)** (from BAAI) released with the paper [AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities](https://arxiv.org/abs/2211.06679) by Chen, Zhongzhi and Liu, Guang and Zhang, Bo-Wen and Ye, Fulong and Yang, Qinghong and Wu, Ledell. 1. **[Audio Spectrogram Transformer](https://huggingface.co/docs/transformers/model_doc/audio-spectrogram-transformer)** (from MIT) released with the paper [AST: Audio Spectrogram Transformer](https://arxiv.org/abs/2104.01778) by Yuan Gong, Yu-An Chung, James Glass. 1. **[Autoformer](https://huggingface.co/docs/transformers/model_doc/autoformer)** (from Tsinghua University) released with the paper [Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting](https://arxiv.org/abs/2106.13008) by Haixu Wu, Jiehui Xu, Jianmin Wang, Mingsheng Long. 1. **[Bark](https://huggingface.co/docs/transformers/model_doc/bark)** (from Suno) released in the repository [suno-ai/bark](https://github.com/suno-ai/bark) by Suno AI team. 1. **[BART](https://huggingface.co/docs/transformers/model_doc/bart)** (फेसबुक) साथ थीसिस [बार्ट: प्राकृतिक भाषा निर्माण, अनुवाद के लिए अनुक्रम-से-अनुक्रम पूर्व प्रशिक्षण , और समझ](https://arxiv.org/pdf/1910.13461.pdf) पर निर्भर माइक लुईस, यिनहान लियू, नमन गोयल, मार्जन ग़ज़विनिनेजाद, अब्देलरहमान मोहम्मद, ओमर लेवी, वेस स्टोयानोव और ल्यूक ज़ेटलमॉयर 1. **[BARThez](https://huggingface.co/docs/transformers/model_doc/barthez)** (से École polytechnique) साथ थीसिस [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) पर निर्भर Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis रिहाई। 1. **[BARTpho](https://huggingface.co/docs/transformers/model_doc/bartpho)** (VinAI Research से) साथ में पेपर [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701)गुयेन लुओंग ट्रान, डुओंग मिन्ह ले और डाट क्वोक गुयेन द्वारा पोस्ट किया गया। 1. **[BEiT](https://huggingface.co/docs/transformers/model_doc/beit)** (Microsoft से) साथ में कागज [BEiT: BERT इमेज ट्रांसफॉर्मर्स का प्री-ट्रेनिंग](https://arxiv.org/abs/2106.08254) Hangbo Bao, Li Dong, Furu Wei द्वारा। 1. **[BERT](https://huggingface.co/docs/transformers/model_doc/bert)** (गूगल से) साथ वाला पेपर [बीईआरटी: प्री-ट्रेनिंग ऑफ डीप बिडायरेक्शनल ट्रांसफॉर्मर्स फॉर लैंग्वेज अंडरस्टैंडिंग](https://arxiv.org/abs/1810.04805) जैकब डेवलिन, मिंग-वेई चांग, ​​केंटन ली और क्रिस्टीना टौटानोवा द्वारा प्रकाशित किया गया था। . 1. **[BERT For Sequence Generation](https://huggingface.co/docs/transformers/model_doc/bert-generation)** (गूगल से) साथ देने वाला पेपर [सीक्वेंस जेनरेशन टास्क के लिए प्री-ट्रेंड चेकपॉइंट का इस्तेमाल करना](https ://arxiv.org/abs/1907.12461) साशा रोठे, शशि नारायण, अलियाक्सि सेवेरिन द्वारा। 1. **[BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet)** (VinAI Research से) साथ में पेपर [BERTweet: अंग्रेजी ट्वीट्स के लिए एक पूर्व-प्रशिक्षित भाषा मॉडल](https://aclanthology.org/2020.emnlp-demos.2/) डाट क्वोक गुयेन, थान वु और अन्ह तुआन गुयेन द्वारा प्रकाशित। 1. 
**[BigBird-Pegasus](https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus)** (गूगल रिसर्च से) साथ वाला पेपर [बिग बर्ड: ट्रांसफॉर्मर्स फॉर लॉन्गर सीक्वेंस](https://arxiv .org/abs/2007.14062) मंज़िल ज़हीर, गुरु गुरुगणेश, अविनावा दुबे, जोशुआ आइंस्ली, क्रिस अल्बर्टी, सैंटियागो ओंटानोन, फिलिप फाम, अनिरुद्ध रावुला, किफ़ान वांग, ली यांग, अमर अहमद द्वारा। 1. **[BigBird-RoBERTa](https://huggingface.co/docs/transformers/model_doc/big_bird)** (गूगल रिसर्च से) साथ में पेपर [बिग बर्ड: ट्रांसफॉर्मर्स फॉर लॉन्गर सीक्वेंस](https://arxiv.org/abs/2007.14062) मंज़िल ज़हीर, गुरु गुरुगणेश, अविनावा दुबे, जोशुआ आइंस्ली, क्रिस अल्बर्टी, सैंटियागो ओंटानन, फिलिप फाम द्वारा , अनिरुद्ध रावुला, किफ़ान वांग, ली यांग, अमर अहमद द्वारा पोस्ट किया गया। 1. **[BioGpt](https://huggingface.co/docs/transformers/model_doc/biogpt)** (from Microsoft Research AI4Science) released with the paper [BioGPT: generative pre-trained transformer for biomedical text generation and mining](https://academic.oup.com/bib/advance-article/doi/10.1093/bib/bbac409/6713511?guestAccessKey=a66d9b5d-4f83-4017-bb52-405815c907b9) by Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu. 1. **[BiT](https://huggingface.co/docs/transformers/model_doc/bit)** (from Google AI) released with the paper [Big Transfer (BiT) by Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, Neil Houlsby. 1. **[Blenderbot](https://huggingface.co/docs/transformers/model_doc/blenderbot)** (फेसबुक से) साथ में कागज [एक ओपन-डोमेन चैटबॉट बनाने की विधि](https://arxiv.org /abs/2004.13637) स्टीफन रोलर, एमिली दीनन, नमन गोयल, दा जू, मैरी विलियमसन, यिनहान लियू, जिंग जू, मायल ओट, कर्ट शस्टर, एरिक एम। स्मिथ, वाई-लैन बॉरो, जेसन वेस्टन द्वारा। 1. **[BlenderbotSmall](https://huggingface.co/docs/transformers/model_doc/blenderbot-small)** (फेसबुक से) साथ में पेपर [एक ओपन-डोमेन चैटबॉट बनाने की रेसिपी](https://arxiv .org/abs/2004.13637) स्टीफन रोलर, एमिली दीनन, नमन गोयल, दा जू, मैरी विलियमसन, यिनहान लियू, जिंग जू, मायल ओट, कर्ट शस्टर, एरिक एम स्मिथ, वाई-लैन बॉरो, जेसन वेस्टन द्वारा। 1. **[BLIP](https://huggingface.co/docs/transformers/model_doc/blip)** (from Salesforce) released with the paper [BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation](https://arxiv.org/abs/2201.12086) by Junnan Li, Dongxu Li, Caiming Xiong, Steven Hoi. 1. **[BLIP-2](https://huggingface.co/docs/transformers/model_doc/blip-2)** (Salesforce से) Junnan Li, Dongxu Li, Silvio Savarese, Steven Hoi. द्वाराअनुसंधान पत्र [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) के साथ जारी किया गया 1. **[BLOOM](https://huggingface.co/docs/transformers/model_doc/bloom)** (from BigScience workshop) released by the [BigSicence Workshop](https://bigscience.huggingface.co/). 1. **[BORT](https://huggingface.co/docs/transformers/model_doc/bort)** (एलेक्सा से) कागज के साथ [बीईआरटी के लिए ऑप्टिमल सबआर्किटेक्चर एक्सट्रैक्शन](https://arxiv.org/abs/ 2010.10499) एड्रियन डी विंटर और डैनियल जे पेरी द्वारा। 1. **[BridgeTower](https://huggingface.co/docs/transformers/model_doc/bridgetower)** (हरबिन इंस्टिट्यूट ऑफ़ टेक्नोलॉजी/माइक्रोसॉफ्ट रिसर्च एशिया/इंटेल लैब्स से) कागज के साथ [ब्रिजटॉवर: विजन-लैंग्वेज रिप्रेजेंटेशन लर्निंग में एनकोडर्स के बीच ब्रिज बनाना](<https://arxiv.org/abs/2206.08657>) by Xiao Xu, Chenfei Wu, Shachar Rosenman, Vasudev Lal, Wanxiang Che, Nan Duan. 1. 
**[BROS](https://huggingface.co/docs/transformers/model_doc/bros)** (NAVER CLOVA से) Teakgyu Hong, Donghyun Kim, Mingi Ji, Wonseok Hwang, Daehyun Nam, Sungrae Park. द्वाराअनुसंधान पत्र [BROS: A Pre-trained Language Model Focusing on Text and Layout for Better Key Information Extraction from Documents](https://arxiv.org/abs/2108.04539) के साथ जारी किया गया 1. **[ByT5](https://huggingface.co/docs/transformers/model_doc/byt5)** (Google अनुसंधान से) साथ में कागज [ByT5: पूर्व-प्रशिक्षित बाइट-टू-बाइट मॉडल के साथ एक टोकन-मुक्त भविष्य की ओर] (https://arxiv.org/abs/2105.13626) Linting Xue, Aditya Barua, Noah Constant, रामी अल-रफू, शरण नारंग, मिहिर काले, एडम रॉबर्ट्स, कॉलिन रैफेल द्वारा पोस्ट किया गया। 1. **[CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert)** (इनरिया/फेसबुक/सोरबोन से) साथ में कागज [CamemBERT: एक टेस्टी फ्रेंच लैंग्वेज मॉडल](https:// arxiv.org/abs/1911.03894) लुई मार्टिन*, बेंजामिन मुलर*, पेड्रो जेवियर ऑर्टिज़ सुआरेज़*, योआन ड्यूपॉन्ट, लॉरेंट रोमरी, एरिक विलेमोन्टे डे ला क्लर्जरी, जैमे सेडाह और बेनोइट सगोट द्वारा। 1. **[CANINE](https://huggingface.co/docs/transformers/model_doc/canine)** (Google रिसर्च से) साथ में दिया गया पेपर [कैनाइन: प्री-ट्रेनिंग ए एफिशिएंट टोकनाइजेशन-फ्री एनकोडर फॉर लैंग्वेज रिप्रेजेंटेशन]( https://arxiv.org/abs/2103.06874) जोनाथन एच क्लार्क, डैन गैरेट, यूलिया टर्क, जॉन विएटिंग द्वारा। 1. **[Chinese-CLIP](https://huggingface.co/docs/transformers/model_doc/chinese_clip)** (from OFA-Sys) released with the paper [Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese](https://arxiv.org/abs/2211.01335) by An Yang, Junshu Pan, Junyang Lin, Rui Men, Yichang Zhang, Jingren Zhou, Chang Zhou. 1. **[CLAP](https://huggingface.co/docs/transformers/model_doc/clap)** (LAION-AI से) Yusong Wu, Ke Chen, Tianyu Zhang, Yuchen Hui, Taylor Berg-Kirkpatrick, Shlomo Dubnov. द्वाराअनुसंधान पत्र [Large-scale Contrastive Language-Audio Pretraining with Feature Fusion and Keyword-to-Caption Augmentation](https://arxiv.org/abs/2211.06687) के साथ जारी किया गया 1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (OpenAI से) साथ वाला पेपर [लर्निंग ट्रांसफरेबल विजुअल मॉडल फ्रॉम नेचुरल लैंग्वेज सुपरविजन](https://arxiv.org /abs/2103.00020) एलेक रैडफोर्ड, जोंग वूक किम, क्रिस हैलासी, आदित्य रमेश, गेब्रियल गोह, संध्या अग्रवाल, गिरीश शास्त्री, अमांडा एस्केल, पामेला मिश्किन, जैक क्लार्क, ग्रेचेन क्रुएगर, इल्या सुत्स्केवर द्वारा। 1. **[CLIPSeg](https://huggingface.co/docs/transformers/model_doc/clipseg)** (from University of Göttingen) released with the paper [Image Segmentation Using Text and Image Prompts](https://arxiv.org/abs/2112.10003) by Timo Lüddecke and Alexander Ecker. 1. **[CLVP](https://huggingface.co/docs/transformers/model_doc/clvp)** released with the paper [Better speech synthesis through scaling](https://arxiv.org/abs/2305.07243) by James Betker. 1. **[CodeGen](https://huggingface.co/docs/transformers/model_doc/codegen)** (सेल्सफोर्स से) साथ में पेपर [प्रोग्राम सिंथेसिस के लिए एक संवादात्मक प्रतिमान](https://arxiv.org/abs/2203.13474) एरिक निजकैंप, बो पैंग, हिरोआकी हयाशी, लिफू तू, हुआन वांग, यिंगबो झोउ, सिल्वियो सावरेस, कैमिंग जिओंग रिलीज। 1. 
1. **[CodeLlama](https://huggingface.co/docs/transformers/model_doc/llama_code)** (from MetaAI) released with the paper [Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) by Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve.
1. **[Conditional DETR](https://huggingface.co/docs/transformers/model_doc/conditional_detr)** (from Microsoft Research Asia) released with the paper [Conditional DETR for Fast Training Convergence](https://arxiv.org/abs/2108.06152) by Depu Meng, Xiaokang Chen, Zejia Fan, Gang Zeng, Houqiang Li, Yuhui Yuan, Lei Sun, Jingdong Wang.
1. **[ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert)** (from YituTech) released with the paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan.
1. **[ConvNeXT](https://huggingface.co/docs/transformers/model_doc/convnext)** (from Facebook AI) released with the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie.
1. **[ConvNeXTV2](https://huggingface.co/docs/transformers/model_doc/convnextv2)** (from Facebook AI) released with the paper [ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders](https://arxiv.org/abs/2301.00808) by Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, Saining Xie.
1. **[CPM](https://huggingface.co/docs/transformers/model_doc/cpm)** (from Tsinghua University) released with the paper [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun.
1. **[CPM-Ant](https://huggingface.co/docs/transformers/model_doc/cpmant)** (from OpenBMB) released by the [OpenBMB](https://www.openbmb.org/).
1. **[CTRL](https://huggingface.co/docs/transformers/model_doc/ctrl)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher.
1. **[CvT](https://huggingface.co/docs/transformers/model_doc/cvt)** (from Microsoft) released with the paper [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang.
1. **[Data2Vec](https://huggingface.co/docs/transformers/model_doc/data2vec)** (from Facebook) released with the paper [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli.
1. **[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
1. **[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
1. **[Decision Transformer](https://huggingface.co/docs/transformers/model_doc/decision_transformer)** (from Berkeley/Facebook/Google) released with the paper [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch.
1. **[Deformable DETR](https://huggingface.co/docs/transformers/model_doc/deformable_detr)** (from SenseTime Research) released with the paper [Deformable DETR: Deformable Transformers for End-to-End Object Detection](https://arxiv.org/abs/2010.04159) by Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, Jifeng Dai.
1. **[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (from Facebook) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou.
1. **[DePlot](https://huggingface.co/docs/transformers/model_doc/deplot)** (from Google AI) released with the paper [DePlot: One-shot visual language reasoning by plot-to-table translation](https://arxiv.org/abs/2212.10505) by Fangyu Liu, Julian Martin Eisenschlos, Francesco Piccinno, Syrine Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Wenhu Chen, Nigel Collier, Yasemin Altun.
1. **[DETA](https://huggingface.co/docs/transformers/model_doc/deta)** (from The University of Texas at Austin) released with the paper [NMS Strikes Back](https://arxiv.org/abs/2212.06137) by Jeffrey Ouyang-Zhang, Jang Hyun Cho, Xingyi Zhou, Philipp Krähenbühl.
1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (from Facebook) released with the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko.
1. **[DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan.
1. **[DiNAT](https://huggingface.co/docs/transformers/model_doc/dinat)** (from SHI Labs) released with the paper [Dilated Neighborhood Attention Transformer](https://arxiv.org/abs/2209.15001) by Ali Hassani and Humphrey Shi.
1. **[DINOv2](https://huggingface.co/docs/transformers/model_doc/dinov2)** (from Meta AI) released with the paper [DINOv2: Learning Robust Visual Features without Supervision](https://arxiv.org/abs/2304.07193) by Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, Mahmoud Assran, Nicolas Ballas, Wojciech Galuba, Russell Howes, Po-Yao Huang, Shang-Wen Li, Ishan Misra, Michael Rabbat, Vasu Sharma, Gabriel Synnaeve, Hu Xu, Hervé Jegou, Julien Mairal, Patrick Labatut, Armand Joulin, Piotr Bojanowski.
1. **[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT-2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/distillation) and a German version of DistilBERT.
1. **[DiT](https://huggingface.co/docs/transformers/model_doc/dit)** (from Microsoft Research) released with the paper [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) by Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei.
1. **[Donut](https://huggingface.co/docs/transformers/model_doc/donut)** (from NAVER) released with the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park.
1. **[DPR](https://huggingface.co/docs/transformers/model_doc/dpr)** (from Facebook) released with the paper [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih.
1. **[DPT](https://huggingface.co/docs/transformers/master/model_doc/dpt)** (from Intel Labs) released with the paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by René Ranftl, Alexey Bochkovskiy, Vladlen Koltun.
1. **[EfficientFormer](https://huggingface.co/docs/transformers/model_doc/efficientformer)** (from Snap Research) released with the paper [EfficientFormer: Vision Transformers at MobileNet Speed](https://arxiv.org/abs/2206.01191) by Yanyu Li, Geng Yuan, Yang Wen, Ju Hu, Georgios Evangelidis, Sergey Tulyakov, Yanzhi Wang, Jian Ren.
1. **[EfficientNet](https://huggingface.co/docs/transformers/model_doc/efficientnet)** (from Google Brain) released with the paper [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946) by Mingxing Tan, Quoc V. Le.
1. **[ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning.
1. **[EnCodec](https://huggingface.co/docs/transformers/model_doc/encodec)** (from Meta AI) released with the paper [High Fidelity Neural Audio Compression](https://arxiv.org/abs/2210.13438) by Alexandre Défossez, Jade Copet, Gabriel Synnaeve, Yossi Adi.
1. **[EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoder-decoder)** (from Google Research) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
1. **[ERNIE](https://huggingface.co/docs/transformers/model_doc/ernie)** (from Baidu) released with the paper [ERNIE: Enhanced Representation through Knowledge Integration](https://arxiv.org/abs/1904.09223) by Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, Hua Wu.
1. **[ErnieM](https://huggingface.co/docs/transformers/model_doc/ernie_m)** (from Baidu) released with the paper [ERNIE-M: Enhanced Multilingual Representation by Aligning Cross-lingual Semantics with Monolingual Corpora](https://arxiv.org/abs/2012.15674) by Xuan Ouyang, Shuohuan Wang, Chao Pang, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang.
1. **[ESM](https://huggingface.co/docs/transformers/model_doc/esm)** (from Meta AI) are transformer protein language models. **ESM-1b** was released with the paper [Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences](https://www.pnas.org/content/118/15/e2016239118) by Alexander Rives, Joshua Meier, Tom Sercu, Siddharth Goyal, Zeming Lin, Jason Liu, Demi Guo, Myle Ott, C. Lawrence Zitnick, Jerry Ma and Rob Fergus. **ESM-1v** was released with the paper [Language models enable zero-shot prediction of the effects of mutations on protein function](https://doi.org/10.1101/2021.07.09.450648) by Joshua Meier, Roshan Rao, Robert Verkuil, Jason Liu, Tom Sercu and Alexander Rives. **ESM-2** was released with the paper [Language models of protein sequences at the scale of evolution enable accurate structure prediction](https://doi.org/10.1101/2022.07.20.500902) by Zeming Lin, Halil Akin, Roshan Rao, Brian Hie, Zhongkai Zhu, Wenting Lu, Allan dos Santos Costa, Maryam Fazel-Zarandi, Tom Sercu, Sal Candido, Alexander Rives.
1. **[Falcon](https://huggingface.co/docs/transformers/model_doc/falcon)** (from Technology Innovation Institute) by Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme.
1. **[FLAN-T5](https://huggingface.co/docs/transformers/model_doc/flan-t5)** (from Google AI) released in the repository [google-research/t5x](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints) by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei.
1. **[FLAN-UL2](https://huggingface.co/docs/transformers/model_doc/flan-ul2)** (from Google AI) released in the repository [google-research/t5x](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-ul2-checkpoints) by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei.
1. **[FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab.
1. **[FLAVA](https://huggingface.co/docs/transformers/model_doc/flava)** (from Facebook AI) released with the paper [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela.
1. **[FNet](https://huggingface.co/docs/transformers/model_doc/fnet)** (from Google Research) released with the paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon.
1. **[FocalNet](https://huggingface.co/docs/transformers/model_doc/focalnet)** (from Microsoft Research) released with the paper [Focal Modulation Networks](https://arxiv.org/abs/2203.11926) by Jianwei Yang, Chunyuan Li, Xiyang Dai, Lu Yuan, Jianfeng Gao.
1. **[Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
1. **[Fuyu](https://huggingface.co/docs/transformers/model_doc/fuyu)** (from ADEPT) released with the [blog post](https://www.adept.ai/blog/fuyu-8b) by Rohan Bavishi, Erich Elsen, Curtis Hawthorne, Maxwell Nye, Augustus Odena, Arushi Somani, Sağnak Taşırlar.
1. **[GIT](https://huggingface.co/docs/transformers/model_doc/git)** (from Microsoft Research) released with the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Jianfeng Wang, Zhengyuan Yang, Xiaowei Hu, Linjie Li, Kevin Lin, Zhe Gan, Zicheng Liu, Ce Liu, Lijuan Wang.
1. **[GLPN](https://huggingface.co/docs/transformers/model_doc/glpn)** (from KAIST) released with the paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim.
1. **[GPT](https://huggingface.co/docs/transformers/model_doc/openai-gpt)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever.
1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy.
1. **[GPT NeoX](https://huggingface.co/docs/transformers/model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach.
1. **[GPT NeoX Japanese](https://huggingface.co/docs/transformers/model_doc/gpt_neox_japanese)** (from ABEJA) released by Shinya Otani, Takayoshi Makabe, Anuj Arora, Kyo Hattori.
1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**.
1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki.
1. **[GPT-Sw3](https://huggingface.co/docs/transformers/model_doc/gpt-sw3)** (from AI-Sweden) released with the paper [Lessons Learned from GPT-SW3: Building the First Large-Scale Generative Language Model for Swedish](http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.376.pdf) by Ariel Ekgren, Amaru Cuba Gyllensten, Evangelia Gogoulou, Alice Heiman, Severine Verlinden, Joey Öhman, Fredrik Carlsson, Magnus Sahlgren.
1. **[GPTBigCode](https://huggingface.co/docs/transformers/model_doc/gpt_bigcode)** (from BigCode) released with the paper [SantaCoder: don't reach for the stars!](https://arxiv.org/abs/2301.03988) by Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, Logesh Kumar Umapathi, Carolyn Jane Anderson, Yangtian Zi, Joel Lamy Poirier, Hailey Schoelkopf, Sergey Troshin, Dmitry Abulkhanov, Manuel Romero, Michael Lappert, Francesco De Toni, Bernardo García del Río, Qian Liu, Shamik Bose, Urvashi Bhattacharyya, Terry Yue Zhuo, Ian Yu, Paulo Villegas, Marco Zocca, Sourab Mangrulkar, David Lansky, Huu Nguyen, Danish Contractor, Luis Villa, Jia Li, Dzmitry Bahdanau, Yacine Jernite, Sean Hughes, Daniel Fried, Arjun Guha, Harm de Vries, Leandro von Werra.
1. **[GPTSAN-japanese](https://huggingface.co/docs/transformers/model_doc/gptsan-japanese)** released in the repository [tanreinama/GPTSAN](https://github.com/tanreinama/GPTSAN/blob/main/report/model.md) by Toshiyuki Sakamoto (tanreinama).
1. **[Graphormer](https://huggingface.co/docs/transformers/model_doc/graphormer)** (from Microsoft) released with the paper [Do Transformers Really Perform Bad for Graph Representation?](https://arxiv.org/abs/2106.05234) by Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, Tie-Yan Liu.
1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang.
1. **[HerBERT](https://huggingface.co/docs/transformers/model_doc/herbert)** (from Allegro.pl, AGH University of Science and Technology) released with the paper [KLEJ: Comprehensive Benchmark for Polish Language Understanding](https://www.aclweb.org/anthology/2020.acl-main.111.pdf) by Piotr Rybak, Robert Mroczkowski, Janusz Tracz, Ireneusz Gawlik.
1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.
1. **[I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert)** (from Berkeley) released with the paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer.
1. **[IDEFICS](https://huggingface.co/docs/transformers/model_doc/idefics)** (from HuggingFace) released with the paper [OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents](https://huggingface.co/papers/2306.16527) by Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh.
1. **[ImageGPT](https://huggingface.co/docs/transformers/model_doc/imagegpt)** (from OpenAI) released with the paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever.
1. **[Informer](https://huggingface.co/docs/transformers/model_doc/informer)** (from Beihang University, UC Berkeley, Rutgers University, SEDD Company) released with the paper [Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting](https://arxiv.org/abs/2012.07436) by Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, and Wancai Zhang.
1. **[InstructBLIP](https://huggingface.co/docs/transformers/model_doc/instructblip)** (from Salesforce) released with the paper [InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning](https://arxiv.org/abs/2305.06500) by Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, Steven Hoi.
1. **[Jukebox](https://huggingface.co/docs/transformers/model_doc/jukebox)** (from OpenAI) released with the paper [Jukebox: A Generative Model for Music](https://arxiv.org/pdf/2005.00341.pdf) by Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, Ilya Sutskever.
1. **[KOSMOS-2](https://huggingface.co/docs/transformers/model_doc/kosmos-2)** (from Microsoft Research Asia) released with the paper [Kosmos-2: Grounding Multimodal Large Language Models to the World](https://arxiv.org/abs/2306.14824) by Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei.
1. **[LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou.
1. **[LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou.
1. **[LayoutLMv3](https://huggingface.co/docs/transformers/model_doc/layoutlmv3)** (from Microsoft Research Asia) released with the paper [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) by Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei.
1. **[LayoutXLM](https://huggingface.co/docs/transformers/model_doc/layoutxlm)** (from Microsoft Research Asia) released with the paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei.
1. **[LED](https://huggingface.co/docs/transformers/model_doc/led)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
1. **[LeViT](https://huggingface.co/docs/transformers/model_doc/levit)** (from Meta AI) released with the paper [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) by Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, Matthijs Douze.
1. **[LiLT](https://huggingface.co/docs/transformers/model_doc/lilt)** (from South China University of Technology) released with the paper [LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding](https://arxiv.org/abs/2202.13669) by Jiapeng Wang, Lianwen Jin, Kai Ding.
1. **[LLaMA](https://huggingface.co/docs/transformers/model_doc/llama)** (from The FAIR team of Meta AI) released with the paper [LLaMA: Open and Efficient Foundation Language Models](https://arxiv.org/abs/2302.13971) by Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample.
1. **[Llama2](https://huggingface.co/docs/transformers/model_doc/llama2)** (from The FAIR team of Meta AI) released with the paper [Llama2: Open Foundation and Fine-Tuned Chat Models](https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/) by Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom.
1. **[LLaVa](https://huggingface.co/docs/transformers/model_doc/llava)** (from Microsoft Research & University of Wisconsin-Madison) released with the paper [Visual Instruction Tuning](https://arxiv.org/abs/2304.08485) by Haotian Liu, Chunyuan Li, Yuheng Li and Yong Jae Lee.
1. **[Longformer](https://huggingface.co/docs/transformers/model_doc/longformer)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
1. **[LongT5](https://huggingface.co/docs/transformers/model_doc/longt5)** (from Google AI) released with the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang.
1. **[LUKE](https://huggingface.co/docs/transformers/model_doc/luke)** (from Studio Ousia) released with the paper [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto.
1. **[LXMERT](https://huggingface.co/docs/transformers/model_doc/lxmert)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal.
1. **[M-CTC-T](https://huggingface.co/docs/transformers/model_doc/mctct)** (from Facebook) released with the paper [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert.
1. **[M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)** (from Facebook) released with the paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin.
1. **[MADLAD-400](https://huggingface.co/docs/transformers/model_doc/madlad-400)** (from Google) released with the paper [MADLAD-400: A Multilingual And Document-Level Large Audited Dataset](https://arxiv.org/abs/2309.04662) by Sneha Kudugunta, Isaac Caswell, Biao Zhang, Xavier Garcia, Christopher A. Choquette-Choo, Katherine Lee, Derrick Xin, Aditya Kusupati, Romi Stella, Ankur Bapna, Orhan Firat.
1. **[MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jörg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team.
1. **[MarkupLM](https://huggingface.co/docs/transformers/model_doc/markuplm)** (from Microsoft Research Asia) released with the paper [MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding](https://arxiv.org/abs/2110.08518) by Junlong Li, Yiheng Xu, Lei Cui, Furu Wei.
1. **[Mask2Former](https://huggingface.co/docs/transformers/model_doc/mask2former)** (from FAIR and UIUC) released with the paper [Masked-attention Mask Transformer for Universal Image Segmentation](https://arxiv.org/abs/2112.01527) by Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, Rohit Girdhar.
1. **[MaskFormer](https://huggingface.co/docs/transformers/model_doc/maskformer)** (from Meta and UIUC) released with the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov.
1. **[MatCha](https://huggingface.co/docs/transformers/model_doc/matcha)** (from Google AI) released with the paper [MatCha: Enhancing Visual Language Pretraining with Math Reasoning and Chart Derendering](https://arxiv.org/abs/2212.09662) by Fangyu Liu, Francesco Piccinno, Syrine Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Yasemin Altun, Nigel Collier, Julian Martin Eisenschlos.
1. **[mBART](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer.
1. **[mBART-50](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan.
1. **[MEGA](https://huggingface.co/docs/transformers/model_doc/mega)** (from Facebook) released with the paper [Mega: Moving Average Equipped Gated Attention](https://arxiv.org/abs/2209.10655) by Xuezhe Ma, Chunting Zhou, Xiang Kong, Junxian He, Liangke Gui, Graham Neubig, Jonathan May, and Luke Zettlemoyer.
1. **[Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
1. **[Megatron-GPT2](https://huggingface.co/docs/transformers/model_doc/megatron_gpt2)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
1. **[MGP-STR](https://huggingface.co/docs/transformers/model_doc/mgp-str)** (from Alibaba Research) released with the paper [Multi-Granularity Prediction for Scene Text Recognition](https://arxiv.org/abs/2209.03592) by Peng Wang, Cheng Da, and Cong Yao.
1. **[Mistral](https://huggingface.co/docs/transformers/model_doc/mistral)** (from Mistral AI) by The Mistral AI team: Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
1. **[Mixtral](https://huggingface.co/docs/transformers/model_doc/mixtral)** (from Mistral AI) by The [Mistral AI](https://mistral.ai) team: Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
1. **[mLUKE](https://huggingface.co/docs/transformers/model_doc/mluke)** (from Studio Ousia) released with the paper [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka.
1. **[MMS](https://huggingface.co/docs/transformers/model_doc/mms)** (from Facebook) released with the paper [Scaling Speech Technology to 1,000+ Languages](https://arxiv.org/abs/2305.13516) by Vineel Pratap, Andros Tjandra, Bowen Shi, Paden Tomasello, Arun Babu, Sayani Kundu, Ali Elkahky, Zhaoheng Ni, Apoorv Vyas, Maryam Fazel-Zarandi, Alexei Baevski, Yossi Adi, Xiaohui Zhang, Wei-Ning Hsu, Alexis Conneau, Michael Auli.
1. **[MobileBERT](https://huggingface.co/docs/transformers/model_doc/mobilebert)** (from CMU/Google Brain) released with the paper [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou.
1. **[MobileNetV1](https://huggingface.co/docs/transformers/model_doc/mobilenet_v1)** (from Google Inc.) released with the paper [MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications](https://arxiv.org/abs/1704.04861) by Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam.
1. **[MobileNetV2](https://huggingface.co/docs/transformers/model_doc/mobilenet_v2)** (from Google Inc.) released with the paper [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen.
1. **[MobileViT](https://huggingface.co/docs/transformers/model_doc/mobilevit)** (from Apple) released with the paper [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari.
1. **[MobileViTV2](https://huggingface.co/docs/transformers/model_doc/mobilevitv2)** (from Apple) released with the paper [Separable Self-attention for Mobile Vision Transformers](https://arxiv.org/abs/2206.02680) by Sachin Mehta and Mohammad Rastegari.
1. **[MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu.
1. **[MPT](https://huggingface.co/docs/transformers/model_doc/mpt)** (from MosaicML) released with the repository [llm-foundry](https://github.com/mosaicml/llm-foundry/) by the MosaicML NLP Team.
1. **[MRA](https://huggingface.co/docs/transformers/model_doc/mra)** (from the University of Wisconsin - Madison) released with the paper [Multi Resolution Analysis (MRA)](https://arxiv.org/abs/2207.10284) by Zhanpeng Zeng, Sourav Pal, Jeffery Kline, Glenn M Fung, Vikas Singh.
1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel.
1. **[MusicGen](https://huggingface.co/docs/transformers/model_doc/musicgen)** (from Meta) released with the paper [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284) by Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi and Alexandre Défossez.
1. **[MVP](https://huggingface.co/docs/transformers/model_doc/mvp)** (from RUC AI Box) released with the paper [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
1. **[NAT](https://huggingface.co/docs/transformers/model_doc/nat)** (from SHI Labs) released with the paper [Neighborhood Attention Transformer](https://arxiv.org/abs/2204.07143) by Ali Hassani, Steven Walton, Jiachen Li, Shen Li, and Humphrey Shi.
1. **[Nezha](https://huggingface.co/docs/transformers/model_doc/nezha)** (from Huawei Noah's Ark Lab) released with the paper [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) by Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu.
1. **[NLLB](https://huggingface.co/docs/transformers/model_doc/nllb)** (from Meta) released with the paper [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by the NLLB team.
1. **[NLLB-MOE](https://huggingface.co/docs/transformers/model_doc/nllb-moe)** (from Meta) released with the paper [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by the NLLB team.
1. **[Nougat](https://huggingface.co/docs/transformers/model_doc/nougat)** (from Meta AI) released with the paper [Nougat: Neural Optical Understanding for Academic Documents](https://arxiv.org/abs/2308.13418) by Lukas Blecher, Guillem Cucurull, Thomas Scialom, Robert Stojnic.
1. **[Nyströmformer](https://huggingface.co/docs/transformers/model_doc/nystromformer)** (from the University of Wisconsin - Madison) released with the paper [Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh.
1. **[OneFormer](https://huggingface.co/docs/transformers/model_doc/oneformer)** (from SHI Labs) released with the paper [OneFormer: One Transformer to Rule Universal Image Segmentation](https://arxiv.org/abs/2211.06220) by Jitesh Jain, Jiachen Li, MangTik Chiu, Ali Hassani, Nikita Orlov, Humphrey Shi.
1. **[OpenLlama](https://huggingface.co/docs/transformers/model_doc/open-llama)** (from [s-JoL](https://huggingface.co/s-JoL)) released on GitHub (now removed).
1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al.
1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby.
1. **[OWLv2](https://huggingface.co/docs/transformers/model_doc/owlv2)** (from Google AI) released with the paper [Scaling Open-Vocabulary Object Detection](https://arxiv.org/abs/2306.09683) by Matthias Minderer, Alexey Gritsenko, Neil Houlsby.
1. **[PatchTSMixer](https://huggingface.co/docs/transformers/model_doc/patchtsmixer)** (from IBM Research) released with the paper [TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting](https://arxiv.org/pdf/2306.09364.pdf) by Vijay Ekambaram, Arindam Jati, Nam Nguyen, Phanwadee Sinthong, Jayant Kalagnanam.
1. **[PatchTST](https://huggingface.co/docs/transformers/model_doc/patchtst)** (from IBM) released with the paper [A Time Series is Worth 64 Words: Long-term Forecasting with Transformers](https://arxiv.org/pdf/2211.14730.pdf) by Yuqi Nie, Nam H. Nguyen, Phanwadee Sinthong, Jayant Kalagnanam.
1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu.
1. **[PEGASUS-X](https://huggingface.co/docs/transformers/model_doc/pegasus_x)** (from Google) released with the paper [Investigating Efficiently Extending Transformers for Long Input Summarization](https://arxiv.org/abs/2208.04347) by Jason Phang, Yao Zhao, Peter J. Liu.
1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira.
1. **[Persimmon](https://huggingface.co/docs/transformers/model_doc/persimmon)** (from ADEPT) released with the [blog post](https://www.adept.ai/blog/persimmon-8b) by Erich Elsen, Augustus Odena, Maxwell Nye, Sağnak Taşırlar, Tri Dao, Curtis Hawthorne, Deepak Moparthi, Arushi Somani.
1. **[Phi](https://huggingface.co/docs/transformers/model_doc/phi)** (from Microsoft) released with the papers - [Textbooks Are All You Need](https://arxiv.org/abs/2306.11644) by Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Harkirat Singh Behl, Xin Wang, Sébastien Bubeck, Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee and Yuanzhi Li, [Textbooks Are All You Need II: phi-1.5 technical report](https://arxiv.org/abs/2309.05463) by Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar and Yin Tat Lee.
1. **[PhoBERT](https://huggingface.co/docs/transformers/model_doc/phobert)** (from VinAI Research) released with the paper [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) by Dat Quoc Nguyen and Anh Tuan Nguyen.
1. **[Pix2Struct](https://huggingface.co/docs/transformers/model_doc/pix2struct)** (from Google) released with the paper [Pix2Struct: Screenshot Parsing as Pretraining for Visual Language Understanding](https://arxiv.org/abs/2210.03347) by Kenton Lee, Mandar Joshi, Iulia Turc, Hexiang Hu, Fangyu Liu, Julian Eisenschlos, Urvashi Khandelwal, Peter Shaw, Ming-Wei Chang, Kristina Toutanova.
1. **[PLBart](https://huggingface.co/docs/transformers/model_doc/plbart)** (from UCLA NLP) released with the paper [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang.
1. **[PoolFormer](https://huggingface.co/docs/transformers/model_doc/poolformer)** (from Sea AI Labs) released with the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng.
1. **[Pop2Piano](https://huggingface.co/docs/transformers/model_doc/pop2piano)** released with the paper [Pop2Piano : Pop Audio-based Piano Cover Generation](https://arxiv.org/abs/2211.00895) by Jongho Choi, Kyogu Lee.
1. **[ProphetNet](https://huggingface.co/docs/transformers/model_doc/prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
1. **[PVT](https://huggingface.co/docs/transformers/model_doc/pvt)** (from Nanjing University, The University of Hong Kong etc.) released with the paper [Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions](https://arxiv.org/pdf/2102.12122.pdf) by Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao.
1. **[QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert)** (from NVIDIA) released with the paper [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius.
1. **[RAG](https://huggingface.co/docs/transformers/model_doc/rag)** (from Facebook) released with the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) by Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela.
1. **[REALM](https://huggingface.co/docs/transformers/model_doc/realm.html)** (from Google Research) released with the paper [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang.
1. **[Reformer](https://huggingface.co/docs/transformers/model_doc/reformer)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya.
1. **[RegNet](https://huggingface.co/docs/transformers/model_doc/regnet)** (from META Research) released with the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollár.
1. **[RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert)** (from Google Research) released with the paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/pdf/2010.12821.pdf) by Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder.
1. **[ResNet](https://huggingface.co/docs/transformers/model_doc/resnet)** (from Microsoft Research) released with the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun.
1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (from Facebook), released together with the paper [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov.
1. **[RoBERTa-PreLayerNorm](https://huggingface.co/docs/transformers/model_doc/roberta-prelayernorm)** (from Facebook) released with the paper [fairseq: A Fast, Extensible Toolkit for Sequence Modeling](https://arxiv.org/abs/1904.01038) by Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, Michael Auli.
1. **[RoCBert](https://huggingface.co/docs/transformers/model_doc/roc_bert)** (from WeChatAI) released with the paper [RoCBert: Robust Chinese Bert with Multimodal Contrastive Pretraining](https://aclanthology.org/2022.acl-long.65.pdf) by HuiSu, WeiweiShi, XiaoyuShen, XiaoZhou, TuoJi, JiaruiFang, JieZhou.
1. **[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (from ZhuiyiTechnology), released together with the paper [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/pdf/2104.09864v1.pdf) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu.
1. **[RWKV](https://huggingface.co/docs/transformers/model_doc/rwkv)** (from Bo Peng) released with [this repo](https://github.com/BlinkDL/RWKV-LM) by Bo Peng.
1. **[SeamlessM4T](https://huggingface.co/docs/transformers/model_doc/seamless_m4t)** (from Meta AI) released with the paper [SeamlessM4T — Massively Multilingual & Multimodal Machine Translation](https://dl.fbaipublicfiles.com/seamless/seamless_m4t_paper.pdf) by the Seamless Communication team.
1. **[SeamlessM4Tv2](https://huggingface.co/docs/transformers/model_doc/seamless_m4t_v2)** (from Meta AI) released with the paper [Seamless: Multilingual Expressive and Streaming Speech Translation](https://ai.meta.com/research/publications/seamless-multilingual-expressive-and-streaming-speech-translation/) by the Seamless Communication team.
1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo.
1. **[Segment Anything](https://huggingface.co/docs/transformers/model_doc/sam)** (from Meta AI) released with the paper [Segment Anything](https://arxiv.org/pdf/2304.02643v1.pdf) by Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alex Berg, Wan-Yen Lo, Piotr Dollar, Ross Girshick.
1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
1. **[SpeechT5](https://huggingface.co/docs/transformers/model_doc/speecht5)** (from Microsoft Research) released with the paper [SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing](https://arxiv.org/abs/2110.07205) by Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei.
1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (from Facebook), released together with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino.
1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (from Facebook), released together with the paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau.
1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (from Tel Aviv University), released together with the paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy.
1. **[SqueezeBERT](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (from Berkeley) released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer.
1. **[SwiftFormer](https://huggingface.co/docs/transformers/model_doc/swiftformer)** (from MBZUAI) released with the paper [SwiftFormer: Efficient Additive Attention for Transformer-based Real-time Mobile Vision Applications](https://arxiv.org/abs/2303.15446) by Abdelrahman Shaker, Muhammad Maaz, Hanoona Rasheed, Salman Khan, Ming-Hsuan Yang, Fahad Shahbaz Khan.
1. **[Swin Transformer](https://huggingface.co/docs/transformers/model_doc/swin)** (from Microsoft) released with the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo.
1. **[Swin Transformer V2](https://huggingface.co/docs/transformers/model_doc/swinv2)** (from Microsoft) released with the paper [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) by Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo.
1. **[Swin2SR](https://huggingface.co/docs/transformers/model_doc/swin2sr)** (from University of Würzburg) released with the paper [Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration](https://arxiv.org/abs/2209.11345) by Marcos V. Conde, Ui-Jin Choi, Maxime Burchi, Radu Timofte.
1. **[SwitchTransformers](https://huggingface.co/docs/transformers/model_doc/switch_transformers)** (from Google) released with the paper [Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity](https://arxiv.org/abs/2101.03961) by William Fedus, Barret Zoph, Noam Shazeer.
1. **[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (from Google AI) released in the repository [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
1. **[Table Transformer](https://huggingface.co/docs/transformers/model_doc/table-transformer)** (from Microsoft Research) released with the paper [PubTables-1M: Towards Comprehensive Table Extraction From Unstructured Documents](https://arxiv.org/abs/2110.00061) by Brandon Smock, Rohith Pesala, Robin Abraham.
1. **[TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas)** (from Google AI) released with the paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) by Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos.
1. **[TAPEX](https://huggingface.co/docs/transformers/model_doc/tapex)** (from Microsoft Research) released with the paper [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou.
1. **[Time Series Transformer](https://huggingface.co/docs/transformers/model_doc/time_series_transformer)** (from HuggingFace).
1. **[TimeSformer](https://huggingface.co/docs/transformers/model_doc/timesformer)** (from Facebook) released with the paper [Is Space-Time Attention All You Need for Video Understanding?](https://arxiv.org/abs/2102.05095) by Gedas Bertasius, Heng Wang, Lorenzo Torresani.
1. **[Trajectory Transformer](https://huggingface.co/docs/transformers/model_doc/trajectory_transformers)** (from the University of California at Berkeley) released with the paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine.
1. **[Transformer-XL](https://huggingface.co/docs/transformers/model_doc/transfo-xl)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov.
1. **[TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr)** (from Microsoft) released with the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei.
1. **[TVLT](https://huggingface.co/docs/transformers/model_doc/tvlt)** (from UNC Chapel Hill) released with the paper [TVLT: Textless Vision-Language Transformer](https://arxiv.org/abs/2209.14156) by Zineng Tang, Jaemin Cho, Yixin Nie, Mohit Bansal.
1. **[TVP](https://huggingface.co/docs/transformers/model_doc/tvp)** (from Intel) released with the paper [Text-Visual Prompting for Efficient 2D Temporal Video Grounding](https://arxiv.org/abs/2303.04995) by Yimeng Zhang, Xin Chen, Jinghan Jia, Sijia Liu, Ke Ding.
1. **[UL2](https://huggingface.co/docs/transformers/model_doc/ul2)** (from Google Research) released with the paper [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler
1. **[UMT5](https://huggingface.co/docs/transformers/model_doc/umt5)** (from Google Research) released with the paper [UniMax: Fairer and More Effective Language Sampling for Large-Scale Multilingual Pretraining](https://openreview.net/forum?id=kXwdL1cWOAi) by Hyung Won Chung, Xavier Garcia, Adam Roberts, Yi Tay, Orhan Firat, Sharan Narang, Noah Constant.
1. **[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (from Microsoft Research) released with the paper [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang.
1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech-sat)** (from Microsoft Research) released with the paper [UNISPEECH-SAT: Universal Speech Representation Learning with Speaker Aware Pre-Training](https://arxiv.org/abs/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu.
1. **[UnivNet](https://huggingface.co/docs/transformers/model_doc/univnet)** (from Kakao Corporation) released with the paper [UnivNet: A Neural Vocoder with Multi-Resolution Spectrogram Discriminators for High-Fidelity Waveform Generation](https://arxiv.org/abs/2106.07889) by Won Jang, Dan Lim, Jaesam Yoon, Bongwan Kim, and Juntae Kim.
1. **[UPerNet](https://huggingface.co/docs/transformers/model_doc/upernet)** (from Peking University) released with the paper [Unified Perceptual Parsing for Scene Understanding](https://arxiv.org/abs/1807.10221) by Tete Xiao, Yingcheng Liu, Bolei Zhou, Yuning Jiang, Jian Sun.
1. **[VAN](https://huggingface.co/docs/transformers/model_doc/van)** (from Tsinghua University and Nankai University) released with the paper [Visual Attention Network](https://arxiv.org/pdf/2202.09741.pdf) by Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu.
1. **[VideoMAE](https://huggingface.co/docs/transformers/model_doc/videomae)** (from Multimedia Computing Group, Nanjing University) released with the paper [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) by Zhan Tong, Yibing Song, Jue Wang, Limin Wang.
1. **[ViLT](https://huggingface.co/docs/transformers/model_doc/vilt)** (from NAVER AI Lab/Kakao Enterprise/Kakao Brain) released with the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Wonjae Kim, Bokyung Son, Ildoo Kim.
1. **[VipLlava](https://huggingface.co/docs/transformers/model_doc/vipllava)** (from University of Wisconsin–Madison) released with the paper [Making Large Multimodal Models Understand Arbitrary Visual Prompts](https://arxiv.org/abs/2312.00784) by Mu Cai, Haotian Liu, Siva Karthik Mustikovela, Gregory P. Meyer, Yuning Chai, Dennis Park, Yong Jae Lee.
1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.
1. **[VisualBERT](https://huggingface.co/docs/transformers/model_doc/visual_bert)** (from UCLA NLP) released with the paper [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang.
1. **[ViT Hybrid](https://huggingface.co/docs/transformers/model_doc/vit_hybrid)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.
1. **[VitDet](https://huggingface.co/docs/transformers/model_doc/vitdet)** (from Meta AI) released with the paper [Exploring Plain Vision Transformer Backbones for Object Detection](https://arxiv.org/abs/2203.16527) by Yanghao Li, Hanzi Mao, Ross Girshick, Kaiming He.
1. **[ViTMAE](https://huggingface.co/docs/transformers/model_doc/vit_mae)** (from Meta AI) released with the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick.
1. **[ViTMatte](https://huggingface.co/docs/transformers/model_doc/vitmatte)** (from HUST-VL) released with the paper [ViTMatte: Boosting Image Matting with Pretrained Plain Vision Transformers](https://arxiv.org/abs/2305.15272) by Jingfeng Yao, Xinggang Wang, Shusheng Yang, Baoyuan Wang.
1. **[ViTMSN](https://huggingface.co/docs/transformers/model_doc/vit_msn)** (from Meta AI) released with the paper [Masked Siamese Networks for Label-Efficient Learning](https://arxiv.org/abs/2204.07141) by Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Michael Rabbat, Nicolas Ballas.
1. **[VITS](https://huggingface.co/docs/transformers/model_doc/vits)** (from Kakao Enterprise) released with the paper [Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech](https://arxiv.org/abs/2106.06103) by Jaehyeon Kim, Jungil Kong, Juhee Son.
1. **[ViViT](https://huggingface.co/docs/transformers/model_doc/vivit)** (from Google Research) released with the paper [ViViT: A Video Vision Transformer](https://arxiv.org/abs/2103.15691) by Anurag Arnab, Mostafa Dehghani, Georg Heigold, Chen Sun, Mario Lučić, Cordelia Schmid.
1. **[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (from Facebook AI) released with the paper [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
1. **[Wav2Vec2-Conformer](https://huggingface.co/docs/transformers/model_doc/wav2vec2-conformer)** (from Facebook AI) released with the paper [FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino.
1. **[Wav2Vec2Phoneme](https://huggingface.co/docs/transformers/model_doc/wav2vec2_phoneme)** (from Facebook AI) released with the paper [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) by Qiantong Xu, Alexei Baevski, Michael Auli.
1. **[WavLM](https://huggingface.co/docs/transformers/model_doc/wavlm)** (from Microsoft Research) released with the paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei.
1. **[Whisper](https://huggingface.co/docs/transformers/model_doc/whisper)** (from OpenAI) released with the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://cdn.openai.com/papers/whisper.pdf) by Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever.
1. **[X-CLIP](https://huggingface.co/docs/transformers/model_doc/xclip)** (from Microsoft Research) released with the paper [Expanding Language-Image Pretrained Models for General Video Recognition](https://arxiv.org/abs/2208.02816) by Bolin Ni, Houwen Peng, Minghao Chen, Songyang Zhang, Gaofeng Meng, Jianlong Fu, Shiming Xiang, Haibin Ling.
1. **[X-MOD](https://huggingface.co/docs/transformers/model_doc/xmod)** (from Meta AI) released with the paper [Lifting the Curse of Multilinguality by Pre-training Modular Transformers](http://dx.doi.org/10.18653/v1/2022.naacl-main.255) by Jonas Pfeiffer, Naman Goyal, Xi Lin, Xian Li, James Cross, Sebastian Riedel, Mikel Artetxe.
1. **[XGLM](https://huggingface.co/docs/transformers/model_doc/xglm)** (from Facebook AI) released with the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li.
1. **[XLM](https://huggingface.co/docs/transformers/model_doc/xlm)** (from Facebook) released with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau.
1. **[XLM-ProphetNet](https://huggingface.co/docs/transformers/model_doc/xlm-prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
1. **[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlm-roberta)** (from Facebook AI) released with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov.
1. **[XLM-RoBERTa-XL](https://huggingface.co/docs/transformers/model_doc/xlm-roberta-xl)** (from Facebook AI) released with the paper [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau.
1. **[XLM-V](https://huggingface.co/docs/transformers/model_doc/xlm-v)** (from Meta AI) released with the paper [XLM-V: Overcoming the Vocabulary Bottleneck in Multilingual Masked Language Models](https://arxiv.org/abs/2301.10472) by Davis Liang, Hila Gonen, Yuning Mao, Rui Hou, Naman Goyal, Marjan Ghazvininejad, Luke Zettlemoyer, Madian Khabsa.
1. **[XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet)** (from Google/CMU) released with the paper [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le.
1. **[XLS-R](https://huggingface.co/docs/transformers/model_doc/xls_r)** (from Facebook AI) released with the paper [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) by Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli.
1. **[XLSR-Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/xlsr_wav2vec2)** (from Facebook AI) released with the paper [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli.
1. **[YOLOS](https://huggingface.co/docs/transformers/model_doc/yolos)** (from Huazhong University of Science & Technology) released with the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu.
1. **[YOSO](https://huggingface.co/docs/transformers/model_doc/yoso)** (from the University of Wisconsin - Madison) released with the paper You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling by Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh.
1. Want to contribute a new model?
We have a **detailed guide and templates** to walk you through the process of adding a new model. You can find them in the [`templates`](./templates) directory of the repository. Be sure to check the [contribution guidelines](./CONTRIBUTING.md) and to contact the maintainers or open an issue to collect feedback before starting your PR. To check whether a model already has an implementation in Flax, PyTorch or TensorFlow, or has an associated tokenizer in the Tokenizers library, refer to [this table](https://huggingface.co/docs/transformers/index#supported-frameworks). These implementations have been tested on several datasets (see the example scripts) and should perform comparably to the original implementations. You can find more details on their behaviour in the examples [section](https://huggingface.co/docs/transformers/examples) of the documentation.

## Learn more

| Section | Description |
|-|-|
| [Documentation](https://huggingface.co/transformers/) | Full API documentation and tutorials |
| [Task summary](https://huggingface.co/docs/transformers/task_summary) | Tasks supported by Transformers |
| [Preprocessing tutorial](https://huggingface.co/docs/transformers/preprocessing) | Using the `Tokenizer` class to prepare data for the models |
| [Training and fine-tuning](https://huggingface.co/docs/transformers/training) | Using the models provided by Transformers in a PyTorch/TensorFlow training loop and with the `Trainer` API |
| [Quick start: fine-tuning and usage scripts](https://github.com/huggingface/transformers/tree/main/examples) | Example scripts for a wide range of tasks |
| [Model sharing and uploading](https://huggingface.co/docs/transformers/model_sharing) | Upload and share your fine-tuned models with the community |
| [Migration](https://huggingface.co/docs/transformers/migration) | Migrating to Transformers from `pytorch-transformers` or `pytorch-pretrained-bert` |

## Citation

We have officially published a [paper](https://www.aclweb.org/anthology/2020.emnlp-demos.6/) for this library; if you use the Transformers library, please cite:

```bibtex
@inproceedings{wolf-etal-2020-transformers,
    title = "Transformers: State-of-the-Art Natural Language Processing",
    author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
    month = oct,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
    pages = "38--45"
}
```
huggingface/transformers/blob/main/README_hd.md
!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # Stable Diffusion pipelines Stable Diffusion is a text-to-image latent diffusion model created by the researchers and engineers from [CompVis](https://github.com/CompVis), [Stability AI](https://stability.ai/) and [LAION](https://laion.ai/). Latent diffusion applies the diffusion process over a lower dimensional latent space to reduce memory and compute complexity. This specific type of diffusion model was proposed in [High-Resolution Image Synthesis with Latent Diffusion Models](https://huggingface.co/papers/2112.10752) by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer. Stable Diffusion is trained on 512x512 images from a subset of the LAION-5B dataset. This model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts. With its 860M UNet and 123M text encoder, the model is relatively lightweight and can run on consumer GPUs. For more details about how Stable Diffusion works and how it differs from the base latent diffusion model, take a look at the Stability AI [announcement](https://stability.ai/blog/stable-diffusion-announcement) and our own [blog post](https://huggingface.co/blog/stable_diffusion#how-does-stable-diffusion-work) for more technical details. You can find the original codebase for Stable Diffusion v1.0 at [CompVis/stable-diffusion](https://github.com/CompVis/stable-diffusion) and Stable Diffusion v2.0 at [Stability-AI/stablediffusion](https://github.com/Stability-AI/stablediffusion) as well as their original scripts for various tasks. Additional official checkpoints for the different Stable Diffusion versions and tasks can be found on the [CompVis](https://huggingface.co/CompVis), [Runway](https://huggingface.co/runwayml), and [Stability AI](https://huggingface.co/stabilityai) Hub organizations. Explore these organizations to find the best checkpoint for your use-case! 
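For a quick orientation before diving into the individual pipelines, here is a minimal text-to-image sketch. It assumes the `CompVis/stable-diffusion-v1-4` checkpoint used elsewhere on this page and a CUDA-capable GPU; any other Stable Diffusion checkpoint from the Hub can be swapped in.

```py
from diffusers import StableDiffusionPipeline

# Load the pipeline and move it to the GPU (assumes a CUDA device is available)
pipeline = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
pipeline = pipeline.to("cuda")

# Generate an image from a text prompt and save it to disk
image = pipeline("a photograph of an astronaut riding a horse").images[0]
image.save("astronaut_rides_horse.png")
```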
The table below summarizes the available Stable Diffusion pipelines, their supported tasks, and an interactive demo: <div class="flex justify-center"> <div class="rounded-xl border border-gray-200"> <table class="min-w-full divide-y-2 divide-gray-200 bg-white text-sm"> <thead> <tr> <th class="px-4 py-2 font-medium text-gray-900 text-left"> Pipeline </th> <th class="px-4 py-2 font-medium text-gray-900 text-left"> Supported tasks </th> <th class="px-4 py-2 font-medium text-gray-900 text-left"> 🤗 Space </th> </tr> </thead> <tbody class="divide-y divide-gray-200"> <tr> <td class="px-4 py-2 text-gray-700"> <a href="./text2img">StableDiffusion</a> </td> <td class="px-4 py-2 text-gray-700">text-to-image</td> <td class="px-4 py-2"><a href="https://huggingface.co/spaces/stabilityai/stable-diffusion"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"/></a> </td> </tr> <tr> <td class="px-4 py-2 text-gray-700"> <a href="./img2img">StableDiffusionImg2Img</a> </td> <td class="px-4 py-2 text-gray-700">image-to-image</td> <td class="px-4 py-2"><a href="https://huggingface.co/spaces/huggingface/diffuse-the-rest"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"/></a> </td> </tr> <tr> <td class="px-4 py-2 text-gray-700"> <a href="./inpaint">StableDiffusionInpaint</a> </td> <td class="px-4 py-2 text-gray-700">inpainting</td> <td class="px-4 py-2"><a href="https://huggingface.co/spaces/runwayml/stable-diffusion-inpainting"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"/></a> </td> </tr> <tr> <td class="px-4 py-2 text-gray-700"> <a href="./depth2img">StableDiffusionDepth2Img</a> </td> <td class="px-4 py-2 text-gray-700">depth-to-image</td> <td class="px-4 py-2"><a href="https://huggingface.co/spaces/radames/stable-diffusion-depth2img"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"/></a> </td> </tr> <tr> <td class="px-4 py-2 text-gray-700"> <a href="./image_variation">StableDiffusionImageVariation</a> </td> <td class="px-4 py-2 text-gray-700">image variation</td> <td class="px-4 py-2"><a href="https://huggingface.co/spaces/lambdalabs/stable-diffusion-image-variations"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"/></a> </td> </tr> <tr> <td class="px-4 py-2 text-gray-700"> <a href="./stable_diffusion_safe">StableDiffusionPipelineSafe</a> </td> <td class="px-4 py-2 text-gray-700">filtered text-to-image</td> <td class="px-4 py-2"><a href="https://huggingface.co/spaces/AIML-TUDA/unsafe-vs-safe-stable-diffusion"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"/></a> </td> </tr> <tr> <td class="px-4 py-2 text-gray-700"> <a href="./stable_diffusion_2">StableDiffusion2</a> </td> <td class="px-4 py-2 text-gray-700">text-to-image, inpainting, depth-to-image, super-resolution</td> <td class="px-4 py-2"><a href="https://huggingface.co/spaces/stabilityai/stable-diffusion"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"/></a> </td> </tr> <tr> <td class="px-4 py-2 text-gray-700"> <a href="./stable_diffusion_xl">StableDiffusionXL</a> </td> <td class="px-4 py-2 text-gray-700">text-to-image, image-to-image</td> <td class="px-4 py-2"><a href="https://huggingface.co/spaces/RamAnanth1/stable-diffusion-xl"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"/></a> </td> </tr> <tr> <td class="px-4 py-2 text-gray-700"> <a 
href="./latent_upscale">StableDiffusionLatentUpscale</a> </td> <td class="px-4 py-2 text-gray-700">super-resolution</td> <td class="px-4 py-2"><a href="https://huggingface.co/spaces/huggingface-projects/stable-diffusion-latent-upscaler"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"/></a> </td> </tr> <tr> <td class="px-4 py-2 text-gray-700"> <a href="./upscale">StableDiffusionUpscale</a> </td> <td class="px-4 py-2 text-gray-700">super-resolution</td> </tr> <tr> <td class="px-4 py-2 text-gray-700"> <a href="./ldm3d_diffusion">StableDiffusionLDM3D</a> </td> <td class="px-4 py-2 text-gray-700">text-to-rgb, text-to-depth, text-to-pano</td> <td class="px-4 py-2"><a href="https://huggingface.co/spaces/r23/ldm3d-space"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"/></a> </td> </tr> <tr> <td class="px-4 py-2 text-gray-700"> <a href="./ldm3d_diffusion">StableDiffusionUpscaleLDM3D</a> </td> <td class="px-4 py-2 text-gray-700">ldm3d super-resolution</td> </tr> </tbody> </table> </div> </div> ## Tips To help you get the most out of the Stable Diffusion pipelines, here are a few tips for improving performance and usability. These tips are applicable to all Stable Diffusion pipelines. ### Explore tradeoff between speed and quality [`StableDiffusionPipeline`] uses the [`PNDMScheduler`] by default, but 🤗 Diffusers provides many other schedulers (some of which are faster or output better quality) that are compatible. For example, if you want to use the [`EulerDiscreteScheduler`] instead of the default: ```py from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler pipeline = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4") pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) # or euler_scheduler = EulerDiscreteScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler") pipeline = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", scheduler=euler_scheduler) ``` ### Reuse pipeline components to save memory To save memory and use the same components across multiple pipelines, use the `.components` method to avoid loading weights into RAM more than once. ```py from diffusers import ( StableDiffusionPipeline, StableDiffusionImg2ImgPipeline, StableDiffusionInpaintPipeline, ) text2img = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4") img2img = StableDiffusionImg2ImgPipeline(**text2img.components) inpaint = StableDiffusionInpaintPipeline(**text2img.components) # now you can use text2img(...), img2img(...), inpaint(...) just like the call methods of each respective pipeline ```
huggingface/diffusers/blob/main/docs/source/en/api/pipelines/stable_diffusion/overview.md
# Flax API

[[autodoc]] safetensors.flax.load_file

[[autodoc]] safetensors.flax.load

[[autodoc]] safetensors.flax.save_file

[[autodoc]] safetensors.flax.save
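As a minimal sketch of how the functions documented above fit together (the tensor names and file path are only placeholders):

```python
import jax.numpy as jnp
from safetensors.flax import save_file, load_file

# Save a flat dict of named arrays to a .safetensors file
tensors = {
    "embedding": jnp.zeros((512, 1024)),
    "attention": jnp.zeros((256, 256)),
}
save_file(tensors, "model.safetensors")

# Load the tensors back as a dict of JAX arrays
loaded = load_file("model.safetensors")
print(loaded["embedding"].shape)  # (512, 1024)
```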
huggingface/safetensors/blob/main/docs/source/api/flax.mdx
!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # FlauBERT <div class="flex flex-wrap space-x-1"> <a href="https://huggingface.co/models?filter=flaubert"> <img alt="Models" src="https://img.shields.io/badge/All_model_pages-flaubert-blueviolet"> </a> <a href="https://huggingface.co/spaces/docs-demos/flaubert_small_cased"> <img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"> </a> </div> ## Overview The FlauBERT model was proposed in the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le et al. It's a transformer model pretrained using a masked language modeling (MLM) objective (like BERT). The abstract from the paper is the following: *Language models have become a key step to achieve state-of-the art results in many different Natural Language Processing (NLP) tasks. Leveraging the huge amount of unlabeled texts nowadays available, they provide an efficient way to pre-train continuous word representations that can be fine-tuned for a downstream task, along with their contextualization at the sentence level. This has been widely demonstrated for English using contextualized representations (Dai and Le, 2015; Peters et al., 2018; Howard and Ruder, 2018; Radford et al., 2018; Devlin et al., 2019; Yang et al., 2019b). In this paper, we introduce and share FlauBERT, a model learned on a very large and heterogeneous French corpus. Models of different sizes are trained using the new CNRS (French National Centre for Scientific Research) Jean Zay supercomputer. We apply our French language models to diverse NLP tasks (text classification, paraphrasing, natural language inference, parsing, word sense disambiguation) and show that most of the time they outperform other pretraining approaches. Different versions of FlauBERT as well as a unified evaluation protocol for the downstream tasks, called FLUE (French Language Understanding Evaluation), are shared to the research community for further reproducible experiments in French NLP.* This model was contributed by [formiel](https://huggingface.co/formiel). The original code can be found [here](https://github.com/getalp/Flaubert). Tips: - Like RoBERTa, without the sentence ordering prediction (so just trained on the MLM objective). 
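As a quick sketch of basic usage (the `flaubert/flaubert_base_cased` checkpoint name is assumed here for illustration; any FlauBERT checkpoint from the Hub works the same way):

```python
import torch
from transformers import FlaubertModel, FlaubertTokenizer

# Checkpoint name assumed for illustration
tokenizer = FlaubertTokenizer.from_pretrained("flaubert/flaubert_base_cased")
model = FlaubertModel.from_pretrained("flaubert/flaubert_base_cased")

inputs = tokenizer("Le camembert est délicieux !", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.last_hidden_state.shape)  # [batch_size, sequence_length, hidden_size]
```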
## Resources - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice) ## FlaubertConfig [[autodoc]] FlaubertConfig ## FlaubertTokenizer [[autodoc]] FlaubertTokenizer <frameworkcontent> <pt> ## FlaubertModel [[autodoc]] FlaubertModel - forward ## FlaubertWithLMHeadModel [[autodoc]] FlaubertWithLMHeadModel - forward ## FlaubertForSequenceClassification [[autodoc]] FlaubertForSequenceClassification - forward ## FlaubertForMultipleChoice [[autodoc]] FlaubertForMultipleChoice - forward ## FlaubertForTokenClassification [[autodoc]] FlaubertForTokenClassification - forward ## FlaubertForQuestionAnsweringSimple [[autodoc]] FlaubertForQuestionAnsweringSimple - forward ## FlaubertForQuestionAnswering [[autodoc]] FlaubertForQuestionAnswering - forward </pt> <tf> ## TFFlaubertModel [[autodoc]] TFFlaubertModel - call ## TFFlaubertWithLMHeadModel [[autodoc]] TFFlaubertWithLMHeadModel - call ## TFFlaubertForSequenceClassification [[autodoc]] TFFlaubertForSequenceClassification - call ## TFFlaubertForMultipleChoice [[autodoc]] TFFlaubertForMultipleChoice - call ## TFFlaubertForTokenClassification [[autodoc]] TFFlaubertForTokenClassification - call ## TFFlaubertForQuestionAnsweringSimple [[autodoc]] TFFlaubertForQuestionAnsweringSimple - call </tf> </frameworkcontent>
huggingface/transformers/blob/main/docs/source/en/model_doc/flaubert.md
# Organizations

The Hugging Face Hub offers **Organizations**, which can be used to group accounts and manage datasets, models, and Spaces. The Hub also allows admins to set user roles to [**control access to repositories**](./organizations-security) and manage their organization's [payment method and billing info](https://huggingface.co/pricing). If an organization needs to track user access to a dataset due to licensing or privacy issues, it can enable [user access requests](./datasets-gated).

## Contents

- [Managing Organizations](./organizations-managing)
- [Organization Cards](./organizations-cards)
- [Access Control in Organizations](./organizations-security)
- [Enterprise Hub features](./enterprise-hub)
- [SSO in Organizations](./enterprise-sso)
- [Audit Logs](./audit-logs)
- [Storage Regions](./storage-regions)
huggingface/hub-docs/blob/main/docs/hub/organizations.md
!--⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # 🤗 Hub client library The `huggingface_hub` library allows you to interact with the [Hugging Face Hub](https://hf.co), a machine learning platform for creators and collaborators. Discover pre-trained models and datasets for your projects or play with the hundreds of machine learning apps hosted on the Hub. You can also create and share your own models and datasets with the community. The `huggingface_hub` library provides a simple way to do all these things with Python. Read the [quick start guide](quick-start) to get up and running with the `huggingface_hub` library. You will learn how to download files from the Hub, create a repository, and upload files to the Hub. Keep reading to learn more about how to manage your repositories on the 🤗 Hub, how to interact in discussions or even how to access the Inference API. <div class="mt-10"> <div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-2 md:gap-y-4 md:gap-x-5"> <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./guides/overview"> <div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">How-to guides</div> <p class="text-gray-700">Practical guides to help you achieve a specific goal. Take a look at these guides to learn how to use huggingface_hub to solve real-world problems.</p> </a> <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./package_reference/overview"> <div class="w-full text-center bg-gradient-to-br from-purple-400 to-purple-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Reference</div> <p class="text-gray-700">Exhaustive and technical description of huggingface_hub classes and methods.</p> </a> <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./concepts/git_vs_http"> <div class="w-full text-center bg-gradient-to-br from-pink-400 to-pink-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Conceptual guides</div> <p class="text-gray-700">High-level explanations for building a better understanding of huggingface_hub philosophy.</p> </a> </div> </div> <!-- <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./tutorials/overview" ><div class="w-full text-center bg-gradient-to-br from-blue-400 to-blue-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Tutorials</div> <p class="text-gray-700">Learn the basics and become familiar with using huggingface_hub to programmatically interact with the 🤗 Hub!</p> </a> --> ## Contribute All contributions to the `huggingface_hub` are welcomed and equally valued! 🤗 Besides adding or fixing existing issues in the code, you can also help improve the documentation by making sure it is accurate and up-to-date, help answer questions on issues, and request new features you think will improve the library. Take a look at the [contribution guide](https://github.com/huggingface/huggingface_hub/blob/main/CONTRIBUTING.md) to learn more about how to submit a new issue or feature request, how to submit a pull request, and how to test your contributions to make sure everything works as expected. 
Contributors should also be respectful of our [code of conduct](https://github.com/huggingface/huggingface_hub/blob/main/CODE_OF_CONDUCT.md) to create an inclusive and welcoming collaborative space for everyone.
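As a minimal sketch of the download/create/upload workflow mentioned in the introduction above (the repository ids are placeholders, and you are assumed to be authenticated, for example via `huggingface-cli login`):

```python
from huggingface_hub import hf_hub_download, create_repo, upload_file

# Download a single file from a public model repository on the Hub
config_path = hf_hub_download(repo_id="gpt2", filename="config.json")

# Create a repository under your namespace (placeholder id) and upload a file to it
create_repo("my-username/my-test-model", exist_ok=True)
upload_file(
    path_or_fileobj="./README.md",
    path_in_repo="README.md",
    repo_id="my-username/my-test-model",
)
```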
huggingface/huggingface_hub/blob/main/docs/source/en/index.md
Gradio Demo: animeganv2 ### Recreate the viral AnimeGAN image transformation demo. ``` !pip install -q gradio torch torchvision Pillow gdown numpy scipy cmake onnxruntime-gpu opencv-python-headless ``` ``` # Downloading files from the demo repo import os !wget -q https://github.com/gradio-app/gradio/raw/main/demo/animeganv2/gongyoo.jpeg !wget -q https://github.com/gradio-app/gradio/raw/main/demo/animeganv2/groot.jpeg ``` ``` import gradio as gr import torch model2 = torch.hub.load( "AK391/animegan2-pytorch:main", "generator", pretrained=True, progress=False ) model1 = torch.hub.load("AK391/animegan2-pytorch:main", "generator", pretrained="face_paint_512_v1") face2paint = torch.hub.load( 'AK391/animegan2-pytorch:main', 'face2paint', size=512,side_by_side=False ) def inference(img, ver): if ver == 'version 2 (🔺 robustness,🔻 stylization)': out = face2paint(model2, img) else: out = face2paint(model1, img) return out title = "AnimeGANv2" description = "Gradio Demo for AnimeGanv2 Face Portrait. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below. Please use a cropped portrait picture for best results similar to the examples below." article = "<p style='text-align: center'><a href='https://github.com/bryandlee/animegan2-pytorch' target='_blank'>Github Repo Pytorch</a></p> <center><img src='https://visitor-badge.glitch.me/badge?page_id=akhaliq_animegan' alt='visitor badge'></center></p>" examples=[['groot.jpeg','version 2 (🔺 robustness,🔻 stylization)'],['gongyoo.jpeg','version 1 (🔺 stylization, 🔻 robustness)']] demo = gr.Interface( fn=inference, inputs=[gr.Image(type="pil"),gr.Radio(['version 1 (🔺 stylization, 🔻 robustness)','version 2 (🔺 robustness,🔻 stylization)'], type="value", value='version 2 (🔺 robustness,🔻 stylization)', label='version')], outputs=gr.Image(type="pil"), title=title, description=description, article=article, examples=examples) demo.launch() ```
gradio-app/gradio/blob/main/demo/animeganv2/run.ipynb
-- title: "Databricks ❤️ Hugging Face: up to 40% faster training and tuning of Large Language Models" thumbnail: /blog/assets/78_ml_director_insights/databricks.png authors: - user: alighodsi guest: true - user: maddiedawson guest: true --- # Databricks ❤️ Hugging Face: up to 40% faster training and tuning of Large Language Models Generative AI has been taking the world by storm. As the data and AI company, we have been on this journey with the release of the open source large language model [Dolly](https://huggingface.co/databricks/dolly-v2-12b), as well as the internally crowdsourced dataset licensed for research and commercial use that we used to fine-tune it, the [databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k). Both the model and dataset are available on Hugging Face. We’ve learned a lot throughout this process, and today we’re excited to announce our first of many official commits to the Hugging Face codebase that allows users to easily create a Hugging Face Dataset from an Apache Spark™ dataframe. #### “It's been great to see Databricks release models and datasets to the community, and now we see them extending that work with direct open source commitment to Hugging Face. Spark is one of the most efficient engines for working with data at scale, and it's great to see that users can now benefit from that technology to more effectively fine tune models from Hugging Face.” — Clem Delange, Hugging Face CEO ## Hugging Face gets first-class Spark support Over the past few weeks, we’ve gotten many requests from users asking for an easier way to load their Spark dataframe into a Hugging Face dataset that can be utilized for model training or tuning. Prior to today’s release, to get data from a Spark dataframe into a Hugging Face dataset, users had to write data into Parquet files and then point the Hugging Face dataset to these files to reload them. For example: ```swift from datasets import load_dataset train_df = train.write.parquet(train_dbfs_path, mode="overwrite") train_test = load_dataset("parquet", data_files={"train":f"/dbfs{train_dbfs_path}/*.parquet", "test":f"/dbfs{test_dbfs_path}/*.parquet"}) #16GB == 22min ``` Not only was this cumbersome, but it also meant that data had to be written to disk and then read in again. On top of that, the data would get rematerialized once loaded back into the dataset, which eats up more resources and, therefore, more time and cost. Using this method, we saw that a relatively small (16GB) dataset took about 22 minutes to go from Spark dataframe to Parquet, and then back into the Hugging Face dataset. With the latest Hugging Face release, we make it much simpler for users to accomplish the same task by simply calling the new “from_spark” function in Datasets: ```swift from datasets import Dataset df = [some Spark dataframe or Delta table loaded into df] dataset = Dataset.from_spark(df) #16GB == 12min ``` This allows users to use Spark to efficiently load and transform data for training or fine-tuning a model, then easily map their Spark dataframe into a Hugging Face dataset for super simple integration into their training pipelines. This combines cost savings and speed from Spark and optimizations like memory-mapping and smart caching from Hugging Face datasets. These improvements cut down the processing time for our example 16GB dataset by more than 40%, going from 22 minutes down to only 12 minutes. ## Why does this matter? 
As we transition to this new AI paradigm, organizations will need to use their extremely valuable data to augment their AI models if they want to get the best performance within their specific domain. This will almost certainly require work in the form of data transformations, and doing this efficiently over large datasets is something Spark was designed to do. Integrating Spark with Hugging Face gives you the cost-effectiveness and performance of Spark while retaining the pipeline integration that Hugging Face provides. ## Continued Open-Source Support We see this release as a new avenue to further contribute to the open source community, something that we believe Hugging Face does extremely well, as it has become the de facto repository for open source models and datasets. This is only the first of many contributions. We already have plans to add streaming support through Spark to make the dataset loading even faster. In order to become the best platform for users to jump into the world of AI, we’re working hard to provide the best tools to successfully train, tune, and deploy models. Not only will we continue contributing to Hugging Face, but we’ve also started releasing improvements to our other open source projects. A recent [MLflow](https://www.databricks.com/blog/2023/04/18/introducing-mlflow-23-enhanced-native-llm-support-and-new-features.html) release added support for the transformers library, OpenAI integration, and Langchain support. We also announced [AI Functions](https://www.databricks.com/blog/2023/04/18/introducing-ai-functions-integrating-large-language-models-databricks-sql.html) within Databricks SQL that lets users easily integrate OpenAI (or their own deployed models in the future) into their queries. To top it all off, we also released a [PyTorch distributor](https://www.databricks.com/blog/2023/04/20/pytorch-databricks-introducing-spark-pytorch-distributor.html) for Spark to simplify distributed PyTorch training on Databricks. _This article was originally published on April 26, 2023 in [Databricks's blog](https://www.databricks.com/blog/contributing-spark-loader-for-hugging-face-datasets)._
huggingface/blog/blob/main/databricks-case-study.md
Gradio Demo: image_segmentation ### Simple image segmentation using gradio's AnnotatedImage component. ``` !pip install -q gradio ``` ``` import gradio as gr import numpy as np import random with gr.Blocks() as demo: section_labels = [ "apple", "banana", "carrot", "donut", "eggplant", "fish", "grapes", "hamburger", "ice cream", "juice", ] with gr.Row(): num_boxes = gr.Slider(0, 5, 2, step=1, label="Number of boxes") num_segments = gr.Slider(0, 5, 1, step=1, label="Number of segments") with gr.Row(): img_input = gr.Image() img_output = gr.AnnotatedImage( color_map={"banana": "#a89a00", "carrot": "#ffae00"} ) section_btn = gr.Button("Identify Sections") selected_section = gr.Textbox(label="Selected Section") def section(img, num_boxes, num_segments): sections = [] for a in range(num_boxes): x = random.randint(0, img.shape[1]) y = random.randint(0, img.shape[0]) w = random.randint(0, img.shape[1] - x) h = random.randint(0, img.shape[0] - y) sections.append(((x, y, x + w, y + h), section_labels[a])) for b in range(num_segments): x = random.randint(0, img.shape[1]) y = random.randint(0, img.shape[0]) r = random.randint(0, min(x, y, img.shape[1] - x, img.shape[0] - y)) mask = np.zeros(img.shape[:2]) for i in range(img.shape[0]): for j in range(img.shape[1]): dist_square = (i - y) ** 2 + (j - x) ** 2 if dist_square < r**2: mask[i, j] = round((r**2 - dist_square) / r**2 * 4) / 4 sections.append((mask, section_labels[b + num_boxes])) return (img, sections) section_btn.click(section, [img_input, num_boxes, num_segments], img_output) def select_section(evt: gr.SelectData): return section_labels[evt.index] img_output.select(select_section, None, selected_section) if __name__ == "__main__": demo.launch() ```
gradio-app/gradio/blob/main/demo/image_segmentation/run.ipynb
Gradio Demo: stable-diffusion ### Note: This is a simplified version of the code needed to create the Stable Diffusion demo. See full code here: https://hf.co/spaces/stabilityai/stable-diffusion/tree/main ``` !pip install -q gradio diffusers transformers nvidia-ml-py3 ftfy torch ``` ``` import gradio as gr import torch from diffusers import StableDiffusionPipeline from PIL import Image import os auth_token = os.getenv("auth_token") model_id = "CompVis/stable-diffusion-v1-4" device = "cpu" pipe = StableDiffusionPipeline.from_pretrained( model_id, use_auth_token=auth_token, revision="fp16", torch_dtype=torch.float16 ) pipe = pipe.to(device) def infer(prompt, samples, steps, scale, seed): generator = torch.Generator(device=device).manual_seed(seed) images_list = pipe( [prompt] * samples, num_inference_steps=steps, guidance_scale=scale, generator=generator, ) images = [] safe_image = Image.open(r"unsafe.png") for i, image in enumerate(images_list["sample"]): if images_list["nsfw_content_detected"][i]: images.append(safe_image) else: images.append(image) return images block = gr.Blocks() with block: with gr.Group(): with gr.Row(): text = gr.Textbox( label="Enter your prompt", max_lines=1, placeholder="Enter your prompt", container=False, ) btn = gr.Button("Generate image") gallery = gr.Gallery( label="Generated images", show_label=False, elem_id="gallery", columns=[2], height="auto", ) advanced_button = gr.Button("Advanced options", elem_id="advanced-btn") with gr.Row(elem_id="advanced-options"): samples = gr.Slider(label="Images", minimum=1, maximum=4, value=4, step=1) steps = gr.Slider(label="Steps", minimum=1, maximum=50, value=45, step=1) scale = gr.Slider( label="Guidance Scale", minimum=0, maximum=50, value=7.5, step=0.1 ) seed = gr.Slider( label="Seed", minimum=0, maximum=2147483647, step=1, randomize=True, ) gr.on([text.submit, btn.click], infer, inputs=[text, samples, steps, scale, seed], outputs=gallery) advanced_button.click( None, [], text, ) block.launch() ```
gradio-app/gradio/blob/main/demo/stable-diffusion/run.ipynb
!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # DeepSpeed Integration [DeepSpeed](https://github.com/microsoft/DeepSpeed) implements everything described in the [ZeRO paper](https://arxiv.org/abs/1910.02054). Currently it provides full support for: 1. Optimizer state partitioning (ZeRO stage 1) 2. Gradient partitioning (ZeRO stage 2) 3. Parameter partitioning (ZeRO stage 3) 4. Custom mixed precision training handling 5. A range of fast CUDA-extension-based optimizers 6. ZeRO-Offload to CPU and NVMe ZeRO-Offload has its own dedicated paper: [ZeRO-Offload: Democratizing Billion-Scale Model Training](https://arxiv.org/abs/2101.06840). And NVMe-support is described in the paper [ZeRO-Infinity: Breaking the GPU Memory Wall for Extreme Scale Deep Learning](https://arxiv.org/abs/2104.07857). DeepSpeed ZeRO-2 is primarily used only for training, as its features are of no use to inference. DeepSpeed ZeRO-3 can be used for inference as well, since it allows huge models to be loaded on multiple GPUs, which won't be possible on a single GPU. 🤗 Transformers integrates [DeepSpeed](https://github.com/microsoft/DeepSpeed) via 2 options: 1. Integration of the core DeepSpeed features via [`Trainer`]. This is an everything-done-for-you type of integration - just supply your custom config file or use our template and you have nothing else to do. Most of this document is focused on this feature. 2. If you don't use [`Trainer`] and want to use your own Trainer where you integrated DeepSpeed yourself, core functionality functions like `from_pretrained` and `from_config` include integration of essential parts of DeepSpeed like `zero.Init` for ZeRO stage 3 and higher. To tap into this feature read the docs on [non-Trainer DeepSpeed Integration](#nontrainer-deepspeed-integration). What is integrated: Training: 1. DeepSpeed ZeRO training supports the full ZeRO stages 1, 2 and 3 with ZeRO-Infinity (CPU and NVME offload). Inference: 1. DeepSpeed ZeRO Inference supports ZeRO stage 3 with ZeRO-Infinity. It uses the same ZeRO protocol as training, but it doesn't use an optimizer and a lr scheduler and only stage 3 is relevant. For more details see: [zero-inference](#zero-inference). There is also DeepSpeed Inference - this is a totally different technology which uses Tensor Parallelism instead of ZeRO (coming soon). <a id='deepspeed-trainer-integration'></a> ## Trainer Deepspeed Integration <a id='deepspeed-installation'></a> ### Installation Install the library via pypi: ```bash pip install deepspeed ``` or via `transformers`' `extras`: ```bash pip install transformers[deepspeed] ``` or find more details on [the DeepSpeed's GitHub page](https://github.com/microsoft/deepspeed#installation) and [advanced install](https://www.deepspeed.ai/tutorials/advanced-install/). 
If you're still struggling with the build, first make sure to read [CUDA Extension Installation Notes](trainer#cuda-extension-installation-notes). If you don't prebuild the extensions and rely on them to be built at run time and you tried all of the above solutions to no avail, the next thing to try is to pre-build the modules before installing them. To make a local build for DeepSpeed: ```bash git clone https://github.com/microsoft/DeepSpeed/ cd DeepSpeed rm -rf build TORCH_CUDA_ARCH_LIST="8.6" DS_BUILD_CPU_ADAM=1 DS_BUILD_UTILS=1 pip install . \ --global-option="build_ext" --global-option="-j8" --no-cache -v \ --disable-pip-version-check 2>&1 | tee build.log ``` If you intend to use NVMe offload you will also need to include `DS_BUILD_AIO=1` in the instructions above (and also install *libaio-dev* system-wide). Edit `TORCH_CUDA_ARCH_LIST` to insert the code for the architectures of the GPU cards you intend to use. Assuming all your cards are the same you can get the arch via: ```bash CUDA_VISIBLE_DEVICES=0 python -c "import torch; print(torch.cuda.get_device_capability())" ``` So if you get `8, 6`, then use `TORCH_CUDA_ARCH_LIST="8.6"`. If you have multiple different cards, you can list all of them like so `TORCH_CUDA_ARCH_LIST="6.1;8.6"` If you need to use the same setup on multiple machines, make a binary wheel: ```bash git clone https://github.com/microsoft/DeepSpeed/ cd DeepSpeed rm -rf build TORCH_CUDA_ARCH_LIST="8.6" DS_BUILD_CPU_ADAM=1 DS_BUILD_UTILS=1 \ python setup.py build_ext -j8 bdist_wheel ``` it will generate something like `dist/deepspeed-0.3.13+8cd046f-cp38-cp38-linux_x86_64.whl` which now you can install as `pip install deepspeed-0.3.13+8cd046f-cp38-cp38-linux_x86_64.whl` locally or on any other machine. Again, remember to ensure to adjust `TORCH_CUDA_ARCH_LIST` to the target architectures. You can find the complete list of NVIDIA GPUs and their corresponding **Compute Capabilities** (same as arch in this context) [here](https://developer.nvidia.com/cuda-gpus). You can check the archs pytorch was built with using: ```bash python -c "import torch; print(torch.cuda.get_arch_list())" ``` Here is how to find out the arch for one of the installed GPUs. For example, for GPU 0: ```bash CUDA_VISIBLE_DEVICES=0 python -c "import torch; \ print(torch.cuda.get_device_properties(torch.device('cuda')))" ``` If the output is: ```bash _CudaDeviceProperties(name='GeForce RTX 3090', major=8, minor=6, total_memory=24268MB, multi_processor_count=82) ``` then you know that this card's arch is `8.6`. You can also leave `TORCH_CUDA_ARCH_LIST` out completely and then the build program will automatically query the architecture of the GPUs the build is made on. This may or may not match the GPUs on the target machines, that's why it's best to specify the desired archs explicitly. If after trying everything suggested you still encounter build issues, please, proceed with the GitHub Issue of [Deepspeed](https://github.com/microsoft/DeepSpeed/issues), <a id='deepspeed-multi-gpu'></a> ### Deployment with multiple GPUs To deploy the DeepSpeed integration adjust the [`Trainer`] command line arguments to include a new argument `--deepspeed ds_config.json`, where `ds_config.json` is the DeepSpeed configuration file as documented [here](https://www.deepspeed.ai/docs/config-json/). The file naming is up to you. It's recommended to use DeepSpeed's `add_config_arguments` utility to add the necessary command line arguments to your code. 
For more information please see [DeepSpeed's Argument Parsing](https://deepspeed.readthedocs.io/en/latest/initialize.html#argument-parsing) doc. You can use a launcher of your choice here. You can continue using the pytorch launcher: ```bash torch.distributed.run --nproc_per_node=2 your_program.py <normal cl args> --deepspeed ds_config.json ``` or use the launcher provided by `deepspeed`: ```bash deepspeed --num_gpus=2 your_program.py <normal cl args> --deepspeed ds_config.json ``` As you can see the arguments aren't the same, but for most needs either of them works. The full details on how to configure various nodes and GPUs can be found [here](https://www.deepspeed.ai/getting-started/#resource-configuration-multi-node). When you use the `deepspeed` launcher and you want to use all available gpus you can just omit the `--num_gpus` flag. Here is an example of running `run_translation.py` under DeepSpeed deploying all available GPUs: ```bash deepspeed examples/pytorch/translation/run_translation.py \ --deepspeed tests/deepspeed/ds_config_zero3.json \ --model_name_or_path t5-small --per_device_train_batch_size 1 \ --output_dir output_dir --overwrite_output_dir --fp16 \ --do_train --max_train_samples 500 --num_train_epochs 1 \ --dataset_name wmt16 --dataset_config "ro-en" \ --source_lang en --target_lang ro ``` Note that in the DeepSpeed documentation you are likely to see `--deepspeed --deepspeed_config ds_config.json` - i.e. two DeepSpeed-related arguments, but for the sake of simplicity, and since there are already so many arguments to deal with, we combined the two into a single argument. For some practical usage examples, please, see this [post](https://github.com/huggingface/transformers/issues/8771#issuecomment-759248400). <a id='deepspeed-one-gpu'></a> ### Deployment with one GPU To deploy DeepSpeed with one GPU adjust the [`Trainer`] command line arguments as follows: ```bash deepspeed --num_gpus=1 examples/pytorch/translation/run_translation.py \ --deepspeed tests/deepspeed/ds_config_zero2.json \ --model_name_or_path t5-small --per_device_train_batch_size 1 \ --output_dir output_dir --overwrite_output_dir --fp16 \ --do_train --max_train_samples 500 --num_train_epochs 1 \ --dataset_name wmt16 --dataset_config "ro-en" \ --source_lang en --target_lang ro ``` This is almost the same as with multiple-GPUs, but here we tell DeepSpeed explicitly to use just one GPU via `--num_gpus=1`. By default, DeepSpeed deploys all GPUs it can see on the given node. If you have only 1 GPU to start with, then you don't need this argument. The following [documentation](https://www.deepspeed.ai/getting-started/#resource-configuration-multi-node) discusses the launcher options. Why would you want to use DeepSpeed with just one GPU? 1. It has a ZeRO-offload feature which can delegate some computations and memory to the host's CPU and RAM, and thus leave more GPU resources for model's needs - e.g. larger batch size, or enabling a fitting of a very big model which normally won't fit. 2. It provides a smart GPU memory management system, that minimizes memory fragmentation, which again allows you to fit bigger models and data batches. 
While we are going to discuss the configuration in details next, the key to getting a huge improvement on a single GPU with DeepSpeed is to have at least the following configuration in the configuration file: ```json { "zero_optimization": { "stage": 2, "offload_optimizer": { "device": "cpu", "pin_memory": true }, "allgather_partitions": true, "allgather_bucket_size": 2e8, "reduce_scatter": true, "reduce_bucket_size": 2e8, "overlap_comm": true, "contiguous_gradients": true } } ``` which enables optimizer offload and some other important features. You may experiment with the buffer sizes, you will find more details in the discussion below. For a practical usage example of this type of deployment, please, see this [post](https://github.com/huggingface/transformers/issues/8771#issuecomment-759176685). You may also try the ZeRO-3 with CPU and NVMe offload as explained further in this document. <!--- TODO: Benchmark whether we can get better performance out of ZeRO-3 vs. ZeRO-2 on a single GPU, and then recommend ZeRO-3 config as starting one. --> Notes: - if you need to run on a specific GPU, which is different from GPU 0, you can't use `CUDA_VISIBLE_DEVICES` to limit the visible scope of available GPUs. Instead, you have to use the following syntax: ```bash deepspeed --include localhost:1 examples/pytorch/translation/run_translation.py ... ``` In this example, we tell DeepSpeed to use GPU 1 (second gpu). <a id='deepspeed-multi-node'></a> ### Deployment with multiple Nodes The information in this section isn't not specific to the DeepSpeed integration and is applicable to any multi-node program. But DeepSpeed provides a `deepspeed` launcher that is easier to use than other launchers unless you are in a SLURM environment. For the duration of this section let's assume that you have 2 nodes with 8 gpus each. And you can reach the first node with `ssh hostname1` and second node with `ssh hostname2`, and both must be able to reach each other via ssh locally without a password. Of course, you will need to rename these host (node) names to the actual host names you are working with. #### The torch.distributed.run(torchrun) launcher For example, to use `torch.distributed.run`, you could do: ```bash python -m torch.distributed.run --nproc_per_node=8 --nnode=2 --node_rank=0 --master_addr=hostname1 \ --master_port=9901 your_program.py <normal cl args> --deepspeed ds_config.json ``` You have to ssh to each node and run this same command on each one of them! There is no rush, the launcher will wait until both nodes will synchronize. For more information please see [torchrun](https://pytorch.org/docs/stable/elastic/run.html). Incidentally, this is also the launcher that replaced `torch.distributed.launch` a few pytorch versions back. #### The deepspeed launcher To use the `deepspeed` launcher instead, you have to first create a `hostfile` file: ``` hostname1 slots=8 hostname2 slots=8 ``` and then you can launch it as: ```bash deepspeed --num_gpus 8 --num_nodes 2 --hostfile hostfile --master_addr hostname1 --master_port=9901 \ your_program.py <normal cl args> --deepspeed ds_config.json ``` Unlike the `torch.distributed.run` launcher, `deepspeed` will automatically launch this command on both nodes! For more information please see [Resource Configuration (multi-node)](https://www.deepspeed.ai/getting-started/#resource-configuration-multi-node). #### Launching in a SLURM environment In the SLURM environment the following approach can be used. 
The following is a slurm script `launch.slurm` which you will need to adapt it to your specific SLURM environment. ```bash #SBATCH --job-name=test-nodes # name #SBATCH --nodes=2 # nodes #SBATCH --ntasks-per-node=1 # crucial - only 1 task per dist per node! #SBATCH --cpus-per-task=10 # number of cores per tasks #SBATCH --gres=gpu:8 # number of gpus #SBATCH --time 20:00:00 # maximum execution time (HH:MM:SS) #SBATCH --output=%x-%j.out # output file name export GPUS_PER_NODE=8 export MASTER_ADDR=$(scontrol show hostnames $SLURM_JOB_NODELIST | head -n 1) export MASTER_PORT=9901 srun --jobid $SLURM_JOBID bash -c 'python -m torch.distributed.run \ --nproc_per_node $GPUS_PER_NODE --nnodes $SLURM_NNODES --node_rank $SLURM_PROCID \ --master_addr $MASTER_ADDR --master_port $MASTER_PORT \ your_program.py <normal cl args> --deepspeed ds_config.json' ``` All is left is to schedule it to run: ```bash sbatch launch.slurm ``` `srun` will take care of launching the program simultaneously on all nodes. #### Use of Non-shared filesystem By default DeepSpeed expects that a multi-node environment uses a shared storage. If this is not the case and each node can only see the local filesystem, you need to adjust the config file to include a [`checkpoint`_section](https://www.deepspeed.ai/docs/config-json/#checkpoint-options) with the following setting: ```json { "checkpoint": { "use_node_local_storage": true } } ``` Alternatively, you can also use the [`Trainer`]'s `--save_on_each_node` argument, and the above config will be added automatically for you. <a id='deepspeed-notebook'></a> ### Deployment in Notebooks The problem with running notebook cells as a script is that there is no normal `deepspeed` launcher to rely on, so under certain setups we have to emulate it. If you're using only 1 GPU, here is how you'd have to adjust your training code in the notebook to use DeepSpeed. ```python # DeepSpeed requires a distributed environment even when only one process is used. # This emulates a launcher in the notebook import os os.environ["MASTER_ADDR"] = "localhost" os.environ["MASTER_PORT"] = "9994" # modify if RuntimeError: Address already in use os.environ["RANK"] = "0" os.environ["LOCAL_RANK"] = "0" os.environ["WORLD_SIZE"] = "1" # Now proceed as normal, plus pass the deepspeed config file training_args = TrainingArguments(..., deepspeed="ds_config_zero3.json") trainer = Trainer(...) trainer.train() ``` Note: `...` stands for the normal arguments that you'd pass to the functions. If you want to use more than 1 GPU, you must use a multi-process environment for DeepSpeed to work. That is, you have to use the launcher for that purpose and this cannot be accomplished by emulating the distributed environment presented at the beginning of this section. 
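As for the DeepSpeed config file itself, if you prefer to stay in Python rather than use shell magic, a minimal sketch writes it from a notebook cell with `json.dump`. The tiny stage-3-only dict below is just an illustration; the next paragraph shows a complete ZeRO-3 config written via a `%%bash` heredoc:

```python
import json

# illustrative minimal config - in practice use a full config like the one shown next
ds_config = {
    "zero_optimization": {"stage": 3},
    "train_batch_size": "auto",
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
}

with open("ds_config_zero3.json", "w") as f:
    json.dump(ds_config, f, indent=4)
```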
If you want to create the config file on the fly in the notebook in the current directory, you could have a dedicated cell with: ```python no-style %%bash cat <<'EOT' > ds_config_zero3.json { "fp16": { "enabled": "auto", "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 }, "optimizer": { "type": "AdamW", "params": { "lr": "auto", "betas": "auto", "eps": "auto", "weight_decay": "auto" } }, "scheduler": { "type": "WarmupLR", "params": { "warmup_min_lr": "auto", "warmup_max_lr": "auto", "warmup_num_steps": "auto" } }, "zero_optimization": { "stage": 3, "offload_optimizer": { "device": "cpu", "pin_memory": true }, "offload_param": { "device": "cpu", "pin_memory": true }, "overlap_comm": true, "contiguous_gradients": true, "sub_group_size": 1e9, "reduce_bucket_size": "auto", "stage3_prefetch_bucket_size": "auto", "stage3_param_persistence_threshold": "auto", "stage3_max_live_parameters": 1e9, "stage3_max_reuse_distance": 1e9, "stage3_gather_16bit_weights_on_model_save": true }, "gradient_accumulation_steps": "auto", "gradient_clipping": "auto", "steps_per_print": 2000, "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", "wall_clock_breakdown": false } EOT ``` If the training script is in a normal file and not in the notebook cells, you can launch `deepspeed` normally via shell from a cell. For example, to use `run_translation.py` you would launch it with: ```python no-style !git clone https://github.com/huggingface/transformers !cd transformers; deepspeed examples/pytorch/translation/run_translation.py ... ``` or with `%%bash` magic, where you can write a multi-line code for the shell program to run: ```python no-style %%bash git clone https://github.com/huggingface/transformers cd transformers deepspeed examples/pytorch/translation/run_translation.py ... ``` In such case you don't need any of the code presented at the beginning of this section. Note: While `%%bash` magic is neat, but currently it buffers the output so you won't see the logs until the process completes. <a id='deepspeed-config'></a> ### Configuration For the complete guide to the DeepSpeed configuration options that can be used in its configuration file please refer to the [following documentation](https://www.deepspeed.ai/docs/config-json/). You can find dozens of DeepSpeed configuration examples that address various practical needs in [the DeepSpeedExamples repo](https://github.com/microsoft/DeepSpeedExamples): ```bash git clone https://github.com/microsoft/DeepSpeedExamples cd DeepSpeedExamples find . -name '*json' ``` Continuing the code from above, let's say you're looking to configure the Lamb optimizer. So you can search through the example `.json` files with: ```bash grep -i Lamb $(find . -name '*json') ``` Some more examples are to be found in the [main repo](https://github.com/microsoft/DeepSpeed) as well. When using DeepSpeed you always need to supply a DeepSpeed configuration file, yet some configuration parameters have to be configured via the command line. You will find the nuances in the rest of this guide. 
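Before wiring a config file into your training command it can help to peek at what it actually enables. A tiny sketch - the file name is a placeholder for whichever config you found or wrote:

```python
import json

config_path = "ds_config_zero3.json"  # placeholder path

with open(config_path) as f:
    cfg = json.load(f)

print("ZeRO stage:", cfg.get("zero_optimization", {}).get("stage", 0))
print("optimizer: ", cfg.get("optimizer", {}).get("type", "not set"))
print("scheduler: ", cfg.get("scheduler", {}).get("type", "not set"))
```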
To get an idea of what a DeepSpeed configuration file looks like, here is one that activates ZeRO stage 2 features, including optimizer state CPU offload, uses the `AdamW` optimizer and the `WarmupLR` scheduler and will enable mixed precision training if `--fp16` is passed:

```json
{
    "fp16": {
        "enabled": "auto",
        "loss_scale": 0,
        "loss_scale_window": 1000,
        "initial_scale_power": 16,
        "hysteresis": 2,
        "min_loss_scale": 1
    },

    "optimizer": {
        "type": "AdamW",
        "params": {
            "lr": "auto",
            "betas": "auto",
            "eps": "auto",
            "weight_decay": "auto"
        }
    },

    "scheduler": {
        "type": "WarmupLR",
        "params": {
            "warmup_min_lr": "auto",
            "warmup_max_lr": "auto",
            "warmup_num_steps": "auto"
        }
    },

    "zero_optimization": {
        "stage": 2,
        "offload_optimizer": {
            "device": "cpu",
            "pin_memory": true
        },
        "allgather_partitions": true,
        "allgather_bucket_size": 2e8,
        "overlap_comm": true,
        "reduce_scatter": true,
        "reduce_bucket_size": 2e8,
        "contiguous_gradients": true
    },

    "gradient_accumulation_steps": "auto",
    "gradient_clipping": "auto",
    "train_batch_size": "auto",
    "train_micro_batch_size_per_gpu": "auto"
}
```

When you execute the program, DeepSpeed will log the configuration it received from the [`Trainer`] to the console, so you can see exactly what the final configuration passed to it was.

<a id='deepspeed-config-passing'></a>

### Passing Configuration

As discussed in this document normally the DeepSpeed configuration is passed as a path to a json file, but if you're not using the command line interface to configure the training, and instead instantiate the [`Trainer`] via [`TrainingArguments`] then for the `deepspeed` argument you can pass a nested `dict`. This allows you to create the configuration on the fly and doesn't require you to write it to the file system before passing it to [`TrainingArguments`].

To summarize you can do:

```python
TrainingArguments(..., deepspeed="/path/to/ds_config.json")
```

or:

```python
ds_config_dict = dict(scheduler=scheduler_params, optimizer=optimizer_params)
TrainingArguments(..., deepspeed=ds_config_dict)
```

<a id='deepspeed-config-shared'></a>

### Shared Configuration

<Tip warning={true}>

This section is a must-read.

</Tip>

Some configuration values are required by both the [`Trainer`] and DeepSpeed to function correctly, therefore, to prevent conflicting definitions, which could lead to hard to detect errors, we chose to configure those via the [`Trainer`] command line arguments.

Additionally, some configuration values are derived automatically based on the model's configuration, so instead of remembering to manually adjust multiple values, it's best to let the [`Trainer`] do the majority of the configuration for you.

Therefore, in the rest of this guide you will find a special configuration value: `auto`, which when set will be automatically replaced with the correct or most efficient value. Please feel free to choose to ignore this recommendation and set the values explicitly, in which case be very careful that your [`Trainer`] arguments and DeepSpeed configuration agree. For example, are you using the same learning rate, batch size and gradient accumulation settings? If these mismatch, the training may fail in very difficult to detect ways. You have been warned.

There are multiple other values that are specific to DeepSpeed only, and those you will have to set manually to suit your needs.

In your own programs, you can also use the following approach if you'd like to modify the DeepSpeed config as a master and configure [`TrainingArguments`] based on that. The steps are:
1. Create or load the DeepSpeed configuration to be used as the master configuration
2. Create the [`TrainingArguments`] object based on these values

Do note that some values, such as `scheduler.params.total_num_steps`, are calculated by the [`Trainer`] during `train`, but you can of course do the math yourself.

<a id='deepspeed-zero'></a>

### ZeRO

[Zero Redundancy Optimizer (ZeRO)](https://www.deepspeed.ai/tutorials/zero/) is the workhorse of DeepSpeed. It supports 3 different levels (stages) of optimization. The first one is not quite interesting for scalability purposes, therefore this document focuses on stages 2 and 3. Stage 3 is further improved by the latest addition of ZeRO-Infinity. You will find more in-depth information in the DeepSpeed documentation.

The `zero_optimization` section of the configuration file is the most important part ([docs](https://www.deepspeed.ai/docs/config-json/#zero-optimizations-for-fp16-training)), since that is where you define which ZeRO stages you want to enable and how to configure them. You will find the explanation for each parameter in the DeepSpeed docs.

This section has to be configured exclusively via the DeepSpeed configuration - the [`Trainer`] provides no equivalent command line arguments.

Note: currently DeepSpeed doesn't validate parameter names, so if you misspell any, it'll use the default setting for the parameter that got misspelled. You can watch the DeepSpeed engine start up log messages to see what values it is going to use.

<a id='deepspeed-zero2-config'></a>

#### ZeRO-2 Config

The following is an example of configuration for ZeRO stage 2:

```json
{
    "zero_optimization": {
        "stage": 2,
        "offload_optimizer": {
            "device": "cpu",
            "pin_memory": true
        },
        "allgather_partitions": true,
        "allgather_bucket_size": 5e8,
        "overlap_comm": true,
        "reduce_scatter": true,
        "reduce_bucket_size": 5e8,
        "contiguous_gradients": true
    }
}
```

**Performance tuning:**

- enabling `offload_optimizer` should reduce GPU RAM usage (it requires `"stage": 2`)
- `"overlap_comm": true` trades off increased GPU RAM usage to lower all-reduce latency. `overlap_comm` uses 4.5x the `allgather_bucket_size` and `reduce_bucket_size` values. So if they are set to 5e8, this requires a 9GB footprint (`5e8 x 2Bytes x 2 x 4.5`). Therefore, if you have a GPU with 8GB or less RAM, to avoid getting OOM-errors you will need to reduce those parameters to about `2e8`, which would require 3.6GB. You will want to do the same on larger-capacity GPUs as well, if you're starting to hit OOM.
- when reducing these buffers you're trading communication speed to free up more GPU RAM. The smaller the buffer size is, the slower the communication gets, and the more GPU RAM will be available to other tasks. So if a bigger batch size is important, getting a slightly slower training time could be a good trade (see the small footprint sketch at the end of this subsection).

Additionally, `deepspeed==0.4.4` added a new option `round_robin_gradients` which you can enable with:

```json
{
    "zero_optimization": {
        "round_robin_gradients": true
    }
}
```

This is a stage 2 optimization for CPU offloading that parallelizes gradient copying to CPU memory among ranks by fine-grained gradient partitioning. Performance benefit grows with gradient accumulation steps (more copying between optimizer steps) or GPU count (increased parallelism).
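To make the buffer-size arithmetic from the performance notes above concrete, here is a small sketch that simply reproduces the 4.5x rule of thumb quoted there - it's a rough estimate, not an official DeepSpeed formula:

```python
def overlap_comm_footprint_gb(allgather_bucket_size, reduce_bucket_size, bytes_per_element=2):
    """Rough extra GPU RAM used by overlap_comm, per the 4.5x rule of thumb above."""
    return (allgather_bucket_size + reduce_bucket_size) * bytes_per_element * 4.5 / 1e9

print(overlap_comm_footprint_gb(5e8, 5e8))  # 9.0 GB - too much for an 8GB card
print(overlap_comm_footprint_gb(2e8, 2e8))  # 3.6 GB - the reduced setting suggested above
```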
<a id='deepspeed-zero3-config'></a>

#### ZeRO-3 Config

The following is an example of configuration for ZeRO stage 3:

```json
{
    "zero_optimization": {
        "stage": 3,
        "offload_optimizer": {
            "device": "cpu",
            "pin_memory": true
        },
        "offload_param": {
            "device": "cpu",
            "pin_memory": true
        },
        "overlap_comm": true,
        "contiguous_gradients": true,
        "sub_group_size": 1e9,
        "reduce_bucket_size": "auto",
        "stage3_prefetch_bucket_size": "auto",
        "stage3_param_persistence_threshold": "auto",
        "stage3_max_live_parameters": 1e9,
        "stage3_max_reuse_distance": 1e9,
        "stage3_gather_16bit_weights_on_model_save": true
    }
}
```

If you are getting OOMs because your model or activations don't fit into the GPU memory and you have unutilized CPU memory, offloading the optimizer states and parameters to CPU memory with `"device": "cpu"` may solve this limitation. If you don't want to offload to CPU memory, use `none` instead of `cpu` for the `device` entry. Offloading to NVMe is discussed further down.

Pinned memory is enabled with `pin_memory` set to `true`. This feature can improve the throughput at the cost of making less memory available to other processes. Pinned memory is set aside to the specific process that requested it and it's typically accessed much faster than normal CPU memory.

**Performance tuning:**

- `stage3_max_live_parameters`: `1e9`
- `stage3_max_reuse_distance`: `1e9`

If hitting OOM reduce `stage3_max_live_parameters` and `stage3_max_reuse_distance`. They should have minimal impact on performance unless you are doing activation checkpointing. `1e9` would consume ~2GB. The memory is shared by `stage3_max_live_parameters` and `stage3_max_reuse_distance`, so it's not additive, it's just 2GB total.

`stage3_max_live_parameters` is the upper limit on how many full parameters you want to keep on the GPU at any given time. "Reuse distance" is a metric we use to figure out when a parameter will be used again in the future, and we use `stage3_max_reuse_distance` to decide whether to throw away the parameter or to keep it. If a parameter is going to be used again in the near future (in less than `stage3_max_reuse_distance`) then we keep it to reduce communication overhead. This is super helpful when you have activation checkpointing enabled, where we do a forward recompute and backward pass at a single-layer granularity and want to keep the parameter on the GPU from the forward recompute until the backward pass.

The following configuration values depend on the model's hidden size:

- `reduce_bucket_size`: `hidden_size*hidden_size`
- `stage3_prefetch_bucket_size`: `0.9 * hidden_size * hidden_size`
- `stage3_param_persistence_threshold`: `10 * hidden_size`

therefore set these values to `auto` and the [`Trainer`] will automatically assign the recommended values. But, of course, feel free to set these explicitly as well.

`stage3_gather_16bit_weights_on_model_save` enables model fp16 weights consolidation when the model gets saved. With large models and multiple GPUs this is an expensive operation both in terms of memory and speed. It's currently required if you plan to resume the training. Watch out for future updates that will remove this limitation and make things more flexible.

If you're migrating from a ZeRO-2 configuration note that the `allgather_partitions`, `allgather_bucket_size` and `reduce_scatter` configuration parameters are not used in ZeRO-3. If you keep these in the config file they will just be ignored.

- `sub_group_size`: `1e9`

`sub_group_size` controls the granularity in which parameters are updated during optimizer steps.
Parameters are grouped into buckets of `sub_group_size` and each bucket is updated one at a time. When used with NVMe offload in ZeRO-Infinity, `sub_group_size` therefore controls the granularity in which model states are moved in and out of CPU memory from NVMe during the optimizer step. This prevents running out of CPU memory for extremely large models.

You can leave `sub_group_size` at its default value of *1e9* when not using NVMe offload. You may want to change its default value in the following cases:

1. Running into OOM during optimizer step: Reduce `sub_group_size` to reduce the memory utilization of temporary buffers
2. Optimizer Step is taking a long time: Increase `sub_group_size` to improve bandwidth utilization as a result of the increased data buffers.

#### ZeRO-0 Config

Note that we're listing Stage 0 and 1 last since they are rarely used.

Stage 0 disables all types of sharding and just uses DeepSpeed as DDP. You can turn it on with:

```json
{
    "zero_optimization": {
        "stage": 0
    }
}
```

This will essentially disable ZeRO without you needing to change anything else.

#### ZeRO-1 Config

Stage 1 is Stage 2 minus gradient sharding. You can always try it to speed things up a tiny bit by only sharding the optimizer states:

```json
{
    "zero_optimization": {
        "stage": 1
    }
}
```

<a id='deepspeed-nvme'></a>

### NVMe Support

ZeRO-Infinity allows for training incredibly large models by extending GPU and CPU memory with NVMe memory. Thanks to smart partitioning and tiling algorithms each GPU needs to send and receive very small amounts of data during offloading, so modern NVMe drives prove to be a good fit for making an even larger total memory pool available to your training process. ZeRO-Infinity requires ZeRO-3 to be enabled.

The following configuration example enables NVMe to offload both optimizer states and params:

```json
{
    "zero_optimization": {
        "stage": 3,
        "offload_optimizer": {
            "device": "nvme",
            "nvme_path": "/local_nvme",
            "pin_memory": true,
            "buffer_count": 4,
            "fast_init": false
        },
        "offload_param": {
            "device": "nvme",
            "nvme_path": "/local_nvme",
            "pin_memory": true,
            "buffer_count": 5,
            "buffer_size": 1e8,
            "max_in_cpu": 1e9
        },
        "aio": {
            "block_size": 262144,
            "queue_depth": 32,
            "thread_count": 1,
            "single_submit": false,
            "overlap_events": true
        },
        "overlap_comm": true,
        "contiguous_gradients": true,
        "sub_group_size": 1e9,
        "reduce_bucket_size": "auto",
        "stage3_prefetch_bucket_size": "auto",
        "stage3_param_persistence_threshold": "auto",
        "stage3_max_live_parameters": 1e9,
        "stage3_max_reuse_distance": 1e9,
        "stage3_gather_16bit_weights_on_model_save": true
    }
}
```

You can choose to offload both optimizer states and params to NVMe, or just one of them, or none. For example, if you have copious amounts of CPU memory available, by all means offload to CPU memory only as it'd be faster (hint: *"device": "cpu"*).

Here is the full documentation for offloading [optimizer states](https://www.deepspeed.ai/docs/config-json/#optimizer-offloading) and [parameters](https://www.deepspeed.ai/docs/config-json/#parameter-offloading).

Make sure that your `nvme_path` is actually an NVMe, since it will work with a normal hard drive or SSD, but it'll be much, much slower. The fast scalable training was designed with modern NVMe transfer speeds in mind (as of this writing one can have ~3.5GB/s read, ~3GB/s write peak speeds).

In order to figure out the optimal `aio` configuration block you must run a benchmark on your target setup, as [explained here](https://github.com/microsoft/DeepSpeed/issues/998).
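As with the other examples in this guide, you can also build the NVMe variant as a nested Python dict and hand it to [`TrainingArguments`] directly, as described in the Passing Configuration section. A minimal sketch, assuming `/local_nvme` is your NVMe mount point as above and letting the [`Trainer`] fill in the remaining values:

```python
from transformers import TrainingArguments

# same NVMe offload target for optimizer states and params (an assumption for this sketch)
nvme_offload = {"device": "nvme", "nvme_path": "/local_nvme", "pin_memory": True}

ds_config = {
    "zero_optimization": {
        "stage": 3,
        "offload_optimizer": dict(nvme_offload),
        "offload_param": dict(nvme_offload),
    },
    "train_batch_size": "auto",
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
}

training_args = TrainingArguments(output_dir="output_dir", deepspeed=ds_config)
```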
<a id='deepspeed-zero2-zero3-performance'></a> #### ZeRO-2 vs ZeRO-3 Performance ZeRO-3 is likely to be slower than ZeRO-2 if everything else is configured the same because the former has to gather model weights in addition to what ZeRO-2 does. If ZeRO-2 meets your needs and you don't need to scale beyond a few GPUs then you may choose to stick to it. It's important to understand that ZeRO-3 enables a much higher scalability capacity at a cost of speed. It's possible to adjust ZeRO-3 configuration to make it perform closer to ZeRO-2: - set `stage3_param_persistence_threshold` to a very large number - larger than the largest parameter, e.g., `6 * hidden_size * hidden_size`. This will keep the parameters on the GPUs. - turn off `offload_params` since ZeRO-2 doesn't have that option. The performance will likely improve significantly with just `offload_params` turned off, even if you don't change `stage3_param_persistence_threshold`. Of course, these changes will impact the size of the model you can train. So these help you to trade scalability for speed depending on your needs. <a id='deepspeed-zero2-example'></a> #### ZeRO-2 Example Here is a full ZeRO-2 auto-configuration file `ds_config_zero2.json`: ```json { "fp16": { "enabled": "auto", "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 }, "optimizer": { "type": "AdamW", "params": { "lr": "auto", "betas": "auto", "eps": "auto", "weight_decay": "auto" } }, "scheduler": { "type": "WarmupLR", "params": { "warmup_min_lr": "auto", "warmup_max_lr": "auto", "warmup_num_steps": "auto" } }, "zero_optimization": { "stage": 2, "offload_optimizer": { "device": "cpu", "pin_memory": true }, "allgather_partitions": true, "allgather_bucket_size": 2e8, "overlap_comm": true, "reduce_scatter": true, "reduce_bucket_size": 2e8, "contiguous_gradients": true }, "gradient_accumulation_steps": "auto", "gradient_clipping": "auto", "steps_per_print": 2000, "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", "wall_clock_breakdown": false } ``` Here is a full ZeRO-2 all-enabled manually set configuration file. It is here mainly for you to see what the typical values look like, but we highly recommend using the one with multiple `auto` settings in it. 
```json { "fp16": { "enabled": true, "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 }, "optimizer": { "type": "AdamW", "params": { "lr": 3e-5, "betas": [0.8, 0.999], "eps": 1e-8, "weight_decay": 3e-7 } }, "scheduler": { "type": "WarmupLR", "params": { "warmup_min_lr": 0, "warmup_max_lr": 3e-5, "warmup_num_steps": 500 } }, "zero_optimization": { "stage": 2, "offload_optimizer": { "device": "cpu", "pin_memory": true }, "allgather_partitions": true, "allgather_bucket_size": 2e8, "overlap_comm": true, "reduce_scatter": true, "reduce_bucket_size": 2e8, "contiguous_gradients": true }, "steps_per_print": 2000, "wall_clock_breakdown": false } ``` <a id='deepspeed-zero3-example'></a> #### ZeRO-3 Example Here is a full ZeRO-3 auto-configuration file `ds_config_zero3.json`: ```json { "fp16": { "enabled": "auto", "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 }, "optimizer": { "type": "AdamW", "params": { "lr": "auto", "betas": "auto", "eps": "auto", "weight_decay": "auto" } }, "scheduler": { "type": "WarmupLR", "params": { "warmup_min_lr": "auto", "warmup_max_lr": "auto", "warmup_num_steps": "auto" } }, "zero_optimization": { "stage": 3, "offload_optimizer": { "device": "cpu", "pin_memory": true }, "offload_param": { "device": "cpu", "pin_memory": true }, "overlap_comm": true, "contiguous_gradients": true, "sub_group_size": 1e9, "reduce_bucket_size": "auto", "stage3_prefetch_bucket_size": "auto", "stage3_param_persistence_threshold": "auto", "stage3_max_live_parameters": 1e9, "stage3_max_reuse_distance": 1e9, "stage3_gather_16bit_weights_on_model_save": true }, "gradient_accumulation_steps": "auto", "gradient_clipping": "auto", "steps_per_print": 2000, "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", "wall_clock_breakdown": false } ``` Here is a full ZeRO-3 all-enabled manually set configuration file. It is here mainly for you to see what the typical values look like, but we highly recommend using the one with multiple `auto` settings in it. ```json { "fp16": { "enabled": true, "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 }, "optimizer": { "type": "AdamW", "params": { "lr": 3e-5, "betas": [0.8, 0.999], "eps": 1e-8, "weight_decay": 3e-7 } }, "scheduler": { "type": "WarmupLR", "params": { "warmup_min_lr": 0, "warmup_max_lr": 3e-5, "warmup_num_steps": 500 } }, "zero_optimization": { "stage": 3, "offload_optimizer": { "device": "cpu", "pin_memory": true }, "offload_param": { "device": "cpu", "pin_memory": true }, "overlap_comm": true, "contiguous_gradients": true, "sub_group_size": 1e9, "reduce_bucket_size": 1e6, "stage3_prefetch_bucket_size": 0.94e6, "stage3_param_persistence_threshold": 1e4, "stage3_max_live_parameters": 1e9, "stage3_max_reuse_distance": 1e9, "stage3_gather_16bit_weights_on_model_save": true }, "steps_per_print": 2000, "wall_clock_breakdown": false } ``` #### How to Choose Which ZeRO Stage and Offloads To Use For Best Performance So now you know there are all these different stages. How to decide which of them to use? This section will attempt to address this question. 
In general the following applies: - Speed-wise (left is faster than right) Stage 0 (DDP) > Stage 1 > Stage 2 > Stage 2 + offload > Stage 3 > Stage 3 + offloads - GPU Memory usage-wise (right is more GPU memory efficient than left) Stage 0 (DDP) < Stage 1 < Stage 2 < Stage 2 + offload < Stage 3 < Stage 3 + offloads So when you want to get the fastest execution while fitting into minimal number of GPUs, here is the process you could follow. We start with the fastest approach and if running into GPU OOM we then go to the next slower approach, but which will use less GPU memory. And so on and so forth. First of all set batch size to 1 (you can always use gradient accumulation for any desired effective batch size). 1. Enable `--gradient_checkpointing 1` (HF Trainer) or directly `model.gradient_checkpointing_enable()` - if OOM then 2. Try ZeRO stage 2 first. if OOM then 3. Try ZeRO stage 2 + `offload_optimizer` - if OOM then 4. Switch to ZeRO stage 3 - if OOM then 5. Enable `offload_param` to `cpu` - if OOM then 6. Enable `offload_optimizer` to `cpu` - if OOM then 7. If you still can't fit a batch size of 1 first check various default values and lower them if you can. For example, if you use `generate` and you don't use a wide search beam make it narrower as it'd take a lot of memory. 8. Definitely use mixed half-precision over fp32 - so bf16 on Ampere and higher GPUs and fp16 on older gpu architectures. 9. If you still OOM you could add more hardware or enable ZeRO-Infinity - that is switch offloads `offload_param` and `offload_optimizer` to `nvme`. You need to make sure it's a very fast nvme. As an anecdote I was able to infer BLOOM-176B on a tiny GPU using ZeRO-Infinity except it was extremely slow. But it worked! You can, of course, work through these steps in reverse by starting with the most GPU memory efficient config and then going backwards. Or try bi-secting it. Once you have your batch size 1 not leading to OOM, measure your effective throughput. Next try to increase the batch size to as large as you can, since the higher the batch size the more efficient the GPUs are as they perform the best when matrices they multiply are huge. Now the performance optimization game starts. You can turn off some offload features or step down in ZeRO stages and increase/decrease batch size and again measure your effective throughput. Rinse and repeat until satisfied. Don't spend forever on it, but if you're about to start a 3 months training - do spend a few days on it to find the most effective throughput-wise setup. So that your training cost will be the lowest and you will finish training faster. In the current crazy-paced ML world, if it takes you an extra month to train something you are likely to miss a golden opportunity. Of course, this is only me sharing an observation and in no way I'm trying to rush you. Before beginning to train BLOOM-176B I spent 2 days on this process and was able to increase throughput from 90 to 150 TFLOPs! This effort saved us more than one month of training time. These notes were written primarily for the training mode, but they should mostly apply for inference as well. For example, during inference Gradient Checkpointing is a no-op since it is only useful during training. Additionally, we found out that if you are doing a multi-GPU inference and not using [DeepSpeed-Inference](https://www.deepspeed.ai/tutorials/inference-tutorial/), [Accelerate](https://huggingface.co/blog/bloom-inference-pytorch-scripts) should provide a superior performance. 
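One rough way to compare the effective throughput of two setups is to read the speed metrics the [`Trainer`] reports after `train()`. A sketch - it assumes a `trainer` object built as in the earlier snippets, and the metric names below are the Trainer's standard speed metrics, so adjust them if your version reports different keys:

```python
# run a (short) training and read the reported speed metrics
train_result = trainer.train()
metrics = train_result.metrics

print(f"runtime:     {metrics['train_runtime']:.0f}s")
print(f"samples/sec: {metrics['train_samples_per_second']:.2f}")
```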
Other quick related performance notes:
- if you are training something from scratch always try to have tensors with shapes that are divisible by 16 (e.g. hidden size). For batch size try divisible by 2 at least. There are hardware-specific [wave and tile quantization](https://developer.nvidia.com/blog/optimizing-gpu-performance-tensor-cores/) divisibility rules beyond that if you want to squeeze even higher performance from your GPUs.

### Activation Checkpointing or Gradient Checkpointing

Activation checkpointing and gradient checkpointing are two distinct terms that refer to the same methodology. It's very confusing but this is how it is.

Gradient checkpointing allows one to trade speed for GPU memory, which either allows one to overcome a GPU OOM, or to increase the batch size, which often leads to better performance.

HF Transformers models don't know anything about DeepSpeed's activation checkpointing, so if you try to enable that feature in the DeepSpeed config file, nothing will happen.

Therefore you have two ways to take advantage of this very beneficial feature:

1. If you want to use a HF Transformers model you can do `model.gradient_checkpointing_enable()` or use `--gradient_checkpointing` in the HF Trainer, which will automatically enable this for you. `torch.utils.checkpoint` is used there.
2. If you write your own model and you want to use DeepSpeed's activation checkpointing you can use the [API prescribed there](https://deepspeed.readthedocs.io/en/latest/activation-checkpointing.html). You can also take the HF Transformers modeling code and replace `torch.utils.checkpoint` with DeepSpeed's API. The latter is more flexible since it allows you to offload the forward activations to the CPU memory instead of recalculating them.

### Optimizer and Scheduler

As long as you don't enable `offload_optimizer` you can mix and match DeepSpeed and HuggingFace schedulers and optimizers, with the exception of using the combination of a HuggingFace scheduler and a DeepSpeed optimizer:

| Combos       | HF Scheduler | DS Scheduler |
|:-------------|:-------------|:-------------|
| HF Optimizer | Yes          | Yes          |
| DS Optimizer | No           | Yes          |

It is possible to use a non-DeepSpeed optimizer when `offload_optimizer` is enabled, as long as it has both a CPU and a GPU implementation (except LAMB).

<a id='deepspeed-optimizer'></a>

#### Optimizer

DeepSpeed's main optimizers are Adam, AdamW, OneBitAdam, and Lamb. These have been thoroughly tested with ZeRO and are thus recommended to be used. DeepSpeed can, however, import other optimizers from `torch`. The full documentation is [here](https://www.deepspeed.ai/docs/config-json/#optimizer-parameters).

If you don't configure the `optimizer` entry in the configuration file, the [`Trainer`] will automatically set it to `AdamW` and will use the supplied values or the defaults for the following command line arguments: `--learning_rate`, `--adam_beta1`, `--adam_beta2`, `--adam_epsilon` and `--weight_decay`.

Here is an example of the auto-configured `optimizer` entry for `AdamW`:

```json
{
   "optimizer": {
       "type": "AdamW",
       "params": {
         "lr": "auto",
         "betas": "auto",
         "eps": "auto",
         "weight_decay": "auto"
       }
   }
}
```

Note that the command line arguments will set the values in the configuration file. This is so that there is one definitive source of the values and to avoid hard to find errors when, for example, the learning rate is set to different values in different places. Command line rules.
The values that get overridden are: - `lr` with the value of `--learning_rate` - `betas` with the value of `--adam_beta1 --adam_beta2` - `eps` with the value of `--adam_epsilon` - `weight_decay` with the value of `--weight_decay` Therefore please remember to tune the shared hyperparameters on the command line. You can also set the values explicitly: ```json { "optimizer": { "type": "AdamW", "params": { "lr": 0.001, "betas": [0.8, 0.999], "eps": 1e-8, "weight_decay": 3e-7 } } } ``` But then you're on your own synchronizing the [`Trainer`] command line arguments and the DeepSpeed configuration. If you want to use another optimizer which is not listed above, you will have to add to the top level configuration. ```json { "zero_allow_untested_optimizer": true } ``` Similarly to `AdamW`, you can configure other officially supported optimizers. Just remember that those may have different config values. e.g. for Adam you will want `weight_decay` around `0.01`. Additionally, offload works the best when it's used with Deepspeed's CPU Adam optimizer. If you want to use a different optimizer with offload, since `deepspeed==0.8.3` you need to also add: ```json { "zero_force_ds_cpu_optimizer": false } ``` to the top level configuration. <a id='deepspeed-scheduler'></a> #### Scheduler DeepSpeed supports `LRRangeTest`, `OneCycle`, `WarmupLR` and `WarmupDecayLR` learning rate schedulers. The full documentation is [here](https://www.deepspeed.ai/docs/config-json/#scheduler-parameters). Here is where the schedulers overlap between 🤗 Transformers and DeepSpeed: - `WarmupLR` via `--lr_scheduler_type constant_with_warmup` - `WarmupDecayLR` via `--lr_scheduler_type linear`. This is also the default value for `--lr_scheduler_type`, therefore, if you don't configure the scheduler this is scheduler that will get configured by default. If you don't configure the `scheduler` entry in the configuration file, the [`Trainer`] will use the values of `--lr_scheduler_type`, `--learning_rate` and `--warmup_steps` or `--warmup_ratio` to configure a 🤗 Transformers version of it. Here is an example of the auto-configured `scheduler` entry for `WarmupLR`: ```json { "scheduler": { "type": "WarmupLR", "params": { "warmup_min_lr": "auto", "warmup_max_lr": "auto", "warmup_num_steps": "auto" } } } ``` Since *"auto"* is used the [`Trainer`] arguments will set the correct values in the configuration file. This is so that there is one definitive source of the values and to avoid hard to find errors when, for example, the learning rate is set to different values in different places. Command line rules. The values that get set are: - `warmup_min_lr` with the value of `0`. - `warmup_max_lr` with the value of `--learning_rate`. - `warmup_num_steps` with the value of `--warmup_steps` if provided. Otherwise will use `--warmup_ratio` multiplied by the number of training steps and rounded up. - `total_num_steps` with either the value of `--max_steps` or if it is not provided, derived automatically at run time based on the environment and the size of the dataset and other command line arguments (needed for `WarmupDecayLR`). You can, of course, take over any or all of the configuration values and set those yourself: ```json { "scheduler": { "type": "WarmupLR", "params": { "warmup_min_lr": 0, "warmup_max_lr": 0.001, "warmup_num_steps": 1000 } } } ``` But then you're on your own synchronizing the [`Trainer`] command line arguments and the DeepSpeed configuration. 
For example, for `WarmupDecayLR`, you can use the following entry:

```json
{
   "scheduler": {
         "type": "WarmupDecayLR",
         "params": {
             "last_batch_iteration": -1,
             "total_num_steps": "auto",
             "warmup_min_lr": "auto",
             "warmup_max_lr": "auto",
             "warmup_num_steps": "auto"
         }
     }
}
```

and `total_num_steps`, `warmup_min_lr`, `warmup_max_lr` and `warmup_num_steps` will be set at loading time.

<a id='deepspeed-fp32'></a>

### fp32 Precision

Deepspeed supports the full fp32 and the fp16 mixed precision.

Because of the much reduced memory needs and faster speed one gets with the fp16 mixed precision, the only time you will want to not use it is when the model you're using doesn't behave well under this training mode. Typically this happens when the model wasn't pretrained in the fp16 mixed precision (e.g. often this happens with bf16-pretrained models). Such models may overflow or underflow, leading to `NaN` loss. If this is your case then you will want to use the full fp32 mode, by explicitly disabling the otherwise default fp16 mixed precision mode with:

```json
{
    "fp16": {
        "enabled": false
    }
}
```

If you're using an Ampere-architecture based GPU, pytorch version 1.7 and higher will automatically switch to using the much more efficient tf32 format for some operations, but the results will still be in fp32. For details and benchmarks, please see [TensorFloat-32(TF32) on Ampere devices](https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices). The document includes instructions on how to disable this automatic conversion if for some reason you prefer not to use it.

With the 🤗 Trainer you can use `--tf32` to enable it, or disable it with `--tf32 0` or `--no_tf32`. If you don't pass any of these flags, the PyTorch default is used.

<a id='deepspeed-amp'></a>

### Automatic Mixed Precision

You can use automatic mixed precision with either the pytorch-like AMP way or the apex-like way:

### fp16

To configure pytorch AMP-like mode with fp16 (float16) set:

```json
{
    "fp16": {
        "enabled": "auto",
        "loss_scale": 0,
        "loss_scale_window": 1000,
        "initial_scale_power": 16,
        "hysteresis": 2,
        "min_loss_scale": 1
    }
}
```

and the [`Trainer`] will automatically enable or disable it based on the value of `args.fp16_backend`. The rest of the config values are up to you.

This mode gets enabled when the `--fp16 --fp16_backend amp` or `--fp16_full_eval` command line args are passed.

You can also enable/disable this mode explicitly:

```json
{
    "fp16": {
        "enabled": true,
        "loss_scale": 0,
        "loss_scale_window": 1000,
        "initial_scale_power": 16,
        "hysteresis": 2,
        "min_loss_scale": 1
    }
}
```

But then you're on your own synchronizing the [`Trainer`] command line arguments and the DeepSpeed configuration.

Here is the [documentation](https://www.deepspeed.ai/docs/config-json/#fp16-training-options).

### bf16

If bf16 (bfloat16) is desired instead of fp16 then the following configuration section is to be used:

```json
{
    "bf16": {
        "enabled": "auto"
    }
}
```

bf16 has the same dynamic range as fp32 and thus doesn't require loss scaling.

This mode gets enabled when the `--bf16` or `--bf16_full_eval` command line args are passed.

You can also enable/disable this mode explicitly:

```json
{
    "bf16": {
        "enabled": true
    }
}
```

<Tip>

As of `deepspeed==0.6.0` the bf16 support is new and experimental.

If you use [gradient accumulation](#gradient-accumulation) with bf16 enabled, you need to be aware that it'll accumulate gradients in bf16, which may not be what you want due to this format's low precision, as it may lead to a lossy accumulation.
Work is being done to fix that and provide an option to use a higher precision `dtype` (fp16 or fp32).

</Tip>

### NCCL Collectives

There is the `dtype` of the training regime and there is a separate `dtype` that is used for communication collectives like various reduction and gathering/scattering operations.

All gather/scatter ops are performed in the same `dtype` the data is in, so if you're using a bf16 training regime it gets gathered in bf16 - gathering is a non-lossy operation.

Various reduce operations can be quite lossy, for example when gradients are averaged across multiple GPUs. If the communications are done in fp16 or bf16 the outcome is likely to be lossy - since when one adds multiple numbers in low precision the result isn't exact. More so with bf16 as it has a lower precision than fp16. Often fp16 is good enough as the loss is minimal when averaging grads which are typically very small. Therefore, for half precision training fp16 is used as the default for reduction operations.

But you have full control over this functionality and if you choose you can add a small overhead and ensure that reductions will be using fp32 as the accumulation dtype, and only when the result is ready will it get downcast to the half precision `dtype` you're training in.

In order to override the default you simply add a new configuration entry:

```json
{
    "communication_data_type": "fp32"
}
```

The valid values as of this writing are "fp16", "bfp16", "fp32".

Note: ZeRO stage 3 had a bug with regards to the bf16 communication dtype that was fixed in `deepspeed==0.8.1`.

### apex

To configure apex AMP-like mode set:

```json
{
    "amp": {
        "enabled": "auto",
        "opt_level": "auto"
    }
}
```

and the [`Trainer`] will automatically configure it based on the values of `args.fp16_backend` and `args.fp16_opt_level`.

This mode gets enabled when the `--fp16 --fp16_backend apex --fp16_opt_level O1` command line args are passed.

You can also configure this mode explicitly:

```json
{
    "amp": {
        "enabled": true,
        "opt_level": "O1"
    }
}
```

But then you're on your own synchronizing the [`Trainer`] command line arguments and the DeepSpeed configuration.

Here is the [documentation](https://www.deepspeed.ai/docs/config-json/#automatic-mixed-precision-amp-training-options).

<a id='deepspeed-bs'></a>

### Batch Size

To configure batch size, use:

```json
{
    "train_batch_size": "auto",
    "train_micro_batch_size_per_gpu": "auto"
}
```

and the [`Trainer`] will automatically set `train_micro_batch_size_per_gpu` to the value of `args.per_device_train_batch_size` and `train_batch_size` to `args.world_size * args.per_device_train_batch_size * args.gradient_accumulation_steps`.

You can also set the values explicitly:

```json
{
    "train_batch_size": 12,
    "train_micro_batch_size_per_gpu": 4
}
```

But then you're on your own synchronizing the [`Trainer`] command line arguments and the DeepSpeed configuration.

<a id='deepspeed-grad-acc'></a>

### Gradient Accumulation

To configure gradient accumulation set:

```json
{
    "gradient_accumulation_steps": "auto"
}
```

and the [`Trainer`] will automatically set it to the value of `args.gradient_accumulation_steps`.

You can also set the value explicitly:

```json
{
    "gradient_accumulation_steps": 3
}
```

But then you're on your own synchronizing the [`Trainer`] command line arguments and the DeepSpeed configuration.
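To tie the batch size and gradient accumulation entries together, here is a small sketch of the arithmetic the [`Trainer`] performs when it resolves these `auto` values (the numbers are just an illustration):

```python
# illustrative values
per_device_train_batch_size = 4
gradient_accumulation_steps = 3
world_size = 2  # number of participating GPUs/processes

# what the Trainer plugs into the DeepSpeed config
train_micro_batch_size_per_gpu = per_device_train_batch_size
train_batch_size = world_size * per_device_train_batch_size * gradient_accumulation_steps

print(train_micro_batch_size_per_gpu, train_batch_size)  # 4 24
```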
<a id='deepspeed-grad-clip'></a>

### Gradient Clipping

To configure gradient clipping set:

```json
{
    "gradient_clipping": "auto"
}
```

and the [`Trainer`] will automatically set it to the value of `args.max_grad_norm`.

You can also set the value explicitly:

```json
{
    "gradient_clipping": 1.0
}
```

But then you're on your own synchronizing the [`Trainer`] command line arguments and the DeepSpeed configuration.

<a id='deepspeed-weight-extraction'></a>

### Getting The Model Weights Out

As long as you continue training and resuming using DeepSpeed you don't need to worry about anything. DeepSpeed stores fp32 master weights in its custom checkpoint optimizer files, which are `global_step*/*optim_states.pt` (this is a glob pattern), and are saved under the normal checkpoint.

**FP16 Weights:**

When a model is saved under ZeRO-2, you end up having the normal `pytorch_model.bin` file with the model weights, but they are only the fp16 version of the weights.

Under ZeRO-3, things are much more complicated, since the model weights are partitioned out over multiple GPUs, therefore `"stage3_gather_16bit_weights_on_model_save": true` is required to get the `Trainer` to save the fp16 version of the weights. If this setting is `false`, `pytorch_model.bin` won't be created. This is because by default DeepSpeed's `state_dict` contains a placeholder and not the real weights. If we were to save this `state_dict`, it wouldn't be possible to load it back.

```json
{
    "zero_optimization": {
        "stage3_gather_16bit_weights_on_model_save": true
    }
}
```

**FP32 Weights:**

While the fp16 weights are fine for resuming training, if you finished finetuning your model and want to upload it to the [models hub](https://huggingface.co/models) or pass it to someone else you most likely will want to get the fp32 weights. This ideally shouldn't be done during training since it is a process that requires a lot of memory, and therefore is best performed offline after the training is complete. But if desired and you have plenty of free CPU memory it can be done in the same training script. The following sections will discuss both approaches.

**Live FP32 Weights Recovery:**

This approach may not work if your model is large and you have little free CPU memory left at the end of the training.

If you have saved at least one checkpoint, and you want to use the latest one, you can do the following:

```python
from transformers.trainer_utils import get_last_checkpoint
from deepspeed.utils.zero_to_fp32 import load_state_dict_from_zero_checkpoint

checkpoint_dir = get_last_checkpoint(trainer.args.output_dir)
fp32_model = load_state_dict_from_zero_checkpoint(trainer.model, checkpoint_dir)
```

If you're using the `--load_best_model_at_end` [`TrainingArguments`] argument (to track the best checkpoint), then you can finish the training by first saving the final model explicitly and then doing the same as above:

```python
import os

from deepspeed.utils.zero_to_fp32 import load_state_dict_from_zero_checkpoint

checkpoint_dir = os.path.join(trainer.args.output_dir, "checkpoint-final")
trainer.deepspeed.save_checkpoint(checkpoint_dir)
fp32_model = load_state_dict_from_zero_checkpoint(trainer.model, checkpoint_dir)
```

<Tip>

Note that once `load_state_dict_from_zero_checkpoint` is run, the `model` will no longer be usable in the DeepSpeed context of the same application. i.e. you will need to re-initialize the deepspeed engine, since `model.load_state_dict(state_dict)` will remove all the DeepSpeed magic from it.
So do this only at the very end of the training.

</Tip>

Of course, you don't have to use the [`Trainer`] and you can adjust the examples above to your own trainer.

If for some reason you want more refinement, you can also extract the fp32 `state_dict` of the weights and apply it yourself as is shown in the following example:

```python
from deepspeed.utils.zero_to_fp32 import get_fp32_state_dict_from_zero_checkpoint

state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir)  # already on cpu
model = model.cpu()
model.load_state_dict(state_dict)
```

**Offline FP32 Weights Recovery:**

DeepSpeed creates a special conversion script `zero_to_fp32.py` which it places in the top-level of the checkpoint folder. Using this script you can extract the weights at any point. The script is standalone and you no longer need to have the configuration file or a `Trainer` to do the extraction.

Let's say your checkpoint folder looks like this:

```bash
$ ls -l output_dir/checkpoint-1/
-rw-rw-r-- 1 stas stas 1.4K Mar 27 20:42 config.json
drwxrwxr-x 2 stas stas 4.0K Mar 25 19:52 global_step1/
-rw-rw-r-- 1 stas stas   12 Mar 27 13:16 latest
-rw-rw-r-- 1 stas stas 827K Mar 27 20:42 optimizer.pt
-rw-rw-r-- 1 stas stas 231M Mar 27 20:42 pytorch_model.bin
-rw-rw-r-- 1 stas stas  623 Mar 27 20:42 scheduler.pt
-rw-rw-r-- 1 stas stas 1.8K Mar 27 20:42 special_tokens_map.json
-rw-rw-r-- 1 stas stas 774K Mar 27 20:42 spiece.model
-rw-rw-r-- 1 stas stas 1.9K Mar 27 20:42 tokenizer_config.json
-rw-rw-r-- 1 stas stas  339 Mar 27 20:42 trainer_state.json
-rw-rw-r-- 1 stas stas 2.3K Mar 27 20:42 training_args.bin
-rwxrw-r-- 1 stas stas 5.5K Mar 27 13:16 zero_to_fp32.py*
```

In this example there is just one DeepSpeed checkpoint sub-folder *global_step1*. Therefore to reconstruct the fp32 weights just run:

```bash
python zero_to_fp32.py . pytorch_model.bin
```

This is it. `pytorch_model.bin` will now contain the full fp32 model weights consolidated from multiple GPUs.

The script will automatically be able to handle either a ZeRO-2 or ZeRO-3 checkpoint.

`python zero_to_fp32.py -h` will give you usage details.

The script will auto-discover the deepspeed sub-folder using the contents of the file `latest`, which in the current example will contain `global_step1`.

Note: currently the script requires 2x the general RAM of the final fp32 model weights.

### ZeRO-3 and Infinity Nuances

ZeRO-3 is quite different from ZeRO-2 because of its param sharding feature.

ZeRO-Infinity further extends ZeRO-3 to support NVMe memory and multiple other speed and scalability improvements.

While all the efforts were made for things to just work without needing any special changes to your models, in certain circumstances you may find the following information to be needed.

#### Constructing Massive Models

DeepSpeed/ZeRO-3 can handle models with trillions of parameters which may not fit into the existing RAM. In such cases, but also if you want the initialization to happen much faster, initialize the model using the *deepspeed.zero.Init()* context manager (which is also a function decorator), like so:

```python
from transformers import T5ForConditionalGeneration, T5Config
import deepspeed

with deepspeed.zero.Init():
    config = T5Config.from_pretrained("t5-small")
    model = T5ForConditionalGeneration(config)
```

As you can see this gives you a randomly initialized model.
If you want to use a pretrained model, `model_class.from_pretrained` will activate this feature as long as `is_deepspeed_zero3_enabled()` returns `True`, which currently is set up by the [`TrainingArguments`] object if the passed DeepSpeed configuration file contains a ZeRO-3 config section. Thus you must create the [`TrainingArguments`] object **before** calling `from_pretrained`. Here is an example of a possible sequence:

```python
from transformers import AutoModel, Trainer, TrainingArguments

training_args = TrainingArguments(..., deepspeed=ds_config)
model = AutoModel.from_pretrained("t5-small")
trainer = Trainer(model=model, args=training_args, ...)
```

If you're using the official example scripts and your command line arguments include `--deepspeed ds_config.json` with ZeRO-3 config enabled, then everything is already done for you, since this is how the example scripts are written.

Note: If the fp16 weights of the model can't fit onto the memory of a single GPU this feature must be used.

For full details on this method and other related features please refer to [Constructing Massive Models](https://deepspeed.readthedocs.io/en/latest/zero3.html#constructing-massive-models).

Also when loading fp16-pretrained models, you will want to tell `from_pretrained` to use `torch_dtype=torch.float16`. For details, please see [from_pretrained-torch-dtype](#from_pretrained-torch-dtype).

#### Gathering Parameters

Under ZeRO-3 on multiple GPUs no single GPU has all the parameters unless it's the parameters for the currently executing layer. So if you need to access all parameters from all layers at once there is a specific method to do it. Most likely you won't need it, but if you do please refer to [Gathering Parameters](https://deepspeed.readthedocs.io/en/latest/zero3.html#manual-parameter-coordination).

We do, however, use it internally in several places; one such example is when loading pretrained model weights in `from_pretrained`. We load one layer at a time and immediately partition it to all participating GPUs, as for very large models it won't be possible to load it on one GPU and then spread it out to multiple GPUs, due to memory limitations.

Also under ZeRO-3, if you write your own code and run into a model parameter weight that looks like:

```python
tensor([1.0], device="cuda:0", dtype=torch.float16, requires_grad=True)
```

stress on `tensor([1.])`, or if you get an error where it says the parameter is of size `1`, instead of some much larger multi-dimensional shape, this means that the parameter is partitioned and what you see is a ZeRO-3 placeholder.

<a id='deepspeed-zero-inference'></a>

### ZeRO Inference

ZeRO Inference uses the same config as ZeRO-3 Training. You just don't need the optimizer and scheduler sections. In fact you can leave these in the config file if you want to share the same one with the training. They will just be ignored.

Otherwise you just need to pass the usual [`TrainingArguments`] arguments. For example:

```bash
deepspeed --num_gpus=2 your_program.py <normal cl args> --do_eval --deepspeed ds_config.json
```

The only important thing is that you need to use a ZeRO-3 configuration, since ZeRO-2 provides no benefit whatsoever for inference: only ZeRO-3 shards the parameters, whereas ZeRO-1 shards just the optimizer states and ZeRO-2 additionally shards the gradients.
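If you are scripting the evaluation yourself rather than using the example scripts, a minimal sketch could look as follows. It assumes an `eval_dataset` you have already prepared, reuses the ZeRO-3 config file from earlier, and creates the [`TrainingArguments`] object before loading the model, as explained in the Constructing Massive Models section:

```python
from transformers import AutoModelForSeq2SeqLM, Trainer, TrainingArguments

# create the TrainingArguments first so that the ZeRO-3 context is detected by from_pretrained
training_args = TrainingArguments(
    output_dir="output_dir",
    do_eval=True,
    per_device_eval_batch_size=4,
    fp16=True,
    deepspeed="ds_config_zero3.json",  # the ZeRO-3 config discussed above
)

model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
trainer = Trainer(model=model, args=training_args, eval_dataset=eval_dataset)  # eval_dataset assumed
metrics = trainer.evaluate()
```

You would still launch such a script with the `deepspeed` launcher as shown above.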
Here is an example of running `run_translation.py` under DeepSpeed deploying all available GPUs:

```bash
deepspeed examples/pytorch/translation/run_translation.py \
--deepspeed tests/deepspeed/ds_config_zero3.json \
--model_name_or_path t5-small --output_dir output_dir \
--do_eval --max_eval_samples 50 --warmup_steps 50  \
--max_source_length 128 --val_max_target_length 128 \
--overwrite_output_dir --per_device_eval_batch_size 4 \
--predict_with_generate --dataset_config "ro-en" --fp16 \
--source_lang en --target_lang ro --dataset_name wmt16 \
--source_prefix "translate English to Romanian: "
```

Since for inference there is no need for the additional large memory used by the optimizer states and the gradients you should be able to fit much larger batches and/or sequence lengths onto the same hardware.

Additionally DeepSpeed is currently developing a related product called Deepspeed-Inference which has no relationship to the ZeRO technology, but instead uses tensor parallelism to scale models that can't fit onto a single GPU. This is a work in progress and we will provide the integration once that product is complete.

### Memory Requirements

Since Deepspeed ZeRO can offload memory to CPU (and NVMe) the framework provides utils that allow one to tell how much CPU and GPU memory will be needed depending on the number of GPUs being used.

Let's estimate how much memory is needed to finetune "bigscience/T0_3B" on a single GPU:

```bash
$ python -c 'from transformers import AutoModel; \
from deepspeed.runtime.zero.stage3 import estimate_zero3_model_states_mem_needs_all_live; \
model = AutoModel.from_pretrained("bigscience/T0_3B"); \
estimate_zero3_model_states_mem_needs_all_live(model, num_gpus_per_node=1, num_nodes=1)'
[...]
Estimated memory needed for params, optim states and gradients for a:
HW: Setup with 1 node, 1 GPU per node.
SW: Model with 2783M total params, 65M largest layer params.
  per CPU  |  per GPU |   Options
   70.00GB |   0.25GB | offload_param=cpu , offload_optimizer=cpu , zero_init=1
   70.00GB |   0.25GB | offload_param=cpu , offload_optimizer=cpu , zero_init=0
   62.23GB |   5.43GB | offload_param=none, offload_optimizer=cpu , zero_init=1
   62.23GB |   5.43GB | offload_param=none, offload_optimizer=cpu , zero_init=0
    0.37GB |  46.91GB | offload_param=none, offload_optimizer=none, zero_init=1
   15.56GB |  46.91GB | offload_param=none, offload_optimizer=none, zero_init=0
```

So you can fit it on a single 80GB GPU and no CPU offload, or a tiny 8GB GPU but then you need ~60GB of CPU memory. (Remember this is just the memory for params, optimizer states and gradients - you will need a bit more memory for cuda kernels, activations and temps.)

Then it's a tradeoff of cost vs speed. It'll be cheaper to buy/rent a smaller GPU (or fewer GPUs, since you can use multiple GPUs with Deepspeed ZeRO), but then it'll be slower. So even if you don't care about how fast something will be done, the slowdown has a direct impact on the duration of using the GPU and thus bigger cost. So experiment and compare which works the best.

If you have enough GPU memory make sure to disable the CPU/NVMe offload as it'll make everything faster.

For example, let's repeat the same for 2 GPUs:

```bash
$ python -c 'from transformers import AutoModel; \
from deepspeed.runtime.zero.stage3 import estimate_zero3_model_states_mem_needs_all_live; \
model = AutoModel.from_pretrained("bigscience/T0_3B"); \
estimate_zero3_model_states_mem_needs_all_live(model, num_gpus_per_node=2, num_nodes=1)'
[...]
Estimated memory needed for params, optim states and gradients for a:
HW: Setup with 1 node, 2 GPUs per node.
SW: Model with 2783M total params, 65M largest layer params.
  per CPU  |  per GPU |   Options
   70.00GB |   0.25GB | offload_param=cpu , offload_optimizer=cpu , zero_init=1
   70.00GB |   0.25GB | offload_param=cpu , offload_optimizer=cpu , zero_init=0
   62.23GB |   2.84GB | offload_param=none, offload_optimizer=cpu , zero_init=1
   62.23GB |   2.84GB | offload_param=none, offload_optimizer=cpu , zero_init=0
    0.74GB |  23.58GB | offload_param=none, offload_optimizer=none, zero_init=1
   31.11GB |  23.58GB | offload_param=none, offload_optimizer=none, zero_init=0
```

So here you'd want 2x 32GB GPUs or higher without offloading to CPU.

For full information please see [memory estimators](https://deepspeed.readthedocs.io/en/latest/memory.html).

### Filing Issues

Here is how to file an issue so that we can quickly get to the bottom of the issue and help you to unblock your work.

In your report please always include:

1. the full Deepspeed config file in the report
2. either the command line arguments if you were using the [`Trainer`] or [`TrainingArguments`] arguments if you were scripting the Trainer setup yourself. Please do not dump the [`TrainingArguments`] as it has dozens of entries that are irrelevant.
3. Output of:

```bash
python -c 'import torch; print(f"torch: {torch.__version__}")'
python -c 'import transformers; print(f"transformers: {transformers.__version__}")'
python -c 'import deepspeed; print(f"deepspeed: {deepspeed.__version__}")'
```

4. If possible include a link to a Google Colab notebook that we can reproduce the problem with. You can use this [notebook](https://github.com/stas00/porting/blob/master/transformers/deepspeed/DeepSpeed_on_colab_CLI.ipynb) as a starting point.
5. Unless it's impossible please always use a standard dataset that we can use and not something custom.
6. If possible try to use one of the existing [examples](https://github.com/huggingface/transformers/tree/main/examples/pytorch) to reproduce the problem with.

Things to consider:

- Deepspeed is often not the cause of the problem. Some of the filed issues proved to be Deepspeed-unrelated; that is, once Deepspeed was removed from the setup, the problem was still there. Therefore, if it's not absolutely obvious it's a DeepSpeed-related problem, as in you can see that there is an exception and you can see that DeepSpeed modules are involved, first re-test your setup without DeepSpeed in it. And only if the problem persists then do mention Deepspeed and supply all the required details.
- If it's clear to you that the issue is in the DeepSpeed core and not the integration part, please file the Issue directly with [Deepspeed](https://github.com/microsoft/DeepSpeed/). If you aren't sure, please do not worry, either Issue tracker will do; we will figure it out once you post it and redirect you to another Issue tracker if need be.

### Troubleshooting

#### the `deepspeed` process gets killed at startup without a traceback

If the `deepspeed` process gets killed at launch time without a traceback, that usually means that the program tried to allocate more CPU memory than your system has or your process is allowed to allocate and the OS kernel killed that process. This is because your configuration file most likely has either `offload_optimizer` or `offload_param` or both configured to offload to `cpu`. If you have NVMe, experiment with offloading to NVMe if you're running under ZeRO-3.
Here is how you can [estimate how much memory is needed for a specific model](https://deepspeed.readthedocs.io/en/latest/memory.html). #### training and/or eval/predict loss is `NaN` This often happens when one takes a model pre-trained in bf16 mixed precision mode and tries to use it under fp16 (with or without mixed precision). Most models trained on TPU and often the ones released by Google are in this category (e.g. almost all t5-based models). Here the solution is to either use fp32 or bf16 if your hardware supports it (TPU, Ampere GPUs or newer). The other problem may have to do with using fp16. When you configure this section: ```json { "fp16": { "enabled": "auto", "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 } } ``` and you see in your log that Deepspeed reports `OVERFLOW!` as follows: ``` 0%| | 0/189 [00:00<?, ?it/s] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 262144, reducing to 262144 1%|▌ | 1/189 [00:00<01:26, 2.17it/s] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 262144, reducing to 131072.0 1%|█▏ [...] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1 14%|████████████████▌ | 27/189 [00:14<01:13, 2.21it/s] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1 15%|█████████████████▏ | 28/189 [00:14<01:13, 2.18it/s] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1 15%|█████████████████▊ | 29/189 [00:15<01:13, 2.18it/s] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 1, reducing to 1 [...] ``` that means that the Deepspeed loss scaler can't figure out a scaling co-efficient that overcomes loss overflow. (the log was massaged to be more readable here.) In this case you usually need to raise the value of `initial_scale_power`. Setting it to `"initial_scale_power": 32` will typically resolve the problem. ### Notes - DeepSpeed works with the PyTorch [`Trainer`] but not TF [`TFTrainer`]. - While DeepSpeed has a pip installable PyPI package, it is highly recommended that it gets installed from [source](https://github.com/microsoft/deepspeed#installation) to best match your hardware and also if you need to enable certain features, like 1-bit Adam, which aren't available in the pypi distribution. - You don't have to use the [`Trainer`] to use DeepSpeed with 🤗 Transformers - you can use any model with your own trainer, and you will have to adapt the latter according to [the DeepSpeed integration instructions](https://www.deepspeed.ai/getting-started/#writing-deepspeed-models). ## Non-Trainer Deepspeed Integration The [`~integrations.HfDeepSpeedConfig`] is used to integrate Deepspeed into the 🤗 Transformers core functionality, when [`Trainer`] is not used. The only thing that it does is handling Deepspeed ZeRO-3 param gathering and automatically splitting the model onto multiple gpus during `from_pretrained` call. Everything else you have to do by yourself. When using [`Trainer`] everything is automatically taken care of. When not using [`Trainer`], to efficiently deploy DeepSpeed ZeRO-3, you must instantiate the [`~integrations.HfDeepSpeedConfig`] object before instantiating the model and keep that object alive. If you're using Deepspeed ZeRO-1 or ZeRO-2 you don't need to use `HfDeepSpeedConfig` at all. 
For example for a pretrained model: ```python from transformers.integrations import HfDeepSpeedConfig from transformers import AutoModel import deepspeed ds_config = {...} # deepspeed config object or path to the file # must run before instantiating the model to detect zero 3 dschf = HfDeepSpeedConfig(ds_config) # keep this object alive model = AutoModel.from_pretrained("gpt2") engine = deepspeed.initialize(model=model, config_params=ds_config, ...) ``` or for non-pretrained model: ```python from transformers.integrations import HfDeepSpeedConfig from transformers import AutoModel, AutoConfig import deepspeed ds_config = {...} # deepspeed config object or path to the file # must run before instantiating the model to detect zero 3 dschf = HfDeepSpeedConfig(ds_config) # keep this object alive config = AutoConfig.from_pretrained("gpt2") model = AutoModel.from_config(config) engine = deepspeed.initialize(model=model, config_params=ds_config, ...) ``` Please note that if you're not using the [`Trainer`] integration, you're completely on your own. Basically follow the documentation on the [Deepspeed](https://www.deepspeed.ai/) website. Also you have to configure explicitly the config file - you can't use `"auto"` values and you will have to put real values instead. ## HfDeepSpeedConfig [[autodoc]] integrations.HfDeepSpeedConfig - all ### Custom DeepSpeed ZeRO Inference Here is an example of how one could do DeepSpeed ZeRO Inference without using [`Trainer`] when one can't fit a model onto a single GPU. The solution includes using additional GPUs or/and offloading GPU memory to CPU memory. The important nuance to understand here is that the way ZeRO is designed you can process different inputs on different GPUs in parallel. The example has copious notes and is self-documenting. Make sure to: 1. disable CPU offload if you have enough GPU memory (since it slows things down) 2. enable bf16 if you own an Ampere or a newer GPU to make things faster. If you don't have that hardware you may enable fp16 as long as you don't use any model that was pre-trained in bf16 mixed precision (such as most t5 models). These usually overflow in fp16 and you will see garbage as output. ```python #!/usr/bin/env python # This script demonstrates how to use Deepspeed ZeRO in an inference mode when one can't fit a model # into a single GPU # # 1. Use 1 GPU with CPU offload # 2. Or use multiple GPUs instead # # First you need to install deepspeed: pip install deepspeed # # Here we use a 3B "bigscience/T0_3B" model which needs about 15GB GPU RAM - so 1 largish or 2 # small GPUs can handle it. or 1 small GPU and a lot of CPU memory. # # To use a larger model like "bigscience/T0" which needs about 50GB, unless you have an 80GB GPU - # you will need 2-4 gpus. And then you can adapt the script to handle more gpus if you want to # process multiple inputs at once. # # The provided deepspeed config also activates CPU memory offloading, so chances are that if you # have a lot of available CPU memory and you don't mind a slowdown you should be able to load a # model that doesn't normally fit into a single GPU. If you have enough GPU memory the program will # run faster if you don't want offload to CPU - so disable that section then. 
# # To deploy on 1 gpu: # # deepspeed --num_gpus 1 t0.py # or: # python -m torch.distributed.run --nproc_per_node=1 t0.py # # To deploy on 2 gpus: # # deepspeed --num_gpus 2 t0.py # or: # python -m torch.distributed.run --nproc_per_node=2 t0.py from transformers import AutoTokenizer, AutoConfig, AutoModelForSeq2SeqLM from transformers.integrations import HfDeepSpeedConfig import deepspeed import os import torch os.environ["TOKENIZERS_PARALLELISM"] = "false" # To avoid warnings about parallelism in tokenizers # distributed setup local_rank = int(os.getenv("LOCAL_RANK", "0")) world_size = int(os.getenv("WORLD_SIZE", "1")) torch.cuda.set_device(local_rank) deepspeed.init_distributed() model_name = "bigscience/T0_3B" config = AutoConfig.from_pretrained(model_name) model_hidden_size = config.d_model # batch size has to be divisible by world_size, but can be bigger than world_size train_batch_size = 1 * world_size # ds_config notes # # - enable bf16 if you use Ampere or higher GPU - this will run in mixed precision and will be # faster. # # - for older GPUs you can enable fp16, but it'll only work for non-bf16 pretrained models - e.g. # all official t5 models are bf16-pretrained # # - set offload_param.device to "none" or completely remove the `offload_param` section if you don't # - want CPU offload # # - if using `offload_param` you can manually finetune stage3_param_persistence_threshold to control # - which params should remain on gpus - the larger the value the smaller the offload size # # For indepth info on Deepspeed config see # https://huggingface.co/docs/transformers/main/main_classes/deepspeed # keeping the same format as json for consistency, except it uses lower case for true/false # fmt: off ds_config = { "fp16": { "enabled": False }, "bf16": { "enabled": False }, "zero_optimization": { "stage": 3, "offload_param": { "device": "cpu", "pin_memory": True }, "overlap_comm": True, "contiguous_gradients": True, "reduce_bucket_size": model_hidden_size * model_hidden_size, "stage3_prefetch_bucket_size": 0.9 * model_hidden_size * model_hidden_size, "stage3_param_persistence_threshold": 10 * model_hidden_size }, "steps_per_print": 2000, "train_batch_size": train_batch_size, "train_micro_batch_size_per_gpu": 1, "wall_clock_breakdown": False } # fmt: on # next line instructs transformers to partition the model directly over multiple gpus using # deepspeed.zero.Init when model's `from_pretrained` method is called. # # **it has to be run before loading the model AutoModelForSeq2SeqLM.from_pretrained(model_name)** # # otherwise the model will first be loaded normally and only partitioned at forward time which is # less efficient and when there is little CPU RAM may fail dschf = HfDeepSpeedConfig(ds_config) # keep this object alive # now a model can be loaded. model = AutoModelForSeq2SeqLM.from_pretrained(model_name) # initialise Deepspeed ZeRO and store only the engine object ds_engine = deepspeed.initialize(model=model, config_params=ds_config)[0] ds_engine.module.eval() # inference # Deepspeed ZeRO can process unrelated inputs on each GPU. So for 2 gpus you process 2 inputs at once. # If you use more GPUs adjust for more. # And of course if you have just one input to process you then need to pass the same string to both gpus # If you use only one GPU, then you will have only rank 0. rank = torch.distributed.get_rank() if rank == 0: text_in = "Is this review positive or negative? 
Review: this is the best cast iron skillet you will ever buy"
elif rank == 1:
    text_in = "Is this review positive or negative? Review: this is the worst restaurant ever"

tokenizer = AutoTokenizer.from_pretrained(model_name)
inputs = tokenizer.encode(text_in, return_tensors="pt").to(device=local_rank)
with torch.no_grad():
    outputs = ds_engine.module.generate(inputs, synced_gpus=True)
text_out = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(f"rank{rank}:\n   in={text_in}\n  out={text_out}")
```

Let's save it as `t0.py` and run it:

```
$ deepspeed --num_gpus 2 t0.py
rank0:
   in=Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy
  out=Positive
rank1:
   in=Is this review positive or negative? Review: this is the worst restaurant ever
  out=negative
```

This was a very basic example and you will want to adapt it to your needs.

### `generate` nuances

When using multiple GPUs with ZeRO Stage-3, one has to synchronize the GPUs by calling `generate(..., synced_gpus=True)`. If this is not done and one GPU finishes generating before the others, the whole system will hang, as the rest of the GPUs will not be able to receive the shard of weights from the GPU that stopped generating.

Starting from `transformers>=4.28`, if `synced_gpus` isn't explicitly specified, it'll be set to `True` automatically if these conditions are detected. But you can still override the value of `synced_gpus` if you need to.

## Testing Deepspeed Integration

If you submit a PR that involves the DeepSpeed integration, please note that our CircleCI PR CI setup has no GPUs, so we only run tests requiring GPUs on a different, nightly CI. Therefore, a green CI report in your PR doesn't mean the DeepSpeed tests pass.

To run DeepSpeed tests, please run at least:

```
RUN_SLOW=1 pytest tests/deepspeed/test_deepspeed.py
```

If you changed any of the modeling or PyTorch examples code, then run the model zoo tests as well. The following will run all DeepSpeed tests:

```
RUN_SLOW=1 pytest tests/deepspeed
```

## Main DeepSpeed Resources

- [Project's github](https://github.com/microsoft/deepspeed)
- [Usage docs](https://www.deepspeed.ai/getting-started/)
- [API docs](https://deepspeed.readthedocs.io/en/latest/index.html)
- [Blog posts](https://www.microsoft.com/en-us/research/search/?q=deepspeed)

Papers:

- [ZeRO: Memory Optimizations Toward Training Trillion Parameter Models](https://arxiv.org/abs/1910.02054)
- [ZeRO-Offload: Democratizing Billion-Scale Model Training](https://arxiv.org/abs/2101.06840)
- [ZeRO-Infinity: Breaking the GPU Memory Wall for Extreme Scale Deep Learning](https://arxiv.org/abs/2104.07857)

Finally, please remember that the HuggingFace [`Trainer`] only integrates DeepSpeed; therefore, if you have any problems or questions with regards to DeepSpeed usage, please file an issue with [DeepSpeed GitHub](https://github.com/microsoft/DeepSpeed/issues).
huggingface/transformers/blob/main/docs/source/en/main_classes/deepspeed.md
Download slices of rows Datasets Server provides a `/rows` endpoint for visualizing any slice of rows of a dataset. This will let you walk-through and inspect the data contained in a dataset. <div class="flex justify-center"> <img style="margin-bottom: 0;" class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets-server/oasst1_light.png"/> <img style="margin-bottom: 0;" class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets-server/oasst1_dark.png"/> </div> <Tip warning={true}> Currently, only {" "} <a href="./parquet">datasets with parquet exports</a> are supported so Datasets Server can extract any slice of rows without downloading the whole dataset. </Tip> This guide shows you how to use Datasets Server's `/rows` endpoint to download slices of a dataset. Feel free to also try it out with [Postman](https://www.postman.com/huggingface/workspace/hugging-face-apis/request/23242779-32d6a8be-b800-446a-8cee-f6b5ca1710df), [RapidAPI](https://rapidapi.com/hugging-face-hugging-face-default/api/hugging-face-datasets-api), or [ReDoc](https://redocly.github.io/redoc/?url=https://datasets-server.huggingface.co/openapi.json#operation/listFirstRows). The `/rows` endpoint accepts five query parameters: - `dataset`: the dataset name, for example `glue` or `mozilla-foundation/common_voice_10_0` - `config`: the configuration name, for example `cola` - `split`: the split name, for example `train` - `offset`: the offset of the slice, for example `150` - `length`: the length of the slice, for example `10` (maximum: `100`) <inferencesnippet> <python> ```python import requests headers = {"Authorization": f"Bearer {API_TOKEN}"} API_URL = "https://datasets-server.huggingface.co/rows?dataset=duorc&config=SelfRC&split=train&offset=150&length=10" def query(): response = requests.get(API_URL, headers=headers) return response.json() data = query() ``` </python> <js> ```js import fetch from "node-fetch"; async function query(data) { const response = await fetch( "https://datasets-server.huggingface.co/rows?dataset=duorc&config=SelfRC&split=train&offset=150&length=10", { headers: { Authorization: `Bearer ${API_TOKEN}` }, method: "GET" } ); const result = await response.json(); return result; } query().then((response) => { console.log(JSON.stringify(response)); }); ``` </js> <curl> ```curl curl https://datasets-server.huggingface.co/rows?dataset=duorc&config=SelfRC&split=train&offset=150&length=10 \ -X GET \ -H "Authorization: Bearer ${API_TOKEN}" ``` </curl> </inferencesnippet> The endpoint response is a JSON containing two keys: - The [`features`](https://huggingface.co/docs/datasets/about_dataset_features) of a dataset, including the column's name and data type. - The slice of `rows` of a dataset and the content contained in each column of a specific row. 
For example, here are the `features` and the slice of `rows` of the `duorc`/`SelfRC` train split from 150 to 151: ```json // https://datasets-server.huggingface.co/rows?dataset=duorc&config=SelfRC&split=train&offset=150&length=2 { "features": [ { "feature_idx": 0, "name": "plot_id", "type": { "dtype": "string", "_type": "Value" } }, { "feature_idx": 1, "name": "plot", "type": { "dtype": "string", "_type": "Value" } }, { "feature_idx": 2, "name": "title", "type": { "dtype": "string", "_type": "Value" } }, { "feature_idx": 3, "name": "question_id", "type": { "dtype": "string", "_type": "Value" } }, { "feature_idx": 4, "name": "question", "type": { "dtype": "string", "_type": "Value" } }, { "feature_idx": 5, "name": "answers", "type": { "feature": { "dtype": "string", "_type": "Value" }, "_type": "Sequence" } }, { "feature_idx": 6, "name": "no_answer", "type": { "dtype": "bool", "_type": "Value" } } ], "rows": [ { "row_idx": 150, "row": { "plot_id": "/m/03wj_q", "plot": "The film is centered on Mortal Kombat, a fighting tournament between the representatives of the realms of Earth and Outworld conceived by the Elder Gods amid looming invasion of the Earth by Outworld. If the realm of Outworld wins Mortal Kombat ten consecutive times, its Emperor Shao Kahn will be able to invade and conquer the Earth realm.\nShaolin monk Liu Kang and his comrades, movie star Johnny Cage and military officer Sonya Blade were handpicked by Raiden, the god of thunder and defender of the Earth realm, to overcome their powerful adversaries in order to prevent Outworld from winning their tenth straight Mortal Kombat tournament. Each of the three has his or her own reason for competing: Liu seeks revenge against the tournament host Shang Tsung for killing his brother Chan; Sonya seeks revenge on an Australian crime lord Kano; and Cage, having been branded as a fake by the media, seeks to prove otherwise.\nAt Shang Tsung's island, Liu is attracted to Princess Kitana, Shao Kahn's adopted daughter. Aware that Kitana is a dangerous adversary because she is the rightful heir to Outworld and that she will attempt to ally herself with the Earth warriors, Tsung orders the creature Reptile to spy on her. Liu defeats his first opponent and Sonya gets her revenge on Kano by snapping his neck. Cage encounters and barely beats Scorpion. Liu engages in a brief duel with Kitana, who secretly offers him cryptic advice for his next battle. Liu's next opponent is Sub-Zero, whose defense seems untouched because of his freezing abilities, until Liu recalls Kitana's advice and uses it to kill Sub-Zero.\nPrince Goro enters the tournament and mercilessly crushes every opponent he faces. One of Cage's peers, Art Lean, is defeated by Goro as well and has his soul taken by Shang Tsung. Sonya worries that they may not win against Goro, but Raiden disagrees. He reveals their own fears and egos are preventing them from winning the tournament.\nDespite Sonya's warning, Cage comes to Tsung to request a fight with Goro. The sorcerer accepts on the condition that he be allowed to challenge any opponent of his choosing, anytime and anywhere he chooses. Raiden tries to intervene, but the conditions are agreed upon before he can do so. After Shang Tsung leaves, Raiden confronts Cage for what he has done in challenging Goro, but is impressed when Cage shows his awareness of the gravity of the tournament. Cage faces Goro and uses guile and the element of surprise to defeat the defending champion. 
Now desperate, Tsung takes Sonya hostage and takes her to Outworld, intending to fight her as his opponent. Knowing that his powers are ineffective there and that Sonya cannot defeat Tsung by herself, Raiden sends Liu and Cage into Outworld in order to rescue Sonya and challenge Tsung. In Outworld, Liu is attacked by Reptile, but eventually gains the upper hand and defeats him. Afterward, Kitana meets up with Cage and Liu, revealing to the pair the origins of both herself and Outworld. Kitana allies with them and helps them to infiltrate Tsung's castle.\nInside the castle tower, Shang Tsung challenges Sonya to fight him, claiming that her refusal to accept will result in the Earth realm forfeiting Mortal Kombat (this is, in fact, a lie on Shang's part). All seems lost for Earth realm until Kitana, Liu, and Cage appear. Kitana berates Tsung for his treachery to the Emperor as Sonya is set free. Tsung challenges Cage, but is counter-challenged by Liu. During the lengthy battle, Liu faces not only Tsung, but the souls that Tsung had forcibly taken in past tournaments. In a last-ditch attempt to take advantage, Tsung morphs into Chan. Seeing through the charade, Liu renews his determination and ultimately fires an energy bolt at the sorcerer, knocking him down and impaling him on a row of spikes. Tsung's death releases all of the captive souls, including Chan's. Before ascending to the afterlife, Chan tells Liu that he will remain with him in spirit until they are once again reunited, after Liu dies.\nThe warriors return to Earth realm, where a victory celebration is taking place at the Shaolin temple. The jubilation abruptly stops, however, when Shao Kahn's giant figure suddenly appears in the skies. When the Emperor declares that he has come for everyone's souls, the warriors take up fighting stances.", "title": "Mortal Kombat", "question_id": "40c1866a-b214-11ba-be57-8979d2cefa90", "question": "Where is Sonya taken to?", "answers": ["Outworld"], "no_answer": false }, "truncated_cells": [] }, { "row_idx": 151, "row": { "plot_id": "/m/03wj_q", "plot": "The film is centered on Mortal Kombat, a fighting tournament between the representatives of the realms of Earth and Outworld conceived by the Elder Gods amid looming invasion of the Earth by Outworld. If the realm of Outworld wins Mortal Kombat ten consecutive times, its Emperor Shao Kahn will be able to invade and conquer the Earth realm.\nShaolin monk Liu Kang and his comrades, movie star Johnny Cage and military officer Sonya Blade were handpicked by Raiden, the god of thunder and defender of the Earth realm, to overcome their powerful adversaries in order to prevent Outworld from winning their tenth straight Mortal Kombat tournament. Each of the three has his or her own reason for competing: Liu seeks revenge against the tournament host Shang Tsung for killing his brother Chan; Sonya seeks revenge on an Australian crime lord Kano; and Cage, having been branded as a fake by the media, seeks to prove otherwise.\nAt Shang Tsung's island, Liu is attracted to Princess Kitana, Shao Kahn's adopted daughter. Aware that Kitana is a dangerous adversary because she is the rightful heir to Outworld and that she will attempt to ally herself with the Earth warriors, Tsung orders the creature Reptile to spy on her. Liu defeats his first opponent and Sonya gets her revenge on Kano by snapping his neck. Cage encounters and barely beats Scorpion. Liu engages in a brief duel with Kitana, who secretly offers him cryptic advice for his next battle. 
Liu's next opponent is Sub-Zero, whose defense seems untouched because of his freezing abilities, until Liu recalls Kitana's advice and uses it to kill Sub-Zero.\nPrince Goro enters the tournament and mercilessly crushes every opponent he faces. One of Cage's peers, Art Lean, is defeated by Goro as well and has his soul taken by Shang Tsung. Sonya worries that they may not win against Goro, but Raiden disagrees. He reveals their own fears and egos are preventing them from winning the tournament.\nDespite Sonya's warning, Cage comes to Tsung to request a fight with Goro. The sorcerer accepts on the condition that he be allowed to challenge any opponent of his choosing, anytime and anywhere he chooses. Raiden tries to intervene, but the conditions are agreed upon before he can do so. After Shang Tsung leaves, Raiden confronts Cage for what he has done in challenging Goro, but is impressed when Cage shows his awareness of the gravity of the tournament. Cage faces Goro and uses guile and the element of surprise to defeat the defending champion. Now desperate, Tsung takes Sonya hostage and takes her to Outworld, intending to fight her as his opponent. Knowing that his powers are ineffective there and that Sonya cannot defeat Tsung by herself, Raiden sends Liu and Cage into Outworld in order to rescue Sonya and challenge Tsung. In Outworld, Liu is attacked by Reptile, but eventually gains the upper hand and defeats him. Afterward, Kitana meets up with Cage and Liu, revealing to the pair the origins of both herself and Outworld. Kitana allies with them and helps them to infiltrate Tsung's castle.\nInside the castle tower, Shang Tsung challenges Sonya to fight him, claiming that her refusal to accept will result in the Earth realm forfeiting Mortal Kombat (this is, in fact, a lie on Shang's part). All seems lost for Earth realm until Kitana, Liu, and Cage appear. Kitana berates Tsung for his treachery to the Emperor as Sonya is set free. Tsung challenges Cage, but is counter-challenged by Liu. During the lengthy battle, Liu faces not only Tsung, but the souls that Tsung had forcibly taken in past tournaments. In a last-ditch attempt to take advantage, Tsung morphs into Chan. Seeing through the charade, Liu renews his determination and ultimately fires an energy bolt at the sorcerer, knocking him down and impaling him on a row of spikes. Tsung's death releases all of the captive souls, including Chan's. Before ascending to the afterlife, Chan tells Liu that he will remain with him in spirit until they are once again reunited, after Liu dies.\nThe warriors return to Earth realm, where a victory celebration is taking place at the Shaolin temple. The jubilation abruptly stops, however, when Shao Kahn's giant figure suddenly appears in the skies. When the Emperor declares that he has come for everyone's souls, the warriors take up fighting stances.", "title": "Mortal Kombat", "question_id": "f1fdefcf-1191-b5f9-4cae-4ce4d0a59da7", "question": "Who took Goro's soul?", "answers": ["Shang Tsung."], "no_answer": false }, "truncated_cells": [] } ] } ``` ## Image and audio samples Image and audio are represented by a URL that points to the file. 
### Images

Images are represented as a JSON object with three fields:

- `src`: URL to the image file
- `height`: height (in pixels) of the image
- `width`: width (in pixels) of the image

Here is an example of an image, from the first row of the cifar100 dataset:

```json
// https://datasets-server.huggingface.co/rows?dataset=cifar100&config=cifar100&split=train&offset=0&length=1
{
  "features": [
    {
      "feature_idx": 0,
      "name": "img",
      "type": {
        "_type": "Image"
      }
    },
    ...
  ],
  "rows": [
    {
      "row_idx": 0,
      "row": {
        "img": {
          "src": "https://datasets-server.huggingface.co/cached-assets/cifar100/--/main/--/cifar100/train/0/img/image.jpg",
          "height": 32,
          "width": 32
        },
        "fine_label": 19,
        "coarse_label": 11
      },
      "truncated_cells": []
    }
  ]
}
```

### Caching

The images and audio samples are cached by the Datasets Server temporarily. Internally we empty the cached assets of certain datasets from time to time based on usage. If a certain asset is not available, you may have to call `/rows` again.

## Truncated responses

Unlike `/first-rows`, there is currently no truncation in `/rows`. The `truncated_cells` field is still there but is always empty.
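Putting the parameters above together, here is a minimal sketch in Python of paging through a split with repeated `/rows` calls. The `iter_rows` helper is hypothetical and written only for this example; it reuses the `API_TOKEN` placeholder from the snippets above and relies solely on the documented `dataset`, `config`, `split`, `offset` and `length` parameters:

```python
import requests

API_URL = "https://datasets-server.huggingface.co/rows"
headers = {"Authorization": f"Bearer {API_TOKEN}"}  # same API_TOKEN placeholder as in the snippets above


def iter_rows(dataset, config, split, max_rows=200):
    """Yield row contents by walking the /rows endpoint slice by slice."""
    offset = 0
    while offset < max_rows:
        params = {
            "dataset": dataset,
            "config": config,
            "split": split,
            "offset": offset,
            "length": 100,  # maximum slice length per request
        }
        response = requests.get(API_URL, headers=headers, params=params)
        rows = response.json().get("rows", [])
        if not rows:
            break  # reached the end of the split
        for item in rows:
            yield item["row"]  # each item also carries "row_idx" and "truncated_cells"
        offset += len(rows)


for row in iter_rows("duorc", "SelfRC", "train"):
    print(row["question"])
```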
huggingface/datasets-server/blob/main/docs/source/rows.mdx
`@gradio/statustracker`

```html
<script>
	import {StatusTracker, Toast, Loader} from "@gradio/statustracker";
</script>
```

StatusTracker

```javascript
export let i18n: I18nFormatter;
export let eta: number | null = null;
export let queue = false;
export let queue_position: number | null;
export let queue_size: number | null;
export let status: "complete" | "pending" | "error" | "generating";
export let scroll_to_output = false;
export let timer = true;
export let show_progress: "full" | "minimal" | "hidden" = "full";
export let message: string | null = null;
export let progress: LoadingStatus["progress"] | null | undefined = null;
export let variant: "default" | "center" = "default";
export let loading_text = "Loading...";
export let absolute = true;
export let translucent = false;
export let border = false;
export let autoscroll: boolean;
```

Toast

```javascript
export let messages: ToastMessage[] = [];
```

Loader

```javascript
export let margin = true;
```
gradio-app/gradio/blob/main/js/statustracker/README.md
(Tensorflow) EfficientNet CondConv **EfficientNet** is a convolutional neural network architecture and scaling method that uniformly scales all dimensions of depth/width/resolution using a *compound coefficient*. Unlike conventional practice that arbitrary scales these factors, the EfficientNet scaling method uniformly scales network width, depth, and resolution with a set of fixed scaling coefficients. For example, if we want to use \\( 2^N \\) times more computational resources, then we can simply increase the network depth by \\( \alpha ^ N \\), width by \\( \beta ^ N \\), and image size by \\( \gamma ^ N \\), where \\( \alpha, \beta, \gamma \\) are constant coefficients determined by a small grid search on the original small model. EfficientNet uses a compound coefficient \\( \phi \\) to uniformly scales network width, depth, and resolution in a principled way. The compound scaling method is justified by the intuition that if the input image is bigger, then the network needs more layers to increase the receptive field and more channels to capture more fine-grained patterns on the bigger image. The base EfficientNet-B0 network is based on the inverted bottleneck residual blocks of [MobileNetV2](https://paperswithcode.com/method/mobilenetv2), in addition to squeeze-and-excitation blocks. This collection of models amends EfficientNet by adding [CondConv](https://paperswithcode.com/method/condconv) convolutions. The weights from this model were ported from [Tensorflow/TPU](https://github.com/tensorflow/tpu). ## How do I use this model on an image? To load a pretrained model: ```py >>> import timm >>> model = timm.create_model('tf_efficientnet_cc_b0_4e', pretrained=True) >>> model.eval() ``` To load and preprocess the image: ```py >>> import urllib >>> from PIL import Image >>> from timm.data import resolve_data_config >>> from timm.data.transforms_factory import create_transform >>> config = resolve_data_config({}, model=model) >>> transform = create_transform(**config) >>> url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg") >>> urllib.request.urlretrieve(url, filename) >>> img = Image.open(filename).convert('RGB') >>> tensor = transform(img).unsqueeze(0) # transform and add batch dimension ``` To get the model predictions: ```py >>> import torch >>> with torch.no_grad(): ... out = model(tensor) >>> probabilities = torch.nn.functional.softmax(out[0], dim=0) >>> print(probabilities.shape) >>> # prints: torch.Size([1000]) ``` To get the top-5 predictions class names: ```py >>> # Get imagenet class mappings >>> url, filename = ("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt", "imagenet_classes.txt") >>> urllib.request.urlretrieve(url, filename) >>> with open("imagenet_classes.txt", "r") as f: ... categories = [s.strip() for s in f.readlines()] >>> # Print top categories per image >>> top5_prob, top5_catid = torch.topk(probabilities, 5) >>> for i in range(top5_prob.size(0)): ... print(categories[top5_catid[i]], top5_prob[i].item()) >>> # prints class names and probabilities like: >>> # [('Samoyed', 0.6425196528434753), ('Pomeranian', 0.04062102362513542), ('keeshond', 0.03186424449086189), ('white wolf', 0.01739676296710968), ('Eskimo dog', 0.011717947199940681)] ``` Replace the model name with the variant you want to use, e.g. `tf_efficientnet_cc_b0_4e`. You can find the IDs in the model summaries at the top of this page. 
To extract image features with this model, follow the [timm feature extraction examples](../feature_extraction), just change the name of the model you want to use. ## How do I finetune this model? You can finetune any of the pre-trained models just by changing the classifier (the last layer). ```py >>> model = timm.create_model('tf_efficientnet_cc_b0_4e', pretrained=True, num_classes=NUM_FINETUNE_CLASSES) ``` To finetune on your own dataset, you have to write a training loop or adapt [timm's training script](https://github.com/rwightman/pytorch-image-models/blob/master/train.py) to use your dataset. ## How do I train this model? You can follow the [timm recipe scripts](../scripts) for training a new model afresh. ## Citation ```BibTeX @article{DBLP:journals/corr/abs-1904-04971, author = {Brandon Yang and Gabriel Bender and Quoc V. Le and Jiquan Ngiam}, title = {Soft Conditional Computation}, journal = {CoRR}, volume = {abs/1904.04971}, year = {2019}, url = {http://arxiv.org/abs/1904.04971}, archivePrefix = {arXiv}, eprint = {1904.04971}, timestamp = {Thu, 25 Apr 2019 13:55:01 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1904-04971.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <!-- Type: model-index Collections: - Name: TF EfficientNet CondConv Paper: Title: 'CondConv: Conditionally Parameterized Convolutions for Efficient Inference' URL: https://paperswithcode.com/paper/soft-conditional-computation Models: - Name: tf_efficientnet_cc_b0_4e In Collection: TF EfficientNet CondConv Metadata: FLOPs: 224153788 Parameters: 13310000 File Size: 53490940 Architecture: - 1x1 Convolution - Average Pooling - Batch Normalization - CondConv - Convolution - Dense Connections - Dropout - Inverted Residual Block - Squeeze-and-Excitation Block - Swish Tasks: - Image Classification Training Techniques: - AutoAugment - Label Smoothing - RMSProp - Stochastic Depth - Weight Decay Training Data: - ImageNet ID: tf_efficientnet_cc_b0_4e LR: 0.256 Epochs: 350 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 2048 Image Size: '224' Weight Decay: 1.0e-05 Interpolation: bicubic RMSProp Decay: 0.9 Label Smoothing: 0.1 BatchNorm Momentum: 0.99 Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1561 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_cc_b0_4e-4362b6b2.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 77.32% Top 5 Accuracy: 93.32% - Name: tf_efficientnet_cc_b0_8e In Collection: TF EfficientNet CondConv Metadata: FLOPs: 224158524 Parameters: 24010000 File Size: 96287616 Architecture: - 1x1 Convolution - Average Pooling - Batch Normalization - CondConv - Convolution - Dense Connections - Dropout - Inverted Residual Block - Squeeze-and-Excitation Block - Swish Tasks: - Image Classification Training Techniques: - AutoAugment - Label Smoothing - RMSProp - Stochastic Depth - Weight Decay Training Data: - ImageNet ID: tf_efficientnet_cc_b0_8e LR: 0.256 Epochs: 350 Crop Pct: '0.875' Momentum: 0.9 Batch Size: 2048 Image Size: '224' Weight Decay: 1.0e-05 Interpolation: bicubic RMSProp Decay: 0.9 Label Smoothing: 0.1 BatchNorm Momentum: 0.99 Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1572 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_cc_b0_8e-66184a25.pth Results: - 
Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 77.91% Top 5 Accuracy: 93.65% - Name: tf_efficientnet_cc_b1_8e In Collection: TF EfficientNet CondConv Metadata: FLOPs: 370427824 Parameters: 39720000 File Size: 159206198 Architecture: - 1x1 Convolution - Average Pooling - Batch Normalization - CondConv - Convolution - Dense Connections - Dropout - Inverted Residual Block - Squeeze-and-Excitation Block - Swish Tasks: - Image Classification Training Techniques: - AutoAugment - Label Smoothing - RMSProp - Stochastic Depth - Weight Decay Training Data: - ImageNet ID: tf_efficientnet_cc_b1_8e LR: 0.256 Epochs: 350 Crop Pct: '0.882' Momentum: 0.9 Batch Size: 2048 Image Size: '240' Weight Decay: 1.0e-05 Interpolation: bicubic RMSProp Decay: 0.9 Label Smoothing: 0.1 BatchNorm Momentum: 0.99 Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1584 Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_cc_b1_8e-f7c79ae1.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 79.33% Top 5 Accuracy: 94.37% -->
huggingface/pytorch-image-models/blob/main/hfdocs/source/models/tf-efficientnet-condconv.mdx
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# ERNIE

## Overview

ERNIE is a series of powerful models proposed by Baidu, especially strong on Chinese tasks, including [ERNIE1.0](https://arxiv.org/abs/1904.09223), [ERNIE2.0](https://ojs.aaai.org/index.php/AAAI/article/view/6428), [ERNIE3.0](https://arxiv.org/abs/2107.02137), [ERNIE-Gram](https://arxiv.org/abs/2010.12148), [ERNIE-health](https://arxiv.org/abs/2110.07244), etc.

These models are contributed by [nghuyong](https://huggingface.co/nghuyong) and the official code can be found in [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) (in PaddlePaddle).

### Usage example

Take `ernie-1.0-base-zh` as an example:

```Python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("nghuyong/ernie-1.0-base-zh")
model = AutoModel.from_pretrained("nghuyong/ernie-1.0-base-zh")
```

### Model checkpoints

|     Model Name      | Language |           Description           |
|:-------------------:|:--------:|:-------------------------------:|
|  ernie-1.0-base-zh  | Chinese  | Layer:12, Heads:12, Hidden:768  |
|  ernie-2.0-base-en  | English  | Layer:12, Heads:12, Hidden:768  |
| ernie-2.0-large-en  | English  | Layer:24, Heads:16, Hidden:1024 |
|  ernie-3.0-base-zh  | Chinese  | Layer:12, Heads:12, Hidden:768  |
| ernie-3.0-medium-zh | Chinese  |  Layer:6, Heads:12, Hidden:768  |
|  ernie-3.0-mini-zh  | Chinese  |  Layer:6, Heads:12, Hidden:384  |
| ernie-3.0-micro-zh  | Chinese  |  Layer:4, Heads:12, Hidden:384  |
|  ernie-3.0-nano-zh  | Chinese  |  Layer:4, Heads:12, Hidden:312  |
|   ernie-health-zh   | Chinese  | Layer:12, Heads:12, Hidden:768  |
|    ernie-gram-zh    | Chinese  | Layer:12, Heads:12, Hidden:768  |

You can find all the supported models on Hugging Face's model hub: [huggingface.co/nghuyong](https://huggingface.co/nghuyong), and model details in Paddle's official repos: [PaddleNLP](https://paddlenlp.readthedocs.io/zh/latest/model_zoo/transformers/ERNIE/contents.html) and [ERNIE](https://github.com/PaddlePaddle/ERNIE/blob/repro).
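As a quick sanity check of the loaded checkpoint, here is a minimal sketch using standard 🤗 Transformers calls (the Chinese example sentence is arbitrary):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("nghuyong/ernie-1.0-base-zh")
model = AutoModel.from_pretrained("nghuyong/ernie-1.0-base-zh")

# Encode a Chinese sentence and extract its contextual representations
inputs = tokenizer("欢迎使用 ERNIE 模型", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# For ernie-1.0-base-zh the hidden size is 768 (see the table above)
print(outputs.last_hidden_state.shape)  # torch.Size([1, sequence_length, 768])
```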
## Resources - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Causal language modeling task guide](../tasks/language_modeling) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice) ## ErnieConfig [[autodoc]] ErnieConfig - all ## Ernie specific outputs [[autodoc]] models.ernie.modeling_ernie.ErnieForPreTrainingOutput ## ErnieModel [[autodoc]] ErnieModel - forward ## ErnieForPreTraining [[autodoc]] ErnieForPreTraining - forward ## ErnieForCausalLM [[autodoc]] ErnieForCausalLM - forward ## ErnieForMaskedLM [[autodoc]] ErnieForMaskedLM - forward ## ErnieForNextSentencePrediction [[autodoc]] ErnieForNextSentencePrediction - forward ## ErnieForSequenceClassification [[autodoc]] ErnieForSequenceClassification - forward ## ErnieForMultipleChoice [[autodoc]] ErnieForMultipleChoice - forward ## ErnieForTokenClassification [[autodoc]] ErnieForTokenClassification - forward ## ErnieForQuestionAnswering [[autodoc]] ErnieForQuestionAnswering - forward
huggingface/transformers/blob/main/docs/source/en/model_doc/ernie.md
Designing Multi-Agent systems

For this section, you're going to watch this excellent introduction to multi-agent systems made by <a href="https://www.youtube.com/channel/UCq0imsn84ShAe9PBOFnoIrg"> Brian Douglas </a>.

<Youtube id="qgb0gyrpiGk" />

In this video, Brian talked about how to design multi-agent systems. He specifically took a multi-agent system of vacuum cleaners and asked: **how can they cooperate with each other**?

We have two possible ways to design this multi-agent reinforcement learning (MARL) system.

## Decentralized system

<figure>
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit10/decentralized.png" alt="Decentralized"/>
<figcaption>
Source: <a href="https://www.youtube.com/watch?v=qgb0gyrpiGk"> Introduction to Multi-Agent Reinforcement Learning </a>
</figcaption>
</figure>

In decentralized learning, **each agent is trained independently of the others**. In the example given, each vacuum learns to clean as many places as it can **without caring about what the other vacuums (agents) are doing**.

The benefit is that **since no information is shared between agents, these vacuums can be designed and trained like we train single agents**.

The idea here is that **our training agent will consider the other agents as part of the environment dynamics**, not as agents.

However, the big drawback of this technique is that it will **make the environment non-stationary**, since the underlying Markov decision process changes over time as the other agents are also interacting in the environment. And this is problematic for many reinforcement learning algorithms **that can't reach a global optimum in a non-stationary environment**.

## Centralized approach

<figure>
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit10/centralized.png" alt="Centralized"/>
<figcaption>
Source: <a href="https://www.youtube.com/watch?v=qgb0gyrpiGk"> Introduction to Multi-Agent Reinforcement Learning </a>
</figcaption>
</figure>

In this architecture, **we have a high-level process that collects the agents' experiences**: the experience buffer. And we'll use these experiences **to learn a common policy**.

For instance, in the vacuum cleaner example, the observation will be:

- The coverage map of the vacuums.
- The positions of all the vacuums.

We use that collective experience **to train a policy that will move all three robots in the most beneficial way as a whole**. So each robot is learning from the common experience. We now have a stationary environment, since all the agents are treated as a larger entity, and they know how the other agents' policies change (since those policies are the same as theirs).

To recap:

- In a *decentralized approach*, we **treat all agents independently without considering the existence of the other agents.**
  - In this case, all agents **consider the other agents as part of the environment**.
  - **The environment is non-stationary**, so there is no guarantee of convergence.

- In a *centralized approach*:
  - A **single policy is learned from all the agents**.
  - It takes the present state of the environment as input and outputs the joint action.
  - The reward is global.
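To make the contrast concrete, here is a tiny, self-contained toy sketch in Python (illustrative only: it uses a made-up greedy scoring rule rather than a real RL algorithm, and all names are invented for this example). The point is simply to show who learns from which experience and which reward in each setup:

```python
import random

N_AGENTS, N_CELLS, N_STEPS = 3, 12, 20  # toy "vacuum" world: each step an agent picks a cell to clean


class Policy:
    """Placeholder policy: scores cells by how rewarding they were to clean."""

    def __init__(self):
        self.scores = [0.0] * N_CELLS

    def act(self, dirty):
        candidates = [c for c in range(N_CELLS) if dirty[c]] or list(range(N_CELLS))
        return max(candidates, key=lambda c: self.scores[c] + random.random())

    def update(self, cell, reward):
        self.scores[cell] += reward


def run_decentralized():
    # Each agent keeps its own policy and learns from its own reward;
    # the other agents are just part of the (non-stationary) environment.
    policies = [Policy() for _ in range(N_AGENTS)]
    dirty = [True] * N_CELLS
    for _ in range(N_STEPS):
        actions = [p.act(dirty) for p in policies]
        for p, cell in zip(policies, actions):
            reward = 1.0 if dirty[cell] else -0.1  # individual reward
            p.update(cell, reward)
            dirty[cell] = False
    return sum(not d for d in dirty)


def run_centralized():
    # A single shared policy is updated from everyone's experience,
    # and the reward is global (how much the team cleaned this step).
    shared = Policy()
    dirty = [True] * N_CELLS
    for _ in range(N_STEPS):
        actions = [shared.act(dirty) for _ in range(N_AGENTS)]
        global_reward = sum(1.0 for c in set(actions) if dirty[c])
        for cell in actions:
            shared.update(cell, global_reward / N_AGENTS)
            dirty[cell] = False
    return sum(not d for d in dirty)


print("cells cleaned (decentralized):", run_decentralized())
print("cells cleaned (centralized):  ", run_centralized())
```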
huggingface/deep-rl-class/blob/main/units/en/unit7/multi-agent-setting.mdx
!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # Export a model to TFLite with optimum.exporters.tflite ## Summary Exporting a model to TFLite is as simple as ```bash optimum-cli export tflite --model bert-base-uncased --sequence_length 128 bert_tflite/ ``` Check out the help for more options: ```bash optimum-cli export tflite --help ``` ## Exporting a model to TFLite using the CLI To export a 🤗 Transformers model to TFLite, you'll first need to install some extra dependencies: ```bash pip install optimum[exporters-tf] ``` The Optimum TFLite export can be used through Optimum command-line. As only static input shapes are supported for now, they need to be specified during the export. ```bash optimum-cli export tflite --help usage: optimum-cli <command> [<args>] export tflite [-h] -m MODEL [--task TASK] [--atol ATOL] [--pad_token_id PAD_TOKEN_ID] [--cache_dir CACHE_DIR] [--trust-remote-code] [--batch_size BATCH_SIZE] [--sequence_length SEQUENCE_LENGTH] [--num_choices NUM_CHOICES] [--width WIDTH] [--height HEIGHT] [--num_channels NUM_CHANNELS] [--feature_size FEATURE_SIZE] [--nb_max_frames NB_MAX_FRAMES] [--audio_sequence_length AUDIO_SEQUENCE_LENGTH] output optional arguments: -h, --help show this help message and exit Required arguments: -m MODEL, --model MODEL Model ID on huggingface.co or path on disk to load model from. output Path indicating the directory where to store generated TFLite model. Optional arguments: --task TASK The task to export the model for. If not specified, the task will be auto-inferred based on the model. Available tasks depend on the model, but are among: ['default', 'fill-mask', 'text-generation', 'text2text-generation', 'text-classification', 'token-classification', 'multiple-choice', 'object-detection', 'question-answering', 'image-classification', 'image-segmentation', 'masked-im', 'semantic- segmentation', 'automatic-speech-recognition', 'audio-classification', 'audio-frame-classification', 'automatic-speech-recognition', 'audio-xvector', 'vision2seq- lm', 'stable-diffusion', 'zero-shot-object-detection']. For decoder models, use `xxx-with-past` to export the model using past key values in the decoder. --atol ATOL If specified, the absolute difference tolerance when validating the model. Otherwise, the default atol for the model will be used. --pad_token_id PAD_TOKEN_ID This is needed by some models, for some tasks. If not provided, will attempt to use the tokenizer to guess it. --cache_dir CACHE_DIR Path indicating where to store cache. --trust-remote-code Allow to use custom code for the modeling hosted in the model repository. This option should only be set for repositories you trust and in which you have read the code, as it will execute on your local machine arbitrary code present in the model repository. Input shapes: --batch_size BATCH_SIZE Batch size that the TFLite exported model will be able to take as input. --sequence_length SEQUENCE_LENGTH Sequence length that the TFLite exported model will be able to take as input. 
--num_choices NUM_CHOICES Only for the multiple-choice task. Num choices that the TFLite exported model will be able to take as input. --width WIDTH Vision tasks only. Image width that the TFLite exported model will be able to take as input. --height HEIGHT Vision tasks only. Image height that the TFLite exported model will be able to take as input. --num_channels NUM_CHANNELS Vision tasks only. Number of channels used to represent the image that the TFLite exported model will be able to take as input. (GREY = 1, RGB = 3, ARGB = 4) --feature_size FEATURE_SIZE Audio tasks only. Feature dimension of the extracted features by the feature extractor that the TFLite exported model will be able to take as input. --nb_max_frames NB_MAX_FRAMES Audio tasks only. Maximum number of frames that the TFLite exported model will be able to take as input. --audio_sequence_length AUDIO_SEQUENCE_LENGTH Audio tasks only. Audio sequence length that the TFLite exported model will be able to take as input. ```
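Once the export has finished, the resulting file can be run with the regular TensorFlow Lite interpreter. Below is a minimal sketch assuming the `bert-base-uncased` export from the summary above produced a `bert_tflite/model.tflite` file with `--sequence_length 128`; the exact file name, batch size and input tensor names depend on your export, so check what `get_input_details()` reports before relying on the name matching below:

```python
import tensorflow as tf
from transformers import AutoTokenizer

# Load the exported model with the standard TFLite runtime
interpreter = tf.lite.Interpreter(model_path="bert_tflite/model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
print([d["name"] for d in input_details])  # inspect the expected input tensors

# TFLite models have static shapes, so pad to the sequence length used at export time
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoded = tokenizer(
    "TFLite models need static input shapes.",
    padding="max_length",
    max_length=128,
    truncation=True,
    return_tensors="np",
)

# Feed each input tensor by matching its name against the tokenizer outputs
for detail in input_details:
    for key, value in encoded.items():
        if key in detail["name"]:
            interpreter.set_tensor(detail["index"], value.astype(detail["dtype"]))

interpreter.invoke()
print(interpreter.get_tensor(output_details[0]["index"]).shape)
```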
huggingface/optimum/blob/main/docs/source/exporters/tflite/usage_guides/export_a_model.mdx
n this video we will see together what is the purpose of training a tokenizer, what are the key steps to follow and what is the easiest way to do it. You will ask yourself the question "Should I train a new tokenizer?" when you plan to train a new model from scratch. A trained tokenizer would not be suitable for your corpus if your corpus is in a different language, uses new characters such as accents or upper cased letters, has a specific vocabulary, for example medical or legal, or uses a different style, a language from another century for instance. For example, if I take the tokenizer trained for the bert-base-uncased model and ignore its normalization step then we can see that the tokenization operation on the English sentence "here is a sentence adapted to our tokenizer" produces a rather satisfactory list of tokens in the sense that this sentence of 8 words is tokenized into 9 tokens. On the other hand if we use this same tokenizer on a sentence in Bengali, we see that either a word is divided into many sub tokens or that the tokenizer does not know one of the unicode characters and returns only an unknown token. The fact that a "common" word is split into many tokens can be problematic because language models can only handle a sequence of tokens of limited length. A tokenizer that excessively splits your initial text may even impact the performance of your model. Unknown tokens are also problematic because the model will not be able to extract any information from the "unknown" part of the text. In this other example, we can see that the tokenizer replaces words containing characters with accents and capital letters with unknown tokens. Finally, if we use again this tokenizer to tokenize medical vocabulary we see again that a single word is divided into many sub tokens: 4 for "paracetamol" and "pharyngitis". Most of the tokenizers used by the current state of the art language models need to be trained on a corpus that is similar to the one used to pre-train the language model. This training consists in learning rules to divide the text into tokens and the way to learn these rules and use them depends on the chosen tokenizer model. Thus, to train a new tokenizer it is first necessary to build a training corpus composed of raw texts. Then, you have to choose an architecture for your tokenizer. Here there are two options: the simplest is to reuse the same architecture as the one of a tokenizer used by another model already trained,otherwise it is also possible to completely design your tokenizer but it requires more experience and attention. Once the architecture is chosen, one can thus train this tokenizer on your constituted corpus. Finally, the last thing that you need to do is to save the learned rules to be able to use this tokenizer which is now ready to be used. Let's take an example: let's say you want to train a GPT-2 model on Python code. Even if the python code is in English this type of text is very specific and deserves a tokenizer trained on it - to convince you of this we will see at the end the difference produced on an example. For that we are going to use the method "train_new_from_iterator" that all the fast tokenizers of the library have and thus in particular GPT2TokenizerFast. This is the simplest method in our case to have a tokenizer adapted to python code. Remember, the first step is to gather a training corpus. We will use a subpart of the CodeSearchNet dataset containing only python functions from open source libraries on Github. 
It's good timing, this dataset is known by the datasets library and we can load it in two lines of code. Then, as the "train_new_from_iterator" method expects an iterator of lists of texts, we create the "get_training_corpus" function which will return an iterator. Now that we have our iterator on our python functions corpus, we can load the gpt-2 tokenizer architecture. Here "old_tokenizer" is not adapted to our corpus, but we only need one more line to train it on our new corpus. An argument that is common to most of the tokenization algorithms used at the moment is the size of the vocabulary; we choose here the value 52 thousand. Finally, once the training is finished, we just have to save our new tokenizer locally or send it to the hub to be able to reuse it very easily afterwards. Lastly, let's see together on an example whether it was useful to re-train a tokenizer similar to the gpt2 one. With the original tokenizer of GPT-2 we see that all spaces are isolated and the method name "randn", relatively common in python code, is split in two. With our new tokenizer, single and double indentations have been learned and the method "randn" is tokenized into one token. And with that, you now know how to train your very own tokenizers!
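As a rough, written-out version of the steps described in the video (a sketch only: the exact dataset configuration and the `whole_func_string` column name are assumptions to adapt to your own corpus):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# 1. Build the training corpus: Python functions from CodeSearchNet
raw_datasets = load_dataset("code_search_net", "python")


def get_training_corpus():
    # train_new_from_iterator expects an iterator over batches (lists) of texts
    dataset = raw_datasets["train"]
    for start_idx in range(0, len(dataset), 1000):
        yield dataset[start_idx : start_idx + 1000]["whole_func_string"]


# 2. Reuse the GPT-2 tokenizer architecture and train it on the new corpus
old_tokenizer = AutoTokenizer.from_pretrained("gpt2")
new_tokenizer = old_tokenizer.train_new_from_iterator(get_training_corpus(), vocab_size=52000)

# 3. Save the learned rules locally (or push them to the Hub) to reuse the tokenizer later
new_tokenizer.save_pretrained("code-search-net-tokenizer")
```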
huggingface/course/blob/main/subtitles/en/raw/chapter6/02_train-new-tokenizer.md
!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # Community pipelines [[open-in-colab]] <Tip> For more context about the design choices behind community pipelines, please have a look at [this issue](https://github.com/huggingface/diffusers/issues/841). </Tip> Community pipelines allow you to get creative and build your own unique pipelines to share with the community. You can find all community pipelines in the [diffusers/examples/community](https://github.com/huggingface/diffusers/tree/main/examples/community) folder along with inference and training examples for how to use them. This guide showcases some of the community pipelines and hopefully it'll inspire you to create your own (feel free to open a PR with your own pipeline and we will merge it!). To load a community pipeline, use the `custom_pipeline` argument in [`DiffusionPipeline`] to specify one of the files in [diffusers/examples/community](https://github.com/huggingface/diffusers/tree/main/examples/community): ```py from diffusers import DiffusionPipeline pipe = DiffusionPipeline.from_pretrained( "CompVis/stable-diffusion-v1-4", custom_pipeline="filename_in_the_community_folder", use_safetensors=True ) ``` If a community pipeline doesn't work as expected, please open a GitHub issue and mention the author. You can learn more about community pipelines in the how to [load community pipelines](custom_pipeline_overview) and how to [contribute a community pipeline](contribute_pipeline) guides. ## Multilingual Stable Diffusion The multilingual Stable Diffusion pipeline uses a pretrained [XLM-RoBERTa](https://huggingface.co/papluca/xlm-roberta-base-language-detection) to identify a language and the [mBART-large-50](https://huggingface.co/facebook/mbart-large-50-many-to-one-mmt) model to handle the translation. This allows you to generate images from text in 20 languages. 
```py import torch from diffusers import DiffusionPipeline from diffusers.utils import make_image_grid from transformers import ( pipeline, MBart50TokenizerFast, MBartForConditionalGeneration, ) device = "cuda" if torch.cuda.is_available() else "cpu" device_dict = {"cuda": 0, "cpu": -1} # add language detection pipeline language_detection_model_ckpt = "papluca/xlm-roberta-base-language-detection" language_detection_pipeline = pipeline("text-classification", model=language_detection_model_ckpt, device=device_dict[device]) # add model for language translation translation_tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-one-mmt") translation_model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-one-mmt").to(device) diffuser_pipeline = DiffusionPipeline.from_pretrained( "CompVis/stable-diffusion-v1-4", custom_pipeline="multilingual_stable_diffusion", detection_pipeline=language_detection_pipeline, translation_model=translation_model, translation_tokenizer=translation_tokenizer, torch_dtype=torch.float16, ) diffuser_pipeline.enable_attention_slicing() diffuser_pipeline = diffuser_pipeline.to(device) prompt = ["a photograph of an astronaut riding a horse", "Una casa en la playa", "Ein Hund, der Orange isst", "Un restaurant parisien"] images = diffuser_pipeline(prompt).images make_image_grid(images, rows=2, cols=2) ``` <div class="flex justify-center"> <img src="https://user-images.githubusercontent.com/4313860/198328706-295824a4-9856-4ce5-8e66-278ceb42fd29.png"/> </div> ## MagicMix [MagicMix](https://huggingface.co/papers/2210.16056) is a pipeline that can mix an image and text prompt to generate a new image that preserves the image structure. The `mix_factor` determines how much influence the prompt has on the layout generation, `kmin` controls the number of steps during the content generation process, and `kmax` determines how much information is kept in the layout of the original image. ```py from diffusers import DiffusionPipeline, DDIMScheduler from diffusers.utils import load_image, make_image_grid pipeline = DiffusionPipeline.from_pretrained( "CompVis/stable-diffusion-v1-4", custom_pipeline="magic_mix", scheduler=DDIMScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler"), ).to('cuda') img = load_image("https://user-images.githubusercontent.com/59410571/209578593-141467c7-d831-4792-8b9a-b17dc5e47816.jpg") mix_img = pipeline(img, prompt="bed", kmin=0.3, kmax=0.5, mix_factor=0.5) make_image_grid([img, mix_img], rows=1, cols=2) ``` <div class="flex gap-4"> <div> <img class="rounded-xl" src="https://user-images.githubusercontent.com/59410571/209578593-141467c7-d831-4792-8b9a-b17dc5e47816.jpg" /> <figcaption class="mt-2 text-center text-sm text-gray-500">original image</figcaption> </div> <div> <img class="rounded-xl" src="https://user-images.githubusercontent.com/59410571/209578602-70f323fa-05b7-4dd6-b055-e40683e37914.jpg" /> <figcaption class="mt-2 text-center text-sm text-gray-500">image and text prompt mix</figcaption> </div> </div>
huggingface/diffusers/blob/main/docs/source/en/using-diffusers/custom_pipeline_examples.md
---
title: 'Introducing Snowball Fight ☃️, our first ML-Agents environment'
thumbnail: /blog/assets/39_introducing_snowball_fight/thumbnail.png
authors:
- user: ThomasSimonini
---

# Introducing Snowball Fight ☃️, our First ML-Agents Environment

We're excited to share our **first custom Deep Reinforcement Learning environment**: Snowball Fight 1vs1 🎉.

![gif](assets/39_introducing_snowball_fight/snowballfight.gif)

Snowball Fight is a game made with Unity ML-Agents, where you shoot snowballs against a Deep Reinforcement Learning agent. The game is [**hosted on Hugging Face Spaces**](https://hf.co/spaces/launch).

👉 [You can play it online here](https://huggingface.co/spaces/ThomasSimonini/SnowballFight)

In this post, we'll cover **the ecosystem we are working on for Deep Reinforcement Learning researchers and enthusiasts that use Unity ML-Agents**.

## Unity ML-Agents at Hugging Face

The [Unity Machine Learning Agents Toolkit](https://github.com/Unity-Technologies/ml-agents) is an open source library that allows you to build games and simulations with the Unity game engine to **serve as environments for training intelligent agents**.

With this first step, our goal is to build an ecosystem on Hugging Face for Deep Reinforcement Learning researchers and enthusiasts that use ML-Agents, with three features.

1. **Building and sharing custom environments.** We are developing and sharing exciting environments to experiment with new problems: snowball fights, racing, puzzles... All of them will be open source and hosted on the Hugging Face Hub.
2. **Allowing you to easily host your environments, save models and share them** on the Hugging Face Hub. We have already published the Snowball Fight training environment [here](https://huggingface.co/ThomasSimonini/ML-Agents-SnowballFight-1vs1), but there will be more to come!
3. **You can now easily host your demos on Spaces** and showcase your results quickly with the rest of the ecosystem.

## Be part of the conversation: join our Discord server!

If you're using ML-Agents or interested in Deep Reinforcement Learning and want to be part of the conversation, **[you can join our Discord server](https://discord.gg/YRAq8fMnUG)**.

We just added two channels (and we'll add more in the future):
- Deep Reinforcement Learning
- ML-Agents

[Our Discord](https://discord.gg/YRAq8fMnUG) is the place where you can exchange ideas about Hugging Face, NLP, Deep RL, and more! It's also where we'll announce all our new environments and features in the future.

## What's next?

In the coming weeks and months, we will be extending the ecosystem by:

- Writing some **technical tutorials on ML-Agents**.
- Working on a **Snowball Fight 2vs2 version**, where the agents will collaborate in teams using [MA-POCA, a new Deep Reinforcement Learning algorithm](https://blog.unity.com/technology/ml-agents-plays-dodgeball) that trains cooperative behaviors within a team.

![screenshot2vs2](assets/39_introducing_snowball_fight/screenshot2vs2.png)

- And we're building **new custom environments that will be hosted on Hugging Face**.

## Conclusion

We're excited to see what you're working on with ML-Agents and how we can build features and tools **that help to empower your work**.

Don't forget to [join our Discord server](https://discord.gg/YRAq8fMnUG) to be alerted of new features.
huggingface/blog/blob/main/snowball-fight.md
ECA-ResNet An **ECA ResNet** is a variant on a [ResNet](https://paperswithcode.com/method/resnet) that utilises an [Efficient Channel Attention module](https://paperswithcode.com/method/efficient-channel-attention). Efficient Channel Attention is an architectural unit based on [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) that reduces model complexity without dimensionality reduction. ## How do I use this model on an image? To load a pretrained model: ```py >>> import timm >>> model = timm.create_model('ecaresnet101d', pretrained=True) >>> model.eval() ``` To load and preprocess the image: ```py >>> import urllib >>> from PIL import Image >>> from timm.data import resolve_data_config >>> from timm.data.transforms_factory import create_transform >>> config = resolve_data_config({}, model=model) >>> transform = create_transform(**config) >>> url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg") >>> urllib.request.urlretrieve(url, filename) >>> img = Image.open(filename).convert('RGB') >>> tensor = transform(img).unsqueeze(0) # transform and add batch dimension ``` To get the model predictions: ```py >>> import torch >>> with torch.no_grad(): ... out = model(tensor) >>> probabilities = torch.nn.functional.softmax(out[0], dim=0) >>> print(probabilities.shape) >>> # prints: torch.Size([1000]) ``` To get the top-5 predictions class names: ```py >>> # Get imagenet class mappings >>> url, filename = ("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt", "imagenet_classes.txt") >>> urllib.request.urlretrieve(url, filename) >>> with open("imagenet_classes.txt", "r") as f: ... categories = [s.strip() for s in f.readlines()] >>> # Print top categories per image >>> top5_prob, top5_catid = torch.topk(probabilities, 5) >>> for i in range(top5_prob.size(0)): ... print(categories[top5_catid[i]], top5_prob[i].item()) >>> # prints class names and probabilities like: >>> # [('Samoyed', 0.6425196528434753), ('Pomeranian', 0.04062102362513542), ('keeshond', 0.03186424449086189), ('white wolf', 0.01739676296710968), ('Eskimo dog', 0.011717947199940681)] ``` Replace the model name with the variant you want to use, e.g. `ecaresnet101d`. You can find the IDs in the model summaries at the top of this page. To extract image features with this model, follow the [timm feature extraction examples](../feature_extraction), just change the name of the model you want to use. ## How do I finetune this model? You can finetune any of the pre-trained models just by changing the classifier (the last layer). ```py >>> model = timm.create_model('ecaresnet101d', pretrained=True, num_classes=NUM_FINETUNE_CLASSES) ``` To finetune on your own dataset, you have to write a training loop or adapt [timm's training script](https://github.com/rwightman/pytorch-image-models/blob/master/train.py) to use your dataset. ## How do I train this model? You can follow the [timm recipe scripts](../scripts) for training a new model afresh. 
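If you'd rather write your own loop than adapt the training script, a minimal fine-tuning sketch could look like the following. This is only an illustrative sketch, not an official recipe: the random tensors stand in for your own preprocessed dataset, and the ten-class head is an arbitrary assumption.

```py
>>> import timm
>>> import torch
>>> from torch.utils.data import DataLoader, TensorDataset

>>> # Random stand-in data: 8 RGB "images" at 224x224 with 10 made-up classes
>>> dataset = TensorDataset(torch.randn(8, 3, 224, 224), torch.randint(0, 10, (8,)))
>>> loader = DataLoader(dataset, batch_size=4)

>>> model = timm.create_model('ecaresnet101d', pretrained=True, num_classes=10)
>>> model.train()
>>> optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=1e-4)
>>> criterion = torch.nn.CrossEntropyLoss()

>>> for images, labels in loader:
...     optimizer.zero_grad()
...     loss = criterion(model(images), labels)
...     loss.backward()
...     optimizer.step()
```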
## Citation ```BibTeX @misc{wang2020ecanet, title={ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks}, author={Qilong Wang and Banggu Wu and Pengfei Zhu and Peihua Li and Wangmeng Zuo and Qinghua Hu}, year={2020}, eprint={1910.03151}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` <!-- Type: model-index Collections: - Name: ECAResNet Paper: Title: 'ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks' URL: https://paperswithcode.com/paper/eca-net-efficient-channel-attention-for-deep Models: - Name: ecaresnet101d In Collection: ECAResNet Metadata: FLOPs: 10377193728 Parameters: 44570000 File Size: 178815067 Architecture: - 1x1 Convolution - Batch Normalization - Bottleneck Residual Block - Convolution - Efficient Channel Attention - Global Average Pooling - Max Pooling - ReLU - Residual Block - Residual Connection - Softmax - Squeeze-and-Excitation Block Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 4x RTX 2080Ti GPUs ID: ecaresnet101d LR: 0.1 Epochs: 100 Layers: 101 Crop Pct: '0.875' Batch Size: 256 Image Size: '224' Weight Decay: 0.0001 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/resnet.py#L1087 Weights: https://imvl-automl-sh.oss-cn-shanghai.aliyuncs.com/darts/hyperml/hyperml/job_45402/outputs/ECAResNet101D_281c5844.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 82.18% Top 5 Accuracy: 96.06% - Name: ecaresnet101d_pruned In Collection: ECAResNet Metadata: FLOPs: 4463972081 Parameters: 24880000 File Size: 99852736 Architecture: - 1x1 Convolution - Batch Normalization - Bottleneck Residual Block - Convolution - Efficient Channel Attention - Global Average Pooling - Max Pooling - ReLU - Residual Block - Residual Connection - Softmax - Squeeze-and-Excitation Block Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet ID: ecaresnet101d_pruned Layers: 101 Crop Pct: '0.875' Image Size: '224' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/resnet.py#L1097 Weights: https://imvl-automl-sh.oss-cn-shanghai.aliyuncs.com/darts/hyperml/hyperml/job_45610/outputs/ECAResNet101D_P_75a3370e.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 80.82% Top 5 Accuracy: 95.64% - Name: ecaresnet50d In Collection: ECAResNet Metadata: FLOPs: 5591090432 Parameters: 25580000 File Size: 102579290 Architecture: - 1x1 Convolution - Batch Normalization - Bottleneck Residual Block - Convolution - Efficient Channel Attention - Global Average Pooling - Max Pooling - ReLU - Residual Block - Residual Connection - Softmax - Squeeze-and-Excitation Block Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet Training Resources: 4x RTX 2080Ti GPUs ID: ecaresnet50d LR: 0.1 Epochs: 100 Layers: 50 Crop Pct: '0.875' Batch Size: 256 Image Size: '224' Weight Decay: 0.0001 Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/resnet.py#L1045 Weights: https://imvl-automl-sh.oss-cn-shanghai.aliyuncs.com/darts/hyperml/hyperml/job_45402/outputs/ECAResNet50D_833caf58.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 80.61% Top 5 
Accuracy: 95.31% - Name: ecaresnet50d_pruned In Collection: ECAResNet Metadata: FLOPs: 3250730657 Parameters: 19940000 File Size: 79990436 Architecture: - 1x1 Convolution - Batch Normalization - Bottleneck Residual Block - Convolution - Efficient Channel Attention - Global Average Pooling - Max Pooling - ReLU - Residual Block - Residual Connection - Softmax - Squeeze-and-Excitation Block Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet ID: ecaresnet50d_pruned Layers: 50 Crop Pct: '0.875' Image Size: '224' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/resnet.py#L1055 Weights: https://imvl-automl-sh.oss-cn-shanghai.aliyuncs.com/darts/hyperml/hyperml/job_45899/outputs/ECAResNet50D_P_9c67f710.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 79.71% Top 5 Accuracy: 94.88% - Name: ecaresnetlight In Collection: ECAResNet Metadata: FLOPs: 5276118784 Parameters: 30160000 File Size: 120956612 Architecture: - 1x1 Convolution - Batch Normalization - Bottleneck Residual Block - Convolution - Efficient Channel Attention - Global Average Pooling - Max Pooling - ReLU - Residual Block - Residual Connection - Softmax - Squeeze-and-Excitation Block Tasks: - Image Classification Training Techniques: - SGD with Momentum - Weight Decay Training Data: - ImageNet ID: ecaresnetlight Crop Pct: '0.875' Image Size: '224' Interpolation: bicubic Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/resnet.py#L1077 Weights: https://imvl-automl-sh.oss-cn-shanghai.aliyuncs.com/darts/hyperml/hyperml/job_45402/outputs/ECAResNetLight_4f34b35b.pth Results: - Task: Image Classification Dataset: ImageNet Metrics: Top 1 Accuracy: 80.46% Top 5 Accuracy: 95.25% -->
huggingface/pytorch-image-models/blob/main/hfdocs/source/models/ecaresnet.mdx
@gradio/simpletextbox ## 0.1.6 ### Patch Changes - Updated dependencies [[`828fb9e`](https://github.com/gradio-app/gradio/commit/828fb9e6ce15b6ea08318675a2361117596a1b5d), [`73268ee`](https://github.com/gradio-app/gradio/commit/73268ee2e39f23ebdd1e927cb49b8d79c4b9a144)]: - @gradio/[email protected] - @gradio/[email protected] ## 0.1.5 ### Patch Changes - Updated dependencies [[`053bec9`](https://github.com/gradio-app/gradio/commit/053bec98be1127e083414024e02cf0bebb0b5142), [`4d1cbbc`](https://github.com/gradio-app/gradio/commit/4d1cbbcf30833ef1de2d2d2710c7492a379a9a00)]: - @gradio/[email protected] - @gradio/[email protected] - @gradio/[email protected] ## 0.1.4 ### Patch Changes - Updated dependencies [[`206af31`](https://github.com/gradio-app/gradio/commit/206af31d7c1a31013364a44e9b40cf8df304ba50)]: - @gradio/[email protected] - @gradio/[email protected] - @gradio/[email protected] ## 0.1.3 ### Patch Changes - Updated dependencies [[`9caddc17b`](https://github.com/gradio-app/gradio/commit/9caddc17b1dea8da1af8ba724c6a5eab04ce0ed8)]: - @gradio/[email protected] - @gradio/[email protected] - @gradio/[email protected] ## 0.1.2 ### Patch Changes - Updated dependencies [[`f816136a0`](https://github.com/gradio-app/gradio/commit/f816136a039fa6011be9c4fb14f573e4050a681a)]: - @gradio/[email protected] - @gradio/[email protected] - @gradio/[email protected] ## 0.1.1 ### Patch Changes - Updated dependencies [[`3cdeabc68`](https://github.com/gradio-app/gradio/commit/3cdeabc6843000310e1a9e1d17190ecbf3bbc780), [`fad92c29d`](https://github.com/gradio-app/gradio/commit/fad92c29dc1f5cd84341aae417c495b33e01245f)]: - @gradio/[email protected] - @gradio/[email protected] ## 0.1.0 ### Patch Changes - Updated dependencies [[`287fe6782`](https://github.com/gradio-app/gradio/commit/287fe6782825479513e79a5cf0ba0fbfe51443d7), [`287fe6782`](https://github.com/gradio-app/gradio/commit/287fe6782825479513e79a5cf0ba0fbfe51443d7), [`287fe6782`](https://github.com/gradio-app/gradio/commit/287fe6782825479513e79a5cf0ba0fbfe51443d7)]: - @gradio/[email protected] - @gradio/[email protected] - @gradio/[email protected] - @gradio/[email protected] ## 0.1.0-beta.2 ### Features - [#6149](https://github.com/gradio-app/gradio/pull/6149) [`90318b1dd`](https://github.com/gradio-app/gradio/commit/90318b1dd118ae08a695a50e7c556226234ab6dc) - swap `mode` on the frontned to `interactive` to match the backend. Thanks [@pngwn](https://github.com/pngwn)! ## 0.1.0-beta.1 ### Features - [#5990](https://github.com/gradio-app/gradio/pull/5990) [`85056de5c`](https://github.com/gradio-app/gradio/commit/85056de5cd4e90a10cbfcefab74037dbc622b26b) - V4: Simple textbox. Thanks [@freddyaboulton](https://github.com/freddyaboulton)!
gradio-app/gradio/blob/main/js/simpletextbox/CHANGELOG.md
# Gradio Demo: chatbot_consecutive

```
!pip install -q gradio
```

```
import gradio as gr
import random
import time

with gr.Blocks() as demo:
    chatbot = gr.Chatbot()
    msg = gr.Textbox()
    clear = gr.Button("Clear")

    def user(user_message, history):
        return "", history + [[user_message, None]]

    def bot(history):
        bot_message = random.choice(["How are you?", "I love you", "I'm very hungry"])
        time.sleep(2)
        history[-1][1] = bot_message
        return history

    msg.submit(user, [msg, chatbot], [msg, chatbot], queue=False).then(
        bot, chatbot, chatbot
    )
    clear.click(lambda: None, None, chatbot, queue=False)

demo.queue()
if __name__ == "__main__":
    demo.launch()
```
gradio-app/gradio/blob/main/demo/chatbot_consecutive/run.ipynb
# Gradio Demo: file_explorer

```
!pip install -q gradio
```

```
import gradio as gr
from pathlib import Path

current_file_path = Path(__file__).resolve()
relative_path = "path/to/file"
absolute_path = (current_file_path.parent / ".." / ".." / "gradio").resolve()


def get_file_content(file):
    return (file,)


with gr.Blocks() as demo:
    gr.Markdown('### `FileExplorer` to `FileExplorer` -- `file_count="multiple"`')
    submit_btn = gr.Button("Select")
    with gr.Row():
        file = gr.FileExplorer(
            glob="**/{components,themes}/*.py",
            # value=["themes/utils"],
            root=absolute_path,
            ignore_glob="**/__init__.py",
        )

        file2 = gr.FileExplorer(
            glob="**/{components,themes}/**/*.py",
            root=absolute_path,
            ignore_glob="**/__init__.py",
        )
    submit_btn.click(lambda x: x, file, file2)

    gr.Markdown("---")
    gr.Markdown('### `FileExplorer` to `Code` -- `file_count="single"`')
    with gr.Group():
        with gr.Row():
            file_3 = gr.FileExplorer(
                scale=1,
                glob="**/{components,themes}/**/*.py",
                value=["themes/utils"],
                file_count="single",
                root=absolute_path,
                ignore_glob="**/__init__.py",
                elem_id="file",
            )

            code = gr.Code(lines=30, scale=2, language="python")

    file_3.change(get_file_content, file_3, code)

if __name__ == "__main__":
    demo.launch()
```
gradio-app/gradio/blob/main/demo/file_explorer/run.ipynb
<!--- Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->

# Question answering

The script [`run_qa.py`](https://github.com/huggingface/optimum/blob/main/examples/onnxruntime/optimization/question-answering/run_qa.py) allows us to apply graph optimizations using [ONNX Runtime](https://github.com/microsoft/onnxruntime) for question answering tasks.

Note that if your dataset contains samples with no possible answers (like SQuAD version 2), you need to pass along the flag `--version_2_with_negative`.

The following example applies graph optimizations on a DistilBERT fine-tuned on the SQuAD1.0 dataset. Here the optimization level is set to 1, enabling basic optimizations such as redundant node elimination and constant folding. A higher optimization level will result in a hardware-dependent optimized graph.

```bash
python run_qa.py \
    --model_name_or_path distilbert-base-uncased-distilled-squad \
    --dataset_name squad \
    --optimization_level 1 \
    --do_eval \
    --output_dir /tmp/optimized_distilbert_squad
```

In order to apply dynamic or static quantization, `quantization_approach` must be set to respectively `dynamic` or `static`.
huggingface/optimum/blob/main/examples/onnxruntime/optimization/question-answering/README.md
# Visualizer

<tokenizerslangcontent>
<python>
## Annotation

[[autodoc]] tokenizers.tools.Annotation

## EncodingVisualizer

[[autodoc]] tokenizers.tools.EncodingVisualizer
    - __call__
</python>
<rust>
The Rust API Reference is available directly on the [Docs.rs](https://docs.rs/tokenizers/latest/tokenizers/) website.
</rust>
<node>
The node API has not been documented yet.
</node>
</tokenizerslangcontent>
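For a rough idea of how these classes fit together, here is a hypothetical usage sketch intended for a notebook environment; the tokenizer checkpoint and the annotation label are arbitrary examples, and the exact signatures should be checked against the API reference above.

```python
from tokenizers import Tokenizer
from tokenizers.tools import Annotation, EncodingVisualizer

# Load any tokenizers-compatible tokenizer and wrap it in the visualizer
tokenizer = Tokenizer.from_pretrained("bert-base-uncased")
visualizer = EncodingVisualizer(tokenizer)

# Optional annotations highlight character spans with a label
annotations = [Annotation(start=0, end=5, label="greeting")]

# Calling the visualizer renders how the text was split into tokens
visualizer("Hello world, this is a tokenization example.", annotations=annotations)
```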
huggingface/tokenizers/blob/main/docs/source-doc-builder/api/visualizer.mdx
-- title: "Deep Learning with Proteins" thumbnail: /blog/assets/119_deep_learning_with_proteins/folding_example.png authors: - user: rocketknight1 --- # Deep Learning With Proteins I have two audiences in mind while writing this. One is biologists who are trying to get into machine learning, and the other is machine learners who are trying to get into biology. If you’re not familiar with either biology or machine learning then you’re still welcome to come along, but you might find it a bit confusing at times! And if you’re already familiar with both, then you probably don’t need this post at all - you can just skip straight to our example notebooks to see these models in action: - Fine-tuning protein language models ([PyTorch](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/protein_language_modeling.ipynb), [TensorFlow](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/protein_language_modeling-tf.ipynb)) - Protein folding with ESMFold ([PyTorch](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/protein_folding.ipynb) only for now because of `openfold` dependencies) ## Introduction for biologists: What the hell is a language model? The models used to handle proteins are heavily inspired by large language models like BERT and GPT. So to understand how these models work we’re going to go back in time to 2016 or so, before they existed. Donald Trump hasn’t been elected yet, Brexit hasn’t yet happened, and Deep Learning (DL) is the hot new technique that’s breaking new records every day. The key to DL’s success is that it uses artificial neural networks to learn complex patterns in data. DL has one critical problem, though - it needs a **lot** of data to work well, and on many tasks that data just isn’t available. Let’s say that you want to train a DL model to take a sentence in English as input and decide if it’s grammatically correct or not. So you assemble your training data, and it looks something like this: | Text | Label | | --- | --- | | The judge told the jurors to think carefully. | Correct | | The judge told that the jurors to think carefully. | Incorrect | | … | … | In theory, this task was completely possible at the time - if you fed training data like this into a DL model, it could learn to predict whether new sentences were grammatically correct or not. In practice, it didn’t work so well, because in 2016 most people randomly initialized a new model for each task they wanted to train them on. This meant that **models had to learn everything they needed to know just from the examples in the training data!** To understand just how difficult that is, pretend you’re a machine learning model and I’m giving you some training data for a task I want you to learn. Here it is: | Text | Label | | --- | --- | | Is í an stiúrthóir is fearr ar domhan! | 1 | | Is fuath liom an scannán seo. | 0 | | Scannán den scoth ab ea é. | 1 | | D’fhág mé an phictiúrlann tar éis fiche nóiméad! | 0 | I chose a language here that I’m hoping you’ve never seen before, and so I’m guessing you probably don’t feel very confident that you’ve learned this task. Maybe after hundreds or thousands of examples you might start to notice some recurring words or patterns in the inputs, and you might be able to make guesses that were better than random chance, but even then a new word or unusual phrasing would definitely be able to throw you and make you guess incorrectly. 
Not coincidentally, that’s about how well DL models performed at the time too! Now try the same task, but in English: | Text | Label | | --- | --- | | She’s the best director in the world! | 1 | | I hate this movie. | 0 | | It was an absolutely excellent film. | 1 | | I left the cinema after twenty minutes! | 0 | Now it’s easy - the task is just predicting whether a movie review is positive (1) or negative (0). With just two positive examples and two negative examples, you could probably do this task with close to 100% accuracy, because **you already have a vast pre-existing knowledge of English vocabulary and grammar, as well as cultural context surrounding movies and emotional expression.** Without that knowledge, things are more like the first task - you would need to read a huge number of examples before you begin to spot even superficial patterns in the inputs, and even if you took the time to study hundreds of thousands of examples your guesses would still be far less accurate than they are after only four examples in the English language task. ### The critical breakthrough: Transfer learning In machine learning, we call this concept of transferring prior knowledge to a new task “**transfer learning**”. Getting this kind of transfer learning to work for DL was a major goal for the field around 2016. Things like pre-trained word vectors (which are very interesting, but outside the scope of this blogpost!) did exist by 2016 and allowed some knowledge to be transferred to new models, but this knowledge transfer was still relatively superficial, and models still needed large amounts of training data to work well. This stage of affairs continued until 2018, when two huge papers landed, introducing the models [ULMFiT](https://arxiv.org/abs/1801.06146) and later [BERT](https://arxiv.org/abs/1810.04805). These were the first papers that got transfer learning in natural language to work really well, and BERT in particular marked the beginning of the era of pre-trained large language models. The trick, shared by both papers, is that they took advantage of the internal structure of the artificial neural networks in deep learning - they trained a neural net for a long time on a text task where training data was very abundant, and then they just copied the whole neural network to a new task, changing only the few neurons that corresponded to the network’s output. ![transfer learning](assets/119_deep_learning_with_proteins/transfer_learning.png) *This figure from [the ULMFiT paper](https://arxiv.org/abs/1801.06146) shows the enormous gains in performance from using transfer learning versus training a model from scratch on three separate tasks. In many cases, using transfer learning yields performance equivalent to having more than 100X as much training data. And don’t forget that this was published in 2018 - modern large language models can do even better!* The reason this works is that in the process of solving any non-trivial task, neural networks learn a lot of the structure of the input data - visual networks, given raw pixels, learn to identify lines and curves and edges; text networks, given raw text, learn details of grammatical structure. This information is not task-specific, however - the key reason transfer learning works is that **a lot of what you need to know to solve a task is not specific to that task!** To classify movie reviews you didn’t need to know a lot about movie reviews, but you did need a vast knowledge of English and cultural context. 
By picking a task where training data is abundant, we can get a neural network to learn that sort of “domain knowledge” and then later apply it to new tasks we care about, where training data might be a lot harder to come by. At this point, hopefully you understand what transfer learning is, and that a large language model is just a big neural network that’s been trained on lots of text data, which makes it a prime candidate for transferring to new tasks. We’ll see how these same techniques can be applied to proteins below, but first I need to write an introduction for the other half of my audience. Feel free to skip this next bit if you’re already familiar! ## Introduction for machine learning people: What the hell is a protein? To condense an entire degree into one sentence: Proteins do a lot of stuff. Some proteins are **enzymes** - they act as catalysts for chemical reactions. When your body converts nutrients to energy, each step of the path from food to muscle movement is catalyzed by an enzyme. Some proteins are **structural -** they give stability and shape, for example in connective tissue. If you’ve ever seen a cosmetics advertisement you’ve probably seen words like **collagen** and **elastin** and **keratin -** these are proteins that form a lot of the structure of our skin and hair. Other proteins are critical in health and disease - everyone probably remembers endless news reports on the **spike protein** of the COVID-19 virus. The COVID spike protein binds to a protein called ACE2 that is found on the surface of human cells, which allows it to enter the cell and deliver its payload of viral RNA. Because this interaction was so critical to infection, modelling these proteins and their interactions was a huge focus during the pandemic. Proteins are composed of multiple **amino acids.** Amino acids are relatively simple molecules that all share the same molecular backbone, and the chemistry of this backbone allows amino acids to fuse together, so that the individual molecules can become a long chain. The critical thing to understand here is that there are only a few different amino acids - 20 standard ones, plus maybe a couple of rare and weird ones depending on the specific organism in question. What gives rise to the huge diversity of proteins is that these **amino acids can be combined in any order,** and the resulting protein chain can have vastly different shapes and functions as a result, as different parts of the chain stick and fold onto each other. Think of text as an analogy here - English only has 26 letters, and yet think of all the different kinds of things you can write with combinations of those 26 letters! In fact, because there are so few amino acids, biologists can assign a unique letter of the alphabet to each one. This means that you can write a protein just as a text string! For example, let’s say a protein has the amino acids Methionine, Alanine and Histidine in a chain. The [corresponding letters](https://en.wikipedia.org/wiki/Amino_acid#Table_of_standard_amino_acid_abbreviations_and_properties) for those amino acids are just M, A and H, and so we could write that chain as just “MAH”. Most proteins contain hundreds or even thousands of amino acids rather than just three, though! ![protein structure](assets/119_deep_learning_with_proteins/protein_structure.png) *This figure shows two representations of a protein. All amino acids contain a Carbon-Carbon-Nitrogen sequence. 
When amino acids are fused into a protein, this repeated pattern will run throughout its entire length, where it is called the protein’s “backbone”. Amino acids differ, however, in their “side chain”, which is the name given to the atoms attached to this C-C-N backbone. The lower figure uses generic side chains labelled as R1, R2 and R3, which could be any amino acid. In the upper figure, the central amino acid has a CH3 side chain - this identifies it as the amino acid* Alanine, *which is represented by the letter A.* ([Image source](https://commons.wikimedia.org/wiki/File:Peptide-Figure-Revised.png)) Even though we can write them as text strings, proteins aren’t actually a “language”, at least not any kind of language that Noam Chomsky would recognize. But they do have a few language-like features that make them a very similar domain to text from a machine learning perspective: Proteins are long strings in a fixed, small alphabet, and although any string is possible in theory, in practice only a very small subset of strings actually make “sense”. Random text is garbage, and random proteins are just a shapeless blob. Also, information is lost if you just consider parts of a protein in isolation, in the same way that information is lost if you just read a single sentence extracted from a larger text. A region of a protein may only assume its natural shape in the presence of other parts of the protein that stabilize and correct that shape! This means that long-range interactions, of the kind that are well-captured by global self-attention, are very important to modelling proteins correctly. At this point, hopefully you have a vague idea of what a protein is and why biologists care about them so much - despite their small ‘alphabet’ of amino acids, they have a vast diversity of structure and function, and being able to understand and predict those structures and functions just from looking at the raw ‘string’ of amino acids would be an extremely valuable research tool. ## Bringing it together: Machine learning with proteins So now we've seen how transfer learning with language models works, and we've seen what proteins are. And once you have that background, the next step isn't too hard - we can use the same transfer learning ideas on proteins! Instead of pre-training a model on a task involving English text, we train it on a task where the inputs are proteins, but where a lot of training data is available. Once we've done that, our model has hopefully learned a lot about the structure of proteins, in the same way that language models learn a lot about the structure of language. That makes pre-trained protein models a prime candidate for transferring to any other protein-based task! What kind of machine learning tasks do biologists care about training protein models on? The most famous protein modelling task is **protein folding**. The task here is to, given the amino acid chain like “MLKNV…”, predict the final shape that protein will fold into. This is an enormously important task, because accurately predicting the shape and structure of a protein gives a lot of insights into what the protein does, and how it does it. 
People have been studying this problem since long before modern machine learning - some of the earliest massive distributed computing projects like Folding@Home used atomic-level simulations at incredible spatial and temporal resolution to model protein folding, and there is an entire field of *protein crystallography* that uses X-ray diffraction to observe the structure of proteins isolated from living cells. Like a lot of other fields, though, the arrival of deep learning changed everything. AlphaFold and especially AlphaFold2 used transformer deep learning models with a number of protein-specific additions to achieve exceptional results at predicting the structure of novel proteins just from the raw amino acid sequence. If protein folding is what you’re interested in, we highly recommend checking out [our ESMFold notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/protein_folding.ipynb) - ESMFold is a new model that’s similar to AlphaFold2, but it’s more of a ‘pure’ deep learning model that does not require any external databases or search steps to run. As a result, the setup process is much less painful and the model runs much more quickly, while still retaining outstanding accuracy. ![folding example](assets/119_deep_learning_with_proteins/folding_example.png) *The predicted structure for the homodimeric* P. multocida *protein **Glucosamine-6-phosphate deaminase**. This structure and visualization was generated in seconds using the ESMFold notebook linked above. Darker blue colours indicate regions of highest structure confidence.* Protein folding isn’t the only task of interest, though! There are a wide range of classification tasks that biologists might want to do with proteins - maybe they want to predict which part of the cell that protein will operate in, or which amino acids in the protein will receive certain modifications after the protein is created. In the language of machine learning, tasks like these are called **sequence classification** when you want to classify the entire protein (for example, predicting its subcellular localization), or **token classification** when you want to classify each amino acid (for example, predicting which individual amino acids will receive post-translational modifications). The key takeaway, though, is that even though proteins are very different to language, they can be handled by almost exactly the same machine learning approach - large-scale pre-training on a big database of protein sequences, followed by **transfer learning** to a wide range of tasks of interest where training data might be much sparser. In fact, in some respects it’s even simpler than a large language model like BERT, because no complex splitting and parsing of words is required - proteins don’t have “word” divisions, and so the easiest approach is to simply convert each amino acid to a single input token. ## Sounds cool, but I don’t know where to start! If you’re already familiar with deep learning, then you’ll find that the code for fine-tuning protein models looks extremely similar to the code for fine-tuning language models. 
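To make that concrete, here is a minimal, hypothetical sketch of what sequence classification on proteins can look like. The toy sequences and binary labels below are made up purely for illustration, and the checkpoint name is just one of the smaller ESM-2 models on the Hub.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Toy protein "strings" and made-up binary labels (e.g. membrane vs. not)
sequences = ["MLKNVQVQLV", "MAHHHHHHSG"]
labels = torch.tensor([0, 1])

checkpoint = "facebook/esm2_t6_8M_UR50D"  # a small ESM-2 checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# Each amino acid letter becomes one input token; padding handles unequal lengths
inputs = tokenizer(sequences, padding=True, return_tensors="pt")
outputs = model(**inputs, labels=labels)
print(outputs.loss, outputs.logits.shape)  # a trainable loss, plus (2, 2) logits
```

A real fine-tuning run just wraps something like this in a training loop (or the `Trainer` API) over a properly labelled dataset.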
We have example notebooks for both [PyTorch](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/protein_language_modeling.ipynb) and [TensorFlow](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/protein_language_modeling-tf.ipynb) if you’re curious, and you can get huge amounts of annotated data from open-access protein databases like [UniProt](https://www.uniprot.org/), which has a REST API as well as a nice web interface. Your main difficulty will be finding interesting research directions to explore, which is somewhat beyond the scope of this document - but I’m sure there are plenty of biologists out there who’d love to collaborate with you! If you’re a biologist, on the other hand, you probably have several ideas for what you want to try, but might be a little intimidated about diving into machine learning code. Don’t panic! We’ve designed the example notebooks ([PyTorch](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/protein_language_modeling.ipynb), [TensorFlow](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/protein_language_modeling-tf.ipynb)) so that the data-loading section is quite independent of the rest. This means that if you have a **sequence classification** or **token classification** task in mind, all you need to do is build a list of protein sequences and a list of corresponding labels, and then swap out our data loading code for any code that loads or generates those lists. Although the specific examples linked use [ESM-2](https://www.biorxiv.org/content/10.1101/2022.07.20.500902v1) as the base pre-trained model, as it’s the current state of the art, people in the field are also likely to be familiar with the Rost lab whose models like [ProtBERT](https://huggingface.co/Rostlab/prot_bert) ([paper link](https://www.biorxiv.org/content/10.1101/2020.07.12.199554v3)) were some of the earliest models of their kind and have seen phenomenal interest from the bioinformatics community. Much of the code in the linked examples can be swapped over to using a base like ProtBERT simply by changing the checkpoint path from `facebook/esm2...` to something like `Rostlab/prot_bert`. ## Conclusion The intersection of deep learning and biology is going to be an incredibly active and fruitful field in the next few years. One of the things that makes deep learning such a fast-moving field, though, is the speed with which people can reproduce results and adapt new models for their own use. In that spirit, if you train a model that you think would be useful to the community, please share it! The notebooks linked above contain code to upload models to the Hub, where they can be freely accessed and built upon by other researchers - in addition to the benefits to the field, this is a great way to get visibility and citations for your associated papers as well. You can even make a live web demo with [Spaces](https://huggingface.co/docs/hub/spaces-overview) so that other researchers can input protein sequences and get results for free without needing to write a single line of code. Good luck, and may Reviewer 2 be kind to you!
huggingface/blog/blob/main/deep-learning-with-proteins.md
!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # Effective and efficient diffusion [[open-in-colab]] Getting the [`DiffusionPipeline`] to generate images in a certain style or include what you want can be tricky. Often times, you have to run the [`DiffusionPipeline`] several times before you end up with an image you're happy with. But generating something out of nothing is a computationally intensive process, especially if you're running inference over and over again. This is why it's important to get the most *computational* (speed) and *memory* (GPU vRAM) efficiency from the pipeline to reduce the time between inference cycles so you can iterate faster. This tutorial walks you through how to generate faster and better with the [`DiffusionPipeline`]. Begin by loading the [`runwayml/stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5) model: ```python from diffusers import DiffusionPipeline model_id = "runwayml/stable-diffusion-v1-5" pipeline = DiffusionPipeline.from_pretrained(model_id, use_safetensors=True) ``` The example prompt you'll use is a portrait of an old warrior chief, but feel free to use your own prompt: ```python prompt = "portrait photo of a old warrior chief" ``` ## Speed <Tip> 💡 If you don't have access to a GPU, you can use one for free from a GPU provider like [Colab](https://colab.research.google.com/)! </Tip> One of the simplest ways to speed up inference is to place the pipeline on a GPU the same way you would with any PyTorch module: ```python pipeline = pipeline.to("cuda") ``` To make sure you can use the same image and improve on it, use a [`Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) and set a seed for [reproducibility](./using-diffusers/reproducibility): ```python import torch generator = torch.Generator("cuda").manual_seed(0) ``` Now you can generate an image: ```python image = pipeline(prompt, generator=generator).images[0] image ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/stable_diffusion_101/sd_101_1.png"> </div> This process took ~30 seconds on a T4 GPU (it might be faster if your allocated GPU is better than a T4). By default, the [`DiffusionPipeline`] runs inference with full `float32` precision for 50 inference steps. You can speed this up by switching to a lower precision like `float16` or running fewer inference steps. 
Let's start by loading the model in `float16` and generate an image: ```python import torch pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16, use_safetensors=True) pipeline = pipeline.to("cuda") generator = torch.Generator("cuda").manual_seed(0) image = pipeline(prompt, generator=generator).images[0] image ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/stable_diffusion_101/sd_101_2.png"> </div> This time, it only took ~11 seconds to generate the image, which is almost 3x faster than before! <Tip> 💡 We strongly suggest always running your pipelines in `float16`, and so far, we've rarely seen any degradation in output quality. </Tip> Another option is to reduce the number of inference steps. Choosing a more efficient scheduler could help decrease the number of steps without sacrificing output quality. You can find which schedulers are compatible with the current model in the [`DiffusionPipeline`] by calling the `compatibles` method: ```python pipeline.scheduler.compatibles [ diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler, diffusers.schedulers.scheduling_unipc_multistep.UniPCMultistepScheduler, diffusers.schedulers.scheduling_k_dpm_2_discrete.KDPM2DiscreteScheduler, diffusers.schedulers.scheduling_deis_multistep.DEISMultistepScheduler, diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler, diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler, diffusers.schedulers.scheduling_ddpm.DDPMScheduler, diffusers.schedulers.scheduling_dpmsolver_singlestep.DPMSolverSinglestepScheduler, diffusers.schedulers.scheduling_k_dpm_2_ancestral_discrete.KDPM2AncestralDiscreteScheduler, diffusers.utils.dummy_torch_and_torchsde_objects.DPMSolverSDEScheduler, diffusers.schedulers.scheduling_heun_discrete.HeunDiscreteScheduler, diffusers.schedulers.scheduling_pndm.PNDMScheduler, diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler, diffusers.schedulers.scheduling_ddim.DDIMScheduler, ] ``` The Stable Diffusion model uses the [`PNDMScheduler`] by default which usually requires ~50 inference steps, but more performant schedulers like [`DPMSolverMultistepScheduler`], require only ~20 or 25 inference steps. Use the [`~ConfigMixin.from_config`] method to load a new scheduler: ```python from diffusers import DPMSolverMultistepScheduler pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config) ``` Now set the `num_inference_steps` to 20: ```python generator = torch.Generator("cuda").manual_seed(0) image = pipeline(prompt, generator=generator, num_inference_steps=20).images[0] image ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/stable_diffusion_101/sd_101_3.png"> </div> Great, you've managed to cut the inference time to just 4 seconds! ⚡️ ## Memory The other key to improving pipeline performance is consuming less memory, which indirectly implies more speed, since you're often trying to maximize the number of images generated per second. The easiest way to see how many images you can generate at once is to try out different batch sizes until you get an `OutOfMemoryError` (OOM). Create a function that'll generate a batch of images from a list of prompts and `Generators`. Make sure to assign each `Generator` a seed so you can reuse it if it produces a good result. 
```python def get_inputs(batch_size=1): generator = [torch.Generator("cuda").manual_seed(i) for i in range(batch_size)] prompts = batch_size * [prompt] num_inference_steps = 20 return {"prompt": prompts, "generator": generator, "num_inference_steps": num_inference_steps} ``` Start with `batch_size=4` and see how much memory you've consumed: ```python from diffusers.utils import make_image_grid images = pipeline(**get_inputs(batch_size=4)).images make_image_grid(images, 2, 2) ``` Unless you have a GPU with more vRAM, the code above probably returned an `OOM` error! Most of the memory is taken up by the cross-attention layers. Instead of running this operation in a batch, you can run it sequentially to save a significant amount of memory. All you have to do is configure the pipeline to use the [`~DiffusionPipeline.enable_attention_slicing`] function: ```python pipeline.enable_attention_slicing() ``` Now try increasing the `batch_size` to 8! ```python images = pipeline(**get_inputs(batch_size=8)).images make_image_grid(images, rows=2, cols=4) ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/stable_diffusion_101/sd_101_5.png"> </div> Whereas before you couldn't even generate a batch of 4 images, now you can generate a batch of 8 images at ~3.5 seconds per image! This is probably the fastest you can go on a T4 GPU without sacrificing quality. ## Quality In the last two sections, you learned how to optimize the speed of your pipeline by using `fp16`, reducing the number of inference steps by using a more performant scheduler, and enabling attention slicing to reduce memory consumption. Now you're going to focus on how to improve the quality of generated images. ### Better checkpoints The most obvious step is to use better checkpoints. The Stable Diffusion model is a good starting point, and since its official launch, several improved versions have also been released. However, using a newer version doesn't automatically mean you'll get better results. You'll still have to experiment with different checkpoints yourself, and do a little research (such as using [negative prompts](https://minimaxir.com/2022/11/stable-diffusion-negative-prompt/)) to get the best results. As the field grows, there are more and more high-quality checkpoints finetuned to produce certain styles. Try exploring the [Hub](https://huggingface.co/models?library=diffusers&sort=downloads) and [Diffusers Gallery](https://huggingface.co/spaces/huggingface-projects/diffusers-gallery) to find one you're interested in! ### Better pipeline components You can also try replacing the current pipeline components with a newer version. Let's try loading the latest [autoencoder](https://huggingface.co/stabilityai/stable-diffusion-2-1/tree/main/vae) from Stability AI into the pipeline, and generate some images: ```python from diffusers import AutoencoderKL vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16).to("cuda") pipeline.vae = vae images = pipeline(**get_inputs(batch_size=8)).images make_image_grid(images, rows=2, cols=4) ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/stable_diffusion_101/sd_101_6.png"> </div> ### Better prompt engineering The text prompt you use to generate an image is super important, so much so that it is called *prompt engineering*. 
Some considerations to keep during prompt engineering are: - How is the image or similar images of the one I want to generate stored on the internet? - What additional detail can I give that steers the model towards the style I want? With this in mind, let's improve the prompt to include color and higher quality details: ```python prompt += ", tribal panther make up, blue on red, side profile, looking away, serious eyes" prompt += " 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta" ``` Generate a batch of images with the new prompt: ```python images = pipeline(**get_inputs(batch_size=8)).images make_image_grid(images, rows=2, cols=4) ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/stable_diffusion_101/sd_101_7.png"> </div> Pretty impressive! Let's tweak the second image - corresponding to the `Generator` with a seed of `1` - a bit more by adding some text about the age of the subject: ```python prompts = [ "portrait photo of the oldest warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", "portrait photo of a old warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", "portrait photo of a warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", "portrait photo of a young warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", ] generator = [torch.Generator("cuda").manual_seed(1) for _ in range(len(prompts))] images = pipeline(prompt=prompts, generator=generator, num_inference_steps=25).images make_image_grid(images, 2, 2) ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/stable_diffusion_101/sd_101_8.png"> </div> ## Next steps In this tutorial, you learned how to optimize a [`DiffusionPipeline`] for computational and memory efficiency as well as improving the quality of generated outputs. If you're interested in making your pipeline even faster, take a look at the following resources: - Learn how [PyTorch 2.0](./optimization/torch2.0) and [`torch.compile`](https://pytorch.org/docs/stable/generated/torch.compile.html) can yield 5 - 300% faster inference speed. On an A100 GPU, inference can be up to 50% faster! - If you can't use PyTorch 2, we recommend you install [xFormers](./optimization/xformers). Its memory-efficient attention mechanism works great with PyTorch 1.13.1 for faster speed and reduced memory consumption. - Other optimization techniques, such as model offloading, are covered in [this guide](./optimization/fp16).
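As a rough illustration of the first pointer above (assuming PyTorch 2.x and the `pipeline`, `prompt`, and scheduler already set up in this tutorial), compiling the UNet is typically the single biggest win:

```python
import torch

# Compile the UNet once; the first call is slow, subsequent calls reuse the compiled graph
pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True)

generator = torch.Generator("cuda").manual_seed(0)
image = pipeline(prompt, generator=generator, num_inference_steps=20).images[0]
image
```

The first generation pays a one-time compilation cost, so measure speed on the runs that follow it.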
huggingface/diffusers/blob/main/docs/source/en/stable_diffusion.md
-- title: WER emoji: 🤗 colorFrom: blue colorTo: red sdk: gradio sdk_version: 3.19.1 app_file: app.py pinned: false tags: - evaluate - metric description: >- Word error rate (WER) is a common metric of the performance of an automatic speech recognition system. The general difficulty of measuring performance lies in the fact that the recognized word sequence can have a different length from the reference word sequence (supposedly the correct one). The WER is derived from the Levenshtein distance, working at the word level instead of the phoneme level. The WER is a valuable tool for comparing different systems as well as for evaluating improvements within one system. This kind of measurement, however, provides no details on the nature of translation errors and further work is therefore required to identify the main source(s) of error and to focus any research effort. This problem is solved by first aligning the recognized word sequence with the reference (spoken) word sequence using dynamic string alignment. Examination of this issue is seen through a theory called the power law that states the correlation between perplexity and word error rate. Word error rate can then be computed as: WER = (S + D + I) / N = (S + D + I) / (S + D + C) where S is the number of substitutions, D is the number of deletions, I is the number of insertions, C is the number of correct words, N is the number of words in the reference (N=S+D+C). This value indicates the average number of errors per reference word. The lower the value, the better the performance of the ASR system with a WER of 0 being a perfect score. --- # Metric Card for WER ## Metric description Word error rate (WER) is a common metric of the performance of an automatic speech recognition (ASR) system. The general difficulty of measuring the performance of ASR systems lies in the fact that the recognized word sequence can have a different length from the reference word sequence (supposedly the correct one). The WER is derived from the [Levenshtein distance](https://en.wikipedia.org/wiki/Levenshtein_distance), working at the word level. This problem is solved by first aligning the recognized word sequence with the reference (spoken) word sequence using dynamic string alignment. Examination of this issue is seen through a theory called the power law that states the correlation between [perplexity](https://huggingface.co/metrics/perplexity) and word error rate (see [this article](https://www.cs.cmu.edu/~roni/papers/eval-metrics-bntuw-9802.pdf) for further information). Word error rate can then be computed as: `WER = (S + D + I) / N = (S + D + I) / (S + D + C)` where `S` is the number of substitutions, `D` is the number of deletions, `I` is the number of insertions, `C` is the number of correct words, `N` is the number of words in the reference (`N=S+D+C`). ## How to use The metric takes two inputs: references (a list of references for each speech input) and predictions (a list of transcriptions to score). ```python from evaluate import load wer = load("wer") wer_score = wer.compute(predictions=predictions, references=references) ``` ## Output values This metric outputs a float representing the word error rate. ``` print(wer_score) 0.5 ``` This value indicates the average number of errors per reference word. The **lower** the value, the **better** the performance of the ASR system, with a WER of 0 being a perfect score. 
### Values from popular papers

This metric is highly dependent on the content and quality of the dataset, and therefore users can expect very different values for the same model but on different datasets.

For example, datasets such as [LibriSpeech](https://huggingface.co/datasets/librispeech_asr) report a WER in the 1.8-3.3 range, whereas ASR models evaluated on [Timit](https://huggingface.co/datasets/timit_asr) report a WER in the 8.3-20.4 range. See the leaderboards for [LibriSpeech](https://paperswithcode.com/sota/speech-recognition-on-librispeech-test-clean) and [Timit](https://paperswithcode.com/sota/speech-recognition-on-timit) for the most recent values.

## Examples

Perfect match between prediction and reference:

```python
from evaluate import load
wer = load("wer")
predictions = ["hello world", "good night moon"]
references = ["hello world", "good night moon"]
wer_score = wer.compute(predictions=predictions, references=references)
print(wer_score)
0.0
```

Partial match between prediction and reference:

```python
from evaluate import load
wer = load("wer")
predictions = ["this is the prediction", "there is an other sample"]
references = ["this is the reference", "there is another one"]
wer_score = wer.compute(predictions=predictions, references=references)
print(wer_score)
0.5
```

No match between prediction and reference:

```python
from evaluate import load
wer = load("wer")
predictions = ["hello world", "good night moon"]
references = ["hi everyone", "have a great day"]
wer_score = wer.compute(predictions=predictions, references=references)
print(wer_score)
1.0
```

## Limitations and bias

WER is a valuable tool for comparing different systems as well as for evaluating improvements within one system. This kind of measurement, however, provides no details on the nature of translation errors and further work is therefore required to identify the main source(s) of error and to focus any research effort.

## Citation

```bibtex
@inproceedings{woodard1982,
author = {Woodard, J.P. and Nelson, J.T.},
year = {1982},
journal = {Workshop on standardisation for speech I/O technology, Naval Air Development Center, Warminster, PA},
title = {An information theoretic measure of speech recognition performance}
}
```

```bibtex
@inproceedings{morris2004,
author = {Morris, Andrew and Maier, Viktoria and Green, Phil},
year = {2004},
month = {01},
pages = {},
title = {From WER and RIL to MER and WIL: improved evaluation measures for connected speech recognition.}
}
```

## Further References

- [Word Error Rate -- Wikipedia](https://en.wikipedia.org/wiki/Word_error_rate)
- [Hugging Face Tasks -- Automatic Speech Recognition](https://huggingface.co/tasks/automatic-speech-recognition)
huggingface/evaluate/blob/main/metrics/wer/README.md
The Trainer API.

The Transformers library provides a Trainer API that allows you to easily fine-tune transformer models on your own dataset. The Trainer class takes your datasets and your model as well as the training hyperparameters and can perform the training on any kind of setup (CPU, GPU, multiple GPUs, TPUs). It can also compute the predictions on any dataset, and if you provide metrics, evaluate your model on any dataset. It can also handle the final data processing, such as dynamic padding, as long as you provide the tokenizer or a given data collator.

We will try this API on the MRPC dataset, since it's relatively small and easy to preprocess. As we saw in the Datasets overview video, here is how we can preprocess it. We do not apply padding during the preprocessing, as we will use dynamic padding with our DataCollatorWithPadding. Note that we don't do the final steps of renaming/removing columns or setting the format to torch tensors: the Trainer will do all of this automatically for us by analyzing the model signature.

The last steps before creating the Trainer are to define our model and some training hyperparameters. We saw how to do the first in the model API video. For the second, we use the TrainingArguments class. It only needs a path to a folder where results and checkpoints will be saved, but you can also customize all the hyperparameters the Trainer will use: learning rate, number of training epochs, etc. It's then very easy to create a Trainer and launch a training. This should display a progress bar, and after a few minutes (if you are running on a GPU) you should have the training finished. The result will be rather anticlimactic however, as you will only get a training loss, which doesn't really tell you anything about how well your model is performing. This is because we didn't specify any metrics for the evaluation.

To get those metrics, we will first gather the predictions on the whole evaluation set using the predict method. It returns a namedtuple with three fields: predictions (which contains the model predictions), label_ids (which contains the labels if your dataset had them) and metrics (which is empty here). The predictions are the logits of the model for all the sentences in the dataset, so a NumPy array of shape 408 by 2. To match them with our labels, we need to take the maximum logit for each prediction (to know which of the two classes was predicted), which we do with the argmax function. Then we can use a Metric from the Datasets library: it can be loaded as easily as our dataset with the load_metric function, and it returns the evaluation metric used for the dataset we are using. We can see our model did learn something, as it is 85.7% accurate.

To monitor the evaluation metrics during training, we need to define a compute_metrics function that does the same steps as before: it takes a namedtuple with predictions and labels and must return a dictionary with the metrics we want to keep track of. By passing the epoch evaluation strategy to our TrainingArguments, we tell the Trainer to evaluate at the end of every epoch. Launching a training inside a notebook will then display a progress bar and complete the table you see here as each epoch finishes.
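The workflow described above can be condensed into a short, self-contained sketch. The checkpoint name and hyperparameters below are illustrative choices rather than the exact ones used in the accompanying notebook, and accuracy is computed with plain NumPy to keep the example minimal.

```python
import numpy as np
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    DataCollatorWithPadding,
    Trainer,
    TrainingArguments,
)

raw_datasets = load_dataset("glue", "mrpc")
checkpoint = "bert-base-uncased"  # illustrative choice
tokenizer = AutoTokenizer.from_pretrained(checkpoint)


def tokenize(batch):
    # No padding here: DataCollatorWithPadding pads dynamically, batch by batch.
    return tokenizer(batch["sentence1"], batch["sentence2"], truncation=True)


tokenized = raw_datasets.map(tokenize, batched=True)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)


def compute_metrics(eval_preds):
    logits, labels = eval_preds
    predictions = np.argmax(logits, axis=-1)  # pick the highest logit per example
    return {"accuracy": float((predictions == labels).mean())}


args = TrainingArguments("test-trainer", evaluation_strategy="epoch")
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorWithPadding(tokenizer=tokenizer),
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
trainer.train()
```

Passing `evaluation_strategy="epoch"` is what tells the Trainer to run `compute_metrics` on the validation set at the end of every epoch.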
huggingface/course/blob/main/subtitles/en/raw/chapter3/03a_trainer-api.md
-- title: chrF emoji: 🤗 colorFrom: blue colorTo: red sdk: gradio sdk_version: 3.19.1 app_file: app.py pinned: false tags: - evaluate - metric description: >- ChrF and ChrF++ are two MT evaluation metrics. They both use the F-score statistic for character n-gram matches, and ChrF++ adds word n-grams as well which correlates more strongly with direct assessment. We use the implementation that is already present in sacrebleu. The implementation here is slightly different from sacrebleu in terms of the required input format. The length of the references and hypotheses lists need to be the same, so you may need to transpose your references compared to sacrebleu's required input format. See https://github.com/huggingface/datasets/issues/3154#issuecomment-950746534 See the README.md file at https://github.com/mjpost/sacreBLEU#chrf--chrf for more information. --- # Metric Card for chrF(++) ## Metric Description ChrF and ChrF++ are two MT evaluation metrics that use the F-score statistic for character n-gram matches. ChrF++ additionally includes word n-grams, which correlate more strongly with direct assessment. We use the implementation that is already present in sacrebleu. While this metric is included in sacreBLEU, the implementation here is slightly different from sacreBLEU in terms of the required input format. Here, the length of the references and hypotheses lists need to be the same, so you may need to transpose your references compared to sacrebleu's required input format. See https://github.com/huggingface/datasets/issues/3154#issuecomment-950746534 See the [sacreBLEU README.md](https://github.com/mjpost/sacreBLEU#chrf--chrf) for more information. ## How to Use At minimum, this metric requires a `list` of predictions and a `list` of `list`s of references: ```python >>> prediction = ["The relationship between cats and dogs is not exactly friendly.", "a good bookshop is just a genteel black hole that knows how to read."] >>> reference = [["The relationship between dogs and cats is not exactly friendly.", ], ["A good bookshop is just a genteel Black Hole that knows how to read."]] >>> chrf = evaluate.load("chrf") >>> results = chrf.compute(predictions=prediction, references=reference) >>> print(results) {'score': 84.64214891738334, 'char_order': 6, 'word_order': 0, 'beta': 2} ``` ### Inputs - **`predictions`** (`list` of `str`): The predicted sentences. - **`references`** (`list` of `list` of `str`): The references. There should be one reference sub-list for each prediction sentence. - **`char_order`** (`int`): Character n-gram order. Defaults to `6`. - **`word_order`** (`int`): Word n-gram order. If equals to 2, the metric is referred to as chrF++. Defaults to `0`. - **`beta`** (`int`): Determine the importance of recall w.r.t precision. Defaults to `2`. - **`lowercase`** (`bool`): If `True`, enables case-insensitivity. Defaults to `False`. - **`whitespace`** (`bool`): If `True`, include whitespaces when extracting character n-grams. Defaults to `False`. - **`eps_smoothing`** (`bool`): If `True`, applies epsilon smoothing similar to reference chrF++.py, NLTK, and Moses implementations. If `False`, takes into account effective match order similar to sacreBLEU < 2.0.0. Defaults to `False`. ### Output Values The output is a dictionary containing the following fields: - **`'score'`** (`float`): The chrF (chrF++) score. - **`'char_order'`** (`int`): The character n-gram order. - **`'word_order'`** (`int`): The word n-gram order. If equals to `2`, the metric is referred to as chrF++. 
- **`'beta'`** (`int`): Determine the importance of recall w.r.t precision. The output is formatted as below: ```python {'score': 61.576379378113785, 'char_order': 6, 'word_order': 0, 'beta': 2} ``` The chrF(++) score can be any value between `0.0` and `100.0`, inclusive. #### Values from Popular Papers ### Examples A simple example of calculating chrF: ```python >>> prediction = ["The relationship between cats and dogs is not exactly friendly.", "a good bookshop is just a genteel black hole that knows how to read."] >>> reference = [["The relationship between dogs and cats is not exactly friendly.", ], ["A good bookshop is just a genteel Black Hole that knows how to read."]] >>> chrf = evaluate.load("chrf") >>> results = chrf.compute(predictions=prediction, references=reference) >>> print(results) {'score': 84.64214891738334, 'char_order': 6, 'word_order': 0, 'beta': 2} ``` The same example, but with the argument `word_order=2`, to calculate chrF++ instead of chrF: ```python >>> prediction = ["The relationship between cats and dogs is not exactly friendly.", "a good bookshop is just a genteel black hole that knows how to read."] >>> reference = [["The relationship between dogs and cats is not exactly friendly.", ], ["A good bookshop is just a genteel Black Hole that knows how to read."]] >>> chrf = evaluate.load("chrf") >>> results = chrf.compute(predictions=prediction, ... references=reference, ... word_order=2) >>> print(results) {'score': 82.87263732906315, 'char_order': 6, 'word_order': 2, 'beta': 2} ``` The same chrF++ example as above, but with `lowercase=True` to normalize all case: ```python >>> prediction = ["The relationship between cats and dogs is not exactly friendly.", "a good bookshop is just a genteel black hole that knows how to read."] >>> reference = [["The relationship between dogs and cats is not exactly friendly.", ], ["A good bookshop is just a genteel Black Hole that knows how to read."]] >>> chrf = evaluate.load("chrf") >>> results = chrf.compute(predictions=prediction, ... references=reference, ... word_order=2, ... lowercase=True) >>> print(results) {'score': 92.12853119829202, 'char_order': 6, 'word_order': 2, 'beta': 2} ``` ## Limitations and Bias - According to [Popović 2017](https://www.statmt.org/wmt17/pdf/WMT70.pdf), chrF+ (where `word_order=1`) and chrF++ (where `word_order=2`) produce scores that correlate better with human judgements than chrF (where `word_order=0`) does. 
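To make the mechanics concrete, here is a minimal sketch of an F-beta score over character n-grams of a single order. The actual implementation (via sacreBLEU) averages over all orders up to `char_order`, optionally adds word n-grams for chrF++, and treats whitespace and smoothing more carefully, so treat this purely as an illustration of the underlying idea.

```python
from collections import Counter


def char_ngram_fscore(reference: str, hypothesis: str, n: int = 6, beta: int = 2) -> float:
    """F_beta over character n-grams of a single order n (illustrative only)."""
    ref = reference.replace(" ", "")
    hyp = hypothesis.replace(" ", "")
    ref_ngrams = Counter(ref[i : i + n] for i in range(len(ref) - n + 1))
    hyp_ngrams = Counter(hyp[i : i + n] for i in range(len(hyp) - n + 1))
    overlap = sum((ref_ngrams & hyp_ngrams).values())  # clipped n-gram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(hyp_ngrams.values())
    recall = overlap / sum(ref_ngrams.values())
    # beta > 1 weights recall more heavily than precision (beta=2 for chrF).
    return (1 + beta**2) * precision * recall / (beta**2 * precision + recall)


print(char_ngram_fscore("The cat sat on the mat.", "The cat sat on a mat."))
```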
## Citation ```bibtex @inproceedings{popovic-2015-chrf, title = "chr{F}: character n-gram {F}-score for automatic {MT} evaluation", author = "Popovi{\'c}, Maja", booktitle = "Proceedings of the Tenth Workshop on Statistical Machine Translation", month = sep, year = "2015", address = "Lisbon, Portugal", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/W15-3049", doi = "10.18653/v1/W15-3049", pages = "392--395", } @inproceedings{popovic-2017-chrf, title = "chr{F}++: words helping character n-grams", author = "Popovi{\'c}, Maja", booktitle = "Proceedings of the Second Conference on Machine Translation", month = sep, year = "2017", address = "Copenhagen, Denmark", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/W17-4770", doi = "10.18653/v1/W17-4770", pages = "612--618", } @inproceedings{post-2018-call, title = "A Call for Clarity in Reporting {BLEU} Scores", author = "Post, Matt", booktitle = "Proceedings of the Third Conference on Machine Translation: Research Papers", month = oct, year = "2018", address = "Belgium, Brussels", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/W18-6319", pages = "186--191", } ``` ## Further References - See the [sacreBLEU README.md](https://github.com/mjpost/sacreBLEU#chrf--chrf) for more information on this implementation.
huggingface/evaluate/blob/main/metrics/chrf/README.md
!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # Semantic Guidance Semantic Guidance for Diffusion Models was proposed in [SEGA: Instructing Text-to-Image Models using Semantic Guidance](https://huggingface.co/papers/2301.12247) and provides strong semantic control over image generation. Small changes to the text prompt usually result in entirely different output images. However, with SEGA a variety of changes to the image are enabled that can be controlled easily and intuitively, while staying true to the original image composition. The abstract from the paper is: *Text-to-image diffusion models have recently received a lot of interest for their astonishing ability to produce high-fidelity images from text only. However, achieving one-shot generation that aligns with the user's intent is nearly impossible, yet small changes to the input prompt often result in very different images. This leaves the user with little semantic control. To put the user in control, we show how to interact with the diffusion process to flexibly steer it along semantic directions. This semantic guidance (SEGA) generalizes to any generative architecture using classifier-free guidance. More importantly, it allows for subtle and extensive edits, changes in composition and style, as well as optimizing the overall artistic conception. We demonstrate SEGA's effectiveness on both latent and pixel-based diffusion models such as Stable Diffusion, Paella, and DeepFloyd-IF using a variety of tasks, thus providing strong evidence for its versatility, flexibility, and improvements over existing methods.* <Tip> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines. </Tip> ## SemanticStableDiffusionPipeline [[autodoc]] SemanticStableDiffusionPipeline - all - __call__ ## StableDiffusionSafePipelineOutput [[autodoc]] pipelines.semantic_stable_diffusion.pipeline_output.SemanticStableDiffusionPipelineOutput - all
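A rough usage sketch is shown below. The base checkpoint is an assumption for illustration, and the editing arguments are a small subset of those exposed by `SemanticStableDiffusionPipeline.__call__`; treat this as a sketch rather than a prescribed recipe.

```python
import torch
from diffusers import SemanticStableDiffusionPipeline

# Base checkpoint chosen for illustration; any compatible Stable Diffusion checkpoint should work.
pipe = SemanticStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

out = pipe(
    prompt="a photo of the face of a woman",
    num_inference_steps=50,
    guidance_scale=7,
    editing_prompt=["smiling, smile"],   # semantic direction to steer towards
    reverse_editing_direction=[False],   # set to True to steer away from the concept instead
    edit_guidance_scale=[5],
    edit_warmup_steps=[10],
)
out.images[0].save("sega_smile.png")
```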
huggingface/diffusers/blob/main/docs/source/en/api/pipelines/semantic_stable_diffusion.md
Using 🤗 Datasets Once you've found an interesting dataset on the Hugging Face Hub, you can load the dataset using 🤗 Datasets. You can click on the [**Use in dataset library** button](https://huggingface.co/datasets/samsum?library=true) to copy the code to load a dataset. First you need to [Login with your Hugging Face account](../huggingface_hub/quick-start#login), for example using: ``` huggingface-cli login ``` And then you can load a dataset from the Hugging Face Hub using ```python from datasets import load_dataset dataset = load_dataset("username/my_dataset") # or load the separate splits if the dataset has train/validation/test splits train_dataset = load_dataset("username/my_dataset", split="train") valid_dataset = load_dataset("username/my_dataset", split="validation") test_dataset = load_dataset("username/my_dataset", split="test") ``` You can also upload datasets to the Hugging Face Hub: ```python my_new_dataset.push_to_hub("username/my_new_dataset") ``` This creates a dataset repository `username/my_new_dataset` containing your Dataset in Parquet format, that you can reload later. For more information about using 🤗 Datasets, check out the [tutorials](https://huggingface.co/docs/datasets/tutorial) and [how-to guides](https://huggingface.co/docs/datasets/how_to) available in the 🤗 Datasets documentation.
huggingface/hub-docs/blob/main/docs/hub/datasets-usage.md
-- title: "Sentence Transformers in the Hugging Face Hub" authors: - user: osanseviero - user: nreimers --- # Sentence Transformers in the Hugging Face Hub Over the past few weeks, we've built collaborations with many Open Source frameworks in the machine learning ecosystem. One that gets us particularly excited is Sentence Transformers. [Sentence Transformers](https://github.com/UKPLab/sentence-transformers) is a framework for sentence, paragraph and image embeddings. This allows to derive semantically meaningful embeddings (1) which is useful for applications such as semantic search or multi-lingual zero shot classification. As part of Sentence Transformers [v2 release](https://github.com/UKPLab/sentence-transformers/releases/tag/v2.0.0), there are a lot of cool new features: - Sharing your models in the Hub easily. - Widgets and Inference API for sentence embeddings and sentence similarity. - Better sentence-embeddings models available ([benchmark](https://www.sbert.net/docs/pretrained_models.html#sentence-embedding-models) and [models](https://huggingface.co/sentence-transformers) in the Hub). With over 90 pretrained Sentence Transformers models for more than 100 languages in the Hub, anyone can benefit from them and easily use them. Pre-trained models can be loaded and used directly with few lines of code: ```python from sentence_transformers import SentenceTransformer sentences = ["Hello World", "Hallo Welt"] model = SentenceTransformer('sentence-transformers/paraphrase-MiniLM-L6-v2') embeddings = model.encode(sentences) print(embeddings) ``` But not only this. People will probably want to either demo their models or play with other models easily, so we're happy to announce the release of two new widgets in the Hub! The first one is the `feature-extraction` widget which shows the sentence embedding. 
*[Interactive widget: Hosted Inference API feature-extraction demo for `sentence-transformers/distilbert-base-nli-max-tokens`]*

But seeing a bunch of numbers might not be very useful to you (unless you're able to understand the embeddings from a quick look, which would be impressive!). We're also introducing a new widget for a common use case of Sentence Transformers: computing sentence similarity.

*[Interactive widget: Hosted Inference API sentence-similarity demo for `sentence-transformers/paraphrase-MiniLM-L6-v2`]*

Of course, on top of the widgets, we also provide API endpoints in our Inference API that you can use to programmatically call your models!

```python
import json
import requests

API_URL = "https://api-inference.huggingface.co/models/sentence-transformers/paraphrase-MiniLM-L6-v2"
headers = {"Authorization": "Bearer YOUR_TOKEN"}

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

data = query(
    {
        "inputs": {
            "source_sentence": "That is a happy person",
            "sentences": [
                "That is a happy dog",
                "That is a very happy person",
                "Today is a sunny day"
            ]
        }
    }
)
```

## Unleashing the Power of Sharing

So why is this powerful?
In a matter of minutes, you can share your trained models with the whole community. ```python from sentence_transformers import SentenceTransformer # Load or train a model model.save_to_hub("my_new_model") ``` Now you will have a [repository](https://huggingface.co/osanseviero/my_new_model) in the Hub which hosts your model. A model card was automatically created. It describes the architecture by listing the layers and shows how to use the model with both `Sentence Transformers` and `🤗 Transformers`. You can also try out the widget and use the Inference API straight away! If this was not exciting enough, your models will also be easily discoverable by [filtering for all](https://huggingface.co/models?filter=sentence-transformers) `Sentence Transformers` models. ## What's next? Moving forward, we want to make this integration even more useful. In our roadmap, we expect training and evaluation data to be included in the automatically created model card, like is the case in `transformers` from version `v4.8`. And what's next for you? We're very excited to see your contributions! If you already have a `Sentence Transformer` repo in the Hub, you can now enable the widget and Inference API by changing the model card metadata. ```yaml --- tags: - sentence-transformers - sentence-similarity # Or feature-extraction! --- ``` If you don't have any model in the Hub and want to learn more about Sentence Transformers, head to [www.SBERT.net](https://www.sbert.net)! ## Would you like to integrate your library to the Hub? This integration is possible thanks to the [`huggingface_hub`](https://github.com/huggingface/huggingface_hub) library which has all our widgets and the API for all our supported libraries. If you would like to integrate your library to the Hub, we have a [guide](https://huggingface.co/docs/hub/models-adding-libraries) for you! ## References 1. Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. [https://arxiv.org/abs/1908.10084](https://arxiv.org/abs/1908.10084)
huggingface/blog/blob/main/sentence-transformers-in-the-hub.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->

# VQModel

The VQ-VAE model was introduced in [Neural Discrete Representation Learning](https://huggingface.co/papers/1711.00937) by Aaron van den Oord, Oriol Vinyals and Koray Kavukcuoglu. The model is used in 🤗 Diffusers to decode latent representations into images. Unlike [`AutoencoderKL`], the [`VQModel`] works in a quantized latent space.

The abstract from the paper is:

*Learning useful representations without supervision remains a key challenge in machine learning. In this paper, we propose a simple yet powerful generative model that learns such discrete representations. Our model, the Vector Quantised-Variational AutoEncoder (VQ-VAE), differs from VAEs in two key ways: the encoder network outputs discrete, rather than continuous, codes; and the prior is learnt rather than static. In order to learn a discrete latent representation, we incorporate ideas from vector quantisation (VQ). Using the VQ method allows the model to circumvent issues of "posterior collapse" -- where the latents are ignored when they are paired with a powerful autoregressive decoder -- typically observed in the VAE framework. Pairing these representations with an autoregressive prior, the model can generate high quality images, videos, and speech as well as doing high quality speaker conversion and unsupervised learning of phonemes, providing further evidence of the utility of the learnt representations.*

## VQModel

[[autodoc]] VQModel

## VQEncoderOutput

[[autodoc]] models.vq_model.VQEncoderOutput
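As a hedged, minimal round-trip sketch: encode an image batch into the latent space and decode it back (codebook quantization is applied on the decoding path). The checkpoint and subfolder below are assumptions for illustration; any repository that stores a `VQModel` under a `vqvae` subfolder would work the same way.

```python
import torch
from diffusers import VQModel

# Illustrative checkpoint assumption: an LDM repo with a VQ-VAE component.
model = VQModel.from_pretrained("CompVis/ldm-celebahq-256", subfolder="vqvae")

images = torch.randn(1, 3, 256, 256)  # dummy batch standing in for real images

with torch.no_grad():
    latents = model.encode(images).latents          # continuous encoder output
    reconstruction = model.decode(latents).sample   # quantization happens inside decode

print(latents.shape, reconstruction.shape)
```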
huggingface/diffusers/blob/main/docs/source/en/api/models/vq.md
``python from transformers import AutoModelForSeq2SeqLM from peft import get_peft_config, get_peft_model, get_peft_model_state_dict, LoraConfig, TaskType import torch from datasets import load_dataset import os os.environ["TOKENIZERS_PARALLELISM"] = "false" from transformers import AutoTokenizer from torch.utils.data import DataLoader from transformers import default_data_collator, get_linear_schedule_with_warmup from tqdm import tqdm from datasets import load_dataset device = "cuda" model_name_or_path = "bigscience/mt0-large" tokenizer_name_or_path = "bigscience/mt0-large" checkpoint_name = "financial_sentiment_analysis_lora_v1.pt" text_column = "sentence" label_column = "text_label" max_length = 128 lr = 1e-3 num_epochs = 3 batch_size = 8 ``` ```python # creating model peft_config = LoraConfig(task_type=TaskType.SEQ_2_SEQ_LM, inference_mode=False, r=8, lora_alpha=32, lora_dropout=0.1) model = AutoModelForSeq2SeqLM.from_pretrained(model_name_or_path) model = get_peft_model(model, peft_config) model.print_trainable_parameters() model ``` ```python # loading dataset dataset = load_dataset("financial_phrasebank", "sentences_allagree") dataset = dataset["train"].train_test_split(test_size=0.1) dataset["validation"] = dataset["test"] del dataset["test"] classes = dataset["train"].features["label"].names dataset = dataset.map( lambda x: {"text_label": [classes[label] for label in x["label"]]}, batched=True, num_proc=1, ) dataset["train"][0] ``` ```python # data preprocessing tokenizer = AutoTokenizer.from_pretrained(model_name_or_path) def preprocess_function(examples): inputs = examples[text_column] targets = examples[label_column] model_inputs = tokenizer(inputs, max_length=max_length, padding="max_length", truncation=True, return_tensors="pt") labels = tokenizer(targets, max_length=3, padding="max_length", truncation=True, return_tensors="pt") labels = labels["input_ids"] labels[labels == tokenizer.pad_token_id] = -100 model_inputs["labels"] = labels return model_inputs processed_datasets = dataset.map( preprocess_function, batched=True, num_proc=1, remove_columns=dataset["train"].column_names, load_from_cache_file=False, desc="Running tokenizer on dataset", ) train_dataset = processed_datasets["train"] eval_dataset = processed_datasets["validation"] train_dataloader = DataLoader( train_dataset, shuffle=True, collate_fn=default_data_collator, batch_size=batch_size, pin_memory=True ) eval_dataloader = DataLoader(eval_dataset, collate_fn=default_data_collator, batch_size=batch_size, pin_memory=True) ``` ```python # optimizer and lr scheduler optimizer = torch.optim.AdamW(model.parameters(), lr=lr) lr_scheduler = get_linear_schedule_with_warmup( optimizer=optimizer, num_warmup_steps=0, num_training_steps=(len(train_dataloader) * num_epochs), ) ``` ```python # training and evaluation model = model.to(device) for epoch in range(num_epochs): model.train() total_loss = 0 for step, batch in enumerate(tqdm(train_dataloader)): batch = {k: v.to(device) for k, v in batch.items()} outputs = model(**batch) loss = outputs.loss total_loss += loss.detach().float() loss.backward() optimizer.step() lr_scheduler.step() optimizer.zero_grad() model.eval() eval_loss = 0 eval_preds = [] for step, batch in enumerate(tqdm(eval_dataloader)): batch = {k: v.to(device) for k, v in batch.items()} with torch.no_grad(): outputs = model(**batch) loss = outputs.loss eval_loss += loss.detach().float() eval_preds.extend( tokenizer.batch_decode(torch.argmax(outputs.logits, -1).detach().cpu().numpy(), skip_special_tokens=True) ) 
eval_epoch_loss = eval_loss / len(eval_dataloader) eval_ppl = torch.exp(eval_epoch_loss) train_epoch_loss = total_loss / len(train_dataloader) train_ppl = torch.exp(train_epoch_loss) print(f"{epoch=}: {train_ppl=} {train_epoch_loss=} {eval_ppl=} {eval_epoch_loss=}") ``` ```python # print accuracy correct = 0 total = 0 for pred, true in zip(eval_preds, dataset["validation"]["text_label"]): if pred.strip() == true.strip(): correct += 1 total += 1 accuracy = correct / total * 100 print(f"{accuracy=} % on the evaluation dataset") print(f"{eval_preds[:10]=}") print(f"{dataset['validation']['text_label'][:10]=}") ``` ```python # saving model peft_model_id = f"{model_name_or_path}_{peft_config.peft_type}_{peft_config.task_type}" model.save_pretrained(peft_model_id) ``` ```python ckpt = f"{peft_model_id}/adapter_model.bin" !du -h $ckpt ``` ```python from peft import PeftModel, PeftConfig peft_model_id = f"{model_name_or_path}_{peft_config.peft_type}_{peft_config.task_type}" config = PeftConfig.from_pretrained(peft_model_id) model = AutoModelForSeq2SeqLM.from_pretrained(config.base_model_name_or_path) model = PeftModel.from_pretrained(model, peft_model_id) ``` ```python model.eval() i = 13 inputs = tokenizer(dataset["validation"][text_column][i], return_tensors="pt") print(dataset["validation"][text_column][i]) print(inputs) with torch.no_grad(): outputs = model.generate(input_ids=inputs["input_ids"], max_new_tokens=10) print(outputs) print(tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)) ```
huggingface/peft/blob/main/examples/conditional_generation/peft_lora_seq2seq.ipynb
!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # ControlNet with Stable Diffusion XL ControlNet was introduced in [Adding Conditional Control to Text-to-Image Diffusion Models](https://huggingface.co/papers/2302.05543) by Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. With a ControlNet model, you can provide an additional control image to condition and control Stable Diffusion generation. For example, if you provide a depth map, the ControlNet model generates an image that'll preserve the spatial information from the depth map. It is a more flexible and accurate way to control the image generation process. The abstract from the paper is: *We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with "zero convolutions" (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, eg, edges, depth, segmentation, human pose, etc, with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models.* You can find additional smaller Stable Diffusion XL (SDXL) ControlNet checkpoints from the 🤗 [Diffusers](https://huggingface.co/diffusers) Hub organization, and browse [community-trained](https://huggingface.co/models?other=stable-diffusion-xl&other=controlnet) checkpoints on the Hub. <Tip warning={true}> 🧪 Many of the SDXL ControlNet checkpoints are experimental, and there is a lot of room for improvement. Feel free to open an [Issue](https://github.com/huggingface/diffusers/issues/new/choose) and leave us feedback on how we can improve! </Tip> If you don't see a checkpoint you're interested in, you can train your own SDXL ControlNet with our [training script](../../../../../examples/controlnet/README_sdxl). <Tip> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines. 
</Tip> ## StableDiffusionXLControlNetPipeline [[autodoc]] StableDiffusionXLControlNetPipeline - all - __call__ ## StableDiffusionXLControlNetImg2ImgPipeline [[autodoc]] StableDiffusionXLControlNetImg2ImgPipeline - all - __call__ ## StableDiffusionXLControlNetInpaintPipeline [[autodoc]] StableDiffusionXLControlNetInpaintPipeline - all - __call__ ## StableDiffusionPipelineOutput [[autodoc]] pipelines.stable_diffusion.StableDiffusionPipelineOutput
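As a rough usage sketch (the checkpoint names and input image URL are illustrative assumptions, not prescriptions from this page): condition SDXL generation on a Canny edge map so the output preserves the layout of the source image.

```python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image

# Turn a source image into a Canny edge map to use as the control image.
source = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png"
)
edges = cv2.Canny(np.array(source), 100, 200)
canny_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

controlnet = ControlNetModel.from_pretrained("diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# The edge map constrains the spatial layout; the prompt controls content and style.
image = pipe(
    "a watercolor portrait, soft lighting",
    image=canny_image,
    controlnet_conditioning_scale=0.5,
).images[0]
image.save("controlnet_sdxl_canny.png")
```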
huggingface/diffusers/blob/main/docs/source/en/api/pipelines/controlnet_sdxl.md
# Metric Card for *Current Metric*

***Metric Card Instructions:*** *Copy this file into the relevant metric folder, then fill it out and save it as README.md. Feel free to take a look at existing metric cards if you'd like examples.*

## Metric Description
*Give a brief overview of this metric.*

## How to Use
*Give general statement of how to use the metric*

*Provide simplest possible example for using the metric*

### Inputs
*List all input arguments in the format below*
- **input_field** *(type): Definition of input, with explanation if necessary. State any default value(s).*

### Output Values
*Explain what this metric outputs (e.g. a single score, a list of scores)*

*Give an example of what the metric output looks like.*

*State the range of possible values that the metric's output can take, as well as what in that range is considered good. For example: "This metric can take on any value between 0 and 100, inclusive. Higher scores are better."*

#### Values from Popular Papers
*Give examples, preferably with links, to papers that have reported this metric, along with the values they have reported.*

### Examples
*Give code examples of the metric being used. Try to include examples that clear up any potential ambiguity left from the metric description above. If possible, provide a range of examples that show both typical and atypical results, as well as examples where a variety of input parameters are passed.*

## Limitations and Bias
*Note any known limitations or biases that the metric has, with links and references if possible.*

## Citation
*Cite the source where this metric was introduced.*

## Further References
*Add any useful further references.*
huggingface/datasets/blob/main/templates/metric_card_template.md
# Running Background Tasks

Related spaces: https://huggingface.co/spaces/freddyaboulton/gradio-google-forms
Tags: TASKS, SCHEDULED, TABULAR, DATA

## Introduction

This guide explains how to run background tasks from your gradio app.
Background tasks are operations performed outside the request-response lifecycle of your app, either once or on a periodic schedule.
Examples of background tasks include periodically synchronizing data with an external database or sending a report of model predictions via email.

## Overview

We will be creating a simple "Google Forms"-style application to gather feedback from users of the gradio library.
We will use a local sqlite database to store the data, but we will periodically synchronize the state of the database with a [HuggingFace Dataset](https://huggingface.co/datasets) so that our user reviews are always backed up.
The synchronization will happen in a background task running every 60 seconds.

At the end of the demo, you will have a fully working application like this one:

<gradio-app space="freddyaboulton/gradio-google-forms"> </gradio-app>

## Step 1 - Write your database logic 💾

Our application will store the name of the reviewer, the rating they give gradio on a scale of 1 to 5, and any comments they want to share about the library. Let's write some code that creates a database table to store this data. We'll also write some functions to insert a review into that table and to fetch the latest 10 reviews.

We're going to use the `sqlite3` library to connect to our sqlite database, but gradio will work with any library.

The code looks like this:

```python
import sqlite3
import pandas as pd

DB_FILE = "./reviews.db"
db = sqlite3.connect(DB_FILE)

# Create table if it doesn't already exist
try:
    db.execute("SELECT * FROM reviews").fetchall()
    db.close()
except sqlite3.OperationalError:
    db.execute(
        '''
        CREATE TABLE reviews (id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
                              created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP NOT NULL,
                              name TEXT, review INTEGER, comments TEXT)
        ''')
    db.commit()
    db.close()


def get_latest_reviews(db: sqlite3.Connection):
    reviews = db.execute("SELECT * FROM reviews ORDER BY id DESC limit 10").fetchall()
    total_reviews = db.execute("Select COUNT(id) from reviews").fetchone()[0]
    reviews = pd.DataFrame(reviews, columns=["id", "date_created", "name", "review", "comments"])
    return reviews, total_reviews


def add_review(name: str, review: int, comments: str):
    db = sqlite3.connect(DB_FILE)
    cursor = db.cursor()
    cursor.execute("INSERT INTO reviews(name, review, comments) VALUES(?,?,?)", [name, review, comments])
    db.commit()
    reviews, total_reviews = get_latest_reviews(db)
    db.close()
    return reviews, total_reviews
```

Let's also write a function to load the latest reviews when the gradio application loads:

```python
def load_data():
    db = sqlite3.connect(DB_FILE)
    reviews, total_reviews = get_latest_reviews(db)
    db.close()
    return reviews, total_reviews
```

## Step 2 - Create a gradio app ⚡

Now that we have our database logic defined, we can use gradio to create a dynamic web page to ask our users for feedback!

Use the following code snippet:

```python
with gr.Blocks() as demo:
    with gr.Row():
        with gr.Column():
            name = gr.Textbox(label="Name", placeholder="What is your name?")
            review = gr.Radio(label="How satisfied are you with using gradio?", choices=[1, 2, 3, 4, 5])
            comments = gr.Textbox(label="Comments", lines=10, placeholder="Do you have any feedback on gradio?")
            submit = gr.Button(value="Submit Feedback")
        with gr.Column():
            data = gr.Dataframe(label="Most recently created 10 rows")
            count = gr.Number(label="Total number of reviews")
    submit.click(add_review, [name, review, comments], [data, count])
    demo.load(load_data, None, [data, count])
```

## Step 3 - Synchronize with HuggingFace Datasets 🤗

We could call `demo.launch()` after step 2 and have a fully functioning application. However, our data would be stored locally on our machine. If the sqlite file were accidentally deleted, we would lose all of our reviews! Let's back up our data to a dataset on the HuggingFace hub.

Create a dataset [here](https://huggingface.co/datasets) before proceeding.

Now at the **top** of our script, we will use the [huggingface hub client library](https://huggingface.co/docs/huggingface_hub/index) to connect to our dataset and pull the latest backup.

```python
TOKEN = os.environ.get('HUB_TOKEN')
repo = huggingface_hub.Repository(
    local_dir="data",
    repo_type="dataset",
    clone_from="<name-of-your-dataset>",
    use_auth_token=TOKEN
)
repo.git_pull()

shutil.copyfile("./data/reviews.db", DB_FILE)
```

Note that you will have to get an access token from the "Settings" tab of your HuggingFace account in order for the above code to work. In the script, the token is securely accessed via an environment variable.

![access_token](/assets/guides/access_token.png)

Now we will create a background task to synchronize our local database with the data in the dataset every 60 seconds.
We will use the [AdvancedPythonScheduler](https://apscheduler.readthedocs.io/en/3.x/) to handle the scheduling.
However, this is not the only task scheduling library available, so feel free to use whatever you are comfortable with.

The function to back up our data will look like this:

```python
from apscheduler.schedulers.background import BackgroundScheduler

def backup_db():
    shutil.copyfile(DB_FILE, "./data/reviews.db")
    db = sqlite3.connect(DB_FILE)
    reviews = db.execute("SELECT * FROM reviews").fetchall()
    pd.DataFrame(reviews).to_csv("./data/reviews.csv", index=False)
    print("updating db")
    repo.push_to_hub(blocking=False, commit_message=f"Updating data at {datetime.datetime.now()}")


scheduler = BackgroundScheduler()
scheduler.add_job(func=backup_db, trigger="interval", seconds=60)
scheduler.start()
```

## Step 4 (Bonus) - Deployment to HuggingFace Spaces

You can use the HuggingFace [Spaces](https://huggingface.co/spaces) platform to deploy this application for free ✨

If you haven't used Spaces before, check out the previous guide [here](/using_hugging_face_integrations).
You will need to set the `HUB_TOKEN` environment variable as a secret, as described in that guide.

## Conclusion

Congratulations! You now know how to run background tasks from your gradio app on a schedule ⏲️.

The application running on Spaces is available [here](https://huggingface.co/spaces/freddyaboulton/gradio-google-forms).
The complete code is [here](https://huggingface.co/spaces/freddyaboulton/gradio-google-forms/blob/main/app.py).
gradio-app/gradio/blob/main/guides/cn/07_other-tutorials/running-background-tasks.md
!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Swin2SR ## Overview The Swin2SR model was proposed in [Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration](https://arxiv.org/abs/2209.11345) by Marcos V. Conde, Ui-Jin Choi, Maxime Burchi, Radu Timofte. Swin2R improves the [SwinIR](https://github.com/JingyunLiang/SwinIR/) model by incorporating [Swin Transformer v2](swinv2) layers which mitigates issues such as training instability, resolution gaps between pre-training and fine-tuning, and hunger on data. The abstract from the paper is the following: *Compression plays an important role on the efficient transmission and storage of images and videos through band-limited systems such as streaming services, virtual reality or videogames. However, compression unavoidably leads to artifacts and the loss of the original information, which may severely degrade the visual quality. For these reasons, quality enhancement of compressed images has become a popular research topic. While most state-of-the-art image restoration methods are based on convolutional neural networks, other transformers-based methods such as SwinIR, show impressive performance on these tasks. In this paper, we explore the novel Swin Transformer V2, to improve SwinIR for image super-resolution, and in particular, the compressed input scenario. Using this method we can tackle the major issues in training transformer vision models, such as training instability, resolution gaps between pre-training and fine-tuning, and hunger on data. We conduct experiments on three representative tasks: JPEG compression artifacts removal, image super-resolution (classical and lightweight), and compressed image super-resolution. Experimental results demonstrate that our method, Swin2SR, can improve the training convergence and performance of SwinIR, and is a top-5 solution at the "AIM 2022 Challenge on Super-Resolution of Compressed Image and Video".* <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/swin2sr_architecture.png" alt="drawing" width="600"/> <small> Swin2SR architecture. Taken from the <a href="https://arxiv.org/abs/2209.11345">original paper.</a> </small> This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/mv-lab/swin2sr). ## Resources Demo notebooks for Swin2SR can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/Swin2SR). A demo Space for image super-resolution with SwinSR can be found [here](https://huggingface.co/spaces/jjourney1125/swin2sr). 
## Swin2SRImageProcessor [[autodoc]] Swin2SRImageProcessor - preprocess ## Swin2SRConfig [[autodoc]] Swin2SRConfig ## Swin2SRModel [[autodoc]] Swin2SRModel - forward ## Swin2SRForImageSuperResolution [[autodoc]] Swin2SRForImageSuperResolution - forward
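A short, hedged inference sketch is given below; the checkpoint name and example image URL are assumptions for illustration, and any Swin2SR checkpoint on the Hub should work the same way.

```python
import numpy as np
import requests
import torch
from PIL import Image
from transformers import Swin2SRForImageSuperResolution, Swin2SRImageProcessor

# Example low-resolution input (URL chosen for illustration).
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = Swin2SRImageProcessor()
model = Swin2SRForImageSuperResolution.from_pretrained("caidas/swin2SR-classical-sr-x2-64")  # assumed checkpoint

inputs = processor(image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# The upscaled image comes back as a (1, 3, H, W) tensor with values in [0, 1].
upscaled = outputs.reconstruction.squeeze().clamp(0, 1).numpy()
upscaled = np.moveaxis(upscaled, 0, -1)
Image.fromarray((upscaled * 255.0).round().astype(np.uint8)).save("upscaled.png")
```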
huggingface/transformers/blob/main/docs/source/en/model_doc/swin2sr.md
!--- Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # Language modelling examples This folder contains some scripts showing examples of *language model pre-training* with the 🤗 Transformers library. For straightforward use-cases you may be able to use these scripts without modification, although we have also included comments in the code to indicate areas that you may need to adapt to your own projects. The two scripts have almost identical arguments, but they differ in the type of LM they train - a causal language model (like GPT) or a masked language model (like BERT). Masked language models generally train more quickly and perform better when fine-tuned on new tasks with a task-specific output head, like text classification. However, their ability to generate text is weaker than causal language models. ## Pre-training versus fine-tuning These scripts can be used to both *pre-train* a language model completely from scratch, as well as to *fine-tune* a language model on text from your domain of interest. To start with an existing pre-trained language model you can use the `--model_name_or_path` argument, or to train from scratch you can use the `--model_type` argument to indicate the class of model architecture to initialize. ### Multi-GPU and TPU usage By default, these scripts use a `MirroredStrategy` and will use multiple GPUs effectively if they are available. TPUs can also be used by passing the name of the TPU resource with the `--tpu` argument. ## run_mlm.py This script trains a masked language model. ### Example command ``` python run_mlm.py \ --model_name_or_path distilbert-base-cased \ --output_dir output \ --dataset_name wikitext \ --dataset_config_name wikitext-103-raw-v1 ``` When using a custom dataset, the validation file can be separately passed as an input argument. Otherwise some split (customizable) of training data is used as validation. ``` python run_mlm.py \ --model_name_or_path distilbert-base-cased \ --output_dir output \ --train_file train_file_path ``` ## run_clm.py This script trains a causal language model. ### Example command ``` python run_clm.py \ --model_name_or_path distilgpt2 \ --output_dir output \ --dataset_name wikitext \ --dataset_config_name wikitext-103-raw-v1 ``` When using a custom dataset, the validation file can be separately passed as an input argument. Otherwise some split (customizable) of training data is used as validation. ``` python run_clm.py \ --model_name_or_path distilgpt2 \ --output_dir output \ --train_file train_file_path ```
huggingface/transformers/blob/main/examples/tensorflow/language-modeling/README.md
---
title: "Running IF with 🧨 diffusers on a Free Tier Google Colab"
thumbnail: /blog/assets/if/thumbnail.jpg
authors:
- user: shonenkov
  guest: true
- user: Gugutse
  guest: true
- user: ZeroShot-AI
  guest: true
- user: williamberman
- user: patrickvonplaten
- user: multimodalart
---

# Running IF with 🧨 diffusers on a Free Tier Google Colab

<a target="_blank" href="https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/deepfloyd_if_free_tier_google_colab.ipynb">
    <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>

**TL;DR**: We show how to run one of the most powerful open-source text-to-image models, **IF**, on a free-tier Google Colab with 🧨 diffusers.

You can also explore the capabilities of the model directly in the [Hugging Face Space](https://huggingface.co/spaces/DeepFloyd/IF).

<p align="center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/if/nabla.jpg" alt="if-collage"><br>
    <em>Image compressed from official <a href="https://github.com/deep-floyd/IF/blob/release/pics/nabla.jpg">IF GitHub repo</a>.</em>
</p>

## Introduction

IF is a pixel-based text-to-image generation model and was [released in late April 2023 by DeepFloyd](https://github.com/deep-floyd/IF). The model architecture is strongly inspired by [Google's closed-sourced Imagen](https://imagen.research.google/).

IF has two distinct advantages compared to existing text-to-image models like Stable Diffusion:
- The model operates directly in "pixel space" (*i.e.,* on uncompressed images) instead of running the denoising process in the latent space such as [Stable Diffusion](http://hf.co/blog/stable_diffusion).
- The model is trained on outputs of [T5-XXL](https://huggingface.co/google/t5-v1_1-xxl), a more powerful text encoder than [CLIP](https://openai.com/research/clip), used by Stable Diffusion as the text encoder.

As a result, IF is better at generating images with high-frequency details (*e.g.,* human faces and hands) and is the first open-source image generation model that can reliably generate images with text.

The downside of operating in pixel space and using a more powerful text encoder is that IF has a significantly higher amount of parameters. T5, IF's text-to-image UNet, and IF's upscaler UNet have 4.5B, 4.3B, and 1.2B parameters respectively. Compare this to [Stable Diffusion 2.1](https://huggingface.co/stabilityai/stable-diffusion-2-1)'s text encoder and UNet, which have just 400M and 900M parameters, respectively.

Nevertheless, it is possible to run IF on consumer hardware if one optimizes the model for low-memory usage. We will show you how you can do this with 🧨 diffusers in this blog post.

In 1.), we explain how to use IF for text-to-image generation, and in 2.) and 3.), we go over IF's image variation and image inpainting capabilities.

💡 **Note**: We are trading off speed for lower memory usage here to make it possible to run IF in a free-tier Google Colab. If you have access to high-end GPUs such as an A100, we recommend leaving all model components on GPU for maximum speed, as done in the [official IF demo](https://huggingface.co/spaces/DeepFloyd/IF).

💡 **Note**: Some of the larger images have been compressed to load faster in the blog format. When using the official model, they should be even better quality!

Let's dive in 🚀!
<p align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/if/meme.png"><br> <em>IF's text generation capabilities</em> </p> ## Table of contents * [Accepting the license](#accepting-the-license) * [Optimizing IF to run on memory constrained hardware](#optimizing-if-to-run-on-memory-constrained-hardware) * [Available resources](#available-resources) * [Install dependencies](#install-dependencies) * [Text-to-image generation](#1-text-to-image-generation) * [Image variation](#2-image-variation) * [Inpainting](#3-inpainting) ## Accepting the license Before you can use IF, you need to accept its usage conditions. To do so: - 1. Make sure to have a [Hugging Face account](https://huggingface.co/join) and be logged in - 2. Accept the license on the model card of [DeepFloyd/IF-I-XL-v1.0](https://huggingface.co/DeepFloyd/IF-I-XL-v1.0). Accepting the license on the stage I model card will auto accept for the other IF models. - 3. Make sure to login locally. Install `huggingface_hub` ```sh pip install huggingface_hub --upgrade ``` run the login function in a Python shell ```py from huggingface_hub import login login() ``` and enter your [Hugging Face Hub access token](https://huggingface.co/docs/hub/security-tokens#what-are-user-access-tokens). ## Optimizing IF to run on memory constrained hardware State-of-the-art ML should not just be in the hands of an elite few. Democratizing ML means making models available to run on more than just the latest and greatest hardware. The deep learning community has created world class tools to run resource intensive models on consumer hardware: - [🤗 accelerate](https://github.com/huggingface/accelerate) provides utilities for working with [large models](https://huggingface.co/docs/accelerate/usage_guides/big_modeling). - [bitsandbytes](https://github.com/TimDettmers/bitsandbytes) makes [8-bit quantization](https://github.com/TimDettmers/bitsandbytes#features) available to all PyTorch models. - [🤗 safetensors](https://github.com/huggingface/safetensors) not only ensures that save code is executed but also significantly speeds up the loading time of large models. Diffusers seamlessly integrates the above libraries to allow for a simple API when optimizing large models. The free-tier Google Colab is both CPU RAM constrained (13 GB RAM) as well as GPU VRAM constrained (15 GB RAM for T4), which makes running the whole >10B IF model challenging! Let\'s map out the size of IF\'s model components in full float32 precision: - [T5-XXL Text Encoder](https://huggingface.co/DeepFloyd/IF-I-XL-v1.0/tree/main/text_encoder): 20GB - [Stage 1 UNet](https://huggingface.co/DeepFloyd/IF-I-XL-v1.0/tree/main/unet): 17.2 GB - [Stage 2 Super Resolution UNet](https://huggingface.co/DeepFloyd/IF-II-L-v1.0/blob/main/pytorch_model.bin): 2.5 GB - [Stage 3 Super Resolution Model](https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler): 3.4 GB There is no way we can run the model in float32 as the T5 and Stage 1 UNet weights are each larger than the available CPU RAM. In float16, the component sizes are 11GB, 8.6GB, and 1.25GB for T5, Stage1 and Stage2 UNets, respectively, which is doable for the GPU, but we're still running into CPU memory overflow errors when loading the T5 (some CPU is occupied by other processes). 
Therefore, we lower the precision of T5 even more by using `bitsandbytes` 8bit quantization, which allows saving the T5 checkpoint with as little as [8 GB](https://huggingface.co/DeepFloyd/IF-I-XL-v1.0/blob/main/text_encoder/model.8bit.safetensors). Now that each component fits individually into both CPU and GPU memory, we need to make sure that components have all the CPU and GPU memory for themselves when needed. Diffusers supports modularly loading individual components i.e. we can load the text encoder without loading the UNet. This modular loading will ensure that we only load the component we need at a given step in the pipeline to avoid exhausting the available CPU RAM and GPU VRAM. Let\'s give it a try 🚀 ![t2i_64](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/if/t2i_64.png) ## Available resources The free-tier Google Colab comes with around 13 GB CPU RAM: ``` python !grep MemTotal /proc/meminfo ``` ```bash MemTotal: 13297192 kB ``` And an NVIDIA T4 with 15 GB VRAM: ``` python !nvidia-smi ``` ```bash Sun Apr 23 23:14:19 2023 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 525.85.12 Driver Version: 525.85.12 CUDA Version: 12.0 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 Tesla T4 Off | 00000000:00:04.0 Off | 0 | | N/A 72C P0 32W / 70W | 1335MiB / 15360MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=============================================================================| +-----------------------------------------------------------------------------+ ``` ## Install dependencies Some optimizations can require up-to-date versions of dependencies. If you are having issues, please double check and upgrade versions. ``` python ! pip install --upgrade \ diffusers~=0.16 \ transformers~=4.28 \ safetensors~=0.3 \ sentencepiece~=0.1 \ accelerate~=0.18 \ bitsandbytes~=0.38 \ torch~=2.0 -q ``` ## 1. Text-to-image generation We will walk step by step through text-to-image generation with IF using Diffusers. We will explain briefly APIs and optimizations, but more in-depth explanations can be found in the official documentation for [Diffusers](https://huggingface.co/docs/diffusers/index), [Transformers](https://huggingface.co/docs/transformers/index), [Accelerate](https://huggingface.co/docs/accelerate/index), and [bitsandbytes](https://github.com/TimDettmers/bitsandbytes). ### 1.1 Load text encoder We will load T5 using 8bit quantization. Transformers directly supports [bitsandbytes](https://huggingface.co/docs/transformers/main/en/main_classes/quantization#load-a-large-model-in-8bit) through the `load_in_8bit` flag. The flag `variant="8bit"` will download pre-quantized weights. We also use the `device_map` flag to allow `transformers` to offload model layers to the CPU or disk. Transformers big modeling supports arbitrary device maps, which can be used to separately load model parameters directly to available devices. Passing `"auto"` will automatically create a device map. 
See the `transformers` [docs](https://huggingface.co/docs/accelerate/usage_guides/big_modeling#designing-a-device-map) for more information. ``` python from transformers import T5EncoderModel text_encoder = T5EncoderModel.from_pretrained( "DeepFloyd/IF-I-XL-v1.0", subfolder="text_encoder", device_map="auto", load_in_8bit=True, variant="8bit" ) ``` ### 1.2 Create text embeddings The Diffusers API for accessing diffusion models is the `DiffusionPipeline` class and its subclasses. Each instance of `DiffusionPipeline` is a fully self contained set of methods and models for running diffusion networks. We can override the models it uses by passing alternative instances as keyword arguments to `from_pretrained`. In this case, we pass `None` for the `unet` argument, so no UNet will be loaded. This allows us to run the text embedding portion of the diffusion process without loading the UNet into memory. ``` python from diffusers import DiffusionPipeline pipe = DiffusionPipeline.from_pretrained( "DeepFloyd/IF-I-XL-v1.0", text_encoder=text_encoder, # pass the previously instantiated 8bit text encoder unet=None, device_map="auto" ) ``` IF also comes with a super resolution pipeline. We will save the prompt embeddings so we can later directly pass them to the super resolution pipeline. This will allow the super resolution pipeline to be loaded **without** a text encoder. Instead of [an astronaut just riding a horse](https://huggingface.co/blog/stable_diffusion), let\'s hand them a sign as well! Let\'s define a fitting prompt: ``` python prompt = "a photograph of an astronaut riding a horse holding a sign that says Pixel's in space" ``` and run it through the 8bit quantized T5 model: ``` python prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) ``` ### 1.3 Free memory Once the prompt embeddings have been created. We do not need the text encoder anymore. However, it is still in memory on the GPU. We need to remove it so that we can load the UNet. It's non-trivial to free PyTorch memory. We must garbage-collect the Python objects which point to the actual memory allocated on the GPU. First, use the Python keyword `del` to delete all Python objects referencing allocated GPU memory ``` python del text_encoder del pipe ``` Deleting the python object is not enough to free the GPU memory. Garbage collection is when the actual GPU memory is freed. Additionally, we will call `torch.cuda.empty_cache()`. This method isn\'t strictly necessary as the cached cuda memory will be immediately available for further allocations. Emptying the cache allows us to verify in the Colab UI that the memory is available. We\'ll use a helper function `flush()` to flush memory. ``` python import gc import torch def flush(): gc.collect() torch.cuda.empty_cache() ``` and run it ``` python flush() ``` ### 1.4 Stage 1: The main diffusion process With our now available GPU memory, we can re-load the `DiffusionPipeline` with only the UNet to run the main diffusion process. The `variant` and `torch_dtype` flags are used by Diffusers to download and load the weights in 16 bit floating point format. ``` python pipe = DiffusionPipeline.from_pretrained( "DeepFloyd/IF-I-XL-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16, device_map="auto" ) ``` Often, we directly pass the text prompt to `DiffusionPipeline.__call__`. However, we previously computed our text embeddings which we can pass instead. IF also comes with a super resolution diffusion process. 
Setting `output_type="pt"` will return raw PyTorch tensors instead of a PIL image. This way, we can keep the PyTorch tensors on GPU and pass them directly to the stage 2 super resolution pipeline.

Let's define a random generator and run the stage 1 diffusion process.

``` python
generator = torch.Generator().manual_seed(1)

image = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_embeds,
    output_type="pt",
    generator=generator,
).images
```

Let's manually convert the raw tensors to PIL and have a sneak peek at the final result. The output of stage 1 is a 64x64 image.

``` python
from diffusers.utils import pt_to_pil

pil_image = pt_to_pil(image)
pipe.watermarker.apply_watermark(pil_image, pipe.unet.config.sample_size)

pil_image[0]
```

![t2i_64](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/if/t2i_64.png)

And again, we remove the Python pointer and free CPU and GPU memory:

``` python
del pipe
flush()
```

### 1.5 Stage 2: Super Resolution 64x64 to 256x256

IF comes with a separate diffusion process for upscaling. We run each diffusion process with a separate pipeline.

The super resolution pipeline can be loaded with a text encoder if needed. However, we will usually have pre-computed text embeddings from the first IF pipeline. If so, load the pipeline without the text encoder.

Create the pipeline

``` python
pipe = DiffusionPipeline.from_pretrained(
    "DeepFloyd/IF-II-L-v1.0",
    text_encoder=None,  # no use of text encoder => memory savings!
    variant="fp16",
    torch_dtype=torch.float16,
    device_map="auto"
)
```

and run it, re-using the pre-computed text embeddings

``` python
image = pipe(
    image=image,
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_embeds,
    output_type="pt",
    generator=generator,
).images
```

Again we can inspect the intermediate results.

``` python
pil_image = pt_to_pil(image)
pipe.watermarker.apply_watermark(pil_image, pipe.unet.config.sample_size)

pil_image[0]
```

![t2i_upscaled](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/if/t2i_upscaled.png)

And again, we delete the Python pointer and free memory

``` python
del pipe
flush()
```

### 1.6 Stage 3: Super Resolution 256x256 to 1024x1024

The second super resolution model for IF is the previously released [Stability AI's x4 Upscaler](https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler).

Let's create the pipeline and load it directly on GPU with `device_map="auto"`.

``` python
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler",
    torch_dtype=torch.float16,
    device_map="auto"
)
```

🧨 diffusers makes independently developed diffusion models easily composable, as pipelines can be chained together. Here we can just take the previous PyTorch tensor output and pass it to the stage 3 pipeline as `image=image`.

💡 **Note**: The x4 Upscaler does not use T5 and has [its own text encoder](https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler/tree/main/text_encoder). Therefore, we cannot use the previously created prompt embeddings and instead must pass the original prompt.

``` python
pil_image = pipe(prompt, generator=generator, image=image).images
```

Unlike the IF pipelines, the IF watermark will not be added by default to outputs from the Stable Diffusion x4 upscaler pipeline.

We can instead manually apply the watermark.
``` python from diffusers.pipelines.deepfloyd_if import IFWatermarker watermarker = IFWatermarker.from_pretrained("DeepFloyd/IF-I-XL-v1.0", subfolder="watermarker") watermarker.apply_watermark(pil_image, pipe.unet.config.sample_size) ``` View output image ``` python pil_image[0] ``` ![t2i_upscaled_2](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/if/t2i_upscaled_2.png) Et voila! A beautiful 1024x1024 image in a free-tier Google Colab. We have shown how 🧨 diffusers makes it easy to decompose and modularly load resource-intensive diffusion models. 💡 **Note**: We don\'t recommend using the above setup in production. 8bit quantization, manual de-allocation of model weights, and disk offloading all trade off memory for time (i.e., inference speed). This can be especially noticable if the diffusion pipeline is re-used. In production, we recommend using a 40GB A100 with all model components left on the GPU. See [**the official IF demo**](https://huggingface.co/spaces/DeepFloyd/IF). ## 2. Image variation The same IF checkpoints can also be used for text guided image variation and inpainting. The core diffusion process is the same as text-to-image generation except the initial noised image is created from the image to be varied or inpainted. To run image variation, load the same checkpoints with `IFImg2ImgPipeline.from_pretrained()` and `IFImg2ImgSuperResolution.from_pretrained()`. The APIs for memory optimization are all the same! Let\'s free the memory from the previous section. ``` python del pipe flush() ``` For image variation, we start with an initial image that we want to adapt. For this section, we will adapt the famous \"Slaps Roof of Car\" meme. Let\'s download it from the internet. ``` python import requests url = "https://i.kym-cdn.com/entries/icons/original/000/026/561/car.jpg" response = requests.get(url) ``` and load it into a PIL Image ``` python from PIL import Image from io import BytesIO original_image = Image.open(BytesIO(response.content)).convert("RGB") original_image = original_image.resize((768, 512)) original_image ``` ![iv_sample](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/if/iv_sample.png) The image variation pipeline take both PIL images and raw tensors. View the docstrings for more indepth documentation on expected inputs, [here](https://huggingface.co/docs/diffusers/v0.16.0/en/api/pipelines/if#diffusers.IFImg2ImgPipeline.__call__). ### 2.1 Text Encoder Image variation is guided by text, so we can define a prompt and encode it with T5\'s Text Encoder. Again we load the text encoder into 8bit precision. ``` python from transformers import T5EncoderModel text_encoder = T5EncoderModel.from_pretrained( "DeepFloyd/IF-I-XL-v1.0", subfolder="text_encoder", device_map="auto", load_in_8bit=True, variant="8bit" ) ``` For image variation, we load the checkpoint with [`IFImg2ImgPipeline`](https://huggingface.co/docs/diffusers/v0.16.0/en/api/pipelines/if#diffusers.IFImg2ImgPipeline). When using `DiffusionPipeline.from_pretrained(...)`, checkpoints are loaded into their default pipeline. The default pipeline for the IF is the text-to-image [`IFPipeline`](https://huggingface.co/docs/diffusers/v0.16.0/en/api/pipelines/if#diffusers.IFPipeline). When loading checkpoints with a non-default pipeline, the pipeline must be explicitly specified. 
``` python from diffusers import IFImg2ImgPipeline pipe = IFImg2ImgPipeline.from_pretrained( "DeepFloyd/IF-I-XL-v1.0", text_encoder=text_encoder, unet=None, device_map="auto" ) ``` Let\'s turn our salesman into an anime character. ``` python prompt = "anime style" ``` As before, we create the text embeddings with T5 ``` python prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) ``` and free GPU and CPU memory. First, remove the Python pointers ``` python del text_encoder del pipe ``` and then free the memory ``` python flush() ``` ### 2.2 Stage 1: The main diffusion process Next, we only load the stage 1 UNet weights into the pipeline object, just like we did in the previous section. ``` python pipe = IFImg2ImgPipeline.from_pretrained( "DeepFloyd/IF-I-XL-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16, device_map="auto" ) ``` The image variation pipeline requires both the original image and the prompt embeddings. We can optionally use the `strength` argument to configure the amount of variation. `strength` directly controls the amount of noise added. Higher strength means more noise which means more variation. ``` python generator = torch.Generator().manual_seed(0) image = pipe( image=original_image, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, output_type="pt", generator=generator, ).images ``` Let\'s check the intermediate 64x64 again. ``` python pil_image = pt_to_pil(image) pipe.watermarker.apply_watermark(pil_image, pipe.unet.config.sample_size) pil_image[0] ``` ![iv_sample_1](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/if/iv_sample_1.png) Looks good! We can free the memory and upscale the image again. ``` python del pipe flush() ``` ### 2.3 Stage 2: Super Resolution For super resolution, load the checkpoint with `IFImg2ImgSuperResolutionPipeline` and the same checkpoint as before. ``` python from diffusers import IFImg2ImgSuperResolutionPipeline pipe = IFImg2ImgSuperResolutionPipeline.from_pretrained( "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16, device_map="auto" ) ``` 💡 **Note**: The image variation super resolution pipeline requires the generated image as well as the original image. You can also use the Stable Diffusion x4 upscaler on this image. Feel free to try it out using the code snippets in section 1.6. ``` python image = pipe( image=image, original_image=original_image, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, generator=generator, ).images[0] image ``` ![iv_sample_2](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/if/iv_sample_2.png) Nice! Let\'s free the memory and look at the final inpainting pipelines. ``` python del pipe flush() ``` ## 3. Inpainting The IF inpainting pipeline is the same as the image variation, except only a select area of the image is denoised. We specify the area to inpaint with an image mask. Let\'s show off IF\'s amazing \"letter generation\" capabilities. We can replace this sign text with different slogan. 
First let's download the image

``` python
import requests

url = "https://i.imgflip.com/5j6x75.jpg"
response = requests.get(url)
```

and turn it into a PIL Image

``` python
from PIL import Image
from io import BytesIO

original_image = Image.open(BytesIO(response.content)).convert("RGB")
original_image = original_image.resize((512, 768))
original_image
```

![inpainting_sample](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/if/inpainting_sample.png)

We will mask the sign so we can replace its text. For convenience, we have pre-generated the mask and loaded it into an HF dataset.

Let's download it.

``` python
from huggingface_hub import hf_hub_download

mask_image = hf_hub_download("diffusers/docs-images", repo_type="dataset", filename="if/sign_man_mask.png")
mask_image = Image.open(mask_image)

mask_image
```

![masking_sample](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/if/masking_sample.png)

💡 **Note**: You can create masks yourself by manually creating a greyscale image.

``` python
from PIL import Image
import numpy as np

height = 64
width = 64

# Use an unsigned 8-bit dtype so the mask can actually hold the value 255
example_mask = np.zeros((height, width), dtype=np.uint8)

# Set masked pixels to 255
example_mask[20:30, 30:40] = 255

# Make sure to create the image in mode 'L'
# meaning single channel grayscale
example_mask = Image.fromarray(example_mask, mode='L')
example_mask
```

![masking_by_hand](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/if/masking_by_hand.png)

Now we can start inpainting 🎨🖌

### 3.1 Text Encoder

Again, we load the text encoder first

``` python
from transformers import T5EncoderModel

text_encoder = T5EncoderModel.from_pretrained(
    "DeepFloyd/IF-I-XL-v1.0", subfolder="text_encoder", device_map="auto", load_in_8bit=True, variant="8bit"
)
```

This time, we initialize the `IFInpaintingPipeline` in-painting pipeline with the text encoder weights.

``` python
from diffusers import IFInpaintingPipeline

pipe = IFInpaintingPipeline.from_pretrained(
    "DeepFloyd/IF-I-XL-v1.0",
    text_encoder=text_encoder,
    unet=None,
    device_map="auto"
)
```

Alright, let's have the man advertise for more layers instead.

``` python
prompt = 'the text, "just stack more layers"'
```

Having defined the prompt, we can create the prompt embeddings

``` python
prompt_embeds, negative_embeds = pipe.encode_prompt(prompt)
```

Just like before, we free the memory

``` python
del text_encoder
del pipe

flush()
```

### 3.2 Stage 1: The main diffusion process

Just like before, we now load the stage 1 pipeline with only the UNet.

``` python
pipe = IFInpaintingPipeline.from_pretrained(
    "DeepFloyd/IF-I-XL-v1.0",
    text_encoder=None,
    variant="fp16",
    torch_dtype=torch.float16,
    device_map="auto"
)
```

Now, we need to pass the input image, the mask image, and the prompt embeddings.

``` python
image = pipe(
    image=original_image,
    mask_image=mask_image,
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_embeds,
    output_type="pt",
    generator=generator,
).images
```

Let's take a look at the intermediate output.

``` python
pil_image = pt_to_pil(image)
pipe.watermarker.apply_watermark(pil_image, pipe.unet.config.sample_size)

pil_image[0]
```

![inpainted_output](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/if/inpainted_output.png)

Looks good! The text is pretty consistent!
Let\'s free the memory so we can upscale the image ``` python del pipe flush() ``` ### 3.3 Stage 2: Super Resolution For super resolution, load the checkpoint with `IFInpaintingSuperResolutionPipeline`. ``` python from diffusers import IFInpaintingSuperResolutionPipeline pipe = IFInpaintingSuperResolutionPipeline.from_pretrained( "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16, device_map="auto" ) ``` The inpainting super resolution pipeline requires the generated image, the original image, the mask image, and the prompt embeddings. Let\'s do a final denoising run. ``` python image = pipe( image=image, original_image=original_image, mask_image=mask_image, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, generator=generator, ).images[0] image ``` ![inpainted_final_output](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/if/inpainted_final_output.png) Nice, the model generated text without making a single spelling error! ## Conclusion IF in 32-bit floating point precision uses 40 GB of weights in total. We showed how using only open source models and libraries, IF can be run on a free-tier Google Colab instance. The ML ecosystem benefits deeply from the sharing of open tools and open models. This notebook alone used models from DeepFloyd, StabilityAI, and [Google](https://huggingface.co/google). The libraries used \-- Diffusers, Transformers, Accelerate, and bitsandbytes \-- all benefit from countless contributors from different organizations. A massive thank you to the DeepFloyd team for the creation and open sourcing of IF, and for contributing to the democratization of good machine learning 🤗.
huggingface/blog/blob/main/if.md
Accelerated inference on NVIDIA GPUs By default, ONNX Runtime runs inference on CPU devices. However, it is possible to place supported operations on an NVIDIA GPU, while leaving any unsupported ones on CPU. In most cases, this allows costly operations to be placed on GPU and significantly accelerate inference. This guide will show you how to run inference on two execution providers that ONNX Runtime supports for NVIDIA GPUs: * `CUDAExecutionProvider`: Generic acceleration on NVIDIA CUDA-enabled GPUs. * `TensorrtExecutionProvider`: Uses NVIDIA’s [TensorRT](https://developer.nvidia.com/tensorrt) inference engine and generally provides the best runtime performance. <Tip warning={true}> Due to a limitation of ONNX Runtime, it is not possible to run quantized models on `CUDAExecutionProvider` and only models with static quantization can be run on `TensorrtExecutionProvider`. </Tip> ## CUDAExecutionProvider ### CUDA installation Provided the CUDA and cuDNN [requirements](https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements) are satisfied, install the additional dependencies by running ```bash pip install optimum[onnxruntime-gpu] ``` To avoid conflicts between `onnxruntime` and `onnxruntime-gpu`, make sure the package `onnxruntime` is not installed by running `pip uninstall onnxruntime` prior to installing Optimum. ### Checking the CUDA installation is successful Before going further, run the following sample code to check whether the install was successful: ```python >>> from optimum.onnxruntime import ORTModelForSequenceClassification >>> from transformers import AutoTokenizer >>> ort_model = ORTModelForSequenceClassification.from_pretrained( ... "philschmid/tiny-bert-sst2-distilled", ... export=True, ... provider="CUDAExecutionProvider", ... ) >>> tokenizer = AutoTokenizer.from_pretrained("philschmid/tiny-bert-sst2-distilled") >>> inputs = tokenizer("expectations were low, actual enjoyment was high", return_tensors="pt", padding=True) >>> outputs = ort_model(**inputs) >>> assert ort_model.providers == ["CUDAExecutionProvider", "CPUExecutionProvider"] ``` In case this code runs gracefully, congratulations, the installation is successful! If you encounter the following error or similar, ``` ValueError: Asked to use CUDAExecutionProvider as an ONNX Runtime execution provider, but the available execution providers are ['CPUExecutionProvider']. ``` then something is wrong with the CUDA or ONNX Runtime installation. ### Use CUDA execution provider with floating-point models For non-quantized models, the use is straightforward. Simply specify the `provider` argument in the `ORTModel.from_pretrained()` method. Here's an example: ```python >>> from optimum.onnxruntime import ORTModelForSequenceClassification >>> ort_model = ORTModelForSequenceClassification.from_pretrained( ... "distilbert-base-uncased-finetuned-sst-2-english", ... export=True, ... provider="CUDAExecutionProvider", ... ) ``` The model can then be used with the common 🤗 Transformers API for inference and evaluation, such as [pipelines](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/pipelines). 
When using Transformers pipeline, note that the `device` argument should be set to perform pre- and post-processing on GPU, following the example below: ```python >>> from optimum.pipelines import pipeline >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english") >>> pipe = pipeline(task="text-classification", model=ort_model, tokenizer=tokenizer, device="cuda:0") >>> result = pipe("Both the music and visual were astounding, not to mention the actors performance.") >>> print(result) # doctest: +IGNORE_RESULT # printing: [{'label': 'POSITIVE', 'score': 0.9997727274894714}] ``` Additionally, you can pass the session option `log_severity_level = 0` (verbose), to check whether all nodes are indeed placed on the CUDA execution provider or not: ```python >>> import onnxruntime >>> session_options = onnxruntime.SessionOptions() >>> session_options.log_severity_level = 0 >>> ort_model = ORTModelForSequenceClassification.from_pretrained( ... "distilbert-base-uncased-finetuned-sst-2-english", ... export=True, ... provider="CUDAExecutionProvider", ... session_options=session_options ... ) ``` You should see the following logs: ``` 2022-10-18 14:59:13.728886041 [V:onnxruntime:, session_state.cc:1193 VerifyEachN odeIsAssignedToAnEp] Provider: [CPUExecutionProvider]: [Gather (Gather_76), Uns queeze (Unsqueeze_78), Gather (Gather_97), Gather (Gather_100), Concat (Concat_1 10), Unsqueeze (Unsqueeze_125), ...] 2022-10-18 14:59:13.728906431 [V:onnxruntime:, session_state.cc:1193 VerifyEachN odeIsAssignedToAnEp] Provider: [CUDAExecutionProvider]: [Shape (Shape_74), Slic e (Slice_80), Gather (Gather_81), Gather (Gather_82), Add (Add_83), Shape (Shape _95), MatMul (MatMul_101), ...] ``` In this example, we can see that all the costly MatMul operations are placed on the CUDA execution provider. ### Use CUDA execution provider with quantized models Due to current limitations in ONNX Runtime, it is not possible to use quantized models with `CUDAExecutionProvider`. The reasons are as follows: * When using [🤗 Optimum dynamic quantization](quantization#dynamic-quantization-example), nodes as [`MatMulInteger`](https://github.com/onnx/onnx/blob/v1.12.0/docs/Operators.md#MatMulInteger), [`DynamicQuantizeLinear`](https://github.com/onnx/onnx/blob/v1.12.0/docs/Operators.md#DynamicQuantizeLinear) may be inserted in the ONNX graph, that cannot be consumed by the CUDA execution provider. * When using [static quantization](quantization#static-quantization-example), the ONNX computation graph will contain matrix multiplications and convolutions in floating-point arithmetic, along with Quantize + Dequantize operations to simulate quantization. In this case, although the costly matrix multiplications and convolutions will be run on the GPU, they will use floating-point arithmetic as the `CUDAExecutionProvider` can not consume the Quantize + Dequantize nodes to replace them by the operations using integer arithmetic. ### Reduce memory footprint with IOBinding [IOBinding](https://onnxruntime.ai/docs/api/python/api_summary.html#iobinding) is an efficient way to avoid expensive data copying when using GPUs. By default, ONNX Runtime will copy the input from the CPU (even if the tensors are already copied to the targeted device), and assume that outputs also need to be copied back to the CPU from GPUs after the run. 
These data copying overheads between the host and devices are expensive, and __can lead to worse inference latency than vanilla PyTorch__ especially for the decoding process. To avoid the slowdown, 🤗 Optimum adopts the IOBinding to copy inputs onto GPUs and pre-allocate memory for outputs prior the inference. When instanciating the `ORTModel`, set the value of the argument `use_io_binding` to choose whether to turn on the IOBinding during the inference. `use_io_binding` is set to `True` by default, if you choose CUDA as execution provider. And if you want to turn off IOBinding: ```python >>> from transformers import AutoTokenizer, pipeline >>> from optimum.onnxruntime import ORTModelForSeq2SeqLM # Load the model from the hub and export it to the ONNX format >>> model = ORTModelForSeq2SeqLM.from_pretrained("t5-small", export=True, use_io_binding=False) >>> tokenizer = AutoTokenizer.from_pretrained("t5-small") # Create a pipeline >>> onnx_translation = pipeline("translation_en_to_fr", model=model, tokenizer=tokenizer, device="cuda:0") ``` For the time being, IOBinding is supported for task-defined ORT models, if you want us to add support for custom models, file us an issue on the Optimum's repository. ### Observed time gains We tested three common models with a decoding process: `GPT2` / `T5-small` / `M2M100-418M`, and the benchmark was run on a versatile Tesla T4 GPU (more environment details at the end of this section). Here are some performance results running with `CUDAExecutionProvider` when IOBinding has been turned on. We have tested input sequence length from 8 to 512, and generated outputs both with greedy search and beam search (`num_beam=5`): <table><tr> <td> <p align="center"> <img alt="GPT2" src="https://huggingface.co/datasets/optimum/documentation-images/resolve/main/onnxruntime/t4_res_ort_gpt2.png" width="450"> <br> <em style="color: grey">GPT2</em> </p> </td> <td> <p align="center"> <img alt="T5-small" src="https://huggingface.co/datasets/optimum/documentation-images/resolve/main/onnxruntime/t4_res_ort_t5_s.png" width="450"> <br> <em style="color: grey">T5-small</em> </p> </td></tr> <tr><td> <p align="center"> <img alt="M2M100-418M" src="https://huggingface.co/datasets/optimum/documentation-images/resolve/main/onnxruntime/t4_res_ort_m2m100_418m.png" width="450"> <br> <em style="color: grey">M2M100-418M</em> </p> </td> </tr></table> And here is a summary for the saving time with different sequence lengths (32 / 128) and generation modes(greedy search / beam search) while using ONNX Runtime compared with PyTorch: <table><tr> <td> <p align="center"> <img alt="seq32" src="https://huggingface.co/datasets/optimum/documentation-images/resolve/main/onnxruntime/inference_models_32.png" width="800"> <br> <em style="color: grey">sequence length: 32</em> </p> </td></tr> <tr><td> <p align="center"> <img alt="seq128" src="https://huggingface.co/datasets/optimum/documentation-images/resolve/main/onnxruntime/inference_models_128.png" width="800"> <br> <em style="color: grey">sequence length: 128</em> </p> </td> </tr></table> Environment: ``` +-----------------------------------------------------------------------------+ | NVIDIA-SMI 440.33.01 Driver Version: 440.33.01 CUDA Version: 11.3 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. 
| |===============================+======================+======================| | 0 Tesla T4 On | 00000000:00:1E.0 Off | 0 | | N/A 28C P8 8W / 70W | 0MiB / 15109MiB | 0% Default | +-------------------------------+----------------------+----------------------+ - Platform: Linux-5.4.0-1089-aws-x86_64-with-glibc2.29 - Python version: 3.8.10 - `transformers` version: 4.24.0 - `optimum` version: 1.5.0 - PyTorch version: 1.12.0+cu113 ``` Note that previous experiments are run with __vanilla ONNX__ models exported directly from the exporter. If you are interested in __further acceleration__, with `ORTOptimizer` you can optimize the graph and convert your model to FP16 if you have a GPU with mixed precision capabilities. ## TensorrtExecutionProvider TensorRT uses its own set of optimizations, and **generally does not support the optimizations from [`~onnxruntime.ORTOptimizer`]**. We therefore recommend to use the original ONNX models when using TensorrtExecutionProvider ([reference](https://github.com/microsoft/onnxruntime/issues/10905#issuecomment-1072649358)). ### TensorRT installation The easiest way to use TensorRT as the execution provider for models optimized through 🤗 Optimum is with the available ONNX Runtime `TensorrtExecutionProvider`. In order to use 🤗 Optimum with TensorRT in a local environment, we recommend following the NVIDIA installation guides: * CUDA toolkit: https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html * cuDNN: https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html * TensorRT: https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html For TensorRT, we recommend the Tar File Installation method. Alternatively, TensorRT may be installable with `pip` by following [these instructions](https://github.com/microsoft/onnxruntime/issues/9986). Once the required packages are installed, the following environment variables need to be set with the appropriate paths for ONNX Runtime to detect TensorRT installation: ```bash export CUDA_PATH=/usr/local/cuda export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-x.x/lib64:/path/to/TensorRT-8.x.x/lib ``` ### Checking the TensorRT installation is successful Before going further, run the following sample code to check whether the install was successful: ```python >>> from optimum.onnxruntime import ORTModelForSequenceClassification >>> from transformers import AutoTokenizer >>> ort_model = ORTModelForSequenceClassification.from_pretrained( ... "philschmid/tiny-bert-sst2-distilled", ... export=True, ... provider="TensorrtExecutionProvider", ... ) >>> tokenizer = AutoTokenizer.from_pretrained("philschmid/tiny-bert-sst2-distilled") >>> inp = tokenizer("expectations were low, actual enjoyment was high", return_tensors="pt", padding=True) >>> result = ort_model(**inp) >>> assert ort_model.providers == ["TensorrtExecutionProvider", "CUDAExecutionProvider", "CPUExecutionProvider"] ``` In case this code runs gracefully, congratulations, the installation is successful! In case the above `assert` fails, or you encounter the following warning ``` Failed to create TensorrtExecutionProvider. Please reference https://onnxruntime.ai/docs/execution-providers/TensorRT-ExecutionProvider.html#requirements to ensure all dependencies are met. ``` something is wrong with the TensorRT or ONNX Runtime installation. 
### TensorRT engine build and warmup TensorRT requires to build its [inference engine](https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#build-phase) ahead of inference, which takes some time due to the model optimization and nodes fusion. To avoid rebuilding the engine every time the model is loaded, ONNX Runtime provides a pair of options to save the engine: `trt_engine_cache_enable` and `trt_engine_cache_path`. We recommend setting these two provider options when using the TensorRT execution provider. The usage is as follows, where [`optimum/gpt2`](https://huggingface.co/optimum/gpt2) is an ONNX model converted from PyTorch using the [Optimum ONNX exporter](https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model): ```python >>> from optimum.onnxruntime import ORTModelForCausalLM >>> provider_options = { ... "trt_engine_cache_enable": True, ... "trt_engine_cache_path": "tmp/trt_cache_gpt2_example" ... } # the TensorRT engine is not built here, it will be when doing inference >>> ort_model = ORTModelForCausalLM.from_pretrained( ... "optimum/gpt2", ... use_cache=False, ... provider="TensorrtExecutionProvider", ... provider_options=provider_options ... ) ``` TensorRT builds its engine depending on specified input shapes. Unfortunately, in the [current ONNX Runtime implementation](https://github.com/microsoft/onnxruntime/blob/613920d6c5f53a8e5e647c5f1dcdecb0a8beef31/onnxruntime/core/providers/tensorrt/tensorrt_execution_provider.cc#L1677-L1688) (references: [1](https://github.com/microsoft/onnxruntime/issues/13559), [2](https://github.com/microsoft/onnxruntime/issues/13851)), the engine is rebuilt every time an input has a shape smaller than the previously smallest encountered shape, and conversely if the input has a shape larger than the previously largest encountered shape. For example, if a model takes `(batch_size, input_ids)` as inputs, and the model takes successively the inputs: 1. `input.shape: (4, 5) --> the engine is built (first input)` 2. `input.shape: (4, 10) --> engine rebuilt (10 larger than 5)` 3. `input.shape: (4, 7) --> no rebuild (5 <= 7 <= 10)` 4. `input.shape: (4, 12) --> engine rebuilt (10 <= 12)` 5. `input.shape: (4, 3) --> engine rebuilt (3 <= 5)` One big issue is that building the engine can be time consuming, especially for large models. Therefore, as a workaround, one recommendation is to **first build the TensorRT engine with an input of small shape, and then with an input of large shape to have an engine valid for all shapes inbetween**. This allows to avoid rebuilding the engine for new small and large shapes, which is unwanted once the model is deployed for inference. Passing the engine cache path in the provider options, the engine can therefore be built once for all and used fully for inference thereafter. For example, for text generation, the engine can be built with: ```python >>> import os >>> from transformers import AutoTokenizer >>> from optimum.onnxruntime import ORTModelForCausalLM >>> os.makedirs("tmp/trt_cache_gpt2_example", exist_ok=True) >>> provider_options = { ... "trt_engine_cache_enable": True, ... "trt_engine_cache_path": "tmp/trt_cache_gpt2_example" ... } >>> ort_model = ORTModelForCausalLM.from_pretrained( ... "optimum/gpt2", ... use_cache=False, ... provider="TensorrtExecutionProvider", ... provider_options=provider_options, ... 
) >>> tokenizer = AutoTokenizer.from_pretrained("optimum/gpt2") >>> print("Building engine for a short sequence...") # doctest: +IGNORE_RESULT >>> text = ["short"] >>> encoded_input = tokenizer(text, return_tensors="pt").to("cuda") >>> output = ort_model(**encoded_input) >>> print("Building engine for a long sequence...") # doctest: +IGNORE_RESULT >>> text = [" a very long input just for demo purpose, this is very long" * 10] >>> encoded_input = tokenizer(text, return_tensors="pt").to("cuda") >>> output = ort_model(**encoded_input) ``` The engine is stored as: ![TensorRT engine cache folder](https://huggingface.co/datasets/optimum/documentation-images/resolve/main/onnxruntime/tensorrt_cache.png) Once the engine is built, the cache can be reloaded and generation does not need to rebuild the engine: ```python >>> from transformers import AutoTokenizer >>> from optimum.onnxruntime import ORTModelForCausalLM >>> provider_options = { ... "trt_engine_cache_enable": True, ... "trt_engine_cache_path": "tmp/trt_cache_gpt2_example" ... } >>> ort_model = ORTModelForCausalLM.from_pretrained( ... "optimum/gpt2", ... use_cache=False, ... provider="TensorrtExecutionProvider", ... provider_options=provider_options, ... ) >>> tokenizer = AutoTokenizer.from_pretrained("optimum/gpt2") >>> text = ["Replace me by any text you'd like."] >>> encoded_input = tokenizer(text, return_tensors="pt").to("cuda") >>> for i in range(3): ... output = ort_model.generate(**encoded_input) ... print(tokenizer.decode(output[0])) # doctest: +IGNORE_RESULT ``` #### Warmup Once the engine is built, it is recommended to do before inference **one or a few warmup steps**, as the first inference runs have [some overhead](https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#trtexec-flags). ### Use TensorRT execution provider with floating-point models For non-quantized models, the use is straightforward, by simply using the `provider` argument in `ORTModel.from_pretrained()`. For example: ```python >>> from optimum.onnxruntime import ORTModelForSequenceClassification >>> ort_model = ORTModelForSequenceClassification.from_pretrained( ... "distilbert-base-uncased-finetuned-sst-2-english", ... export=True, ... provider="TensorrtExecutionProvider", ... ) ``` [As previously for `CUDAExecutionProvider`](#use-cuda-execution-provider-with-floatingpoint-models), by passing the session option `log_severity_level = 0` (verbose), we can check in the logs whether all nodes are indeed placed on the TensorRT execution provider or not: ``` 2022-09-22 14:12:48.371513741 [V:onnxruntime:, session_state.cc:1188 VerifyEachNodeIsAssignedToAnEp] All nodes have been placed on [TensorrtExecutionProvider] ``` The model can then be used with the common 🤗 Transformers API for inference and evaluation, such as [pipelines](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/pipelines). ### Use TensorRT execution provider with quantized models When it comes to quantized models, TensorRT only supports models that use [**static** quantization](https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#enable_int8_c) with [**symmetric quantization** for weights and activations](https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#intro-quantization). 
🤗 Optimum provides a quantization config ready to be used with [`~onnxruntime.ORTQuantizer`] with the constraints of TensorRT quantization:

```python
>>> from optimum.onnxruntime import AutoQuantizationConfig

>>> qconfig = AutoQuantizationConfig.tensorrt(per_channel=False)
```

Using this `qconfig`, static quantization can be performed as explained in the [static quantization guide](quantization#static-quantization-example).

In the code sample below, after performing static quantization, the resulting model is loaded into the [`~onnxruntime.ORTModel`] class using TensorRT as the execution provider. ONNX Runtime graph optimization needs to be disabled for the model to be consumed and optimized by TensorRT, and the fact that INT8 operations are used needs to be specified to TensorRT.

```python
>>> import onnxruntime
>>> from transformers import AutoTokenizer
>>> from optimum.onnxruntime import ORTModelForSequenceClassification

>>> session_options = onnxruntime.SessionOptions()
>>> session_options.graph_optimization_level = onnxruntime.GraphOptimizationLevel.ORT_DISABLE_ALL

>>> tokenizer = AutoTokenizer.from_pretrained("fxmarty/distilbert-base-uncased-sst2-onnx-int8-for-tensorrt")
>>> ort_model = ORTModelForSequenceClassification.from_pretrained(
...     "fxmarty/distilbert-base-uncased-sst2-onnx-int8-for-tensorrt",
...     provider="TensorrtExecutionProvider",
...     session_options=session_options,
...     provider_options={"trt_int8_enable": True},
... )

>>> inp = tokenizer("TensorRT is a bit painful to use, but at the end of day it runs smoothly and blazingly fast!", return_tensors="np")

>>> res = ort_model(**inp)

>>> print(res)
>>> print(ort_model.config.id2label[res.logits[0].argmax()])
>>> # SequenceClassifierOutput(loss=None, logits=array([[-0.545066 , 0.5609764]], dtype=float32), hidden_states=None, attentions=None)
>>> # POSITIVE
```

The model can then be used with the common 🤗 Transformers API for inference and evaluation, such as [pipelines](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/pipelines).

### TensorRT limitations for quantized models

As highlighted in the previous section, TensorRT supports only a limited range of quantized models:
* Static quantization only
* Weights and activations quantization ranges are symmetric
* Weights need to be stored in float32 in the ONNX model, thus there is no storage space saving from quantization. TensorRT indeed requires inserting full Quantize + Dequantize pairs. Normally, weights would be stored in fixed point 8-bits format and only a `DequantizeLinear` would be applied on the weights.

In case `provider="TensorrtExecutionProvider"` is passed and the model has not been quantized strictly following these constraints, various errors may be raised, and the error messages can be unclear.

### Observed time gains

The Nvidia Nsight Systems tool can be used to profile the execution time on GPU. Before profiling or measuring latency/throughput, it is a good practice to do a few **warmup steps**.

Coming soon!
huggingface/optimum/blob/main/docs/source/onnxruntime/usage_guides/gpu.mdx
# Additional Readings [[additional-readings]]

These are **optional readings** if you want to go deeper.

## PPO Explained

- [Towards Delivering a Coherent Self-Contained Explanation of Proximal Policy Optimization by Daniel Bick](https://fse.studenttheses.ub.rug.nl/25709/1/mAI_2021_BickD.pdf)
- [What is the way to understand Proximal Policy Optimization Algorithm in RL?](https://stackoverflow.com/questions/46422845/what-is-the-way-to-understand-proximal-policy-optimization-algorithm-in-rl)
- [Foundations of Deep RL Series, L4 TRPO and PPO by Pieter Abbeel](https://youtu.be/KjWF8VIMGiY)
- [OpenAI PPO Blogpost](https://openai.com/blog/openai-baselines-ppo/)
- [Spinning Up RL PPO](https://spinningup.openai.com/en/latest/algorithms/ppo.html)
- [Paper Proximal Policy Optimization Algorithms](https://arxiv.org/abs/1707.06347)

## PPO Implementation details

- [The 37 Implementation Details of Proximal Policy Optimization](https://iclr-blog-track.github.io/2022/03/25/ppo-implementation-details/)
- [Part 1 of 3 — Proximal Policy Optimization Implementation: 11 Core Implementation Details](https://www.youtube.com/watch?v=MEt6rrxH8W4)

## Importance Sampling

- [Importance Sampling Explained](https://youtu.be/C3p2wI4RAi8)
huggingface/deep-rl-class/blob/main/units/en/unit8/additional-readings.mdx
-- title: "Hugging Face and IBM partner on watsonx.ai, the next-generation enterprise studio for AI builders" thumbnail: /blog/assets/144_ibm/01.png authors: - user: juliensimon --- # Hugging Face and IBM partner on watsonx.ai, the next-generation enterprise studio for AI builders <kbd> <img src="assets/144_ibm/01.png"> </kbd> All hype aside, it's hard to deny the profound impact that AI is having on society and businesses. From startups to enterprises to the public sector, every customer we talk to is busy experimenting with large language models and generative AI, identifying the most promising use cases, and gradually bringing them to production. The #1 comment we get from customers is that no single model will rule them all. They understand the value of building the best model for each use case to maximize its relevance on company data while optimizing the compute budget. Of course, privacy and intellectual property are also top concerns, and customers want to ensure they maintain complete control. As AI finds its way into every department and business unit, customers also realize the need to train and deploy many different models. In a large multinational organization, this could mean running hundreds, even thousands, of models at any time. Given the pace of AI innovation, newer and higher-performance model architectures will also lead customers to replace their models quicker than expected, reinforcing the need to train and deploy new models in production quickly and seamlessly. All of this will only happen with standardization and automation. Organizations can't afford to build models, tools, and infrastructure from scratch for new projects. Fortunately, the last few years have seen some very positive developments: 1. **Model standardization**: the [Transformer](https://arxiv.org/abs/1706.03762) architecture is now the de facto standard for Deep Learning applications like Natural Language Processing, Computer Vision, Audio, Speech, and more. It’s now easier to build tools and workflows that perform well across many use cases. 2. **Pre-trained models**: [hundreds of thousands](https://huggingface.co/models) of pre-trained models are just a click away. You can discover and test them directly on [Hugging Face](https://huggingface.co) and quickly shortlist the promising ones for your projects. 3. **Open-source libraries**: the Hugging Face [libraries](https://huggingface.co/docs) let you download pre-trained models with a single line of code, and you can start experimenting with your data in minutes. From training to deployment to hardware optimization, customers can rely on a consistent set of community-driven tools that work the same everywhere, from their laptops to their production environment. In addition, our cloud partnerships let customers use Hugging Face models and libraries at any scale without worrying about provisioning infrastructure and building technical environments. This makes it much easier to get high-quality models out the door at a rapid pace without having to reinvent the wheel. Following up on our collaboration with AWS on Amazon SageMaker and Microsoft on Azure Machine Learning, we're thrilled to work with none other than IBM on their new AI studio, [watsonx.ai](https://www.ibm.com/products/watsonx-ai). [watsonx.ai](http://watsonx.ai) is the next-generation enterprise studio for AI builders to train, validate, tune, and deploy both traditional ML and new generative AI capabilities, powered by foundation models. 
IBM decided that open source should be at the core of watsonx.ai. We couldn't agree more! Built on [RedHat OpenShift](https://www.redhat.com/en/technologies/cloud-computing/openshift), watsonx.ai will be available in the cloud and on-premise. This is excellent news for customers who cannot use the cloud because of strict compliance rules or are more comfortable working with their confidential data on their infrastructure. Until now, these customers often had to build their in-house ML platform. They now have an open-source off-the-shelf alternative deployed and managed using standard DevOps tools. Under the hood, watsonx.ai also integrates many Hugging Face open-source libraries, such as [transformers](https://github.com/huggingface/transformers) (100k+ GitHub stars!), [accelerate](https://github.com/huggingface/accelerate), [peft](https://github.com/huggingface/peft) and our [Text Generation Inference](https://github.com/huggingface/text-generation-inference) server, to name a few. We're happy to partner with IBM and to collaborate on the watsonx AI and data platform so that Hugging Face customers can work natively with their Hugging Face models and datasets to multiply the impact of AI across businesses. In addition, IBM has also developed its own collection of Large Language Models, and we will work with their team to open-source them and make them easily available in the Hugging Face Hub. To learn more, watch Dr. Darío Gil, SVP and Director of IBM Research, and our CEO Clem Delangue, [announce our collaboration](https://youtu.be/FrDnPTPgEmk?t=1077), [walk through the watsonx platform](https://youtu.be/FrDnPTPgEmk?t=283), and present IBM’s [suite of Large Language Models](https://youtu.be/FrDnPTPgEmk?t=586) in an IBM THINK 2023 keynote. Our joint team is hard at work at the moment. We can't wait to show you what we've been up to! The most iconic of technology companies joining forces with an up-and-coming startup to tackle AI in the Enterprise... who would have thought? Fascinating times. Stay tuned!
huggingface/blog/blob/main/huggingface-and-ibm.md
!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # MobileBERT ## Overview The MobileBERT model was proposed in [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. It's a bidirectional transformer based on the BERT model, which is compressed and accelerated using several approaches. The abstract from the paper is the following: *Natural Language Processing (NLP) has recently achieved great success by using huge pre-trained models with hundreds of millions of parameters. However, these models suffer from heavy model sizes and high latency such that they cannot be deployed to resource-limited mobile devices. In this paper, we propose MobileBERT for compressing and accelerating the popular BERT model. Like the original BERT, MobileBERT is task-agnostic, that is, it can be generically applied to various downstream NLP tasks via simple fine-tuning. Basically, MobileBERT is a thin version of BERT_LARGE, while equipped with bottleneck structures and a carefully designed balance between self-attentions and feed-forward networks. To train MobileBERT, we first train a specially designed teacher model, an inverted-bottleneck incorporated BERT_LARGE model. Then, we conduct knowledge transfer from this teacher to MobileBERT. Empirical studies show that MobileBERT is 4.3x smaller and 5.5x faster than BERT_BASE while achieving competitive results on well-known benchmarks. On the natural language inference tasks of GLUE, MobileBERT achieves a GLUEscore o 77.7 (0.6 lower than BERT_BASE), and 62 ms latency on a Pixel 4 phone. On the SQuAD v1.1/v2.0 question answering task, MobileBERT achieves a dev F1 score of 90.0/79.2 (1.5/2.1 higher than BERT_BASE).* This model was contributed by [vshampor](https://huggingface.co/vshampor). The original code can be found [here](https://github.com/google-research/google-research/tree/master/mobilebert). ## Usage tips - MobileBERT is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than the left. - MobileBERT is similar to BERT and therefore relies on the masked language modeling (MLM) objective. It is therefore efficient at predicting masked tokens and at NLU in general, but is not optimal for text generation. Models trained with a causal language modeling (CLM) objective are better in that regard. 
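Since MobileBERT relies on the MLM objective, a quick way to try a checkpoint is the `fill-mask` pipeline. The snippet below is a minimal sketch; it assumes the pretrained [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) checkpoint:

```python
from transformers import pipeline

# MobileBERT was pretrained with masked language modeling, so fill-mask is a
# natural sanity check for the checkpoint.
fill_mask = pipeline("fill-mask", model="google/mobilebert-uncased")
fill_mask("The capital of France is [MASK].")
# Returns the top candidate tokens for [MASK] with their scores.
```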
## Resources - [Text classification task guide](../tasks/sequence_classification) - [Token classification task guide](../tasks/token_classification) - [Question answering task guide](../tasks/question_answering) - [Masked language modeling task guide](../tasks/masked_language_modeling) - [Multiple choice task guide](../tasks/multiple_choice) ## MobileBertConfig [[autodoc]] MobileBertConfig ## MobileBertTokenizer [[autodoc]] MobileBertTokenizer ## MobileBertTokenizerFast [[autodoc]] MobileBertTokenizerFast ## MobileBert specific outputs [[autodoc]] models.mobilebert.modeling_mobilebert.MobileBertForPreTrainingOutput [[autodoc]] models.mobilebert.modeling_tf_mobilebert.TFMobileBertForPreTrainingOutput <frameworkcontent> <pt> ## MobileBertModel [[autodoc]] MobileBertModel - forward ## MobileBertForPreTraining [[autodoc]] MobileBertForPreTraining - forward ## MobileBertForMaskedLM [[autodoc]] MobileBertForMaskedLM - forward ## MobileBertForNextSentencePrediction [[autodoc]] MobileBertForNextSentencePrediction - forward ## MobileBertForSequenceClassification [[autodoc]] MobileBertForSequenceClassification - forward ## MobileBertForMultipleChoice [[autodoc]] MobileBertForMultipleChoice - forward ## MobileBertForTokenClassification [[autodoc]] MobileBertForTokenClassification - forward ## MobileBertForQuestionAnswering [[autodoc]] MobileBertForQuestionAnswering - forward </pt> <tf> ## TFMobileBertModel [[autodoc]] TFMobileBertModel - call ## TFMobileBertForPreTraining [[autodoc]] TFMobileBertForPreTraining - call ## TFMobileBertForMaskedLM [[autodoc]] TFMobileBertForMaskedLM - call ## TFMobileBertForNextSentencePrediction [[autodoc]] TFMobileBertForNextSentencePrediction - call ## TFMobileBertForSequenceClassification [[autodoc]] TFMobileBertForSequenceClassification - call ## TFMobileBertForMultipleChoice [[autodoc]] TFMobileBertForMultipleChoice - call ## TFMobileBertForTokenClassification [[autodoc]] TFMobileBertForTokenClassification - call ## TFMobileBertForQuestionAnswering [[autodoc]] TFMobileBertForQuestionAnswering - call </tf> </frameworkcontent>
huggingface/transformers/blob/main/docs/source/en/model_doc/mobilebert.md
Hands-on <CourseFloatingBanner classNames="absolute z-10 right-0 top-0" notebooks={[ {label: "Google Colab", value: "https://colab.research.google.com/github/huggingface/deep-rl-class/blob/main/notebooks/unit8/unit8_part1.ipynb"} ]} askForHelpUrl="http://hf.co/join/discord" /> Now that we studied the theory behind PPO, the best way to understand how it works **is to implement it from scratch.** Implementing an architecture from scratch is the best way to understand it, and it's a good habit. We have already done it for a value-based method with Q-Learning and a Policy-based method with Reinforce. So, to be able to code it, we're going to use two resources: - A tutorial made by [Costa Huang](https://github.com/vwxyzjn). Costa is behind [CleanRL](https://github.com/vwxyzjn/cleanrl), a Deep Reinforcement Learning library that provides high-quality single-file implementation with research-friendly features. - In addition to the tutorial, to go deeper, you can read the 13 core implementation details: [https://iclr-blog-track.github.io/2022/03/25/ppo-implementation-details/](https://iclr-blog-track.github.io/2022/03/25/ppo-implementation-details/) Then, to test its robustness, we're going to train it in: - [LunarLander-v2](https://www.gymlibrary.ml/environments/box2d/lunar_lander/) <figure class="image table text-center m-0 w-full"> <video alt="LunarLander" style="max-width: 70%; margin: auto;" autoplay loop autobuffer muted playsinline > <source src="assets/63_deep_rl_intro/lunarlander.mp4" type="video/mp4"> </video> </figure> And finally, we will push the trained model to the Hub to evaluate and visualize your agent playing. LunarLander-v2 is the first environment you used when you started this course. At that time, you didn't know how it worked, and now you can code it from scratch and train it. **How incredible is that 🤩.** <iframe src="https://giphy.com/embed/pynZagVcYxVUk" width="480" height="480" frameBorder="0" class="giphy-embed" allowFullScreen></iframe><p><a href="https://giphy.com/gifs/the-office-michael-heartbreak-pynZagVcYxVUk">via GIPHY</a></p> Let's get started! 🚀 The colab notebook: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/deep-rl-class/blob/master/notebooks/unit8/unit8_part1.ipynb) # Unit 8: Proximal Policy Gradient (PPO) with PyTorch 🤖 <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit9/thumbnail.png" alt="Unit 8"/> In this notebook, you'll learn to **code your PPO agent from scratch with PyTorch using CleanRL implementation as model**. To test its robustness, we're going to train it in: - [LunarLander-v2 🚀](https://www.gymlibrary.dev/environments/box2d/lunar_lander/) We're constantly trying to improve our tutorials, so **if you find some issues in this notebook**, please [open an issue on the GitHub Repo](https://github.com/huggingface/deep-rl-class/issues). ## Objectives of this notebook 🏆 At the end of the notebook, you will: - Be able to **code your PPO agent from scratch using PyTorch**. - Be able to **push your trained agent and the code to the Hub** with a nice video replay and an evaluation score 🔥. 
## Prerequisites 🏗️

Before diving into the notebook, you need to:

🔲 📚 Study [PPO by reading Unit 8](https://huggingface.co/deep-rl-course/unit8/introduction) 🤗

To validate this hands-on for the [certification process](https://huggingface.co/deep-rl-course/en/unit0/introduction#certification-process), you need to push one model. We don't ask for a minimal result, but we **advise you to try different hyperparameter settings to get better results**.

If you don't find your model, **go to the bottom of the page and click on the refresh button**.

For more information about the certification process, check this section 👉 https://huggingface.co/deep-rl-course/en/unit0/introduction#certification-process

## Set the GPU 💪

- To **accelerate the agent's training, we'll use a GPU**. To do that, go to `Runtime > Change Runtime type`

<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/notebooks/gpu-step1.jpg" alt="GPU Step 1">

- `Hardware Accelerator > GPU`

<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/notebooks/gpu-step2.jpg" alt="GPU Step 2">

## Create a virtual display 🔽

During the notebook, we'll need to generate a replay video. To do so, with Colab, **we need a virtual screen to be able to render the environment** (and thus record the frames).

Hence the following cells will install the libraries and create and run a virtual screen 🖥

```python
apt install python-opengl
apt install ffmpeg
apt install xvfb
pip install pyglet==1.5
pip install pyvirtualdisplay
```

```python
# Virtual display
from pyvirtualdisplay import Display

virtual_display = Display(visible=0, size=(1400, 900))
virtual_display.start()
```

## Install dependencies 🔽

For this exercise, we use `gym==0.22` because the replay video is recorded with Gym.

```python
pip install gym==0.22
pip install imageio-ffmpeg
pip install huggingface_hub
pip install gym[box2d]==0.22
```

## Let's code PPO from scratch with Costa Huang's tutorial

- For the core implementation of PPO we're going to use the excellent [Costa Huang](https://costa.sh/) tutorial.
- In addition to the tutorial, to go deeper you can read the 37 core implementation details: https://iclr-blog-track.github.io/2022/03/25/ppo-implementation-details/

👉 The video tutorial: https://youtu.be/MEt6rrxH8W4

```python
from IPython.display import HTML

HTML(
    '<iframe width="560" height="315" src="https://www.youtube.com/embed/MEt6rrxH8W4" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>'
)
```

## Add the Hugging Face Integration 🤗

- In order to push our model to the Hub, we need to define a function `package_to_hub`
- Add the dependencies we need to push our model to the Hub

```python
from huggingface_hub import HfApi, upload_folder
from huggingface_hub.repocard import metadata_eval_result, metadata_save

from pathlib import Path
import datetime
import tempfile
import json
import shutil
import imageio

from wasabi import Printer

msg = Printer()
```

- Add a new argument in the `parse_args()` function to define the repo id we want to push the model to.

```python
# Adding HuggingFace argument
parser.add_argument(
    "--repo-id",
    type=str,
    default="ThomasSimonini/ppo-CartPole-v1",
    help="id of the model repository from the Hugging Face Hub {username/repo_name}",
)
```

- Next, we add the methods needed to push the model to the Hub
- These methods will:
- `_evaluate_agent()`: evaluate the agent.
- `_generate_model_card()`: generate the model card of your agent. - `_record_video()`: record a video of your agent. ```python def package_to_hub( repo_id, model, hyperparameters, eval_env, video_fps=30, commit_message="Push agent to the Hub", token=None, logs=None, ): """ Evaluate, Generate a video and Upload a model to Hugging Face Hub. This method does the complete pipeline: - It evaluates the model - It generates the model card - It generates a replay video of the agent - It pushes everything to the hub :param repo_id: id of the model repository from the Hugging Face Hub :param model: trained model :param eval_env: environment used to evaluate the agent :param fps: number of fps for rendering the video :param commit_message: commit message :param logs: directory on local machine of tensorboard logs you'd like to upload """ msg.info( "This function will save, evaluate, generate a video of your agent, " "create a model card and push everything to the hub. " "It might take up to 1min. \n " "This is a work in progress: if you encounter a bug, please open an issue." ) # Step 1: Clone or create the repo repo_url = HfApi().create_repo( repo_id=repo_id, token=token, private=False, exist_ok=True, ) with tempfile.TemporaryDirectory() as tmpdirname: tmpdirname = Path(tmpdirname) # Step 2: Save the model torch.save(model.state_dict(), tmpdirname / "model.pt") # Step 3: Evaluate the model and build JSON mean_reward, std_reward = _evaluate_agent(eval_env, 10, model) # First get datetime eval_datetime = datetime.datetime.now() eval_form_datetime = eval_datetime.isoformat() evaluate_data = { "env_id": hyperparameters.env_id, "mean_reward": mean_reward, "std_reward": std_reward, "n_evaluation_episodes": 10, "eval_datetime": eval_form_datetime, } # Write a JSON file with open(tmpdirname / "results.json", "w") as outfile: json.dump(evaluate_data, outfile) # Step 4: Generate a video video_path = tmpdirname / "replay.mp4" record_video(eval_env, model, video_path, video_fps) # Step 5: Generate the model card generated_model_card, metadata = _generate_model_card( "PPO", hyperparameters.env_id, mean_reward, std_reward, hyperparameters ) _save_model_card(tmpdirname, generated_model_card, metadata) # Step 6: Add logs if needed if logs: _add_logdir(tmpdirname, Path(logs)) msg.info(f"Pushing repo {repo_id} to the Hugging Face Hub") repo_url = upload_folder( repo_id=repo_id, folder_path=tmpdirname, path_in_repo="", commit_message=commit_message, token=token, ) msg.info(f"Your model is pushed to the Hub. You can view your model here: {repo_url}") return repo_url def _evaluate_agent(env, n_eval_episodes, policy): """ Evaluate the agent for ``n_eval_episodes`` episodes and returns average reward and std of reward. 
:param env: The evaluation environment :param n_eval_episodes: Number of episode to evaluate the agent :param policy: The agent """ episode_rewards = [] for episode in range(n_eval_episodes): state = env.reset() step = 0 done = False total_rewards_ep = 0 while done is False: state = torch.Tensor(state).to(device) action, _, _, _ = policy.get_action_and_value(state) new_state, reward, done, info = env.step(action.cpu().numpy()) total_rewards_ep += reward if done: break state = new_state episode_rewards.append(total_rewards_ep) mean_reward = np.mean(episode_rewards) std_reward = np.std(episode_rewards) return mean_reward, std_reward def record_video(env, policy, out_directory, fps=30): images = [] done = False state = env.reset() img = env.render(mode="rgb_array") images.append(img) while not done: state = torch.Tensor(state).to(device) # Take the action (index) that have the maximum expected future reward given that state action, _, _, _ = policy.get_action_and_value(state) state, reward, done, info = env.step( action.cpu().numpy() ) # We directly put next_state = state for recording logic img = env.render(mode="rgb_array") images.append(img) imageio.mimsave(out_directory, [np.array(img) for i, img in enumerate(images)], fps=fps) def _generate_model_card(model_name, env_id, mean_reward, std_reward, hyperparameters): """ Generate the model card for the Hub :param model_name: name of the model :env_id: name of the environment :mean_reward: mean reward of the agent :std_reward: standard deviation of the mean reward of the agent :hyperparameters: training arguments """ # Step 1: Select the tags metadata = generate_metadata(model_name, env_id, mean_reward, std_reward) # Transform the hyperparams namespace to string converted_dict = vars(hyperparameters) converted_str = str(converted_dict) converted_str = converted_str.split(", ") converted_str = "\n".join(converted_str) # Step 2: Generate the model card model_card = f""" # PPO Agent Playing {env_id} This is a trained model of a PPO agent playing {env_id}. # Hyperparameters """ return model_card, metadata def generate_metadata(model_name, env_id, mean_reward, std_reward): """ Define the tags for the model card :param model_name: name of the model :param env_id: name of the environment :mean_reward: mean reward of the agent :std_reward: standard deviation of the mean reward of the agent """ metadata = {} metadata["tags"] = [ env_id, "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", ] # Add metrics eval = metadata_eval_result( model_pretty_name=model_name, task_pretty_name="reinforcement-learning", task_id="reinforcement-learning", metrics_pretty_name="mean_reward", metrics_id="mean_reward", metrics_value=f"{mean_reward:.2f} +/- {std_reward:.2f}", dataset_pretty_name=env_id, dataset_id=env_id, ) # Merges both dictionaries metadata = {**metadata, **eval} return metadata def _save_model_card(local_path, generated_model_card, metadata): """Saves a model card for the repository. 
:param local_path: repository directory :param generated_model_card: model card generated by _generate_model_card() :param metadata: metadata """ readme_path = local_path / "README.md" readme = "" if readme_path.exists(): with readme_path.open("r", encoding="utf8") as f: readme = f.read() else: readme = generated_model_card with readme_path.open("w", encoding="utf-8") as f: f.write(readme) # Save our metrics to Readme metadata metadata_save(readme_path, metadata) def _add_logdir(local_path: Path, logdir: Path): """Adds a logdir to the repository. :param local_path: repository directory :param logdir: logdir directory """ if logdir.exists() and logdir.is_dir(): # Add the logdir to the repository under new dir called logs repo_logdir = local_path / "logs" # Delete current logs if they exist if repo_logdir.exists(): shutil.rmtree(repo_logdir) # Copy logdir into repo logdir shutil.copytree(logdir, repo_logdir) ``` - Finally, we call this function at the end of the PPO training ```python # Create the evaluation environment eval_env = gym.make(args.env_id) package_to_hub( repo_id=args.repo_id, model=agent, # The model we want to save hyperparameters=args, eval_env=gym.make(args.env_id), logs=f"runs/{run_name}", ) ``` - Here's what the final ppo.py file looks like: ```python # docs and experiment results can be found at https://docs.cleanrl.dev/rl-algorithms/ppo/#ppopy import argparse import os import random import time from distutils.util import strtobool import gym import numpy as np import torch import torch.nn as nn import torch.optim as optim from torch.distributions.categorical import Categorical from torch.utils.tensorboard import SummaryWriter from huggingface_hub import HfApi, upload_folder from huggingface_hub.repocard import metadata_eval_result, metadata_save from pathlib import Path import datetime import tempfile import json import shutil import imageio from wasabi import Printer msg = Printer() def parse_args(): # fmt: off parser = argparse.ArgumentParser() parser.add_argument("--exp-name", type=str, default=os.path.basename(__file__).rstrip(".py"), help="the name of this experiment") parser.add_argument("--seed", type=int, default=1, help="seed of the experiment") parser.add_argument("--torch-deterministic", type=lambda x: bool(strtobool(x)), default=True, nargs="?", const=True, help="if toggled, `torch.backends.cudnn.deterministic=False`") parser.add_argument("--cuda", type=lambda x: bool(strtobool(x)), default=True, nargs="?", const=True, help="if toggled, cuda will be enabled by default") parser.add_argument("--track", type=lambda x: bool(strtobool(x)), default=False, nargs="?", const=True, help="if toggled, this experiment will be tracked with Weights and Biases") parser.add_argument("--wandb-project-name", type=str, default="cleanRL", help="the wandb's project name") parser.add_argument("--wandb-entity", type=str, default=None, help="the entity (team) of wandb's project") parser.add_argument("--capture-video", type=lambda x: bool(strtobool(x)), default=False, nargs="?", const=True, help="weather to capture videos of the agent performances (check out `videos` folder)") # Algorithm specific arguments parser.add_argument("--env-id", type=str, default="CartPole-v1", help="the id of the environment") parser.add_argument("--total-timesteps", type=int, default=50000, help="total timesteps of the experiments") parser.add_argument("--learning-rate", type=float, default=2.5e-4, help="the learning rate of the optimizer") parser.add_argument("--num-envs", type=int, default=4, help="the 
number of parallel game environments") parser.add_argument("--num-steps", type=int, default=128, help="the number of steps to run in each environment per policy rollout") parser.add_argument("--anneal-lr", type=lambda x: bool(strtobool(x)), default=True, nargs="?", const=True, help="Toggle learning rate annealing for policy and value networks") parser.add_argument("--gae", type=lambda x: bool(strtobool(x)), default=True, nargs="?", const=True, help="Use GAE for advantage computation") parser.add_argument("--gamma", type=float, default=0.99, help="the discount factor gamma") parser.add_argument("--gae-lambda", type=float, default=0.95, help="the lambda for the general advantage estimation") parser.add_argument("--num-minibatches", type=int, default=4, help="the number of mini-batches") parser.add_argument("--update-epochs", type=int, default=4, help="the K epochs to update the policy") parser.add_argument("--norm-adv", type=lambda x: bool(strtobool(x)), default=True, nargs="?", const=True, help="Toggles advantages normalization") parser.add_argument("--clip-coef", type=float, default=0.2, help="the surrogate clipping coefficient") parser.add_argument("--clip-vloss", type=lambda x: bool(strtobool(x)), default=True, nargs="?", const=True, help="Toggles whether or not to use a clipped loss for the value function, as per the paper.") parser.add_argument("--ent-coef", type=float, default=0.01, help="coefficient of the entropy") parser.add_argument("--vf-coef", type=float, default=0.5, help="coefficient of the value function") parser.add_argument("--max-grad-norm", type=float, default=0.5, help="the maximum norm for the gradient clipping") parser.add_argument("--target-kl", type=float, default=None, help="the target KL divergence threshold") # Adding HuggingFace argument parser.add_argument("--repo-id", type=str, default="ThomasSimonini/ppo-CartPole-v1", help="id of the model repository from the Hugging Face Hub {username/repo_name}") args = parser.parse_args() args.batch_size = int(args.num_envs * args.num_steps) args.minibatch_size = int(args.batch_size // args.num_minibatches) # fmt: on return args def package_to_hub( repo_id, model, hyperparameters, eval_env, video_fps=30, commit_message="Push agent to the Hub", token=None, logs=None, ): """ Evaluate, Generate a video and Upload a model to Hugging Face Hub. This method does the complete pipeline: - It evaluates the model - It generates the model card - It generates a replay video of the agent - It pushes everything to the hub :param repo_id: id of the model repository from the Hugging Face Hub :param model: trained model :param eval_env: environment used to evaluate the agent :param fps: number of fps for rendering the video :param commit_message: commit message :param logs: directory on local machine of tensorboard logs you'd like to upload """ msg.info( "This function will save, evaluate, generate a video of your agent, " "create a model card and push everything to the hub. " "It might take up to 1min. \n " "This is a work in progress: if you encounter a bug, please open an issue." 
) # Step 1: Clone or create the repo repo_url = HfApi().create_repo( repo_id=repo_id, token=token, private=False, exist_ok=True, ) with tempfile.TemporaryDirectory() as tmpdirname: tmpdirname = Path(tmpdirname) # Step 2: Save the model torch.save(model.state_dict(), tmpdirname / "model.pt") # Step 3: Evaluate the model and build JSON mean_reward, std_reward = _evaluate_agent(eval_env, 10, model) # First get datetime eval_datetime = datetime.datetime.now() eval_form_datetime = eval_datetime.isoformat() evaluate_data = { "env_id": hyperparameters.env_id, "mean_reward": mean_reward, "std_reward": std_reward, "n_evaluation_episodes": 10, "eval_datetime": eval_form_datetime, } # Write a JSON file with open(tmpdirname / "results.json", "w") as outfile: json.dump(evaluate_data, outfile) # Step 4: Generate a video video_path = tmpdirname / "replay.mp4" record_video(eval_env, model, video_path, video_fps) # Step 5: Generate the model card generated_model_card, metadata = _generate_model_card( "PPO", hyperparameters.env_id, mean_reward, std_reward, hyperparameters ) _save_model_card(tmpdirname, generated_model_card, metadata) # Step 6: Add logs if needed if logs: _add_logdir(tmpdirname, Path(logs)) msg.info(f"Pushing repo {repo_id} to the Hugging Face Hub") repo_url = upload_folder( repo_id=repo_id, folder_path=tmpdirname, path_in_repo="", commit_message=commit_message, token=token, ) msg.info(f"Your model is pushed to the Hub. You can view your model here: {repo_url}") return repo_url def _evaluate_agent(env, n_eval_episodes, policy): """ Evaluate the agent for ``n_eval_episodes`` episodes and returns average reward and std of reward. :param env: The evaluation environment :param n_eval_episodes: Number of episode to evaluate the agent :param policy: The agent """ episode_rewards = [] for episode in range(n_eval_episodes): state = env.reset() step = 0 done = False total_rewards_ep = 0 while done is False: state = torch.Tensor(state).to(device) action, _, _, _ = policy.get_action_and_value(state) new_state, reward, done, info = env.step(action.cpu().numpy()) total_rewards_ep += reward if done: break state = new_state episode_rewards.append(total_rewards_ep) mean_reward = np.mean(episode_rewards) std_reward = np.std(episode_rewards) return mean_reward, std_reward def record_video(env, policy, out_directory, fps=30): images = [] done = False state = env.reset() img = env.render(mode="rgb_array") images.append(img) while not done: state = torch.Tensor(state).to(device) # Take the action (index) that have the maximum expected future reward given that state action, _, _, _ = policy.get_action_and_value(state) state, reward, done, info = env.step( action.cpu().numpy() ) # We directly put next_state = state for recording logic img = env.render(mode="rgb_array") images.append(img) imageio.mimsave(out_directory, [np.array(img) for i, img in enumerate(images)], fps=fps) def _generate_model_card(model_name, env_id, mean_reward, std_reward, hyperparameters): """ Generate the model card for the Hub :param model_name: name of the model :env_id: name of the environment :mean_reward: mean reward of the agent :std_reward: standard deviation of the mean reward of the agent :hyperparameters: training arguments """ # Step 1: Select the tags metadata = generate_metadata(model_name, env_id, mean_reward, std_reward) # Transform the hyperparams namespace to string converted_dict = vars(hyperparameters) converted_str = str(converted_dict) converted_str = converted_str.split(", ") converted_str = "\n".join(converted_str) # 
Step 2: Generate the model card model_card = f""" # PPO Agent Playing {env_id} This is a trained model of a PPO agent playing {env_id}. # Hyperparameters """ return model_card, metadata def generate_metadata(model_name, env_id, mean_reward, std_reward): """ Define the tags for the model card :param model_name: name of the model :param env_id: name of the environment :mean_reward: mean reward of the agent :std_reward: standard deviation of the mean reward of the agent """ metadata = {} metadata["tags"] = [ env_id, "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", ] # Add metrics eval = metadata_eval_result( model_pretty_name=model_name, task_pretty_name="reinforcement-learning", task_id="reinforcement-learning", metrics_pretty_name="mean_reward", metrics_id="mean_reward", metrics_value=f"{mean_reward:.2f} +/- {std_reward:.2f}", dataset_pretty_name=env_id, dataset_id=env_id, ) # Merges both dictionaries metadata = {**metadata, **eval} return metadata def _save_model_card(local_path, generated_model_card, metadata): """Saves a model card for the repository. :param local_path: repository directory :param generated_model_card: model card generated by _generate_model_card() :param metadata: metadata """ readme_path = local_path / "README.md" readme = "" if readme_path.exists(): with readme_path.open("r", encoding="utf8") as f: readme = f.read() else: readme = generated_model_card with readme_path.open("w", encoding="utf-8") as f: f.write(readme) # Save our metrics to Readme metadata metadata_save(readme_path, metadata) def _add_logdir(local_path: Path, logdir: Path): """Adds a logdir to the repository. :param local_path: repository directory :param logdir: logdir directory """ if logdir.exists() and logdir.is_dir(): # Add the logdir to the repository under new dir called logs repo_logdir = local_path / "logs" # Delete current logs if they exist if repo_logdir.exists(): shutil.rmtree(repo_logdir) # Copy logdir into repo logdir shutil.copytree(logdir, repo_logdir) def make_env(env_id, seed, idx, capture_video, run_name): def thunk(): env = gym.make(env_id) env = gym.wrappers.RecordEpisodeStatistics(env) if capture_video: if idx == 0: env = gym.wrappers.RecordVideo(env, f"videos/{run_name}") env.seed(seed) env.action_space.seed(seed) env.observation_space.seed(seed) return env return thunk def layer_init(layer, std=np.sqrt(2), bias_const=0.0): torch.nn.init.orthogonal_(layer.weight, std) torch.nn.init.constant_(layer.bias, bias_const) return layer class Agent(nn.Module): def __init__(self, envs): super().__init__() self.critic = nn.Sequential( layer_init(nn.Linear(np.array(envs.single_observation_space.shape).prod(), 64)), nn.Tanh(), layer_init(nn.Linear(64, 64)), nn.Tanh(), layer_init(nn.Linear(64, 1), std=1.0), ) self.actor = nn.Sequential( layer_init(nn.Linear(np.array(envs.single_observation_space.shape).prod(), 64)), nn.Tanh(), layer_init(nn.Linear(64, 64)), nn.Tanh(), layer_init(nn.Linear(64, envs.single_action_space.n), std=0.01), ) def get_value(self, x): return self.critic(x) def get_action_and_value(self, x, action=None): logits = self.actor(x) probs = Categorical(logits=logits) if action is None: action = probs.sample() return action, probs.log_prob(action), probs.entropy(), self.critic(x) if __name__ == "__main__": args = parse_args() run_name = f"{args.env_id}__{args.exp_name}__{args.seed}__{int(time.time())}" if args.track: import wandb wandb.init( project=args.wandb_project_name, entity=args.wandb_entity, sync_tensorboard=True, 
config=vars(args), name=run_name, monitor_gym=True, save_code=True, ) writer = SummaryWriter(f"runs/{run_name}") writer.add_text( "hyperparameters", "|param|value|\n|-|-|\n%s" % ("\n".join([f"|{key}|{value}|" for key, value in vars(args).items()])), ) # TRY NOT TO MODIFY: seeding random.seed(args.seed) np.random.seed(args.seed) torch.manual_seed(args.seed) torch.backends.cudnn.deterministic = args.torch_deterministic device = torch.device("cuda" if torch.cuda.is_available() and args.cuda else "cpu") # env setup envs = gym.vector.SyncVectorEnv( [make_env(args.env_id, args.seed + i, i, args.capture_video, run_name) for i in range(args.num_envs)] ) assert isinstance(envs.single_action_space, gym.spaces.Discrete), "only discrete action space is supported" agent = Agent(envs).to(device) optimizer = optim.Adam(agent.parameters(), lr=args.learning_rate, eps=1e-5) # ALGO Logic: Storage setup obs = torch.zeros((args.num_steps, args.num_envs) + envs.single_observation_space.shape).to(device) actions = torch.zeros((args.num_steps, args.num_envs) + envs.single_action_space.shape).to(device) logprobs = torch.zeros((args.num_steps, args.num_envs)).to(device) rewards = torch.zeros((args.num_steps, args.num_envs)).to(device) dones = torch.zeros((args.num_steps, args.num_envs)).to(device) values = torch.zeros((args.num_steps, args.num_envs)).to(device) # TRY NOT TO MODIFY: start the game global_step = 0 start_time = time.time() next_obs = torch.Tensor(envs.reset()).to(device) next_done = torch.zeros(args.num_envs).to(device) num_updates = args.total_timesteps // args.batch_size for update in range(1, num_updates + 1): # Annealing the rate if instructed to do so. if args.anneal_lr: frac = 1.0 - (update - 1.0) / num_updates lrnow = frac * args.learning_rate optimizer.param_groups[0]["lr"] = lrnow for step in range(0, args.num_steps): global_step += 1 * args.num_envs obs[step] = next_obs dones[step] = next_done # ALGO LOGIC: action logic with torch.no_grad(): action, logprob, _, value = agent.get_action_and_value(next_obs) values[step] = value.flatten() actions[step] = action logprobs[step] = logprob # TRY NOT TO MODIFY: execute the game and log data. 
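            # Step all vectorized environments at once. Because the envs are wrapped in
            # RecordEpisodeStatistics, `info` carries an "episode" dict (return "r" and
            # length "l") for any sub-environment that just finished an episode.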
next_obs, reward, done, info = envs.step(action.cpu().numpy()) rewards[step] = torch.tensor(reward).to(device).view(-1) next_obs, next_done = torch.Tensor(next_obs).to(device), torch.Tensor(done).to(device) for item in info: if "episode" in item.keys(): print(f"global_step={global_step}, episodic_return={item['episode']['r']}") writer.add_scalar("charts/episodic_return", item["episode"]["r"], global_step) writer.add_scalar("charts/episodic_length", item["episode"]["l"], global_step) break # bootstrap value if not done with torch.no_grad(): next_value = agent.get_value(next_obs).reshape(1, -1) if args.gae: advantages = torch.zeros_like(rewards).to(device) lastgaelam = 0 for t in reversed(range(args.num_steps)): if t == args.num_steps - 1: nextnonterminal = 1.0 - next_done nextvalues = next_value else: nextnonterminal = 1.0 - dones[t + 1] nextvalues = values[t + 1] delta = rewards[t] + args.gamma * nextvalues * nextnonterminal - values[t] advantages[t] = lastgaelam = delta + args.gamma * args.gae_lambda * nextnonterminal * lastgaelam returns = advantages + values else: returns = torch.zeros_like(rewards).to(device) for t in reversed(range(args.num_steps)): if t == args.num_steps - 1: nextnonterminal = 1.0 - next_done next_return = next_value else: nextnonterminal = 1.0 - dones[t + 1] next_return = returns[t + 1] returns[t] = rewards[t] + args.gamma * nextnonterminal * next_return advantages = returns - values # flatten the batch b_obs = obs.reshape((-1,) + envs.single_observation_space.shape) b_logprobs = logprobs.reshape(-1) b_actions = actions.reshape((-1,) + envs.single_action_space.shape) b_advantages = advantages.reshape(-1) b_returns = returns.reshape(-1) b_values = values.reshape(-1) # Optimizing the policy and value network b_inds = np.arange(args.batch_size) clipfracs = [] for epoch in range(args.update_epochs): np.random.shuffle(b_inds) for start in range(0, args.batch_size, args.minibatch_size): end = start + args.minibatch_size mb_inds = b_inds[start:end] _, newlogprob, entropy, newvalue = agent.get_action_and_value( b_obs[mb_inds], b_actions.long()[mb_inds] ) logratio = newlogprob - b_logprobs[mb_inds] ratio = logratio.exp() with torch.no_grad(): # calculate approx_kl http://joschu.net/blog/kl-approx.html old_approx_kl = (-logratio).mean() approx_kl = ((ratio - 1) - logratio).mean() clipfracs += [((ratio - 1.0).abs() > args.clip_coef).float().mean().item()] mb_advantages = b_advantages[mb_inds] if args.norm_adv: mb_advantages = (mb_advantages - mb_advantages.mean()) / (mb_advantages.std() + 1e-8) # Policy loss pg_loss1 = -mb_advantages * ratio pg_loss2 = -mb_advantages * torch.clamp(ratio, 1 - args.clip_coef, 1 + args.clip_coef) pg_loss = torch.max(pg_loss1, pg_loss2).mean() # Value loss newvalue = newvalue.view(-1) if args.clip_vloss: v_loss_unclipped = (newvalue - b_returns[mb_inds]) ** 2 v_clipped = b_values[mb_inds] + torch.clamp( newvalue - b_values[mb_inds], -args.clip_coef, args.clip_coef, ) v_loss_clipped = (v_clipped - b_returns[mb_inds]) ** 2 v_loss_max = torch.max(v_loss_unclipped, v_loss_clipped) v_loss = 0.5 * v_loss_max.mean() else: v_loss = 0.5 * ((newvalue - b_returns[mb_inds]) ** 2).mean() entropy_loss = entropy.mean() loss = pg_loss - args.ent_coef * entropy_loss + v_loss * args.vf_coef optimizer.zero_grad() loss.backward() nn.utils.clip_grad_norm_(agent.parameters(), args.max_grad_norm) optimizer.step() if args.target_kl is not None: if approx_kl > args.target_kl: break y_pred, y_true = b_values.cpu().numpy(), b_returns.cpu().numpy() var_y = np.var(y_true) 
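        # Explained variance of the value function: 1 means the critic's predictions
        # perfectly explain the empirical returns, 0 means it does no better than a constant.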
explained_var = np.nan if var_y == 0 else 1 - np.var(y_true - y_pred) / var_y # TRY NOT TO MODIFY: record rewards for plotting purposes writer.add_scalar("charts/learning_rate", optimizer.param_groups[0]["lr"], global_step) writer.add_scalar("losses/value_loss", v_loss.item(), global_step) writer.add_scalar("losses/policy_loss", pg_loss.item(), global_step) writer.add_scalar("losses/entropy", entropy_loss.item(), global_step) writer.add_scalar("losses/old_approx_kl", old_approx_kl.item(), global_step) writer.add_scalar("losses/approx_kl", approx_kl.item(), global_step) writer.add_scalar("losses/clipfrac", np.mean(clipfracs), global_step) writer.add_scalar("losses/explained_variance", explained_var, global_step) print("SPS:", int(global_step / (time.time() - start_time))) writer.add_scalar("charts/SPS", int(global_step / (time.time() - start_time)), global_step) envs.close() writer.close() # Create the evaluation environment eval_env = gym.make(args.env_id) package_to_hub( repo_id=args.repo_id, model=agent, # The model we want to save hyperparameters=args, eval_env=gym.make(args.env_id), logs=f"runs/{run_name}", ) ``` To be able to share your model with the community there are three more steps to follow: 1️⃣ (If it's not already done) create an account to HF ➡ https://huggingface.co/join 2️⃣ Sign in and get your authentication token from the Hugging Face website. - Create a new token (https://huggingface.co/settings/tokens) **with write role** <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/notebooks/create-token.jpg" alt="Create HF Token"> - Copy the token - Run the cell below and paste the token ```python from huggingface_hub import notebook_login notebook_login() !git config --global credential.helper store ``` If you don't want to use Google Colab or a Jupyter Notebook, you need to use this command instead: `huggingface-cli login` ## Let's start the training 🔥 ⚠️ ⚠️ ⚠️ Don't use **the same repo id with the one you used for the Unit 1** - Now that you've coded PPO from scratch and added the Hugging Face Integration, we're ready to start the training 🔥 - First, you need to copy all your code to a file you create called `ppo.py` <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit9/step1.png" alt="PPO"/> <img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit9/step2.png" alt="PPO"/> - Now we just need to run this python script using `python <name-of-python-script>.py` with the additional parameters we defined using `argparse` - You should modify more hyperparameters otherwise the training will not be super stable. ```python !python ppo.py --env-id="LunarLander-v2" --repo-id="YOUR_REPO_ID" --total-timesteps=50000 ``` ## Some additional challenges 🏆 The best way to learn **is to try things on your own**! Why not try another environment? Or why not trying to modify the implementation to work with Gymnasium? See you in Unit 8, part 2 where we're going to train agents to play Doom 🔥 ## Keep learning, stay awesome 🤗
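As an optional check, you can pull the weights you just pushed back from the Hub and reload them locally. The snippet below is a minimal sketch: the repo id is a placeholder, and it assumes the `Agent` class and imports from your `ppo.py` are available.

```python
import gym
import torch
from huggingface_hub import hf_hub_download

repo_id = "YOUR_USERNAME/ppo-LunarLander-v2"  # placeholder: the repo you pushed to

# Re-create the same vectorized env so Agent gets the right observation/action spaces
envs = gym.vector.SyncVectorEnv([lambda: gym.make("LunarLander-v2")])
agent = Agent(envs)  # Agent is the class defined in ppo.py

# Download the model.pt file uploaded by package_to_hub and load the weights
checkpoint_path = hf_hub_download(repo_id=repo_id, filename="model.pt")
agent.load_state_dict(torch.load(checkpoint_path, map_location="cpu"))
agent.eval()
```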
huggingface/deep-rl-class/blob/main/units/en/unit8/hands-on-cleanrl.mdx
!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # AudioLDM 2 AudioLDM 2 was proposed in [AudioLDM 2: Learning Holistic Audio Generation with Self-supervised Pretraining](https://arxiv.org/abs/2308.05734) by Haohe Liu et al. AudioLDM 2 takes a text prompt as input and predicts the corresponding audio. It can generate text-conditional sound effects, human speech and music. Inspired by [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/overview), AudioLDM 2 is a text-to-audio _latent diffusion model (LDM)_ that learns continuous audio representations from text embeddings. Two text encoder models are used to compute the text embeddings from a prompt input: the text-branch of [CLAP](https://huggingface.co/docs/transformers/main/en/model_doc/clap) and the encoder of [Flan-T5](https://huggingface.co/docs/transformers/main/en/model_doc/flan-t5). These text embeddings are then projected to a shared embedding space by an [AudioLDM2ProjectionModel](https://huggingface.co/docs/diffusers/main/api/pipelines/audioldm2#diffusers.AudioLDM2ProjectionModel). A [GPT2](https://huggingface.co/docs/transformers/main/en/model_doc/gpt2) _language model (LM)_ is used to auto-regressively predict eight new embedding vectors, conditional on the projected CLAP and Flan-T5 embeddings. The generated embedding vectors and Flan-T5 text embeddings are used as cross-attention conditioning in the LDM. The [UNet](https://huggingface.co/docs/diffusers/main/en/api/pipelines/audioldm2#diffusers.AudioLDM2UNet2DConditionModel) of AudioLDM 2 is unique in the sense that it takes **two** cross-attention embeddings, as opposed to one cross-attention conditioning, as in most other LDMs. The abstract of the paper is the following: *Although audio generation shares commonalities across different types of audio, such as speech, music, and sound effects, designing models for each type requires careful consideration of specific objectives and biases that can significantly differ from those of other types. To bring us closer to a unified perspective of audio generation, this paper proposes a framework that utilizes the same learning method for speech, music, and sound effect generation. Our framework introduces a general representation of audio, called "language of audio" (LOA). Any audio can be translated into LOA based on AudioMAE, a self-supervised pre-trained representation learning model. In the generation process, we translate any modalities into LOA by using a GPT-2 model, and we perform self-supervised audio generation learning with a latent diffusion model conditioned on LOA. The proposed framework naturally brings advantages such as in-context learning abilities and reusable self-supervised pretrained AudioMAE and latent diffusion models. Experiments on the major benchmarks of text-to-audio, text-to-music, and text-to-speech demonstrate state-of-the-art or competitive performance against previous approaches. 
Our code, pretrained model, and demo are available at [this https URL](https://audioldm.github.io/audioldm2).* This pipeline was contributed by [sanchit-gandhi](https://huggingface.co/sanchit-gandhi). The original codebase can be found at [haoheliu/audioldm2](https://github.com/haoheliu/audioldm2). ## Tips ### Choosing a checkpoint AudioLDM2 comes in three variants. Two of these checkpoints are applicable to the general task of text-to-audio generation. The third checkpoint is trained exclusively on text-to-music generation. All checkpoints share the same model size for the text encoders and VAE. They differ in the size and depth of the UNet. See table below for details on the three checkpoints: | Checkpoint | Task | UNet Model Size | Total Model Size | Training Data / h | |-----------------------------------------------------------------|---------------|-----------------|------------------|-------------------| | [audioldm2](https://huggingface.co/cvssp/audioldm2) | Text-to-audio | 350M | 1.1B | 1150k | | [audioldm2-large](https://huggingface.co/cvssp/audioldm2-large) | Text-to-audio | 750M | 1.5B | 1150k | | [audioldm2-music](https://huggingface.co/cvssp/audioldm2-music) | Text-to-music | 350M | 1.1B | 665k | ### Constructing a prompt * Descriptive prompt inputs work best: use adjectives to describe the sound (e.g. "high quality" or "clear") and make the prompt context specific (e.g. "water stream in a forest" instead of "stream"). * It's best to use general terms like "cat" or "dog" instead of specific names or abstract objects the model may not be familiar with. * Using a **negative prompt** can significantly improve the quality of the generated waveform, by guiding the generation away from terms that correspond to poor quality audio. Try using a negative prompt of "Low quality." ### Controlling inference * The _quality_ of the predicted audio sample can be controlled by the `num_inference_steps` argument; higher steps give higher quality audio at the expense of slower inference. * The _length_ of the predicted audio sample can be controlled by varying the `audio_length_in_s` argument. ### Evaluating generated waveforms: * The quality of the generated waveforms can vary significantly based on the seed. Try generating with different seeds until you find a satisfactory generation. * Multiple waveforms can be generated in one go: set `num_waveforms_per_prompt` to a value greater than 1. Automatic scoring will be performed between the generated waveforms and prompt text, and the audios ranked from best to worst accordingly. The following example demonstrates how to construct good music generation using the aforementioned tips: [example](https://huggingface.co/docs/diffusers/main/en/api/pipelines/audioldm2#diffusers.AudioLDM2Pipeline.__call__.example). <Tip> Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines. </Tip> ## AudioLDM2Pipeline [[autodoc]] AudioLDM2Pipeline - all - __call__ ## AudioLDM2ProjectionModel [[autodoc]] AudioLDM2ProjectionModel - forward ## AudioLDM2UNet2DConditionModel [[autodoc]] AudioLDM2UNet2DConditionModel - forward ## AudioPipelineOutput [[autodoc]] pipelines.AudioPipelineOutput
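Putting the tips above together, here is a minimal usage sketch for text-to-audio generation with a negative prompt and several candidate waveforms (the checkpoint and parameter values are just examples):

```python
import torch
from diffusers import AudioLDM2Pipeline
from scipy.io import wavfile

pipe = AudioLDM2Pipeline.from_pretrained("cvssp/audioldm2", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "The sound of a hammer hitting a wooden surface, high quality, clear"
negative_prompt = "Low quality."

# num_waveforms_per_prompt > 1 triggers automatic scoring against the prompt,
# so audios[0] is the best-ranked waveform
audio = pipe(
    prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=200,
    audio_length_in_s=10.0,
    num_waveforms_per_prompt=3,
).audios[0]

# AudioLDM 2 generates audio at a 16 kHz sampling rate
wavfile.write("output.wav", rate=16000, data=audio)
```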
huggingface/diffusers/blob/main/docs/source/en/api/pipelines/audioldm2.md
# Use a custom Container Image

Inference Endpoints not only allows you to [customize your inference handler](/docs/inference-endpoints/guides/custom_handler), but it also allows you to provide a custom container image. Those can be public images like `tensorflow/serving:2.7.3` or private images hosted on [Docker Hub](https://hub.docker.com/), [AWS ECR](https://aws.amazon.com/ecr/?nc1=h_ls), [Azure ACR](https://azure.microsoft.com/de-de/services/container-registry/), or [Google GCR](https://cloud.google.com/container-registry?hl=de).

<img src="https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/custom_container.png" alt="custom container config" />

The [creation flow](/docs/inference-endpoints/guides/create_endpoint) of your image artifacts from a custom image is the same as for the base image. This means Inference Endpoints will create a unique image artifact derived from your provided image, including all model artifacts. The model artifacts (weights) are stored under `/repository`. For example, if you use `tensorflow/serving` as your custom image, you have to set `model_base_path="/repository"`:

```
tensorflow_model_server \
  --rest_api_port=5000 \
  --model_name=my_model \
  --model_base_path="/repository"
```
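Once a container like this is running (for example when testing it locally with Docker before creating the Endpoint), TensorFlow Serving exposes its standard REST API on the port configured above. Here is a hedged client-side sketch; the payload shape depends entirely on your model's signature:

```python
import requests

# Standard TensorFlow Serving REST API: POST /v1/models/<model_name>:predict
# The host/port assume a local test run, e.g. `docker run -p 5000:5000 ...`
payload = {"instances": [[1.0, 2.0, 5.0]]}  # replace with inputs matching your model

response = requests.post(
    "http://localhost:5000/v1/models/my_model:predict",
    json=payload,
    timeout=30,
)
print(response.json())  # {"predictions": [...]}
```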
huggingface/hf-endpoints-documentation/blob/main/docs/source/guides/custom_container.mdx
The tokenizer pipeline. In this video, we'll look at how a tokenizer converts raw text to numbers that a Transformer model can make sense of, like when we execute this code.

Here is a quick overview of what happens inside the tokenizer object: first, the text is split into tokens, which are words, parts of words, or punctuation symbols. Then the tokenizer adds potential special tokens and converts each token to its unique ID as defined by the tokenizer's vocabulary. As we'll see, it doesn't actually happen in this order, but viewing it like this is better for understanding what happens.

The first step is to split our input text into tokens with the tokenize method. To do this, the tokenizer may first perform some operations, like lowercasing all words, then follow a set of rules to split the result into small chunks of text. Most of the Transformer models use a subword tokenization algorithm, which means that one given word can be split into several tokens, like "tokenize" here. Look at the "Tokenization algorithms" videos linked below for more information! The ## prefix we see in front of "ize" is the convention used by BERT to indicate this token is not the beginning of a word. Other tokenizers may use different conventions, however: for instance, ALBERT tokenizers will add a long underscore in front of all the tokens that had a space before them, which is a convention used by SentencePiece tokenizers.

The second step of the tokenization pipeline is to map those tokens to their respective IDs as defined by the vocabulary of the tokenizer. This is why we need to download a file when we instantiate a tokenizer with the from_pretrained method: we have to make sure we use the same mapping as when the model was pretrained. To do this, we use the convert_tokens_to_ids method.

You may have noticed that we don't have the exact same result as in our first slide (or, if this just looks like a list of random numbers to you, allow me to refresh your memory): we had a number at the beginning and a number at the end that are missing here; those are the special tokens. The special tokens are added by the prepare_for_model method, which knows the indices of those tokens in the vocabulary and just adds the proper numbers. You can look at the special tokens (and, more generally, at how the tokenizer has changed your text) by using the decode method on the outputs of the tokenizer object.

As for the prefix for the beginning of words/parts of words, those special tokens vary depending on which tokenizer you are using: the BERT tokenizer uses [CLS] and [SEP], but the RoBERTa tokenizer uses HTML-like anchors <s> and </s>.

Now that you know how the tokenizer works, you can forget all those intermediary methods and just remember that you have to call it on your input texts. The output of the tokenizer doesn't contain only the input IDs, however: to learn what the attention mask is, check out the "Batch inputs together" video. To learn about token type IDs, look at the "Process pairs of sentences" video.
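To make the steps described in this video concrete, here is a short sketch that runs them one by one with a BERT tokenizer (outputs shown as comments are approximate):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Step 1: split the raw text into tokens
tokens = tokenizer.tokenize("Let's try to tokenize!")
# ['let', "'", 's', 'try', 'to', 'token', '##ize', '!']

# Step 2: map each token to its ID in the vocabulary
input_ids = tokenizer.convert_tokens_to_ids(tokens)

# Step 3: add the special tokens ([CLS] ... [SEP] for BERT)
final_inputs = tokenizer.prepare_for_model(input_ids)

# Inspect what the model will actually see
print(tokenizer.decode(final_inputs["input_ids"]))
# "[CLS] let's try to tokenize! [SEP]"
```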
huggingface/course/blob/main/subtitles/en/raw/chapter2/04e_tokenizer-pipeline.md
!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # mT5 <div class="flex flex-wrap space-x-1"> <a href="https://huggingface.co/models?filter=mt5"> <img alt="Models" src="https://img.shields.io/badge/All_model_pages-mt5-blueviolet"> </a> <a href="https://huggingface.co/spaces/docs-demos/mt5-small-finetuned-arxiv-cs-finetuned-arxiv-cs-full"> <img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue"> </a> </div> ## Overview The mT5 model was presented in [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel. The abstract from the paper is the following: *The recent "Text-to-Text Transfer Transformer" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks. In this paper, we introduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages. We detail the design and modified training of mT5 and demonstrate its state-of-the-art performance on many multilingual benchmarks. We also describe a simple technique to prevent "accidental translation" in the zero-shot setting, where a generative model chooses to (partially) translate its prediction into the wrong language. All of the code and model checkpoints used in this work are publicly available.* Note: mT5 was only pre-trained on [mC4](https://huggingface.co/datasets/mc4) excluding any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task, unlike the original T5 model. Since mT5 was pre-trained unsupervisedly, there's no real advantage to using a task prefix during single-task fine-tuning. If you are doing multi-task fine-tuning, you should use a prefix. Google has released the following variants: - [google/mt5-small](https://huggingface.co/google/mt5-small) - [google/mt5-base](https://huggingface.co/google/mt5-base) - [google/mt5-large](https://huggingface.co/google/mt5-large) - [google/mt5-xl](https://huggingface.co/google/mt5-xl) - [google/mt5-xxl](https://huggingface.co/google/mt5-xxl). This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). The original code can be found [here](https://github.com/google-research/multilingual-t5). ## Resources - [Translation task guide](../tasks/translation) - [Summarization task guide](../tasks/summarization) ## MT5Config [[autodoc]] MT5Config ## MT5Tokenizer [[autodoc]] MT5Tokenizer See [`T5Tokenizer`] for all details. ## MT5TokenizerFast [[autodoc]] MT5TokenizerFast See [`T5TokenizerFast`] for all details. 
<frameworkcontent> <pt> ## MT5Model [[autodoc]] MT5Model ## MT5ForConditionalGeneration [[autodoc]] MT5ForConditionalGeneration ## MT5EncoderModel [[autodoc]] MT5EncoderModel ## MT5ForSequenceClassification [[autodoc]] MT5ForSequenceClassification ## MT5ForQuestionAnswering [[autodoc]] MT5ForQuestionAnswering </pt> <tf> ## TFMT5Model [[autodoc]] TFMT5Model ## TFMT5ForConditionalGeneration [[autodoc]] TFMT5ForConditionalGeneration ## TFMT5EncoderModel [[autodoc]] TFMT5EncoderModel </tf> <jax> ## FlaxMT5Model [[autodoc]] FlaxMT5Model ## FlaxMT5ForConditionalGeneration [[autodoc]] FlaxMT5ForConditionalGeneration ## FlaxMT5EncoderModel [[autodoc]] FlaxMT5EncoderModel </jax> </frameworkcontent>
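As a quick illustration of the fine-tuning note in the overview, here is a minimal sketch of a supervised training step with `MT5ForConditionalGeneration` (the sentence pair is just an example):

```python
from transformers import AutoTokenizer, MT5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")

# No task prefix is needed for single-task fine-tuning (see the overview above)
inputs = tokenizer("Das Haus ist wunderbar.", return_tensors="pt")
labels = tokenizer(text_target="The house is wonderful.", return_tensors="pt").input_ids

# Forward pass with labels returns the cross-entropy loss used for fine-tuning
outputs = model(**inputs, labels=labels)
print(float(outputs.loss))
```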
huggingface/transformers/blob/main/docs/source/en/model_doc/mt5.md
-- title: The State of Computer Vision at Hugging Face 🤗 thumbnail: /blog/assets/cv_state/thumbnail.png authors: - user: sayakpaul --- # The State of Computer Vision at Hugging Face 🤗 At Hugging Face, we pride ourselves on democratizing the field of artificial intelligence together with the community. As a part of that mission, we began focusing our efforts on computer vision over the last year. What started as a [PR for having Vision Transformers (ViT) in 🤗 Transformers](https://github.com/huggingface/transformers/pull/10950) has now grown into something much bigger – 8 core vision tasks, over 3000 models, and over 100 datasets on the Hugging Face Hub. A lot of exciting things have happened since ViTs joined the Hub. In this blog post, we’ll summarize what went down and what’s coming to support the continuous progress of Computer Vision from the 🤗 ecosystem. Here is a list of things we’ll cover: - [Supported vision tasks and Pipelines](#support-for-pipelines) - [Training your own vision models](#training-your-own-models) - [Integration with `timm`](#🤗-🤝-timm) - [Diffusers](#🧨-diffusers) - [Support for third-party libraries](#support-for-third-party-libraries) - [Deployment](#deployment) - and much more! ## Enabling the community: One task at a time 👁 The Hugging Face Hub is home to over 100,000 public models for different tasks such as next-word prediction, mask filling, token classification, sequence classification, and so on. As of today, we support [8 core vision tasks](https://huggingface.co/tasks) providing many model checkpoints: - Image classification - Image segmentation - (Zero-shot) object detection - Video classification - Depth estimation - Image-to-image synthesis - Unconditional image generation - Zero-shot image classification Each of these tasks comes with at least 10 model checkpoints on the Hub for you to explore. Furthermore, we support [tasks](https://huggingface.co/tasks) that lie at the intersection of vision and language such as: - Image-to-text (image captioning, OCR) - Text-to-image - Document question-answering - Visual question-answering These tasks entail not only state-of-the-art Transformer-based architectures such as [ViT](https://huggingface.co/docs/transformers/model_doc/vit), [Swin](https://huggingface.co/docs/transformers/model_doc/swin), [DETR](https://huggingface.co/docs/transformers/model_doc/detr) but also *pure convolutional* architectures like [ConvNeXt](https://huggingface.co/docs/transformers/model_doc/convnext), [ResNet](https://huggingface.co/docs/transformers/model_doc/resnet), [RegNet](https://huggingface.co/docs/transformers/model_doc/regnet), and more! Architectures like ResNets are still very much relevant for a myriad of industrial use cases and hence the support of these non-Transformer architectures in 🤗 Transformers. It’s also important to note that the models on the Hub are not just from the Transformers library but also from other third-party libraries. For example, even though we support tasks like unconditional image generation on the Hub, we don’t have any models supporting that task in Transformers yet (such as [this](https://huggingface.co/ceyda/butterfly_cropped_uniq1K_512)). Supporting all ML tasks, whether they are solved with Transformers or a third-party library is a part of our mission to foster a collaborative open-source Machine Learning ecosystem. 
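If you want to browse these checkpoints programmatically, the Hub can also be queried from Python. A small, hedged sketch using the `huggingface_hub` client (parameter names as documented for `list_models`):

```python
from huggingface_hub import HfApi

api = HfApi()

# List a few image-classification checkpoints on the Hub, most downloaded first
models = api.list_models(filter="image-classification", sort="downloads", direction=-1, limit=5)
for model in models:
    print(model.modelId)
```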
## Support for Pipelines We developed [Pipelines](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines) to equip practitioners with the tools they need to easily incorporate machine learning into their toolbox. They provide an easy way to perform inference on a given input with respect to a task. We have support for [seven vision tasks](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#computer-vision) in Pipelines. Here is an example of using Pipelines for depth estimation: ```py from transformers import pipeline depth_estimator = pipeline(task="depth-estimation", model="Intel/dpt-large") output = depth_estimator("http://images.cocodataset.org/val2017/000000039769.jpg") # This is a tensor with the values being the depth expressed # in meters for each pixel output["depth"] ``` <div align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/cv_state/depth_estimation_output.png"/> </div> The interface remains the same even for tasks like visual question-answering: ```py from transformers import pipeline oracle = pipeline(model="dandelin/vilt-b32-finetuned-vqa") image_url = "https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg" oracle(question="What's the animal doing?", image=image_url, top_k=1) # [{'score': 0.778620, 'answer': 'laying down'}] ``` ## Training your own models While being able to use a model for off-the-shelf inference is a great way to get started, fine-tuning is where the community gets the most benefits. This is especially true when your datasets are custom, and you’re not getting good performance out of the pre-trained models. Transformers provides a [Trainer API](https://huggingface.co/docs/transformers/main_classes/trainer) for everything related to training. Currently, `Trainer` seamlessly supports the following tasks: image classification, image segmentation, video classification, object detection, and depth estimation. Fine-tuning models for other vision tasks is also supported, just not by `Trainer`. As long as a model from Transformers computes the loss for a given task, it should be eligible for fine-tuning for that task. If you find issues, please [report](https://github.com/huggingface/transformers/issues) them on GitHub. **Where do I find the code?** - [Model documentation](https://huggingface.co/docs/transformers/index#supported-models) - [Hugging Face notebooks](https://github.com/huggingface/notebooks) - [Hugging Face example scripts](https://github.com/huggingface/transformers/tree/main/examples) - [Task pages](https://huggingface.co/tasks) [Hugging Face example scripts](https://github.com/huggingface/transformers/tree/main/examples) include different [self-supervised pre-training strategies](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-pretraining) like [MAE](https://arxiv.org/abs/2111.06377), and [contrastive image-text pre-training strategies](https://github.com/huggingface/transformers/tree/main/examples/pytorch/contrastive-image-text) like [CLIP](https://arxiv.org/abs/2103.00020). These scripts are valuable resources for the research community as well as for practitioners willing to run pre-training from scratch on custom data corpora. Some tasks are not inherently meant for fine-tuning, though.
Examples include zero-shot image classification (such as [CLIP](https://huggingface.co/docs/transformers/main/en/model_doc/clip)), zero-shot object detection (such as [OWL-ViT](https://huggingface.co/docs/transformers/main/en/model_doc/owlvit)), and zero-shot segmentation (such as [CLIPSeg](https://huggingface.co/docs/transformers/model_doc/clipseg)). We’ll revisit these models in this post. ## Integrations with Datasets [Datasets](https://huggingface.co/docs/datasets) provides easy access to thousands of datasets of different modalities. As mentioned earlier, the Hub has over 100 datasets for computer vision. Some examples worth noting here: [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k), [Scene Parsing](https://huggingface.co/datasets/scene_parse_150), [NYU Depth V2](https://huggingface.co/datasets/sayakpaul/nyu_depth_v2), [COYO-700M](https://huggingface.co/datasets/kakaobrain/coyo-700m), and [LAION-400M](https://huggingface.co/datasets/laion/laion400m). With these datasets being on the Hub, one can easily load them with just two lines of code: ```py from datasets import load_dataset dataset = load_dataset("scene_parse_150") ``` Besides these datasets, we provide integration support with augmentation libraries like [albumentations](https://github.com/huggingface/notebooks/blob/main/examples/image_classification_albumentations.ipynb) and [Kornia](https://github.com/huggingface/notebooks/blob/main/examples/image_classification_kornia.ipynb). The community can take advantage of the flexibility and performance of Datasets and powerful augmentation transformations provided by these libraries. In addition to these, we also provide [dedicated data-loading guides](https://huggingface.co/docs/datasets/image_load) for core vision tasks: image classification, image segmentation, object detection, and depth estimation. ## 🤗 🤝 timm `timm`, also known as [pytorch-image-models](https://github.com/rwightman/pytorch-image-models), is an open-source collection of state-of-the-art PyTorch image models, pre-trained weights, and utility scripts for training, inference, and validation. We have over 200 models from `timm` on the Hub and more are on the way. Check out the [documentation](https://huggingface.co/docs/timm/index) to know more about this integration. ## 🧨 Diffusers [Diffusers](https://huggingface.co/docs/diffusers) provides pre-trained vision and audio diffusion models, and serves as a modular toolbox for inference and training. With this library, you can generate plausible images from natural language inputs amongst other creative use cases. Here is an example: ```py from diffusers import DiffusionPipeline generator = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4") generator.to("cuda") image = generator("An image of a squirrel in Picasso style").images[0] ``` <div align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/cv_state/sd_output.png"/> </div> This type of technology can empower a new generation of creative applications and also aid artists coming from different backgrounds. To know more about Diffusers and the different use cases, check out the [official documentation](https://huggingface.co/docs/diffusers). The literature on Diffusion-based models is developing at a rapid pace, which is why we partnered with [Jonathan Whitaker](https://github.com/johnowhitaker) to develop a course on it. The course is free, and you can check it out [here](https://github.com/huggingface/diffusion-models-class).
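To make the albumentations integration mentioned above a little more concrete, here is a minimal, hypothetical sketch (the `beans` dataset and the specific transforms are just examples) of applying augmentations on the fly with `set_transform()`:

```py
import numpy as np
import albumentations as A
from datasets import load_dataset

dataset = load_dataset("beans", split="train")  # example dataset with an "image" column
augment = A.Compose([A.HorizontalFlip(p=0.5), A.RandomBrightnessContrast(p=0.2)])

def transforms(examples):
    # Convert each PIL image to a NumPy array and apply the augmentation pipeline.
    examples["pixel_values"] = [
        augment(image=np.array(image))["image"] for image in examples["image"]
    ]
    return examples

# The transform runs lazily whenever examples are accessed, so no extra copy of the data is made.
dataset.set_transform(transforms)
print(dataset[0].keys())
```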
## Support for third-party libraries Central to the Hugging Face ecosystem is the [Hugging Face Hub](https://huggingface.co/docs/hub), which lets people collaborate effectively on Machine Learning. As mentioned earlier, we not only support models from 🤗 Transformers on the Hub but also models from other third-party libraries. To this end, we provide [several utilities](https://huggingface.co/docs/hub/models-adding-libraries) so that you can integrate your own library with the Hub. One of the primary advantages of doing this is that it becomes very easy to share artifacts (such as models and datasets) with the community, thereby making it easier for your users to try out your models. When you have your models hosted on the Hub, you can also [add custom inference widgets](https://github.com/huggingface/api-inference-community) for them. Inference widgets allow users to quickly check out the models. This helps with improving user engagement. <div align="center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/cv_state/task_widget_generation.png"/> </div> ## Spaces for computer vision demos With [Spaces](https://huggingface.co/docs/hub/spaces-overview), one can easily demonstrate their Machine Learning models. Spaces support direct integrations with [Gradio](https://gradio.app/), [Streamlit](https://streamlit.io/), and [Docker](https://www.docker.com/) empowering practitioners to have a great amount of flexibility while showcasing their models. You can bring in your own Machine Learning framework to build a demo with  Spaces. The Gradio library provides several components for building Computer Vision applications on  Spaces such as [Video](https://gradio.app/docs/#video), [Gallery](https://gradio.app/docs/#gallery), and [Model3D](https://gradio.app/docs/#model3d). The community has been hard at work building some amazing Computer Vision applications that are powered by Spaces: - [Generate 3D voxels from a predicted depth map of an input image](https://huggingface.co/spaces/radames/dpt-depth-estimation-3d-voxels) - [Open vocabulary semantic segmentation](https://huggingface.co/spaces/facebook/ov-seg) - [Narrate videos by generating captions](https://huggingface.co/spaces/nateraw/lavila) - [Classify videos from YouTube](https://huggingface.co/spaces/fcakyon/video-classification) - [Zero-shot video classification](https://huggingface.co/spaces/fcakyon/zero-shot-video-classification) - [Visual question-answering](https://huggingface.co/spaces/nielsr/vilt-vqa) - [Use zero-shot image classification to find best captions for an image to generate similar images](https://huggingface.co/spaces/pharma/CLIP-Interrogator) ## 🤗 AutoTrain [AutoTrain](https://huggingface.co/autotrain) provides a “no-code” solution to train state-of-the-art Machine Learning models for tasks like text classification, text summarization, named entity recognition, and more. For Computer Vision, we currently support [image classification](https://huggingface.co/blog/autotrain-image-classification), but one can expect more task coverage. AutoTrain also enables [automatic model evaluation](https://huggingface.co/spaces/autoevaluate/model-evaluator). This application allows you to evaluate 🤗 Transformers [models](https://huggingface.co/models?library=transformers&sort=downloads) across a wide variety of [datasets](https://huggingface.co/datasets) on the Hub. 
The results of your evaluation will be displayed on the [public leaderboards](https://huggingface.co/spaces/autoevaluate/leaderboards). You can check [this blog post](https://huggingface.co/blog/eval-on-the-hub) for more details. ## The technical philosophy In this section, we wanted to share our philosophy behind adding support for Computer Vision in 🤗 Transformers so that the community is aware of the design choices specific to this area. Even though Transformers started with NLP, we support multiple modalities today, for example – vision, audio, vision-language, and Reinforcement Learning. For all of these modalities, all the corresponding models from Transformers enjoy some common benefits: - Easy model download with a single line of code with `from_pretrained()` - Easy model upload with `push_to_hub()` - Support for loading huge checkpoints with efficient checkpoint sharding techniques - Optimization support (with tools like [Optimum](https://huggingface.co/docs/optimum)) - Initialization from model configurations - Support for both PyTorch and TensorFlow (non-exhaustive) - and many more Unlike tokenizers, we have preprocessors (such as [this](https://huggingface.co/docs/transformers/model_doc/vit#transformers.ViTImageProcessor)) that take care of preparing data for the vision models. We have worked hard to ensure the user experience of using a vision model still feels easy and similar: ```py from transformers import ViTImageProcessor, ViTForImageClassification import torch from datasets import load_dataset dataset = load_dataset("huggingface/cats-image") image = dataset["test"]["image"][0] image_processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224") model = ViTForImageClassification.from_pretrained("google/vit-base-patch16-224") inputs = image_processor(image, return_tensors="pt") with torch.no_grad(): logits = model(**inputs).logits # model predicts one of the 1000 ImageNet classes predicted_label = logits.argmax(-1).item() print(model.config.id2label[predicted_label]) # Egyptian cat ``` Even for a difficult task like object detection, the user experience doesn’t change very much: ```py from transformers import AutoImageProcessor, AutoModelForObjectDetection from PIL import Image import requests import torch url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) image_processor = AutoImageProcessor.from_pretrained("microsoft/conditional-detr-resnet-50") model = AutoModelForObjectDetection.from_pretrained("microsoft/conditional-detr-resnet-50") inputs = image_processor(images=image, return_tensors="pt") outputs = model(**inputs) # convert outputs (bounding boxes and class logits) to COCO API target_sizes = torch.tensor([image.size[::-1]]) results = image_processor.post_process_object_detection( outputs, threshold=0.5, target_sizes=target_sizes )[0] for score, label, box in zip(results["scores"], results["labels"], results["boxes"]): box = [round(i, 2) for i in box.tolist()] print( f"Detected {model.config.id2label[label.item()]} with confidence " f"{round(score.item(), 3)} at location {box}" ) ``` Leads to: ```bash Detected remote with confidence 0.833 at location [38.31, 72.1, 177.63, 118.45] Detected cat with confidence 0.831 at location [9.2, 51.38, 321.13, 469.0] Detected cat with confidence 0.804 at location [340.3, 16.85, 642.93, 370.95] Detected remote with confidence 0.683 at location [334.48, 73.49, 366.37, 190.01] Detected couch with confidence 0.535 at location [0.52, 1.19, 640.35, 475.1] ``` ## 
Zero-shot models for vision There’s been a surge of models that reformulate core vision tasks like segmentation and detection in interesting ways and introduce even more flexibility. We support a few of those from Transformers: - [CLIP](https://huggingface.co/docs/transformers/main/en/model_doc/clip) that enables zero-shot image classification with prompts. Given an image, you’d prompt the CLIP model with a natural language query like “an image of {}”. The hope is to get the class label as the answer. - [OWL-ViT](https://huggingface.co/docs/transformers/main/en/model_doc/owlvit) that allows for language-conditioned zero-shot object detection and image-conditioned one-shot object detection. This means you can detect objects in an image even if the underlying model didn’t learn to detect them during training! You can refer to [this notebook](https://github.com/huggingface/notebooks/tree/main/examples#:~:text=zeroshot_object_detection_with_owlvit.ipynb) to know more. - [CLIPSeg](https://huggingface.co/docs/transformers/model_doc/clipseg) that supports language-conditioned zero-shot image segmentation and image-conditioned one-shot image segmentation. This means you can segment objects in an image even if the underlying model didn’t learn to segment them during training! You can refer to [this blog post](https://huggingface.co/blog/clipseg-zero-shot) that illustrates this idea. [GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit) also supports the task of zero-shot segmentation. - [X-CLIP](https://huggingface.co/docs/transformers/main/en/model_doc/xclip) that showcases zero-shot generalization to videos. Precisely, it supports zero-shot video classification. Check out [this notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/X-CLIP/Zero_shot_classify_a_YouTube_video_with_X_CLIP.ipynb) for more details. The community can expect to see more zero-shot models for computer vision being supported from 🤗Transformers in the coming days. ## Deployment As our CTO Julien says - “real artists ship” 🚀 We support the deployment of these vision models through [🤗Inference Endpoints](https://huggingface.co/inference-endpoints). Inference Endpoints integrates directly with compatible models pertaining to image classification, object detection, and image segmentation. For other tasks, you can use the [custom handlers](https://huggingface.co/docs/inference-endpoints/guides/custom_handler). Since we also provide many vision models in TensorFlow from 🤗Transformers for their deployment, we either recommend using the custom handlers or following these resources: - [Deploying TensorFlow Vision Models in Hugging Face with TF Serving](https://huggingface.co/blog/tf-serving-vision) - [Deploying 🤗 ViT on Kubernetes with TF Serving](https://huggingface.co/blog/deploy-tfserving-kubernetes) - [Deploying 🤗 ViT on Vertex AI](https://huggingface.co/blog/deploy-vertex-ai) - [Deploying ViT with TFX and Vertex AI](https://github.com/deep-diver/mlops-hf-tf-vision-models) ## Conclusion In this post, we gave you a rundown of the things currently supported from the Hugging Face ecosystem to empower the next generation of Computer Vision applications. We hope you’ll enjoy using these offerings to build reliably and responsibly. There is a lot to be done, though. 
Here are some things you can expect to see: - Direct support of videos from 🤗 Datasets - Supporting more industry-relevant tasks like image similarity - Interoperability of the image datasets with TensorFlow - A course on Computer Vision from the 🤗 community As always, we welcome your patches, PRs, model checkpoints, datasets, and other contributions! 🤗 *Acknowledgements: Thanks to Omar Sanseviero, Nate Raw, Niels Rogge, Alara Dirik, Amy Roberts, Maria Khalusova, and Lysandre Debut for their rigorous and timely reviews on the blog draft. Thanks to Chunte Lee for creating the blog thumbnail.*
huggingface/blog/blob/main/cv_state.md
Sharing demos with others[[sharing-demos-with-others]] <CourseFloatingBanner chapter={9} classNames="absolute z-10 right-0 top-0" notebooks={[ {label: "Google Colab", value: "https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/en/chapter9/section4.ipynb"}, {label: "Aws Studio", value: "https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/master/course/en/chapter9/section4.ipynb"}, ]} /> Now that you've built a demo, you'll probably want to share it with others. Gradio demos can be shared in two ways: using a ***temporary share link*** or ***permanent hosting on Spaces***. We'll cover both of these approaches shortly. But before you share your demo, you may want to polish it up 💅. ### Polishing your Gradio demo:[[polishing-your-gradio-demo]] <div class="flex justify-center"> <img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter9/gradio-demo-overview.png" alt="Overview of a gradio interface"> <img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter9/gradio-demo-overview-dark.png" alt="Overview of a gradio interface"> </div> To add additional content to your demo, the `Interface` class supports some optional parameters: - `title`: you can give a title to your demo, which appears _above_ the input and output components. - `description`: you can give a description (in text, Markdown, or HTML) for the interface, which appears above the input and output components and below the title. - `article`: you can also write an expanded article (in text, Markdown, or HTML) explaining the interface. If provided, it appears _below_ the input and output components. - `theme`: don't like the default colors? Set the theme to use one of `default`, `huggingface`, `grass`, `peach`. You can also add the `dark-` prefix, e.g. `dark-peach` for dark theme (or just `dark` for the default dark theme). - `examples`: to make your demo *way easier to use*, you can provide some example inputs for the function. These appear below the UI components and can be used to populate the interface. These should be provided as a nested list, in which the outer list consists of samples and each inner list consists of an input corresponding to each input component. - `live`: if you want to make your demo "live", meaning that your model reruns every time the input changes, you can set `live=True`. This makes sense to use with quick models (we'll see an example at the end of this section) Using the options above, we end up with a more complete interface. Run the code below so you can chat with Rick and Morty: ```py title = "Ask Rick a Question" description = """ The bot was trained to answer questions based on Rick and Morty dialogues. Ask Rick anything! <img src="https://huggingface.co/spaces/course-demos/Rick_and_Morty_QA/resolve/main/rick.png" width=200px> """ article = "Check out [the original Rick and Morty Bot](https://huggingface.co/spaces/kingabzpro/Rick_and_Morty_Bot) that this demo is based off of." gr.Interface( fn=predict, inputs="textbox", outputs="text", title=title, description=description, article=article, examples=[["What are you doing?"], ["Where should we time travel to?"]], ).launch() ``` Using the options above, we end up with a more complete interface. 
Try the interface below: <iframe src="https://course-demos-Rick-and-Morty-QA.hf.space" frameBorder="0" height="800" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe> ### Sharing your demo with temporary links[[sharing-your-demo-with-temporary-links]] Now that we have a working demo of our machine learning model, let's learn how to easily share a link to our interface. Interfaces can be easily shared publicly by setting `share=True` in the `launch()` method: ```python gr.Interface(classify_image, "image", "label").launch(share=True) ``` This generates a public, shareable link that you can send to anybody! When you send this link, the user on the other side can try out the model in their browser for up to 72 hours. Because the processing happens on your device (as long as your device stays on!), you don't have to worry about packaging any dependencies. If you're working out of a Google Colab notebook, a share link is always automatically created. It usually looks something like this: **XXXXX.gradio.app**. Although the link is served through a Gradio link, we are only a proxy for your local server, and do not store any data sent through the interfaces. Keep in mind, however, that these links are publicly accessible, meaning that anyone can use your model for prediction! Therefore, make sure not to expose any sensitive information through the functions you write, or allow any critical changes to occur on your device. If you set `share=False` (the default), only a local link is created. ### Hosting your demo on Hugging Face Spaces[[hosting-your-demo-on-hugging-face-spaces]] A share link that you can pass around to collegues is cool, but how can you permanently host your demo and have it exist in its own "space" on the internet? Hugging Face Spaces provides the infrastructure to permanently host your Gradio model on the internet, **for free**! Spaces allows you to create and push to a (public or private) repo, where your Gradio interface code will exist in an `app.py` file. [Read a step-by-step tutorial](https://huggingface.co/blog/gradio-spaces) to get started, or watch an example video below. <Youtube id="LS9Y2wDVI0k" /> ## ✏️ Let's apply it![[lets-apply-it]] Using what we just learned in the sections so far, let's create the sketch recognition demo we saw in [section one of this chapter](/course/chapter9/1). Let's add some customization to our interface and set `share=True` to create a public link we can pass around. We can load the labels from [class_names.txt](https://huggingface.co/spaces/dawood/Sketch-Recognition/blob/main/class_names.txt) and load the pre-trained pytorch model from [pytorch_model.bin](https://huggingface.co/spaces/dawood/Sketch-Recognition/blob/main/pytorch_model.bin). Download these files by following the link and clicking download on the top left corner of the file preview. 
Let's take a look at the code below to see how we use these files to load our model and create a `predict()` function: ```py from pathlib import Path import torch import gradio as gr from torch import nn LABELS = Path("class_names.txt").read_text().splitlines() model = nn.Sequential( nn.Conv2d(1, 32, 3, padding="same"), nn.ReLU(), nn.MaxPool2d(2), nn.Conv2d(32, 64, 3, padding="same"), nn.ReLU(), nn.MaxPool2d(2), nn.Conv2d(64, 128, 3, padding="same"), nn.ReLU(), nn.MaxPool2d(2), nn.Flatten(), nn.Linear(1152, 256), nn.ReLU(), nn.Linear(256, len(LABELS)), ) state_dict = torch.load("pytorch_model.bin", map_location="cpu") model.load_state_dict(state_dict, strict=False) model.eval() def predict(im): x = torch.tensor(im, dtype=torch.float32).unsqueeze(0).unsqueeze(0) / 255.0 with torch.no_grad(): out = model(x) probabilities = torch.nn.functional.softmax(out[0], dim=0) values, indices = torch.topk(probabilities, 5) return {LABELS[i]: v.item() for i, v in zip(indices, values)} ``` Now that we have a `predict()` function. The next step is to define and launch our gradio interface: ```py interface = gr.Interface( predict, inputs="sketchpad", outputs="label", theme="huggingface", title="Sketch Recognition", description="Who wants to play Pictionary? Draw a common object like a shovel or a laptop, and the algorithm will guess in real time!", article="<p style='text-align: center'>Sketch Recognition | Demo Model</p>", live=True, ) interface.launch(share=True) ``` <iframe src="https://course-demos-Sketch-Recognition.hf.space" frameBorder="0" height="650" title="Gradio app" class="container p-0 flex-grow space-iframe" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe> Notice the `live=True` parameter in `Interface`, which means that the sketch demo makes a prediction every time someone draws on the sketchpad (no submit button!). Furthermore, we also set the `share=True` argument in the `launch()` method. This will create a public link that you can send to anyone! When you send this link, the user on the other side can try out the sketch recognition model. To reiterate, you could also host the model on Hugging Face Spaces, which is how we are able to embed the demo above. Next up, we'll cover other ways that Gradio can be used with the Hugging Face ecosystem!
huggingface/course/blob/main/chapters/en/chapter9/4.mdx
Gradio Demo: file_component ``` !pip install -q gradio ``` ``` import gradio as gr with gr.Blocks() as demo: gr.File() demo.launch() ```
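The notebook above only renders a bare `gr.File` component. As a small, hypothetical extension (not part of the original demo), you could wire it to a callback that reports what was uploaded; depending on your Gradio version the callback receives either a temp-file object or a path string, and this sketch assumes a path string:

```py
import os
import gradio as gr

def file_info(path):
    # Assumes the File component passes a filesystem path string.
    if path is None:
        return "No file uploaded yet."
    return f"{os.path.basename(path)} ({os.path.getsize(path)} bytes)"

with gr.Blocks() as demo:
    uploaded = gr.File()
    info = gr.Textbox(label="File info")
    uploaded.change(fn=file_info, inputs=uploaded, outputs=info)

demo.launch()
```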
gradio-app/gradio/blob/main/demo/file_component/run.ipynb
``python from transformers import AutoModelForCausalLM from peft import get_peft_config, get_peft_model, PrefixTuningConfig, TaskType, PeftType import torch from datasets import load_dataset import os from transformers import AutoTokenizer from torch.utils.data import DataLoader from transformers import default_data_collator, get_linear_schedule_with_warmup from tqdm import tqdm from datasets import load_dataset device = "cuda" model_name_or_path = "bigscience/bloomz-560m" tokenizer_name_or_path = "bigscience/bloomz-560m" peft_config = PrefixTuningConfig(task_type=TaskType.CAUSAL_LM, num_virtual_tokens=30) dataset_name = "twitter_complaints" checkpoint_name = f"{dataset_name}_{model_name_or_path}_{peft_config.peft_type}_{peft_config.task_type}_v1.pt".replace( "/", "_" ) text_column = "Tweet text" label_column = "text_label" max_length = 64 lr = 3e-2 num_epochs = 50 batch_size = 8 ``` ```python from datasets import load_dataset dataset = load_dataset("ought/raft", dataset_name) classes = [k.replace("_", " ") for k in dataset["train"].features["Label"].names] print(classes) dataset = dataset.map( lambda x: {"text_label": [classes[label] for label in x["Label"]]}, batched=True, num_proc=1, ) print(dataset) dataset["train"][0] ``` ```python # data preprocessing tokenizer = AutoTokenizer.from_pretrained(model_name_or_path) if tokenizer.pad_token_id is None: tokenizer.pad_token_id = tokenizer.eos_token_id target_max_length = max([len(tokenizer(class_label)["input_ids"]) for class_label in classes]) print(target_max_length) def preprocess_function(examples): batch_size = len(examples[text_column]) inputs = [f"{text_column} : {x} Label : " for x in examples[text_column]] targets = [str(x) for x in examples[label_column]] model_inputs = tokenizer(inputs) labels = tokenizer(targets, add_special_tokens=False) # don't add bos token because we concatenate with inputs for i in range(batch_size): sample_input_ids = model_inputs["input_ids"][i] label_input_ids = labels["input_ids"][i] + [tokenizer.eos_token_id] # print(i, sample_input_ids, label_input_ids) model_inputs["input_ids"][i] = sample_input_ids + label_input_ids labels["input_ids"][i] = [-100] * len(sample_input_ids) + label_input_ids model_inputs["attention_mask"][i] = [1] * len(model_inputs["input_ids"][i]) # print(model_inputs) for i in range(batch_size): sample_input_ids = model_inputs["input_ids"][i] label_input_ids = labels["input_ids"][i] model_inputs["input_ids"][i] = [tokenizer.pad_token_id] * ( max_length - len(sample_input_ids) ) + sample_input_ids model_inputs["attention_mask"][i] = [0] * (max_length - len(sample_input_ids)) + model_inputs[ "attention_mask" ][i] labels["input_ids"][i] = [-100] * (max_length - len(sample_input_ids)) + label_input_ids model_inputs["input_ids"][i] = torch.tensor(model_inputs["input_ids"][i][:max_length]) model_inputs["attention_mask"][i] = torch.tensor(model_inputs["attention_mask"][i][:max_length]) labels["input_ids"][i] = torch.tensor(labels["input_ids"][i][:max_length]) model_inputs["labels"] = labels["input_ids"] return model_inputs processed_datasets = dataset.map( preprocess_function, batched=True, num_proc=1, remove_columns=dataset["train"].column_names, load_from_cache_file=False, desc="Running tokenizer on dataset", ) train_dataset = processed_datasets["train"] eval_dataset = processed_datasets["train"] train_dataloader = DataLoader( train_dataset, shuffle=True, collate_fn=default_data_collator, batch_size=batch_size, pin_memory=True ) eval_dataloader = DataLoader(eval_dataset, 
collate_fn=default_data_collator, batch_size=batch_size, pin_memory=True) ``` ```python def test_preprocess_function(examples): batch_size = len(examples[text_column]) inputs = [f"{text_column} : {x} Label : " for x in examples[text_column]] model_inputs = tokenizer(inputs) # print(model_inputs) for i in range(batch_size): sample_input_ids = model_inputs["input_ids"][i] model_inputs["input_ids"][i] = [tokenizer.pad_token_id] * ( max_length - len(sample_input_ids) ) + sample_input_ids model_inputs["attention_mask"][i] = [0] * (max_length - len(sample_input_ids)) + model_inputs[ "attention_mask" ][i] model_inputs["input_ids"][i] = torch.tensor(model_inputs["input_ids"][i][:max_length]) model_inputs["attention_mask"][i] = torch.tensor(model_inputs["attention_mask"][i][:max_length]) return model_inputs test_dataset = dataset["test"].map( test_preprocess_function, batched=True, num_proc=1, remove_columns=dataset["train"].column_names, load_from_cache_file=False, desc="Running tokenizer on dataset", ) test_dataloader = DataLoader(test_dataset, collate_fn=default_data_collator, batch_size=batch_size, pin_memory=True) next(iter(test_dataloader)) ``` ```python next(iter(train_dataloader)) ``` ```python len(test_dataloader) ``` ```python next(iter(test_dataloader)) ``` ```python # creating model model = AutoModelForCausalLM.from_pretrained(model_name_or_path) model = get_peft_model(model, peft_config) model.print_trainable_parameters() ``` ```python model.print_trainable_parameters() ``` ```python model ``` ```python model.peft_config ``` ```python # model # optimizer and lr scheduler optimizer = torch.optim.AdamW(model.parameters(), lr=lr) lr_scheduler = get_linear_schedule_with_warmup( optimizer=optimizer, num_warmup_steps=0, num_training_steps=(len(train_dataloader) * num_epochs), ) ``` ```python # training and evaluation model = model.to(device) for epoch in range(num_epochs): model.train() total_loss = 0 for step, batch in enumerate(tqdm(train_dataloader)): batch = {k: v.to(device) for k, v in batch.items()} # print(batch) # print(batch["input_ids"].shape) outputs = model(**batch) loss = outputs.loss total_loss += loss.detach().float() loss.backward() optimizer.step() lr_scheduler.step() optimizer.zero_grad() model.eval() eval_loss = 0 eval_preds = [] for step, batch in enumerate(tqdm(eval_dataloader)): batch = {k: v.to(device) for k, v in batch.items()} with torch.no_grad(): outputs = model(**batch) loss = outputs.loss eval_loss += loss.detach().float() eval_preds.extend( tokenizer.batch_decode(torch.argmax(outputs.logits, -1).detach().cpu().numpy(), skip_special_tokens=True) ) eval_epoch_loss = eval_loss / len(eval_dataloader) eval_ppl = torch.exp(eval_epoch_loss) train_epoch_loss = total_loss / len(train_dataloader) train_ppl = torch.exp(train_epoch_loss) print(f"{epoch=}: {train_ppl=} {train_epoch_loss=} {eval_ppl=} {eval_epoch_loss=}") ``` ```python model.eval() i = 16 inputs = tokenizer(f'{text_column} : {dataset["test"][i]["Tweet text"]} Label : ', return_tensors="pt") print(dataset["test"][i]["Tweet text"]) print(inputs) with torch.no_grad(): inputs = {k: v.to(device) for k, v in inputs.items()} outputs = model.generate( input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"], max_new_tokens=10, eos_token_id=3 ) print(outputs) print(tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)) ``` You can push model to hub or save model locally. 
- Option1: Pushing the model to Hugging Face Hub ```python model.push_to_hub( f"{dataset_name}_{model_name_or_path}_{peft_config.peft_type}_{peft_config.task_type}".replace("/", "_"), token = "hf_..." ) ``` token (`bool` or `str`, *optional*): `token` is to be used for HTTP Bearer authorization when accessing remote files. If `True`, will use the token generated when running `huggingface-cli login` (stored in `~/.huggingface`). Will default to `True` if `repo_url` is not specified. Or you can get your token from https://huggingface.co/settings/token ``` - Or save model locally ```python peft_model_id = f"{dataset_name}_{model_name_or_path}_{peft_config.peft_type}_{peft_config.task_type}".replace("/", "_") model.save_pretrained(peft_model_id) ``` ```python # saving model peft_model_id = f"{dataset_name}_{model_name_or_path}_{peft_config.peft_type}_{peft_config.task_type}".replace( "/", "_" ) model.save_pretrained(peft_model_id) ``` ```python ckpt = f"{peft_model_id}/adapter_model.bin" !du -h $ckpt ``` ```python from peft import PeftModel, PeftConfig peft_model_id = f"{dataset_name}_{model_name_or_path}_{peft_config.peft_type}_{peft_config.task_type}".replace( "/", "_" ) config = PeftConfig.from_pretrained(peft_model_id) model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path) model = PeftModel.from_pretrained(model, peft_model_id) ``` ```python model.to(device) model.eval() i = 4 inputs = tokenizer(f'{text_column} : {dataset["test"][i]["Tweet text"]} Label : ', return_tensors="pt") print(dataset["test"][i]["Tweet text"]) print(inputs) with torch.no_grad(): inputs = {k: v.to(device) for k, v in inputs.items()} outputs = model.generate( input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"], max_new_tokens=10, eos_token_id=3 ) print(outputs) print(tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)) ```
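If you chose the push-to-Hub option above, loading the adapter back from the Hub mirrors the local loading shown earlier; the repo id below is a placeholder you would replace with the one you actually pushed to:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftConfig, PeftModel

peft_model_id = "your-username/your-prefix-tuning-adapter"  # placeholder repo id
config = PeftConfig.from_pretrained(peft_model_id)

# Load the frozen base model and tokenizer, then attach the prefix-tuning weights.
base_model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base_model, peft_model_id)
model.eval()
```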
huggingface/peft/blob/main/examples/causal_language_modeling/peft_prefix_tuning_clm.ipynb
How to preprocess pairs of sentences? We have seen how to tokenize single sentences and batch them together in the "Batching inputs together" video. If this code looks unfamiliar to you, be sure to check that video again! Here we will focus on tasks that classify pairs of sentences. For instance, we may want to classify whether two texts are paraphrases or not. Here is an example taken from the Quora Question Pairs dataset, which focuses on identifying duplicate questions. In the first pair, the two questions are duplicates; in the second, they are not. Another pair classification problem is when we want to know if two sentences are logically related or not (a problem called Natural Language Inference or NLI). In this example taken from the MultiNLI dataset, we have a pair of sentences for each possible label: contradiction, neutral or entailment (which is a fancy way of saying the first sentence implies the second). So classifying pairs of sentences is a problem worth studying. In fact, in the GLUE benchmark (which is an academic benchmark for text classification), 8 of the 10 datasets are focused on tasks using pairs of sentences. That's why models like BERT are often pretrained with a dual objective: on top of the language modeling objective, they often have an objective related to sentence pairs. For instance, during pretraining, BERT is shown pairs of sentences and must predict both the value of randomly masked tokens and whether the second sentence follows from the first. Fortunately, the tokenizer from the Transformers library has a nice API to deal with pairs of sentences: you just have to pass them as two arguments to the tokenizer. On top of the input IDs and the attention mask we studied already, it returns a new field called token type IDs, which tells the model which tokens belong to the first sentence and which ones belong to the second sentence. Zooming in a little bit, here are the input IDs, aligned with the tokens they correspond to, their respective token type ID and attention mask. We can see the tokenizer also added special tokens, so we have a CLS token, the tokens from the first sentence, a SEP token, the tokens from the second sentence, and a final SEP token. If we have several pairs of sentences, we can tokenize them together by passing the list of first sentences, then the list of second sentences and all the keyword arguments we studied already, like padding=True. Zooming in on the result, we can see how the tokenizer added padding to the second pair of sentences, to make the two outputs the same length, and properly dealt with token type IDs and attention masks for the two sentences. This is then all ready to pass through our model!
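As a minimal code illustration of the API described in this video (the checkpoint name is just an example), passing two sentences, or two lists of sentences, to the tokenizer produces the token type IDs discussed above:

```py
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# A single pair of sentences: token type IDs are 0 for the first sentence and 1 for the second.
encoded = tokenizer("Is this question a duplicate?", "Is this a duplicate question?")
print(encoded["token_type_ids"])

# A batch of pairs: pass the list of first sentences, then the list of second sentences.
batch = tokenizer(
    ["Is this question a duplicate?", "The weather is nice."],
    ["Is this a duplicate question?", "It is sunny outside."],
    padding=True,
)
print(batch["input_ids"])
print(batch["attention_mask"])
```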
huggingface/course/blob/main/subtitles/en/raw/chapter3/02c_preprocess-sentence-pairs-tf.md
Gradio Demo: blocks_group ``` !pip install -q gradio ``` ``` import gradio as gr def greet(name): return "Hello " + name + "!" with gr.Blocks() as demo: gr.Markdown("### This is a couple of elements without any gr.Group. Form elements naturally group together anyway.") gr.Textbox("A") gr.Number(3) gr.Button() gr.Image() gr.Slider() gr.Markdown("### This is the same set put in a gr.Group.") with gr.Group(): gr.Textbox("A") gr.Number(3) gr.Button() gr.Image() gr.Slider() gr.Markdown("### Now in a Row, no group.") with gr.Row(): gr.Textbox("A") gr.Number(3) gr.Button() gr.Image() gr.Slider() gr.Markdown("### Now in a Row in a group.") with gr.Group(): with gr.Row(): gr.Textbox("A") gr.Number(3) gr.Button() gr.Image() gr.Slider() gr.Markdown("### Several rows grouped together.") with gr.Group(): with gr.Row(): gr.Textbox("A") gr.Number(3) gr.Button() with gr.Row(): gr.Image() gr.Audio() gr.Markdown("### Several columns grouped together. If columns are uneven, there is a gray group background.") with gr.Group(): with gr.Row(): with gr.Column(): name = gr.Textbox(label="Name") btn = gr.Button("Hello") gr.Dropdown(["a", "b", "c"], interactive=True) gr.Number() gr.Textbox() with gr.Column(): gr.Image() gr.Dropdown(["a", "b", "c"], interactive=True) with gr.Row(): gr.Number(scale=2) gr.Textbox() gr.Markdown("### container=False removes label, padding, and block border, placing elements 'directly' on background.") gr.Radio([1,2,3], container=False) gr.Textbox(container=False) gr.Image("https://picsum.photos/id/237/200/300", container=False, height=200) gr.Markdown("### Textbox, Dropdown, and Number input boxes takes up full space when within a group without a container.") with gr.Group(): name = gr.Textbox(label="Name") output = gr.Textbox(show_label=False, container=False) greet_btn = gr.Button("Greet") with gr.Row(): gr.Dropdown(["a", "b", "c"], interactive=True, container=False) gr.Textbox(container=False) gr.Number(container=False) gr.Image(height=100) greet_btn.click(fn=greet, inputs=name, outputs=output, api_name="greet") gr.Markdown("### More examples") with gr.Group(): gr.Chatbot() with gr.Row(): name = gr.Textbox(label="Prompot", container=False) go = gr.Button("go", scale=0) with gr.Column(): gr.Radio([1,2,3], container=False) gr.Slider(0, 20, container=False) with gr.Group(): with gr.Row(): gr.Dropdown(["a", "b", "c"], interactive=True, container=False, elem_id="here2") gr.Number(container=False) gr.Textbox(container=False) with gr.Row(): with gr.Column(): gr.Dropdown(["a", "b", "c"], interactive=True, container=False, elem_id="here2") with gr.Column(): gr.Number(container=False) with gr.Column(): gr.Textbox(container=False) if __name__ == "__main__": demo.launch() ```
gradio-app/gradio/blob/main/demo/blocks_group/run.ipynb