{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "VRNFFldCsmkI"
},
"source": [
"# CycleGAN\n",
"\n",
"**Author:** [A_K_Nain](https://twitter.com/A_K_Nain)
\n",
"**Date created:** 2020/08/12
\n",
"**Last modified:** 2020/08/12
\n",
"**Description:** Implementation of CycleGAN."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "jVwB_Ph0smkK"
},
"source": [
"## CycleGAN\n",
"\n",
"CycleGAN is a model that aims to solve the image-to-image translation\n",
"problem. The goal of the image-to-image translation problem is to learn the\n",
"mapping between an input image and an output image using a training set of\n",
"aligned image pairs. However, obtaining paired examples isn't always feasible.\n",
"CycleGAN tries to learn this mapping without requiring paired input-output images,\n",
"using cycle-consistent adversarial networks.\n",
"\n",
"- [Paper](https://arxiv.org/pdf/1703.10593.pdf)\n",
"- [Original implementation](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "bOpg5meHsmkL"
},
"source": [
"## Setup"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "v46aKoQZtNcW"
},
"outputs": [],
"source": [
"%%capture\n",
"!pip install tensorflow_addons tensorflow_datasets"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "BgLCBj6asmkL"
},
"outputs": [],
"source": [
"import os\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"\n",
"import tensorflow as tf\n",
"from tensorflow import keras\n",
"from tensorflow.keras import layers\n",
"\n",
"import tensorflow_addons as tfa\n",
"import tensorflow_datasets as tfds\n",
"\n",
"tfds.disable_progress_bar()\n",
"autotune = tf.data.AUTOTUNE\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "pa0OuHEhsmkM"
},
"source": [
"## Prepare the dataset\n",
"\n",
"In this example, we will be using the\n",
"[horse to zebra](https://www.tensorflow.org/datasets/catalog/cycle_gan#cycle_ganhorse2zebra)\n",
"dataset."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ejy7DeR-smkM"
},
"outputs": [],
"source": [
"# Load the horse-zebra dataset using tensorflow-datasets.\n",
"dataset, _ = tfds.load(\"cycle_gan/horse2zebra\", with_info=True, as_supervised=True)\n",
"train_horses, train_zebras = dataset[\"trainA\"], dataset[\"trainB\"]\n",
"test_horses, test_zebras = dataset[\"testA\"], dataset[\"testB\"]\n",
"\n",
"# Define the standard image size.\n",
"orig_img_size = (286, 286)\n",
"# Size of the random crops to be used during training.\n",
"input_img_size = (256, 256, 3)\n",
"# Weights initializer for the layers.\n",
"kernel_init = keras.initializers.RandomNormal(mean=0.0, stddev=0.02)\n",
"# Gamma initializer for instance normalization.\n",
"gamma_init = keras.initializers.RandomNormal(mean=0.0, stddev=0.02)\n",
"\n",
"buffer_size = 256\n",
"batch_size = 1\n",
"\n",
"\n",
"def normalize_img(img):\n",
" img = tf.cast(img, dtype=tf.float32)\n",
" # Map values in the range [-1, 1]\n",
" return (img / 127.5) - 1.0\n",
"\n",
"\n",
"def preprocess_train_image(img, label):\n",
" # Random flip\n",
" img = tf.image.random_flip_left_right(img)\n",
" # Resize to the original size first\n",
" img = tf.image.resize(img, [*orig_img_size])\n",
" # Random crop to 256X256\n",
" img = tf.image.random_crop(img, size=[*input_img_size])\n",
" # Normalize the pixel values in the range [-1, 1]\n",
" img = normalize_img(img)\n",
" return img\n",
"\n",
"\n",
"def preprocess_test_image(img, label):\n",
" # Only resizing and normalization for the test images.\n",
" img = tf.image.resize(img, [input_img_size[0], input_img_size[1]])\n",
" img = normalize_img(img)\n",
" return img\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ZjNdjd9PsmkN"
},
"source": [
"## Create `Dataset` objects"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "bFRrL22WsmkO"
},
"outputs": [],
"source": [
"\n",
"# Apply the preprocessing operations to the training data\n",
"train_horses = (\n",
" train_horses.map(preprocess_train_image, num_parallel_calls=autotune)\n",
" .cache()\n",
" .shuffle(buffer_size)\n",
" .batch(batch_size)\n",
")\n",
"train_zebras = (\n",
" train_zebras.map(preprocess_train_image, num_parallel_calls=autotune)\n",
" .cache()\n",
" .shuffle(buffer_size)\n",
" .batch(batch_size)\n",
")\n",
"\n",
"# Apply the preprocessing operations to the test data\n",
"test_horses = (\n",
" test_horses.map(preprocess_test_image, num_parallel_calls=autotune)\n",
" .cache()\n",
" .shuffle(buffer_size)\n",
" .batch(batch_size)\n",
")\n",
"test_zebras = (\n",
" test_zebras.map(preprocess_test_image, num_parallel_calls=autotune)\n",
" .cache()\n",
" .shuffle(buffer_size)\n",
" .batch(batch_size)\n",
")\n"
]
},
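{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check, we can pull one batch from each pipeline and\n",
"confirm that the preprocessing above produces batches of shape\n",
"`(1, 256, 256, 3)` with pixel values in `[-1, 1]`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"sample_horse = next(iter(train_horses))\n",
"sample_zebra = next(iter(train_zebras))\n",
"print(\"horse batch:\", sample_horse.shape)  # (1, 256, 256, 3)\n",
"print(\"zebra batch:\", sample_zebra.shape)  # (1, 256, 256, 3)\n",
"print(\"value range:\", tf.reduce_min(sample_horse).numpy(), tf.reduce_max(sample_horse).numpy())"
]
},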
{
"cell_type": "markdown",
"metadata": {
"id": "cqnAgImAsmkO"
},
"source": [
"## Visualize some samples"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "vs6JE_nksmkP"
},
"outputs": [],
"source": [
"\n",
"_, ax = plt.subplots(4, 2, figsize=(10, 15))\n",
"for i, samples in enumerate(zip(train_horses.take(4), train_zebras.take(4))):\n",
" horse = (((samples[0][0] * 127.5) + 127.5).numpy()).astype(np.uint8)\n",
" zebra = (((samples[1][0] * 127.5) + 127.5).numpy()).astype(np.uint8)\n",
" ax[i, 0].imshow(horse)\n",
" ax[i, 1].imshow(zebra)\n",
"plt.show()\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "m7Wlng40smkP"
},
"source": [
"## Building blocks used in the CycleGAN generators and discriminators"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "gZgsli56smkQ"
},
"outputs": [],
"source": [
"\n",
"class ReflectionPadding2D(layers.Layer):\n",
" \"\"\"Implements Reflection Padding as a layer.\n",
"\n",
" Args:\n",
" padding(tuple): Amount of padding for the\n",
" spatial dimensions.\n",
"\n",
" Returns:\n",
" A padded tensor with the same type as the input tensor.\n",
" \"\"\"\n",
"\n",
" def __init__(self, padding=(1, 1), **kwargs):\n",
" self.padding = tuple(padding)\n",
" super(ReflectionPadding2D, self).__init__(**kwargs)\n",
"\n",
" def call(self, input_tensor, mask=None):\n",
" padding_width, padding_height = self.padding\n",
" padding_tensor = [\n",
" [0, 0],\n",
" [padding_height, padding_height],\n",
" [padding_width, padding_width],\n",
" [0, 0],\n",
" ]\n",
" return tf.pad(input_tensor, padding_tensor, mode=\"REFLECT\")\n",
"\n",
"\n",
"def residual_block(\n",
" x,\n",
" activation,\n",
" kernel_initializer=kernel_init,\n",
" kernel_size=(3, 3),\n",
" strides=(1, 1),\n",
" padding=\"valid\",\n",
" gamma_initializer=gamma_init,\n",
" use_bias=False,\n",
"):\n",
" dim = x.shape[-1]\n",
" input_tensor = x\n",
"\n",
" x = ReflectionPadding2D()(input_tensor)\n",
" x = layers.Conv2D(\n",
" dim,\n",
" kernel_size,\n",
" strides=strides,\n",
" kernel_initializer=kernel_initializer,\n",
" padding=padding,\n",
" use_bias=use_bias,\n",
" )(x)\n",
" x = tfa.layers.InstanceNormalization(gamma_initializer=gamma_initializer)(x)\n",
" x = activation(x)\n",
"\n",
" x = ReflectionPadding2D()(x)\n",
" x = layers.Conv2D(\n",
" dim,\n",
" kernel_size,\n",
" strides=strides,\n",
" kernel_initializer=kernel_initializer,\n",
" padding=padding,\n",
" use_bias=use_bias,\n",
" )(x)\n",
" x = tfa.layers.InstanceNormalization(gamma_initializer=gamma_initializer)(x)\n",
" x = layers.add([input_tensor, x])\n",
" return x\n",
"\n",
"\n",
"def downsample(\n",
" x,\n",
" filters,\n",
" activation,\n",
" kernel_initializer=kernel_init,\n",
" kernel_size=(3, 3),\n",
" strides=(2, 2),\n",
" padding=\"same\",\n",
" gamma_initializer=gamma_init,\n",
" use_bias=False,\n",
"):\n",
" x = layers.Conv2D(\n",
" filters,\n",
" kernel_size,\n",
" strides=strides,\n",
" kernel_initializer=kernel_initializer,\n",
" padding=padding,\n",
" use_bias=use_bias,\n",
" )(x)\n",
" x = tfa.layers.InstanceNormalization(gamma_initializer=gamma_initializer)(x)\n",
" if activation:\n",
" x = activation(x)\n",
" return x\n",
"\n",
"\n",
"def upsample(\n",
" x,\n",
" filters,\n",
" activation,\n",
" kernel_size=(3, 3),\n",
" strides=(2, 2),\n",
" padding=\"same\",\n",
" kernel_initializer=kernel_init,\n",
" gamma_initializer=gamma_init,\n",
" use_bias=False,\n",
"):\n",
" x = layers.Conv2DTranspose(\n",
" filters,\n",
" kernel_size,\n",
" strides=strides,\n",
" padding=padding,\n",
" kernel_initializer=kernel_initializer,\n",
" use_bias=use_bias,\n",
" )(x)\n",
" x = tfa.layers.InstanceNormalization(gamma_initializer=gamma_initializer)(x)\n",
" if activation:\n",
" x = activation(x)\n",
" return x\n"
]
},
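{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of what `ReflectionPadding2D` does: with the default\n",
"`(1, 1)` padding, it grows each spatial dimension by 2, mirroring the\n",
"border pixels instead of zero-padding."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"dummy = tf.reshape(tf.range(9, dtype=tf.float32), (1, 3, 3, 1))\n",
"padded = ReflectionPadding2D()(dummy)\n",
"print(padded.shape)  # (1, 5, 5, 1)\n",
"print(padded[0, :, :, 0].numpy())  # border rows/columns mirror the input"
]
},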
{
"cell_type": "markdown",
"metadata": {
"id": "fXdgpeLGsmkQ"
},
"source": [
"## Build the generators\n",
"\n",
"The generator consists of downsampling blocks: nine residual blocks\n",
"and upsampling blocks. The structure of the generator is the following:\n",
"\n",
"```\n",
"c7s1-64 ==> Conv block with `relu` activation, filter size of 7\n",
"d128 ====|\n",
" |-> 2 downsampling blocks\n",
"d256 ====|\n",
"R256 ====|\n",
"R256 |\n",
"R256 |\n",
"R256 |\n",
"R256 |-> 9 residual blocks\n",
"R256 |\n",
"R256 |\n",
"R256 |\n",
"R256 ====|\n",
"u128 ====|\n",
" |-> 2 upsampling blocks\n",
"u64 ====|\n",
"c7s1-3 => Last conv block with `tanh` activation, filter size of 7.\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "G67IU1n_smkR"
},
"outputs": [],
"source": [
"\n",
"def get_resnet_generator(\n",
" filters=64,\n",
" num_downsampling_blocks=2,\n",
" num_residual_blocks=9,\n",
" num_upsample_blocks=2,\n",
" gamma_initializer=gamma_init,\n",
" name=None,\n",
"):\n",
" img_input = layers.Input(shape=input_img_size, name=name + \"_img_input\")\n",
" x = ReflectionPadding2D(padding=(3, 3))(img_input)\n",
" x = layers.Conv2D(filters, (7, 7), kernel_initializer=kernel_init, use_bias=False)(\n",
" x\n",
" )\n",
" x = tfa.layers.InstanceNormalization(gamma_initializer=gamma_initializer)(x)\n",
" x = layers.Activation(\"relu\")(x)\n",
"\n",
" # Downsampling\n",
" for _ in range(num_downsampling_blocks):\n",
" filters *= 2\n",
" x = downsample(x, filters=filters, activation=layers.Activation(\"relu\"))\n",
"\n",
" # Residual blocks\n",
" for _ in range(num_residual_blocks):\n",
" x = residual_block(x, activation=layers.Activation(\"relu\"))\n",
"\n",
" # Upsampling\n",
" for _ in range(num_upsample_blocks):\n",
" filters //= 2\n",
" x = upsample(x, filters, activation=layers.Activation(\"relu\"))\n",
"\n",
" # Final block\n",
" x = ReflectionPadding2D(padding=(3, 3))(x)\n",
" x = layers.Conv2D(3, (7, 7), padding=\"valid\")(x)\n",
" x = layers.Activation(\"tanh\")(x)\n",
"\n",
" model = keras.models.Model(img_input, x, name=name)\n",
" return model\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "72QKJBi8smkR"
},
"source": [
"## Build the discriminators\n",
"\n",
"The discriminators implement the following architecture:\n",
"`C64->C128->C256->C512`"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "IWRDg78-smkR"
},
"outputs": [],
"source": [
"\n",
"def get_discriminator(\n",
" filters=64, kernel_initializer=kernel_init, num_downsampling=3, name=None\n",
"):\n",
" img_input = layers.Input(shape=input_img_size, name=name + \"_img_input\")\n",
" x = layers.Conv2D(\n",
" filters,\n",
" (4, 4),\n",
" strides=(2, 2),\n",
" padding=\"same\",\n",
" kernel_initializer=kernel_initializer,\n",
" )(img_input)\n",
" x = layers.LeakyReLU(0.2)(x)\n",
"\n",
" num_filters = filters\n",
" for num_downsample_block in range(3):\n",
" num_filters *= 2\n",
" if num_downsample_block < 2:\n",
" x = downsample(\n",
" x,\n",
" filters=num_filters,\n",
" activation=layers.LeakyReLU(0.2),\n",
" kernel_size=(4, 4),\n",
" strides=(2, 2),\n",
" )\n",
" else:\n",
" x = downsample(\n",
" x,\n",
" filters=num_filters,\n",
" activation=layers.LeakyReLU(0.2),\n",
" kernel_size=(4, 4),\n",
" strides=(1, 1),\n",
" )\n",
"\n",
" x = layers.Conv2D(\n",
" 1, (4, 4), strides=(1, 1), padding=\"same\", kernel_initializer=kernel_initializer\n",
" )(x)\n",
"\n",
" model = keras.models.Model(inputs=img_input, outputs=x, name=name)\n",
" return model\n",
"\n",
"\n",
"# Get the generators\n",
"gen_G = get_resnet_generator(name=\"generator_G\")\n",
"gen_F = get_resnet_generator(name=\"generator_F\")\n",
"\n",
"# Get the discriminators\n",
"disc_X = get_discriminator(name=\"discriminator_X\")\n",
"disc_Y = get_discriminator(name=\"discriminator_Y\")\n"
]
},
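{
"cell_type": "markdown",
"metadata": {},
"source": [
"Another quick sanity check: the generators should map `(256, 256, 3)` images\n",
"to `(256, 256, 3)` images, and the discriminators are PatchGANs that output\n",
"a grid of per-patch scores rather than a single scalar. With the strides\n",
"used above, a 256x256 input yields a `(32, 32, 1)` map."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(\"gen_G output:\", gen_G.output_shape)    # (None, 256, 256, 3)\n",
"print(\"disc_X output:\", disc_X.output_shape)  # (None, 32, 32, 1)"
]
},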
{
"cell_type": "markdown",
"metadata": {
"id": "aX9EntYasmkS"
},
"source": [
"## Build the CycleGAN model\n",
"\n",
"We will override the `train_step()` method of the `Model` class\n",
"for training via `fit()`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "n8gUdq8gsmkS"
},
"outputs": [],
"source": [
"\n",
"class CycleGan(keras.Model):\n",
" def __init__(\n",
" self,\n",
" generator_G,\n",
" generator_F,\n",
" discriminator_X,\n",
" discriminator_Y,\n",
" lambda_cycle=10.0,\n",
" lambda_identity=0.5,\n",
" ):\n",
" super(CycleGan, self).__init__()\n",
" self.gen_G = generator_G\n",
" self.gen_F = generator_F\n",
" self.disc_X = discriminator_X\n",
" self.disc_Y = discriminator_Y\n",
" self.lambda_cycle = lambda_cycle\n",
" self.lambda_identity = lambda_identity\n",
"\n",
" def compile(\n",
" self,\n",
" gen_G_optimizer,\n",
" gen_F_optimizer,\n",
" disc_X_optimizer,\n",
" disc_Y_optimizer,\n",
" gen_loss_fn,\n",
" disc_loss_fn,\n",
" ):\n",
" super(CycleGan, self).compile()\n",
" self.gen_G_optimizer = gen_G_optimizer\n",
" self.gen_F_optimizer = gen_F_optimizer\n",
" self.disc_X_optimizer = disc_X_optimizer\n",
" self.disc_Y_optimizer = disc_Y_optimizer\n",
" self.generator_loss_fn = gen_loss_fn\n",
" self.discriminator_loss_fn = disc_loss_fn\n",
" self.cycle_loss_fn = keras.losses.MeanAbsoluteError()\n",
" self.identity_loss_fn = keras.losses.MeanAbsoluteError()\n",
"\n",
" def train_step(self, batch_data):\n",
" # x is Horse and y is zebra\n",
" real_x, real_y = batch_data\n",
"\n",
" # For CycleGAN, we need to calculate different\n",
" # kinds of losses for the generators and discriminators.\n",
" # We will perform the following steps here:\n",
" #\n",
" # 1. Pass real images through the generators and get the generated images\n",
" # 2. Pass the generated images back to the generators to check if we\n",
" # we can predict the original image from the generated image.\n",
" # 3. Do an identity mapping of the real images using the generators.\n",
" # 4. Pass the generated images in 1) to the corresponding discriminators.\n",
" # 5. Calculate the generators total loss (adverserial + cycle + identity)\n",
" # 6. Calculate the discriminators loss\n",
" # 7. Update the weights of the generators\n",
" # 8. Update the weights of the discriminators\n",
" # 9. Return the losses in a dictionary\n",
"\n",
" with tf.GradientTape(persistent=True) as tape:\n",
" # Horse to fake zebra\n",
" fake_y = self.gen_G(real_x, training=True)\n",
" # Zebra to fake horse -> y2x\n",
" fake_x = self.gen_F(real_y, training=True)\n",
"\n",
" # Cycle (Horse to fake zebra to fake horse): x -> y -> x\n",
" cycled_x = self.gen_F(fake_y, training=True)\n",
" # Cycle (Zebra to fake horse to fake zebra) y -> x -> y\n",
" cycled_y = self.gen_G(fake_x, training=True)\n",
"\n",
" # Identity mapping\n",
" same_x = self.gen_F(real_x, training=True)\n",
" same_y = self.gen_G(real_y, training=True)\n",
"\n",
" # Discriminator output\n",
" disc_real_x = self.disc_X(real_x, training=True)\n",
" disc_fake_x = self.disc_X(fake_x, training=True)\n",
"\n",
" disc_real_y = self.disc_Y(real_y, training=True)\n",
" disc_fake_y = self.disc_Y(fake_y, training=True)\n",
"\n",
" # Generator adverserial loss\n",
" gen_G_loss = self.generator_loss_fn(disc_fake_y)\n",
" gen_F_loss = self.generator_loss_fn(disc_fake_x)\n",
"\n",
" # Generator cycle loss\n",
" cycle_loss_G = self.cycle_loss_fn(real_y, cycled_y) * self.lambda_cycle\n",
" cycle_loss_F = self.cycle_loss_fn(real_x, cycled_x) * self.lambda_cycle\n",
"\n",
" # Generator identity loss\n",
" id_loss_G = (\n",
" self.identity_loss_fn(real_y, same_y)\n",
" * self.lambda_cycle\n",
" * self.lambda_identity\n",
" )\n",
" id_loss_F = (\n",
" self.identity_loss_fn(real_x, same_x)\n",
" * self.lambda_cycle\n",
" * self.lambda_identity\n",
" )\n",
"\n",
" # Total generator loss\n",
" total_loss_G = gen_G_loss + cycle_loss_G + id_loss_G\n",
" total_loss_F = gen_F_loss + cycle_loss_F + id_loss_F\n",
"\n",
" # Discriminator loss\n",
" disc_X_loss = self.discriminator_loss_fn(disc_real_x, disc_fake_x)\n",
" disc_Y_loss = self.discriminator_loss_fn(disc_real_y, disc_fake_y)\n",
"\n",
" # Get the gradients for the generators\n",
" grads_G = tape.gradient(total_loss_G, self.gen_G.trainable_variables)\n",
" grads_F = tape.gradient(total_loss_F, self.gen_F.trainable_variables)\n",
"\n",
" # Get the gradients for the discriminators\n",
" disc_X_grads = tape.gradient(disc_X_loss, self.disc_X.trainable_variables)\n",
" disc_Y_grads = tape.gradient(disc_Y_loss, self.disc_Y.trainable_variables)\n",
"\n",
" # Update the weights of the generators\n",
" self.gen_G_optimizer.apply_gradients(\n",
" zip(grads_G, self.gen_G.trainable_variables)\n",
" )\n",
" self.gen_F_optimizer.apply_gradients(\n",
" zip(grads_F, self.gen_F.trainable_variables)\n",
" )\n",
"\n",
" # Update the weights of the discriminators\n",
" self.disc_X_optimizer.apply_gradients(\n",
" zip(disc_X_grads, self.disc_X.trainable_variables)\n",
" )\n",
" self.disc_Y_optimizer.apply_gradients(\n",
" zip(disc_Y_grads, self.disc_Y.trainable_variables)\n",
" )\n",
"\n",
" return {\n",
" \"G_loss\": total_loss_G,\n",
" \"F_loss\": total_loss_F,\n",
" \"D_X_loss\": disc_X_loss,\n",
" \"D_Y_loss\": disc_Y_loss,\n",
" }\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "S5tmLYeBsmkT"
},
"source": [
"## Create a callback that periodically saves generated images"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "t9EqZ5uBsmkV"
},
"outputs": [],
"source": [
"\n",
"class GANMonitor(keras.callbacks.Callback):\n",
" \"\"\"A callback to generate and save images after each epoch\"\"\"\n",
"\n",
" def __init__(self, num_img=4):\n",
" self.num_img = num_img\n",
"\n",
" def on_epoch_end(self, epoch, logs=None):\n",
" _, ax = plt.subplots(4, 2, figsize=(12, 12))\n",
" for i, img in enumerate(test_horses.take(self.num_img)):\n",
" prediction = self.model.gen_G(img)[0].numpy()\n",
" prediction = (prediction * 127.5 + 127.5).astype(np.uint8)\n",
" img = (img[0] * 127.5 + 127.5).numpy().astype(np.uint8)\n",
"\n",
" ax[i, 0].imshow(img)\n",
" ax[i, 1].imshow(prediction)\n",
" ax[i, 0].set_title(\"Input image\")\n",
" ax[i, 1].set_title(\"Translated image\")\n",
" ax[i, 0].axis(\"off\")\n",
" ax[i, 1].axis(\"off\")\n",
"\n",
" prediction = keras.preprocessing.image.array_to_img(prediction)\n",
" prediction.save(\n",
" \"generated_img_{i}_{epoch}.png\".format(i=i, epoch=epoch + 1)\n",
" )\n",
" plt.show()\n",
" plt.close()\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ARuT30Z0smkV"
},
"source": [
"## Train the end-to-end model"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "5I6aQE7SsmkV"
},
"outputs": [],
"source": [
"\n",
"# Loss function for evaluating adversarial loss\n",
"adv_loss_fn = keras.losses.MeanSquaredError()\n",
"\n",
"# Define the loss function for the generators\n",
"def generator_loss_fn(fake):\n",
" fake_loss = adv_loss_fn(tf.ones_like(fake), fake)\n",
" return fake_loss\n",
"\n",
"\n",
"# Define the loss function for the discriminators\n",
"def discriminator_loss_fn(real, fake):\n",
" real_loss = adv_loss_fn(tf.ones_like(real), real)\n",
" fake_loss = adv_loss_fn(tf.zeros_like(fake), fake)\n",
" return (real_loss + fake_loss) * 0.5\n",
"\n",
"\n",
"# Create cycle gan model\n",
"cycle_gan_model = CycleGan(\n",
" generator_G=gen_G, generator_F=gen_F, discriminator_X=disc_X, discriminator_Y=disc_Y\n",
")\n",
"\n",
"# Compile the model\n",
"cycle_gan_model.compile(\n",
" gen_G_optimizer=keras.optimizers.Adam(learning_rate=2e-4, beta_1=0.5),\n",
" gen_F_optimizer=keras.optimizers.Adam(learning_rate=2e-4, beta_1=0.5),\n",
" disc_X_optimizer=keras.optimizers.Adam(learning_rate=2e-4, beta_1=0.5),\n",
" disc_Y_optimizer=keras.optimizers.Adam(learning_rate=2e-4, beta_1=0.5),\n",
" gen_loss_fn=generator_loss_fn,\n",
" disc_loss_fn=discriminator_loss_fn,\n",
")\n",
"# Callbacks\n",
"plotter = GANMonitor()\n",
"checkpoint_filepath = \"./model_checkpoints/cyclegan_checkpoints.{epoch:03d}\"\n",
"model_checkpoint_callback = keras.callbacks.ModelCheckpoint(\n",
" filepath=checkpoint_filepath\n",
")\n",
"\n",
"# Here we will train the model for just one epoch as each epoch takes around\n",
"# 7 minutes on a single P100 backed machine.\n",
"cycle_gan_model.fit(\n",
" tf.data.Dataset.zip((train_horses, train_zebras)),\n",
" epochs=1,\n",
" callbacks=[plotter, model_checkpoint_callback],\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "2PsVmLKAsmkW"
},
"source": [
"Test the performance of the model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "aHEDqhP1smkW"
},
"outputs": [],
"source": [
"\n",
"# This model was trained for 90 epochs. We will be loading those weights\n",
"# here. Once the weights are loaded, we will take a few samples from the test\n",
"# data and check the model's performance."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "O-dNKakasmkW",
"colab": {
"base_uri": "https://localhost:8080/"
},
"outputId": "b988e2a2-2eb3-4e60-dfd1-3f6b5670aadb"
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
" % Total % Received % Xferd Average Speed Time Time Time Current\n",
" Dload Upload Total Spent Left Speed\n",
"100 660 100 660 0 0 1880 0 --:--:-- --:--:-- --:--:-- 1880\n",
"100 273M 100 273M 0 0 6400k 0 0:00:43 0:00:43 --:--:-- 9.8M\n"
]
}
],
"source": [
"!curl -LO https://github.com/AakashKumarNain/CycleGAN_TF2/releases/download/v1.0/saved_checkpoints.zip\n",
"!unzip -qq saved_checkpoints.zip"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "HSYC0yy_smkW"
},
"outputs": [],
"source": [
"\n",
"# Load the checkpoints\n",
"weight_file = \"./saved_checkpoints/cyclegan_checkpoints.090\"\n",
"cycle_gan_model.load_weights(weight_file).expect_partial()\n",
"print(\"Weights loaded successfully\")\n",
"\n",
"_, ax = plt.subplots(4, 2, figsize=(10, 15))\n",
"for i, img in enumerate(test_horses.take(4)):\n",
" prediction = cycle_gan_model.gen_G(img, training=False)[0].numpy()\n",
" prediction = (prediction * 127.5 + 127.5).astype(np.uint8)\n",
" img = (img[0] * 127.5 + 127.5).numpy().astype(np.uint8)\n",
"\n",
" ax[i, 0].imshow(img)\n",
" ax[i, 1].imshow(prediction)\n",
" ax[i, 0].set_title(\"Input image\")\n",
" ax[i, 1].set_title(\"Translated image\")\n",
" ax[i, 0].axis(\"off\")\n",
" ax[i, 1].axis(\"off\")\n",
"\n",
" prediction = keras.preprocessing.image.array_to_img(prediction)\n",
" prediction.save(\"predicted_img_{i}.png\".format(i=i))\n",
"plt.tight_layout()\n",
"plt.show()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "dqGvqRtn3FCK"
},
"outputs": [],
"source": [
"%%capture\n",
"!pip install huggingface-hub\n",
"!sudo apt-get install git-lfs\n",
"!git-lfs install"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "dGDIa_LR0tMe",
"colab": {
"base_uri": "https://localhost:8080/"
},
"outputId": "3fb3f53b-1834-45e7-9915-d14d83241565"
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"\n",
" _| _| _| _| _|_|_| _|_|_| _|_|_| _| _| _|_|_| _|_|_|_| _|_| _|_|_| _|_|_|_|\n",
" _| _| _| _| _| _| _| _|_| _| _| _| _| _| _| _|\n",
" _|_|_|_| _| _| _| _|_| _| _|_| _| _| _| _| _| _|_| _|_|_| _|_|_|_| _| _|_|_|\n",
" _| _| _| _| _| _| _| _| _| _| _|_| _| _| _| _| _| _| _|\n",
" _| _| _|_| _|_|_| _|_|_| _|_|_| _| _| _|_|_| _| _| _| _|_|_| _|_|_|_|\n",
"\n",
" To login, `huggingface_hub` now requires a token generated from https://huggingface.co/settings/token.\n",
" (Deprecated, will be removed in v0.3.0) To login with username and password instead, interrupt with Ctrl+C.\n",
" \n",
"Token: \n",
"Login successful\n",
"Your token has been saved to /root/.huggingface/token\n",
"\u001b[1m\u001b[31mAuthenticated through git-credential store but this isn't the helper defined on your machine.\n",
"You might have to re-authenticate when pushing to the Hugging Face Hub. Run the following command in your terminal in case you want to set this credential helper as the default\n",
"\n",
"git config --global credential.helper store\u001b[0m\n"
]
}
],
"source": [
"!huggingface-cli login"
]
},
{
"cell_type": "code",
"source": [
"from huggingface_hub.keras_mixin import push_to_hub_keras\n",
"push_to_hub_keras(model = cycle_gan_model.gen_G, repo_url = \"https://huggingface.co/keras-io/CycleGAN\", organization = \"keras-io\")"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 392,
"referenced_widgets": [
"9f66f2bc1fb84f53b2c326b515daa96f",
"4b348f6741934ab6b88ad5e886a82434",
"2c3728bab3284ee395e46c1d22e2ac7d",
"77ed1e6d34f5418fad61533c421aed45",
"b29512c9852440af8becf8d6864a163d",
"0a775f600f814d2c8b50ee5e329cc3dd",
"bec9223c3e21469187d5746a54fd8c00",
"6c4ce2edb90346ebaf8ceb9d214f8fcd",
"f2681a3f49674709828d886bc4369a00",
"48874b08b0e544bab61728cc706c1a77",
"68dafbdb04584e31bfa5781c19305f60",
"6a7cd029c1aa40608cc9d1a05e7d81e1",
"45cdd8e8b1d5481983054e6882d02681",
"f992b2e503ba461883ba92754f32c243",
"7c6d35b412564fa0aa57589e7a275f94",
"cd21fb3fe0804a2ab93afa9ff0006b31",
"b10d2fc6597f4264a6be63cf44293b30",
"68faaabe6b434d11801f2a5020471535",
"f569a72bcb1d4a26904e0cdccc48f710",
"ce2bb30406be4dcc8a0ff75cc133e520",
"077a56dbea0147aca3f30345474dbcd4",
"58a2ad54a42943faa8dbc4eeaf511791",
"9e8f7edd06c14ff785d0fa8913817164",
"5669cfef918f4bb886cb8326b76750c3",
"358b5b20b965421b958ef8b8b9ce8ffd",
"27c9ac578d6047cd83910befa69939ae",
"43e4512b87104f9b965cebcb3b52d1fe",
"281d15cdd58243389661e4ecc12b3379",
"d9aae11f4cd34b56ab92ebb379b86db0",
"e602268e25884e0c8ef24c7f7e55b10e",
"60bbe43fe4334d5aa81f0d3204245edc",
"ba2a44729ca94e7d9c0b7acb43417e36",
"66b4845f1cb64ba28eb49ea7b7f5bad1"
]
},
"id": "VSuvZe2bCzRZ",
"outputId": "623a7592-2f61-40af-9746-a301d18042b8"
},
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stderr",
"text": [
"Cloning https://huggingface.co/keras-io/CycleGAN into local empty directory.\n",
"WARNING:huggingface_hub.repository:Cloning https://huggingface.co/keras-io/CycleGAN into local empty directory.\n"
]
},
{
"output_type": "stream",
"name": "stdout",
"text": [
"WARNING:tensorflow:Compiled the loaded model, but the compiled metrics have yet to be built. `model.compile_metrics` will be empty until you train or evaluate the model.\n"
]
},
{
"output_type": "stream",
"name": "stderr",
"text": [
"WARNING:tensorflow:Compiled the loaded model, but the compiled metrics have yet to be built. `model.compile_metrics` will be empty until you train or evaluate the model.\n",
"WARNING:absl:Function `_wrapped_model` contains input name(s) generator_G_img_input with unsupported characters which will be renamed to generator_g_img_input in the SavedModel.\n"
]
},
{
"output_type": "stream",
"name": "stdout",
"text": [
"INFO:tensorflow:Assets written to: CycleGAN/assets\n"
]
},
{
"output_type": "stream",
"name": "stderr",
"text": [
"INFO:tensorflow:Assets written to: CycleGAN/assets\n",
"Adding files tracked by Git LFS: ['variables/variables.data-00000-of-00001']. This may take a bit of time if the files are large.\n",
"WARNING:huggingface_hub.repository:Adding files tracked by Git LFS: ['variables/variables.data-00000-of-00001']. This may take a bit of time if the files are large.\n"
]
},
{
"output_type": "display_data",
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "9f66f2bc1fb84f53b2c326b515daa96f",
"version_minor": 0,
"version_major": 2
},
"text/plain": [
"Upload file variables/variables.data-00000-of-00001: 0%| | 3.39k/43.5M [00:00, ?B/s]"
]
},
"metadata": {}
},
{
"output_type": "display_data",
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "6a7cd029c1aa40608cc9d1a05e7d81e1",
"version_minor": 0,
"version_major": 2
},
"text/plain": [
"Upload file saved_model.pb: 0%| | 3.40k/1.51M [00:00, ?B/s]"
]
},
"metadata": {}
},
{
"output_type": "display_data",
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "9e8f7edd06c14ff785d0fa8913817164",
"version_minor": 0,
"version_major": 2
},
"text/plain": [
"Upload file keras_metadata.pb: 3%|2 | 3.40k/115k [00:00, ?B/s]"
]
},
"metadata": {}
},
{
"output_type": "stream",
"name": "stderr",
"text": [
"To https://huggingface.co/keras-io/CycleGAN\n",
" 39b9bac..0b34793 main -> main\n",
"\n",
"WARNING:huggingface_hub.repository:To https://huggingface.co/keras-io/CycleGAN\n",
" 39b9bac..0b34793 main -> main\n",
"\n"
]
},
{
"output_type": "execute_result",
"data": {
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "string"
},
"text/plain": [
"'https://huggingface.co/keras-io/CycleGAN/commit/0b34793d94f9a0cc57128b6195f6b6358c0c4eaf'"
]
},
"metadata": {},
"execution_count": 19
}
]
}
],
"metadata": {
"accelerator": "GPU",
"colab": {
"collapsed_sections": [],
"machine_shape": "hm",
"name": "cyclegan",
"provenance": [],
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.0"
},
"widgets": {
"application/vnd.jupyter.widget-state+json": {
"9f66f2bc1fb84f53b2c326b515daa96f": {
"model_module": "@jupyter-widgets/controls",
"model_name": "HBoxModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "HBoxView",
"_dom_classes": [],
"_model_name": "HBoxModel",
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.5.0",
"box_style": "",
"layout": "IPY_MODEL_4b348f6741934ab6b88ad5e886a82434",
"_model_module": "@jupyter-widgets/controls",
"children": [
"IPY_MODEL_2c3728bab3284ee395e46c1d22e2ac7d",
"IPY_MODEL_77ed1e6d34f5418fad61533c421aed45",
"IPY_MODEL_b29512c9852440af8becf8d6864a163d"
]
}
},
"4b348f6741934ab6b88ad5e886a82434": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"model_module_version": "1.2.0",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
},
"2c3728bab3284ee395e46c1d22e2ac7d": {
"model_module": "@jupyter-widgets/controls",
"model_name": "HTMLModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "HTMLView",
"style": "IPY_MODEL_0a775f600f814d2c8b50ee5e329cc3dd",
"_dom_classes": [],
"description": "",
"_model_name": "HTMLModel",
"placeholder": "",
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"value": "Upload file variables/variables.data-00000-of-00001: 100%",
"_view_count": null,
"_view_module_version": "1.5.0",
"description_tooltip": null,
"_model_module": "@jupyter-widgets/controls",
"layout": "IPY_MODEL_bec9223c3e21469187d5746a54fd8c00"
}
},
"77ed1e6d34f5418fad61533c421aed45": {
"model_module": "@jupyter-widgets/controls",
"model_name": "FloatProgressModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "ProgressView",
"style": "IPY_MODEL_6c4ce2edb90346ebaf8ceb9d214f8fcd",
"_dom_classes": [],
"description": "",
"_model_name": "FloatProgressModel",
"bar_style": "success",
"max": 45592214,
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"value": 45592214,
"_view_count": null,
"_view_module_version": "1.5.0",
"orientation": "horizontal",
"min": 0,
"description_tooltip": null,
"_model_module": "@jupyter-widgets/controls",
"layout": "IPY_MODEL_f2681a3f49674709828d886bc4369a00"
}
},
"b29512c9852440af8becf8d6864a163d": {
"model_module": "@jupyter-widgets/controls",
"model_name": "HTMLModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "HTMLView",
"style": "IPY_MODEL_48874b08b0e544bab61728cc706c1a77",
"_dom_classes": [],
"description": "",
"_model_name": "HTMLModel",
"placeholder": "",
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"value": " 43.5M/43.5M [00:33<00:00, 1.24MB/s]",
"_view_count": null,
"_view_module_version": "1.5.0",
"description_tooltip": null,
"_model_module": "@jupyter-widgets/controls",
"layout": "IPY_MODEL_68dafbdb04584e31bfa5781c19305f60"
}
},
"0a775f600f814d2c8b50ee5e329cc3dd": {
"model_module": "@jupyter-widgets/controls",
"model_name": "DescriptionStyleModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "StyleView",
"_model_name": "DescriptionStyleModel",
"description_width": "",
"_view_module": "@jupyter-widgets/base",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.2.0",
"_model_module": "@jupyter-widgets/controls"
}
},
"bec9223c3e21469187d5746a54fd8c00": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"model_module_version": "1.2.0",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
},
"6c4ce2edb90346ebaf8ceb9d214f8fcd": {
"model_module": "@jupyter-widgets/controls",
"model_name": "ProgressStyleModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "StyleView",
"_model_name": "ProgressStyleModel",
"description_width": "",
"_view_module": "@jupyter-widgets/base",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.2.0",
"bar_color": null,
"_model_module": "@jupyter-widgets/controls"
}
},
"f2681a3f49674709828d886bc4369a00": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"model_module_version": "1.2.0",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
},
"48874b08b0e544bab61728cc706c1a77": {
"model_module": "@jupyter-widgets/controls",
"model_name": "DescriptionStyleModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "StyleView",
"_model_name": "DescriptionStyleModel",
"description_width": "",
"_view_module": "@jupyter-widgets/base",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.2.0",
"_model_module": "@jupyter-widgets/controls"
}
},
"68dafbdb04584e31bfa5781c19305f60": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"model_module_version": "1.2.0",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
},
"6a7cd029c1aa40608cc9d1a05e7d81e1": {
"model_module": "@jupyter-widgets/controls",
"model_name": "HBoxModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "HBoxView",
"_dom_classes": [],
"_model_name": "HBoxModel",
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.5.0",
"box_style": "",
"layout": "IPY_MODEL_45cdd8e8b1d5481983054e6882d02681",
"_model_module": "@jupyter-widgets/controls",
"children": [
"IPY_MODEL_f992b2e503ba461883ba92754f32c243",
"IPY_MODEL_7c6d35b412564fa0aa57589e7a275f94",
"IPY_MODEL_cd21fb3fe0804a2ab93afa9ff0006b31"
]
}
},
"45cdd8e8b1d5481983054e6882d02681": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"model_module_version": "1.2.0",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
},
"f992b2e503ba461883ba92754f32c243": {
"model_module": "@jupyter-widgets/controls",
"model_name": "HTMLModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "HTMLView",
"style": "IPY_MODEL_b10d2fc6597f4264a6be63cf44293b30",
"_dom_classes": [],
"description": "",
"_model_name": "HTMLModel",
"placeholder": "",
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"value": "Upload file saved_model.pb: 100%",
"_view_count": null,
"_view_module_version": "1.5.0",
"description_tooltip": null,
"_model_module": "@jupyter-widgets/controls",
"layout": "IPY_MODEL_68faaabe6b434d11801f2a5020471535"
}
},
"7c6d35b412564fa0aa57589e7a275f94": {
"model_module": "@jupyter-widgets/controls",
"model_name": "FloatProgressModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "ProgressView",
"style": "IPY_MODEL_f569a72bcb1d4a26904e0cdccc48f710",
"_dom_classes": [],
"description": "",
"_model_name": "FloatProgressModel",
"bar_style": "success",
"max": 1580532,
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"value": 1580532,
"_view_count": null,
"_view_module_version": "1.5.0",
"orientation": "horizontal",
"min": 0,
"description_tooltip": null,
"_model_module": "@jupyter-widgets/controls",
"layout": "IPY_MODEL_ce2bb30406be4dcc8a0ff75cc133e520"
}
},
"cd21fb3fe0804a2ab93afa9ff0006b31": {
"model_module": "@jupyter-widgets/controls",
"model_name": "HTMLModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "HTMLView",
"style": "IPY_MODEL_077a56dbea0147aca3f30345474dbcd4",
"_dom_classes": [],
"description": "",
"_model_name": "HTMLModel",
"placeholder": "",
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"value": " 1.51M/1.51M [00:33<00:00, 33.1kB/s]",
"_view_count": null,
"_view_module_version": "1.5.0",
"description_tooltip": null,
"_model_module": "@jupyter-widgets/controls",
"layout": "IPY_MODEL_58a2ad54a42943faa8dbc4eeaf511791"
}
},
"b10d2fc6597f4264a6be63cf44293b30": {
"model_module": "@jupyter-widgets/controls",
"model_name": "DescriptionStyleModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "StyleView",
"_model_name": "DescriptionStyleModel",
"description_width": "",
"_view_module": "@jupyter-widgets/base",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.2.0",
"_model_module": "@jupyter-widgets/controls"
}
},
"68faaabe6b434d11801f2a5020471535": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"model_module_version": "1.2.0",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
},
"f569a72bcb1d4a26904e0cdccc48f710": {
"model_module": "@jupyter-widgets/controls",
"model_name": "ProgressStyleModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "StyleView",
"_model_name": "ProgressStyleModel",
"description_width": "",
"_view_module": "@jupyter-widgets/base",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.2.0",
"bar_color": null,
"_model_module": "@jupyter-widgets/controls"
}
},
"ce2bb30406be4dcc8a0ff75cc133e520": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"model_module_version": "1.2.0",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
},
"077a56dbea0147aca3f30345474dbcd4": {
"model_module": "@jupyter-widgets/controls",
"model_name": "DescriptionStyleModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "StyleView",
"_model_name": "DescriptionStyleModel",
"description_width": "",
"_view_module": "@jupyter-widgets/base",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.2.0",
"_model_module": "@jupyter-widgets/controls"
}
},
"58a2ad54a42943faa8dbc4eeaf511791": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"model_module_version": "1.2.0",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
},
"9e8f7edd06c14ff785d0fa8913817164": {
"model_module": "@jupyter-widgets/controls",
"model_name": "HBoxModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "HBoxView",
"_dom_classes": [],
"_model_name": "HBoxModel",
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.5.0",
"box_style": "",
"layout": "IPY_MODEL_5669cfef918f4bb886cb8326b76750c3",
"_model_module": "@jupyter-widgets/controls",
"children": [
"IPY_MODEL_358b5b20b965421b958ef8b8b9ce8ffd",
"IPY_MODEL_27c9ac578d6047cd83910befa69939ae",
"IPY_MODEL_43e4512b87104f9b965cebcb3b52d1fe"
]
}
},
"5669cfef918f4bb886cb8326b76750c3": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"model_module_version": "1.2.0",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
},
"358b5b20b965421b958ef8b8b9ce8ffd": {
"model_module": "@jupyter-widgets/controls",
"model_name": "HTMLModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "HTMLView",
"style": "IPY_MODEL_281d15cdd58243389661e4ecc12b3379",
"_dom_classes": [],
"description": "",
"_model_name": "HTMLModel",
"placeholder": "",
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"value": "Upload file keras_metadata.pb: 100%",
"_view_count": null,
"_view_module_version": "1.5.0",
"description_tooltip": null,
"_model_module": "@jupyter-widgets/controls",
"layout": "IPY_MODEL_d9aae11f4cd34b56ab92ebb379b86db0"
}
},
"27c9ac578d6047cd83910befa69939ae": {
"model_module": "@jupyter-widgets/controls",
"model_name": "FloatProgressModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "ProgressView",
"style": "IPY_MODEL_e602268e25884e0c8ef24c7f7e55b10e",
"_dom_classes": [],
"description": "",
"_model_name": "FloatProgressModel",
"bar_style": "success",
"max": 118258,
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"value": 118258,
"_view_count": null,
"_view_module_version": "1.5.0",
"orientation": "horizontal",
"min": 0,
"description_tooltip": null,
"_model_module": "@jupyter-widgets/controls",
"layout": "IPY_MODEL_60bbe43fe4334d5aa81f0d3204245edc"
}
},
"43e4512b87104f9b965cebcb3b52d1fe": {
"model_module": "@jupyter-widgets/controls",
"model_name": "HTMLModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "HTMLView",
"style": "IPY_MODEL_ba2a44729ca94e7d9c0b7acb43417e36",
"_dom_classes": [],
"description": "",
"_model_name": "HTMLModel",
"placeholder": "",
"_view_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"value": " 115k/115k [00:33<00:00, 3.08kB/s]",
"_view_count": null,
"_view_module_version": "1.5.0",
"description_tooltip": null,
"_model_module": "@jupyter-widgets/controls",
"layout": "IPY_MODEL_66b4845f1cb64ba28eb49ea7b7f5bad1"
}
},
"281d15cdd58243389661e4ecc12b3379": {
"model_module": "@jupyter-widgets/controls",
"model_name": "DescriptionStyleModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "StyleView",
"_model_name": "DescriptionStyleModel",
"description_width": "",
"_view_module": "@jupyter-widgets/base",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.2.0",
"_model_module": "@jupyter-widgets/controls"
}
},
"d9aae11f4cd34b56ab92ebb379b86db0": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"model_module_version": "1.2.0",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
},
"e602268e25884e0c8ef24c7f7e55b10e": {
"model_module": "@jupyter-widgets/controls",
"model_name": "ProgressStyleModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "StyleView",
"_model_name": "ProgressStyleModel",
"description_width": "",
"_view_module": "@jupyter-widgets/base",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.2.0",
"bar_color": null,
"_model_module": "@jupyter-widgets/controls"
}
},
"60bbe43fe4334d5aa81f0d3204245edc": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"model_module_version": "1.2.0",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
},
"ba2a44729ca94e7d9c0b7acb43417e36": {
"model_module": "@jupyter-widgets/controls",
"model_name": "DescriptionStyleModel",
"model_module_version": "1.5.0",
"state": {
"_view_name": "StyleView",
"_model_name": "DescriptionStyleModel",
"description_width": "",
"_view_module": "@jupyter-widgets/base",
"_model_module_version": "1.5.0",
"_view_count": null,
"_view_module_version": "1.2.0",
"_model_module": "@jupyter-widgets/controls"
}
},
"66b4845f1cb64ba28eb49ea7b7f5bad1": {
"model_module": "@jupyter-widgets/base",
"model_name": "LayoutModel",
"model_module_version": "1.2.0",
"state": {
"_view_name": "LayoutView",
"grid_template_rows": null,
"right": null,
"justify_content": null,
"_view_module": "@jupyter-widgets/base",
"overflow": null,
"_model_module_version": "1.2.0",
"_view_count": null,
"flex_flow": null,
"width": null,
"min_width": null,
"border": null,
"align_items": null,
"bottom": null,
"_model_module": "@jupyter-widgets/base",
"top": null,
"grid_column": null,
"overflow_y": null,
"overflow_x": null,
"grid_auto_flow": null,
"grid_area": null,
"grid_template_columns": null,
"flex": null,
"_model_name": "LayoutModel",
"justify_items": null,
"grid_row": null,
"max_height": null,
"align_content": null,
"visibility": null,
"align_self": null,
"height": null,
"min_height": null,
"padding": null,
"grid_auto_rows": null,
"grid_gap": null,
"max_width": null,
"order": null,
"_view_module_version": "1.2.0",
"grid_template_areas": null,
"object_position": null,
"object_fit": null,
"grid_auto_columns": null,
"margin": null,
"display": null,
"left": null
}
}
}
}
},
"nbformat": 4,
"nbformat_minor": 0
}