Dataset Card for Latent Diffusion Super Sampling

Image datasets for building image/video upscaling networks.

This repository contains training and inference code for models based on the following works:

Part 1: Sub-Pixel Convolutional Network for upscaling, trained on 5,000 individual 720p-to-4K and 1080p-to-4K image pairs

Reference: Shi, W., Caballero, J., Huszár, F., Totz, J., Aitken, A.P., Bishop, R., Rueckert, D. and Wang, Z., 2016. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network.
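
Below is a minimal PyTorch sketch of an ESPCN-style sub-pixel network for orientation. The layer widths, tanh activations, and the 3x upscale factor follow the original paper and are illustrative assumptions, not the exact configuration of the models shipped in this repository.

```python
import torch
import torch.nn as nn

class ESPCN(nn.Module):
    """Illustrative ESPCN-style network: feature extraction in low-resolution
    space, then sub-pixel convolution (PixelShuffle) for upscaling."""
    def __init__(self, upscale_factor: int = 3, channels: int = 3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=5, padding=2), nn.Tanh(),
            nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.Tanh(),
            # last layer outputs channels * r^2 feature maps for the shuffle step
            nn.Conv2d(32, channels * upscale_factor ** 2, kernel_size=3, padding=1),
        )
        self.shuffle = nn.PixelShuffle(upscale_factor)  # rearranges channels into space

    def forward(self, x):
        return self.shuffle(self.body(x))

# lr = torch.rand(1, 3, 360, 640)       # low-resolution input
# sr = ESPCN(upscale_factor=3)(lr)      # -> (1, 3, 1080, 1920)
```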

Results:

720p images tested: 100
Average PSNR: 40.44 dB

1080p images tested: 100
Average PSNR: 43.05 dB

This exceeds the average PSNR reported for the originally proposed architecture (28.09 dB).

Refer to points 5, 6, and 7 in the "Dataset Description" section for further information.
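
PSNR here is the standard peak signal-to-noise ratio over pixel intensities. A minimal sketch of how an average PSNR over a test set could be computed is shown below; function and variable names are illustrative, not the repository's evaluation code.

```python
import torch

def psnr(sr: torch.Tensor, hr: torch.Tensor, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between an upscaled image and its
    ground-truth counterpart, with pixel values in [0, max_val]."""
    mse = torch.mean((sr - hr) ** 2)
    return float(10.0 * torch.log10(max_val ** 2 / mse))

# average over a hypothetical list of (upscaled, ground_truth) test pairs
# avg_psnr = sum(psnr(sr, hr) for sr, hr in test_pairs) / len(test_pairs)
```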

Part 2: Convolutional Neural Network for Video Frame Interpolation via Spatially-Adaptive Separable Convolution, trained for real-time 4K, 1080p, and 720p video

Reference: Niklaus, S., Mai, L. and Liu, F., 2017. Video Frame Interpolation via Adaptive Separable Convolution.
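
At its core, the method predicts a vertical and a horizontal 1D kernel per output pixel for each of the two input frames, and synthesizes the intermediate frame by separably filtering local patches of the inputs. Below is a minimal, simplified sketch of that filtering step only; the kernel-prediction network is omitted and all names are illustrative, not the repository's FIASC code.

```python
import torch
import torch.nn.functional as F

def separable_local_filter(frame: torch.Tensor, k_v: torch.Tensor, k_h: torch.Tensor) -> torch.Tensor:
    """Filter each pixel's KxK neighbourhood with the outer product of its
    per-pixel vertical and horizontal 1D kernels.

    frame: (B, C, H, W) input frame
    k_v, k_h: (B, K, H, W) per-pixel vertical / horizontal kernels (K odd)
    """
    b, c, h, w = frame.shape
    k = k_v.shape[1]
    # Extract a KxK patch around every pixel: (B, C*K*K, H*W)
    patches = F.unfold(frame, kernel_size=k, padding=k // 2)
    patches = patches.reshape(b, c, k, k, h, w)
    # Outer product of the two 1D kernels gives the local 2D kernel.
    weights = k_v.reshape(b, 1, k, 1, h, w) * k_h.reshape(b, 1, 1, k, h, w)
    return (patches * weights).sum(dim=(2, 3))  # -> (B, C, H, W)

# The interpolated frame is the sum of the two separably filtered inputs:
# mid = separable_local_filter(frame0, k0_v, k0_h) + separable_local_filter(frame1, k1_v, k1_h)
```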

Results:

720p Model:
• PSNR: 28.35 dB
• SSIM: 0.78

1080p Model:
• PSNR: 29.67 dB
• SSIM: 0.84

4K Model:
• PSNR: 33.74 dB
• SSIM: 0.83

Refer to points 8, 9, and 10 in the "Dataset Description" section for further information.

Part 3: Latent Diffusion Super Sampling is coming soon!

Stay tuned!

Dataset Details

The dataset consists of 300,000 ground-truth 720p and 1080p frames with corresponding 4K output frames.

Dataset Sources

YouTube

Uses

Diffusion networks, CNNs, Optical Flow Accelerators, etc.

Dataset Structure

  1. All images are in .jpg format.
  2. Images are named in the format resolution_globalframenumber.jpg, e.g. 4K_10090.jpg.
  3. resolution is one of three values: 720p, 1080p, or 4K.
  4. globalframenumber is the frame number of the image within that resolution (a pair-loading sketch based on this naming scheme follows below).
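
As referenced in item 4 above, here is a minimal sketch of pairing low-resolution frames with their 4K counterparts by global frame number. The flat directory layout, the torchvision-based decoding, and all names are assumptions for illustration, not the repository's loading code.

```python
from pathlib import Path

from torch.utils.data import Dataset
from torchvision.io import read_image

class PairedUpscaleDataset(Dataset):
    """Pairs <res>_<globalframenumber>.jpg frames with 4K_<globalframenumber>.jpg
    targets by matching the global frame number."""
    def __init__(self, root: str, low_res: str = "720p"):
        root = Path(root)
        self.pairs = []
        for lr_path in sorted(root.glob(f"{low_res}_*.jpg")):
            frame_no = lr_path.stem.split("_", 1)[1]        # e.g. "10090"
            hr_path = root / f"4K_{frame_no}.jpg"
            if hr_path.exists():
                self.pairs.append((lr_path, hr_path))

    def __len__(self):
        return len(self.pairs)

    def __getitem__(self, idx):
        lr_path, hr_path = self.pairs[idx]
        # decode to float tensors with values in [0, 1]
        lr = read_image(str(lr_path)).float() / 255.0
        hr = read_image(str(hr_path)).float() / 255.0
        return lr, hr

# ds = PairedUpscaleDataset("path/to/frames", low_res="720p")
# lr, hr = ds[0]
```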

Curation Rationale

  1. To build a real-time upscaling network using latent diffusion supersampling.
  2. To design algorithms for increasing the temporal resolution (frame-rate up-conversion) of videos in real time.

Dataset Card Authors

Alosh Denny

Dataset Card Contact

[email protected]

Dataset Description

  1. 4K_part1: contains the first part of the 4K images.
  2. 4K_part2: contains the second part of the 4K images.
  3. 720p: contains 100,000 ground truth 720p images.
  4. 1080p: contains 100,000 ground truth 1080p images.
  5. Two ESPCN (Efficient Sub-Pixel Convolution Network) PyTorch models and a Jupyter Notebook (ESPCN.ipynb) are included, which you can use for retraining or inference (see the inference sketch after this list).
  6. Selected Super Resolution 5000 contains 5,000 randomly picked image triplets of 4K, 1080p, and 720p images.
  7. Super Resolution Test 100 serves as the test dataset for the above training set.
  8. In the latest update, three FIASC (Frame Interpolation via Adaptive Separable Convolution) PyTorch models and a Jupyter Notebook (FIASC.ipynb) have been added for retraining or inference.
  9. Frame Interpolation Training contains 6,416 frames used for training the models, for each of 4K, 1080p, and 720p.
  10. Frame Interpolation Testing contains 1,309 frames used for evaluating the models, for each of 4K, 1080p, and 720p.
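
As a rough guide to using the bundled checkpoints, the sketch below runs a single 720p frame through an ESPCN-style model (the class sketched in Part 1 above). The checkpoint filename and the assumption that the weights were saved as a state dict are guesses; consult ESPCN.ipynb for the actual model definition and loading convention.

```python
import torch
from torchvision.io import read_image
from torchvision.utils import save_image

# ESPCN: the illustrative class sketched in Part 1 above.
# "espcn_720p_to_4k.pth" is a hypothetical checkpoint name.
model = ESPCN(upscale_factor=3)
state = torch.load("espcn_720p_to_4k.pth", map_location="cpu")
model.load_state_dict(state)
model.eval()

lr = read_image("720p_10090.jpg").float() / 255.0      # (3, 720, 1280), values in [0, 1]
with torch.no_grad():
    sr = model(lr.unsqueeze(0)).clamp(0.0, 1.0)         # (1, 3, 2160, 3840)
save_image(sr, "upscaled_10090.jpg")
```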
