---
license: cc-by-4.0
pipeline_tag: image-to-image
tags:
- pytorch
- super-resolution
---
## 4xRealWebPhoto_v4_dat2
**Scale:** 4
**Architecture:** DAT
**Author:** Philip Hofmann
**License:** CC-BY-4.0
**Purpose:** Compression Removal, Deblur, Denoise, JPEG, WEBP, Restoration
**Subject:** Photography
**Input Type:** Images
**Date:** 04.04.2024
**Architecture Option:** DAT-2
**I/O Channels:** 3 (RGB) -> 3 (RGB)
**Dataset:** Nomos8k
**Dataset Size:** 8492
**OTF (on the fly augmentations):** No
**Pretrained Model:** DAT_2_x4
**Iterations:** 243,000
**Batch Size:** 4-6
**GT Size:** 128-256
**Description:** A 4x upscaling model for photos from the web. The dataset consists of three variants: photos that were only downscaled (to cover good-quality inputs), photos that were downscaled and compressed (as when uploaded to the web and compressed by the service provider), and photos that were downscaled, compressed, rescaled, and recompressed (as when downloaded from the web and re-uploaded).
Degradations applied include lens blur, realistic noise generated with my ludvae200 model, JPEG and WEBP compression (quality 40-95), and the down_up, linear, cubic_mitchell, lanczos, gaussian, and box downsampling algorithms. For details on the degradation process, see the PDF with its explanations and visualizations.
This is essentially a DAT-2 version of my previous 4xRealWebPhoto_v3_atd model, but trained with slightly stronger noise values and only a single image per variant, which drastically reduced the training dataset size.
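To illustrate one of the downsampling algorithms listed above, here is a minimal pure-Python sketch of box downsampling on a grayscale image stored as a nested list. This is only a conceptual illustration (the function name and representation are hypothetical); the actual dataset preparation uses proper image-processing libraries.

```python
def box_downsample(img, factor):
    """Box-downsample a grayscale image (nested list of pixel values)
    by averaging each factor x factor block into one output pixel."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(0, h - h % factor, factor):
        row = []
        for x in range(0, w - w % factor, factor):
            # Collect all pixels in the factor x factor block and average them.
            block = [img[y + dy][x + dx]
                     for dy in range(factor)
                     for dx in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

# A 2x2 image reduced by factor 2 becomes a single averaged pixel.
print(box_downsample([[0, 2], [4, 6]], 2))  # [[3.0]]
```

The other listed kernels (linear, cubic_mitchell, lanczos, gaussian) differ only in how the neighborhood is weighted; box averaging weights every pixel in the block equally.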
**Showcase:**
[12 Slowpics Examples](https://slow.pics/s/TvJ21pJG)
![Example1](https://github.com/Phhofm/models/assets/14755670/c9725af0-6eb6-4e35-baa1-b5980830cb07)
![Example2](https://github.com/Phhofm/models/assets/14755670/5368fc50-2c11-45f8-88c0-9ea4c0deddc0)
![Example3](https://github.com/Phhofm/models/assets/14755670/7e91dc6b-5ea5-47ca-baad-2917851dbeac)
![Example4](https://github.com/Phhofm/models/assets/14755670/764bb4cb-4ec1-4cad-b144-2db91c31a508)
![Example5](https://github.com/Phhofm/models/assets/14755670/7042efeb-4b34-46bf-83e1-9873896ddf47)
![Example6](https://github.com/Phhofm/models/assets/14755670/7842af9a-91d7-4901-810c-3d83c7e168d7)
![Example7](https://github.com/Phhofm/models/assets/14755670/cf7c9ec8-cfcb-4dcc-b1be-82153dad7f39)
![Example8](https://github.com/Phhofm/models/assets/14755670/fce3b528-4716-45af-a23d-f6c2474e648f)
![Example9](https://github.com/Phhofm/models/assets/14755670/9adf0cb0-b807-4bc4-ade4-1266948babca)
![Example10](https://github.com/Phhofm/models/assets/14755670/8992e54a-e8b0-40c8-901f-cd0920fe7564)
![Example11](https://github.com/Phhofm/models/assets/14755670/322f8ec2-9c49-436c-a1d7-450ff9265b7a)
![Example12](https://github.com/Phhofm/models/assets/14755670/46d24ddc-df0f-48de-8c44-cde787628267)