---
license: cc-by-4.0
pipeline_tag: image-to-image
tags:
- pytorch
---

[Link to Github Release](https://github.com/Phhofm/models/releases/tag/Ludvae200)  

# Ludvae200

Name: Ludvae200  
License: CC BY 4.0  
Author: Philip Hofmann  
Network: [LUD-VAE](https://github.com/zhengdharia/LUD-VAE)  
Scale: 1  
Release Date: 25.03.2024  
Purpose: 1x realistic noise degradation model  
Iterations: 190'000  
H_size: 64  
n_channels: 3  
dataloader_batch_size: 16  
H_noise_level: 8  
L_noise_level: 3  
Dataset: [RealLR200](https://drive.google.com/drive/folders/1L2VsQYQRKhWJxe6yWZU9FgBWSgBCk6mz)  
Number of train images: 200  
OTF Training: No  
Pretrained_Model_G: None  

Description:  
1x realistic noise degradation model, trained on the [RealLR200](https://drive.google.com/drive/folders/1L2VsQYQRKhWJxe6yWZU9FgBWSgBCk6mz) dataset as released on the [SeeSR](https://github.com/cswry/SeeSR) GitHub repo.  
Next to the ludvae200.pth model file, I provide a ludvae200.zip file which contains not only the code but also an inference script to run this model on a dataset of your choice.  
Adapt the ludvae200_inference.py script by adjusting the file paths in its beginning section: your input folder, your output folder, the folder holding the ludvae200.pth model, and the folder where the log text file should be written. The text-file generation works the same way as in [Kim's Dataset Destroyer](https://github.com/Kim2091/helpful-scripts/tree/main/Dataset%20Destroyer): each processed image is logged together with the exact values used to degrade it, and the log file is append-only and never overwritten.
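
As a rough sketch of what that configuration and logging section looks like conceptually (the actual variable and file names in ludvae200_inference.py may differ; everything below is illustrative):

```python
import os

# --- paths to adapt at the top of ludvae200_inference.py (names illustrative) ---
input_folder = "/path/to/your/input/images"        # images to degrade
output_folder = "/path/to/write/degraded/images"   # degraded outputs
model_path = "/path/to/ludvae200.pth"              # model weights
log_folder = "/path/to/write/logfile"              # per-image degradation log

os.makedirs(output_folder, exist_ok=True)
os.makedirs(log_folder, exist_ok=True)

# Dataset-Destroyer-style logging: append one line per processed image with the
# exact values used to degrade it; the file is append-only and never overwritten.
def log_degradation(filename: str, noise: float, temperature: float) -> None:
    with open(os.path.join(log_folder, "ludvae200_log.txt"), "a") as f:
        f.write(f"{filename}: noise={noise}, temperature={temperature}\n")
```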

You can also adjust the strength settings inside the inference script to fit your needs. If you generally want weaker noise, for example, lower the temperature upper limit from 0.4 to 0.2, or go even lower; the change falls on line 96 of the script, as shown below.
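
For reference, here is that change in context (assuming the script samples the per-image temperature with Python's `random.uniform`, as the original line suggests):

```python
from random import uniform

# line 96 of ludvae200_inference.py -- default sampling range:
# temperature_strength = uniform(0.1, 0.4)

# weaker noise: lower the upper limit (or go even lower than 0.2):
temperature_strength = uniform(0.1, 0.2)
```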

These defaults reflect the needs of the last dataset degradation workflow I used, but feel free to adjust them. You can also do what I did: temporarily use deterministic values over multiple runs to determine the minimum and maximum noise generation you deem suitable for your dataset, as sketched below.
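
A minimal sketch of that calibration approach, reusing the `temperature_strength` line from above (the fixed values and the number of runs are just examples):

```python
# Calibration: temporarily replace the random sampling with a fixed value,
# run the script over a few test images, inspect the results, then repeat
# with other constants until you have found acceptable min/max settings.

# default, randomized per image:
# temperature_strength = uniform(0.1, 0.4)

# calibration run 1 -- probe a candidate minimum:
temperature_strength = 0.1

# calibration run 2 -- probe a candidate maximum:
# temperature_strength = 0.4
```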

An example of what this looked like in the last dataset workflow I used my model in:

Determining min and max values: the minimum here is noise 1 / temperature 0.1, which already produces visibly discernible noise, while the maximum is simply the strongest noise I would want an upscaling model trained on this dataset to be able to handle in an input:

![Ludvae200_range](https://github.com/Phhofm/models/assets/14755670/2cf53d9b-a601-4fea-9440-84c968a23e50)

Then three examples of what these settings produce:

![Ludvae200_example1](https://github.com/Phhofm/models/assets/14755670/ec81280a-777d-4e08-b2a5-65e5411f744c)
![Ludvae200_example](https://github.com/Phhofm/models/assets/14755670/67b6178c-b5d3-43a2-b2e0-b6a3cfea6500)
![Ludvae200_example2](https://github.com/Phhofm/models/assets/14755670/b0aba32a-e0fc-430b-aacf-7f1b160f2d0c)