Eugene Siow committed
Commit 491f0b6
Parent(s): 967fc29

Add comparison images.

Files changed:
- README.md (+23 -2)
- images/Set5_2_compare.png (+0 -0)
- images/Set5_4_compare.png (+0 -0)
README.md (CHANGED)
# Enhanced Deep Residual Networks for Single Image Super-Resolution (EDSR)

EDSR model pre-trained on DIV2K (800 images training, augmented to 4000 images, 100 images validation) for 2x, 3x and 4x image super-resolution. It was introduced in the paper [Enhanced Deep Residual Networks for Single Image Super-Resolution](https://arxiv.org/abs/1707.02921) by Lim et al. and first released in [this repository](https://github.com/sanghyun-son/EDSR-PyTorch).

The goal of image super-resolution is to restore a high-resolution (HR) image from a single low-resolution (LR) image. The image below shows the ground truth (HR), bicubic x2 upscaling and EDSR x2 upscaling.

![Comparing Bicubic upscaling against EDSR x2 upscaling on Set5 Image 4](images/Set5_4_compare.png "Comparing Bicubic upscaling against EDSR x2 upscaling on Set5 Image 4")
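A comparison image like the one above can be produced with the `super_image` library. The sketch below is illustrative only and is not part of this diff; the model id `eugenesiow/edsr`, the input path, and the `ImageLoader` helper calls are assumptions based on the library's typical usage.

```python
from PIL import Image
from super_image import EdsrModel, ImageLoader

# Open a low-resolution input image (path is a placeholder)
image = Image.open('./input.png')

# Load the pre-trained 2x EDSR model and run super-resolution
model = EdsrModel.from_pretrained('eugenesiow/edsr', scale=2)
inputs = ImageLoader.load_image(image)
preds = model(inputs)

# Save the upscaled result and a side-by-side comparison
ImageLoader.save_image(preds, './scaled_2x.png')
ImageLoader.save_compare(inputs, preds, './scaled_2x_compare.png')
```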
## Model description

EDSR is a model that uses both a deeper and a wider architecture (32 ResBlocks and 256 channels) to improve performance. It uses both global and local skip connections, and up-scaling is done at the end of the network. It doesn't use batch normalization layers (input and output have similar distributions, so normalizing intermediate features may not be desirable); instead it uses constant scaling layers to ensure stable training. An L1 loss function (absolute error) is used instead of L2 (MSE); the authors showed empirically that it gives better performance and requires less computation.
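To make the block structure concrete, here is a minimal PyTorch-style sketch of such a residual block: two convolutions, no batch normalization, and a constant scaling factor applied before the local skip connection. This is an illustration only, not the reference implementation; the 0.1 scaling value and the exact layer layout are assumptions.

```python
import torch.nn as nn

class ResBlock(nn.Module):
    """Residual block in the EDSR style: two convolutions, no batch
    normalization, and a constant scaling factor before the local skip
    connection (illustrative sketch only)."""
    def __init__(self, channels=256, scaling=0.1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        self.scaling = scaling  # constant scaling used instead of batch norm

    def forward(self, x):
        # local skip connection: scaled residual added back to the input
        return x + self.body(x) * self.scaling
```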
### Pretraining

The model was trained on GPU. The training code is provided below:
```python
from super_image import Trainer, TrainingArguments, EdsrModel, EdsrConfig

training_args = TrainingArguments(
    output_dir='./results',                  # output directory
    num_train_epochs=1000,                   # total number of training epochs
)

config = EdsrConfig()
model = EdsrModel(config)

trainer = Trainer(
    model=model,                             # the instantiated model to be trained
    args=training_args,                      # training arguments, defined above
    train_dataset=train_dataset,             # training dataset
    eval_dataset=val_dataset                 # evaluation dataset
)

trainer.train()
```
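The block above assumes that `train_dataset` and `val_dataset` already exist; they are defined in a part of the README not shown in this diff. As a rough sketch only, assuming the `eugenesiow/Div2k` dataset on the Hub and the `super_image.data` helpers (`TrainDataset`, `EvalDataset`, `augment_five_crop`), the datasets might be prepared like this:

```python
from datasets import load_dataset
from super_image.data import EvalDataset, TrainDataset, augment_five_crop

# Augment the DIV2K training split with five-crop patches, then wrap both
# splits for the Trainer; 'bicubic_x2' selects the 2x configuration.
augmented_dataset = load_dataset('eugenesiow/Div2k', 'bicubic_x2', split='train')\
    .map(augment_five_crop, batched=True, desc='Augmenting Dataset')
train_dataset = TrainDataset(augmented_dataset)
val_dataset = EvalDataset(load_dataset('eugenesiow/Div2k', 'bicubic_x2', split='validation'))
```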
## Evaluation results

The evaluation metrics include [PSNR](https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio#Quality_estimation_with_PSNR) and [SSIM](https://en.wikipedia.org/wiki/Structural_similarity#Algorithm).
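For reference, PSNR is derived from the mean squared error between the super-resolved image and the ground truth. A minimal sketch, assuming both images are float arrays scaled to `[0, 1]` (the function name and signature are illustrative, not from the card):

```python
import numpy as np

def psnr(sr, hr, max_val=1.0):
    """Peak signal-to-noise ratio (dB) between a super-resolved image `sr`
    and its ground truth `hr`, both float arrays in [0, max_val]."""
    mse = np.mean((np.asarray(sr, dtype=np.float64) - np.asarray(hr, dtype=np.float64)) ** 2)
    if mse == 0:
        return float('inf')  # identical images
    return 10.0 * np.log10((max_val ** 2) / mse)
```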
The results columns below are represented as `PSNR/SSIM`.
|Urban100 |3x | | |
|Urban100 |4x |23.14/0.6573 |**26.02/0.7832** |

![Comparing Bicubic upscaling against EDSR x2 upscaling on Set5 Image 2](images/Set5_2_compare.png "Comparing Bicubic upscaling against EDSR x2 upscaling on Set5 Image 2")
## BibTeX entry and citation info

```bibtex
@InProceedings{Lim_2017_CVPR_Workshops,
```
images/Set5_2_compare.png ADDED
images/Set5_4_compare.png ADDED