The database we used is the FEI face database (frontal images, spatially normalized, parts 1 and 2, 400 images in total), and we implemented a sequence of image processing steps to automatically normalize, equalize, and crop the frontal face images. The size of each image is 300×250; we used 360 images as the training set and the remaining images as the test set. In the PSNR and SSIM (structural similarity index) numerical evaluation, we computed the average over all test images and evaluated the performance.
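As a sketch of this evaluation, assuming 8-bit grayscale images stored as NumPy arrays, the average PSNR over a test set can be computed as follows (the function names are ours, not from the paper's code):

```python
import numpy as np

def psnr(gt, pred, max_val=255.0):
    """Peak signal-to-noise ratio between two same-sized grayscale images."""
    mse = np.mean((gt.astype(np.float64) - pred.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def average_psnr(pairs):
    """Average PSNR over a list of (ground_truth, prediction) test pairs."""
    return float(np.mean([psnr(g, p) for g, p in pairs]))
```

SSIM can be averaged the same way, for example with `skimage.metrics.structural_similarity` in place of `psnr`.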
We have strict requirements on the choice of parameters, because we want to keep the patch dimensions equal in x and y. To make the calculation more convenient, we slightly changed the size of all images (from 300×250 to 300×240). The first object we tested is the patch size P: with the upscale factor set to 2 and σ = 1, we tested patch sizes of 4, 8, 16, 32, and 64. For the SSIM and PSNR tests, 40 images were tested. Fig. 5 shows the average PSNR and SSIM over all test images for the different patch size values.
In the next test, we examined the effect of the smoothing parameter σ on PSNR and SSIM. Fig. 6 shows the average PSNR and SSIM over all test images for different σ values (ranging from 0.3 to 2.6), with the patch size P fixed at 16. In the PSNR test, the best PSNR is obtained with σ = 1.1, while in the SSIM test the highest SSIM is obtained with σ = 0.9. In the following, we set σ to 1.1 (the impact of σ on SSIM is relatively small).
In the next test, we examined the effect of the number of training images on PSNR and SSIM. Fig. 7 shows the average PSNR and SSIM over all test images when different numbers of training images are used (100, 150, 200, 250, 300). The results show that PSNR and SSIM improve as more training images become available.
This experiment tested the parameters used in our proposed SRLWR: the patch size P, σ, and the number of training images. The results show that with P = 16 and σ = 1.1, the more training images, the better the performance.
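The parameter search above can be sketched as a simple grid search; `evaluate` below is a hypothetical placeholder standing in for a full train-and-test run of SRLWR, which is too heavy to inline here:

```python
import itertools

def grid_search(evaluate, patch_sizes, sigmas):
    """Return the (P, sigma) pair that maximizes the evaluation score."""
    best_params, best_score = None, float("-inf")
    for p, s in itertools.product(patch_sizes, sigmas):
        score = evaluate(p, s)  # e.g. average PSNR or SSIM on the test set
        if score > best_score:
            best_params, best_score = (p, s), score
    return best_params, best_score

# Hypothetical usage: evaluate would train SRLWR with (P, sigma) and
# return the average PSNR over the 40 test images.
# best, score = grid_search(evaluate, [4, 8, 16, 32, 64],
#                           [0.3 + 0.1 * i for i in range(24)])
```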
We also tested the performance of P and σ when the upscale factor is 4 (with σ = 1 while sweeping P). Fig. 8 shows the average PSNR and SSIM over all test images for different patch sizes P and σ values. In the patch size test, P = 32 gave the best performance in both PSNR and SSIM. In the σ test, σ = 1.2 gave the best SSIM (0.911), while σ = 1.4 gave the best PSNR (32.163). To balance the two, we used the middle value σ = 1.3 in the following tests.
4.3 Comparison with State-of-the-Art Methods
In this part we compared our proposed SRLWR with several state-of-the-art methods on the FEI face database, testing the average SSIM and PSNR over 40 images. Our algorithm used the following parameters: P = 16 and σ = 1 when the upscale factor is 2, and P = 32 and σ = 1.3 when the upscale factor is 4.
For all input images, we first downscale 4 times using the Nearest Neighbor method and then upscale 2 times, which makes all the input images blurrier than usual (in normal cases, bicubic downscaling by a factor of 2 is commonly used). For the SRLSP code, we processed the low-resolution training images in the same way. For the VDSR code, we used only the grayscale image part. The other codes were used without any change. The numerical comparison is shown in Table 1.
Table 1. The PSNR and SSIM results on the FEI database (downscaled 4 times and upscaled 2 times).
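As a minimal sketch of this degradation pipeline, assuming integer scale factors and NumPy grayscale arrays (nearest-neighbor resampling implemented here by index striding and pixel repetition, not the paper's actual code):

```python
import numpy as np

def nn_downscale(img, factor):
    """Nearest-neighbor downscaling by an integer factor (keep every factor-th pixel)."""
    return img[::factor, ::factor]

def nn_upscale(img, factor):
    """Nearest-neighbor upscaling by an integer factor (repeat each pixel)."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def make_blurry_input(img):
    """Downscale 4 times, then upscale 2 times: the result is half the
    original resolution and blockier than a plain 2x downscale."""
    return nn_upscale(nn_downscale(img, 4), 2)
```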
From the test results, we found that our proposed method achieves good scores in the PSNR and SSIM numerical comparison, while ordinary super-resolution algorithms do not perform well when the input image is blurrier. Due to a code problem, we could not test SRLSP at an upscale factor of 4.
In the next experiment, we also performed the PSNR and SSIM numerical comparison when the input image is downscaled only 2 times using the Nearest Neighbor interpolation method; the comparison result is shown in Table 2.
In this numerical comparison, we can see that although our proposed method is not quite as good as SRLSP, the values are very close, and we can still achieve good performance.
4.4 An intuitive image evaluation method using normalized pseudo-color
In this part, we introduce a subjective method of image evaluation that makes it easier to see the difference between the ground truth and the predicted image. Fig. 9 shows the effect of our evaluation method. The specific steps are as follows:
1. Calculate the difference between the ground truth and the predicted
image (image distance).
2. Take the absolute value of the result of step 1.
3. For the result of step 2, normalization from 0 to 1 is performed.
4. Use a pseudo-color image to represent the result of step 3.
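The four steps above can be sketched as follows; the blue-to-red mapping in `pseudo_color` is our own simple stand-in for whichever pseudo-color lookup table is used in Fig. 9:

```python
import numpy as np

def error_map(gt, pred):
    """Steps 1-3: difference, absolute value, min-max normalization to [0, 1]."""
    d = np.abs(gt.astype(np.float64) - pred.astype(np.float64))
    rng = d.max() - d.min()
    return (d - d.min()) / rng if rng > 0 else np.zeros_like(d)

def pseudo_color(norm):
    """Step 4: map the normalized error to RGB (blue = low error, red = high)."""
    rgb = np.zeros(norm.shape + (3,))
    rgb[..., 0] = norm        # red channel grows with error
    rgb[..., 2] = 1.0 - norm  # blue channel shrinks with error
    return rgb
```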