Single image dehazing using local linear fusion
2017; Institution of Engineering and Technology; Volume: 12; Issue: 5; Language: English
10.1049/iet-ipr.2017.0570
ISSN: 1751-9667
Authors: Yakun Gao, Haiyan Chen, Haibin Li, Wenming Zhang
Topic(s): Advanced Vision and Imaging
IET Image Processing, Volume 12, Issue 5, pp. 637-643. Research Article (Free Access): Single image dehazing using local linear fusion
Yakun Gao, Haiyan Chen, Haibin Li, Wenming Zhang (corresponding author, zwmwen@ysu.edu.cn)
School of Electrical Engineering, Yanshan University, Qinhuangdao, People's Republic of China
First published: 01 May 2018, https://doi.org/10.1049/iet-ipr.2017.0570, Citations: 9
Abstract: The authors propose a new single image dehazing method. Different from image restoration and image enhancement methods, their method is based on the idea of image fusion. Image dehazing aims to remove the influence of the haze between the scene and the camera. First, combined with the depth information, the haze layer is subtracted from the hazy image to improve the colour saturation, which produces the first input image. Then, gamma correction is applied to the grey image and the details of the gamma-corrected image are enhanced to produce the second input image. Finally, the two input images are fused by a local linear model to obtain the final restored image. Experimental results show that the restored image has high contrast, rich details, and no colour distortion in the sky area.

1 Introduction

Due to the absorption and scattering effects of the particles in a hazy scene, the quality of the captured image degrades seriously. Degraded images often suffer from low contrast, loss of detail and so on, and thus fail to meet the needs of tasks such as target recognition and feature extraction. Scholars are paying increasing attention to image dehazing algorithms. Depending on whether they are based on the physical imaging model or not, image dehazing algorithms can be roughly divided into two categories: image restoration and image enhancement. Fattal [1] used the assumption that surface shading and transmission are uncorrelated in a local window, estimating the transmission with independent component analysis [2] and a Markov random field model [3]. This method is based on the statistical characteristics of the data; it relies heavily on the quality of the input image and cannot deal with thick-haze images or images with a low signal-to-noise ratio.
Tan [4] proposed an automatic single-image dehazing algorithm based on two observations: first, clear or enhanced images usually have higher contrast than foggy images; second, the air-light changes smoothly in a small local area. This method does not take colour restoration into account, which makes the enhanced image look unnatural, and since a patch-based operation is used to estimate the air-light, 'halo' effects can appear in the result, especially in areas of depth discontinuity. The dark channel prior (DCP) method proposed by He et al. [5] is based on the physical model, considers the effect of depth information, and gives good restoration results. However, it cannot handle images with large white areas or sky regions well. Li et al. [6] proposed a sky-region prior, obtaining the sky region through colour edge detection, and put forward a new colour normalisation method to avoid colour distortion; however, this algorithm is time-consuming and may fail on a minority of images without sky regions. Mi et al. [7] estimated the initial transmission based on latent region segmentation and refined it with an objective function containing a novel weighted L1-norm regularisation term; they also defined an evaluation function to estimate a reliable global atmospheric light. However, this method cannot reliably distinguish heavily haze-contaminated pixels, so it sometimes makes pixels supersaturated. Tarel and Hautiere [8] replaced the minimum filter of He et al.'s method with a median filter, but the median filter causes blur and expansion at edges. Meng et al. [9] improved the DCP algorithm by imposing an inherent boundary constraint on the transmission function and optimising the transmission with a weighted L1-norm-based contextual regularisation. The results show richer details than He et al.'s.
However, it suffers from colour distortion in the sky area. Kim et al. [10] proposed an optimised contrast enhancement algorithm for hazy images. It is robust and restores clear images, and it handles sky regions well, but it easily loses details in dark places. On the image enhancement side [11, 12], retinex was first proposed by Land and McCann [13] as a model describing colour constancy. Retinex can efficiently handle low-brightness blurred images, but it is a very complex algorithm [14]. Histogram equalisation (HE) is a simple but effective dehazing method which expands the dynamic range of the pixel distribution. HE can be divided into two categories: global HE and local HE. Jun and Rong [15] used a global algorithm to enhance contrast; it improves the global contrast but cannot handle the contrast of local regions well. Local HE [16] can enhance the contrast of each part of the image and overcome the shortcomings of the global transformation, but local processing suffers from block effects, heavy computation and colour distortion. Our algorithm is based on the idea of image fusion. First, combined with rough depth information, the haze layer is subtracted from the hazy image to enhance the colour saturation, producing the first input image. Then gamma correction is applied to the grey image and its details are enhanced; in this way, the second input image is produced. Under the assumption that the transmission is constant in a small local area, we find that the physical imaging model satisfies a local linear model. Finally, as the colour of the first input image is over-corrected, the two input images are fused by the local linear model to solve this problem and obtain the restored image. The flowchart of the algorithm is shown in Fig. 1.

Fig. 1: Flowchart of the proposed algorithm

2 Dehazing using local linear image fusion

2.1 Physical imaging model

Koschmieder [17] first proposed the physical imaging model. Narasimhan and Nayar [18] suggested that the scattering coefficient can be considered constant in a homogeneous atmosphere, and simplified the physical imaging model as

I(x) = J(x) t(x) + A (1 − t(x))    (1)

where x is the pixel position, I is the observed image, J is the haze-free image without atmospheric pollution, A is the atmospheric light of the scene, and t is the transmission of the scene, which is related to the atmospheric scattering coefficient and the distance between the object and the camera:

t(x) = exp(−β d(x))    (2)

where β is the scattering coefficient of the atmosphere and d is the scene depth. It is generally assumed that the atmospheric light and the particles in the air are uniformly distributed, so β is a constant. Since I is known, the haze-free image can be calculated through (1) once A and t are estimated. This is obviously an ill-posed problem: the t and A estimated from prior information are usually not accurate enough, so the restored images sometimes show colour distortion in some regions. In this paper, we prefer to enhance the hazy image by local linear fusion rather than estimate an accurate t and A.

2.2 Enhance the colour saturation to produce the first input image

Intuitively, removing haze means removing the haze between the scene and the camera, and the haze-free image should have more saturated colours. Guided by this principle, we remove the haze using rough depth information to improve the image colour saturation.

2.2.1 Get the rough depth map

According to the imaging principle in hazy weather, dense-haze regions usually have larger pixel values. He et al.'s [5] DCP theory holds that the dark channel map reflects the thickness of the haze, so the depth map can be approximately represented by the dark channel map. He et al.
used the minimum filter to obtain the dark channel map, which causes edge diffusion at depth discontinuities: the edge spreads towards the side with larger pixel values, resulting in halo effects in the restored image. To avoid this, the guided filter [19] can be used to optimise the dark channel map, but halo effects remain in areas where the depth changes rapidly, as shown in the red rectangle in Fig. 2b. The proposed method instead uses the tree filter [20] to produce the rough depth map: the minimal channel image (the smallest value among the RGB channels at each pixel) is filtered by the tree filter, an edge-preserving filter, and the result is taken as the rough depth map D. (The 'depth map' is not a true depth map; it is simply a symbol representing rough depth information.) Fig. 2c shows the result of the tree filter; compared with Fig. 2b, the halo effect is corrected to a large extent.

Fig. 2: Comparison of results using different rough depth maps. (a) Hazy image, (b) With the dark channel, (c) With the proposed method

2.2.2 Haze removal using the rough depth map

According to the haze imaging model, the contrast and colour information of the hazy image are weakened because of the added atmospheric light. Combining the rough depth information, the haze is subtracted from the three RGB channels respectively as

F_c(x) = λ_c · max(I_c(x) − w_c(x), m̄), c ∈ {r, g, b}    (3)

where F_r, F_g and F_b are the colour-enhanced images after haze removal in each channel, and together they form the first input image F. The max operation, which picks the larger value in the brackets, is applied to avoid producing black spots in the final dehazed image. I_r, I_g and I_b are the original images of each colour channel, m̄ is the average pixel value of the grey hazy image, w_r, w_g and w_b are the degrees of haze in each channel, and λ_r, λ_g and λ_b are adjustment parameters used to improve the overall brightness of the image; they can be calculated as

λ_c = s_c · power(b_c, p_c), c ∈ {r, g, b}    (4)

where power is the power operation, b_r, b_g and b_b are the base numbers, p_r, p_g and p_b are the power numbers, and s_r, s_g and s_b are the scaling factors, which we set to five in all the experiments. The thickness of the haze increases with distance, so the haze degree of each channel should be a variable related to the depth information. We take the product of the average pixel value of each RGB channel and the rough depth map as the estimate of the haze degree:

w_c(x) = Ī_c · D(x), c ∈ {r, g, b}    (5)

where D is the rough depth map obtained in Section 2.2.1 and Ī_c is the average pixel value of channel c. As shown in Fig. 3, compared with the original hazy image, the haze-removed images have improved colour saturation and contrast and show better structure information. One remaining problem is that the colour of the processed images is over-saturated; we deal with this in Section 2.4.

Fig. 3: Colour-saturation enhancement result. (a) Hazy image, (b) Image with the haze layer subtracted

2.3 Gamma correction and adding details to produce the second input image

2.3.1 Gamma correction

The hazy image shows higher brightness in areas of large imaging distance, and the structure information of distant objects is drowned there. To decrease the brightness and enhance the structure information of these areas, gamma correction (available directly in Matlab) is applied to the grey hazy image. The gamma curve is a special tone curve: the brightness of the output image is adjusted by γ. When γ = 1, the curve is a straight line at 45° to the coordinate axes, meaning the input and output brightness are the same; when γ > 1, the output is darkened and the dynamic range of the bright grey levels increases, and vice versa. Given the characteristics of hazy images, we should set γ > 1.
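A minimal sketch of the first-input construction of Section 2.2.2 (in the spirit of Eqs. (3)-(5)), assuming images normalised to [0, 1]; the lower bound `floor` and the per-channel adjustments `lam` are illustrative stand-ins for m̄ and λ_c, not the paper's exact values:

```python
import numpy as np

def first_input(I, depth, floor=0.05, lam=(1.0, 1.0, 1.0)):
    """Per-channel haze-layer subtraction, sketching Eqs. (3)-(5).

    I     : H x W x 3 hazy image in [0, 1]
    depth : H x W rough depth map in [0, 1]
    floor : lower bound inside max(...)  (stand-in for the paper's m)
    lam   : per-channel brightness adjustments (stand-in for lambda_c)
    """
    out = np.empty_like(I)
    for c in range(3):
        w = I[..., c].mean() * depth                # haze degree w_c, Eq. (5)
        out[..., c] = lam[c] * np.maximum(I[..., c] - w, floor)  # Eq. (3)
    return out
```

In practice `depth` would come from the tree-filtered minimal channel of Section 2.2.1; any edge-preserving rough depth estimate fits the same interface.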
The overall brightness of the grey image after gamma correction is decreased, but the structure information in bright regions becomes more obvious, as shown in Fig. 4:

g(x) = I_grey(x)^γ    (6)

where I_grey is the grey hazy image, g is the result of applying gamma correction to it, and γ is the adaptive correction parameter, set as follows:

(7)

where m̄ is the average pixel value of the grey hazy image; the remaining constant in (7) is fixed in all the experiments. From the gamma correction curve, we can see that the brighter the picture, the more it can be darkened.

Fig. 4: Detail-enhancement result. (a) Grey image without gamma correction, (b) Grey image with gamma correction, (c) Grey image with gamma correction and added details

2.3.2 Enhance details of the hazy image

Besides low contrast, a hazy image has the serious problem that details are lost. In this section, we solve this problem.

(i) Get the local contrast map. A smoothed image loses its details, but subtracting the smoothed image from the original yields a detail map. We therefore apply the guided filter to g (obtained in Section 2.3.1) to smooth the image, keeping the structure information while eliminating details and noise, and obtain the local contrast map

C(x) = g(x) − g_f(x)    (8)

where C is the local contrast map and g_f is the result of the guided filter applied to g.

(ii) Enhance detail information. The local contrast map is added to g to enhance the detail information, and the detail-enhanced image is the second input image of the fusion process, denoted S (the gain α is fixed in all the experiments):

S(x) = g(x) + α C(x)    (9)

Figs. 4c and 5 show that the image has more details after the enhancement.

Fig. 5: Comparison chart of local amplification

2.4 Image fusion

The haze-subtracted image has improved contrast and colour saturation, but the colour is over-saturated, as shown in Fig. 3b.
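The second-input construction (gamma correction followed by local-contrast addition, Sections 2.3.1-2.3.2) can be sketched as below. The adaptive rule gamma = 1 + k·mean(grey), the box filter standing in for the guided filter of Eq. (8), and the gain `alpha` are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def box(img, r):
    """Mean filter with window (2r+1)^2 via a padded integral image;
    a crude stand-in for the guided filter used in Eq. (8)."""
    k = 2 * r + 1
    p = np.pad(img, r, mode="edge")
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))
    return (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]) / (k * k)

def second_input(gray, k_gamma=1.5, r=3, alpha=1.0):
    """Gamma correction plus detail enhancement (Sections 2.3.1-2.3.2).

    gray is the grey hazy image in [0, 1].  gamma grows with the mean
    brightness, so brighter (hazier) images are darkened more.
    """
    gamma = 1.0 + k_gamma * gray.mean()   # assumed adaptive rule, not Eq. (7)
    g = np.power(gray, gamma)             # gamma-corrected grey image, Eq. (6)
    contrast = g - box(g, r)              # local contrast map, Eq. (8)
    return g + alpha * contrast           # detail-enhanced image, Eq. (9)
```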
Inspired by the idea of image fusion, we take the haze-subtracted image as the first input image F and the detail-enhanced grey image as the second input image S, and fuse them with a local linear model. This eliminates the colour distortion and enriches the details of the restored image. In [19], the concept of the guided filter is proposed, which assumes a local linear relationship between the output image and the guide image. Here, we assume that the colour-saturated image F is related to the haze-free image J as in (1). Under the assumption that the transmission is constant in a local area, the hazy imaging process satisfies the local linear model

F(x) = a_d J(x) + b_d, x ∈ ω_d    (10)

where a_d = t and b_d = A (1 − t); t is constant in the local block ω_d and the atmospheric light A is constant, so b_d is fixed in the small region. The linear relationship is invertible, so (10) can be rewritten as

J(x) = α_d F(x) + β_d, x ∈ ω_d    (11)

where α_d = 1/a_d and β_d = −b_d/a_d are invariant in the local window. Taking gradients gives ∇J = α_d ∇F, meaning that the dehazed image preserves the structure information of the hazy image. To obtain the linear coefficients (α_d, β_d), we need further constraints. We model the dehazed image J as the second input image minus some noise n:

J(x) = S(x) − n(x)    (12)

and seek coefficients minimising the difference between J and S while maintaining the local linear model:

E(α_d, β_d) = Σ_{x ∈ ω_d} [ (α_d F(x) + β_d − S(x))² + ε α_d² ]    (13)

where ω_d is the dth local window in the image and ε is a penalty parameter that prevents α_d from becoming too large; we fix its value in all the experiments. Using linear regression, the solution is

α_d = [ (1/|ω|) Σ_{x ∈ ω_d} F(x) S(x) − μ_d S̄_d ] / (σ_d² + ε)    (14)

β_d = S̄_d − α_d μ_d    (15)

where μ_d and σ_d² are the mean and variance of F in ω_d, S̄_d is the mean of S in ω_d, and |ω| is the number of pixels in ω_d, equal to (2r + 1)² (r is the fusion radius). In this paper, r is set from the image width w and height h, so the fusion window adjusts automatically with the size of the picture.
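The regression above mirrors the closed-form guided-filter solution. A minimal sketch, treating the whole image as a single window ω_d (the paper solves the same regression in every local window and then averages the overlapping estimates):

```python
import numpy as np

def linear_coefficients(F, S, eps=1e-3):
    """Least-squares solution of the cost in Eq. (13) over one window,
    giving the coefficients of Eqs. (14)-(15).

    F : first input image (guide), S : second input image (target).
    eps penalises large alpha; its value here is illustrative.
    """
    mu, var = F.mean(), F.var()                              # mean/variance of F
    alpha = (np.mean(F * S) - mu * S.mean()) / (var + eps)   # Eq. (14)
    beta = S.mean() - alpha * mu                             # Eq. (15)
    return alpha, beta
```

Running the same computation per overlapping window and averaging the resulting (alpha, beta) maps recovers the box-filter formulation familiar from the guided filter.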
However, a pixel x is involved in all the overlapping windows that cover it, so the values of α_d and β_d differ between windows; we therefore average all their possible values. To adjust the colour of F while maintaining its colour and structure information, we compute J as

(16)

where d runs over the local windows covering pixel x and (α_d, β_d) change with d. η is an adaptive parameter which ensures that the restored image J stays closer to the first input image while retaining the structure information to a large extent. After the fusion, the colour over-saturation is corrected and the final colour-saturation and detail-enhanced haze-free image is obtained. Fig. 6 shows the flowchart of the fusion process and Fig. 7 shows dehazed results of the proposed method.

Fig. 6: Flowchart of the fusion process

Fig. 7: Dehazed results of the proposed method. (a), (c) Hazy images, (b), (d) Dehazed results

3 Test and analysis of the method

To evaluate the performance of the proposed method, we select some classical algorithms [5, 8] and recent strong algorithms [9, 10] for comparison experiments. All results are primary results without post-processing.

3.1 Subjective evaluation

Fig. 8 shows the results on synthetic hazy images, Fig. 9 the results on thin-haze images, and Fig. 10 the results on thick-haze images.

Fig. 8: Processed results on synthetic hazy images. (a) Hazy images, (b) True haze-free images, (c) He et al.'s results, (d) Tarel et al.'s results, (e) Meng et al.'s results, (f) Kim et al.'s results, (g) Proposed results

Fig. 9: Processed results on thin hazy images. (a) Thin hazy images, (b) He et al.'s results, (c) Tarel et al.'s results, (d) Meng et al.'s results, (e) Kim et al.'s results, (f) Proposed results

Fig. 10: Processed results on thick hazy images. (a) Thick hazy images, (b) He et al.'s results, (c) Tarel et al.'s results, (d) Meng et al.'s results, (e) Kim et al.'s results, (f) Proposed results

Comparing the results in Fig. 8, we find that Meng et al.'s results are the closest to the haze-free images, but Figs. 9 and 10 show that Meng et al.'s method does not handle sky regions well. He et al.'s method dehazes effectively, but sky regions are also processed poorly, as shown in Fig. 10.3b, and it recovers fewer details than Meng et al.'s method and ours. Tarel et al.'s dehazing is not as good as Meng et al.'s or ours, as shown in Figs. 9.2c and 9.3c, and image details are not processed well, e.g. around the flags in Fig. 9.1c and the leaves in Fig. 9.4c. Meng et al.'s method dehazes strongly, but sky regions show colour distortion, as in Figs. 10.2d and 10.3d. Kim et al.'s method dehazes well, especially in sky areas, but some regions are insufficiently dehazed, as shown in the red rectangles in Figs. 9.1e and 10.1e. Images dehazed by the proposed method are clearer and have rich details.

3.2 Objective evaluation

It is necessary to evaluate the experimental results objectively. Image quality assessment criteria can be divided into two categories: reference [21, 22] and no-reference [23]. In image dehazing, no-reference metrics are widely used. A clear image should have clear edges, rich details, etc., and well-dehazed images should share these features. Based on this analysis, we selected the following five objective evaluation methods.
(i) Blind assessment indicators. In [24], the indicators e and r̄ are proposed to evaluate the degree of edge enhancement, which represents the improvement in image visibility. e is the rate of increase of visible edges in the dehazed image and r̄ is the degree of gradient increase; large e and r̄ values mean good visibility of the dehazed image:

e = (n_r − n_0) / n_0    (17)

r̄ = exp[ (1/n_r) Σ_{P_i ∈ ℘_r} log r_i ]    (18)

where n_r and n_0 are the numbers of visible edges in the dehazed and original hazy images respectively, r_i is the ratio of the gradients in the dehazed and hazy images at edge pixel P_i, and ℘_r is the set of visible edges in the dehazed image.

(ii) Image visibility measurement (IVM). IVM is an evaluation method proposed by Yu et al. [25], built on the blind evaluation and on segmentation of visible edges; a large IVM value means a good dehazing effect:

(19)

where n_v is the number of visible edges, n_t is the total number of edges, ζ is the region containing the visible edges, and C̄_ζ is its average contrast.

(iii) Image contrast. The contrast of a haze-free picture is higher than that of a hazy image, and the higher the contrast, the clearer the image, so contrast is an important indicator for assessing dehazed images. Tripathi and Mukhopadhyay [26] used the contrast gain [27] to compare images processed by different dehazing algorithms; the larger the gain, the higher the contrast of the dehazed image:

C_gain = C̄_J − C̄_I    (20)

where C̄_J and C̄_I are the average contrasts of the dehazed and hazy images respectively.

(iv) Visual contrast measure (VCM). Rahman et al. [28] proposed the VCM to evaluate the quality of the restored image; the larger the VCM value, the better the visibility and the clearer the image:

VCM = 100 · R_v / R_t    (21)

where R_v is the number of local regions whose standard deviation exceeds a given threshold and R_t is the total number of regions.

(v) Structural similarity index (SSIM). SSIM is an index evaluating the similarity of two images.
The larger the SSIM value, the higher the similarity of the two images. If a clear haze-free image were used as the reference, an SSIM close to 1 would be better; however, we use the hazy image as the reference, so a smaller SSIM is better.

The above measurements are used to evaluate all the restored images in Figs. 9 and 10; the comparisons in Tables 1 and 2 show that our method gives satisfactory results. The SSIM values of our method are the lowest in the tables except in Fig. 10.3, indicating that it removes fog to a large degree, and most of the remaining evaluation values are the best. Except for being lower than Tarel et al.'s in Fig. 10.3, the r̄ values of the proposed method are the highest in both tables, indicating that the proposed algorithm recovers more details. The e of Tarel et al.'s algorithm in Fig. 9.4 is the highest, but the corresponding restored image in Fig. 9.4c is not the best: some fog remains around the leaves. The e of Meng et al.'s method in Fig. 10.2 is also the highest, but the corresponding dehazed image in Fig. 10.2d shows distortion in the sky area. Overall, the proposed method has the best dehazing performance.

Table 1. Objective quality assessment of Fig. 9 (the best performance is coloured in red; values that appear equal are marked red when they are larger before rounding)

Fig. 9.1      e      r̄     C_gain   VCM    SSIM
  He          5.2    1.3    0.07     65     0.93
  Tarel       11     2.4    0.28     53     0.79
  Meng        9.6    1.7    0.14     29     0.82
  Kim         8.7    1.8    0.27     35     0.86
  Our         11     3.0    0.24     63     0.75

Fig. 9.2      e      r̄     C_gain   VCM    SSIM
  He          7.0    1.6    0.25     61     0.70
  Tarel       14     2.0    0.18     67     0.85
  Meng        9.7    2.1    0.43     68     0.67
  Kim         14     2.1    0.57     67     0.68
  Our         15     3.6    0.57     78     0.56

Fig. 9.3      e      r̄     C_gain   VCM    SSIM
  He          18     1.1    0.34     23     0.53
  Tarel       22     2.4    0.27     61     0.78
  Meng        19     1.6    0.45     41     0.68
  Kim         21     2.3    0.82     67     0.66
  Our         23     3.6    0.83     79     0.52

Fig. 9.4      e      r̄     C_gain   VCM    SSIM
  He          4.9    1.1    0.11     68     0.92
  Tarel       16     1.7    0.18     82     0.83
  Meng        7.4    1.4    0.18     76     0.91
  Kim         1.8    1.4    0.18     77     0.95
  Our         8.0    2.5    0.38     87     0.74

Table 2. Objective quality assessment of Fig. 10 (the best performance is coloured in red)

Fig. 10.1     e      r̄     C_gain   VCM    SSIM
  He          17     1.8    0.11     26     0.78
  Tarel       18     3.6    0.13     50     0.78
  Meng        32     4.1    0.32     21     0.59
  Kim         20     3.2    0.35     54     0.74
  Our         30     5.0    0.47     30     0.55

Fig. 10.2     e      r̄     C_gain   VCM    SSIM
  He          18     1.7    0.12     50     0.91
  Tarel       24     2.5    0.17     21     0.83
  Meng        33     3.0    0.45     32     0.63
  Kim         26     2.9    0.75     42     0.60
  Our         31     4.4    0.57     56     0.54

Fig. 10.3     e      r̄     C_gain   VCM    SSIM
  He          11     0.9    0.07     15     0.62
  Tarel       19     3.4    0.16     47     0.81
  Meng        18     2.1    0.16     42     0.72
  Kim         23     2.8    0.29     41     0.70
  Our         24     3.4    0.36     26     0.70

Table 3 shows the running times of the different methods (testing platform: PC with a 64-bit operating system and an Intel(R) Core(TM) i3-2350M CPU). Kim et al.'s code is implemented in C, the others in Matlab. Kim et al.'s method is the fastest; our algorithm is slower than Kim et al.'s and slightly slower than He et al.'s, but faster than Tarel et al.'s and Meng et al.'s.

Table 3. Time comparison (the best performance is coloured in red, the second in blue)

            Fig. 9.1   Fig. 9.2   Fig. 9.3   Fig. 9.4   Fig. 10.1  Fig. 10.2  Fig. 10.3
  size      600×450    600×400    600×400    1024×768   361×240    292×220    400×300
  He        1.746 s    1.746 s    1.630 s    4.378 s    0.754 s    0.656 s    0.903 s
  Tarel     11.98 s    10.97 s    10.56 s    97.24 s    1.551 s    0.861 s    2.476 s
  Meng      6.913 s    5.175 s    5.293 s    13.77 s    2.887 s    2.638 s    3.154 s
  Kim       19.88 ms   19.78 ms   18.59 ms   38.11 ms   15.03 ms   15.41 ms   15.44 ms
  Our       2.6905 s   2.354 s    2.409 s    7.932 s    0.8216 s   0.6418 s   1.072 s

4 Conclusions

In this paper, a new dehazing algorithm using local linear fusion is proposed. First, the haze layer of the original hazy image is subtracted using the depth information to enhance the colour saturation, producing the first input image. Second, since the hazy image is bright and its details are lost, gamma correction is applied to the hazy grey image to reduce its overall brightness, and the details of the smoothed gamma-corrected image are then enhanced to obtain the second input image. Finally, the two input images are fused using the local linear model to obtain the haze-free image.
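The steps summarised above can be sketched end-to-end under simplifying assumptions: box smoothing stands in for the tree and guided filters, a single global regression window replaces the overlapping local windows of Section 2.4, the final η-weighted blend is an assumed form of the fusion step, and every parameter value is illustrative:

```python
import numpy as np

def box(img, r):
    """Mean filter (window (2r+1)^2) via a padded integral image; a
    crude stand-in for the tree/guided filters used in the paper."""
    k = 2 * r + 1
    p = np.pad(img, r, mode="edge")
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))
    return (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]) / (k * k)

def dehaze(I, r=3, gamma=1.5, alpha=1.0, eta=0.9, eps=1e-3, floor=0.05):
    """End-to-end sketch of the local linear fusion pipeline.

    I is an H x W x 3 hazy image in [0, 1]; all parameters are
    illustrative assumptions, not the authors' settings.
    """
    # Rough depth map: smoothed minimal channel (Section 2.2.1)
    D = box(I.min(axis=2), r)
    # First input: per-channel haze-layer subtraction (Section 2.2.2)
    F = np.stack([np.maximum(I[..., c] - I[..., c].mean() * D, floor)
                  for c in range(3)], axis=2)
    # Second input: gamma-corrected grey image plus local contrast (2.3)
    g = I.mean(axis=2) ** gamma
    S = g + alpha * (g - box(g, r))
    # Fusion: one linear regression J = a*F + b per channel (2.4),
    # blended toward the first input by eta
    J = np.empty_like(F)
    for c in range(3):
        f = F[..., c]
        a = (np.mean(f * S) - f.mean() * S.mean()) / (f.var() + eps)
        b = S.mean() - a * f.mean()
        J[..., c] = eta * f + (1 - eta) * (a * f + b)
    return np.clip(J, 0.0, 1.0)
```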
Compared with image enhancement and image restoration, this method is based on the idea of image fusion and the local linear physical imaging model. It takes the depth information into account and avoids the complex process of estimating the transmission and atmospheric light. The experimental results show that the restored image has high contrast, clear textures and rich details.

5 Acknowledgments

The work was partly supported by the Natural Science Foundation of Hebei Province of China under project nos. D2014203153 and D2015203310.

6 References

1. Fattal, R.: 'Single image dehazing', ACM Trans. Graph., 2008, 27, (3), pp. 72:1-10
2. Hyvärinen, A., Oja, E.: 'Independent component analysis: algorithms and applications', Neural Netw., 2000, 13, (4), pp. 411-430
3. Perez, P.: 'Markov random fields and images', Institut de recherche en informatique et systèmes aléatoires, 1998, 11, pp. 413-437
4. Tan, R.T.: 'Visibility in bad weather from a single image'. Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2008, pp. 1-8
5. He, K., Sun, J., Tang, X.: 'Single image haze removal using dark channel prior', IEEE Trans. Pattern Anal. Mach. Intell., 2011, 33, (12), pp. 2341-2353
6. Li, Y., Miao, Q., Song, J., et al.: 'Single image haze removal based on haze physical characteristics and adaptive sky region detection', Neurocomputing, 2015, 182, pp. 221-234
7. Mi, Z., Zhou, H., Zheng, Y., et al.: 'Single image dehazing via multi-scale gradient domain contrast enhancement', IET Image Process., 2016, 10, (3), pp. 206-214
8. Tarel, J.P., Hautiere, N.: 'Fast visibility restoration from a single colour or gray level image'. Proc. IEEE Int. Conf. Comput. Vis., 2009, 30, pp. 2201-2208
9. Meng, G., Wang, Y., Duan, J., et al.: 'Efficient image dehazing with boundary constraint and contextual regularization'. Proc. IEEE Int. Conf. Comput. Vis., Sydney, Australia, December 2013, pp. 617-624
10. Kim, J.H., Jang, W.D., Sim, J.Y., et al.: 'Optimized contrast enhancement for real-time image and video dehazing', J. Vis. Commun. Image Represent., 2013, 24, (3), pp. 410-425
11. Hu, W., Wang, R., Fang, S., et al.: 'Retinex algorithm for image enhancement based on bilateral filtering', J. Eng. Graph., 2010, 31, (2), pp. 104-109
12. Patel, O., Maravi, Y.P.S., Sharma, S.: 'A comparative study of histogram equalization based image enhancement techniques for brightness preservation and contrast enhancement', Signal Image Process., 2013, 4, (5), pp. 11-25
13. Land, E.H., McCann, J.: 'Lightness and retinex theory', J. Opt. Soc. Am., 1971, 61, (1), pp. 1-11
14. Petro, A.B., Sbert, C., Morel, J.-M.: 'Multiscale retinex', Image Process. On Line, 2014, 4, pp. 71-88
15. Jun, W.L., Rong, Z.: 'Image defogging algorithm of single colour image based on wavelet transform and histogram equalization', Appl. Math. Sci., 2013, 7, (79), pp. 3913-3921
16. Li, L., Jin, W., Xu, C., et al.: 'Colour image enhancement using nonlinear sub-block overlapping local equilibrium algorithm under fog and haze weather conditions', Trans. Beijing Inst. Technol., 2013, 33, (5), pp. 516-522
17. Koschmieder, H.: 'Theorie der horizontalen Sichtweite', Beitr. Phys. freien Atm., 1924, (12), pp. 171-178
18. Narasimhan, S.G., Nayar, S.K.: 'Interactive (de)weathering of an image using physical models'. Proc. IEEE Workshop Colour Photometric Methods Comput. Vis., 2003, 6, pp. 1-8
19. He, K., Sun, J., Tang, X.: 'Guided image filtering', IEEE Trans. Pattern Anal. Mach. Intell., 2013, 35, (6), pp. 1397-1409
20. Bao, L.C., Song, Y.B., Yang, Q.X., et al.: 'Tree filtering: efficient structure-preserving smoothing with a minimum spanning tree', IEEE Trans. Image Process., 2014, 23, (2), pp. 555-569
21. Wang, Z., Bovik, A.C., Sheikh, H.R., et al.: 'Image quality assessment: from error visibility to structural similarity', IEEE Trans. Image Process., 2004, 13, (4), pp. 600-612
22. Carnec, M., Callet, P.L., Barba, D.: 'Objective quality assessment of colour images based on a generic perceptual reduced reference', Signal Process., Image Commun., 2008, 23, (4), pp. 239-256
23. Sheikh, H.R., Bovik, A.C., Cormack, L.: 'No-reference quality assessment using natural scene statistics: JPEG2000', IEEE Trans. Image Process., 2005, 14, (11), pp. 1918-1927
24. Hautière, N., Tarel, J.-P., Aubert, D., et al.: 'Blind contrast enhancement assessment by gradient ratioing at visible edges', Image Anal. Stereol., 2008, 27, (2), pp. 87-95
25. Yu, X., Xiao, C., Deng, M., et al.: 'A classification algorithm to distinguish image as haze or non-haze'. Proc. IEEE Int. Conf. Image Graph., 2011, pp. 286-289
26. Tripathi, A.K., Mukhopadhyay, S.: 'Removal of fog from images', IETE Tech. Rev., 2012, 29, (2), pp. 148-156
27. Economopoulos, T., Asvestas, P.A., Matsopoulos, G.K.: 'Contrast enhancement of images using partitioned iterated function systems', Image Vis. Comput., 2010, 28, (1), pp. 45-54
28. Rahman, Z., Woodell, G.A., Jobson, D.J.: 'A comparison of the multiscale retinex with other image enhancement techniques'. Proc. IS&T 50th Anniversary Conf., 1997, pp. 1-6