Enhanced embedded zerotree wavelet algorithm for lossy image coding
2019; Institution of Engineering and Technology; Volume: 13; Issue: 8; Language: English
10.1049/iet-ipr.2018.6052
ISSN: 1751-9667
Authors: Rania Boujelbene, Larbi Boubchir, Yousra Ben Jemâa
Topic(s): Advanced Image Fusion Techniques
IET Image Processing, Volume 13, Issue 8, pp. 1364-1374. Research Article. First published: 29 May 2019.

Rania Boujelbene (corresponding author, rania.boujelbene@enis.tn), U2S Laboratory, University of Tunis-El Manar, Tunis, Tunisia
Larbi Boubchir, LIASD Laboratory, University of Paris 8, Saint-Denis, France
Yousra Ben Jemaa, U2S Laboratory, University of Tunis-El Manar, Tunis, Tunisia

Abstract

The embedded zerotree wavelet (EZW) algorithm is a well-known, effective coding technique for low-bit-rate image compression. In this study, the authors propose a modification of this algorithm, called the new enhanced EZW (NE-EZW), which achieves high compression performance in terms of peak signal-to-noise ratio and bitrate for lossy image compression. To distribute the symbol probabilities more efficiently, the proposed approach increases the number of coefficients that need not be encoded through the use of new symbols. Furthermore, the proposed method optimises the binary coding through the compressor-cell operator. Experimental results demonstrate the effectiveness of the proposed scheme over the conventional EZW and other improved EZW schemes for both natural and medical image coding applications. They also show that the proposed approach outperforms the most well-known algorithms, namely set partitioning in hierarchical trees (SPIHT) and JPEG2000.

1 Introduction

Nowadays, digital images are broadly employed in computer applications. Uncompressed images demand significant storage capacity and transmission bandwidth [1, 2]. To overcome these limitations, image compression is an efficient solution, one that is becoming more critical with the recent growth of data-intensive and multimedia-based web applications [3]. The main purpose of data compression is to reduce the volume of data necessary to represent a given amount of information by removing redundant information [4]. Transform coding is one of the most effective strategies in image compression; the main goal of the transform is to provide decorrelated coefficients and remove redundancy [5, 6]. Over the years, owing to its time-frequency characteristics, the wavelet transform (WT) has been a popular multiresolution analysis tool.
Its discrete version (the DWT) has become very effective for the compression of digital images, with the ability to encode and decode data progressively. These properties, which are helpful in image compression [7], have been exploited by different wavelet-based image coding schemes [8, 9]. Among the existing schemes [10-13], the embedded zerotree wavelet (EZW) coder [14] was one of the first and most powerful algorithms for wavelet-based image compression. It is a remarkably effective, fast and computationally simple technique for producing the bit stream of a wavelet-based image coder. However, it still has shortcomings: its main drawbacks are its limited performance in terms of compression gain and the many redundant symbols it produces. Many attempts to enhance the EZW coder and to reduce its limitations have therefore been suggested in the literature [15-21].

Our work focuses on the improvements to the EZW presented in [20, 21] and in [16]. In [20], the authors propose an efficient image coding method based on a new significance symbol map that reduces the number of zerotrees [21]. Each of the presented symbols is coded on three bits instead of the two used by the EZW; as a result, the total number of bits is increased, whereas the main objective of compression is to further minimise the size of the image. In [16], the authors present an image compression algorithm based on decreasing the coding bits by using three symbols instead of the four used in EZW. As a result, the total number of bits is reduced compared with EZW, but the total number of symbols is increased. Furthermore, the symbol probabilities are not as well redistributed as in the EZW, so the entropy is not optimised as effectively.

In order to reduce the total number of symbols while also decreasing the total number of bits, we propose in this paper a new enhanced (NE) version of the EZW algorithm for coding wavelet coefficients, namely NE-EZW, which combines the approaches of [16, 20, 21] presented above while overcoming their limitations. The main contributions of our proposed approach are as follows:

(i) Increasing the number of coefficients that need not be encoded and reducing the total number of symbols, by incorporating into our algorithm two steps that eliminate unnecessary symbols.
(ii) Decreasing the coding bits through the compressor-cell operator, which codes a run of consecutive symbols with a small number of bits. The implementation of the compressor cell is modified with respect to the existing one in order to adapt it to the newly added symbols.

Furthermore, the proposed NE-EZW compares advantageously with the EZW [14], IMP1EZW [20], SPIHT [10] and JPEG2000 [22] algorithms.

The remainder of this paper is organised as follows. Section 2 gives an outline of the state of the art on the EZW and its recent improvements. Section 3 describes the proposed NE-EZW algorithm. A case study illustrating the proposed method and the existing ones is given in Section 4. In Section 5, the results obtained with the NE-EZW are analysed and compared with those of other algorithms. Finally, Section 7 concludes the paper.

2 Background

This section presents an overview of the different algorithms on which the proposed image compression approach builds; the principle of each algorithm is given.
2.1 Shapiro's EZW algorithm

After the image decomposition step, image coding is carried out with the embedded zerotree wavelet coder. Since its publication, the EZW algorithm has attracted great attention; it was the first successful coding scheme developed for the WT [23]. It is based on progressive coding, compressing an image into a bit stream with increasing accuracy. Progressive encoding is also known as embedded encoding, which is the 'E' in EZW. The basic idea behind the EZW is to form a tree structure whose root is located in the lowest-frequency sub-band after the DWT is applied to the image. During the coding process of EZW, as illustrated in Fig. 1, each coefficient is compared to a predefined threshold Th.

Fig. 1: EZW process of encoding a wavelet coefficient

All the coefficients are scanned in the order shown in Fig. 2. This guarantees that when a node is visited, all its parents have already been scanned. During the dominant pass and the subordinate pass, the process starts at the lowest-frequency sub-band.

Fig. 2: Scanning order of sub-bands in a three-level wavelet decomposition. Sub-bands are named according to horizontal and vertical low- or high-pass filtering and the level of decomposition; e.g. LH3 denotes the horizontal low-pass and vertical high-pass band at the third recursion level

In the dominant pass, each coefficient visited in the scan is classified as positive significant (P), negative significant (N), isolated zero (Z) or zerotree root (T). A zerotree root is a coefficient that is insignificant and whose descendants are all insignificant; in this situation, the complete tree can be coded with the single symbol T to achieve compression. An isolated zero (Z) is a coefficient that is insignificant but has at least one significant descendant. Conversely, when the coefficient's magnitude is larger than the threshold, the coefficient is considered significant and is represented by either P or N depending on whether its value is positive or negative. In the subordinate pass, the significant coefficients (P or N) are refined with an additional bit of precision. When all the coefficients have been refined, the threshold Th is halved and the coding process is repeated. The encoder stops when the desired bitrate is reached, i.e. when the required number of transferable bits is exceeded. The encoded symbol bit stream, which contains a combination of the symbols P, N, Z, T and the refinement bits ('1' or '0'), is then arithmetically coded [24, 25] for transmission.
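To make the dominant-pass classification concrete, the short sketch below (in Python, with an illustrative `descendants` argument that is not part of the original description) assigns one of the four EZW symbols to a coefficient given the current threshold; it is a reading of the rule described above, not the authors' implementation.

```python
def classify_ezw(coeff, threshold, descendants):
    """Assign an EZW dominant-pass symbol to a wavelet coefficient.

    coeff       : coefficient value
    threshold   : current threshold Th
    descendants : values of the coefficients in the spatial-orientation tree
                  rooted at this coefficient (empty for the finest sub-bands)
    """
    if abs(coeff) >= threshold:
        # Significant: the sign decides between P and N.
        return 'P' if coeff >= 0 else 'N'
    # Insignificant: check whether any descendant is significant.
    if any(abs(d) >= threshold for d in descendants):
        return 'Z'   # isolated zero
    return 'T'       # zerotree root: the whole tree is coded with one symbol


# Example: an insignificant coefficient with only insignificant descendants
print(classify_ezw(-7, 32, [3, -1, 2]))   # -> 'T'
```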
2.2 Modified EZW algorithm (IMP1EZW)

The modified algorithm IMP1EZW presented in [20] is an efficient image coding method based on the principle of Shapiro's EZW [14]. The difference between the two algorithms lies in the significance test applied to the wavelet coefficients, which determines whether a significant coefficient has significant descendants or not. A new data structure for coding significant coefficients is therefore defined to reduce the number of zerotrees and the symbol redundancy of the original EZW, as follows: if a coefficient is tested and found to be significant, its descendants are also tested. If at least one descendant is significant, the coefficient is coded according to the coding rules of the EZW algorithm. However, if all the descendants are judged insignificant, the coefficient is coded according to the proposed coding rules, using the symbol Pt for positive coefficients and Nt for negative coefficients. The IMP1EZW algorithm is thus similar to the EZW algorithm in the dominant pass, but it adds two new symbols dealing with significant coefficients [21]. For the binary encoding, each symbol is coded on three bits.

2.3 Image compression algorithm based on decreasing bits coding

In order to reduce the encoding length, an image compression algorithm based on decreasing coding bits is proposed in [16]. In this algorithm, the wavelet coefficients are first coded using the three symbols P, N and Z (P: positive with magnitude greater than or equal to the threshold; N: negative with magnitude greater than or equal to the threshold; Z: magnitude less than the threshold). The symbol stream is then encoded in an order that reduces the number of coding bits. To this end, a decreasing-bits coding operator is used: the locations of the significant coefficients found during a scan are recorded, and these coefficients are skipped directly, without coding, in the next scan. All the Z symbols after the last P or N symbol are deleted. For the binary encoding, the symbols P and N are coded with 2 bits: the code of P is '11' and the code of N is '10'. For the symbol Z, the coding is as follows: when the number of consecutive Z is smaller than 5, each Z is coded separately with 2 bits ('01'); otherwise, the consecutive Z symbols are transformed into a compressor cell. The compressor cell is coded using the symbol '00' for both the head and the tail, with the bit string between the two '00' markers obtained from the binary representation of the number of consecutive Z. When this bit string contains a 0, a redundant code '1' is added after each zero.

3 Novel enhanced EZW image coding: proposed approach

To increase the compression performance, we propose a new enhanced EZW image coding algorithm (NE-EZW), which can be regarded as an extension of the EZW coding method. Unlike the previous improved versions of EZW, our proposed method contributes to the existing literature with a new algorithm that ensures a triple trade-off between the total number of symbols, the size of the compressed image and the quality of the reconstructed image. The NE-EZW approach uses two new symbols in its dominant pass compared with the original EZW. This increases the number of coefficients that need not be encoded and hence decreases the total number of symbols. In addition, the proposed algorithm further reduces the total number of symbols by adding two steps, which are explained below. The NE-EZW also greatly reduces the amount of information through the use of the compressor-cell notion in the binary coding step. As depicted in Fig. 3, the block diagram of the NE-EZW is composed of six steps: initialisation, dominant pass (or significance mapping pass), removal of the last T symbols, de-bit encoding, binary encoding and subordinate pass.

Fig. 3: Flow chart of the NE-EZW coding process

The initialisation step of the proposed NE-EZW coding algorithm consists in determining the initial threshold, which is the largest power of two less than or equal to the largest coefficient magnitude. The NE-EZW then uses a series of decreasing thresholds (T0, T0/2, T0/4, ...) and compares the wavelet coefficients with those thresholds.
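As a small illustration of the initialisation step, the sketch below computes the initial threshold as the largest power of two not exceeding the largest coefficient magnitude and lists the halved thresholds used by the successive passes. The coefficient values are simply those mentioned in the case study of Section 4; this is an illustrative reading, not the authors' code.

```python
import math

def initial_threshold(coefficients):
    """Largest power of two less than or equal to the largest coefficient magnitude."""
    c_max = max(abs(c) for c in coefficients)
    return 2 ** int(math.floor(math.log2(c_max)))

def threshold_sequence(coefficients, n_passes):
    """T0, T0/2, T0/4, ... used by the successive dominant/subordinate passes."""
    t0 = initial_threshold(coefficients)
    return [t0 // (2 ** k) for k in range(n_passes)]

# The largest magnitude among {63, -34, 49, 47} is 63, so T0 = 32
print(threshold_sequence([63, -34, 49, 47], 3))   # [32, 16, 8]
```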
For the dominant pass, if the magnitude of a coefficient is larger than a given threshold Th, the node is called significant with respect to that threshold [14]; otherwise, the coefficient is insignificant. The proposed algorithm uses six symbols: P, N, Pt, Nt, Z and T.

(i) If the coefficient is positive and its value is greater than or equal to the threshold Th, it is coded as P.
(ii) If the coefficient is negative and its magnitude is greater than or equal to the threshold Th, it is coded as N.
(iii) If the coefficient is positive, its value is greater than or equal to the threshold Th and all its descendants are smaller than the threshold (insignificant), the coefficient is coded as Pt.
(iv) If the coefficient is negative, its magnitude is greater than or equal to the threshold Th and all its descendants are smaller than the threshold (insignificant), the coefficient is coded as Nt.
(v) If the absolute value of the coefficient is less than the threshold Th and the coefficient has one or more significant descendants (greater than or equal to Th), it is coded as Z.
(vi) If the absolute value of the coefficient is less than the threshold Th and it has only insignificant descendants (smaller than Th), it is coded as T.

After the step of deleting the last T symbols, we move to the de-bit encoding step: the locations of the coefficients found significant during a scan are recorded, and these coefficients are skipped directly, without coding, in the next scan. This is illustrated by Algorithm 1 (see Fig. 4). This step reduces the total number of symbols and helps to decrease the total number of bits.

Fig. 4: Algorithm 1: de-bit encoding

The output of the de-bit encoding step is passed to the binary encoding, and only the significant coefficients are passed to the subordinate pass. For the binary encoding step, when P is encountered in the character encoding list, the code is set to '000'; when N is encountered, the code is set to '001'; when Pt is encountered, the code is '100'; when Nt is encountered, the code is '101'; for the symbol Z, the code is '010'. For the symbol T, when the number of consecutive T is smaller than 4, each T is coded as '110'; when a run of consecutive T of length greater than or equal to 4 is encountered, these consecutive T are transformed into a compressor cell. This is illustrated by Algorithm 2 (see Fig. 5).

Fig. 5: Algorithm 2: binary encoding

The compressor cell is coded using the code '111' as the head (beginning of the cell) and another '111' as the tail (end of the cell). The bit string obtained from the binary representation of the number of consecutive T is inserted between the head and the tail. When this bit string contains '11', a redundant code '0' is added after the two ones; also, when its last bit is 1, a redundant code '0' is appended. Hence, a compressor cell is formed with a minimum number of bits.

The final step in our flow chart is the arithmetic coding, which represents the third stage of any image compression system. It is an optional step in our proposed algorithm. The outputs of the binary encoding step and of the subordinate pass (the refinement pass, which refines the representation of the magnitudes of the significant coefficients) are coded using arithmetic coding before transmission in order to further improve the performance of the proposed algorithm.
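To make the binary-encoding step concrete, the sketch below follows the textual description of Algorithm 2 given above (our reading of the rules, not the authors' implementation): fixed 3-bit codes for P, N, Pt, Nt, Z and for runs of T shorter than 4, and a compressor cell ('111' head, bit-stuffed run length, '111' tail) for longer runs of T. Applied to the dominant-pass output of the case study in Section 4, it reproduces the 39-bit code reported there.

```python
CODES = {'P': '000', 'N': '001', 'Z': '010', 'Pt': '100', 'Nt': '101'}

def stuff(bits):
    """Bit-stuff a run length: insert '0' after every '11' and after a trailing '1'."""
    out, ones = [], 0
    for b in bits:
        out.append(b)
        ones = ones + 1 if b == '1' else 0
        if ones == 2:
            out.append('0')   # redundant '0' after two consecutive ones
            ones = 0
    if out and out[-1] == '1':
        out.append('0')       # redundant '0' when the last bit is 1
    return ''.join(out)

def binary_encode(symbols):
    """NE-EZW binary encoding (sketch of Algorithm 2)."""
    out, i = [], 0
    while i < len(symbols):
        if symbols[i] != 'T':
            out.append(CODES[symbols[i]])
            i += 1
            continue
        run = 1
        while i + run < len(symbols) and symbols[i + run] == 'T':
            run += 1
        if run < 4:
            out.append('110' * run)                               # each T coded as '110'
        else:
            out.append('111' + stuff(format(run, 'b')) + '111')   # compressor cell
        i += run
    return ''.join(out)

# Dominant-pass output of the case study in Section 4 (after removing the last T)
symbols = ['P', 'N', 'Z', 'T', 'Pt', 'T', 'T', 'T', 'T', 'Z', 'T', 'T', 'T', 'P']
code = binary_encode(symbols)
print(code, len(code))   # 000001010110100111100111010110110110000, 39 bits
```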
After finishing all the previous steps, the current threshold is halved and a new round, starting with the dominant pass, is carried out. This procedure is repeated until the desired compression ratio is reached; the desired compression ratio corresponds to the number of transferable bits required. Indeed, progressive coding provides the ability to reconstruct the image at any time. To be progressive, the proposed algorithm codes each symbol as it is obtained. The number of bits obtained after the coding step is then compared with the desired bitrate. If the desired bitrate is reached, the coding is completed and the coded bit string is sent to the receiver; otherwise, the algorithm continues.

4 Case study

In this section, a case study illustrating the proposed algorithm is presented. Let us consider Shapiro's 8 × 8 test matrix shown in Fig. 6 [26] and walk through the steps of the proposed algorithm.

Fig. 6: Example of decomposition to three resolutions for the Shapiro 8 × 8 matrix

Initialisation: For the test matrix, the maximum absolute value of the coefficients is 63. The threshold is determined by $T_0 = 2^{\lfloor \log_2 (\max_i |c_i|) \rfloor}$. It is equal to 32, since 32 is the highest power of 2 not exceeding the maximum absolute coefficient ($2^5 = 32 \le 63 < 2^6 = 64$).

Dominant pass: The wavelet coefficients are scanned along the path presented in Fig. 2. During the scan, each coefficient is compared to the current threshold and is assigned a significance symbol (P, N, Pt, Nt, Z or T). The result of this step is: PNZTPtTTTTZTTTPTT.

Remove the last T: Scanning the result of step 2 in reverse order gives TTPTTTZTTTTPtTZNP. We delete all the T symbols that precede the first significant symbol, i.e. the trailing T symbols of the original scan, which gives PTTTZTTTTPtTZNP; reversing back, the result of this step is: PNZTPtTTTTZTTTP.

De-bit encoding: The locations of the significant coefficients 63, −34, 49 and 47 are recorded, so these coefficients are skipped directly, without coding, in the second iteration.

Subordinate pass: A refinement bit is assigned to each coefficient found to be significant. The coefficients are compared to the value $T_0 + T_0/2 = 48$, the centre of the uncertainty interval [32, 64). For the coefficients 63, −34, 49 and 47, the subordinate bits are 1, 0, 1 and 0, respectively.

Binary encoding: The code is 000001010110100111100111010110110110000, i.e. 39 bits.

For the second iteration, the threshold value is equal to 16 ($T_1 = T_0/2$). Table 1 shows the results of the proposed NE-EZW algorithm and of the other algorithms for all the iterations from 2 to 6. In this table, D represents the symbol list after the first four steps of the diagram, B is the binary code and N is the number of bits.

Table 1: Results of all the iterations from 2 to 6 for EZW, IMP1EZW and our proposed algorithm NE-EZW

The total numbers of bits N obtained after each iteration by the NE-EZW, EZW and IMP1EZW algorithms are shown in Fig. 7.

Fig. 7: Total number of bits N for all algorithms versus iterations for the Shapiro test matrix

This figure clearly shows that the proposed algorithm outperforms the other algorithms at every iteration, achieving a substantial reduction in encoding length at the same iteration as EZW and IMP1EZW.
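As a minimal check of the 'remove the last T' and de-bit encoding steps on the case-study output above, the following sketch (illustrative helper names, not the authors' code) strips the trailing T symbols and records the positions of the significant coefficients so that they can be skipped in the next dominant pass.

```python
def remove_last_t(symbols):
    """Drop every trailing 'T' (equivalent to reversing, deleting the leading T's, reversing back)."""
    end = len(symbols)
    while end > 0 and symbols[end - 1] == 'T':
        end -= 1
    return symbols[:end]

def significant_positions(symbols):
    """Positions coded P, N, Pt or Nt; these are skipped (de-bit) in the next scan."""
    return [i for i, s in enumerate(symbols) if s in ('P', 'N', 'Pt', 'Nt')]

dominant = ['P', 'N', 'Z', 'T', 'Pt', 'T', 'T', 'T', 'T', 'Z', 'T', 'T', 'T', 'P', 'T', 'T']
trimmed = remove_last_t(dominant)
print(''.join(trimmed))                 # PNZTPtTTTTZTTTP, as in the text above
print(significant_positions(trimmed))   # four positions: the coefficients 63, -34, 49 and 47
```

Feeding the trimmed list to an encoder following Algorithm 2 (such as the sketch given in Section 3) yields the 39-bit code shown above.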
5 Experimental results

5.1 Data and evaluation criteria

To assess the efficiency of the proposed approach, a number of numerical experiments have been performed on natural and medical images. For this purpose, the standard Waterloo 8-bit greyscale image set [27], containing 12 images of different sizes (Lena, Barbara, Boat, Mandrill, Zelda, Goldhill, Peppers, House, Washsat, France, Montage, Library), and a set of colour images [27, 28], containing four images of size 512 × 512 at 24 bpp (Lena, Peppers, Mandrill, House), have been considered. In addition, two medical images have been downloaded from the MeDEISA database [29]. For a fair comparison, the proposed NE-EZW technique has been compared, first, with the conventional EZW and its improved version IMP1EZW reported in [20] and, second, with SPIHT [10] and JPEG2000 [22].

To evaluate the quality and the distortion of the reconstructed images, the following measures have been adopted in our experiments [30]: the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM). The PSNR is defined as

$\mathrm{PSNR} = 10 \log_{10}\left(\frac{\mathrm{PIC}^2}{\mathrm{MSE}}\right) \quad (1)$

where PIC is equal to 255 for images coded on 8 bits per pixel, and

$\mathrm{MSE} = \frac{1}{M}\sum_{i=1}^{M}\left(f(i) - \hat{f}(i)\right)^2 \quad (2)$

where $f$ and $\hat{f}$ represent, respectively, the original image and the reconstructed image, and $M$ is the total number of pixels in the image. The PSNR of a colour image with colour components R, G and B (red, green, blue) is obtained by applying (1) with the MSE averaged over the three colour planes:

$\mathrm{PSNR}_{\mathrm{RGB}} = 10 \log_{10}\left(\frac{\mathrm{PIC}^2}{(\mathrm{MSE}_R + \mathrm{MSE}_G + \mathrm{MSE}_B)/3}\right) \quad (3)$

Besides the PSNR, we also use the SSIM, which is considered to be correlated with the quality perception of the human visual system. The SSIM is a decimal value between 0 (zero correlation with the original image) and 1 (exactly the same image). It is defined as

$\mathrm{SSIM}(x, y) = \left[l(x, y)\right]^{\alpha}\left[c(x, y)\right]^{\beta}\left[s(x, y)\right]^{\gamma} \quad (4)$

where $x$ and $y$ are the two images being compared, and $l(x,y)$, $c(x,y)$ and $s(x,y)$ are, respectively, the luminance, contrast and structural comparisons between the two images. The exponents $\alpha$, $\beta$ and $\gamma$ are used to adjust the relative importance of the three terms.

To estimate the efficiency of our algorithm more precisely, we also compute the entropy of the set of symbols generated by the presented algorithms. This entropy is defined as

$E = \frac{1}{M}\sum_{i} f_i\, e_i \quad (5)$

where $f_i$ and $e_i$ represent, respectively, the appearance frequency and the entropy of each symbol, $M$ is the total number of pixels in the image, and

$e_i = -p_i \log_2 p_i \quad (6)$

where $p_i$ represents the probability of each symbol.
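For completeness, a straightforward NumPy implementation of the PSNR measure defined in (1) and (2) is sketched below (standard definitions, not code from the paper); for colour images, the MSE would be averaged over the R, G and B planes before applying (1).

```python
import numpy as np

def mse(original, reconstructed):
    """Mean squared error over the M pixels, as in (2)."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(original, reconstructed, pic=255.0):
    """Peak signal-to-noise ratio in dB, as in (1); PIC = 255 for 8 bpp images."""
    err = mse(original, reconstructed)
    return float('inf') if err == 0 else 10.0 * np.log10(pic ** 2 / err)

# Toy example: an 8-bit image and a noisy 'reconstruction'
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
rec = np.clip(img.astype(np.int16) + rng.integers(-5, 6, img.shape), 0, 255).astype(np.uint8)
print(round(psnr(img, rec), 2))
```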
5.2 Results analysis

The biorthogonal 9/7 wavelet filters, recommended for image compression in the JPEG2000 standard, have been employed in our experiments. In particular, all test images are decomposed up to level six; wavelet-based image coding systems commonly employ five or six decomposition levels, and it has been shown in [10, 14] that there is no noticeable difference in performance between the two. As the size and the type of the image affect the coding performance, several image sizes (256 × 256, 512 × 512) and types (natural, medical) have been considered.

5.2.1 Experiment 1: natural images

First, we analysed and compared the performance of the proposed compression method with that of the other coders on greyscale and colour images.

(a) Greyscale images: We tested the performance of the proposed NE-EZW algorithm on the greyscale image set described above. In this paper, we present only the results for some test images of size 512 × 512. The results obtained by the NE-EZW algorithm indicate that using six symbols instead of the four of the original EZW algorithm reduces the total number of symbols. The third and fourth steps of the proposed algorithm further reduce the total number of symbols (see Table 2).

Table 2: Total number of symbols, bits, entropy and appearance frequency of each symbol obtained by the three algorithms applied to the Lena image for different thresholds Th. The columns P, Pt, N, Nt, Z and T give the appearance frequency of each symbol.

(a) Proposed NE-EZW
Th  | Total bits | Total symbols | P      | Pt     | N      | Nt     | Z      | T       | Total entropy
128 | 11,586     | 4182          | 151    | 599    | 118    | 529    | 332    | 2453    | 0.0064
64  | 27,317     | 9818          | 316    | 1378   | 309    | 1342   | 789    | 5684    | 0.0153
32  | 59,323     | 21,216        | 871    | 2867   | 953    | 2807   | 1612   | 12,106  | 0.0329
16  | 120,612    | 43,004        | 2763   | 5000   | 2844   | 4932   | 3161   | 24,304  | 0.0656
8   | 250,293    | 89,179        | 8157   | 7598   | 8309   | 7705   | 7231   | 50,179  | 0.1351
4   | 572,352    | 202,968       | 25,547 | 10,394 | 25,546 | 10,538 | 19,806 | 111,137 | 0.3174

(b) IMP1EZW [20]
Th  | Total bits | Total symbols | P      | Pt     | N      | Nt     | Z      | T       | Total entropy
128 | 15,372     | 5124          | 151    | 599    | 118    | 529    | 1012   | 2715    | 0.0085
64  | 36,096     | 12,032        | 316    | 1378   | 309    | 1342   | 2388   | 6299    | 0.02
32  | 79,560     | 26,520        | 871    | 2867   | 953    | 2807   | 5305   | 13,717  | 0.0438
16  | 165,336    | 55,112        | 2763   | 5000   | 2844   | 4932   | 11,262 | 28,311  | 0.09
8   | 345,012    | 115,004       | 8157   | 7598   | 8309   | 7705   | 23,849 | 59,386  | 0.1866
4   | 765,480    | 255,160       | 25,547 | 10,394 | 25,546 | 10,538 | 52,205 | 130,930 | 0.4199

(c) EZW [14]
Th  | Total bits | Total symbols | P      | N      | Z      | T       | Total entropy
128 | 19,272     | 9636          | 750    | 647    | 1012   | 7227    | 0.0114
64  | 45,824     | 22,912        | 1694   | 1651   | 2388   | 17,179  | 0.027
32  | 98,432     | 49,216        | 3738   | 3760   | 5305   | 36,413  | 0.0598
16  | 189,680    | 94,840        | 7763   | 7776   | 11,262 | 68,039  | 0.1224
8   | 352,432    | 176,216       | 15,755 | 16,014 | 23,849 | 120,598 | 0.2457
4   | 677,776    | 338,888       | 35,941 | 36,084 | 52,205 | 214,658 | 0.5189

Bold values are those of the proposed algorithm, which are the best compared with the existing ones.

Consequently, the total number of bits is minimised compared with the two other algorithms; the overall gain in bits varies between 3786 and 193,128 bits. This is due to the compressor-cell notion used in the binary coding. It is important to note that, for all the considered thresholds, the entropy of the symbols is always lower for the proposed NE-EZW algorithm than for the EZW and IMP1EZW algorithms. Indeed, the reduction in the number of zerotrees increases the number of coefficients that need not be encoded. Moreover, it increases the number of significant coefficients, which provides a better reconstruction. Experimentally, it has been observed during EZW coding that the symbol T is generally the most probable; it has the highest appearance frequency compared with the other symbols (see Table 2).

From Table 3, it can be clearly seen that the proposed method outperforms the baseline EZW at all bitrates on the test images. At the same bitrate, the PSNR values of the images reconstructed with the NE-EZW technique (without and with arithmetic coding) are higher than those for EZW. The improvement over the standard EZW is significant, around 0.88-2.11 dB for NE-EZW without arithmetic coding and up to 5 dB for NE-EZW with arithmetic coding.
Table 3: Lossy performance comparison of the NE-EZW, IMP1EZW and EZW image codecs for several greyscale test images (PSNR in dB / SSIM)

Image    | Bitrate, bpp | EZW [14]       | IMP1EZW [20]   | Proposed NE-EZW | Proposed NE-EZW + arithmetic coding
Lena     | 0.25         | 30.05 / 0.881  | 31.04 / 0.908  | 32.06 / 0.9261  | 34.12 / 0.943
         | 0.5          | 33.28 / 0.9381 | 33.97 / 0.95   | 35.601 / 0.9632 | 37.5 / 0.9694
         | 1            | 36.82 / 0.9692 | 36.92 / 0.9705 | 38.82 / 0.9768  | 40.7 / 0.9888
Barbara  | 0.25         | 25.17 / 0.7994 | 25.43 / 0.816  | 26.05 / 0.8429  | 27.62 / 0.8829
         | 0.5          | 28.03 / 0.8881 | 28.83 / 0.899  | 30.27 / 0.9105  | 32.55 / 0.9434
         | 1            | 31.97 / 0.9501 | 32.84 / 0.9591 | 34.08 / 0.9612  | 37.5 / 0.98
Boat     | 0.25         | 27.34 / 0.8115 | 28.01 / 0.842  | 28.34 / 0.879   | 30.24 / 0.901
         | 0.5          | 30.21 / 0.9002 | 30.51 / 0.912  | 31.42 / 0.9306  | 33.33 / 0.953
         | 1            | 33.18 / 0.9372 | 33.98 / 0.949  | 34.66 / 0.9772  | 36.7 / 0.9899
Goldhill | 0.25         | 28.32 / 0.803  | 28.99 / 0.8382 | 29.63 / 0.84    | 30.28 / 0.8983
         | 0.5          | 30.19 / 0.8993 | 30.69 / 0.9001 | 31.09 / 0.912   | 33.2 / 0.9486
         | 1            | 33.4 / 0.9486  | 33.7 / 0.9486  | 34.5 / 0.9596   | 36.98 / 0.9716

Bold values are those of the proposed algorithm, which are the best compared with the existing ones.

Moreover, the experimental results show that the performance of the NE-EZW coder surpasses that of the IMP1EZW coder [20] at all bitrates for the different images. The improvement is particularly considerable at several bitrates when arithmetic coding is used; overall, this improvement varies between 1.29 and 4.66 dB. It is also observed from Table 3 that the NE-EZW produces higher SSIM values than the other algorithms.

Figs. 8 and 9 show the reconstructed Barbara and Boat images for the EZW, IMP1EZW and NE-EZW algorithms at 0.25 bpp. In this subjective test, we compare the perceptual quality of the three algorithms. It can be observed that the visual quality of the images in Figs. 8c and 9c is better than that of the others.

Fig. 8: Subjective evaluation of the NE-EZW, IMP1EZW and EZW algorithms for the Barbara greyscale image at 0.25 bpp. (a) Reconstructed by EZW with PSNR = 25.17 dB, (b) reconstructed by IMP1EZW with PSNR = 25.43 dB, (c) reconstructed by NE-EZW with PSNR = 27.62 dB

Fig. 9: Subjective evaluation of the NE-EZW, IMP1EZW and EZW algorithms for the Boat greyscale image at 0.25 bpp. (a) Reconstructed by EZW with PSNR = 27.34 dB, (b) reconstructed by IMP1EZW with PSNR = 28.01 dB, (c) reconstructed by NE-EZW with PSNR = 30.24 dB