Peer-reviewed article

High‐throughput 2 bit low‐density parity‐check forward error correction for C‐RAN optical fronthaul based on a hard‐decision algorithm

2018; Institution of Engineering and Technology; Volume: 13; Issue: 2; Language: English

10.1049/iet-cds.2018.5102

ISSN

1751-8598

Authors

Ao Li, Vahid Meghdadi, Jean-Pierre Cances, Christelle Aupetit-Berthelemot

Topic(s)

Advanced Wireless Communication Techniques

Abstract

IET Circuits, Devices & Systems, Volume 13, Issue 2, pp. 111-116. Review Article. Free Access.

High-throughput 2 bit low-density parity-check forward error correction for C-RAN optical fronthaul based on a hard-decision algorithm

Ao Li, Vahid Meghdadi (corresponding author, meghdadi@ensil.unilim.fr), Jean-Pierre Cances, Christelle Aupetit-Berthelemot; all with XLIM, UMR CNRS 7252, University of Limoges, Limoges, 87000, France.

First published: 29 January 2019. https://doi.org/10.1049/iet-cds.2018.5102
Abstract In this study, the authors demonstrate the potential of integrating low-density parity-check (LDPC) codes with a fully self-seeded optical architecture, using advanced optical and electrical models. The aim is to show the performance that can be expected from this association in the context of the cloud radio access network (C-RAN). Different decoding algorithms have been studied over the additive white Gaussian noise channel. The hard-decision gradient descent bit-flipping (GDBF) algorithm is finally chosen since it represents the best trade-off between decoder complexity and performance. Furthermore, the authors show that a small 2-bit quantisation is sufficient, which can increase the data rate and decrease the decoder latency in comparison with more complex ADCs. The same performance as floating-point GDBF is achieved by the new balanced weighted GDBF (BWGDBF) algorithm with 2-bit quantisation. Finally, the authors have implemented the BWGDBF algorithm on the FPGA Spartan 6 xc6slx16. The proposed system exhibits very good performance since it is able to achieve a 2.5 Gb/s throughput in the C-RAN context. 1 Introduction The architecture of the mobile network is continuously improving to provide higher network capacity and to meet the demands of users who want connectivity anywhere, anytime, on any device. One of the solutions proposed by operators is the cloud radio access network (C-RAN) [1]. This architecture centralises the baseband units (BBUs) by moving them from the antenna sites to the central office (CO), an approach also called BBU hotelling or BBU pooling. In this way, only the remote radio head (RRH) remains at the antenna site. This solution not only provides coverage and maximum throughput but also simplifies deployment and maintenance. As a result, a new optical segment, called the fronthaul, appears.
This transformation was enabled and supported by the exploitation of digital radio over fibre. This link uses the common public radio interface protocol [1] to transport the digital radio signal at a high bit rate. To reduce capital and operational expenditure, the use of wavelength division multiplexing is recognised as a good solution, instead of a point-to-point distribution network between the RRH and the BBU. In this context, an achromatic transmitter based on a reflective semiconductor optical amplifier (RSOA) in a self-seeded configuration is a promising line of investigation [2, 3]. To ensure quality of service in high-speed fibre transmission, forward error correction (FEC) is suitable for the optical fronthaul link so as to reach the required throughput and beyond. The objective of this paper is to study the potential of low-density parity-check (LDPC) codes [4, 5] as part of an optical architecture using the RSOA self-seeded configuration for the mobile access network. There exist two main families of LDPC decoding, soft decision and hard decision, according to the decoding method. Soft-decision algorithms such as belief propagation (BP), min-sum (MS), and their variants iteratively exchange messages between the individual processing nodes [4-11]. These algorithms offer the best performance on the additive white Gaussian noise (AWGN) channel. However, the number of arithmetic operations they require on limited-memory devices such as field programmable gate arrays (FPGAs) limits their use in high-speed telecommunication systems. As a result, hard-decision algorithms such as bit flipping (BF), weighted BF (WBF), gradient descent BF (GDBF), and their variants [12-28] may be preferred. One common characteristic of these algorithms is the use of an inversion function which represents the reliability of each bit.
Hard-decision algorithms check the information by exchanging messages between variable nodes and check nodes, and their convergence is usually faster than that of soft-decision algorithms. Our contributions in this field, mainly for a short block size (1000 bits) and a high code rate (0.9), are manifold and are summarised below. A new algorithm, balanced weighted GDBF (BWGDBF), is proposed with an optimised threshold value named delta (δ) [29, 30]. An optimisation of the multiple-step (MS) flipping mode is studied for different hard-decision algorithms. To reduce the latency of the circuit, an optimisation of the number of iterations has been derived; the convergence rate of the different algorithms is shown as well. The implementation of GDBF and BWGDBF on FPGA is demonstrated with highly efficient architectures. The FPGA resource occupation is shown in the last part, and the simulations we conducted showed that a low latency could be achieved. The rest of this paper is organised as follows: Sections 2 and 3 briefly recall the notation and the decoding theory of hard-decision LDPC codes. Sections 4 and 5 present the performance of LDPC codes with a 2 bit analogue-to-digital converter (ADC). The proposed practical FPGA implementation of the system is shown in Section 6. A summary of results is given in Section 7. 2 Notation The following notation is used. Let H be an m × n binary parity-check matrix. The binary linear code is defined as C(H) = {x : Hx = 0}, where arithmetic is over the binary Galois field GF(2). In the present report, a vector is assumed to be a column vector. By convention, we introduce the bipolar codeword corresponding to x as follows: (1) x̃_j = (-1)^(x_j) = 1 - 2 x_j, so that x̃ ∈ {+1, -1}^n. We assume a binary-input AWGN channel, defined by y = x̃ + z. The vector z = (z_1, ..., z_n) is a white Gaussian noise vector whose components are independent and identically distributed Gaussian random variables with zero mean and variance σ². Let us also define the index sets N(i) = {j : h_ij = 1} and M(j) = {i : h_ij = 1}, where h_ij is the (i, j)th element of the parity-check matrix H.
With this notation, we can also define the parity condition s_i = Π_{j∈N(i)} x̃_j = 1 for every check i, which simply means that each parity equation is satisfied. In the next section, we will consider LDPC codes with a length of 1000 bits and a coding rate of 0.9. Hence, m is equal to 100 and n is equal to 1000. Furthermore, we will consider regular codes, i.e. every column of H has weight 3 and every row has weight 30. The column weight of 3 is recognised as the best value for error-correction power, whilst the row weight is imposed by the coding rate. 3 Decoding algorithms Hard-decision (BF) algorithms represent an excellent trade-off between complexity and performance. Different variants of BF algorithms are described in [13]. The key to the hard-decision decoding of LDPC codes is the inversion function, which has the general form (2) Δ_k = α x̃_k y_k + β Σ_{i∈M(k)} s_i. The objective of the inversion function is to identify the most unreliable bits in the received sequence. It consists of two parts: the correlation x̃_k y_k represents the reliability of the received value, and the syndrome sum represents the reliability of the parity equations in which bit k participates. Here, α and β are parameters which reweight these reliabilities to achieve better performance. In fact, the bit which corresponds to the minimum of this inversion function (or metric) will be flipped. According to the type of inversion function, three principal algorithms are obtained: BF, WBF, and GDBF. Their cost functions are given below. BF algorithm: (3) Δ_k = Σ_{i∈M(k)} s_i. BF exhibits the poorest performance among hard-decision algorithms, and improved versions such as probabilistic BF and WBF only slightly improve it. WBF algorithm: (4) Δ_k = Σ_{i∈M(k)} w_i s_i, where the weight w_i = min_{j∈N(i)} |y_j| is the reliability of the bipolar syndrome s_i. Its variants can drastically improve performance compared with BF, such as modified WBF (MWBF) [16], improved MWBF (IMWBF) [17], and reliability-ratio-based WBF [18]. Among them, the best performance is obtained with IMWBF.
GDBF algorithm: the objective function is (5) f(x̃) = Σ_{k=1}^{n} x̃_k y_k + Σ_{i=1}^{m} Π_{j∈N(i)} x̃_j. This function reaches its maximum value when all bits are correct, so the inversion function of the GDBF algorithm is obtained by taking the partial derivative of f: (6) Δ_k = x̃_k y_k + Σ_{i∈M(k)} s_i. The GDBF algorithm was proposed to further improve WBF performance, and it has become a viable alternative to the BP algorithm. Short cycles (a small girth) in the parity-check matrix can prevent the function from reaching its global maximum; the consequence is that the search stays at a local maximum and cannot reach the optimal value. Many variants which aim at escaping local maxima, such as adaptive-threshold GDBF, reliability-ratio WGDBF, and improved hybrid GDBF, have been proposed in the scientific literature. Stochastic methods such as noisy GDBF are not discussed in this paper, because they require a huge number of iterations: typically at least 100 iterations are necessary to achieve performance comparable to that of soft-decision algorithms. General decoding process: Step 1: compute the sum of the bipolar syndromes (Σ_i s_i) and compare it with m. If the sum is smaller than m, errors exist in the received sequence; go to step 2. Otherwise, the decoding process stops (the received sequence is a valid codeword). Step 2: calculate Δ_k in order to identify the least reliable bit(s) to flip (minimum value). Step 3: flip the bit(s) obtained in step 2 and go to step 1. 4 Soft input BF In fibre transmission, the value of the message is always recorded with a finite number of bits. To convert the continuous physical quantity into a digital number representing the quantised amplitude, an ADC is added so as to be compliant with a realistic system. To increase the data rate and decrease the latency, a 2 bit ADC is chosen for the fronthaul link in the C-RAN context. To simplify the computation, the AWGN channel is replaced by a discrete channel. At the receiver side, according to the analogue received values, we have four different symbols: 00, 01, 10, and 11.
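The three-step loop above, combined with the GDBF metric, can be sketched in a few lines. This is an illustrative reconstruction of the standard single-flip GDBF decoder, not the authors' FPGA code; the function name and the dense-matrix layout are our own.

```python
import numpy as np

def gdbf_decode(H, y, max_iter=20):
    """Single-flip GDBF sketch (steps 1-3 above).

    H : (m, n) binary parity-check matrix (0/1 entries)
    y : length-n received soft values (bipolar signalling over AWGN)
    Returns the decoded bipolar word x in {+1, -1}^n.
    """
    x = np.where(y >= 0, 1, -1)                 # initial hard decision
    for _ in range(max_iter):
        # bipolar syndromes s_i = product of x_j over the ones of row i
        s = np.array([np.prod(x[H[i] == 1]) for i in range(H.shape[0])])
        if s.sum() == H.shape[0]:               # step 1: all checks satisfied
            break
        # step 2: inversion function Delta_k = x_k*y_k + sum of adjacent syndromes
        delta = x * y + H.T @ s
        k = int(np.argmin(delta))               # least reliable bit
        x[k] = -x[k]                            # step 3: flip it
    return x
```

With the toy parity-check matrix H = [[1,1,0],[0,1,1]] (whose codewords are the all-equal words) and y = [1.0, 1.0, -0.2], the loop flips the third bit and returns the all-ones word after one iteration.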
The bit-error rate (BER) at the output of an AWGN channel can be represented as a function of the error probability p: (7) p = Q(1/σ), assuming unit-amplitude bipolar signalling. From this equation, we can calculate different error probabilities for different values of the threshold δ. Using the Gaussian integral, we can compute the noise variance using the inverse of the Q-function: (8) σ = 1/Q⁻¹(p). We can define the quantisation rule as follows: any level above δ is quantised to the strong positive symbol, any level in (0, δ] to the weak positive symbol, any level in (−δ, 0] to the weak negative symbol, and any level below −δ to the strong negative symbol. With this quantisation rule, we can compute the probability of each detected symbol when a 1 or a 0 is transmitted. We have to consider four cases, i.e. the probabilities (9)-(13) that the decision variable is located in each of the four regions. The algorithm for determining σ and δ proceeds as follows: for a target p, we compute σ from (8), then we look for a compliant value of δ using (9)-(13) in order to obtain the desired target error probabilities. Many of the variants of WBF and GDBF lose their improvement when only 2 bits are used in the decoding process. The following steps are based on the GDBF algorithm. As done in previous works [29, 30], we have proposed two methods to improve the performance by combining single and multiple flipping steps. In the single mode (SM), we flip only 1 bit per iteration, whilst several bits are flipped in the multiple-step (MS) mode. For the following simulation results, δ is set at 0.25 and the number of iterations is equal to 20. We use the following two combinations: SM mode: first SM, then MSs, denoted SMGDBF. MS mode: first MSs, then SM, denoted MSGDBF. The maximum number of bits to be flipped in the MS mode is set to three in order to avoid oscillation (local maxima) in the decoding process; the flipped bits, given by (14), are chosen randomly among the detected candidates. 5 Optimisation 5.1 Optimisation of MSs In the following, we will suppose that there is no short cycle of length 4, i.e. the girth is at least equal to 6.
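The σ and δ determination described in Section 4 can be illustrated numerically. The sketch below assumes unit-amplitude bipolar signalling and a symmetric quantiser with thresholds −δ, 0, δ; the helper names are ours.

```python
from statistics import NormalDist

_nd = NormalDist()  # standard normal distribution

def Q(x):
    """Gaussian tail probability Q(x) = P(Z > x)."""
    return 1.0 - _nd.cdf(x)

def sigma_from_ber(ber, a=1.0):
    """Invert ber = Q(a / sigma), as in (8), to get the AWGN noise std."""
    return a / _nd.inv_cdf(1.0 - ber)

def region_probs(delta, sigma, a=1.0):
    """Probabilities of falling in the four quantiser regions
    (-inf, -delta], (-delta, 0], (0, delta], (delta, inf)
    when the level +a is transmitted (the four cases of (9)-(13))."""
    cdf = [_nd.cdf((e - a) / sigma) for e in (-delta, 0.0, delta)]
    return [cdf[0], cdf[1] - cdf[0], cdf[2] - cdf[1], 1.0 - cdf[2]]
```

For a target hard-decision BER of 10⁻³, sigma_from_ber(1e-3) gives σ ≈ 0.32, and region_probs(0.25, σ) then yields the four symbol probabilities needed to tune δ.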
Instead of using a fixed number of flipped bits, we propose to optimise this number by studying the evolution of the syndrome sum. If one error is corrected, the summation of the syndromes is reduced by the column weight multiplied by the difference between the correct and the wrong syndromes, weighted by the spectral efficiency of the modulation. Suppose the summation of the syndromes is S, and let the constant T represent the maximum summation of the syndromes, reached when the received block is correct. For our application, T is equal to 100. Under these conditions, the number of bits to flip is given by (15), where ⌈x⌉ represents the smallest integer not less than x. Figs. 1a-c show the rate of correct detection for different input BER values. Fig. 1: Rate of correct detection with different BER (input): (a) BER = 10⁻², (b) BER = 5 × 10⁻³, (c) BER = 10⁻³. The conclusion is quite simple: when the BER is large, there are many error patterns containing more than three errors and, in this case, it is not satisfactory to work with a fixed number of bits to be flipped. 5.2 Proposed decoding algorithm With the 2 bit ADC, the received symbols at the decoder take four values, and we use a reliability function to distinguish between wrong and correct bits. To create priorities between the detected bits, we can reweight the syndromes. In fact, it is straightforward to assume that the checks whose inputs received extreme values are the most reliable ones. Therefore, two new algorithms, named WGDBF and BWGDBF, are expressed by (16) for the WGDBF algorithm and (17) for the BWGDBF algorithm, where r_i represents the estimated reliability of the syndrome calculated by check node i. Intuitively, the bits received at the intermediate levels are less reliable than the bits with values ±1. To make the syndrome weight proportional to these reliabilities with a low-complexity circuit, we count the edges of each check node that carry extreme received values and compute the ratio with respect to the number of connected variable nodes.
For example, for a quasi-cyclic LDPC (QC-LDPC) code with column weight 3 and row weight 30, if all of the 30 edges converging to check node i are connected to variable nodes received at the values ±1, then r_i is equal to 1, which represents the most reliable syndrome. An experimental coefficient, always close to 1, adjusts the relative importance between the correlation term and the syndrome term. In Fig. 2, based on (17), we plot for different values of this coefficient the 'BER output' (i.e. the BER after FEC correction) as a function of the 'BER input' (the BER obtained at the output of the fronthaul link before FEC correction). Fig. 2 shows that when the coefficient is equal to 1, the curve achieves the best performance for the BWGDBF algorithm. Moreover, in Fig. 3, we can observe that BWGDBF converges faster than the other algorithms and that its performance is comparable with that of the MS algorithm with ten iterations. The MS algorithm is a simplified BP algorithm and is the most popular algorithm for soft-decision decoding. Fig. 2: Value of the coefficient as a function of BER (input). Fig. 3: Performance of different algorithms with 20 iterations and 2 bit ADC. If we suppose a 'BER input' equal to 10⁻³ and use the proposed inversion function (17) to detect the erroneous bits, we can observe in Fig. 4 that only four iterations are needed to stabilise the performance of the BWGDBF method. Fig. 4: Performance as a function of the number of iterations for different algorithms. In Fig. 5, we plot the correction power, i.e. the number of false bits which can be corrected at each iteration, for the different hard-decision algorithms tested. According to Figs. 4 and 5, we can also observe the convergence and the correction power of these algorithms when the 'BER input' is equal to 10⁻³. BWGDBF converges faster than the other algorithms.
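The reliability ratio r_i described above can be sketched as follows. We assume the 2-bit symbols are mapped to the levels ±1 (outer, reliable) and ±0.5 (inner, less reliable); the inner-level value was lost in the source, so ±0.5 here is purely illustrative, as are the function and variable names.

```python
import numpy as np

def syndrome_weights(H, yq, extreme=1.0):
    """BWGDBF check-node reliability sketch: r_i is the fraction of the
    edges of check node i whose 2-bit input sits at an outer level
    (|yq| == extreme).  r_i = 1 means all connected variable nodes
    were received at +/-1, i.e. the most reliable syndrome."""
    m, _ = H.shape
    is_extreme = (np.abs(yq) == extreme)
    r = np.empty(m)
    for i in range(m):
        row = (H[i] == 1)
        r[i] = np.count_nonzero(is_extreme & row) / np.count_nonzero(row)
    return r
```

For H = [[1,1,0],[0,1,1]] and quantised inputs [1.0, -0.5, 1.0], both checks see one outer-level edge out of two, so r = [0.5, 0.5].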
WGDBF cannot achieve the same performance because its inversion function sometimes cannot account for the difference in order of magnitude between its two terms. Fig. 5: Correction power with a different number of iterations: (a) three iterations, (b) four iterations.

Table 1. Comparison of different algorithms' performance when 'BER input' = 10⁻³ with 2 bit ADC

Algorithm (R = 0.9) with 2 bit ADC, with corresponding 'BER output': BF, WBF, IMWBF, GDBF, WGDBF, BWGDBF, MS (ten iterations)

Table 1 shows the comparison of the achievable 'BER output' for the different algorithms with a 2 bit ADC, considering a 'BER input' equal to 10⁻³. In line with our related works [29, 30], the new algorithm BWGDBF achieves the same performance as analogue GDBF. In this table, interval notation means that the output BER value lies between the two indicated bounds. 6 FPGA implementation Nowadays, QC-LDPC codes [31], as presented in the last section, are widely used in many wireless communication standards such as IEEE 802.16e [worldwide interoperability for microwave access (WiMAX)], IEEE 802.11n (wireless fidelity), IEEE 802.15.3c (wireless personal area network), etc. Considering a coding rate equal to 0.9, a fully parallel hardware architecture [32] is too complex to be implemented for the fronthaul link in the C-RAN context. Since we are seeking a low-latency, high-throughput circuit, a partially parallel architecture is recommended here [33-36]. In this section, we put the emphasis on the GDBF and BWGDBF implementation architectures. 6.1 Design procedure We implemented the proposed BWGDBF-based LDPC architecture in the Xilinx Integrated Synthesis Environment, using the very-high-speed integrated circuit hardware description language (VHDL). There exist three approaches to the LDPC decoding process: low-parallel, row-based, and column-wise. The low-parallel method is useful for moderate throughput, up to 1 Gb/s.
It instantiates a small number of variable node units (VNUs) and check node units (CNUs) and thus needs a large number of cycles for each iteration. In a row-based decoder, all n variable nodes are instantiated as VNUs, together with a number of CNUs. In this architecture, the VNUs work in a fully parallel way, and at each iteration a CNU processes a small number of edges (equal to the number of ones in the corresponding row of the matrix). This architecture requires a large amount of FPGA resources. To overcome the huge distributed-memory utilisation of the FPGA and to meet the high-throughput requirement, we chose the column-wise decoding method in this paper. The CNUs of a column-wise decoder need to be implemented fully in parallel. They receive a few edges at a time (equal to the number of ones in the corresponding column of the matrix). Owing to the lower degree of parallelism, this hardware mapping needs more cycles per iteration. All CNUs work in parallel and thus produce the check-node-to-variable-node messages at the same time. The distributed memory usage is ten times lower (for a code rate of 0.9) than that of the row-wise decoder. 6.2 Decoder structure The inversion function can be rewritten in terms of the syndrome sum; therefore, we first design the architecture of GDBF, which computes this sum. We assume that +1 is represented by logic level 0 and -1 by logic level 1. To be more explicit, a regular LDPC code with 3 ones in each column and 30 ones in each row of the parity-check matrix is used in our implementation. The input buffer (called X) represents the variable nodes and contains 1000 bits. Each register in buffer X contains just 1 bit, representing the corresponding decoded bit. At the beginning, it is initialised from the received data y, defined over 2 bits. The syndrome buffer, represented by S, is 100 bits in size. When a syndrome bit is set, the corresponding ith parity equation is not satisfied. The buffer X is checked bit by bit from the beginning to the end.
This is done by a pointer that addresses the elements of X. To process in parallel, we use three memories. The jth elements of these three memories contain the positions of the three ones in column j of the parity-check matrix. The simplified resulting circuit is given in Fig. 6. The enable of each flip-flop (FF) comes from the three memory outputs. After 1000 clock cycles, the syndromes are calculated and saved in the FFs. Fig. 6: Syndrome computations. As illustrated in Fig. 7, the three memories are controlled by the same address controller. Once we have the syndrome values, the inversion function can be computed with 2 bit adders, and after 100 clock cycles the registers contain the sums of the syndromes. Fig. 7: Inversion function of GDBF. The hard-decision algorithm decides whether or not to flip a bit according to the sum of the syndromes for that bit. To reduce the resource usage and the latency of the circuit, we flip only the most unreliable bits, i.e. those for which the number of unsatisfied syndromes is equal to 2 or 3. Fig. 8: Comparison of different algorithms and different functions of k: (a) function of different k for GDBF, (b) performance comparison between GDBF and BWGDBF.

Table 2. Resource utilisation of FPGA Spartan 6 (xc6slx16)

Logic utilisation            GDBF used   BWGDBF used   Available   GDBF, %   BWGDBF, %
number of registers          1309        1743          18,224      7         9
number of look-up tables     2815        3350          9112        30        36
number of occupied slices    959         1182          2278        33        57

To design the multiplication between x and y, since y is quantised on only 2 bits, the result of the multiplication is represented on 3 bits. We enumerate all the possibilities and design a combinational circuit to realise this operation. A truth table, which needs 4 bits, is created as well. Ideally, we should calculate the complete inversion function for every bit and then sort the values. We select the proper number of errors [see (15)] and then flip them.
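The column-wise syndrome computation of Fig. 6 can be mirrored in software: the three memories are modelled as a per-column list of the three row addresses holding a one. The representation and names below are our own sketch, not the VHDL design.

```python
def compute_syndromes(x_bits, col_rows, m=100):
    """Walk buffer X bit by bit (one clock cycle per column) and XOR each
    bit into the syndrome flip-flops addressed by the three memories.

    x_bits   : list of 0/1 decoded bits (logic level 1 here means -1)
    col_rows : col_rows[j] lists the rows with a one in column j
    Returns the m syndrome bits; a 1 marks an unsatisfied parity equation.
    """
    S = [0] * m
    for j, b in enumerate(x_bits):
        for i in col_rows[j]:        # the row addresses for column j
            S[i] ^= b                # FF toggles when the enable is active
    return S
```

For the toy matrix H = [[1,1,0],[0,1,1]] (col_rows = [[0],[0,1],[1]]) and the bits [0,1,1], the first parity equation is violated and the second is satisfied: S = [1, 0].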
To reduce the complexity, instead of calculating all 1000 inversion function values, we select k of them according to the sum of the syndromes: we select the bits whose number of unsatisfied syndromes equals 2 or 3. As soon as k values are selected, we stop the selection. Then, we add the correlation term to each of them and sort the buffer of size k. The buffer size k should be optimised. This value has been optimised by running a large number of simulations, and the results are given in Fig. 8a. We observe that with k = 10 the degradation compared with the ideal case is negligible. As mentioned in the last paragraph, the inversion function of the BWGDBF algorithm adds the syndrome reliability weights; therefore, we can construct the architecture in the same way as for GDBF. Here, to take the values of the reliability ratios into account, the inversion function is calculated over 10 bits. To simplify the division, we approximate the set of reliability values so that the division is realised by a simple 6 bit shift. We use the same truth table as before, where we compute the weighted correlation with a simple adder. The BER performance of the method is shown in Fig. 8b. Table 2 shows the resource usage for the Spartan 6 (xc6slx16). Moreover, the obtained latency is low with a 400 MHz clock frequency. 7 Conclusion In this paper, we have shown how to implement LDPC coding for the optical fronthaul link in the C-RAN context for the future fifth generation of radio-cellular systems, at a high bit rate of up to 2.5 Gb/s. Different kinds of LDPC decoding algorithms have also been presented. In the case of hard-decision algorithms, the effect of 2 bit sampling was considered. To achieve the best performance, a new algorithm, BWGDBF, has been proposed. Using several levels of optimisation, namely multiple flipping steps and the number of iterations [11, 29], our new BWGDBF performs as well as a soft-decision decoder using the MS algorithm with ten iterations. This is an outstanding result compared with the state of the art.
For the implementation, a realistic architecture based on a Xilinx FPGA was given as well. To achieve the lower latency required in the C-RAN context, which should be about 5 µs, two possibilities are available: (i) increase the clock frequency and (ii) divide the buffer X into two or more parts and run through them in parallel. The latter solution only requires the duplication of one exclusive-OR gate. 8 Acknowledgments This work was realised in the framework of the ANR LAMPION French project and with the support of the Elopsys Limousin region competitiveness cluster. 9 References 1Chanclou, P., Pizzinat, A., Le Clech, F., et al.: ' Optical fiber solution for mobile fronthaul to achieve cloud radio access network'. Future Network and Mobile Summit (Future Network Summit), Lisboa, Portugal, 3–5 July 2013, pp. 1– 11 2Won, E., Lee, K.L., Anderson, T.B.: 'Directly modulated self-seeding reflective semiconductor optical amplifiers as colorless transmitters in wavelength division multiplexed passive optical networks', J. Lightwave Technol., 2007, 25, (1), pp. 67– 74 3Simon, G., Saliou, F., Chanclou, P., et al.: ' Infrastructure impact on transmission performances of self-seeded DWDM colorless sources at 2.5 Gbps'. (ECOC) IEEE European Conf. Optical Communication, Cannes, France, 2014, pp. 1– 3 4Mackay, D.J.C., Neal, R.M.: 'Near Shannon limit performance of low density parity check codes', Electron. Lett., 1996, 32, (18), p. 1645 5Kou, Y., Lin, S., Fossorier, M.P.C.: 'Low-density parity-check codes based on finite geometries: a rediscovery and new results', IEEE Trans. Inf. Theory, 2001, 47, (7), pp. 2711– 2736 6Yazdani, M.R., Hemati, S., Banihashemi, A.H.: 'Improving belief propagation on graphs with cycles', IEEE Commun. Lett., 2004, 8, (1), pp. 57– 59 7Chen, J., Fossorier, M.P.C.: 'Density evolution for two improved BP-based decoding algorithms of LDPC codes', IEEE Commun. Lett., 2002, 6, (5), pp.
208– 210 8Jiang, M., Zhao, C., Zhang, L., et al.: 'Adaptive offset min-sum algorithm for low-density parity check codes', IEEE Commun. Lett., 2006, 10, (6), pp. 483– 485 9Xu, M., Wu, J., Zhang, M.: ' A modified offset min-sum decoding algorithm for LDPC codes'. Third IEEE Int. Conf. Computer Science and Information Technology (ICCSIT), Chengdu, China, 2010, pp. 19– 22 10Savin, V.: ' Self-corrected min-sum decoding of LDPC codes'. IEEE Int. Symp. Information Theory, Toronto, Canada, 2008, pp. 146– 150 11Balatsoukas-Stimming, A., Dollas, A.: ' FPGA-based design and implementation of a multi-GBPS LDPC decoder'. 22nd Int. Conf. Field Programmable Logic and Applications (FPL), Oslo, Norway, 2012, pp. 262– 269 12Miladinovic, N., Fossorier, M.P.C.: 'Improved bit-flipping decoding of low-density parity-check codes', IEEE Trans. Inf. Theory, 2005, 51, (4), pp. 1594– 1606 13Chang, T.C.-Y., Su, Y.T.: 'Dynamic weighted bit-flipping decoding algorithms for LDPC codes', IEEE Trans. Commun., 2015, 63, (11), pp. 3950– 3963 14Nguyen, D.V., Vasic, B.: 'Two-bit bit flipping algorithms for LDPC codes and collective error correction', IEEE Trans. Commun., 2014, 62, (4), pp. 1153– 1163 15Cho, J., Sung, W.: 'Adaptive threshold technique for bit-flipping decoding of low-density parity-check codes', IEEE Commun. Lett., 2010, 14, (9), pp. 857– 859 16Zhang, J., Fossorier, M.P.C.: 'A modified weighted bit-flipping decoding of low-density parity-check codes', IEEE Commun. Lett., 2004, 8, (3), pp. 165– 167 17Jiang, M., Zhao, C., Shi, Z., et al.: 'An improvement on the modified weighted bit flipping decoding algorithm for LDPC codes', IEEE Commun. Lett., 2005, 9, (9), pp. 814– 816 18Guo, F., Hanzo, L.: 'Reliability ratio based weighted bit-flipping decoding for low-density parity-check codes', Electron. Lett., 2004, 40, (21), pp. 1356– 1358 19Wu, X., Zhao, C., You, X.: 'Parallel weighted bit-flipping decoding', IEEE Commun. Lett., 2007, 11, (8), pp. 
671– 673 20Chen, T.-C.: 'Adaptive-weighted multibit-flipping decoding of low density parity-check codes based on ordered statistics', IET Commun., 2013, 7, (14), pp. 1517– 1521 21Zhang, L., Ye, Z., Feng, Q.: 'An improved multi-bit threshold flipping LDPC decoding algorithm', Int. J. Comput. Theory Eng., 2014, 6, (6), p. 510 22Li, G., Li, D., Wang, Y., et al.: ' Improved parallel weighted bit flipping decoding of finite geometry LDPC codes'. Fourth Int. Conf. Communications and Networking in China, Maoming, China, 2009, pp. 1– 5 23Wadayama, T., Nakamura, K., Yagita, M., et al.: ' Gradient descent bit flipping algorithms for decoding LDPC codes'. Int. Symp. on Information Theory and Its Applications, ISITA, Auckland, New Zealand, 2008, pp. 1– 6 24Boyd, S., Vandenberghe, L.: ' Convex optimization' ( Cambridge university press, New York, NY, USA, 2004), ISBN:0521833787 25Sundararajan, G., Winstead, C., Boutillon, E.: 'Noisy gradient descent bit-flip decoding for LDPC codes', IEEE Trans. Commun., 2014, 62, (10), pp. 3385– 3400 26Phromsa-Ard, T., Arpornsiripat, J., Wetcharungsri, J., et al.: ' Improved gradient descent bit flipping algorithms for LDPC decoding'. Second Int. Conf. on Digital Information and Communication Technology and it's Applications (DICTAP), Bangkok, Thailand, 2012, pp. 324– 328 27Haga, R., Usami, S.: ' Multi-bit flip type gradient descent bit flipping decoding using no thresholds'. Int. Symp. Information Theory and its Applications (ISITA), Honolulu, Hawaii, USA, 2012, pp. 6– 10 28Tithi, T., Winstead, C., Sundararajan, G.: ' Decoding LDPC codes via noisy gradient descent bit-flipping with re-decoding', arXiv preprint arXiv:1503.08913, 2015 29Li, A., Meghdadi, V., Cances, J.-P., et al.: ' High rate LDPC based decoder architectures with high speed ADC for C-RAN optical fronthaul'. Int. Conf. Computer and Communication Engineering (ICCCE), Kuala Lumpur, Malaysia, 2016, pp. 
386– 391 30Li, A., Meghdadi, V., Cances, J.-P., et al.: ' High throughput LDPC decoder for C-RAN optical fronthaul based on improved bit-flipping algorithm'. Tenth Int. Symp. Communication Systems, Networks and Digital Signal Processing (CSNDSP), Prague, Czech Republic, 2016, pp. 1– 5 31Fossorier, M.P.C.: 'Quasi-cyclic low-density parity-check codes from circulant permutation matrices', IEEE Trans. Inf. Theory, 2004, 50, (8), pp. 1788– 1793 32Zhang, W., Chen, S., Bai, X., et al.: ' A full layer parallel QC-LDPC decoder for WiMAX and Wi-Fi'. IEEE 11th Int. Conf. ASIC (ASICON), Chengdu, China, 2015, pp. 1– 4 33Al Hariri, A.A., Monteiro, F., Siéler, L., et al.: ' A high throughput configurable partially-parallel decoder architecture for quasi-cyclic low-density parity-check codes'. 2014 IEEE Conf. Design of Circuits and Integrated Circuits (DCIS), Madrid, Spain, 2014, pp. 1– 6 34Dai, Y., Chen, N., Yan, Z.: 'Memory efficient decoder architectures for quasi-cyclic LDPC codes', IEEE Trans. Circuits Syst. I, Regul. Pap., 2008, 55, (9), pp. 2898– 2911 35Wang, Z., Cui, Z.: 'Low-complexity high-speed decoder design for quasi-cyclic LDPC codes', IEEE Trans. Very Large Scale Integr. (VLSI) Syst., 2007, 15, (1), pp. 104– 114 36Khan, Z., Arslan, T., Macdougall, S.: ' A real time programmable encoder for low density parity check code as specified in the IEEE P802.16E/D7 standard and its efficient implementation on a DSP processor'. IEEE Int. Conf. SoC, Taipei, Taiwan, 2006, pp. 17– 20
