Peer-reviewed Article

Cryptanalysis of McEliece cryptosystem variants based on quasi‐cyclic low‐density parity check codes

2015; Institution of Engineering and Technology; Volume: 10; Issue: 4; Language: English

10.1049/iet-ifs.2015.0064

ISSN

1751-8717

Authors

Masoumeh Koochak Shooshtari, Mahmoud Ahmadian-Attari, Thomas Johansson, Mohammad Reza Aref

Topic(s)

Error Correcting Code Techniques

Abstract

IET Information Security, Volume 10, Issue 4, pp. 194-202. Research Article. Free Access.

Masoumeh Koochak Shooshtari (corresponding author, m-koochak@ee.kntu.ac.ir), Faculty of Electrical Engineering, K.N. Toosi University of Technology, Tehran, 16315-1355, Iran
Mahmoud Ahmadian-Attari, Faculty of Electrical Engineering, K.N. Toosi University of Technology, Tehran, 16315-1355, Iran
Thomas Johansson, Department of Electrical and Information Technology, Lund University, Lund, 221 00, Sweden
Mohammad Reza Aref, Department of Electrical Engineering, Sharif University of Technology, Tehran, 11365-11155, Iran

First published: 01 July 2016. https://doi.org/10.1049/iet-ifs.2015.0064. Citations: 10.

Abstract

One of the approaches to modify the McEliece cryptosystem to overcome its large key size is replacing binary Goppa codes with a new structured code. However, this modification makes such cryptosystems encounter some new attacks. There are a few modified McEliece cryptosystem variants which are known to be secure. One of them is the cryptosystem introduced by Baldi et al., which uses quasi-cyclic low-density parity check (QC-LDPC) codes. This cryptosystem is still unbroken, as no efficient attack has been reported against it since 2008. In this study, an attack has been applied to this cryptosystem which is feasible when the code length is a multiple of a power of 2. Also, an important weakness of this kind of cryptosystem has been pointed out, namely utilising a too low-weight intentional error vector. The authors have established a new security level for this cryptosystem which is applicable to other McEliece-like cryptosystems using QC-LDPC codes.
This security level, for instance, is 2^9.18 times lower than previous ones in the case of n = 4 × 4096 when only one ciphertext is available. The gain of the attack in this study can be increased if more than one ciphertext is available.

1 Introduction

Code-based public key cryptography is a highly studied area of cryptography which relies on hard problems in coding theory. The first cryptosystem in this field, proposed by McEliece [1], uses binary Goppa codes. Although the McEliece cryptosystem allowed fast encryption and decryption, at the time of the proposal it did not attract much attention, mainly because of its large public key size in comparison with cryptosystems based on number theory, such as RSA [2]. Since it was proven that cryptosystems based on number theory will not withstand quantum computer attacks [3], researchers have tried to find secure cryptosystems for a post-quantum age. Code-based cryptography is one of the most prominent candidates for post-quantum cryptography among the different options examined by cryptography researchers. Much work has been done to overcome the large public key size by applying new codes instead of the binary Goppa codes in the original McEliece cryptosystem. Examples are generalised Reed–Solomon codes [4], Reed–Muller codes [5], low-density parity check (LDPC) codes [6], quasi-cyclic (QC) codes [7, 8], convolutional codes [9], QC-LDPC codes [10-16], quasi-dyadic codes [17], non-binary Goppa codes [18, 19] and moderate-density parity check (MDPC) codes [20].

McEliece-like public key cryptosystems are subject to two kinds of attacks, namely decoding attacks and structural attacks. The goal of a structural attack is to recover the private key from the public key, whereas in a decoding attack an adversary who knows the ciphertext tries to find the plaintext.
The security level of a cryptosystem (and the complexity of an attack) is typically measured by the average number of binary operations required to break the cryptosystem, called the attack work factor. In McEliece-like cryptosystems, the attack with the lowest work factor is typically a decoding attack; thus, this kind of attack usually determines the security level. So far, the original McEliece cryptosystem is regarded as immune against structural attacks. However, McEliece-like cryptosystems using modified codes are often threatened by such attacks. Much work has been done to investigate their security level, and most of them were actually broken. Some of this cryptanalysis work can be found in [21-26].

Among the McEliece-like cryptosystems that try to overcome the large public key size problem, only a few have remained unbroken. Among these, the cryptosystem which uses QC-LDPC codes has the minimum key size and seems to be efficient to use. Using the LDPC code family in the McEliece cryptosystem was first discussed in [6] in a disappointing manner, as it was argued that, from a security viewpoint, the sparsity of the parity check matrix could not help reduce the key size. However, Baldi et al. proposed working with QC-LDPC codes instead of LDPC codes in the McEliece cryptosystem in order to exploit the good features of the LDPC family, such as large error correction capability and fast decoding. To do so, they added the idea of exploiting the QC structure in order to reduce the public key size [10]. Then, they altered the structure of their cryptosystem, without making any changes to the applied code, by replacing the permutation matrix used for obtaining the public key with a general transformation matrix. In this way they made the whole cryptosystem immune against decoding attacks using the dual code [11]. Shortly thereafter, this proposal was exposed to a structural attack in [22] due to its sparse transformations.
With some modifications, Baldi et al. proposed a new version of their cryptosystem which has since remained unbroken [12]. Although the cryptosystems based on LDPC and MDPC codes have some similarities, the approaches introduced in [26] for the cryptanalysis of the earlier version of the cryptosystem based on MDPC codes [27] are not applicable to attacking cryptosystem variants based on QC-LDPC codes. Actually, the approaches used in [26] are based on knowing both the parity check and generator matrices, which is not the case for [12].

1.1 Our contribution

In this paper, we focus on cryptosystems using QC-LDPC codes and, in particular, the last unbroken one [12]. The extended version of [12] in [14] includes an updated security analysis. We present a decoding attack using a special squaring technique which is applicable when the code length is a multiple of a power of 2. Our attack is much more efficient than the previously known methods. Moreover, our approach can easily be applied to other cryptosystems based on irregular codes, such as the cryptosystems introduced in [15, 16]. These kinds of cryptosystems have been immune to attacks since their publication, but we demonstrate that they have some flaws in their nature. It should be noted that using a code with odd circulant block length (p) can avoid our attack, although the complexity of encryption and decryption is then increased, as it becomes impossible to apply the Winograd algorithm to reduce their complexity [14]. This approach is similar to what has been done in [20].

1.2 Outline

The paper is organised as follows. We recall some basic facts about coding theory and QC-LDPC codes in Section 2. Next, an overview of the public key cryptosystems based on QC-LDPC codes is given in Section 3. Our new attack is introduced in Section 4, and Section 5 summarises and concludes the paper.

2 Preliminaries

This section briefly gives the necessary background on coding theory.
Definition 1. An [n, k] binary linear code C of length n and dimension k is a k-dimensional subspace of 𝔽2^n. It can be represented by a k × n generator matrix G, such that every codeword is c = uG for some u ∈ 𝔽2^k, or by an (n − k) × n parity check matrix H, such that Hc^T = 0 for every codeword c of C.

Definition 2. The Hamming weight of a binary codeword or vector x, denoted wH(x), is the number of its non-zero coordinates. The minimum distance d of a linear code C is the smallest Hamming weight of a non-zero codeword in C, or equivalently the minimum Hamming distance between any two different codewords.

Definition 3. A circulant matrix M over 𝔽2 is a p × p matrix whose ith row is the first row cyclically right-shifted i times. A circulant matrix is completely described by its first row alone; it can be equivalently described as a polynomial m(x) = m0 + m1·x + … + mp−1·x^(p−1) or simply as a vector m = [m0, m1, …, mp−1].

Proposition 1. Let ℳ be the set of all p × p circulant matrices over 𝔽2. Then an isomorphism exists between the rings (ℳ, +, ·) and (𝔽2[x]/(x^p + 1), +, ·).

Definition 4. A QC code of length n and dimension k is a linear code whose generator matrix or parity check matrix is composed of size-p circulant sub-matrices arranged in k0 × n0 blocks, where k = k0 × p and n = n0 × p. The main property of QC codes is that each cyclic shift of a codeword by p positions is also a codeword. Let B be a generator matrix or a parity check matrix of a QC code, composed of size-p circulant sub-matrices Bi,j. It is easy to verify that the QC property is preserved under matrix addition and under matrix multiplication with another matrix having the QC property. Therefore, we can establish an identification between B and a polynomial k0 × n0 matrix B(x) with coefficients in 𝔽2[x]/(x^p + 1) by mapping each sub-matrix Bi,j onto the polynomial bi,j(x) which defines it.

Definition 5. An LDPC code is a linear code which has a sparse parity check matrix.
Its error correction capability depends on the sparsity of H, and it allows a very low complexity decoding process.

3 McEliece cryptosystem using QC-LDPC codes

Here is a very brief overview of the QC-LDPC version of McEliece as described in [13].

3.1 Public key generation

Choose a t-error-correcting [n, k, dv] code from the QC-LDPC family with parity check matrix H = [H0 | H1 | … | Hn0−1], in which n = n0 × p, k = k0 × p and k0 = n0 − 1, where each Hi is a circulant p × p matrix with weight dv in each row and column. Moreover, Hn0−1 must be non-singular. Obtain the systematic generator matrix of the code, G′ = [Ik | P], where P is the k × p column of circulant blocks Pi = (Hn0−1^(−1) · Hi)^T, 0 ≤ i ≤ n0 − 2. Generate the public key G from G′ as G = S^(−1) · G′ · Q^(−1), where Q is a sparse n × n non-singular matrix with row and column weight m > 1 and S is a k × k dense non-singular scrambling matrix. Both Q and S have a QC structure.

3.2 Encryption

Let m ∈ 𝔽2^k be the plaintext. Multiply m by the public key G and add t′ random errors; that is, y = mG + e, where wH(e) = t′ ≤ t/m.

3.3 Decryption

Let y ∈ 𝔽2^n be the ciphertext. Given the secret low-weight parity check matrix H together with the matrices Q and S, y is first multiplied by Q from the right; then a low complexity decoding procedure is used to obtain mS^(−1). Finally, multiplication by S from the right yields the plaintext m. More details can be found in [13].

4 Presenting a new attack

In what follows, our decoding attack on the scheme from Section 3 is described, where we intend to obtain the error vector e from an observed ciphertext y. In our new approach, we use a form of information set decoding (ISD) algorithm, but only after applying a special squaring technique that will be explained later. One standard way of finding e in a received codeword (or ciphertext) is searching for the minimum weight codewords in the given code extended by the received codeword, that is, the code described by the generator matrix obtained by appending y to G as an additional row. Such an approach then uses an ISD algorithm to search for the minimum weight codeword, which is equivalent to finding e.
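To make the minimum-weight-codeword view concrete, here is a toy illustration (not from the paper: the [8, 4] code, message and error below are invented for the sketch, and numpy is assumed). The error e appears as the unique minimum-weight codeword of the code generated by G extended with y.

```python
# Decoding-attack view: e is the minimum-weight codeword of the code
# generated by [G; y]. Toy [8,4] code with minimum distance 4 (invented).
from itertools import product
import numpy as np

G = np.array([[1, 0, 0, 0, 0, 1, 1, 1],
              [0, 1, 0, 0, 1, 0, 1, 1],
              [0, 0, 1, 0, 1, 1, 0, 1],
              [0, 0, 0, 1, 1, 1, 1, 0]])
m = np.array([1, 0, 1, 0])
e = np.zeros(8, dtype=int)
e[6] = 1                            # intentional error of weight 1
y = (m @ G + e) % 2                 # ciphertext

Gext = np.vstack([G, y])            # generator of the extended code
best = None
for bits in product([0, 1], repeat=5):   # enumerate all codewords (toy size)
    c = np.array(bits) @ Gext % 2
    if c.any() and (best is None or c.sum() < best.sum()):
        best = c
assert np.array_equal(best, e)      # minimum-weight codeword recovers e
```

Real parameters make such enumeration infeasible, which is why ISD algorithms are used; the toy search only illustrates what the ISD step is looking for.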
The complexity of this attack can be determined through the complexity of the Stern algorithm [28] or its improved versions [29-32]. In QC-LDPC code-based cryptosystems, each block-wise cyclically shifted version of the ciphertext y is still a valid ciphertext. This QC structure can help us to speed up the attack procedure by a factor of √M, where M is the number of possible shifted versions of each received codeword [33, 20].

The main idea that needs to be described in our new attack is a special squaring technique, combined with a reconstruction step for 'partial' polynomials. The squaring technique, as introduced in [26], decreases both the dimension and the weight of each of the polynomials simultaneously. Squaring an arbitrary polynomial in 𝔽2[x]/(x^p + 1), p even, leads to a polynomial having terms of only even degrees. In general, after squaring d times, the degree of each term will be a multiple of 2^d if and only if p is a multiple of 2^d. If so, we can omit all positions that are not a multiple of 2^d; in this case, the dimension is decreased by a factor of 2^d. In addition, some collisions between non-zero elements can occur, and thus the weight of the polynomial can decrease after squaring. In the reconstruction procedure, we aim to reconstruct the original polynomial from its squared versions. The complexity of this technique depends on both the dimension and the number of times squaring is applied to the polynomial. In our attack, we apply the squaring technique to the ciphertext (received word) y, hoping to find the squared low-weight codeword faster in a lower dimension. Then, we will be able to find the original low-weight codeword from the squared low-weight codewords in the reconstruction procedure.

4.1 Squaring step

An operation referred to as special squaring is defined for both QC non-square matrices and long vectors over 𝔽2[x]/(x^p + 1), p even.
It should be noted that long polynomials must be divided into length-p parts before squaring. For notational convenience, let e(x), y(x) and g(x) represent e, y and G, respectively.

4.1.1 Vector squaring

Let e be a binary vector of length n = n0·p. The squaring procedure over 𝔽2[x]/(x^p + 1) divides e into n0 size-p vectors, e(x) = [e0(x), e1(x), …, en0−1(x)], using Proposition 1. Then each part ei(x) is squared separately in 𝔽2[x]/(x^p + 1). To complete the special squaring, all of the indices that are not a multiple of 2 in each part are omitted, so that the dimension of each part, and thus of the whole vector, is decreased by a factor of 2. This can be considered as a variable substitution where x^2 is replaced by a new variable x. After that, it can be transformed into vectorial form; this reduced form of e^2(x) is denoted e^2_SP.

4.1.2 Matrix squaring

Let G be a QC matrix containing k0 × n0 circulant p × p sub-matrices. By Proposition 1, we can write G in polynomial matrix form as G(x) with entries gi,j(x). The matrix squaring of G(x) is then defined by squaring each polynomial entry gi,j(x) separately. As for vectors, all of the positions that are not a multiple of 2 in each polynomial part are omitted, so the dimension of each part, and thus of the whole matrix, is decreased by a factor of 2. After that, it can be transformed back to QC form; this reduced form of G^2 is denoted G^2_SP. With this definition, we can square all QC matrices, including non-square ones, for p even. Applying squaring q times and omitting the positions which are not a multiple of 2^q decreases the dimension of the polynomials by a factor of 2^q, giving e^(2^q)_SP of length n/2^q and G^(2^q)_SP. Moreover, the polynomial weight is reduced only when 2^q | p, because further squaring actually has no effect on weight and dimension. For the sake of simplicity, we omit SP in the notation, so e^(2^q) and G^(2^q) denote e^(2^q)_SP and G^(2^q)_SP for q ∈ ℕ in what follows.
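A minimal sketch of the blockwise special squaring (illustrative Python with invented toy inputs, not the paper's code): each length-p block is squared in 𝔽2[x]/(x^p + 1) by sending position i to 2i mod p, XOR models collisions over 𝔽2, and the odd positions, which are always zero after squaring, are dropped.

```python
def special_square(block, p):
    """SP-square one length-p block: square in F2[x]/(x^p + 1)
    (position i maps to 2i mod p), then keep only the even positions."""
    sq = [0] * p
    for i, bit in enumerate(block):
        if bit:
            sq[(2 * i) % p] ^= 1   # XOR: two colliding ones cancel over F2
    return sq[0::2]                # odd positions are always zero; drop them

def special_square_vector(e, p):
    """Apply the SP-square blockwise to a binary vector of length n0 * p."""
    out = []
    for j in range(len(e) // p):
        out += special_square(e[j * p:(j + 1) * p], p)
    return out

p = 8
# x + x^3 squares to x^2 + x^6: no collision, weight preserved.
assert special_square([0, 1, 0, 1, 0, 0, 0, 0], p) == [0, 1, 0, 1]
# x + x^5 squares to x^2 + x^10 = x^2 + x^2 = 0: one collision, weight drops by 2.
assert special_square([0, 1, 0, 0, 0, 1, 0, 0], p) == [0, 0, 0, 0]
```

The second assertion shows exactly the collision effect the reconstruction step later has to undo.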
4.2 Searching for low-weight codewords

ISD algorithms are a class of decoding algorithms which can be used to search for low-weight codewords in random linear codes. Among ISD algorithms, an early efficient one is Stern's algorithm [28]. Improved versions of Stern's algorithm have been proposed in [29, 31, 32], but for better comparison with [14], the variant of [29] has been considered in our attack. To find a single codeword of weight w in a code of length n and dimension k, the complexity, denoted WF(n, k, w), is calculated as WF(n, k, w) = N/Pw, where N is the number of binary operations per iteration and Pw is the probability of finding a single codeword of weight w in one iteration; l and g are two parameters of the algorithm whose sizes are chosen so that WF is minimised (explicit expressions are given in [29]). As the code here has QC structure and each cyclic shift of size p of a codeword is also a valid codeword, the number of possible shifts, which is equal to n − k, is considered in calculating the whole complexity. This feature can reduce the complexity by a factor of √(n − k) [33, 20], so the ISD complexity can be calculated as WF(n, k, w)/√(n − k). Note that, as this algorithm is used after the special squaring step, it is applied with the parameters n/2^q and k/2^q.

4.3 Reconstructing the polynomials

In each iteration of the squaring procedure, non-zero elements can collide and become zero elements. This procedure causes a loss of information which must be reconstructed. In this section, we show how the reconstruction procedure is performed. Let e^2(x) of Hamming weight w′ be the squared polynomial of e(x) of Hamming weight w. If no collisions have occurred in the squaring procedure, then w = w′. However, each collision reduces the weight by 2; therefore, for α collisions the weight reduction is w − w′ = 2α. After the special squaring of e(x), there will be some one-positions and zero-positions in e^2(x).
In e^2(x), if the coefficient of x^i is 1, index i is called a one-position; otherwise, it is referred to as a zero-position. For each x^i (in either a one- or a zero-position) in e^2(x) over 𝔽2[x]/(x^p + 1), there are two monomials, x^(i/2) and x^((i+p)/2), in e(x) which can lead to x^i in e^2(x). If there is a one-position in e^2(x), exactly one of the two monomials x^(i/2), x^((i+p)/2) exists in e(x). If there is a zero-position at that index, then either both of them or neither of them exists in e(x). To obtain e(x) from e^2(x), we should therefore focus on the one- and zero-positions in e^2(x). The solutions for an arbitrary polynomial e(x) when e^2(x) is known are illustrated in Fig. 1 (reconstruction of e(x) from e^2(x)). In that example, exactly one of x^(i1/2), x^((i1+p)/2) must exist in e(x), and either none (most likely for sparse polynomials) or both of x^(j1/2), x^((j1+p)/2) must exist in e(x). In case no collisions have occurred in the squaring procedure, recovering e(x) is easy, as it is sufficient to select between the two sets of candidates x^(i1/2) and x^((i1+p)/2), which is done by solving a system of linear equations that will be explained later. However, if any collisions have occurred, the locations of the collisions should be recovered first. One approach is to check all possible collision patterns in each step of the reconstruction and solve the system of linear equations for each of them; this simple approach is useful when there are few collisions. The second approach is to recover the collisions by means of an ISD algorithm. Since e^2(x) of weight w′ = w − 2α has been recovered, the one-positions give the locations of a set of 2w′ positions which contains at least w′ of the w non-zero positions of e(x).
To recover the locations of the collisions, the extended code (extended by the received codeword) can be punctured in those positions, and the problem is then just finding a low-weight codeword of weight 2α in a code of length n − 2w′, while the dimension remains unchanged; this is equivalent to finding the locations of the collisions. When there are several collisions, the second approach is more efficient.

Let us assume that we have a known vector e^2 and we want to find e such that e·H^T = s. As we work in 𝔽2[x]/(x^p + 1), the syndrome equation can be written blockwise in polynomial form. Since the dimensions of e and of each ei are even (p is even), each ei can be expressed in terms of two binary vectors of size p/2: the unknown xi = [xi1, xi2, …, xi(p/2)] and the known ai = [ai1, ai2, …, ai(p/2)]. The xi are unknown, but the ai are known vectors which can be recovered from e^2: ai is non-zero in the positions whose corresponding polynomial term is a one-position of e^2 and zero in the remaining positions. Moreover, s is divided into two vectors of size p/2: s1 and s2. Since each sub-block of H^T has a circulant structure of size p × p, each circulant can be partitioned into four binary matrices Ej,i, 1 ≤ j ≤ 2, 1 ≤ i ≤ n0, of size p/2 × p/2, which yields the blockwise syndrome relation (1). We can use half of the equations in (1) as follows. First, let us assume that no collisions have occurred. Let ℐ be the set of all one-position locations in e^2, and for a vector or matrix x let xℐ contain the rows of x with indices in ℐ. Then we obtain a system of linear equations (2), in which s1, the ai and the Ej,i are known and constant. By solving (2), we can find xi, 1 ≤ i ≤ n0. In the average case, we expect to have a few collisions. Whenever a collision has occurred, it means that two non-zero elements have collided, leading to a zero element.
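The preimage pairs that drive this reconstruction can be sketched as follows (illustrative Python, not from the paper): squaring sends x^j to x^(2j mod p), so each even exponent i in e^2(x) has exactly the two preimages x^(i/2) and x^((i+p)/2) named in the text.

```python
def preimages(i, p):
    """The two monomial degrees in e(x) that can produce x^i in e(x)^2
    over F2[x]/(x^p + 1), for p even and i even: squaring maps x^j to
    x^(2j mod p), and 2j = i (mod p) has exactly the solutions
    j = i/2 and j = (i + p)/2."""
    assert p % 2 == 0 and i % 2 == 0
    return i // 2, ((i + p) // 2) % p

p = 8
# x^2 in e^2 can only come from x^1 or x^5 in e. A one-position means
# exactly one of the pair occurs in e; a zero-position means none or
# both (a collision) occur.
assert preimages(2, p) == (1, 5)
for i in range(0, p, 2):
    for j in preimages(i, p):
        assert (2 * j) % p == i
```

The linear system (2) then simply decides, for every one-position, which member of its pair belongs to e(x).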
In each step of the reconstruction, by using an ISD algorithm to find a low-weight codeword of weight 2α in the punctured extended code, that is, an [n − 2(w − 2α), k] code, we are able to find the binary collision vector u of size n that contains 2α ones, if α collisions have happened in squaring e. After finding u, we can extract each sub-vector ui = [vi, vi], which is symmetric over 𝔽2[x]/(x^p + 1). We should take ui into account in addition to each ei in (1). Then we can consider the indices of the non-zero positions of ui and move the combinations of the corresponding rows for each ei to the right-hand side. In this case, the number of unknowns remains constant if we consider each of the collision variable assignments as fixed. As we are interested in solving half of the equations, we can solve the resulting system (3) when collisions have occurred and find xi, 1 ≤ i ≤ n0. Since this linear equation system has wH(e^(2^q)) unknowns and (n − k)/2^q equations, it is over-defined, and solving it can yield a single solution (if it has any solution at all).

4.4 Complexity of reconstruction

The complexity of the reconstruction procedure in each step (constructing e^(2^(q−1)) from e^(2^q)), denoted CRecons, is the sum of three costs: the cost of searching for the collision vector of weight 2α when α collisions have occurred in the qth squaring step, the cost ((n − k)/2^q)^3 of the Gaussian elimination used to solve the linear equations, and the cost α × (n − k)/2^q of handling the collisions. CRecons is calculated according to the second approach explained in Section 4.3. In fact, in this approach, after finding the collision vector, the system of linear equations (3) needs to be solved only once. By contrast, following the first approach, all possible collision-vector patterns would have to be enumerated and the system of linear equations solved for each one; when there are several collisions, this simple approach is not efficient. It is also possible to find the collision vector entirely in one step.
Since e^(2^q)(x) of weight w − 2α has been recovered, its one-positions give the locations of a set of 2^q(w − 2α) positions which contains at least w − 2α of the w non-zero positions of e. To recover the locations of the collisions, the extended code (extended by the received codeword) can be punctured in those positions, and the problem is just finding a low-weight codeword of weight 2α in a code with parameters [n − 2^q(w − 2α), k]; the complexity of this ISD step follows accordingly. After finding the whole collision vector, the number of collisions that occurred in each step should be determined so that the system of linear equations (3) in each step can be solved. To keep the complexity of the reconstruction manageable, we prefer to find the collision vector by applying the ISD algorithm in each step and doing the reconstruction step by step. We can now summarise the new attack.

4.5 Attack procedure

1. Square the public key G and the received ciphertext y q times with the special squaring technique explained in Section 4.1.
2. Apply the ISD algorithm to the squared matrix and find the minimum weight codeword, which will be e^(2^q).
3. Calculate H from G in systematic form and the syndrome s = y · H^T.
4. Find the expected weight of e^(2^q) for different q and its probability of occurrence, as the weight of e, w = t′, is known.
5. Use e^(2^q) · (H^T)^(2^q) = s^(2^q) to reconstruct e by iterating the reconstruction procedure explained in Section 4.3; that is, find e^(2^(q−1)) from the given e^(2^q), for the expected weights, assuming α collisions in each step.

Example 1. Let h0(x) = 1 + x^2 + x^3 and h1(x) = x + x^2 + x^7 in 𝔽2[x]/(x^8 + 1) be the polynomial generators of H = [H0 | H1], which corresponds to G = [I8 | P] with p(x) = x^7 as public key. Also, let us assume that y = mG + e = [10000111, 10011101] is the ciphertext and we want to find e of weight w = 3 by our attack, assuming no collision has occurred. First, we square G and y. Then, by applying the ISD algorithm, we can find e^2_SP = [0100, 0011] as the minimum weight codeword.
From e^2_SP, we can derive three values: (i) e^2 = [00100000, 00001010]; (ii) a1 = [0100] and a2 = [0011]; (iii) the two candidate sets for e. It is easy to determine the two sets of possible values for e by the approach depicted in Fig. 1: eSet1 collects the monomials x^(i/2), that is, x in the first block and x^2, x^3 in the second block, while eSet2 collects the monomials x^((i+p)/2), that is, x^5 in the first block and x^6, x^7 in the second block. In the next step, H = [P^T | I8] and the syndrome s = y · H^T = [s1, s2] = [1001, 0010] can be calculated. From H^T, we derive E1,1, E2,1, E1,2, E2,2. By applying the known quantities according to (2), we obtain an equation system whose solution is x11 = 1, x21 = 0, x22 = 1. Owing to the structure of the candidate sets, if xij = 1, the corresponding element of e is chosen from the first set, eSet1, and if xij = 0, that element is chosen from eSet2. So we find e1(x) = x and e2(x) = x^3 + x^6. The error vector is e = [01000000, 00010010], and e·H^T = s verifies the correctness of e.

Example 2. Let us assume that, in the setting of Example 1, y = [10100101, 10011101] is given as the ciphertext and we want to find the error vector e of weight w = 5. We perform the same steps as in Example 1, recovering e^2_SP and s = [1101, 0110]. As the Hamming weight of e^2_SP is less than w = 5 and the difference is 2, one collision must have occurred, and we have to take it into account in the reconstruction step. To recover the collision vector, the punctured extended code is constructed first: we remove the two columns corresponding to x and x^5 in the first block and the four columns corresponding to x^2, x^3, x^6 and x^7 in the second block of the extended generator matrix. Applying the ISD algorithm, we find a codeword of weight 2 as the collision vector, [00100010, 00000000], which shows that one collision has occurred in the first block, that is, v1 = [0010].
So the system of linear equations is formed according to (3). It is noteworthy that (once the collisions are found correctly) this equation system is the same as in Example 1, with the same solution, so the discovered error vector is e = [01100010, 00010010], that is, the error of Example 1 with the collided pair x^2 + x^6 added to the first block. Note that, for the sake of simplicity, Q and S have been omitted in the public keys of our examples, but nothing changes when they are used.

4.6 Cryptanalysis and complexity of attack

The complexity of the attack is the sum of the reconstruction cost and the ISD algorithm complexity. In fact, by selecting the number of special squarings q, a tradeoff between these two complexities can be exploited. The attack can be handled in such a way that the ISD algorithm determines the attack complexity.

Selecting q: q must be selected in such a way that the complexity of the ISD algorithm is reduced by decreasing the dimension, while the complexity of the reconstruction is kept within an acceptable range around the highest ISD algorithm complexity. Note that q is upper bounded by the highest power of 2 which divides p, that is, 2^q | p.

Success probability of the attack: As explained before, by applying the ISD algorithm we can find the low-weight codeword e^(2^q). In the reconstruction step, if all possible search-weights are checked, it can be guaranteed that e can be found, but sometimes some of the search-weights occur with a very low probability.
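As a sanity check, the final verification of Example 1 (e·H^T = s with the systematic H = [P^T | I8] and p(x) = x^7) can be reproduced with plain GF(2) matrix arithmetic; numpy is assumed, and all values are taken from Example 1.

```python
# Verify Example 1: e = [01000000, 00010010] has the same syndrome
# s = [1001, 0010] as the ciphertext y under H = [P^T | I8], p(x) = x^7.
import numpy as np

def circulant(first_row):
    """Circulant matrix whose ith row is the first row right-shifted i times."""
    return np.array([np.roll(first_row, i) for i in range(len(first_row))])

P = circulant([0, 0, 0, 0, 0, 0, 0, 1])      # circulant of p(x) = x^7
H = np.hstack([P.T, np.eye(8, dtype=int)])   # systematic parity check [P^T | I8]

y = np.array([1, 0, 0, 0, 0, 1, 1, 1,  1, 0, 0, 1, 1, 1, 0, 1])
e = np.array([0, 1, 0, 0, 0, 0, 0, 0,  0, 0, 0, 1, 0, 0, 1, 0])

s = y @ H.T % 2                              # syndrome of the ciphertext
assert np.array_equal(s, [1, 0, 0, 1, 0, 0, 1, 0])   # s = [s1, s2] = [1001, 0010]
assert np.array_equal(e @ H.T % 2, s)        # the recovered error matches it
```

Since y and e have the same syndrome, y − e is a codeword, confirming that e is a valid error vector for this ciphertext.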
