Improving the bound on the restricted isometry property constant in multiple orthogonal least squares
2018; Institution of Engineering and Technology; Volume: 12; Issue: 5; Language: English
10.1049/iet-spr.2017.0460
ISSN 1751-9683
Authors: Haifeng Li, Jing Zhang, Jian Zou
Topic(s): Blind Source Separation Techniques
Abstract

By allowing $L$ (multiple) indices to be chosen per iteration, the multiple orthogonal least squares (MOLS) algorithm has been proposed as an extension of the classical greedy algorithm OLS. Wang and Li (2017) demonstrated that MOLS successfully reconstructs $K$-sparse signals from compressed measurements in at most $K$ iterations if the sensing matrix has unit $\ell_2$-norm columns satisfying the restricted isometry property (RIP) of order $LK$ with a suitable bound on the RIP constant. In this study, by increasing the RIP order by just one (i.e. from $LK$ to $LK + 1$), the authors refine that bound further. In the noisy case, they also propose a stopping criterion for the MOLS algorithm, and with this criterion MOLS can recover the support of the sparse signal successfully.

1 Introduction

In recent years, compressed sensing (CS) [1-3] has attracted increasing attention. The main task of CS is to recover a high-dimensional sparse signal (one whose coefficient vector has few non-zero entries) from far fewer samples than required by the classical Shannon-Nyquist theorem. Typically, let $x \in \mathbb{R}^n$ be a $K$-sparse signal, i.e. $|\mathrm{supp}(x)| \le K$, where $\mathrm{supp}(x) = \{i : x_i \neq 0\}$ is the support of $x$ and $|\mathrm{supp}(x)|$ is its cardinality. The measurements $y \in \mathbb{R}^m$ are given by

(1) $y = \Phi x$,

where $\Phi \in \mathbb{R}^{m \times n}$ is the sensing matrix.

Many greedy algorithms aim to recover the support of a sparse signal from (1). The orthogonal matching pursuit (OMP) algorithm is a widely used greedy algorithm for recovering the support of sparse signals [4]. Over the years, many modifications of OMP have been proposed to improve its performance, for example generalised OMP [5-7] and multipath matching pursuit [8].
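To make the setup concrete, the following is a minimal sketch (in Python, not taken from the paper) of the measurement model (1) together with a plain OMP loop of the kind discussed above. The dimensions, the Gaussian sensing matrix, the noise-free setting and the function name `omp` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, K = 64, 256, 8

# Sensing matrix with unit-norm columns, as assumed throughout the paper.
Phi = rng.standard_normal((m, n))
Phi /= np.linalg.norm(Phi, axis=0)

# K-sparse ground-truth signal x and measurements y = Phi x, i.e. model (1).
x = np.zeros(n)
true_support = rng.choice(n, size=K, replace=False)
x[true_support] = rng.standard_normal(K)
y = Phi @ x

def omp(Phi, y, K):
    """Plain OMP: pick one index per iteration, then re-fit by least squares."""
    residual, chosen = y.copy(), []
    for _ in range(K):
        idx = int(np.argmax(np.abs(Phi.T @ residual)))              # identification
        chosen.append(idx)
        coef, *_ = np.linalg.lstsq(Phi[:, chosen], y, rcond=None)   # estimation
        residual = y - Phi[:, chosen] @ coef                        # residual update
    x_hat = np.zeros(Phi.shape[1])
    x_hat[chosen] = coef
    return x_hat, set(chosen)

x_hat, est_support = omp(Phi, y, K)
print(est_support == set(true_support.tolist()))  # True when OMP recovers supp(x)
```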
In the sparse recovery literature, one of the typical methods akin to OMP is orthogonal least squares (OLS) [9-11]. The OLS and OMP algorithms coincide in the first step but differ afterwards. Herzet et al. [12] showed that OLS is computationally more expensive yet more reliable than OMP. Using the well-known restricted isometry property (RIP), Wen et al. [10] presented sufficient conditions under which the OLS algorithm recovers the true support set in the noiseless compressive sensing problem. We now recall the definition of the RIP [13]. A matrix $\Phi$ satisfies the RIP of order $K$ if there exists a constant $\delta \in (0, 1)$ such that

(2) $(1 - \delta)\,\|x\|_2^2 \le \|\Phi x\|_2^2 \le (1 + \delta)\,\|x\|_2^2$

for any $K$-sparse vector $x$. In particular, the minimum of all constants $\delta$ satisfying (2) is called the restricted isometry constant (RIC) $\delta_K$.

Recently, to reduce the computational cost of OLS, Wang and Li [14] proposed a new algorithm called multiple OLS (MOLS), which can be seen as an extension of OLS. Compared with OLS, MOLS selects $L$ indices at a time; therefore, MOLS often converges in fewer iterations and improves the computational efficiency of the OLS algorithm. When $L = 1$, MOLS reduces to OLS, which has been discussed in [10]. In this paper, for $L \ge 2$, we prove that for any $K$-sparse signal $x$, if the sensing matrix $\Phi$ has unit $\ell_2$-norm columns satisfying the RIP condition stated in Theorem 1 below, then MOLS exactly recovers the support of $x$ from (1) in at most $K$ iterations. The proposed sufficient condition improves on the existing one when $L \ge 2$. In the noisy case, we propose a stopping rule, along with an RIC-based sufficient condition and a requirement on the minimum magnitude of the non-zero components of the sparse signal, that together ensure successful support recovery.

We now introduce some notation. For index sets $S$ and $T$, $S \setminus T$ is the set of all elements contained in $S$ but not in $T$. $x_i$ denotes the $i$th entry of $x$ and $\phi_i$ the $i$th column of $\Phi$. $x_T$ is the vector containing the entries of $x$ indexed by $T$; similarly, $\Phi_T$ is the matrix containing the columns of $\Phi$ indexed by $T$. $I_m$ is the $m$-dimensional identity matrix. $P_T$ and $P_T^{\perp}$ denote the orthogonal projection operator onto the column space of $\Phi_T$ and onto its orthogonal complement, respectively. $\Phi^{\mathrm{T}}$ denotes the transpose of $\Phi$. The cardinality of a finite set $T$ is denoted by $|T|$. The support of $x$ is denoted by $\mathrm{supp}(x) = \{i : x_i \neq 0\}$. Finally, $\mathbf{0}$ denotes a matrix with all of its entries equal to 0.

2 Problem formulation

In this section, we list the MOLS algorithm and describe its key steps. The MOLS algorithm [14] is listed in Algorithm 1 (see Fig. 1). Throughout this paper, suppose that $T = \mathrm{supp}(x)$ with $|T| = K$.

Fig. 1 Algorithm 1: MOLS algorithm [14]

Compared with OLS, MOLS exploits multiple indices per iteration to improve the reconstruction performance for sparse signals. This improves the chance of choosing the true support and lets the algorithm converge in fewer iterations, so MOLS improves the computational efficiency over OLS [14]. An illustrative sketch of the procedure follows.
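Since Algorithm 1 appears only as a figure in the published article, the following is a minimal sketch of a MOLS-style iteration as described in the surrounding text and in [14]: in each iteration, $L$ candidate indices are scored, the support estimate is enlarged, and the signal estimate and residual are refitted by least squares. The scoring rule shown is the usual projected-column normalisation of the OLS criterion, consistent with the discussion around Lemma 1 below; the function name, tolerance and bookkeeping are assumptions rather than the authors' notation.

```python
import numpy as np

def mols(Phi, y, K, L, tol=1e-10):
    """Sketch of MOLS: select L indices per iteration, at most K iterations."""
    m, n = Phi.shape
    T = []                                   # current support estimate
    residual = y.copy()
    coef = np.zeros(0)
    for _ in range(K):                       # at most K iterations
        if np.linalg.norm(residual) <= tol:  # noiseless stopping test
            break
        # Identification: score candidate i by |<phi_i, r>| / ||P_T^perp phi_i||_2.
        if T:
            Q, _ = np.linalg.qr(Phi[:, T])           # orthonormal basis of span(Phi_T)
            proj_perp = Phi - Q @ (Q.T @ Phi)        # P_T^perp applied to all columns
        else:
            proj_perp = Phi
        scores = np.abs(Phi.T @ residual) / np.maximum(
            np.linalg.norm(proj_perp, axis=0), 1e-12)
        scores[T] = -np.inf                          # never re-select chosen indices
        T.extend(np.argsort(scores)[-L:].tolist())   # take the L best candidates
        # Estimation and residual update (least squares on the enlarged support).
        coef, *_ = np.linalg.lstsq(Phi[:, T], y, rcond=None)
        residual = y - Phi[:, T] @ coef
    x_hat = np.zeros(n)
    x_hat[T] = coef
    return x_hat, sorted(T)
```

In the noise-free setting, once the estimated support contains $\mathrm{supp}(x)$, the least-squares fit reproduces $x$ exactly, which is why iterating until the residual vanishes is a valid stopping rule when $K$ is unknown.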
For Algorithm 1 (Fig. 1), a naive implementation of the identification step computes a separate orthogonal projection for each candidate index, which is computationally expensive. Inspired by the technical report [11], Wang and Li [14] presented a cost-effective alternative for the identification step of Algorithm 1 (Fig. 1). The result can be stated in the following lemma.

Lemma 1. Consider (1) and the MOLS algorithm. At the $(k+1)$th iteration, the MOLS algorithm selects a set of $L$ indices according to (3).

By (3), a simple implementation sorts all the candidate values and then finds the $L$ largest ones (and their corresponding indices). We explain this in the following. At the $(k+1)$th iteration, MOLS computes the correlation between the current residual and each column of $\Phi$. From Algorithm 1 (Fig. 1), one has (4), where (a) follows from the definition of the residual. Then we have (5). Hence, in the identification step of Algorithm 1 (Fig. 1), the columns of the projected matrix indexed by the current support estimate are zero; therefore, at the $(k+1)$th iteration, MOLS selects $L$ indices from outside the current support estimate. Thus (6).

3 RIP analysis of MOLS

3.1 Perfect recovery condition for MOLS

In this section, in the noiseless scenario, we present a sufficient condition under which MOLS exactly recovers the true support set from (1), improving on the existing results. The following lemmas are useful in our analysis.

Lemma 2. If $\Phi$ satisfies the RIP of orders $K_1$ and $K_2$ with $K_1 \le K_2$, then $\delta_{K_1} \le \delta_{K_2}$.

Lemma 3. If $\Phi$ satisfies the RIP of order $K$ with constant $\delta_K$, then ...

Lemma 4 [10]. Suppose that ... , and let $\Phi$ have unit $\ell_2$-norm columns and satisfy the RIP of order ... . Then, for any ... , (7) holds.

For convenience, we say that MOLS makes a success at an iteration if it chooses at least one correct index at that iteration. We now state the sufficient condition under which the MOLS algorithm recovers the support of $K$-sparse signals. Our proofs are motivated by Wen et al. [7, 10].

Theorem 1. Let $x$ be any $K$-sparse signal and let $\Phi$ be a sensing matrix with unit $\ell_2$-norm columns. If $\Phi$ satisfies the RIP with the constant bounded as in (8), then the MOLS algorithm accurately recovers $x$ from (1) in at most $K$ steps. The proof of Theorem 1 is given in Section 7.

Remark 2. When $L = 1$, MOLS reduces to OLS, and a similar result has been presented in [10]. In this paper, we mainly consider $L \ge 2$. Owing to the selection of multiple indices at each iteration, MOLS often has better convergence properties than OLS and improves the computational efficiency over OLS. However, this selection rule makes the analysis of MOLS different from, and more challenging than, that of OLS.

Remark 3. In [14], it was shown that if the sensing matrix has unit $\ell_2$-norm columns satisfying the RIP with the bound (9), then MOLS chooses all support indices of the $K$-sparse signal $x$ in at most $K$ iterations. When $L = 1$, the results in [14] were improved by Wen et al. [10]. When $L \ge 2$, (9) shows that exact recovery is guaranteed under the bound of [14]. Compared with (8), by increasing the RIP order by just one (i.e. from $LK$ to $LK + 1$), it is possible to refine the bound further.

3.2 Recovery from noisy measurements

In this section, we present the RIP-based condition under which MOLS selects the true support set $T$ from the noisy measurements

(10) $y = \Phi x + v$,

where $v$ is the noise vector. As in the OLS algorithm, the sparsity of the signal is assumed to be known beforehand in MOLS. In reality, the sparsity is usually unavailable. In the noiseless case, if the sparsity is unknown, MOLS can simply iterate until the residual equals zero. In the noisy case, the stopping criterion of the MOLS algorithm is an open problem. Theorem 4 gives a stopping criterion, together with an RIC restriction, under which the MOLS algorithm succeeds.

Theorem 4. Consider the measurement model in (10). Then MOLS with the proposed stopping rule recovers the support of $x$ exactly in at most $K$ iterations, provided that $\Phi$ has unit $\ell_2$-norm columns satisfying (8) and the minimal magnitude of the non-zero entries of $x$ satisfies (11). The proof of Theorem 4 is given in Section 8.

Remark 5. In the noisy case, Theorem 2 in [14] presented an error bound between the signal estimated by the MOLS algorithm and the original signal $x$; however, it does not guarantee that the correct support is recovered. Although [14] (Theorem 3) proposed a condition under which the MOLS algorithm selects at least one correct atom provided it succeeded in the previous iterations, that condition requires the sparsity $K$ of the signal to be available, and it only proves that MOLS exactly recovers the support of $x$ under the stopping criterion used there. In Theorem 4, we propose a stopping criterion for the MOLS algorithm, and with this criterion the MOLS algorithm can recover the support of the sparse signal successfully. A hedged sketch of such a residual-based stopping rule follows.
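Because the exact threshold in the stopping rule of Theorem 4 is not reproduced above, the sketch below uses a generic residual-norm test $\|r\|_2 \le \varepsilon$, with `epsilon` standing in for an assumed known bound tied to the noise level; it reuses the MOLS-style loop from the earlier sketch and changes only the termination test.

```python
import numpy as np

def mols_noisy(Phi, y, L, epsilon):
    """MOLS-style loop for noisy data y = Phi @ x + v, stopped by a residual test."""
    m, n = Phi.shape
    T, residual, coef = [], y.copy(), np.zeros(0)
    # Iterate until the residual norm drops to the assumed noise level epsilon,
    # or until the support estimate becomes as large as the number of measurements.
    while np.linalg.norm(residual) > epsilon and len(T) + L <= m:
        if T:
            Q, _ = np.linalg.qr(Phi[:, T])
            proj_perp = Phi - Q @ (Q.T @ Phi)
        else:
            proj_perp = Phi
        scores = np.abs(Phi.T @ residual) / np.maximum(
            np.linalg.norm(proj_perp, axis=0), 1e-12)
        scores[T] = -np.inf
        T.extend(np.argsort(scores)[-L:].tolist())
        coef, *_ = np.linalg.lstsq(Phi[:, T], y, rcond=None)
        residual = y - Phi[:, T] @ coef
    x_hat = np.zeros(n)
    x_hat[T] = coef
    return x_hat, sorted(T)
```

Under the conditions of Theorem 4, namely the RIP bound (8) and the minimum-magnitude requirement (11), the theorem guarantees exact support recovery with its residual-based rule; the precise threshold should be taken from the theorem rather than from the placeholder used here.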
4 Conclusion

In this paper, based on the RIP, we have presented a new analysis of the MOLS algorithm. Our result shows that if the sensing matrix has unit $\ell_2$-norm columns satisfying the RIP condition (8), then MOLS accurately recovers any $K$-sparse signal $x$ from the compressed measurements $y = \Phi x$. The proposed bound improves on the existing bound of [14]. In the noisy case, we have proposed a stopping criterion for the MOLS algorithm, and with this criterion the MOLS algorithm can recover the support of the sparse signal successfully.

5 Acknowledgments

This work was partially supported by the National Natural Science Foundation of China (Grant nos. 11526081, U1404603, 11601134, 11671122 and 11701157) and the Key Scientific Research Project of Colleges and Universities in Henan Province (Grant nos. 17A110008, 18A120009 and 162102210269).

7 Appendix 1: Main analysis

If MOLS chooses a correct index at an iteration, we say that MOLS makes a success at that iteration. The proof is based on mathematical induction. Suppose that MOLS chooses at least one correct index at each of the previous $k$ iterations, and let $\ell$ denote the number of correct indices chosen so far; then (12) holds. We need to show that the MOLS algorithm also selects at least one correct index at the $(k+1)$th iteration [see Algorithm 1 (Fig. 1)]. Here, we allow $k = 0$, so the proof for the first selection corresponds to the case $k = 0$; clearly, the induction hypothesis holds in this case since no index has yet been selected. The case $L = 1$ has been discussed in [10]; therefore, in the following, we only prove Theorem 1 for $L \ge 2$. Before proving Theorem 1, we introduce the following useful lemma [14-17].

Lemma 5. Let the quantities in (13) be defined such that (14) holds, where $W$ contains $L$ wrong indices that are not in the support set and the associated quantities are $L$ positive integers. Suppose that $\Phi$ satisfies the RIP with the bound (15). Then (16) holds.

Proof. Define the quantities in (17) and (18), where (19) and (20) hold. By a simple calculation, we have (21). It is straightforward to show (22), (23) and (24). Using the properties of the norm, one has (25), where (a) follows from (21). On the other hand, using the RIP, one has (26), where (a) is from ... and Lemma 3, (b) is from (18), (c) is from (21), and (26) is from Lemma 2. Combining (25) with (26), we obtain (16), where (a) follows from (22), (b) from Lemma 4, (c) from Lemma 2 and (d) from (15). This completes the proof. □

7.1 Proof of Theorem 1

Proof. At the $(k+1)$th iteration, in order to prove that MOLS selects at least one correct index, according to (1), (14) and Lemma 1, we need to verify inequality (27). By (4) and the properties of the norm, we have (28), where (a) is from ..., (b) is from ..., (c) is from (12), (d) is from ..., (e) is from Lemma 5 and (8), and (28) is from (14) and (4). So (28) implies that (27) holds. Thus, at the $(k+1)$th iteration, MOLS selects at least one correct index. In each iteration, the MOLS algorithm therefore selects at least one correct index from the support set of $x$.
Therefore, MOLS can exactly recover the support of $x$ in at most $K$ steps. This completes the proof. □

8 Appendix 2

Proof. The proof of Theorem 4 follows along a similar line to the analysis in the proof of Theorem 1 and [10]. In contrast to the noiseless scenario, our analysis is divided into two parts. In the first part, we establish the condition guaranteeing success for $L = 1$; in the second part, we establish the condition guaranteeing success for $L \ge 2$. In the noisy case, by (10), one has (29).

8.1 Proof for $L = 1$

For $L = 1$, MOLS reduces to OLS [10]; therefore, the proof for this case owes much to the work of Wen et al. [10]. In the noisy case, combining (10) and [10], at the $(k+1)$th iteration, in order to prove that MOLS (i.e. OLS) selects one correct index, we need to verify inequality (30). It is equivalent to show, for each candidate index, inequality (31). Recall that $\Phi$ has unit $\ell_2$-norm columns; by (29) and the triangle inequality, we obtain (32) and (33). By (32) and (33), to show (31), it suffices to show (34). We give a lower bound on the left-hand side of (34) in (35), where (a) follows from (C.1) in [10], (b) follows from Lemma 4 in [10], and (35) follows from (C.5) in [10]. We then give an upper bound on the right-hand side of (34) in (36), where (a) follows from the RIP and Lemma 3 in [10]. From (35) and (36), (34) is guaranteed by (37). Combining (11) and (37), we can show that (37) holds; thus (30) holds. This means that the MOLS algorithm selects one correct atom in each iteration.

In the following, we show that MOLS does not stop early under the proposed stopping rule. When all the correct atoms have been selected, the residual becomes (38). If MOLS stopped at an earlier iteration at which not all the correct support indices had been identified, we would have (39), where (39) follows from (11) and (40). Therefore, the MOLS algorithm does not stop early. This completes the proof for $L = 1$.

8.2 Proof for $L \ge 2$

Combining the proof of Theorem 1 and (10), at the $(k+1)$th iteration, in order to prove that MOLS selects at least one correct index, we need to verify inequality (41), where the relevant quantity is defined in (14). From (32) and (33) with (42), to show (41), it suffices to show (43). From (35), we obtain a lower bound on the left-hand side of (43) in (44), where (a) follows from Lemma 5 and (b) follows from Lemma 3 in [10] and (8). By (44) and (36), (43) is guaranteed by (45). Combining (11) and (45), we can show that (45) holds; thus (41) holds. This means that the MOLS algorithm selects at least one correct atom in each iteration.

In the following, we show that MOLS does not stop early under the proposed stopping rule. When all the correct atoms have been selected, the residual becomes (46). If MOLS stopped at an earlier iteration at which not all the correct support indices had been identified, we would have (47), where (47) follows from (11) and (48). Therefore, the MOLS algorithm does not stop early. This completes the proof. □

6 References

1 Candès, E., Romberg, J., Tao, T.: 'Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information', IEEE Trans. Inf. Theory, 2006, 52, (2), pp. 489-509
2 Donoho, D.: 'Compressed sensing', IEEE Trans. Inf. Theory, 2006, 52, (4), pp. 1289-1306
3 Zhao, Y., Hu, Y., Liu, J.: 'Random triggering-based sub-Nyquist sampling system for sparse multiband signal', IEEE Trans. Instrum. Meas., 2017, 66, (7), pp. 1789-1797
4 Pati, Y., Rezaiifar, R., Krishnaprasad, P.: 'Orthogonal matching pursuit: recursive function approximation with applications to wavelet decomposition'. Conf. Record of the 27th Asilomar Conf. on Signals, Systems, and Computers, Pacific Grove, CA, November 1993, vol. 1, pp. 40-44
5 Wang, J., Kwon, S., Shim, B.: 'Generalized orthogonal matching pursuit', IEEE Trans. Signal Process., 2012, 60, (12), pp. 6202-6216
6 Park, D.: 'Improved sufficient condition for performance guarantee in generalized orthogonal matching pursuit', IEEE Signal Process. Lett., 2017, 24, (9), pp. 1308-1312
7 Wen, J., Zhou, Z., Li, D., et al.: 'A novel sufficient condition for generalized orthogonal matching pursuit', IEEE Commun. Lett., 2017, 21, (4), pp. 805-808
8 Kwon, S., Wang, J., Shim, B.: 'Multipath matching pursuit', IEEE Trans. Inf. Theory, 2014, 60, (5), pp. 2986-3001
9 Chen, S., Cowan, C., Grant, P.: 'Orthogonal least squares learning algorithm for radial basis function networks', IEEE Trans. Neural Netw., 1991, 2, (2), pp. 302-309
10 Wen, J., Wang, J., Zhang, Q.: 'Nearly optimal bounds for orthogonal least squares', IEEE Trans. Signal Process., 2017, 65, (20), pp. 5347-5356
11 Blumensath, T., Davies, M.E.: 'On the difference between orthogonal matching pursuit and orthogonal least squares'. Technical Report, University of Southampton, Southampton, UK, 2007. Available at http://www.personal.soton.ac.uk/tb1m08/papers/BDOMPvsOLS07.pdf, accessed 29 March 2007
12 Herzet, C., Soussen, C., Idier, J., et al.: 'Exact recovery conditions for sparse representations with partial support information', IEEE Trans. Inf. Theory, 2013, 59, (11), pp. 7509-7524
13 Candès, E., Tao, T.: 'Decoding by linear programming', IEEE Trans. Inf. Theory, 2005, 51, (12), pp. 4203-4215
14 Wang, J., Li, P.: 'Recovery of sparse signals using multiple orthogonal least squares', IEEE Trans. Signal Process., 2017, 65, (8), pp. 2049-2062
15 Wang, J., Shim, B.: 'On the recovery limit of sparse signals using orthogonal matching pursuit', IEEE Trans. Signal Process., 2012, 60, (9), pp. 4973-4976
16 Wen, J., Zhou, Z., Wang, J., et al.: 'A sharp condition for exact support recovery of sparse signals with orthogonal matching pursuit', IEEE Trans. Signal Process., 2017, 65, (6), pp. 1370-1382
17 Shen, Y., Li, B., Pan, W., et al.: 'Analysis of generalized orthogonal matching pursuit using restricted isometry constant', Electron. Lett., 2014, 50, (14), pp. 1020-1022