Diffusion LMS with component-wise variable step-size over sensor networks
Institution of Engineering and Technology; Volume 10, Issue 1, February 2016, pp. 37-45; Language: English
DOI: 10.1049/iet-spr.2015.0033
ISSN: 1751-9683
Authors: Wei Huang, Xi Yang, Duan-Yang Liu, Shengyong Chen
Topic(s): Image and Signal Denoising Methods
Wei Huang (corresponding author, huangwei@zjut.edu.cn), Xi Yang, Duanyang Liu and Shengyong Chen are with the College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023, People's Republic of China. First published: 01 February 2016.

Abstract

In this study, the authors propose a novel component-wise variable step-size (CVSS) diffusion distributed algorithm for estimating a specific parameter over sensor networks. The novelty of the CVSS algorithm is that the step-sizes differ across components at each iteration. The authors derive the steady-state values of the global mean-square deviation (MSD) and of the relative MSD (RMSD). In numerical simulations, the proposed CVSS algorithm is compared with several other least mean square (LMS) algorithms. Results show that, compared with these algorithms, the CVSS algorithm effectively reduces the steady-state value and speeds up the convergence rate of the RMSD without sacrificing the convergence rate of the MSD. Results also reveal that the proposed CVSS algorithm reduces the spread of steady-state relative estimation errors across components.

1 Introduction

Distributed estimation over sensor networks aims to estimate a parameter of interest in a noisy environment using data collected across the network [1-5].
Compared with centralised strategies for parameter estimation, distributed strategies save communication cost and enhance the robustness of the network while still achieving accurate estimation. Distributed estimation finds application in a wide range of fields, such as target localisation [6], environmental monitoring [7] and cognitive radio [8]. Currently, LMS-type incremental [4, 9] and diffusion [1, 5, 10-12] distributed algorithms are widely studied. In this paper, we focus on the diffusion algorithm, owing to its enhanced adaptation performance and wide applicability. In most previous works, the step-size was assumed to be spatially and temporally fixed, yet a fixed step-size (FSS) can hardly deliver satisfactory performance in the adaptation process when estimating a specified parameter. Saeed et al. [13] observed that a variable step-size (VSS) diffusion LMS algorithm delivers a higher convergence rate of the global mean-square deviation (MSD) than the diffusion LMS algorithm with FSS.

Assume the real parameter is w^o and w is an estimate of w^o. The ith components of w^o and w are denoted by w_i^o and w_i, respectively, and the absolute weight error on component i is |w_i − w_i^o|. If the order of magnitude of the first component is larger than that of the second, the first component is usually regarded as more accurately estimated than the second, even when the absolute weight errors on the two components are equal. It is therefore also meaningful to evaluate estimation accuracy from the perspective of the relative weight error, defined on component i as

r_i = |w_i − w_i^o| / |w_i^o|.    (1)

For instance, if w_1^o = 100 and w_2^o = 0.1, an absolute error of 0.05 on each component corresponds to relative errors of 5 × 10⁻⁴ and 0.5, respectively (a numerical illustration is also given at the end of this section). Previous studies, which assumed identical step-sizes on all components in the adaptation process, did not take the relative weight error into account. Therefore, in this paper we propose a novel diffusion LMS algorithm with component-wise VSS (CVSS) for distributed estimation over sensor networks. The novelty of the proposed algorithm is that the step-sizes on the components are not only time varying but also differ from each other. We investigate the global MSD and the relative MSD (RMSD), where the RMSD measures the relative estimation error between the estimate and the real parameter, and we provide a detailed analysis of the convergence, stability and steady state of the proposed CVSS algorithm. We compare the CVSS algorithm with several other LMS algorithms, including the FSS algorithm and several kinds of VSS algorithms with identical step-sizes on all components. When approximately the same MSD is achieved by tuning the relevant parameters of each algorithm, the superiority of the CVSS algorithm over the other algorithms under study is as follows:
(i) the CVSS algorithm achieves a lower RMSD;
(ii) the CVSS algorithm achieves a higher convergence rate of the RMSD;
(iii) the CVSS algorithm achieves a smaller spread of relative weight errors across components.

The rest of this paper is organised as follows. In Section 2 we describe the data model and the FSS algorithm, followed by the introduction of a kind of VSS algorithm in Section 3. In Section 4 we present the proposed CVSS algorithm, followed by the mean stability, mean-square convergence and steady-state analysis in Section 5. In Section 6 we present numerical simulations comparing the CVSS algorithm with the other algorithms, followed by a comparison of the computational complexities of all algorithms under study in Section 7. Finally, conclusions are drawn in Section 8.
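To make the distinction between absolute and relative weight error concrete, below is a minimal numerical illustration of definition (1). The parameter values are hypothetical, chosen only so that the components span several orders of magnitude.

```python
import numpy as np

# Hypothetical parameter whose components differ by orders of magnitude.
w_o = np.array([100.0, 1.0, 0.01])
w_hat = w_o + 0.005                 # estimate with the same absolute error everywhere

abs_err = np.abs(w_hat - w_o)                # equal on all components: 0.005
rel_err = np.abs(w_hat - w_o) / np.abs(w_o)  # definition (1), component-wise

print(abs_err)   # [0.005 0.005 0.005]
print(rel_err)   # [5.e-05 5.e-03 5.e-01]
```

Although the absolute error is identical on every component, the relative error on the smallest component is four orders of magnitude larger than on the largest one, which is precisely the effect the CVSS algorithm targets.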
Notation: We use boldface letters to denote random quantities and normal letters to denote deterministic quantities. The absolute value of a scalar is denoted by |·|, the notation (·)^T stands for transposition of matrices/vectors and E[·] stands for expectation. The notation N_k denotes the neighbourhood of node k, that is, the nodes having a direct link with node k, including node k itself. Other notation will be defined later where necessary.

2 Problem formulation and preliminaries

2.1 Problem formulation and the FSS algorithm

We consider a connected sensor network with N nodes in a geographical region. Each node k (k = 1, …, N) has access to time realisations {d_k(i), u_{k,i}} of some zero-mean random process, where d_k(i) is a scalar measurement and u_{k,i} is a 1 × M regression vector, both at iteration i; M denotes the length of the regression vector. Assuming the process u_{k,i} is spatially and temporally independent, its covariance is given by the positive definite matrix R_{u,k} = E[u_{k,i}^T u_{k,i}]. The problem is to estimate an M × 1 weight vector w^o from the noisy time realisations {d_k(i), u_{k,i}}, which are related through the linear regression model

d_k(i) = u_{k,i} w^o + v_k(i),    (2)

where v_k(i) is additive Gaussian noise. The linear regression model (2) is widely studied in the adaptive filtering literature since it captures many cases of interest [14]. On the basis of the mean-square error (MSE) criterion, and assuming that each node k has access only to the data from its neighbours {l ∈ N_k}, an estimate of w^o is sought by minimising the local cost function

J_k(w) = Σ_{l∈N_k} c_{lk} E|e_{l,i}|²,    e_{l,i} = d_l(i) − u_{l,i} w,    (3)

where e_{l,i} denotes the estimation error, w is an estimate of w^o and the c_{lk} are non-negative combination coefficients measuring the importance of the data from node l to node k, satisfying Σ_{l∈N_k} c_{lk} = 1 and c_{lk} = 0 if l ∉ N_k. The solution of (3) has been reported in the literature [1, 5]. Two kinds of schemes have been designed: the adapt-then-combine (ATC) scheme, implemented as

φ_{k,i} = w_{k,i−1} + μ u_{k,i}^T [d_k(i) − u_{k,i} w_{k,i−1}],
w_{k,i} = Σ_{l∈N_k} c_{lk} φ_{l,i},    (4)

and the combine-then-adapt (CTA) scheme, implemented as

φ_{k,i−1} = Σ_{l∈N_k} c_{lk} w_{l,i−1},
w_{k,i} = φ_{k,i−1} + μ u_{k,i}^T [d_k(i) − u_{k,i} φ_{k,i−1}].    (5)

In both schemes, μ is the fixed step-size (FSS) and φ_{k,i} is an intermediate estimate of w^o at node k and iteration i. In this paper we take the ATC scheme as the example; a similar analysis extends to the CTA scheme.

3 VSS algorithm in [13]

We now introduce the VSS algorithm proposed in [13], which prepares the ground for the CVSS algorithm of the next section. Its mathematical implementation is

φ_{k,i} = w_{k,i−1} + μ_{k,i} u_{k,i}^T e_k(i),
w_{k,i} = Σ_{l∈N_k} c_{lk} φ_{l,i},
μ_{k,i+1} = f(μ_{k,i}),    (6)

where e_k(i) = d_k(i) − u_{k,i} w_{k,i−1} denotes the estimation error and f(μ_{k,i}) is the step-size update function, defined using the step-size adaptation of [15]:

f(μ_{k,i}) = α μ_{k,i} + γ e_k²(i),    (7)

where 0 < α < 1 and γ > 0; usually α is less than, but close to, 1. The merit of this update rule is that a large estimation error increases the step-size to provide fast tracking, whereas a small estimation error decreases the step-size to provide more accurate estimation. A code sketch of one ATC iteration with this rule is given below.
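As a concrete illustration, here is a minimal sketch (not the authors' reference implementation) of one ATC diffusion-LMS iteration with the scalar VSS rule (6)-(7); the default values of α, γ, μ_min and μ_max are illustrative assumptions.

```python
import numpy as np

def atc_vss_step(w, d, U, C, mu, alpha=0.995, gamma=1e-3,
                 mu_min=1e-3, mu_max=0.5):
    """One ATC diffusion-LMS iteration with the scalar VSS rule (6)-(7).

    w  : (N, M) current estimates w_{k,i-1}, one row per node
    d  : (N,)   measurements d_k(i)
    U  : (N, M) regression rows u_{k,i}
    C  : (N, N) combination matrix with entries c_{lk}, columns summing to one
    mu : (N,)   per-node step-sizes mu_{k,i}
    """
    e = d - np.sum(U * w, axis=1)            # e_k(i) = d_k(i) - u_{k,i} w_{k,i-1}
    phi = w + mu[:, None] * U * e[:, None]   # adaptation step of (6)
    w_new = C.T @ phi                        # combination: w_{k,i} = sum_l c_{lk} phi_{l,i}
    mu_new = np.clip(alpha * mu + gamma * e**2, mu_min, mu_max)  # rule (7), clipped as in [15]
    return w_new, mu_new
```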
4 CVSS algorithm

Different from this VSS algorithm, in our proposed CVSS algorithm the step-size adaptation varies from component to component. The mathematical implementation of the CVSS algorithm is as follows:

φ_{k,i} = w_{k,i−1} + Z_{k,i} u_{k,i}^T e_k(i),
w_{k,i} = Σ_{l∈N_k} c_{lk} φ_{l,i},    (8)

where Z_{k,i} is a diagonal M × M matrix defined as

Z_{k,i} = diag{μ_{k,i,1}, μ_{k,i,2}, …, μ_{k,i,M}}.    (9)

Here, μ_{k,i,m} denotes the step-size on component m at node k and iteration i. The novelty of the CVSS algorithm lies in the step-size update function, implemented as

Z_{k,i+1} = α Z_{k,i} + γ e_k²(i) H_{k,i}²,    (10)

where H_{k,i} is a diagonal matrix whose diagonal elements are the elements of the current estimate w_{k,i} and whose other elements are zero. In the CVSS algorithm described by (8) and (10), the step-sizes on different components evolve differently because of the different component values of the estimate; the error e_k(i) affects the evolution of all components indistinguishably. For an estimate at any iteration, as (10) indicates, the step-sizes on large-value components of the real parameter are larger than those on small-value components, providing fast tracking of the real parameter; on the small-value components, small step-sizes are assigned to achieve higher estimation accuracy. Thus, the proposed CVSS algorithm can simultaneously improve estimation accuracy and enhance the convergence rate. As in [15], additional control of the step-size on each component is performed to provide a minimal level of tracking ability and to ensure a bounded estimation error:

μ_{k,i+1,m} = μ_max if the update (10) exceeds μ_max; μ_min if it falls below μ_min; otherwise the value given by (10),    (11)

where 0 < μ_min < μ_max. To show the non-trivial role of the non-identical component step-sizes in the CVSS algorithm, a slight modification is made to the VSS algorithm in [13]. In (10), each diagonal element of H_{k,i}² depends on the value of the corresponding component of the instantaneous estimate. Here we reallocate the diagonal elements of H_{k,i}²: after reallocation, all diagonal elements are identical, each equal to the average of the diagonal elements of H_{k,i}². Formally, the modified version reads

Z_{k,i+1} = α Z_{k,i} + γ (‖w_{k,i}‖²/M) e_k²(i) I_M.    (12)

Up to a change of the parameter γ, this modified version is effectively equivalent to the algorithm proposed in [13]. For a fair comparison, in the remainder of this paper we compare the proposed CVSS algorithm with this modified version rather than with the algorithm of [13] itself. A code sketch of one CVSS iteration is given below.
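The sketch below mirrors one CVSS iteration per (8)-(11). Whether the squared error and the estimate entering H_{k,i} are taken before or after the combination step is an implementation detail not fixed by the text; this sketch uses the pre-combination error e_k(i) and the post-combination estimate w_{k,i}, and the default parameter values are illustrative assumptions.

```python
import numpy as np

def cvss_step(w, d, U, C, Z, alpha=0.995, gamma=1e-3,
              mu_min=1e-3, mu_max=0.5):
    """One ATC iteration of the CVSS recursion (8)-(11).

    Z : (N, M) rows hold the diagonals of the step-size matrices Z_{k,i}.
    Other arguments are as in atc_vss_step above.
    """
    e = d - np.sum(U * w, axis=1)       # e_k(i) = d_k(i) - u_{k,i} w_{k,i-1}
    phi = w + Z * U * e[:, None]        # component-wise adaptation, (8)-(9)
    w_new = C.T @ phi                   # combination step
    # (10): the step-size on component m grows with w_{k,i,m}^2, so large-value
    # components get large step-sizes; (11): clip to [mu_min, mu_max].
    Z_new = np.clip(alpha * Z + gamma * (e**2)[:, None] * w_new**2,
                    mu_min, mu_max)
    return w_new, Z_new
```

Replacing w_new**2 by its row-wise mean, np.mean(w_new**2, axis=1, keepdims=True), turns this sketch into the modified VSS rule (12).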
5 Performance analysis

To study the performance of the CVSS algorithm over the whole network, we define the global variables

W_i = col{w_{1,i}, …, w_{N,i}}, U_i = diag{u_{1,i}, …, u_{N,i}},
d_i = col{d_1(i), …, d_N(i)}, v_i = col{v_1(i), …, v_N(i)}.    (13)

In (13), the operation col{·} stacks the specified column vectors. Let C be the combination coefficient matrix with entries {C}_{lk} = c_{lk}, and define G = C ⊗ I_M, where ⊗ denotes the Kronecker product and I_M is the M × M identity matrix. Then the global linear regression model can be written as

d_i = U_i W^{(o)} + v_i,    (14)

where W^{(o)} = col{w^o, …, w^o}. Defining Z_i = diag{Z_{1,i}, …, Z_{N,i}}, the global ATC scheme can be written as

W_i = G [W_{i−1} + Z_i U_i^T (d_i − U_i W_{i−1})],
Z_{i+1} = α Z_i + γ diag{e_1²(i) H_{1,i}², …, e_N²(i) H_{N,i}²}.    (15)

From (14), we carry out the mean stability and mean-square convergence analysis. For ease of mathematical tractability, we make the following assumptions.

A1: The white noises v_k(i) are spatially and temporally independent of each other as well as of the regression vectors. Moreover, we have

E[v_i v_i^T] = diag{σ²_{v,1}, …, σ²_{v,N}}.    (16)

A2: For the first equation of (15), we assume that the step-size matrix is statistically independent of the regression data, so that

E[Z_i U_i^T U_i] ≈ E[Z_i] E[U_i^T U_i].    (17)

In the right-hand side of the second equation of (15), if the second term is far smaller than the first, each step-size varies slowly across iterations, and therefore (17) approximately holds. This assumption is widely adopted in the adaptive signal processing literature [16-18].

5.1 Mean stability analysis

The objective of the mean stability analysis is to find a sufficient condition for the estimate to converge to the real parameter in the mean sense. To begin with, define the global weight error

W̃_i = W^{(o)} − W_i.    (18)

On the basis of the relation G W^{(o)} = W^{(o)}, from (14) and (15) we have

W̃_i = G (I − Z_i U_i^T U_i) W̃_{i−1} − G Z_i U_i^T v_i.    (19)

On the basis of assumptions A1 and A2, taking expectations on both sides of (19) gives

E[W̃_i] = G B E[W̃_{i−1}], with B = I − E[Z_i] R_U,    (20)

where R_U = E[U_i^T U_i] is the covariance matrix of U_i. Following (20), the convergence of E[W̃_i] requires λ_max(GB) < 1, where λ_max(·) denotes the spectral radius of a matrix. Using the matrix 2-norm, we have

λ_max(GB) ≤ ‖GB‖ ≤ ‖G‖ ‖B‖ ≤ ‖B‖.    (21)

The last inequality holds because C is the combination matrix with the sum of the elements in each column equal to one; therefore ‖C‖ ≤ 1 and hence ‖G‖ = ‖C ⊗ I_M‖ ≤ 1. To ensure stability in the mean sense, the following condition has to be satisfied:

‖I − E[Z_i] R_U‖ < 1,    (22)

which further requires that

0 < E[μ_{k,i,m}] < 2/λ_max(R_{u,k}) for all k, m and i.    (23)

This condition, ensuring the mean stability of the CVSS algorithm, generalises those of the FSS algorithm and of the VSS algorithm proposed in [13]: the expectation of each step-size component must be confined to a specific range.

5.2 Mean-square analysis

In this section, we perform the mean-square analysis. To compare the relative estimation error of the algorithms under study, we introduce the RMSD,

RMSD_i = (1/N) E‖D W̃_i‖²,    (24)

where D is the diagonal matrix whose diagonal is W^{(−o)} = col{w^{−o}, …, w^{−o}}, and w^{−o} is the column vector of the same length as w^o whose elements are the reciprocals of the corresponding elements of w^o. In addition to the RMSD, we also compare the global MSD, defined as

MSD_i = (1/N) E‖W̃_i‖².    (25)

We now apply a Gaussian transformation to the relevant quantities, rotating the weight error and the regressors into the eigenbasis of R_U, which is decomposed as R_U = T Λ T*. Here Λ = diag{Λ_1, …, Λ_N}, where each Λ_k > 0 is diagonal. Defining the vectorised weighting parameter σ = vec{Σ}, after a tedious transformation we obtain a weighted variance relation of the form (26), in which the weighting vector evolves according to (27). Here, A ⊙_b B denotes the block Kronecker product of two block matrices A and B, whose (k, l)-block is defined in (28) for k, l = 1, …, N. In (27), A = diag{A_1, A_2, …, A_N}, where A_l is given by (29) and λ_l = vec{Λ_l}. By specifying the weighting matrix as the identity, the global MSD is obtained; by specifying the weighting as in (30), the transient behaviour of the RMSD follows from (26). In (30), the weighting involves the diagonal matrix whose diagonal elements are the elements of W^{(−o)}, that is, the matrix D defined above.

5.3 Steady-state analysis

In this section, we examine the steady-state values of the mean-square quantities with the aid of the steady-state misadjustment. Since the covariance matrix R_u is symmetric, there must exist matrices Q and Λ, where Λ = diag{λ_1, …, λ_M} is the diagonal matrix of the eigenvalues of R_u and Q Q^T = Q^T Q = I, such that R_u = Q Λ Q^T. For any node, we define the transformed quantities in the eigenbasis of R_u; we do not use the subscript k for any variable here because the evaluation is performed on a single-node basis. Then, as in [15], the MSE is

ξ(i) = E[e²(i)] = σ_v² + ξ_ex(i),    (31)

where ξ_ex(i) denotes the excess mean-square error (EMSE). The steady-state misadjustment is then defined as

M = ξ_ex(∞)/σ_v².    (32)

Let G_i be the column vector whose entries are the diagonal elements of the transformed weight-error covariance matrix weighted by Λ, and let 1 be the all-ones column vector with the same length as G_i. Then, following [15], the steady state of G_i is given by (33), with the auxiliary quantities defined in (34). Considering ξ_ex(i) = 1^T G_i, the steady-state EMSE is given by (35). For any node, taking expectations of (10) yields the step-size moment recursion (36). Recalling assumption A2, as the iteration i goes to infinity we approximately have E[e²(i)] → ξ(∞), E[w_i] → w^o and hence H_i → Ψ, where Ψ is the diagonal matrix whose diagonal elements are the elements of w^o.
Then, the steady-state values of the step-size and of its square are given by (37) and (38), respectively; on component p (1 ≤ p ≤ M) they can be represented by (39) and (40), where w_{o,p} denotes the pth component of w^o. Now let us rewrite (35) as (41), with the auxiliary quantity defined in (42), and denote by y the resulting unknown. Then the steady-state value of the misadjustment is given by (43). As in [15], we can derive the approximate value of y valid for rather small misadjustment, shown in (44), and finally the steady-state value of the misadjustment, also valid for rather small misadjustment, shown in (45). Thus, from (39) and (40), the steady-state values of the step-size and of its square can be expressed as (46) and (47). Substituting the steady states of the step-size and of its square, that is, (46) and (47), into the weighting matrix F_i and the vector b_i, we obtain their steady-state values, shown in (48) and (49), respectively, where Θ_o and its square denote the global steady-state values of the step-size and of its square, defined in (50). It is then straightforward to rewrite (26) in its steady-state version (51), and the steady state of the mean-square behaviour is given by (52). Specifying the weighting as the identity or as in (30), we obtain the steady-state values of the MSD and of the RMSD, respectively.

6 Numerical simulations

In this section, we present the results of numerical simulations of the proposed CVSS algorithm (8) and compare its performance with the theoretical results of Section 5. In addition, we compare the CVSS algorithm with the FSS algorithm, the VSS algorithm (12), the VSS algorithm proposed in [19] and the VSS normalised LMS algorithm with the MSD-based combination method proposed in [20]. Except for the CVSS algorithm and the FSS algorithm, all other algorithms assign an identical time-varying step-size for adapting all components of the real parameter. Henceforth, algorithm (12) and the algorithms in [19, 20] are collectively called VSS algorithms in this paper.

We consider a network with N = 20 nodes evenly located on a circle (see Fig. 1), each node being connected with its nearest neighbour on each side. The combination coefficients are set according to the Metropolis rule [21], that is,

c_{lk} = 1/max{n_k, n_l} if l ∈ N_k, l ≠ k;  c_{kk} = 1 − Σ_{l∈N_k, l≠k} c_{lk};  c_{lk} = 0 otherwise,    (53)

where n_k and n_l are the degrees of nodes k and l, respectively (a code sketch of this construction is given after Fig. 2). The input regression samples u_{k,i} are Gaussian random sequences with zero mean and unit variance. The noise sequences v_k(i) also follow a Gaussian distribution with zero mean and variance σ²_{v,k}.

Fig. 1: Network topology.

To highlight the efficiency of the CVSS algorithm in reducing the relative estimation error, the magnitudes of the components of the real parameter are chosen to differ from each other; the value of w^o used in the simulations is given in (54). Each result is averaged over 100 independent experiments.

In Fig. 2, we report the transient behaviours of the MSD and RMSD for the CVSS algorithm, showing both numerical simulations and theoretical analysis. As can be observed from the figure, the analytical results match the numerical simulations well.

Fig. 2: Global MSD (a) and RMSD (b) for the CVSS algorithm. Solid and dotted lines are results from numerical simulations and theoretical analysis, respectively.
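Since (53) did not survive extraction intact, the sketch below implements the classical Metropolis rule c_{lk} = 1/max{n_k, n_l} for the ring topology of Fig. 1, assuming the degree convention of this paper, in which N_k includes node k itself (so n_k = |N_k| = 3 for every node).

```python
import numpy as np

def ring_metropolis(N=20):
    """Combination matrix for a ring of N nodes via the Metropolis rule (53)."""
    n = np.full(N, 3)                        # degrees: two ring neighbours + the node itself
    C = np.zeros((N, N))
    for k in range(N):
        for l in ((k - 1) % N, (k + 1) % N): # neighbours l of node k, l != k
            C[l, k] = 1.0 / max(n[k], n[l])  # c_{lk} = 1 / max{n_k, n_l}
        C[k, k] = 1.0 - C[:, k].sum()        # c_{kk} absorbs the remaining mass
    return C

C = ring_metropolis()
assert np.allclose(C.sum(axis=0), 1.0)       # each column sums to one, as required by (3)
```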
To facilitate the comparison of the global RMSD across all algorithms under study, we test two sets of parameter values for each algorithm. Each set of parameter values is carefully adjusted so that all algorithms achieve much the same steady-state value of the global MSD. Fig. 3 illustrates the evolution of the global RMSD for all algorithms under study. For the FSS algorithm and all VSS algorithms under study, the steady-state values of the RMSD can hardly be distinguished from each other. This result indicates that a time-varying step-size with identical component values cannot further reduce the steady-state RMSD compared with the FSS algorithm. Furthermore, Fig. 3 also shows that the CVSS algorithm achieves the smallest steady-state value and the fastest convergence rate of the RMSD among all competing algorithms, which indicates that component-wise step-size adaptation can effectively reduce the relative estimation error.

Fig. 3: Global RMSD for all algorithms under study, with the two sets of parameter values shown in panels (a) and (b). Unless otherwise specified, the first set of parameter values for each algorithm is given by the values in the legend of panel (a), and the second set by the values in the legend of panel (b). For the CVSS algorithm and the VSS algorithm (12), the initial step-size is set to 0.1, and the lower bound μ_min and upper bound μ_max of the step-size are set to 0.001 and 0.5, respectively. For the algorithm in [19], the parameter α denotes the forgetting factor and the regularisation weight is set to δ = 5 × 10⁻³ for both panels.

Fig. 4 plots the evolution of the global relative weight error on each component, characterised by

r_m(i) = (1/N) Σ_{k=1}^{N} |w_{k,i,m} − w^o_m| / |w^o_m|,    (55)

where w_{k,i,m} is the mth component of w_{k,i} and w^o_m is the mth component of w^o. Except for the CVSS algorithm, the steady-state values of the global relative weight error on any component are close to each other for all algorithms under study. Comparatively speaking, on any component the global relative weight error of the FSS algorithm and of the algorithm in [20] decays more slowly than that of the other two algorithms. It is interesting to observe that the CVSS algorithm leads to a distinctly different steady-state value of the relative weight error on each component: on the first and second components, the CVSS algorithm yields a smaller global relative weight error, whereas on the third component it yields a larger steady-state value of the global relative weight error than the other four algorithms.

Fig. 4: Global relative weight error on each component for the two sets of parameter values. In both panels, the results of the various algorithms are marked with different labels (asterisk: FSS; circle: CVSS; square: VSS (12); rhombus: VSS in [19]; right-pointing triangle: VSS in [20]). (a) First set of parameter values; (b) second set of parameter values.

Fig. 5 illustrates the evolution over the iterations of the average step-size μ̄_i for the three kinds of VSS algorithms, and of the average step-size μ̄_{i,m} (m = 1, 2, 3) on component m for the CVSS algorithm. Here, μ̄_i is the average over all nodes at iteration i,

μ̄_i = (1/N) Σ_{k=1}^{N} μ_{k,i},    (56)

and μ̄_{i,m} is the average of component m over all nodes at iteration i,

μ̄_{i,m} = (1/N) Σ_{k=1}^{N} μ_{k,i,m}.    (57)
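A small sketch of the simulation metrics follows, assuming that (55) is the plain network average of the per-node relative errors and that (57) averages each step-size component over the nodes; the published figures may plot squared or dB-scaled versions of these quantities.

```python
import numpy as np

def global_relative_error(W, w_o):
    """Global relative weight error on each component, as in (55).

    W   : (N, M) node estimates w_{k,i} at one iteration
    w_o : (M,)   real parameter
    Returns the (M,) vector of errors averaged over the N nodes.
    """
    return np.mean(np.abs(W - w_o) / np.abs(w_o), axis=0)

def average_stepsizes(Z):
    """Network-average step-size on each component, as in (57).

    Z : (N, M) rows hold the diagonals of Z_{k,i}; returns the M averages.
    """
    return Z.mean(axis=0)
```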
Fig. 5: Evolution of the average step-size for the CVSS algorithm and the three kinds of VSS algorithms under study. For the CVSS algorithm, each curve represents the evolution of one component of the step-size; for each kind of VSS algorithm, only one curve is plotted, because the step-sizes on all components are identical. As in Fig. 4, panels (a) and (b) are for the first and second sets of parameter values, respectively.

Fig. 5 shows that, for the CVSS algorithm, the evolution of each step-size component depends on the magnitude of the corresponding component of w^o: a large value on some component of w^o leads to a large step-size on that component. This result is reasonable and can be explained as follows. For a small-magnitude component, the step-size should be small to achieve a fine estimate of the corresponding component of w^o; a large step-size imposed on a small-magnitude component leads to coarse adjustment of the estimate on that component, so an accurate estimate can hardly be achieved. Conversely, a small step-size imposed on a large-magnitude component slows down the rate at which the estimate approaches the real value on that component. Therefore, by assigning different step-sizes to different components, the proposed CVSS algorithm provides a good trade-off between accurate estimation of the real parameter and a high convergence rate of the RMSD. For the VSS algorithm (12), the same initial step-size as for the CVSS algorithm is assigned manually; compared with the step-size evolution of the CVSS algorithm, the step-size of the VSS algorithm (12) takes intermediate values between the largest and the smallest step-size components of the CVSS algorithm. For the VSS algorithms in [19, 20], the initial step-sizes are controlled by parameters of the corresponding algorithms, and the step-size evolution of these two algorithms behaves quite differently from that of the proposed CVSS algorithm.

In Fig. 6, we report the global MSD for all algorithms under study. As stated above, both sets of parameter values are chosen to achieve approximately the same level of global MSD. The results show that the CVSS algorithm, together with the VSS algorithm (12) and the VSS algorithm in [19], delivers approximately the same convergence rate of the MSD, which is evidently higher than those of the FSS algorithm and of the VSS algorithm in [20]. We can therefore conclude that, while not sacrificing the convergence rate of the global MSD, the CVSS algorithm shows an evident superiority over all other algorithms under study in achieving a low relative estimation error.

Fig. 6: Global MSD for all algorithms under study for the two sets of parameter values, shown in panels (a) and (b), respectively.

Finally, we also compare the CVSS algorithm with the optimal step-size assignment strategy for the incremental LMS algorithm proposed in [22]. With the same steady-state MSD achieved, a higher convergence rate of the MSD is observed for the CVSS algorithm in the inset of Fig. 7a. Moreover, Fig. 7a also reveals that the CVSS algorithm delivers a lower steady-state level and a higher convergence rate of the RMSD than the strategy in [22]. From Fig. 7b, the superiority of the CVSS algorithm over the optimal step-size assignment strategy in reducing the spread of the relative weight errors across components is observed. In all, the CVSS algorithm delivers better estimation performance than the optimal step-size assignment strategy, both in achieving a lower RMSD and in achieving a higher convergence rate of the MSD.
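To recap the simulation pipeline, here is a minimal end-to-end driver chaining the earlier sketches (ring_metropolis, cvss_step and global_relative_error). The real parameter and the noise levels are hypothetical placeholders, since (54) and the noise-variance interval did not survive extraction.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, iters = 20, 3, 2000
w_o = np.array([10.0, 1.0, 0.1])            # hypothetical w^o with spread magnitudes
sigma_v = rng.uniform(0.05, 0.15, size=N)   # hypothetical per-node noise std

C = ring_metropolis(N)                      # Metropolis weights for the ring network
w = np.zeros((N, M))                        # initial estimates
Z = np.full((N, M), 0.1)                    # initial step-size 0.1, as in the Fig. 3 caption

for i in range(iters):
    U = rng.standard_normal((N, M))                 # zero-mean, unit-variance regressors
    d = U @ w_o + sigma_v * rng.standard_normal(N)  # linear model (2)
    w, Z = cvss_step(w, d, U, C, Z)                 # one CVSS iteration (8)-(11)

print(global_relative_error(w, w_o))        # per-component relative error, cf. Fig. 4
```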
Fig. 7: Comparison of the CVSS algorithm (marked with circles) and the optimal step-size assignment strategy for the incremental algorithm (marked with asterisks). Panels (a) and (b) show the global RMSD and the global relative weight error on all components, respectively; the inset of panel (a) shows the global MSD. Here, the noise variances of all nodes are uniformly distributed over a fixed interval. For a fair comparison, approximately the same steady-state values of the global MSD are achieved by adjusting the relevant parameter in each algorithm, as shown in the inset of panel (a). For the optimal step-size assignment strategy, the average step-size over all nodes is 0.02; for the CVSS algorithm, the parameter γ is set to 0.015.

7 Comparison of computational complexity

We now compare the computational complexities of all algorithms under study. Compared with the FSS algorithm, the extra computational cost consumed by the VSS algorithm (12) is dominated by the step-size update at all nodes, whose total cost is O(MN). For the step-size update in the CVSS algorithm [shown in (10)], the term (d_k(i) − u_{k,i} w_{k,i})² is shared by all components, but the diagonal elements of the matrix H_{k,i}² differ from each other; therefore, on top of algorithm (12), the CVSS algorithm consumes O(MN) more computation for updating the step-sizes of all nodes at each iteration. For the algorithms in [19, 20], the computational cost of updating the step-sizes of all nodes at each iteration is of the same order as that of the VSS algorithm (12), that is, O(MN); however, the algorithms in [19, 20] are more complicated in practice because some parameters need to be estimated at each iteration to obtain the updated step-size.

8 Conclusions

In this paper, we have proposed a novel CVSS LMS algorithm in the context of diffusion distributed estimation over sensor networks, aimed at reducing the relative estimation error. Different from most previous work, in the CVSS algorithm the components have non-identical step-sizes when estimating the real parameter. The theoretical analysis was found to agree well with the numerical simulations. We compared the CVSS algorithm with a few other algorithms; the significant feature of the proposed CVSS algorithm is its distinct advantage in reducing the steady-state value and enhancing the convergence rate of the relative MSD (RMSD), which measures the relative estimation error between the estimate and the real parameter. Our future work will focus on designing effective VSS diffusion distributed algorithms for more complicated scenarios, such as non-identical environment noise over the network.

9 Acknowledgments

This work was supported by the National Natural Science Foundation of China (61201074, 61303142, 61374152, 61325019 and 61173096), the Hong Kong, Macao and Taiwan Science & Technology Cooperation Program of China (2014DFH10110) and the Zhejiang Provincial Natural Science Foundation of China (LY14F020018 and LQ14F030003).

10 References

1 Cattivelli, F.S., Sayed, A.H.: 'Diffusion LMS strategies for distributed estimation', IEEE Trans. Signal Process., 2010, 58, (3), pp. 1035-1048 (doi: 10.1109/TSP.2009.2033729)
2 Dimakis, A.G., Kar, S., Moura, J.M.F., et al.: 'Gossip algorithms for distributed signal processing', Proc. IEEE, 2010, 98, (11), pp. 1847-1864 (doi: 10.1109/JPROC.2010.2052531)
3 Li, C., Shen, P., Liu, Y., et al.: 'Diffusion information theoretic learning for distributed estimation over network signal processing', IEEE Trans. Signal Process., 2013, 61, (16), pp. 4011-4024 (doi: 10.1109/TSP.2013.2265221)
4 Lopes, C.G., Sayed, A.H.: 'Incremental adaptive strategies over distributed networks', IEEE Trans. Signal Process., 2007, 55, (8), pp. 4064-4077 (doi: 10.1109/TSP.2007.896034)
5 Lopes, C.G., Sayed, A.H.: 'Diffusion least-mean squares over adaptive networks: formulation and performance analysis', IEEE Trans. Signal Process., 2008, 56, (7), pp. 3122-3136 (doi: 10.1109/TSP.2008.917383)
6 Tu, S.-Y., Sayed, A.H.: 'Mobile adaptive networks', IEEE J. Sel. Top. Signal Process., 2011, 5, (4), pp. 649-664 (doi: 10.1109/JSTSP.2011.2125943)
7 Cao, X., Chen, J., Xiao, Y., et al.: 'Building-environment control with wireless sensor and actuator networks: centralized versus distributed', IEEE Trans. Ind. Electron., 2010, 57, (11), pp. 3596-3605 (doi: 10.1109/TIE.2009.2029585)
8 Mitola, J., III, Maguire, G.Q., Jr.: 'Cognitive radio: making software radios more personal', IEEE Pers. Commun., 1999, 6, (4), pp. 13-18 (doi: 10.1109/98.788210)
9 Khalili, A., Rastegarnia, A., Bazzi, W.M., et al.: 'Derivation and analysis of incremental augmented complex least mean square algorithm', IET Signal Process., 2015, 9, pp. 312-319 (doi: 10.1049/iet-spr.2014.0188)
10 Sayed, A.H.: 'Diffusion adaptation over networks', arXiv:1205.4220 [cs.MA], May 2013
11 Ghazanfari-Rad, S., Labeau, F.: 'Diffusion least-mean squares over distributed networks in the presence of MAC errors'. Asilomar Conf. Signals, Systems, and Computers, November 2012, pp. 1787-1791
12 Rastegarnia, A., Bazzi, W.M., Khalili, A., et al.: 'Diffusion adaptive networks with imperfect communications: link failure and channel noise', IET Signal Process., 2014, 8, (1), pp. 59-66 (doi: 10.1049/iet-spr.2012.0281)
13 Saeed, M.O.B., Zerguine, A., Zummo, S.A.: 'A variable step-size strategy for distributed estimation over adaptive networks', EURASIP J. Adv. Signal Process., 2013, 135, (15), pp. 1-14
14 Sayed, A.H.: 'Fundamentals of adaptive filtering' (Wiley, New York, 2003)
15 Kwong, R.H., Johnston, E.W.: 'A variable step size LMS algorithm', IEEE Trans. Signal Process., 1992, 40, (7), pp. 1633-1642 (doi: 10.1109/78.143435)
16 Feuer, A., Weinstein, E.: 'Convergence analysis of LMS filters with uncorrelated Gaussian data', IEEE Trans. Acoust. Speech Signal Process., 1985, ASSP-33, pp. 222-230 (doi: 10.1109/TASSP.1985.1164493)
17 Widrow, B., McCool, J.M., Larimore, M.G., Jr., et al.: 'Stationary and nonstationary learning characteristics of the LMS adaptive filter', Proc. IEEE, 1976, 64, pp. 1151-1162 (doi: 10.1109/PROC.1976.10286)
18 Horowitz, L.L., Senne, K.D.: 'Performance advantage of complex LMS for controlling narrow-band adaptive arrays', IEEE Trans. Acoust. Speech Signal Process., 1981, ASSP-29, pp. 722-736 (doi: 10.1109/TASSP.1981.1163602)
19 Lee, H.S., Kim, S.E., Lee, J.W., et al.: 'A variable step-size diffusion LMS algorithm for distributed estimation', IEEE Trans. Signal Process., 2015, 63, (7), pp. 1808-1820 (doi: 10.1109/TSP.2015.2401533)
20 Jung, S.M., Seo, J.H., Park, P.G.: 'A variable step-size diffusion normalized least-mean-square algorithm with a combination method based on mean-square deviation', Circuits Syst. Signal Process., 2015, 34, (10), pp. 3291-3304 (doi: 10.1007/s00034-015-0005-9)
21 Xiao, L., Boyd, S.: 'Fast linear iterations for distributed averaging', Syst. Control Lett., 2004, 53, (1), pp. 65-78 (doi: 10.1016/j.sysconle.2004.02.022)
22 Khalili, A., Rastegarnia, A., Chambers, J.A., et al.: 'An optimum step-size assignment for incremental LMS adaptive networks based on average convergence rate constraint', AEU Int. J. Electron. Commun., 2013, 67, (3), pp. 263-268 (doi: 10.1016/j.aeue.2012.08.010)