Peer-reviewed article (national production)

Low‐complexity robust adaptive beamforming algorithms exploiting shrinkage for mismatch estimation

2016; Institution of Engineering and Technology; Volume: 10; Issue: 5; Language: English

10.1049/iet-spr.2014.0331

ISSN

1751-9683

Authors

Hang Ruan (corresponding author, hr648@york.ac.uk; Department of Electronics, University of York, Heslington, York, YO10 5DD, UK), Rodrigo C. de Lamare (Department of Electronics, University of York, Heslington, York, YO10 5DD, UK; Centre for Telecommunications Studies (CETUC), Pontifical Catholic University of Rio de Janeiro, Rua Marquês de São Vicente, 225, Gávea, Rio de Janeiro, 22451-900, Brazil)

Topic(s)

Advanced Adaptive Filtering Techniques

Abstract

This study proposes low-complexity robust adaptive beamforming (RAB) techniques based on shrinkage methods. The authors first review a low-complexity shrinkage-based mismatch estimation batch algorithm that estimates the desired signal steering vector mismatch and in which the interference-plus-noise covariance matrix is also estimated by a recursive matrix shrinkage method. They then develop low-complexity adaptive recursive versions of stochastic gradient and conjugate gradient type to update the beamforming weights, resulting in low-cost robust adaptive algorithms. An analysis of the effect of shrinkage on the estimation procedure is developed, along with a computational complexity study of the proposed and existing algorithms. Simulations are conducted in local scattering scenarios and comparisons with existing RAB techniques are provided.

1 Introduction

Sensor array signal processing techniques and their applications to wireless communications, sensor networks and radar have been widely investigated in recent years. Adaptive beamforming is one of the most important topics in sensor array signal processing and has applications in many fields. However, adaptive beamformers may suffer performance degradation due to a small sample size or the presence of the desired signal in the training data. In practical environments, desired signal steering vector mismatches caused by signal pointing errors [1], imprecise knowledge of the antenna array, look-direction mismatch or local scattering may lead to even more significant performance loss [2].

1.1 Prior and related work

To address these problems, robust adaptive beamforming (RAB) techniques have been developed in recent years.
Popular approaches include worst-case optimisation [2], diagonal loading [3-5] and eigendecomposition [1, 6]. However, general RAB designs have some limitations, such as their ad hoc nature, a high probability of subspace swap at low signal-to-noise ratio (SNR) and high computational cost [7]. More recent works have considered approaches based on combined estimation of both the steering vector mismatch and the interference-plus-noise covariance (INC) matrix to improve RAB performance. The worst-case optimisation methods in [2, 8-10] solve an online semi-definite program and use a matrix inversion to estimate the INC matrix. The method in [11] estimates the steering vector mismatch by solving an online sequential quadratic program (SQP) [12], while estimating the INC matrix with a shrinkage method [11]. A similar method, which jointly estimates the steering vector using SQP and the INC matrix using a covariance reconstruction approach [13], presents outstanding performance compared with other RAB techniques. However, the main disadvantages of these methods are the high computational cost associated with the online optimisation, the matrix inversion or reconstruction step, and slow convergence. Our recent work in [14] introduced the low-complexity shrinkage-based mismatch estimation (LOCSME) algorithm, an efficient iterative robust beamforming method with precise estimation of the steering vector mismatch. In this method, an extension of the oracle approximating shrinkage (OAS) method [15] is employed to perform vector shrinkage estimation of the cross-correlation vector between the sensor array received data and the beamformer output. The mismatched steering vector is estimated efficiently, without any costly optimisation procedure. The desired signal power is then estimated from the estimated steering vector and the received data. In a subsequent step, matrix shrinkage is applied to the sample covariance matrix (SCM), from which the covariance matrix of the desired signal is computed and subtracted to obtain an estimate of the INC matrix. The output signal-to-interference-plus-noise ratio (SINR) can then be computed directly.

1.2 Contributions

In this work, we first develop a stochastic gradient (SG) adaptive version of the LOCSME technique in [14], denoted LOCSME-SG, which does not require matrix inversions or costly recursions to update the beamforming weights adaptively. In particular, the SCM is estimated only once using a knowledge-aided (KA) shrinkage algorithm [16, 17], and the beamforming weights are computed from the estimated steering vector through SG recursions. Second, we develop adaptive LOCSME techniques based on the conjugate gradient (CG) algorithm, resulting in CG-type algorithms denoted LOCSME-CCG and LOCSME-MCG. Unlike LOCSME-SG, the CG-type algorithms not only update the beamforming weights but also estimate the mismatched steering vector, performing the LOCSME mismatch estimation sequentially in every snapshot. An analysis shows that both the SG-type and the CG-type algorithms achieve an order lower complexity in the number of sensors than the original LOCSME. Simulations also show an excellent performance, which benefits from the precise estimation provided by the shrinkage approach. Our contributions are summarised as follows:
- The development of LOCSME-type SG and CG algorithms.
- An investigation of the effect of shrinkage on the estimation accuracy of the algorithms.
- A study of the performance and the complexity of the proposed and existing algorithms.

The paper is organised as follows. The system model and problem statement are described in Section 2. A review of the LOCSME algorithm is provided in Section 3, whereas Section 4 presents the proposed adaptive LOCSME-SG, LOCSME-CCG and LOCSME-MCG algorithms. Section 5 provides the shrinkage and complexity analyses. Section 6 presents the simulation results and Section 7 gives the conclusion.

2 System model and problem statement

Consider a linear antenna array of M sensors on which K narrowband signals impinge. The data received at the i-th snapshot can be modelled as

x(i) = A(θ) s(i) + n(i),   (1)

where s(i) ∈ C^K contains the uncorrelated source signals, θ = [θ1, …, θK]^T ∈ R^K is a vector containing the directions of arrival (DoAs), A(θ) = [a(θ1) + e, a(θ2), …, a(θK)] ∈ C^{M×K} is the matrix that contains the steering vector for each DoA, e is the steering vector mismatch of the desired signal, and n(i) ∈ C^M is complex Gaussian noise with zero mean and variance σn^2. The beamformer output is

y(i) = w^H x(i),   (2)

where w ∈ C^M is the beamformer weight vector and (·)^H denotes the Hermitian transpose. The optimum beamformer is computed by maximising the SINR given by

SINR = σ1^2 |w^H a(θ1)|^2 / (w^H R_{I+N} w),   (3)

where σ1^2 is the desired signal power and R_{I+N} is the INC matrix. Assuming that the steering vector a(θ1) is known precisely, problem (3) can be cast as the optimisation problem

minimise_w  w^H R_{I+N} w   subject to  w^H a(θ1) = 1,   (4)

which is known as the minimum variance distortionless response (MVDR) or Capon beamformer [18, 19]. The optimum weight vector is given by w_opt = R_{I+N}^{-1} a(θ1) / (a^H(θ1) R_{I+N}^{-1} a(θ1)). Since R_{I+N} is usually unknown in practice, it can be estimated by the SCM of the received data as

R̂(i) = (1/i) Σ_{k=1}^{i} x(k) x^H(k),   (5)

which results in the sample matrix inversion (SMI) beamformer w_SMI = R̂^{-1} a(θ1) / (a^H(θ1) R̂^{-1} a(θ1)). However, the SMI beamformer requires a large number of snapshots to converge and is sensitive to steering vector mismatches [11, 13]. The problem we are interested in solving is how to design low-complexity robust adaptive beamforming algorithms that preserve the SINR performance in the presence of uncertainties in the steering vector of the desired signal.

3 LOCSME robust beamforming algorithm

In this section, the LOCSME algorithm [14] is briefly reviewed. The basic idea of LOCSME [14] is to obtain a precise estimate of the desired signal steering vector by exploiting the cross-correlation vector between the beamformer output and the array observation data, and then to compute the beamforming weights.

3.1 Steering vector estimation

The cross-correlation between the array observation data and the beamformer output can be expressed as d = E[x(i) y*(i)]. Under the assumptions that the contribution of the m-th interferer to d is negligible for m = 2, …, K, that the signal sources and the system noise have zero mean, and that the desired signal is independent of the interferers and the noise, d can be rewritten in terms of the desired signal component only. By projecting d onto a predefined subspace [20], which collects all possible information about the desired signal, the unwanted part of d can be eliminated. LOCSME also exploits prior knowledge in the form of an angular sector in which the desired signal is located, say [θ1 − θe, θ1 + θe]. The subspace projection matrix P is given by

P = [c1, c2, …, cp][c1, c2, …, cp]^H,   (6)

where c1, …, cp are the p principal eigenvectors of the matrix C, which is defined by [12]

C = ∫_{θ1 − θe}^{θ1 + θe} a(θ) a^H(θ) dθ.   (7)

To optimise LOCSME, the performance for different values of p can be observed through simulations or measurements for a scenario of interest; the value of p associated with the best performance is then chosen.
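As an illustration of (4)-(7), the following minimal Python/NumPy sketch discretises the angular sector to approximate the matrix C, builds the projection matrix P from its p principal eigenvectors, and computes MVDR/SMI-type weights from a covariance estimate. The half-wavelength ULA steering vector, the grid density and the function names are assumptions for illustration, not the authors' exact implementation.

import numpy as np

def ula_steering(theta_deg, M):
    """Steering vector of a half-wavelength-spaced uniform linear array."""
    theta = np.deg2rad(theta_deg)
    return np.exp(2j * np.pi * 0.5 * np.arange(M) * np.sin(theta))

def sector_projection(theta1_deg, theta_e_deg, M, p):
    """Approximate C in (7) by summing a(th) a(th)^H over a grid of the sector
    [theta1 - theta_e, theta1 + theta_e], then build P in (6) from the p
    principal eigenvectors of C."""
    grid = np.linspace(theta1_deg - theta_e_deg, theta1_deg + theta_e_deg, 181)
    C = np.zeros((M, M), dtype=complex)
    for th in grid:
        a = ula_steering(th, M)
        C += np.outer(a, a.conj())
    _, eigvec = np.linalg.eigh(C)      # eigenvalues in ascending order
    U = eigvec[:, -p:]                 # p principal eigenvectors
    return U @ U.conj().T              # P = U U^H

def mvdr_weights(R, a):
    """Capon/MVDR-type weights w = R^{-1} a / (a^H R^{-1} a), cf. (4)-(5)."""
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / np.vdot(a, Ri_a)

# Example (assumed values matching the simulation setup quoted later):
# P = sector_projection(10.0, 5.0, M=12, p=2)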
To achieve a better estimate of the steering vector, we employ the OAS shrinkage technique to obtain a more accurate estimate of the vector d. Let us define the sample correlation vector (SCV) in snapshot i as in (8) and its mean value as in (9). We then shrink the SCV towards its mean value, which yields the shrinkage estimate (10), where the shrinkage coefficient lies between 0 and 1. To find the optimum shrinkage coefficient, we minimise the mean square error (MSE) of the shrinkage estimate, which leads to (11). Once the correlation vector is obtained, the steering vector is estimated by (12).

3.2 Desired signal power estimation

This subsection introduces a method to estimate the desired signal power σ1^2 directly from the desired signal steering vector. Let us rewrite the received data as in (13). Pre-multiplying (13) by the Hermitian transpose of the desired signal steering vector and assuming that the desired signal is uncorrelated with the interferers, we obtain (14). Taking the expectation of (14), we obtain (15). If the noise is statistically independent of the desired signal, then we have (16), where the noise covariance matrix can be replaced by σn^2 I. The noise power σn^2 is assumed known here for convenience; otherwise it can be estimated, for instance with a maximum-likelihood (ML) based method as in [21]. A comparison between the cases in which the noise power is assumed known or estimated is given in the simulations. Replacing the desired signal power E[|s1|^2] by its estimate, the desired signal power estimate is computed as in (17). Equation (17) has a low complexity and can be implemented directly, provided that the desired signal steering vector is accurately estimated and the noise level is known.

3.3 Estimation of the INC matrix

In this subsection, we describe the method used in LOCSME to estimate the INC matrix, which is based on the OAS matrix shrinkage method [15]. First, the SCM in (5) is used as a preliminary estimate of the INC matrix. Following the OAS method of [15], the shrinkage target is defined as a scaled identity matrix whose scale is the average of the diagonal entries of the SCM. By minimising the MSE of the shrinkage estimate, the recursion in (18) and (19) is employed, where the shrinkage coefficient must be initialised between 0 and 1 to guarantee convergence [15]. To exclude the desired signal information from the covariance matrix of the sensor array observation data, a simple subtraction is considered:

R̂_{I+N}(i) = R̃(i) − σ̂1^2(i) â1(i) â1^H(i),   (20)

where R̃(i) denotes the shrinkage estimate of the covariance matrix and â1(i) the estimated desired signal steering vector.

3.4 Computation of beamforming weights

The beamforming weights of LOCSME are computed directly by

w(i) = R̂_{I+N}^{-1}(i) â1(i) / (â1^H(i) R̂_{I+N}^{-1}(i) â1(i)),   (21)

which requires a computationally costly matrix inversion. To reproduce the LOCSME algorithm, whose complexity is O(M^3), recursions (9)–(12) and (17)–(21) are required. In comparison with the previously reported RAB algorithms in [7, 11-13], which rely on costly online optimisation procedures with complexity O(M^3) or higher, LOCSME requires a lower cost.
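As a concrete illustration of the steps in Sections 3.3 and 3.4, the sketch below shrinks the SCM towards a scaled identity target, subtracts an assumed desired-signal component as in (20), and computes MVDR-type weights as in (21). It uses the closed-form OAS coefficient of [15] rather than the recursion (18)-(19), and a_hat and sigma1_sq are placeholders for the quantities LOCSME would estimate; this is a sketch under those assumptions, not the authors' exact implementation.

import numpy as np

def oas_shrink(R_hat, n_snapshots):
    """Shrink a sample covariance matrix towards a scaled identity target.
    Uses the closed-form OAS coefficient of Chen et al. [15] in place of the
    recursive update (18)-(19) employed by LOCSME (an assumption)."""
    M = R_hat.shape[0]
    F = (np.trace(R_hat).real / M) * np.eye(M)       # shrinkage target
    tr_R = np.trace(R_hat).real
    tr_R2 = np.trace(R_hat @ R_hat).real
    num = (1.0 - 2.0 / M) * tr_R2 + tr_R ** 2
    den = (n_snapshots + 1.0 - 2.0 / M) * (tr_R2 - tr_R ** 2 / M)
    rho = 1.0 if den <= 0 else min(num / den, 1.0)   # keep rho in [0, 1]
    return (1.0 - rho) * R_hat + rho * F

def locsme_style_weights(X, a_hat, sigma1_sq):
    """Sketch of (20)-(21): shrink the SCM, subtract the assumed desired-signal
    component, then compute MVDR-type weights. X is M x i (snapshots in columns)."""
    M, n = X.shape
    R_hat = X @ X.conj().T / n                                   # SCM, eq. (5)
    R_tilde = oas_shrink(R_hat, n)
    R_in = R_tilde - sigma1_sq * np.outer(a_hat, a_hat.conj())   # eq. (20)
    Ri_a = np.linalg.solve(R_in, a_hat)
    return Ri_a / np.vdot(a_hat, Ri_a)                           # eq. (21)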
4 Proposed adaptive algorithms

In this section, we develop adaptive strategies based on the LOCSME robust beamforming technique, resulting in the proposed LOCSME-SG, LOCSME-CCG and LOCSME-MCG algorithms. These algorithms are developed for implementation purposes and are especially suitable for dynamic scenarios. The adaptive algorithms employ the same recursions as LOCSME to estimate the steering vector and the desired signal power, whereas the estimation procedures for the INC matrix and the beamforming weights are different. In particular, LOCSME-SG employs a low-cost KA shrinkage method to estimate the INC matrix. For LOCSME-SG, LOCSME-CCG and LOCSME-MCG, the weight vector update equation is derived from a reformulated optimisation problem.

4.1 LOCSME-SG adaptive algorithm

With the estimate of the desired signal power, we subtract the unwanted interference information from the array received data to obtain a modified array observation vector through the simple subtraction step in (22). The INC matrix can then be estimated by (23). We employ the KA shrinkage method [16, 17] to assist this INC estimation: applying a linear shrinkage model to the INC matrix gives (24), where the first term is an initial guess for the INC matrix and η(i) ∈ (0, 1) is the shrinkage parameter, which is estimated adaptively. Following the adaptive filtering interpretation of [16, 17], the overall filter output yf(i) is set equal to the linear combination of the outputs of the two filter components associated with the initial guess and the estimated INC matrix, which leads to (25). To restrict η(i) to values greater than 0 and less than 1, a sigmoidal function is employed as in (26), where ε(i) is updated by (27), μɛ is the step size, σɛ is a small positive constant, and q(i) is updated by (28), where λq is a forgetting factor.

We now resort to an SG adaptive strategy to avoid the complexity of a matrix inversion. The optimisation problem (4) can be re-expressed as (29), and the corresponding SG recursion is given by (30). By substituting the gradient into the SG recursion (30) and enforcing the constraint, the Lagrange multiplier λ is obtained as in (31). Substituting λ back into (30) yields the weight update equation of LOCSME-SG in (32). The adaptive SG recursion circumvents the matrix inversion required to compute the weights with (21), which is unavoidable in LOCSME. Therefore, the computational complexity is reduced from O(M^3) in LOCSME to O(M^2) in LOCSME-SG. The proposed LOCSME-SG algorithm is summarised in Table 1.

Table 1. Proposed LOCSME-SG algorithm
  Initialise the algorithm variables.
  For each snapshot index i = 1, 2, …:
    1. Steering vector mismatch estimation
    2. Desired signal power estimation
    3. Computation of the INC matrix
    4. Computation of the beamformer weights
  End snapshot
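Because the closed-form expressions (30)-(32) are not recoverable from this text, the sketch below only illustrates the general idea with a standard linearly constrained (Frost-type) stochastic gradient step that enforces w^H a = 1 at every snapshot. The step size mu and the estimated steering vector a_hat are assumptions, and the exact LOCSME-SG update, which also uses the KA-shrinkage INC estimate, may differ.

import numpy as np

def constrained_sg_update(w, x, a_hat, mu=0.01):
    """One Frost-type constrained SG (LMS) step:
    minimise E|w^H x|^2 subject to w^H a_hat = 1.
    Generic stand-in for the LOCSME-SG recursion (30)-(32) (an assumption)."""
    M = len(w)
    aa = np.vdot(a_hat, a_hat).real                       # a^H a
    P = np.eye(M) - np.outer(a_hat, a_hat.conj()) / aa    # projection onto the constraint nullspace
    f = a_hat / aa                                        # f satisfies f^H a_hat = 1
    y = np.vdot(w, x)                                     # beamformer output y = w^H x
    grad = x * np.conj(y)                                 # instantaneous gradient of |y|^2
    return P @ (w - mu * grad) + f                        # update keeps w^H a_hat = 1

# A weight vector satisfying the constraint, e.g. w0 = a_hat / np.vdot(a_hat, a_hat),
# can be used as the initial value.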
4.2 LOCSME-CCG adaptive algorithm

The CG-based approach is divided into two algorithms, namely LOCSME-CCG and its modified version LOCSME-MCG. In LOCSME-CCG, the SCV is replaced by an estimate with a forgetting factor λ, a constant scalar less than and close to 1, as in (33), before it is employed in the vector shrinkage method. The INC matrix is also estimated directly with this forgetting factor as in (34). To derive the CG-based recursions, the cost function to be minimised is reformulated as in (35), which is a function of the CG-based weight vector and the steering vector estimate. LOCSME-CCG runs N inner iterations in each snapshot. In the n-th iteration, the weight vector and the steering vector estimate are updated by (36) and (37), where the corresponding direction vectors are updated by (38) and (39) using the negative gradients of the cost function with respect to the weight vector and the steering vector, given in (40) and (41), respectively. The scaling parameters in (36) and (37) are obtained by substituting (36) and (37) into (35) and minimising the cost with respect to each of them, which yields the solutions in (42) and (43). The parameters in (38) and (39) should be chosen to provide conjugacy of the direction vectors [22, 23], which results in (44) and (45). After the weight vector and the steering vector estimate have been updated for N iterations, the beamforming weight vector w(i) is computed by (46), while the estimated steering vector is also updated to the value obtained at the last inner iteration. Table 2 summarises the LOCSME-CCG algorithm.

Table 2. Proposed LOCSME-CCG algorithm
  Initialise the algorithm variables.
  For each snapshot index i = 1, 2, …:
    1. Steering vector mismatch estimation
    2. Desired signal power estimation
    3. CCG-based estimation of the steering vector mismatch and the beamformer weights
       (for each inner iteration index n = 1, 2, …, N)
    4. Computation of the beamformer weights
  End snapshot

4.3 LOCSME-MCG adaptive algorithm

In LOCSME-MCG, only one iteration is performed per snapshot [22, 23], which further reduces the complexity compared with LOCSME-CCG. The CG-based weights and steering vector are now updated over snapshots rather than inner iterations, as in (47) and (48), so the inner-iteration subscripts of all quantities are eliminated. We then employ the degenerated scheme to ensure that the scaling parameters satisfy the convergence bounds [22] given by (49) and (50). Instead of updating the negative gradient vectors over inner iterations, the forgetting factor is used to re-express them within one snapshot, as in (51) and (52). Pre-multiplying (51) and (52) by the corresponding direction vectors and taking expectations, we obtain (53) and (54). Substituting (54) back into (50) gives the bounds in (55), and a constant parameter is introduced in (56) to restrict the corresponding scaling parameter within these bounds. Similarly, with suitable shorthand notation, substituting (53) into (49) gives the bounds in (57), and another constant parameter is introduced in (58), or equivalently (59), to restrict the other scaling parameter within these bounds. The direction vectors are then updated by (60) and (61), where the conjugacy parameters are updated by (62) and (63). Finally, the beamforming weights are updated by (64). The LOCSME-MCG algorithm is summarised in Table 3. The MCG approach employs the forgetting factor λ and the constant η to estimate α(i), which means that its performance may depend on a suitable choice of these parameters. However, it requires much lower complexity owing to the elimination of the inner iterations compared with CCG, and presents a similar performance in the simulations.

Table 3. Proposed LOCSME-MCG algorithm
  Initialise the algorithm variables.
  For each snapshot index i = 1, 2, …:
    1. Steering vector mismatch estimation
    2. Desired signal power estimation
    3. MCG-based estimation of the steering vector mismatch and the beamformer weights
    4. Computation of the beamformer weights
  End snapshot
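Since the joint CCG/MCG recursions (36)-(64) rely on expressions that are not recoverable from this text, the sketch below only illustrates the underlying principle: conjugate gradient iterations can compute MVDR-type weights from an INC estimate and a steering vector without an explicit matrix inversion. The fixed iteration count n_iter and the names used are assumptions; the actual LOCSME-CCG/MCG algorithms additionally update the steering vector estimate inside the recursion.

import numpy as np

def cg_mvdr_weights(R_in, a_hat, n_iter=10):
    """Approximate w = R_in^{-1} a / (a^H R_in^{-1} a) by n_iter conjugate
    gradient iterations on R_in v = a_hat (R_in assumed Hermitian positive
    definite). Simplified stand-in for the LOCSME-CCG/MCG recursions."""
    M = len(a_hat)
    v = np.zeros(M, dtype=complex)
    r = a_hat - R_in @ v               # residual
    p = r.copy()                       # direction vector
    rs_old = np.vdot(r, r)
    for _ in range(n_iter):
        Rp = R_in @ p
        alpha = rs_old / np.vdot(p, Rp)          # step size along direction p
        v = v + alpha * p
        r = r - alpha * Rp
        rs_new = np.vdot(r, r)
        if abs(rs_new) < 1e-12:
            break
        p = r + (rs_new / rs_old) * p            # conjugate direction update
        rs_old = rs_new
    return v / np.vdot(a_hat, v)                 # normalise so that w^H a_hat = 1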
5 Analysis: shrinkage and complexity

This section investigates the effects of the shrinkage approaches and the computational complexity of the proposed algorithms. First, we rewrite the vector shrinkage recursion as a matrix shrinkage recursion. We then employ an eigendecomposition approach to examine the eigenvalue dispersion for the vector shrinkage and matrix shrinkage cases by exploring the MSE [24] of their eigenvalues, and explain why shrinkage makes an important contribution to the performance. Finally, we present a complexity analysis of the proposed algorithms and comparisons with existing RAB algorithms, showing that the proposed algorithms achieve an order lower complexity than most of the existing ones.

5.1 Effects of shrinkage

First, we rewrite the vector shrinkage formula (10) in the full-rank matrix form (65), in which the three matrices involved are diagonal, with diagonal entries given by the mean value of the SCV, the elements of the optimal shrinkage estimator and the elements of the SCV, respectively, and all other entries equal to zero. Comparing with (18), it can be seen that they share the same linear shrinkage form. We now carry out eigenvalue decompositions for every matrix in (65). Since the eigenvalues of a diagonal matrix are simply its diagonal entries, the eigenvalues of the three matrices can be expressed as in (66)–(68), respectively. Starting from the MSE expression (69), where 〈·, ·〉 denotes the inner product, the inner product term can be shown to equal zero, which yields (70). Equation (70) can be interpreted in terms of the eigenvalues of the matrices if we rewrite it as (71); note that the quantity appearing in (71) is the mean value of the SCV, i.e. the diagonal entries of the corresponding matrix. A similar analysis can be carried out for the matrix shrinkage in (18), even though the matrices are no longer diagonal, leading to a more general result. Assuming that the eigenvalues of the corresponding matrices are given by (72)–(74), respectively, we then have (75), where the inner product term again equals zero, which results in (76). Equation (76) is equivalent to (77), which can be rewritten in the alternative form (78). Since the expectations on the right-hand sides of (71) and (78) are always non-negative, their left-hand sides are always greater than or equal to zero, which yields (79) and (80). Together with (81) and (82), which express the expected mean of the eigenvalues of the sample matrices in snapshot i, the inequalities (79) and (80) indicate that the expected MSE of the eigenvalues of the sample correlation vector or the SCM in snapshot i is always larger than or equal to that of the optimal shrinkage estimator obtained from the previous snapshot. In other words, the eigenvalues of the sample matrices are more dispersed about their expected mean value [here λ1(i − 1) > γ1(i) > 0 and λm(i − 1) < γm(i)] than those of the optimal shrinkage estimator from the previous snapshot. Shrinking the sample matrix towards a matrix with less dispersed eigenvalues can lead to an improved covariance matrix estimator, as reported in [25].

5.2 Complexity analysis

In this subsection, we analyse the computational complexity in terms of flops (the total number of additions and multiplications) required by the proposed RAB algorithms. The proposed algorithms avoid the costly matrix inversion and multiplication procedures that are unavoidable in the existing RAB algorithms. The complexity comparison is listed in Table 4. It should be noted that the complexity of LOCSME-CCG depends on the number of inner iterations N, which can be properly selected within the range of 5–10. The LCWC algorithm of [6] also requires N inner iterations per snapshot, but its N varies significantly across snapshots and is usually much larger than the value of N used in the proposed LOCSME-CCG algorithm. It is clear that the proposed algorithms have an order lower complexity in terms of the number of sensors M, being dominated by O(M^2) terms, which results in great advantages when M is large. Fig. 1 illustrates the complexity comparison of the listed algorithms, where the values of N for [6] and for the proposed LOCSME-CCG are selected as 50 and 10, respectively.

Fig. 1: Complexity against the number of sensors.

Table 4. Complexity comparison
  RAB algorithm      | Flops
  LOCSME [14]        | 4M^3 + 3M^2 + 20M
  RCB [3]            | 2M^3 + 11M^2
  Algorithm of [11]  | M^3.5 + 7M^3 + 5M^2 + 3M
  LOCME [20]         | 2M^3 + 4M^2 + 5M
  LCWC [6]           | N(2M^2 + 7M)
  LOCSME-SG          | 15M^2 + 30M
  LOCSME-CCG         | 5M^2 + 21M + N(8M^2 + 32M)
  LOCSME-MCG         | 13M^2 + 77M
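A comparison like the one in Fig. 1 can be reproduced by simply evaluating the flop counts of Table 4 over a range of array sizes, as in the short sketch below; the values N = 50 for LCWC and N = 10 for LOCSME-CCG follow the setting quoted above, and everything else is just the polynomials of Table 4.

import numpy as np

# Flop counts from Table 4 as functions of the number of sensors M.
N_LCWC, N_CCG = 50, 10          # inner-iteration counts used for Fig. 1
flops = {
    "LOCSME [14]":       lambda M: 4*M**3 + 3*M**2 + 20*M,
    "RCB [3]":           lambda M: 2*M**3 + 11*M**2,
    "Algorithm of [11]": lambda M: M**3.5 + 7*M**3 + 5*M**2 + 3*M,
    "LOCME [20]":        lambda M: 2*M**3 + 4*M**2 + 5*M,
    "LCWC [6]":          lambda M: N_LCWC*(2*M**2 + 7*M),
    "LOCSME-SG":         lambda M: 15*M**2 + 30*M,
    "LOCSME-CCG":        lambda M: 5*M**2 + 21*M + N_CCG*(8*M**2 + 32*M),
    "LOCSME-MCG":        lambda M: 13*M**2 + 77*M,
}

for M in (10, 20, 40, 80):
    print(f"M = {M}")
    for name, f in flops.items():
        print(f"  {name:18s} {f(M):14.0f} flops")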
6 Simulation results

The simulations are carried out under both coherent and incoherent local scattering mismatch scenarios [4]. A uniform linear array of M = 12 omnidirectional sensors with half-wavelength spacing is considered. One hundred repetitions are executed to obtain each point of the curves and a maximum of i = 300 snapshots is observed. The desired signal is assumed to arrive at θ1 = 10°, while two other interferers impinge on the antenna array from the directions θ2 = 30° and θ3 = 50°. The signal-to-interference ratio (SIR) is fixed at 0 dB. For the curves with optimum beamforming in each of the comparisons, we employ the MVDR beamformer and assume that the DoA of the desired signal is perfectly known (without mismatch) and that the covariance matrix of the received data is also perfectly known, so that the output SINR can be computed directly with (3). For the proposed algorithms, the angular sector in which the desired signal is assumed to be located is chosen as [θ1 − 5°, θ1 + 5°] and the number of eigenvectors p of the subspace projection matrix is selected manually with the help of simulations. The results focus on the beamformer output SINR performance against the number of snapshots or against the input SNR (varied from −10 to 30 dB).

6.1 Mismatch due to coherent local scattering

The steering vector of the desired signal affected by a time-invariant coherent local scattering effect is modelled as in (83), where the first term corresponds to the direct path and the remaining terms correspond to the scattered paths. The angles θk (k = 1, 2, 3, 4) are randomly and independently drawn in each simulation run from a uniform generator with mean 10° and standard deviation 2°. The angles φk (k = 1, 2, 3, 4) are independently and uniformly drawn from the interval [0, 2π] in each simulation run. Notice that θk and φk change from trial to trial while remaining constant over the snapshots of each trial.

Figs. 2 and 3 show the SINR performance against the number of snapshots and against the input SNR, respectively, for the RAB algorithms discussed in the previous sections under the coherent scattering case. To obtain Fig. 2, we assume that the noise power is known and select μ = 0.2, μɛ = 1, σɛ = 0.001 and λq = 0.99 for LOCSME-SG, λ = 0.95 for LOCSME-CCG, and λ = 0.95 and η = 0.2 for LOCSME-MCG. The selection of these parameters may vary with the input SNR, as in Fig. 3. The proposed algorithms outperform the other algorithms and are very close to the standard LOCSME, especially LOCSME-CCG and LOCSME-MCG.

Fig. 2: Coherent local scattering, SINR against snapshots.
Fig. 3: Coherent local scattering, SINR against SNR.

In Fig. 4, an ML-based method is used to estimate the noise power in LOCSME, LOCSME-SG, LOCSME-CCG and LOCSME-MCG in the same scenario as Fig. 2. Comparing Figs. 2 and 4, no noticeable differences between the performances can be observed.

Fig. 4: Coherent local scattering, SINR against snapshots (estimated noise power).

6.2 Mismatch due to incoherent local scattering

In the incoherent local scattering case, the desired signal has a time-varying signature and its steering vector is modelled as in (84), where sk(i) (k = 0, 1, 2, 3, 4) are i.i.d. zero-mean complex Gaussian random variables independently drawn from a random generator, and the angles θk (k = 0, 1, 2, 3, 4) are drawn independently in each simulation run from a uniform generator with mean 10° and standard deviation 2°. This time, sk(i) changes both from run to run and from snapshot to snapshot.
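The two mismatch models (83) and (84) can be generated as in the sketch below, which follows the textual description above: a ULA steering vector plus four scattered paths, with fixed complex phases per trial in the coherent case and i.i.d. complex Gaussian gains redrawn at every snapshot in the incoherent case. Any normalisation used in the paper is not specified here, so this is only an illustrative construction under those assumptions.

import numpy as np

def ula_steering(theta_deg, M=12):
    """Half-wavelength-spaced ULA steering vector (repeated for self-containment)."""
    theta = np.deg2rad(theta_deg)
    return np.exp(2j * np.pi * 0.5 * np.arange(M) * np.sin(theta))

def scattering_angles(rng, mean_deg=10.0, std_deg=2.0, n_paths=4):
    """Scattered-path angles: uniform with the stated mean and standard deviation
    (half-width sqrt(3)*std for a uniform distribution)."""
    half = np.sqrt(3.0) * std_deg
    return rng.uniform(mean_deg - half, mean_deg + half, size=n_paths)

def coherent_sv(rng, M=12):
    """Eq. (83)-style mismatch: direct path at 10 deg plus four scattered paths
    with random phases; angles and phases stay fixed over the snapshots of a run."""
    sv = ula_steering(10.0, M)
    for th, ph in zip(scattering_angles(rng), rng.uniform(0, 2*np.pi, 4)):
        sv = sv + np.exp(1j * ph) * ula_steering(th, M)
    return sv

def incoherent_sv(rng, thetas, M=12):
    """Eq. (84)-style time-varying signature: i.i.d. zero-mean complex Gaussian
    gains sk(i), redrawn every snapshot, on the direct and scattered paths whose
    angles `thetas` (five values) stay fixed within a run."""
    s = (rng.standard_normal(len(thetas)) + 1j * rng.standard_normal(len(thetas))) / np.sqrt(2)
    return sum(g * ula_steering(th, M) for g, th in zip(s, thetas))

# Per run: rng = np.random.default_rng(0)
#          thetas = np.concatenate(([10.0], scattering_angles(rng)))
# then call incoherent_sv(rng, thetas) at every snapshot.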
Figs. 5 and 6 show the SINR performance against the number of snapshots and against the input SNR, respectively, for the RAB algorithms discussed in the previous sections under the incoherent scattering case. To obtain Fig. 5, we select μ = 0.1, μɛ = 5, σɛ = 0.001 and λq = 0.99 for LOCSME-SG, λ = 0.99 for LOCSME-CCG, and λ = 0.95 and η = 0.3 for LOCSME-MCG. For the results at different input SNRs, the parameters have been optimised to give the best possible performance.

Fig. 5: Incoherent local scattering, SINR against snapshots.
Fig. 6: Incoherent local scattering, SINR against SNR.

In contrast to the coherent scattering results, all the algorithms suffer a certain level of performance degradation under the incoherent local scattering model, in which the extra system dynamics introduced by the time variation contribute additional environmental uncertainty. However, over a wide range of input SNR values, the proposed algorithms are still able to outperform the other RAB algorithms. One point that needs to be emphasised is that most of the existing RAB algorithms experience significant performance degradation when the input SNR is high (i.e. around or above 20 dB), which is explained in [13] by the fact that the desired signal is always present in any kind of diagonal loading technique. The proposed algorithms improve the estimation accuracy, so that this high-SNR degradation is avoided, as can be seen in Figs. 5 and 6.

We also assess the SINR performance against the number of snapshots of the selected algorithms in a time-varying scenario with the desired signal operating at 12 dB. The scenario is characterised by a set of source signals whose DoAs remain fixed from the beginning of their operation until snapshot 150. The DoAs of these source signals then change abruptly at snapshot 150, as described in Table 5, which requires the beamforming algorithms to adjust to the new environment. The result for this scenario is shown in Fig. 7.

Fig. 7: Scenario with incoherent local scattering and time-varying DoAs.

Table 5. Changes of interferers
  Snapshots | DoAs
  0–150     | θ1 = 10°, θ2 = 30°, θ3 = 50°
  150–300   | θ1 = 15°, θ2 = 25°, θ3 = 35°

In addition, it should be emphasised that performance comparisons with conventional adaptive algorithms (i.e. SG, CCG or MCG without being combined with LOCSME) are not included, as they are not RAB algorithms and perform much worse in the presence of uncertainties. As mentioned in Section 1, it has already been shown that conventional adaptive beamforming algorithms are extremely sensitive to the statistical characteristics of the sampled data (i.e. data size and data accuracy). In particular, when these algorithms are subject to environmental uncertainties (e.g. steering vector mismatch), significant further performance degradation is unavoidable.

7 Conclusion

This work proposed low-complexity adaptive RAB algorithms developed from the LOCSME RAB method. In each of these algorithms, we derived recursions for the weight vector update and exploited effective shrinkage methods, both of which have low complexity without any noticeable performance loss. In addition, in the CG-based RAB algorithms, the estimation of the mismatched steering vector is carried out inside the CG recursions to enhance robustness. Both complexity and performance comparisons have been provided and analysed.
Simulation results have shown that the proposed algorithms achieve excellent output SINR performance and are suitable for operation at high input SNR.

8 References

1. Zhuang, J., Manikas, A.: 'Interference cancellation beamforming robust to pointing errors', IET Signal Process., 2013, 7, (2), pp. 120–127 (doi: https://doi.org/10.1049/iet-spr.2011.0464)
2. Vorobyov, S.A., Gershman, A.B., Luo, Z.: 'Robust adaptive beamforming using worst-case performance optimization: a solution to the signal mismatch problem', IEEE Trans. Signal Process., 2003, 51, (4), pp. 313–324 (doi: https://doi.org/10.1109/TSP.2002.806865)
3. Li, J., Stoica, P., Wang, Z.: 'On robust Capon beamforming and diagonal loading', IEEE Trans. Signal Process., 2003, 51, (7), pp. 1702–1715
4. Astely, D., Ottersten, B.: 'The effects of local scattering on direction of arrival estimation with MUSIC', IEEE Trans. Signal Process., 1999, 47, (12), pp. 3220–3234 (doi: https://doi.org/10.1109/78.806068)
5. Liao, B., Chan, S.C., Tsui, K.M.: 'Recursive steering vector estimation and adaptive beamforming under uncertainties', IEEE Trans. Aerosp. Electron. Syst., 2013, 49, (1), pp. 489–501 (doi: https://doi.org/10.1109/TAES.2013.6404116)
6. Elnashar, A.: 'Efficient implementation of robust adaptive beamforming based on worst-case performance optimization', IET Signal Process., 2008, 2, (4), pp. 381–393 (doi: https://doi.org/10.1049/iet-spr:20070162)
7. Khabbazibasmenj, A., Vorobyov, S.A., Hassanien, A.: 'Robust adaptive beamforming based on steering vector estimation with as little as possible prior information', IEEE Trans. Signal Process., 2012, 60, (6), pp. 2974–2987 (doi: https://doi.org/10.1109/TSP.2012.2189389)
8. Nai, S.E., Ser, W., Yu, Z.L., et al.: 'Iterative robust minimum variance beamforming', IEEE Trans. Signal Process., 2011, 59, (4), pp. 1601–1611 (doi: https://doi.org/10.1109/TSP.2010.2096222)
9. Yu, Z.L., Gu, Z., Zhou, J., et al.: 'A robust adaptive beamformer based on worst-case semi-definite programming', IEEE Trans. Signal Process., 2010, 58, (11), pp. 5914–5919 (doi: https://doi.org/10.1109/TSP.2010.2058107)
10. Lie, J.P., Ser, W., See, C.M.S.: 'Adaptive uncertainty based iterative robust Capon beamformer using steering vector mismatch estimation', IEEE Trans. Signal Process., 2011, 59, (9), pp. 4483–4488 (doi: https://doi.org/10.1109/TSP.2011.2157500)
11. Gu, Y., Leshem, A.: 'Robust adaptive beamforming based on jointly estimating covariance matrix and steering vector'. Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing, 2011, pp. 2640–2643
12. Hassanien, A., Vorobyov, S.A., Wong, K.M.: 'Robust adaptive beamforming using sequential quadratic programming: an iterative solution to the mismatch problem', IEEE Signal Process. Lett., 2008, 15, pp. 733–736 (doi: https://doi.org/10.1109/LSP.2008.2001115)
13. Gu, Y., Leshem, A.: 'Robust adaptive beamforming based on interference covariance matrix reconstruction and steering vector estimation', IEEE Trans. Signal Process., 2012, 60, (7), pp. 3881–3885
14. Ruan, H., de Lamare, R.C.: 'Robust adaptive beamforming using a low-complexity shrinkage-based mismatch estimation algorithm', IEEE Signal Process. Lett., 2014, 21, (1), pp. 60–64 (doi: https://doi.org/10.1109/LSP.2013.2290948)
15. Chen, Y., Wiesel, A., Hero III, A.O.: 'Shrinkage estimation of high dimensional covariance matrices'. Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing, 2009, pp. 2937–2940
16. Fa, R., de Lamare, R.C., Nascimento, V.H.: 'Knowledge-aided STAP algorithm using convex combination of inverse covariance matrices for heterogeneous clutter'. Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing, 2010, pp. 2742–2745
17. Stoica, P., Li, J., Zhu, X., et al.: 'On using a priori knowledge in space-time adaptive processing', IEEE Trans. Signal Process., 2008, 56, (6), pp. 2598–2602 (doi: https://doi.org/10.1109/TSP.2007.914347)
18. Van Trees, H.L.: 'Optimum array processing' (Wiley, New York, 2002)
19. Van Veen, B., Buckley, K.M.: 'Beamforming techniques for spatial filtering' (CRC Press LLC, 2000)
20. Landau, L., de Lamare, R., Haardt, M.: 'Robust adaptive beamforming algorithms using low-complexity mismatch estimation'. Proc. IEEE Statistical Signal Processing Workshop, 2011
21. Morelli, M., Sanguinetti, L., Mengali, U.: 'Channel estimation for adaptive frequency-domain equalization', IEEE Trans. Wirel. Commun., 2005, 4, (5), pp. 53–57
22. Wang, L., de Lamare, R.C.: 'Constrained adaptive filtering algorithms based on conjugate gradient techniques for beamforming', IET Signal Process., 2010, 4, (6), pp. 686–697 (doi: https://doi.org/10.1049/iet-spr.2009.0243)
23. Wang, L.: 'Array signal processing algorithms for beamforming and direction finding'. PhD thesis, Department of Electronics, University of York, 2009
24. Li, S., de Lamare, R.C., Haardt, M.: 'Adaptive frequency-domain group-based shrinkage estimators for UWB systems', IEEE Trans. Veh. Technol., 2013, 62, (8), pp. 3639–3652 (doi: https://doi.org/10.1109/TVT.2013.2259603)
25. Ledoit, O., Wolf, M.: 'A well-conditioned estimator for large-dimensional covariance matrices', J. Multivariate Anal., 2004, 88, (2), pp. 365–411

IET Signal Processing, Volume 10, Issue 5, July 2016, pp. 429–438.
