 Original Article
 Open Access
A Robust and Efficient Compressed Sensing Algorithm for Wideband Acoustic Imaging
Chinese Journal of Mechanical Engineering volume 33, Article number: 95 (2020)
Abstract
Wideband acoustic imaging, which combines compressed sensing (CS) and microphone arrays, is widely used for locating acoustic sources. However, the location results of this method are unstable, and its computational efficiency is low. In this work, in order to improve the robustness and reduce the computational cost, a DCS-SOMP-SVD compressed sensing method is proposed, which combines distributed compressed sensing using simultaneous orthogonal matching pursuit (DCS-SOMP) with the singular value decomposition (SVD). The performance of the DCS-SOMP-SVD method is studied through both simulation and experiment. In the simulation, the locating results of the DCS-SOMP-SVD method are compared with those of the wideband BP method and the DCS-SOMP method. In terms of computational efficiency, the proposed method is as efficient as the DCS-SOMP method and more efficient than the wideband BP method. In terms of locating accuracy, the proposed method can still locate all sources when the signal-to-noise ratio (SNR) is −20 dB, while the wideband BP method and the DCS-SOMP method can only locate all sources when the SNR is higher than 0 dB. The performance of the proposed method can be improved by broadening the frequency range. Moreover, there are no extra sources in the maps of the proposed method, even when the target sparsity is overestimated. Finally, a gas leak experiment is conducted to verify the feasibility of the DCS-SOMP-SVD method in a practical engineering environment. The experimental results show that the proposed method can locate both leak sources over different frequency ranges. This research provides a DCS-SOMP-SVD method with sufficient robustness and low computational cost for wideband acoustic imaging.
Introduction
Acoustic imaging, which uses planar microphone arrays and beamforming methods [1‒5], is widely employed for locating acoustic sources. For wideband acoustic imaging, one way is to operate the time-domain beamforming technique. The time-domain method has a relatively high computational efficiency and works very well for non-stationary and strongly transient signals [6]. Zhao used a tap-delay-line structure or finite impulse response (FIR) filter to achieve beamforming at different frequencies [7]. Wilkins presented a true time delay (TTD) beamformer bank in the beamspace that permits the directions of arrival of broadband sources to be estimated accurately, efficiently, and non-iteratively [8]. However, run-time delay quantization effects can demand high sampling rates and the processing of huge amounts of data [9]. Another way to deal with wideband acoustic imaging is to operate in the frequency domain. For a wideband signal, we can use the fast Fourier transform (FFT) to divide the array outputs into many narrowband frequency bins and apply a beamforming method to each bin, such as delay-and-sum (DAS) beamforming [10], the standard Capon beamforming (SCB) method [11], or the robust Capon beamforming (RCB) method [12]. However, when these methods are applied to wideband acoustic imaging, the mainlobe width varies as a function of frequency. Wang et al. [13] have proposed the shaded robust Capon beamformer (SRCB) method, which obtains an approximately constant mainlobe width for wideband acoustic imaging. In Ref. [14], robust Capon beamforming with pre-steering procedures can locate acoustic emissions with significant accuracy and fewer ghost images than ordinary beamforming. Wideband acoustic imaging can also be performed by jointly processing different narrowband frequency bins. Kassis et al. [15] have proposed the wideband zero-forcing MUSIC (ZF-MUSIC) method for locating aeroacoustic sources.
The wideband ZF-MUSIC criterion was proposed to avoid maximizing the criterion at each frequency bin. The ZF-MUSIC method improves on MUSIC's ability to separate sources of different powers, but the width of the frequency band cannot be too large lest the low frequencies degrade the resolution. Guo developed a robust near-field wideband beamformer design approach based on adaptive-weighted convex optimization. The method employs adaptive array signal processing theory, adjusts the weights flexibly, and improves the beamforming performance [16]. He proposed a new direction-of-arrival (DOA) estimation method for wideband sources based on iterative adaptive spectral reconstruction, which can be applied to coherent sources and improves the accuracy of DOA estimation [17]. However, these methods all increase the computational complexity and time cost and cannot meet real-time requirements.
Donoho [18] and Candès, Romberg, and Tao [19‒22] proposed the theory of compressed sensing (CS). CS shows that a signal having a sparse representation can be recovered exactly from a small set of linear, non-adaptive measurements. A signal is sparse if most of its entries are zeros. In the acoustic imaging problem, the number of sources, which are usually assumed to be point sources, is much smaller than the number of grid nodes. Therefore, compressed sensing can be readily used for acoustic imaging. Chu et al. [23] have applied the Bayesian CS method to near-field wideband aeroacoustic imaging. The method has good robustness in poor signal-to-noise ratio (SNR) cases and can obtain a wide dynamic range. However, it has a higher computational cost than beamforming methods. Chaturvedi used CS to reconstruct the cross-correlation of wideband signals from the cross-correlation of sub-Nyquist samples to estimate the DOA [24], but cross-correlation provides inferior accuracy in experiments [25]. Boufounos et al. applied joint sparsity models and the CoSaMP algorithm to wideband array processing. However, the CoSaMP algorithm fails to provide satisfactory performance in source location and spectral estimation applications, especially in the presence of closely spaced sources [26].
In this paper, we propose a new CS method for wideband acoustic imaging, called the DCS-SOMP-SVD method, which combines distributed compressed sensing using simultaneous orthogonal matching pursuit (DCS-SOMP) with the singular value decomposition (SVD). We study the performance of the DCS-SOMP-SVD method for wideband acoustic imaging by comparing it with the wideband basis pursuit (BP) method and the DCS-SOMP method.
This paper is organized as follows: Section 2 describes the observation model of acoustic signal propagation and the mathematical model of wideband acoustic imaging. Our proposed method is presented in Section 3. Subsequently, the performance of the proposed method is compared with the other two methods through simulations in Section 4, along with an analysis of the DCS-SOMP-SVD method. Section 5 provides a gas leakage experiment to verify the feasibility of the DCS-SOMP-SVD method in an actual application. Finally, we conclude the paper in Section 6.
Observation Model and Mathematical Model for Wideband Acoustic Imaging
Observation Model for Acoustic Imaging
Figure 1 illustrates the acoustic signal model propagating from the source plane \({z}_{0}\), which is \(h\) away from the planar microphone array. The microphone array consists of \(M\) sensors at known positions \(\bar{{\varvec{P}}}={\left[{\bar{{\varvec{P}}}}_{1},\cdot \cdot \cdot ,{\bar{{\varvec{P}}}}_{M}\right]}^{\mathrm{T}}\), where \({[\cdot ]}^{\mathrm{T}}\) denotes the transpose operator. The source plane \({z}_{0}\) is discretized into \(N=u\times v\) equidistant grids at known discrete positions \({\varvec{P}}=\left[{{\varvec{P}}}_{1},\cdot \cdot \cdot ,{{\varvec{P}}}_{N}\right]\). Let \({\varvec{Y}}\) be the vector of wavefield measurements at the \(M\) microphones of the array in the frequency domain, and let the unknown vector \({\varvec{X}}\) comprise the source strengths at all \(N\) grid nodes. With the help of the microphone array, we can acquire the pressure fields at the microphones and obtain \({\varvec{Y}}={\left[{{\varvec{Y}}}_{1},\cdot \cdot \cdot ,{{\varvec{Y}}}_{M}\right]}^{\mathrm{T}}\) by applying the FFT. The \(n\)th element of \({\varvec{X}}\) equals zero if there is no source at the \(n\)th grid node; otherwise, it is nonzero. The sources in our model are assumed to be uncorrelated monopoles in order to simplify the physical process and build up the acoustic propagation model explicitly [27].
The pressure field at the \(m\)th microphone is given by:
\({Y}_{m}=\sum_{n=1}^{N}\frac{{\mathrm{e}}^{-\mathrm{j}k{r}_{mn}}}{{r}_{mn}}{X}_{n},\) (1)
where \({r}_{mn}=\Vert {\bar{{\varvec{P}}}}_{m}-{{\varvec{P}}}_{n}\Vert\) denotes the distance between the \(m\)th microphone and the \(n\)th grid node, \(\omega =2\uppi f\) with \(f\) being the frequency, the wave number \(k=\omega /c\) with \(c\) being the sound speed, and \({X}_{n}\) is the amplitude of the \(n\)th grid node.
The model can be compactly expressed in matrix form:
\({\varvec{Y}}={\varvec{A}}{\varvec{X}},\) (2)
where \({\varvec{A}}\) is an \(M\times N\) matrix, defined as the measurement matrix.
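To make the forward model concrete, the measurement matrix can be assembled column by column from the monopole propagation model. The numpy sketch below is illustrative only: the function name `measurement_matrix`, the array layout, and the grid sizes are our own, and we assume the common free-field monopole convention \(A_{mn}={\mathrm{e}}^{-\mathrm{j}k{r}_{mn}}/{r}_{mn}\) with normalization constants omitted.

```python
import numpy as np

def measurement_matrix(mic_pos, node_pos, f, c=343.0):
    """Build the M x N measurement matrix for the monopole model.

    mic_pos:  (M, 3) microphone coordinates in metres
    node_pos: (N, 3) grid-node coordinates in metres
    f:        frequency in Hz; c: sound speed in m/s
    (Assumed convention: A_mn = exp(-j*k*r_mn) / r_mn.)
    """
    k = 2 * np.pi * f / c                      # wave number k = omega / c
    # r[m, n] = distance between microphone m and grid node n
    r = np.linalg.norm(mic_pos[:, None, :] - node_pos[None, :, :], axis=2)
    return np.exp(-1j * k * r) / r             # phase delay + spherical spreading

# forward model Y = A X for two hypothetical point sources on a small grid
rng = np.random.default_rng(0)
mics = rng.uniform(-0.5, 0.5, (60, 3))
mics[:, 2] = 0.0                               # array lies in the z = 0 plane
xy = np.stack(np.meshgrid(np.linspace(-0.4, 0.4, 5),
                          np.linspace(-0.4, 0.4, 5)), -1).reshape(-1, 2)
nodes = np.hstack([xy, np.full((25, 1), 0.8)])  # source plane at z0 = 0.8 m
A = measurement_matrix(mics, nodes, f=3000.0)
X = np.zeros(25, dtype=complex)
X[[6, 18]] = 1.0                                # two active grid nodes
Y = A @ X                                       # M complex pressure measurements
```

Any sparse-recovery method then works backwards from `Y` and `A` to the two nonzero entries of `X`.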
In the practical engineering environment, there are often measurement errors and background noise. In this paper, we model the errors and background noise as additive Gaussian white noise (AGWN), which is independent and identically distributed and independent of the sources. Thus, a more realistic propagation model can be written as:
\({\varvec{Y}}={\varvec{A}}{\varvec{X}}+{\varvec{e}},\) (3)
where \({\varvec{e}}\) denotes the background noise and errors.
By solving the linear system Eq. (2), we can recover the signal \({\varvec{X}}\in {\mathbb{C}}^{N}\). Mathematically speaking, if we want to solve the linear system Eq. (2) without any distortion, the number of measurements \(M\), i.e., the number of microphones in acoustic imaging, should be at least as large as the signal length \(N\). Otherwise, the system is severely underdetermined and has no unique solution.
Thanks to sparsity, one can perfectly recover \({\varvec{X}}\) by solving the following optimization problem:
\(\underset{{\varvec{X}}}{\mathrm{min}}\ {\Vert {\varvec{X}}\Vert }_{0},\quad \mathrm{s}.\mathrm{t}.\ {\varvec{Y}}={\varvec{A}}{\varvec{X}}.\) (4)
Unfortunately, the \({l}_{0}\)-minimization problem is NP-hard [28‒30] and thus computationally intractable. Candès and Tao [31, 32] have proved that, under certain conditions, Eq. (4) is equivalent to the following \({l}_{1}\)-optimization problem:
\(\underset{{\varvec{X}}}{\mathrm{min}}\ {\Vert {\varvec{X}}\Vert }_{1},\quad \mathrm{s}.\mathrm{t}.\ {\varvec{Y}}={\varvec{A}}{\varvec{X}},\) (5)
where \({\Vert {\varvec{X}}\Vert }_{1}=\sum_{i=1}^{N}\left|{X}_{i}\right|\).
As to Eq. (3), which takes noise into account, we can solve it via the following second-order cone programming (SOCP) problem:
\(\underset{{\varvec{X}}}{\mathrm{min}}\ {\Vert {\varvec{X}}\Vert }_{1},\quad \mathrm{s}.\mathrm{t}.\ {\Vert {\varvec{Y}}-{\varvec{A}}{\varvec{X}}\Vert }_{2}\le \varepsilon ,\) (6)
where \(\varepsilon\) is a specified tolerance for the noise \({\varvec{e}}\).
Eqs. (5) and (6) are the convex relaxations of their corresponding original NP-hard problems and can be solved by basis pursuit (BP) (also known as the \({l}_{1}\)-minimization method) in polynomial time [31, 33‒35]. The BP method has both merits and drawbacks: it provides theoretical performance guarantees, but its computational cost may be a limitation.
Apart from convex relaxation, several greedy methods, which iteratively compute the support set of the signal and approximate the sparse signal of Eq. (3) until a preset stopping condition is met [36‒42], are also widely used. Greedy methods have the advantages of easy implementation, fast convergence, and low complexity.
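For readers unfamiliar with greedy pursuit, the sketch below shows a minimal orthogonal matching pursuit (OMP) on a real-valued toy problem. It is an illustration of the generic greedy idea, not the authors' wideband algorithm, and all names and dimensions are hypothetical.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily builds the support of a
    k-sparse x with y ≈ A x, re-fitting by least squares each round."""
    residual = y.copy()
    support = []
    for _ in range(k):
        # pick the column most correlated with the current residual
        idx = int(np.argmax(np.abs(A.T @ residual)))
        support.append(idx)
        # least-squares fit on the selected columns, then update the residual
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 100))      # 30 measurements, 100 unknowns
x_true = np.zeros(100)
x_true[[7, 42, 90]] = [1.5, -2.0, 0.8]  # 3-sparse ground truth
y = A @ x_true
x_hat = omp(A, y, k=3)                  # greedy 3-sparse approximation
```

Each iteration adds one atom to the support and re-projects, which is exactly the easy-to-implement, fast-converging behavior described above.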
Joint Sparsity Model for Wideband Acoustic Imaging
For narrowband acoustic imaging, we have developed CS algorithms based on greedy algorithms [43‒45]. For a wideband signal, we choose its characteristic frequency band, with lower and upper bound frequencies \({f}_{min}\) and \({f}_{max}\). A simple method is to divide the chosen wide band into several narrow bands; then for each frequency \({f}_{j}\) we have:
\({\varvec{Y}}\left({f}_{j}\right)={\varvec{A}}\left({f}_{j}\right){\varvec{X}}\left({f}_{j}\right)+{\varvec{e}}\left({f}_{j}\right),\quad j=1,\dots ,J,\) (7)
where \(J\) is the number of frequencies within \([{f}_{min},{f}_{max}]\).
We can directly solve Eq. (7) by the CS algorithms, but the computational efficiency can be a problem as the bandwidth grows.
However, the positions of the sound sources do not change with frequency. In other words, the signals at different frequencies satisfy a simultaneous sparse approximation. So we combine the signal vectors at each frequency into a signal matrix and solve them jointly [46]. The problem can then be transformed into an optimization problem with the help of joint sparsity:
\(\underset{\bar{{\varvec{X}}}}{\mathrm{min}}\ {\Vert {{\varvec{X}}}^{({l}_{2})}\Vert }_{1},\quad \mathrm{s}.\mathrm{t}.\ {\Vert \bar{{\varvec{Y}}}-\bar{{\varvec{A}}}\bar{{\varvec{X}}}\Vert }_{2}\le \bar{\varepsilon },\) (8)
where \({{\varvec{X}}}^{({l}_{2})}={[{{\varvec{X}}}_{1}^{({l}_{2})},{{\varvec{X}}}_{2}^{({l}_{2})},\ldots ,{{\varvec{X}}}_{N}^{({l}_{2})}]}^{\mathrm{T}}\) denotes the energy vector, which is the mean square of the source power on each node, i.e., \({{\varvec{X}}}_{n}^{({l}_{2})}={\Vert [{{\varvec{X}}}_{n}\left({f}_{1}\right),{{\varvec{X}}}_{n}\left({f}_{2}\right),\ldots ,{{\varvec{X}}}_{n}({f}_{J})]\Vert }_{2}\). \(\bar{{\varvec{X}}}\) and \(\bar{{\varvec{Y}}}\) can be obtained by stacking the signal vectors \({\varvec{X}}({f}_{j})\) and the data \({\varvec{Y}}({f}_{j})\), \(\bar{{\varvec{A}}}\) is a block-diagonal matrix with the measurement matrices \({\varvec{A}}({f}_{j})\) as its blocks, and \(\bar{\varepsilon }\) is a specified tolerance for the noise \(\bar{{\varvec{e}}}\).
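The energy vector above can be illustrated with a small numpy sketch. The dimensions and values here are hypothetical; the point is only that the per-frequency solutions share one support, so the \(l_{2}\) norm across frequencies collapses them into one sparsity pattern per grid node.

```python
import numpy as np

# Columns: per-frequency solutions X(f_j); rows: grid nodes.
# The rows share one support because the source positions do not
# depend on frequency (joint sparsity).
rng = np.random.default_rng(2)
N, J = 16, 5                         # hypothetical grid size and bin count
X = np.zeros((N, J), dtype=complex)
X[[3, 11], :] = rng.standard_normal((2, J)) + 1j * rng.standard_normal((2, J))

# energy vector X^(l2): l2 norm across the frequency bins for each node
X_l2 = np.linalg.norm(X, axis=1)

# the joint formulation penalizes the l1 norm of X_l2, so only nodes
# that are active at some frequency contribute to the objective
active = np.flatnonzero(X_l2 > 0)
```

Minimizing \({\Vert {{\varvec{X}}}^{({l}_{2})}\Vert }_{1}\) therefore encourages few active rows rather than few active entries per frequency.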
The original problem is thus simplified to an SOCP problem, so BP can be used. Furthermore, greedy algorithms can also be applied. Simultaneous orthogonal matching pursuit (SOMP) is a greedy pursuit algorithm proposed by Tropp et al. [47] on the basis of OMP. It computes provably good solutions to several simultaneous sparse approximation problems. Sarvotham et al. [48, 49] extended the theory to distributed compressed sensing (DCS) and achieved joint recovery of multiple signals from incoherent projections through SOMP.
DCS-SOMP has the same advantages as OMP and higher computational efficiency than BP. However, we found that DCS-SOMP cannot locate sources at low SNR. Therefore, we propose a robust method combining DCS-SOMP and SVD, which can be used for wideband acoustic imaging even when the SNR is low.
DCS-SOMP-SVD Method for Wideband Acoustic Imaging
In this section, we present a practical approach for wideband acoustic imaging in low-SNR environments, which combines the DCS-SOMP algorithm and the SVD. The measurements of sound pressure received by the microphone array in the time domain are divided into \(B\) blocks, where each block contains \(L\) data points and has \(50\mathrm{ \%}\) overlap with its neighbors. We perform an \(L\)-point discrete Fourier transform (DFT) and choose the data between the lower and upper bound frequencies \({f}_{min}\) and \({f}_{max}\). We can then obtain an \(M\times B\) data matrix \({\varvec{y}}({f}_{j})\) at each frequency \({f}_{j}\) from the \(B\) blocks of data:
\({\varvec{y}}\left({f}_{j}\right)=\left[{{\varvec{Y}}}^{(1)}\left({f}_{j}\right),{{\varvec{Y}}}^{(2)}\left({f}_{j}\right),\ldots ,{{\varvec{Y}}}^{(B)}\left({f}_{j}\right)\right],\) (9)
where \({{\varvec{Y}}}^{(b)}({f}_{j})\) collects the \(M\) microphone spectra of the \(b\)th block at frequency \({f}_{j}\).
Similarly, the source strength can be arranged as an \(N\times B\) matrix \({\varvec{x}}\left({f}_{j}\right)\), which consists of the source strengths at each frequency \({f}_{j}\). We employ the SVD for \({\varvec{y}}\left({f}_{j}\right)\):
\({\varvec{y}}\left({f}_{j}\right)={\varvec{U}}\boldsymbol{\varLambda }{{\varvec{V}}}^{\mathrm{T}},\) (10)
where \({\varvec{U}}\) is an \(M\times M\) unitary matrix, \(\boldsymbol{\varLambda }\) is an \(M\times B\) diagonal matrix, and \({{\varvec{V}}}^{\mathrm{T}}\) is a \(B\times B\) unitary matrix.
We define the reduced \(M\times K\) dimensional matrix \({{\varvec{y}}\left({f}_{j}\right)}_{SV}\), which involves most of the signal power, as \({{\varvec{y}}\left({f}_{j}\right)}_{SV}={\varvec{U}}\boldsymbol{\varLambda }{{\varvec{D}}}_{K}={\varvec{y}}\left({f}_{j}\right){\varvec{V}}{{\varvec{D}}}_{K}\), where \({{\varvec{D}}}_{K}={\left[\begin{array}{cc}{{\varvec{I}}}_{K}& \mathbf{0}\end{array}\right]}^{\mathrm{T}}\). Here \(K\) is the source sparsity, i.e., the actual number of sources, \({{\varvec{I}}}_{K}\) is a \(K\times K\) identity matrix, and \(\mathbf{0}\) is a \(K\times (B-K)\) zero matrix. In addition, we transform the signal matrix \({\varvec{x}}\left({f}_{j}\right)\) at each frequency \({f}_{j}\) as \({{\varvec{x}}\left({f}_{j}\right)}_{SV}={\varvec{x}}\left({f}_{j}\right){\varvec{V}}{{\varvec{D}}}_{K}\), and let \({{\varvec{e}}\left({f}_{j}\right)}_{SV}={\varvec{e}}\left({f}_{j}\right){\varvec{V}}{{\varvec{D}}}_{K}\), to obtain the system:
\({{\varvec{y}}\left({f}_{j}\right)}_{SV}={\varvec{A}}\left({f}_{j}\right){{\varvec{x}}\left({f}_{j}\right)}_{SV}+{{\varvec{e}}\left({f}_{j}\right)}_{SV}.\) (11)
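The dimension-reduction step can be sketched in numpy as follows. The data here are synthetic (a rank-\(K\) snapshot matrix plus weak noise, with sizes of our own choosing); the sketch only illustrates that keeping the \(K\) dominant singular directions retains almost all of the signal energy while shrinking the snapshot dimension from \(B\) to \(K\).

```python
import numpy as np

rng = np.random.default_rng(3)
M, B, K = 24, 40, 2                  # microphones, snapshot blocks, sources

# synthetic M x B snapshot matrix: a rank-K signal part plus weak noise
signal = rng.standard_normal((M, K)) @ rng.standard_normal((K, B))
y = signal + 0.01 * rng.standard_normal((M, B))

# economy-size SVD; multiplying by D_K keeps the K dominant directions
U, s, Vt = np.linalg.svd(y, full_matrices=False)
y_sv = U[:, :K] * s[:K]              # equals y @ V @ D_K, an M x K matrix

# fraction of the snapshot energy retained by the K kept columns
energy_kept = np.sum(s[:K] ** 2) / np.sum(s ** 2)
```

Because singular values are returned in descending order, truncating after the first \(K\) columns is exactly the right-multiplication by \({\varvec{V}}{{\varvec{D}}}_{K}\) described above.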
After the SVD, the signal subspace is retained, and the noise subspace is discarded. Then we can apply the DCS-SOMP algorithm to solve the problem in Eq. (11). As with the OMP algorithm, the DCS-SOMP algorithm has two ways to terminate its iterations; we have introduced the difference between them in our previous work [45]. Unfortunately, the source sparsity \(K\) is unknown a priori in many cases. In this work, we use the target sparsity \({K}_{T}\), which is larger than \(K\), as the stopping condition, the same approach as in the OMP-SVD [45, 50].
Donoho et al. [51] gave a phase diagram depicting the performance of CS. The diagram shows that high-accuracy reconstruction can be obtained for small \(\rho (=K/M)\) and large \(\delta (=M/N)\), while for large \(\rho\) and small \(\delta\) the reconstruction fails. The phase transition analysis [52] shows that the maximum source sparsity \(K\) that can be accurately reconstructed obeys the empirical formula \(M\approx 2K\mathrm{log}(N)\) [42]. To obtain a high-accuracy reconstruction (small \(\rho\) and large \(\delta\)) with the DCS-SOMP-SVD, the maximum possible number of sources \(K\) is obtained by
\(K=\left\lfloor M/\left(2\mathrm{log}(N)\right)\right\rfloor .\) (12)
By applying DCS-SOMP, we can obtain an approximate solution \({\varvec{X}}\left({f}_{j}\right)\) at each frequency \({f}_{j}\) and transform \({\varvec{X}}\left({f}_{j}\right)\) into the matrix \({{\varvec{X}}\left({f}_{j}\right)}_{SV}\). Then we can get the strengths and locations of the sources by averaging the source strengths over all columns of the signal subspace \({{\varvec{X}}\left({f}_{j}\right)}_{SV}\). The (discrete) source strength \({{\varvec{X}}}_{j}^{*}\) can be obtained by
\({{\varvec{X}}}_{j}^{*}=\mathrm{diag}\left({{\varvec{R}}}_{{X\left({f}_{j}\right)}_{SV}(K)}\right),\) (13)
where \({{\varvec{R}}}_{{X\left({f}_{j}\right)}_{SV}(K)}=E\left[{{\varvec{X}}\left({f}_{j}\right)}_{SV}(k){{{\varvec{X}}\left({f}_{j}\right)}_{SV}(k)}^{H}\right]\) [27] denotes the source power covariance matrix and \({{\varvec{X}}\left({f}_{j}\right)}_{SV}(k)\) denotes the kth column of the matrix \({{\varvec{X}}\left({f}_{j}\right)}_{SV}\).
As the locations of the unknown sources are assumed to be the same at all frequencies, we can depict the maps with location and overall sound pressure level (OASPL). The OASPL can be obtained from the following equation:
\(\mathrm{OASPL}=10{\mathrm{log}}_{10}\left(\sum_{j=1}^{J}{x}_{jn}^{*}/{p}_{\mathrm{ref}}^{2}\right),\) (14)
with \({p}_{\mathrm{ref}}=2\times {10}^{-5}\) Pa the reference sound pressure,
where \({x}_{jn}^{*}\) is the power of the nth node for the jth frequency within the frequency range.
Now we are ready to present our DCS-SOMP-SVD method for wideband acoustic imaging. The sequence of steps is as follows [48, 49].
Step 1: Obtain the measurement of sound pressure in timedomain from the microphone array.
Step 2: Give the lower and upper frequency bound \({f}_{min}\) and \({f}_{max}\). For each frequency \(f\in [{f}_{min}\), \({f}_{max}]\), construct the measurement matrix \({\varvec{A}}\) according to the frequency \(f\), the node position, and the distance \({z}_{0}\) with Eq. (2).
Step 3: Divide the measured sound pressure into \(B\) blocks, where each block contains \(L\) data points and has \(50\mathrm{ \%}\) overlap with its neighbors.
Step 4: Obtain the \(M\times B\) data matrix \({\varvec{y}}({f}_{j})\) at each frequency \({f}_{j}\) by performing an \(L\)-point DFT on the data blocks from Step 3.
Step 5: Compute the source sparsity \(K\) with Eq. (12) as the maximum number of sources that can be located. On this basis, we use the target sparsity \({K}_{T}\) as the stopping condition.
Step 6: Discretize the source plane, which is \({z}_{0}\) away from the array plane, with \(u\times v=N\) nodes.
Step 7: Perform SVD for the data matrix \({\varvec{y}}({f}_{j})\) with Eq. (10).
Step 8: Transform the data matrix as \({{\varvec{y}}({f}_{j})}_{SV}={\varvec{U}}\boldsymbol{\varLambda }{{\varvec{D}}}_{K}={\varvec{y}}\left({f}_{j}\right){\varvec{V}}{{\varvec{D}}}_{K}\).
Step 9: Repeat from step 3 to step 8 for signal data and error data to obtain \({{\varvec{x}}({f}_{j})}_{SV}\) and \({{\varvec{e}}({f}_{j})}_{SV}\), respectively.
Step 10: Construct the system model with Eq. (11).
Step 11: Set the iteration counter \(l=1\). For each signal index \(j\in \{1,\cdot \cdot \cdot ,J\}\), initialize the orthogonalized coefficient vectors \({\widehat{\beta }}_{j}=0\), also initialize the set of selected indices \(\widehat{\boldsymbol{\varOmega }}=\boldsymbol{\varnothing }\). Let \({{\varvec{r}}}_{j,l}\) denote the residual of the measurement \({\varvec{y}}({f}_{j})\) remaining after the first \(l\) iterations, and initialize \({{\varvec{r}}}_{j,0}={\varvec{y}}\left({f}_{j}\right).\)
Step 12: Select the dictionary vector that maximizes the sum over the narrow bands of the magnitudes of the projections of the residuals, and add its index to the set of selected indices:
\({n}_{l}=\underset{n=1,\dots ,N}{\mathrm{argmax}}\sum_{j=1}^{J}\frac{\left|\langle {{\varvec{r}}}_{j,l-1},{{\varvec{A}}}_{j,n}\rangle \right|}{{\Vert {{\varvec{A}}}_{j,n}\Vert }_{2}},\qquad \widehat{\boldsymbol{\varOmega }}=\widehat{\boldsymbol{\varOmega }}\cup \{{n}_{l}\},\) (15)
where \({{\varvec{A}}}_{j,n}\) is the nth column of the measurement matrix \({\varvec{A}}\left({f}_{j}\right)\).
Step 13: Operate Gram‒Schmidt orthogonalization: orthogonalize the selected basis vector against the orthogonalized set of previously selected dictionary vectors:
\({{\varvec{\gamma}}}_{j,l}={{\varvec{A}}}_{j,{n}_{l}}-\sum_{t=0}^{l-1}\frac{\langle {{\varvec{A}}}_{j,{n}_{l}},{{\varvec{\gamma}}}_{j,t}\rangle }{{\Vert {{\varvec{\gamma}}}_{j,t}\Vert }_{2}^{2}}{{\varvec{\gamma}}}_{j,t},\) (16)
where \({{\varvec{\gamma}}}_{j,l}\) is the orthogonalization result of the selected column \({{\varvec{A}}}_{j,{n}_{l}}\), which equals the product of the amplitude of \({{\varvec{A}}}_{j,{n}_{l}}\) and the corresponding column of \({\varvec{Q}}\) after \(QR\) factorization.
Step 14: Update the estimates of the coefficients \({\widehat{\beta }}_{j}\) for the selected vectors and the residuals \({{\varvec{r}}}_{j,l}\):
\({{\varvec{r}}}_{j,l}={{\varvec{r}}}_{j,l-1}-\frac{\langle {{\varvec{r}}}_{j,l-1},{{\varvec{\gamma}}}_{j,l}\rangle }{{\Vert {{\varvec{\gamma}}}_{j,l}\Vert }_{2}^{2}}{{\varvec{\gamma}}}_{j,l},\) (17)
where \({\widehat{\beta }}_{j}={\widehat{{\varvec{R}}}}_{j,l}{\stackrel{\sim }{{\varvec{X}}}}_{j,l}\) with \({\stackrel{\sim }{{\varvec{X}}}}_{j,l}\) being the least-squares solution of the linear system \({{\varvec{r}}}_{j,l-1}={{\varvec{A}}}_{j,{n}_{l}}{{\varvec{X}}}_{j,l}\), and \({\widehat{{\varvec{R}}}}_{j,l}\) being the division of \({\varvec{R}}\) after \(QR\) factorization of the selected column \({{\varvec{A}}}_{j,{n}_{l}}\) by the amplitude of \({{\varvec{A}}}_{j,{n}_{l}}\).
Step 15: \(l=l+1\). Return to step 12 until \(l={K}_{T}\).
Step 16: Apply \(QR\) factorization on the mutilated basis \({{\varvec{A}}}_{j,\widehat{\Omega }}={{\varvec{Q}}}_{j}{{\varvec{R}}}_{j}={\boldsymbol{\varGamma }}_{j}{{\varvec{R}}}_{j}\). Since in each narrow band \({\varvec{y}}\left({f}_{j}\right)={{\varvec{A}}}_{j,\widehat{\Omega }}{{\varvec{X}}}_{j,\widehat{\Omega }}={\boldsymbol{\varGamma }}_{j}{{\varvec{R}}}_{j}{{\varvec{X}}}_{j,\widehat{\Omega }}\), where \({{\varvec{X}}}_{j,\widehat{\Omega }}\) is the mutilated coefficient vector, we can compute the signal estimates \(\left\{{\stackrel{\sim }{{\varvec{X}}}}_{j}\right\}\) as:
\({\stackrel{\sim }{{\varvec{X}}}}_{j}={{\varvec{R}}}_{j}^{-1}{\widehat{\beta }}_{j}.\) (18)
Step 17: End iteration and let \(\varvec{X}\left( {f_{j} } \right) = \varvec{\tilde{X}}_{j}\).
Step 18: Reshape \({\varvec{X}}\left( {f_{j} } \right)\) as the \(N \times K\) matrix \({\varvec{X}}\left( {f_{j} } \right)_{SV}\).
Step 19: Calculate the (discrete) source strengths \({\varvec{X}}_{j}^{*} \left( {j = 1,2,\ldots,J} \right)\) via Eq. (13). Then get the OASPL by Eq. (14).
Step 20: Find the source positions using the indices of the nonzero elements in OASPL and the observation model in Section 2.
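The iterative core of the procedure (Steps 11‒17) can be sketched compactly in Python. This is an illustrative simplification, not the authors' implementation: the shared support is chosen by summed normalized correlations as in Step 12, but the Gram‒Schmidt/\(QR\) bookkeeping of Steps 13‒16 is replaced by an equivalent per-channel least-squares re-fit, and all names, dimensions, and test data are our own.

```python
import numpy as np

def dcs_somp(A_list, y_list, k_t):
    """Simplified DCS-SOMP sketch: the J channels share one support.

    A_list: list of J measurement matrices (M x N each)
    y_list: list of J measurement vectors (length M each)
    k_t:    target sparsity (number of greedy iterations)
    Returns the shared support and the N x J coefficient estimates.
    """
    J = len(A_list)
    residuals = [y.copy() for y in y_list]
    support = []
    for _ in range(k_t):
        # Step 12: sum the normalized residual correlations over channels
        score = sum(np.abs(A.conj().T @ r) / np.linalg.norm(A, axis=0)
                    for A, r in zip(A_list, residuals))
        support.append(int(np.argmax(score)))
        # Steps 13-14 folded into a per-channel least-squares update
        for j in range(J):
            coef, *_ = np.linalg.lstsq(A_list[j][:, support], y_list[j],
                                       rcond=None)
            residuals[j] = y_list[j] - A_list[j][:, support] @ coef
    # Steps 16-17: final coefficients on the shared support
    X = np.zeros((A_list[0].shape[1], J), dtype=complex)
    for j in range(J):
        coef, *_ = np.linalg.lstsq(A_list[j][:, support], y_list[j],
                                   rcond=None)
        X[support, j] = coef
    return support, X

# two channels (frequency bins) with a common 2-sparse support {5, 20}
rng = np.random.default_rng(4)
A_list = [rng.standard_normal((20, 40)) + 1j * rng.standard_normal((20, 40))
          for _ in range(2)]
y_list = [A[:, 5] * 1.0 + A[:, 20] * (0.5 + 0.5j) for A in A_list]
support, X = dcs_somp(A_list, y_list, k_t=2)
```

The least-squares re-fit yields the same residuals as the orthogonalized updates in Steps 13‒14; the explicit \(QR\) form in the paper is simply the numerically organized way of doing it.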
Simulation Results and Analysis
In this section, the wideband BP method, the DCS-SOMP method, and the DCS-SOMP-SVD method are compared in terms of computational effort, reconstruction precision of the OASPL, and robustness to the signal-to-noise ratio (SNR). Moreover, we investigate the effect of the frequency range and the iteration number on our method. The source maps for the wideband BP method are obtained by employing the convex optimization (CVX) toolbox, a package for specifying and solving convex programs [53], to solve Eq. (8). The value of \(\varepsilon\) in Eq. (8) is set roughly to the norm of the background noise, where the background noise is the difference between the measurement data with the AGWN and those without it.
Simulation Configuration
In the simulation, an optimized random planar microphone array with a circular aperture of 1.0 m is used to obtain the measurement data. The array, shown in Figure 2, consists of 60 microphone sensors. The source plane of interest is a 0.8×0.8 m^{2} rectangular zone, 0.8 m away from the array plane. A 14 s piece of music with additive Gaussian white noise (AGWN) is used as the source signal; its spectrum, shown in Figure 3, is a typical wideband spectrum.
Method Comparison
In this section, four monopoles with the same source pressure level are located at grid nodes, with source coordinates S_1 (−0.2, 0.2, 0.8) m, S_2 (−0.2, −0.2, 0.8) m, S_3 (0.2, 0.2, 0.8) m, and S_4 (0.2, −0.2, 0.8) m. The DCS-SOMP-SVD method is compared with the wideband BP method and the DCS-SOMP method in terms of computational effort and OASPL errors. The frequency range and SNR are set to 2500‒3000 Hz and 30 dB, respectively. The iterative termination conditions of both the DCS-SOMP method and the DCS-SOMP-SVD method are set to the target sparsity, which is consistent with the actual source sparsity. The source plane is discretized into 60×60 nodes.
Figure 4 shows the source maps obtained by the wideband BP method, the DCS-SOMP method, and the DCS-SOMP-SVD method. The squares in the source maps indicate the actual source positions, while the small colored circular dots indicate the reconstructed source positions. Figure 4(b) and (c) indicate that the DCS-SOMP and DCS-SOMP-SVD methods can obtain super-resolution source maps like the wideband BP method, and all sources are located exactly by the three methods. The CPU time of each method is obtained on a Core i7 multicore PC. The CPU time of the DCS-SOMP-SVD method is 2.463 s, slightly slower than the DCS-SOMP method but far less than the 126.5 s of the wideband BP method. Moreover, the DCS-SOMP-SVD method has a smaller OASPL error ratio for each source than the wideband BP method, as shown in Table 1. We conclude that the DCS-SOMP-SVD method is as efficient as the DCS-SOMP method and much more efficient than the wideband BP method.
Effect of SNR on Source Maps
In this section, we investigate the effect of the SNR on the source maps of the wideband BP method, the DCS-SOMP method, and the DCS-SOMP-SVD method. We choose 2500‒3000 Hz as the frequency range, which performs well in Section 4.2, and the same sources as in Section 4.2. The performance of the wideband BP method, the DCS-SOMP method, and our method is studied at SNRs of 0 dB, −10 dB, and −20 dB.
The source maps obtained by the wideband BP method, the DCS-SOMP method, and the DCS-SOMP-SVD method at different SNRs are shown in Figure 5, Figure 6, and Figure 7. When the SNR equals 0 dB, the DCS-SOMP-SVD method can locate all sources exactly, while the wideband BP method can only locate three sources, and the DCS-SOMP method has a one-grid error in locating the sources. Moreover, the maximum OASPL error of the DCS-SOMP-SVD method over all sources is 0.0041%, less than the 10.6171% of the wideband BP method and the 1.4726% of the DCS-SOMP method. As the SNR drops to −10 dB, the DCS-SOMP-SVD method can still locate all sources, while the DCS-SOMP method and the wideband BP method can only locate part of them. Besides, spurious sources away from the expected source positions appear on the source maps of the wideband BP method, and one of them has a larger OASPL than the reconstructed source located at the actual source position, as shown in Figure 6(a). The DCS-SOMP-SVD method has better OASPL reconstruction precision than the DCS-SOMP method and the wideband BP method. When the SNR equals −20 dB, the DCS-SOMP-SVD method can still locate all sources accurately, while the DCS-SOMP method and the wideband BP method fail to locate any sources. Compared with the wideband BP method and the DCS-SOMP method, our method has better robustness at low SNRs.
Effect of Frequency Range on Source Maps
In this section, we investigate the effect of the frequency range on the source maps. The wideband BP method requires a relatively large computational effort in Section 4.2, which limits its use over wider frequency ranges. Therefore, we focus on the effect of the frequency range on the DCS-SOMP method and the DCS-SOMP-SVD method. Here, the frequency ranges are chosen as 1000‒1500 Hz, 1000‒2000 Hz, and 1000‒3000 Hz, and the SNR is set to 0 dB.
Figure 8 shows the source maps obtained by the DCS-SOMP method and the DCS-SOMP-SVD method over different frequency ranges. When the frequency range is 1000‒1500 Hz, Figure 8(b) shows that there are only three minor errors on the source map of the DCS-SOMP-SVD method. However, Figure 8(a) shows that the DCS-SOMP method can locate only one source; there are errors of over ten grids in locating the other sources, so those sources are considered not located by the DCS-SOMP method. When the frequency range broadens to 1000‒2000 Hz, the position errors become smaller. All sources are located by the DCS-SOMP-SVD method, with a maximum error of merely one grid, as shown in Figure 8(d). Figure 8(c) shows that the DCS-SOMP method can also locate all sources, but with a maximum error of four grids. Furthermore, when the frequency range is 1000‒3000 Hz, Figure 8(f) shows that all sources are located correctly by the DCS-SOMP-SVD method. The DCS-SOMP method also achieves a satisfying result over this range: there is only one grid error in its map, shown in Figure 8(e). From the above comparison, the location results gradually improve as the frequency range broadens.
Table 2 shows the errors of the locating results of the DCS-SOMP method and the DCS-SOMP-SVD method over the different frequency ranges. The results obtained by the DCS-SOMP-SVD method are better than those of the DCS-SOMP method over the same frequency ranges.
To quantify the performance of the DCS-SOMP-SVD method after broadening the frequency range, we compute the maximum OASPL error over all sources for the 1000‒3000 Hz range, which is merely 0.0008%. We can conclude that the DCS-SOMP-SVD method obtains better performance by broadening the frequency range.
Effect of the Target Sparsity on Source Maps
In the above studies, the iterative termination conditions of both the DCS-SOMP method and the DCS-SOMP-SVD method are set to the target sparsity, consistent with the actual source sparsity. However, in real cases, the source sparsity is not known in advance, and we use an overestimated target sparsity, which is larger than the actual source sparsity. Therefore, it is necessary to study the effect of the target sparsity on the source maps of the DCS-SOMP and DCS-SOMP-SVD methods. In this section, their performance is investigated with a target sparsity of 10. The frequency range and SNR are set to 2500‒3000 Hz and 0 dB, respectively.
Figure 9 shows the source maps obtained by the DCS-SOMP method and the DCS-SOMP-SVD method with a target sparsity of 10. There are six extra sources on the map obtained by the DCS-SOMP method, shown in Figure 9(a), which hampers determining the locations of the actual sources. Figure 9(b) shows that all sources are located exactly by the DCS-SOMP-SVD method; besides, there are no extra sources on its map.
Experimental Results and Analysis
In order to verify the feasibility of the DCS-SOMP-SVD method in a real application, we conducted a gas leakage experiment for wideband acoustic imaging at Northwestern Polytechnical University.
Experiment Configurations
In the experiment, a planar microphone array composed of 24 microphone sensors, shown in Figure 10, was used to measure the audio data. Figure 11 shows the experimental configuration, taken by a camera installed in the center of the array.
The microphone array is 0.7 m away from the source plane. A 10 s recording was stored for each measurement with a sampling frequency of 44.1 kHz, giving a long time-domain sample sequence for each microphone. The sampled data were divided into \(50{{\% }}\) overlapping data blocks, where every data block contains 1024 sampling points. We performed the FFT after applying a Hanning window to each data block, thereby obtaining the measurement data of the microphone array in the frequency domain. The observation zone, which is \(0.5708 \times 0.4280{\text{ m}}^{2}\), is discretized into \(N = 61 \times 46\) grids.
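The block processing just described (1024-point blocks, 50 % overlap, Hanning window, FFT, band selection) can be sketched for a single microphone channel as follows. The signal here is synthetic (a tone near the band of interest plus noise) and all variable names are our own; real processing would stack one such matrix per microphone.

```python
import numpy as np

fs, block_len = 44100, 1024                  # sampling rate, block size
t = np.arange(fs * 2) / fs                   # 2 s of data for one channel
rng = np.random.default_rng(5)
x = np.sin(2 * np.pi * 3800.0 * t) + 0.1 * rng.standard_normal(t.size)

hop = block_len // 2                         # 50 % overlap between blocks
starts = np.arange(0, x.size - block_len + 1, hop)
window = np.hanning(block_len)               # taper to reduce leakage

# one FFT column per block: (block_len//2 + 1) frequency bins x B blocks
spectra = np.stack([np.fft.rfft(window * x[s:s + block_len]) for s in starts],
                   axis=1)
freqs = np.fft.rfftfreq(block_len, d=1 / fs)

# keep only the bins inside a band of interest, e.g. 3500-4000 Hz
band = (freqs >= 3500.0) & (freqs <= 4000.0)
y_band = spectra[band, :]                    # rows: in-band bins, cols: blocks
```

Each row of `y_band` is one frequency bin's snapshot vector, i.e., one row of the \(M\times B\) matrix \({\varvec{y}}({f}_{j})\) contributed by this microphone.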
The spectrum of the measured data is shown in Figure 12, which indicates that the source energy is concentrated mainly in the low and middle frequency ranges. Because of the long computation time of the wideband BP method, its experimental results are not shown. The experimental source maps of the wideband DCSSOMP method and the DCSSOMPSVD method were obtained for the frequency ranges 3500‒4000 Hz, 6500‒7000 Hz, and 11000‒11500 Hz, as shown in Figures 13, 14, and 15, respectively.
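Restricting the reconstruction to a frequency range such as 3500‒4000 Hz amounts to selecting the FFT bins whose centre frequencies fall inside that range. With the sampling parameters above (44.1 kHz, 1024-point blocks), this selection can be sketched as:

```python
import numpy as np

fs, nfft = 44100, 1024                       # sampling rate and FFT length
df = fs / nfft                               # bin spacing, about 43.07 Hz

def bins_in_range(f_lo, f_hi):
    """Indices of the one-sided FFT bins whose centre frequency
    lies in [f_lo, f_hi]."""
    freqs = np.fft.rfftfreq(nfft, d=1 / fs)
    return np.flatnonzero((freqs >= f_lo) & (freqs <= f_hi))

bins = bins_in_range(3500, 4000)             # bins 82..92, i.e., 11 bins
```

Each selected bin contributes one column to the joint measurement matrix that the distributed CS recovery shares a common support over.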
Source Maps Using Experiment Results of the Gas Leakage
Figure 13 shows the source maps obtained by the DCSSOMP method and the DCSSOMPSVD method when the frequency range is 3500‒4000 Hz. It can be seen from Figure 12 that there is a spectral peak near 4000 Hz, so the SNR in this range is relatively high. Accordingly, both the DCSSOMP method and the DCSSOMPSVD method can locate the two sources.
Figure 14 shows the source maps obtained by the DCSSOMP method and the DCSSOMPSVD method when the frequency range is 6500‒7000 Hz. As Figure 12 shows, the signal amplitude is further reduced in this range. In this case, the DCSSOMP method can only locate the left source and fails to locate the right one, while the DCSSOMPSVD method can still locate both sources.
Figure 15 shows the source maps obtained by the DCSSOMP method and the DCSSOMPSVD method when the frequency range is 11000‒11500 Hz. As the frequency increases, the signal amplitude gradually decreases. In this range, the DCSSOMP method can only locate the right source, whereas the DCSSOMPSVD method can locate both sources. We therefore conclude that the proposed method offers better localization performance than the DCSSOMP method across different frequency ranges.
Conclusions
In this paper, we have proposed a DCSSOMPSVD method for wideband acoustic imaging. The performance of the proposed method has been studied through both simulations and experiments. We have also compared the source maps obtained by the DCSSOMPSVD method with those obtained by the wideband BP method and the DCSSOMP method. The main conclusions are as follows.

(1)
The simulation results show that the DCSSOMPSVD method, as well as the wideband BP method and the DCSSOMP method, can locate all sources when the SNR equals 30 dB. The CPU time of the DCSSOMPSVD method is 2.463 s, slightly longer than that of the DCSSOMP method but far less than the 126.5 s required by the wideband BP method.

(2)
When the SNR decreases to −20 dB, the DCSSOMPSVD method can still locate all sources accurately, while the DCSSOMP method and the wideband BP method fail to locate any sources.

(3)
When the frequency range expands from 1000‒1500 Hz to 1000‒2000 Hz and then to 1000‒3000 Hz, the location results of the DCSSOMPSVD method gradually improve.

(4)
When the target sparsity equals 10, there are six extra sources on the map obtained by the DCSSOMP method, while no extra reconstructed sources appear on the maps of the DCSSOMPSVD method.

(5)
In the gas leak experiment, the results show that the DCSSOMPSVD method can locate both leak sources in the 3500‒4000 Hz, 6500‒7000 Hz, and 11000‒11500 Hz ranges, while the DCSSOMP method can locate both leak sources only in the high-SNR range.
Acknowledgements
Not applicable
Funding
Supported by National Natural Science Foundation of China (Grant Nos. 51675425, 52075441), Shaanxi Provincial Key Research Program Project of China (Grant No. 2020ZDLGY0609), Dongguan Municipal Social Science and Technology Development (Key) Project of China (Grant No. 20185071021600), and Science and Technology on Microsystem Laboratory Foundation of China (Grant No. 6142804200405).
Author information
Contributions
FN was in charge of the whole trial; ZL revised the manuscript; JS wrote the manuscript; FP, PH, and JW assisted with sampling and laboratory analyses. All authors read and approved the final manuscript.
Authors’ Information
Fangli Ning, born in 1974, is currently a professor and a PhD candidate supervisor at School of Mechanical Engineering, Northwestern Polytechnical University, China. His current research interests include nonlinear acoustics and aeroacoustics.
Zhe Liu, born in 1991, is currently a PhD candidate at School of Mechanical Engineering, Northwestern Polytechnical University, China.
Jiahao Song, born in 1996, is currently a master candidate at School of Mechanical Engineering, Northwestern Polytechnical University, China.
Feng Pan, born in 1994, is currently a master candidate at School of Mechanical Engineering, Northwestern Polytechnical University, China.
Pengcheng Han, born in 1993, is currently a master candidate at School of Mechanical Engineering, Northwestern Polytechnical University, China.
Juan Wei, born in 1973, is currently a professor at School of Communication Engineering, Xidian University, China. She received her PhD degree from Northwestern Polytechnical University, China, in 2002.
Ethics declarations
Competing Interests
The authors declare no competing financial interests.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Ning, F., Liu, Z., Song, J. et al. A Robust and Efficient Compressed Sensing Algorithm for Wideband Acoustic Imaging. Chin. J. Mech. Eng. 33, 95 (2020). https://doi.org/10.1186/s10033020005049
Keywords
 Wideband acoustic imaging
 Compressed sensing
 Singular value decomposition
 Microphone array
 Gas leakage