
A Robust and Efficient Compressed Sensing Algorithm for Wideband Acoustic Imaging

Abstract

Wideband acoustic imaging, which combines compressed sensing (CS) and microphone arrays, is widely used for locating acoustic sources. However, the locating results of this method are unstable, and its computational efficiency is low. In this work, to improve robustness and reduce computational cost, we propose a DCS-SOMP-SVD compressed sensing method, which combines distributed compressed sensing using simultaneous orthogonal matching pursuit (DCS-SOMP) with singular value decomposition (SVD). The performance of the DCS-SOMP-SVD method is studied through both simulation and experiment. In the simulation, the locating results of the DCS-SOMP-SVD method are compared with those of the wideband BP method and the DCS-SOMP method. In terms of computational efficiency, the proposed method is as efficient as the DCS-SOMP method and more efficient than the wideband BP method. In terms of locating accuracy, the proposed method can still locate all sources when the signal-to-noise ratio (SNR) is −20 dB, while the wideband BP method and the DCS-SOMP method can only locate all sources when the SNR is higher than 0 dB. The performance of the proposed method can be further improved by expanding the frequency range. Moreover, no spurious sources appear in the maps of the proposed method, even when the target sparsity is overestimated. Finally, a gas leak experiment is conducted to verify the feasibility of the DCS-SOMP-SVD method in a practical engineering environment. The experimental results show that the proposed method can locate both leak sources in different frequency ranges. This research thus provides a DCS-SOMP-SVD method with sufficient robustness and low computational cost for wideband acoustic imaging.

Introduction

Acoustic imaging, which uses a planar microphone array and beamforming methods [1‒5], is widely employed for locating acoustic sources. For wideband acoustic imaging, one approach is to operate in the time domain. The time-domain beamforming method has relatively high computational efficiency and applies well to non-stationary and strongly transient signals [6]. Zhao used a tapped delay line structure or finite impulse response (FIR) filter to achieve beamforming at different frequencies [7]. Wilkins presented a true time delay (TTD) beamformer bank in the beamspace that permits the directions of arrival of broadband sources to be estimated accurately, efficiently, and non-iteratively [8]. However, runtime delay quantization effects can demand high sampling rates and the processing of huge amounts of data [9]. Another way to deal with wideband acoustic imaging is to operate in the frequency domain. For a wideband signal, the fast Fourier transform (FFT) can divide the array outputs into many narrowband frequency bins, to each of which a beamforming method is applied, such as delay-and-sum (DAS) beamforming [10], the standard Capon beamforming (SCB) method [11], and the robust Capon beamforming (RCB) method [12]. However, when those methods are applied to wideband acoustic imaging, the mainlobe width varies as a function of frequency. Wang et al. [13] proposed the shaded robust Capon beamformer (SRCB) method, which obtains an approximately constant mainlobe width for wideband acoustic imaging. In Ref. [14], robust Capon beamforming with pre-steering locates acoustic emissions with significant accuracy and fewer ghosts than ordinary beamforming. Wideband acoustic imaging can also be performed by jointly processing different narrowband frequency bins. Kassis et al. [15] proposed the wideband zero-forcing MUSIC (ZF-MUSIC) method for locating aeroacoustic sources.
The wideband ZF-MUSIC criterion avoids maximizing the criterion at each frequency bin. The ZF-MUSIC method has a better ability than the MUSIC method to separate sources having different powers, but the width of the frequency band cannot be too large, lest the low frequencies degrade the resolution. Guo developed a robust nearfield wideband beamformer design approach based on adaptive-weighted convex optimization. The method employs adaptive array signal processing theory, adjusts weights flexibly, and improves beamforming performance [16]. He proposed a new direction of arrival (DOA) estimation method for wideband sources based on iterative adaptive spectral reconstruction, which can be applied to coherent sources and improves the accuracy of DOA estimation [17]. However, both methods increase computational complexity and time cost and cannot meet real-time requirements.

Donoho [18] and Candès, Romberg, and Tao [19‒22] proposed the theory of compressed sensing (CS). CS shows that a signal having a sparse representation can be recovered exactly from a small set of linear, nonadaptive measurements. A signal is sparse if most of its entries are zeros. In the acoustic imaging problem, the number of sources, which are usually assumed to be point sources, is much less than the number of grid nodes. Therefore, compressed sensing can readily be used for acoustic imaging. Chu et al. [23] applied the Bayesian CS method to nearfield wideband aeroacoustic imaging. The method is robust in poor signal-to-noise ratio (SNR) cases and can obtain a wide dynamic range, but it has higher computational cost than beamforming methods. Chaturvedi used CS to reconstruct the cross-correlation of wideband signals from the cross-correlation of sub-Nyquist samples to estimate the DOA [24], but cross-correlation provides inferior accuracy in experiments [25]. Boufounos et al. applied joint sparsity models together with the CoSaMP algorithm to wideband array processing. However, the CoSaMP algorithm is known to fail to provide satisfactory performance in source location and spectral estimation applications, especially in the presence of closely spaced sources [26].

In this paper, we propose a new CS method for wideband acoustic imaging, called the DCS-SOMP-SVD method, which combines the distributed compressed sensing using simultaneous orthogonal matching pursuit (DCS-SOMP) method with singular value decomposition (SVD). We study the performance of the DCS-SOMP-SVD method for wideband acoustic imaging by comparing it with the wideband basis pursuit (BP) method and the DCS-SOMP method.

This paper is organized as follows: Section 2 describes the observation model of acoustic signal propagation and the math model of the wideband acoustic imaging. Then our proposed method is presented in Section 3. Subsequently, the performance of the proposed method is compared with the other two methods by simulations in Section 4. The analysis of the DCS-SOMP-SVD method is also shown in this section. Section 5 provides a gas leakage experiment to verify the feasibility of the DCS-SOMP-SVD method in actual application. Finally, we conclude this paper in Section 6.

Observation Model and Math Model for Wideband Acoustic Imaging

Observation Model for Acoustic Imaging

Figure 1 illustrates the acoustic signal model propagating from the source plane \({z}_{0}\), which is \(h\) away from the planar microphone array. The microphone array consists of \(M\) sensors at known positions \({\varvec{\stackrel{-}P}}={\left[{{\varvec{\stackrel{-}P}}}_{1},\cdot \cdot \cdot ,{{\varvec{\stackrel{-}P}}}_{M}\right]}^{\mathrm{T}}\), where \({[\cdot ]}^{\mathrm{T}}\) denotes the transpose operator. The source plane \({z}_{0}\) is discretized into \(N=u\times u\) equidistant grids at known discrete positions \({\varvec{P}}=\left[{{\varvec{P}}}_{1},\cdot \cdot \cdot ,{{\varvec{P}}}_{{\varvec{N}}}\right]\). Let \({\varvec{Y}}\) be the vector of wavefield measurements at the \(M\) microphones of the array in the frequency domain, and let the unknown vector \({\varvec{X}}\) comprise the source strengths at all \(N\) grid nodes. With the help of the microphone array, we can obtain the pressure fields at the microphones and compute \({\varvec{Y}}={\left[{{\varvec{Y}}}_{1},\cdot \cdot \cdot ,{{\varvec{Y}}}_{M}\right]}^{\mathrm{T}}\) by applying the FFT. The \(n\)th element of \({\varvec{X}}\) equals zero if there is no source at the \(n\)th grid node; otherwise it is nonzero. The sources in our model are assumed to be uncorrelated monopoles in order to simplify the physical process and build up the acoustic propagation model explicitly [27].

Figure 1

Acoustic signal propagation model

The pressure field at the \(m\)-th microphone is given by:

$${Y}_{m}=\sum_{n=1}^{N}\frac{{X}_{n}{e}^{-jk{r}_{mn}}}{4\uppi {r}_{mn}}$$
(1)

where \({r}_{mn}=\Vert {\stackrel{-}{{\varvec{P}}}}_{m}-{{\varvec{P}}}_{n}\Vert\) denotes the distance between the mth microphone and the nth grid node, \(\omega =2\uppi f\) with \(f\) being the frequency, the wave number \(k=\omega /c\) with \(c\) being the sound speed, and \({X}_{n}\) is the amplitude of the nth grid node.

The model can be compactly expressed in matrix form:

\({\varvec{Y}}={\varvec{A}}{\varvec{X}},\)

$${\varvec{A}}=\frac{1}{4\uppi }\left(\begin{array}{ccc}\frac{{e}^{-jk{r}_{11}}}{{r}_{11}}& \cdots & \frac{{e}^{-jk{r}_{1N}}}{{r}_{1N}}\\ \vdots & \ddots & \vdots \\ \frac{{e}^{-jk{r}_{M1}}}{{r}_{M1}}& \cdots & \frac{{e}^{-jk{r}_{MN}}}{{r}_{MN}}\end{array}\right),{\varvec{X}}=\left[\begin{array}{c}{X}_{1}\\ \vdots \\ {X}_{N}\end{array}\right]$$
(2)

where \({\varvec{A}}\) is an \(M\times N\) matrix, defined as the measurement matrix.

In the practical engineering environment, there are often measurement errors and background noise. In this paper, we assume the errors and background noise to be additive Gaussian white noise (AGWN), which is independent and identically distributed and independent of the sources. Thus, a more realistic propagation model is:

$${\varvec{Y}}={\varvec{A}}{\varvec{X}}+{\varvec{e}}$$
(3)

where \({\varvec{e}}\) denotes background noise and errors.
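As a concrete illustration of Eqs. (2) and (3), the sketch below builds the monopole measurement matrix and simulates noisy array data. The array layout, grid size, source positions, and noise level are all illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

M, u, h = 60, 20, 0.8          # microphones, grid nodes per side, array-to-plane distance (m)
c, f = 343.0, 2500.0           # sound speed (m/s), analysis frequency (Hz)
k = 2 * np.pi * f / c          # wave number

# Hypothetical random array on the plane z = 0 and grid on the plane z = h
mics = np.column_stack([rng.uniform(-0.5, 0.5, (M, 2)), np.zeros(M)])
gx, gy = np.meshgrid(np.linspace(-0.4, 0.4, u), np.linspace(-0.4, 0.4, u))
grid = np.column_stack([gx.ravel(), gy.ravel(), np.full(u * u, h)])

# Measurement matrix A of Eq. (2): free-field monopole Green's functions
r = np.linalg.norm(mics[:, None, :] - grid[None, :, :], axis=2)   # r_mn, M x N
A = np.exp(-1j * k * r) / (4 * np.pi * r)

# Sparse source vector X and noisy measurements Y of Eq. (3)
X = np.zeros(u * u, dtype=complex)
X[[45, 210]] = 1.0             # two hypothetical point sources
e = 1e-6 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
Y = A @ X + e
print(A.shape, Y.shape)        # (60, 400) (60,)
```

With 60 microphones and 400 grid nodes the system is severely underdetermined, which is exactly why the sparsity-based recovery discussed next is needed.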

By solving the linear system in Eq. (2), we can recover the signal \({\varvec{X}}\in {\mathrm{C}}^{N}\). Mathematically speaking, to solve the linear system in Eq. (2) without any distortion, the number of measurements \(M\), i.e., the number of microphones in acoustic imaging, should be at least as large as the signal length \(N\). Otherwise, the system is severely underdetermined and has no unique solution.

Thanks to sparsity, one can perfectly recover \({\varvec{X}}\) by solving the following optimization problem:

$$\mathrm{min}{\Vert {\varvec{X}}\Vert }_{0},\mathrm{ s}.\mathrm{t}., {\varvec{A}}{\varvec{X}}={\varvec{Y}}$$
(4)

Unfortunately, the \({l}_{0}\)-minimization problem is NP-hard [28‒30] and thus computationally intractable. Candès and Tao [31, 32] proved that, under certain conditions, Eq. (4) is equivalent to the following \({l}_{1}\)-optimization problem:

$$\mathrm{min}{\Vert {\varvec{X}}\Vert }_{1},\mathrm{ s}.\mathrm{t}., {\varvec{A}}{\varvec{X}}={\varvec{Y}}$$
(5)

where \({\Vert {\varvec{X}}\Vert }_{1}=\sum_{i=1}^{N}\left|{X}_{i}\right|\).

As to Eq. (3) which takes noise into account, we can solve it by solving the following second order cone programming (SOCP):

$$\mathrm{min}{\Vert {\varvec{X}}\Vert }_{1},\mathrm{ s}.\mathrm{t}., {\Vert {\varvec{A}}{\varvec{X}}-{\varvec{Y}}\Vert }_{2}<\varepsilon$$
(6)

where \(\varepsilon\) is a specified tolerance for the noise \({\varvec{e}}\).

Eqs. (5) and (6) are the convex relaxations of their corresponding original NP-hard problems and can be solved by basis pursuit (BP) (also known as the \({l}_{1}\)-minimization method) in polynomial time [31, 33‒35]. The BP method has both merits and drawbacks: it provides theoretical performance guarantees, but its computational cost may be a limitation.

Apart from convex relaxation, several greedy methods, which iteratively compute the support set of the signal and approximate the sparse signal of Eq. (3) until a preset stopping condition is met [36‒42], are also widely used. Greedy methods have the advantages of easy implementation, fast convergence, and low complexity.
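To illustrate the greedy family, the following is a minimal OMP sketch (not the authors' implementation): it iteratively adds the best-matching column to the support and refits all selected coefficients by least squares. The demo system is a random complex matrix, purely illustrative.

```python
import numpy as np

def omp(A, y, K):
    """Minimal orthogonal matching pursuit: recover a K-sparse x from y = A x.

    Each iteration selects the column most correlated with the residual,
    then re-fits all selected coefficients by least squares.
    """
    M, N = A.shape
    support, r = [], y.copy()
    for _ in range(K):
        n = int(np.argmax(np.abs(A.conj().T @ r)))        # best-matching column
        support.append(n)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef                      # update residual
    x = np.zeros(N, dtype=A.dtype)
    x[support] = coef
    return x

# Tiny demo on a random complex system with a 2-sparse signal
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 100)) + 1j * rng.standard_normal((30, 100))
x_true = np.zeros(100, dtype=complex)
x_true[[7, 42]] = [2.0, -1.5]
x_hat = omp(A, A @ x_true, K=2)
print(np.flatnonzero(np.abs(x_hat) > 1e-8))               # recovered support indices
```

The least-squares refit keeps the residual orthogonal to all selected columns, so no column is ever chosen twice; this is the property the QR-based variants used later in the paper exploit for efficiency.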

Joint Sparsity Model for Wideband Acoustic Imaging

For narrowband acoustic imaging, we have developed CS algorithms based on the greedy algorithm [43‒45]. For a wideband signal, we choose its characteristic frequency band, with lower and upper bound frequencies \({f}_{min}\) and \({f}_{max}\). A simple method is to divide the chosen wideband into several narrowbands; then for each frequency \({f}_{j}\) we have:

$${\varvec{Y}}\left({f}_{j}\right)={\varvec{A}}\left({f}_{j}\right){\varvec{X}}\left({f}_{j}\right)+{\varvec{e}}\left({f}_{j}\right),j ={\mathrm{1,2}},\cdot \cdot\cdot ,J$$
(7)

where \(J\) is the number of frequencies within \([{f}_{min},{f}_{max}]\).

We can directly solve Eq. (7) by the CS algorithms, but the computational efficiency can be a problem as the bandwidth grows.

However, the positions of the sound sources do not change with frequency; in other words, the signals at different frequencies satisfy a simultaneous sparse approximation. We therefore combine the signal vectors at each frequency into a signal matrix and solve them jointly [46]. The problem can then be transformed into an optimization problem with the help of joint sparsity:

$${\mathrm{min}}{\Vert {{\varvec{X}}}^{(l_{2})}\Vert }_{1},{\mathrm{s}}.{\mathrm{t}}., {\Vert \stackrel{-}{{\varvec{A}}}\stackrel{-}{{\varvec{X}}}-\stackrel{-}{{\varvec{Y}}}\Vert }_{2}<\stackrel{-}{\varepsilon }$$
(8)

where \({{\varvec{X}}}^{({l}_{2})}={[{{\varvec{X}}}_{1}^{({l}_{2})},{{\varvec{X}}}_{2}^{({l}_{2})},\ldots ,{{\varvec{X}}}_{N}^{({l}_{2})} ]}^{\mathrm{T}}\) denotes the energy vector, whose \(n\)th entry is the \({l}_{2}\) norm of the source strengths at that node across all frequencies, i.e., \({{\varvec{X}}}_{n}^{({l}_{2})}={\Vert {{\varvec{X}}}_{n}\left({f}_{1}\right),{{\varvec{X}}}_{n}\left({f}_{2}\right),\ldots ,{X}_{n}({f}_{J})\Vert }_{2}\). \(\stackrel{-}{{\varvec{X}}}\) and \(\stackrel{-}{{\varvec{Y}}}\) are obtained by stacking the signal vectors \({\varvec{X}}({f}_{j})\) and the data vectors \({\varvec{Y}}({f}_{j})\), \(\stackrel{-}{{\varvec{A}}}\) is a block-diagonal matrix with the measurement matrices \({\varvec{A}}({f}_{j})\) as its blocks, and \(\stackrel{-}{\varepsilon }\) is a specified tolerance for the noise \(\stackrel{-}{{\varvec{e}}}\).
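The stacked quantities of Eq. (8) can be assembled as below. This is a minimal sketch with illustrative sizes; it only builds the block-diagonal system and the energy vector, and does not solve the joint-sparsity optimization itself.

```python
import numpy as np
from scipy.linalg import block_diag

# Assemble the stacked system of Eq. (8) from per-frequency models
# Y(f_j) = A(f_j) X(f_j); all sizes here are illustrative.
rng = np.random.default_rng(2)
M, N, J = 8, 25, 3                                  # mics, grid nodes, frequency bins

A_list = [rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)) for _ in range(J)]
X_list = [rng.standard_normal(N) + 1j * rng.standard_normal(N) for _ in range(J)]
Y_list = [A @ X for A, X in zip(A_list, X_list)]

A_bar = block_diag(*A_list)                         # JM x JN block-diagonal matrix
X_bar = np.concatenate(X_list)
Y_bar = np.concatenate(Y_list)
assert np.allclose(A_bar @ X_bar, Y_bar)            # stacked system is consistent

# Energy vector: l2 norm of each node's strengths across all J frequencies
X_mat = np.stack(X_list)                            # J x N
X_l2 = np.linalg.norm(X_mat, axis=0)
print(A_bar.shape, X_l2.shape)                      # (24, 75) (25,)
```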

The original problem has thus been reduced to an SOCP problem, so BP can be used. Furthermore, greedy algorithms can also be applied. Simultaneous orthogonal matching pursuit (SOMP) is a greedy pursuit algorithm proposed by Tropp et al. [47] on the basis of OMP; it computes provably good solutions to several simultaneous sparse approximation problems. Sarvotham et al. [48, 49] expanded the theory to distributed compressed sensing (DCS) and achieved joint recovery of multiple signals from incoherent projections through SOMP.

DCS-SOMP has the same advantages as OMP and higher computational efficiency than BP. However, we found that DCS-SOMP cannot locate sources at low SNR. We therefore propose a robust method combining DCS-SOMP and SVD, which can be used for wideband acoustic imaging even when the SNR is low.

DCS-SOMP-SVD Method for Wideband Acoustic Imaging

In this section, we present a practical approach for wideband acoustic imaging in low-SNR environments, which combines the DCS-SOMP algorithm and the SVD. The time-domain measurements of sound pressure received by the microphone array are divided into \(B\) blocks, where each block contains \(L\) data points and adjacent blocks have \(50\mathrm{ \%}\) overlap. We perform an \(L\)-point discrete Fourier transform (DFT) and keep the data between the lower and upper bound frequencies \({f}_{min}\) and \({f}_{max}\). From the \(B\) blocks of data, we obtain an \(M\times B\) data matrix \({\varvec{y}}({f}_{j})\) at each frequency \({f}_{j}\):

$${\varvec{y}}\left({f}_{j}\right)=[{{\varvec{Y}}}_{1j},\ldots ,{{\varvec{Y}}}_{Bj}]$$
(9)

Similarly, the source strength can also be arranged as an \(N\times B\) matrix \({\varvec{x}}\left({f}_{j}\right)\), which consists of the source strengths at each frequency \({f}_{j}\). We apply the SVD to \({\varvec{y}}\left({f}_{j}\right)\):

$${\varvec{y}}\left({{\varvec{f}}}_{{\varvec{j}}}\right)={\varvec{U}}\boldsymbol{\varLambda }{{\varvec{V}}}^{\mathrm{T}}$$
(10)

where \({\varvec{U}}\) is an \(M\times M\) unitary matrix, \(\boldsymbol{\varLambda }\) is an \(M\times B\) diagonal matrix, and \({{\varvec{V}}}^{\mathrm{T}}\) is a \(B\times B\) unitary matrix.

We define the reduced \(M\times K\) dimensional matrix \({{\varvec{y}}\left({f}_{j}\right)}_{SV}\), which retains most of the signal power, as \({{\varvec{y}}\left({f}_{j}\right)}_{SV}={\varvec{U}}\boldsymbol{\varLambda }{{\varvec{D}}}_{K}={\varvec{y}}\left({f}_{j}\right){\varvec{V}}{{\varvec{D}}}_{K}\), where \({{\varvec{D}}}_{K}={\left[\begin{array}{cc}{{\varvec{I}}}_{K}& \mathbf{0}\end{array}\right]}^{\mathrm{T}}\). Here \(K\) is the source sparsity, i.e., the actual number of sources, \({{\varvec{I}}}_{K}\) is a \(K\times K\) identity matrix, and \(\mathbf{0}\) is a \(K\times (B-K)\) zero matrix. In addition, we transform the signal matrix \({\varvec{x}}\left({f}_{j}\right)\) at each frequency \({f}_{j}\) as \({{\varvec{x}}\left({f}_{j}\right)}_{SV}={\varvec{x}}\left({f}_{j}\right){\varvec{V}}{{\varvec{D}}}_{K}\), and let \({{\varvec{e}}\left({f}_{j}\right)}_{SV}={\varvec{e}}\left({f}_{j}\right){\varvec{V}}{{\varvec{D}}}_{K}\), to obtain the system:

$${{\varvec{y}}\left({f}_{j}\right)}_{SV}={\varvec{A}}{{\varvec{x}}\left({f}_{j}\right)}_{SV}+ {{\varvec{e}}\left({f}_{j}\right)}_{SV}$$
(11)

After SVD, the signal subspace is reserved, and the noise subspace is abandoned. Then we can apply the DCS-SOMP algorithm to solve the problem in Eq. (11). Same as the OMP algorithm, the DCS-SOMP algorithm also has two ways to terminate iterations. We have introduced the difference between the two ways in our previous work [45]. Unfortunately, the source sparsity \(K\) is unknown a priori in many cases. In this work, we use the target sparsity \({K}_{T}\), which is larger than \(K\), as the stopping condition, same as the approach of the OMP-SVD [45, 50].
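The subspace reduction of Eqs. (10) and (11) amounts to keeping the \(K\) leading singular directions of the data matrix. A minimal sketch follows, with illustrative sizes and a synthetic rank-\(K\) matrix standing in for the measured blocks; it also checks the equivalent form \({\varvec{y}}{\varvec{V}}{{\varvec{D}}}_{K}\), which follows from \({\varvec{y}}={\varvec{U}}\boldsymbol{\varLambda }{{\varvec{V}}}^{\mathrm{T}}\).

```python
import numpy as np

# SVD-based reduction of an M x B data matrix to its K leading singular
# directions; sizes and data are illustrative.
rng = np.random.default_rng(3)
M, B, K = 10, 40, 2

# Rank-K "signal" plus weak noise, standing in for B snapshots at one bin
signal = rng.standard_normal((M, K)) @ rng.standard_normal((K, B))
y = signal + 0.01 * rng.standard_normal((M, B))

U, s, Vh = np.linalg.svd(y, full_matrices=False)
y_SV = U[:, :K] * s[:K]                    # equals y @ V @ D_K, i.e. U Lambda D_K

assert np.allclose(y_SV, y @ Vh[:K].conj().T)
# The K-column reduction retains almost all of the signal power
energy_kept = np.sum(s[:K] ** 2) / np.sum(s ** 2)
print(y_SV.shape, float(energy_kept) > 0.99)
```

Discarding the trailing \(B-K\) singular directions is what removes most of the noise subspace while keeping the signal subspace, which is the source of the method's low-SNR robustness.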

Donoho et al. [51] gave a phase diagram to depict the performance of CS. The diagram shows that high-accuracy reconstruction can be obtained for small \(\rho (=K/M)\) and large \(\delta (=M/N)\), while for large \(\rho\) and small \(\delta\) the reconstruction fails. The phase transition analysis [52] shows that the maximum source sparsity \(K\) that can be accurately reconstructed is given by the empirical formula \(M\approx 2K\mathrm{log}(N)\) [42]. To obtain high-accuracy reconstruction (small \(\rho\) and large \(\delta\)) with the DCS-SOMP-SVD, the maximum possible number of sources \(K\) is bounded by

$$K\le \frac{M}{2\mathrm{log}(N)}$$
(12)
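With the simulation configuration of Section 4 (\(M = 60\) microphones, \(N = 60\times 60 = 3600\) grid nodes), Eq. (12) can be evaluated directly. The base of the logarithm is not stated in the text, so both common choices are shown:

```python
import math

M, N = 60, 60 * 60                     # microphones, grid nodes (Section 4)

K_max_ln = M / (2 * math.log(N))       # natural logarithm
K_max_log10 = M / (2 * math.log10(N))  # base-10 logarithm

print(math.floor(K_max_ln), math.floor(K_max_log10))   # 3 8
```

The base-10 reading admits up to eight sources, which is consistent with the four sources located in the simulations; the natural-log reading gives a stricter bound of three.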

By applying DCS-SOMP, we obtain an approximate solution \({\varvec{X}}\left({f}_{j}\right)\) at each frequency \({f}_{j}\) and transform \({\varvec{X}}\left({f}_{j}\right)\) into the matrix \({{\varvec{X}}\left({f}_{j}\right)}_{SV}\). Then we can obtain the strengths and locations of the sources by averaging the source strengths over all columns of the signal subspace \({{\varvec{X}}\left({f}_{j}\right)}_{SV}\). The (discrete) source strength \({{\varvec{X}}}_{j}^{*}\) is obtained by

$${{\varvec{X}}}_{j}^{*}=\frac{1}{K}\sum_{k=1}^{K}{\mathrm{diag}}\left[{{\varvec{R}}}_{{X\left({f}_{j}\right)}_{SV}(k)}\right]$$
(13)

where \({{\varvec{R}}}_{{X\left({f}_{j}\right)}_{SV}(k)}=E\left[{{\varvec{X}}\left({f}_{j}\right)}_{SV}(k){{{\varvec{X}}\left({f}_{j}\right)}_{SV}(k)}^{H}\right]\) [27] denotes the source power covariance matrix and \({{\varvec{X}}\left({f}_{j}\right)}_{SV}(k)\) denotes the \(k\)th column of the matrix \({{\varvec{X}}\left({f}_{j}\right)}_{SV}\).

As the locations of the unknown sources are assumed to be the same at all frequencies, we can depict the maps with the location and the overall sound pressure level (OASPL). The OASPL is obtained from the following equation:

$$OASPL_{n}=10{\mathrm{log}}_{10}\left({10}^{\frac{{x}_{1n}^{*}}{10}}+{10}^{\frac{{x}_{2n}^{*}}{10}}+\cdots +{10}^{\frac{{x}_{Jn}^{*}}{10}}\right)$$
(14)

where \({x}_{jn}^{*}\) is the power of the nth node for the jth frequency within the frequency range.
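Eq. (14) is the standard decibel power sum. A minimal sketch (the helper name `oaspl` is ours, not the paper's):

```python
import math

def oaspl(levels_db):
    """Overall sound pressure level (Eq. (14)): power-sum the per-frequency
    levels x*_jn (in dB) at one grid node."""
    return 10.0 * math.log10(sum(10.0 ** (x / 10.0) for x in levels_db))

# Two equal 80 dB components combine to +3 dB, a standard sanity check
print(round(oaspl([80.0, 80.0]), 2))   # 83.01
```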

Now we are ready to present our DCS-SOMP-SVD method for wideband acoustic imaging. The sequence of steps is as follows [48, 49].

Step 1: Obtain the measurement of sound pressure in time-domain from the microphone array.

Step 2: Give the lower and upper frequency bound \({f}_{min}\) and \({f}_{max}\). For each frequency \(f\in [{f}_{min}\), \({f}_{max}]\), construct the measurement matrix \({\varvec{A}}\) according to the frequency \(f\), the node position, and the distance \({z}_{0}\) with Eq. (2).

Step 3: Divide the measurements of sound pressure into \(B\) blocks, where each block contains \(L\) data points and adjacent blocks have \(50\mathrm{ \%}\) overlap.

Step 4: Obtain the \(M\times B\) data matrix \({\varvec{y}}({f}_{j})\) at each frequency \({f}_{j}\) by performing an \(L\)-point DFT on each block of data from step 3.

Step 5: Compute the source sparsity \(K\) with Eq. (12) as the maximum number of sources that can be located. On this basis, use the target sparsity \({K}_{T}\) as the stopping condition.

Step 6: Discretize the source plane, which is \({z}_{0}\) away from the array plane, with \(u\times v=N\) nodes.

Step 7: Perform SVD for the data matrix \({\varvec{y}}({f}_{j})\) with Eq. (10).

Step 8: Transform the data matrix as \({{\varvec{y}}({f}_{j})}_{SV}={\varvec{U}}\boldsymbol{\varLambda }{{\varvec{D}}}_{K}={\varvec{y}}\left({f}_{j}\right){\varvec{V}}{{\varvec{D}}}_{K}\).

Step 9: Repeat steps 3 to 8 for the signal data and the error data to obtain \({{\varvec{x}}({f}_{j})}_{SV}\) and \({{\varvec{e}}({f}_{j})}_{SV}\), respectively.

Step 10: Construct the system model with Eq. (11).

Step 11: Set the iteration counter \(l=1\). For each signal index \(j\in \{1,\cdot \cdot \cdot ,J\}\), initialize the orthogonalized coefficient vectors \({\widehat{\beta }}_{j}=0\), also initialize the set of selected indices \(\widehat{\boldsymbol{\varOmega }}=\boldsymbol{\varnothing }\). Let \({{\varvec{r}}}_{j,l}\) denote the residual of the measurement \({\varvec{y}}({f}_{j})\) remaining after the first \(l\) iterations, and initialize \({{\varvec{r}}}_{j,0}={\varvec{y}}\left({f}_{j}\right).\)

Step 12: Select the dictionary vector that maximizes the value of the sum of the magnitudes of the projections of the residual at each narrow band, and add its index to the set of selected indices:

$${n}_{l}=\underset{n=\mathrm{1,2},\ldots ,N}{\mathit{argmax}}\sum_{j=1}^{J}\frac{\left|\langle {{\varvec{r}}}_{j,l-1},{{\varvec{A}}}_{j,n}\rangle \right|}{{\Vert {{\varvec{A}}}_{j,n}\Vert }_{2}}$$
(15)
$$\widehat{\boldsymbol{\varOmega }}=\left[\begin{array}{cc}\widehat{\varOmega }& {n}_{l}\end{array}\right]$$
(16)

where \({{\varvec{A}}}_{j,n}\) is the nth column of the measurement matrix \({\varvec{A}}\left({f}_{j}\right)\).

Step 13: Operate Schmidt regularization and orthogonalize the selected basis vector against the orthogonalized set of previously selected dictionary vectors:

$${{\varvec{\gamma}}}_{j,l}={{\varvec{A}}}_{j,{n}_{l}}-\sum_{t=0}^{l-1}\frac{\langle {{\varvec{A}}}_{j,{n}_{l}},{{\varvec{\gamma}}}_{j,t}\rangle }{{\Vert {{\varvec{\gamma}}}_{j,t}\Vert }_{2}^{2}}{{\varvec{\gamma}}}_{j,t}$$
(17)

where \({{\varvec{\gamma}}}_{j,l}\) is the orthogonalization result of the selected column \({{\varvec{A}}}_{j,{n}_{l}}\); it equals the product of the amplitude of \({{\varvec{A}}}_{j,{n}_{l}}\) and \({\varvec{Q}}\) after \(QR\) factorization.

Step 14: Update the estimate of the coefficients \({\widehat{\beta }}_{j}\) for the selected vector and residuals \({{\varvec{r}}}_{j,l}\):

$${\widehat{\beta }}_{j}(l)=\frac{\langle {{\varvec{r}}}_{j,l-1},{{\varvec{\gamma}}}_{j,l}\rangle }{{\Vert {{\varvec{\gamma}}}_{j,l}\Vert }_{2}^{2}}$$
(18)
$${{\varvec{r}}}_{j,l}={{\varvec{r}}}_{j,l-1}-\frac{\langle {{\varvec{r}}}_{j,l-1},{{\varvec{\gamma}}}_{j,l}\rangle }{{\Vert {{\varvec{\gamma}}}_{j,l}\Vert }_{2}^{2}}{{\varvec{\gamma}}}_{j,l}$$
(19)

where \({\widehat{\beta }}_{j}={\widehat{{\varvec{R}}}}_{j,l}{\stackrel{\sim }{{\varvec{X}}}}_{j,l}\), with \({\stackrel{\sim }{{\varvec{X}}}}_{j,l}\) being the least-squares solution of the linear system \({{\varvec{r}}}_{j,l-1}={{\varvec{A}}}_{j,{n}_{l}}{{\varvec{X}}}_{j,l}\) and \({\widehat{{\varvec{R}}}}_{j,l}\) being the quotient of \({\varvec{R}}\) from the \(QR\) factorization of the selected column \({{\varvec{A}}}_{j,{n}_{l}}\) and the amplitude of \({{\varvec{A}}}_{j,{n}_{l}}\).

Step 15: \(l=l+1\). Return to step 12 until \(l={K}_{T}\).

Step 16: Apply QR factorization to the mutilated basis \({{\varvec{A}}}_{j,\widehat{\Omega }}={{\varvec{Q}}}_{j}{{\varvec{R}}}_{j}={\boldsymbol{\varGamma }}_{j}{{\varvec{R}}}_{j}\). Since in each narrowband \({\varvec{y}}\left({f}_{j}\right)={{\varvec{A}}}_{j,\widehat{\Omega }}{{\varvec{X}}}_{j,\widehat{\Omega }}={\boldsymbol{\varGamma }}_{j}{{\varvec{R}}}_{j}{{\varvec{X}}}_{j,\widehat{\Omega }}\), where \({{\varvec{X}}}_{j,\widehat{\Omega }}\) is the mutilated coefficient vector, we can compute the signal estimates \(\left\{{\stackrel{\sim }{{\varvec{X}}}}_{j}\right\}\) as:

$${\stackrel{\sim }{{\varvec{X}}}}_{j}={{\varvec{R}}}_{j}^{-1}{\widehat{\beta }}_{j}$$
(20)

Step 17: End iteration and let \(\varvec{X}\left( {f_{j} } \right) = \varvec{\tilde{X}}_{j}\).

Step 18: Reshape \({\varvec{X}}\left( {f_{j} } \right)\) as the \(N \times K\) matrix \({\varvec{X}}\left( {f_{j} } \right)_{SV}\).

Step 19: Calculate the (discrete) source strengths \({\varvec{X}}_{j}^{*} \left( {j = 1,2,\ldots,J} \right)\) via Eq. (13). Then obtain the OASPL by Eq. (14).

Step 20: Find the source positions using the indices of the nonzero elements in OASPL and the observation model in Section 2.
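The iterative core of steps 11‒17 can be sketched as follows. For brevity, this sketch refits all selected coefficients by least squares at each iteration instead of maintaining the Gram‒Schmidt/QR recursion of steps 13‒16; in exact arithmetic both yield the same estimates. All sizes, signals, and the demo support are illustrative assumptions.

```python
import numpy as np

def dcs_somp(A_list, y_list, K_T):
    """Simplified DCS-SOMP (steps 11-17): jointly select K_T grid indices
    shared by all J frequency systems y_j = A_j x_j, then refit per frequency.
    """
    J, N = len(A_list), A_list[0].shape[1]
    support = []
    residuals = [y.copy() for y in y_list]
    for _ in range(K_T):
        # Step 12 (Eq. (15)): index maximizing the summed normalized projections
        score = sum(
            np.abs(A.conj().T @ r) / np.linalg.norm(A, axis=0)
            for A, r in zip(A_list, residuals)
        )
        support.append(int(np.argmax(score)))
        # Steps 13-14 (here: least-squares refit), then residual update
        for j in range(J):
            coef, *_ = np.linalg.lstsq(A_list[j][:, support], y_list[j], rcond=None)
            residuals[j] = y_list[j] - A_list[j][:, support] @ coef
    # Steps 16-17: final per-frequency estimates on the joint support
    X = np.zeros((J, N), dtype=complex)
    for j in range(J):
        coef, *_ = np.linalg.lstsq(A_list[j][:, support], y_list[j], rcond=None)
        X[j, support] = coef
    return X, sorted(support)

# Demo: J = 3 frequencies sharing the same 2-sparse support
rng = np.random.default_rng(4)
M, N, J = 30, 100, 3
A_list = [rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)) for _ in range(J)]
amps = [[1.0, 0.8], [1.2, -0.5], [0.9, 1.1]]
y_list = [A[:, [10, 55]] @ np.array(a, dtype=complex) for A, a in zip(A_list, amps)]
X_hat, supp = dcs_somp(A_list, y_list, K_T=2)
print(supp)   # jointly recovered support
```

Summing the normalized projections across frequency bins is what lets a source that is weak in one bin still be selected, which is the joint-sparsity advantage over running OMP per bin.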

Simulation Results and Analysis

In this section, the wideband BP method, the DCS-SOMP method, and the DCS-SOMP-SVD method are compared in terms of computational effort, reconstruction precision of the OASPL, and robustness to the signal-to-noise ratio (SNR). Moreover, we investigate the effect of the frequency range and the iteration number on our method. The source maps for the wideband BP method are obtained by employing the convex optimization (CVX) toolbox, a package for specifying and solving convex programs [53], to solve Eq. (8). The value of \(\varepsilon\) in Eq. (8) is set roughly to the norm of the background noise, where the background noise is the difference between the measurement data with and without the AGWN.

Simulation Configuration

In the simulation, an optimized random planar microphone array with a circular aperture of 1.0 m is used to obtain the measurement data. The array, shown in Figure 2, consists of 60 microphone sensors. The source plane of interest is a 0.8 m × 0.8 m rectangular zone 0.8 m away from the array plane. A 14 s piece of music with additive Gaussian white noise (AGWN) added is used as the source signal; its spectrum, shown in Figure 3, is a typical wideband spectrum.

Figure 2

An optimized microphone array with 60 elements

Figure 3

Spectrum of source signal

Method Comparison

In this section, four monopoles with the same source pressure level are located at grid nodes, with source coordinates S_1 (−0.2, 0.2, 0.8) m, S_2 (−0.2, −0.2, 0.8) m, S_3 (0.2, 0.2, 0.8) m, and S_4 (0.2, −0.2, 0.8) m. The DCS-SOMP-SVD method is compared with the wideband BP method and the DCS-SOMP method in terms of computational effort and OASPL errors. The frequency range and SNR are set to 2500‒3000 Hz and 30 dB, respectively. The iteration termination conditions of both the DCS-SOMP method and the DCS-SOMP-SVD method are set to the target sparsity, which is consistent with the actual source sparsity. The source plane is discretized into 60×60 nodes.

Figure 4 shows the source maps obtained by the wideband BP method, the DCS-SOMP method, and the DCS-SOMP-SVD method. The squares in the source maps indicate the actual source positions, while the small colored circular dots indicate the reconstructed source positions. Figure 4(b) and (c) indicate that the DCS-SOMP and DCS-SOMP-SVD methods obtain super-resolution source maps like the wideband BP method, and all sources are located exactly by the three methods. The CPU time of the methods is obtained on a Core i7 multi-core PC. The CPU time of the DCS-SOMP-SVD method is 2.463 s, slightly slower than the DCS-SOMP method but far less than the 126.5 s of the wideband BP method. Moreover, the DCS-SOMP-SVD method has a smaller OASPL error ratio for each source than the wideband BP method, as shown in Table 1. We conclude that the DCS-SOMP-SVD method is as efficient as the DCS-SOMP method and much more efficient than the wideband BP method.

Figure 4

Source maps and the CPU-time resulting from the simulations with: a Wideband BP method (126.5 s), b DCS-SOMP method (0.9503 s), and c DCS-SOMP-SVD method (2.463 s) as the SNR equals 30 dB

Table 1 OASPL error ratio among each source (%)

Effect of SNR on Source Maps

In this section, we investigate the effect of the SNR on the source maps of the wideband BP method, the DCS-SOMP method, and the DCS-SOMP-SVD method. We choose 2500‒3000 Hz as the frequency range, which performs well in Section 4.2, and the same sources as in Section 4.2. The performances of the wideband BP method, the DCS-SOMP method, and our method are studied at SNRs of 0 dB, −10 dB, and −20 dB.

The source maps obtained by the wideband BP method, the DCS-SOMP method, and the DCS-SOMP-SVD method at different SNRs are shown in Figures 5, 6, and 7. When the SNR equals 0 dB, the DCS-SOMP-SVD method locates all sources exactly, while the wideband BP method can only locate three sources and the DCS-SOMP method has a one-grid error in locating sources. Besides, the maximum OASPL error of the DCS-SOMP-SVD method among all sources (0.0041%) is smaller than that of the wideband BP method (10.6171%) and that of the DCS-SOMP method (1.4726%). As the SNR drops to −10 dB, the DCS-SOMP-SVD method can still locate all sources, while the DCS-SOMP method and the wideband BP method can only locate part of them. Moreover, some reconstructed sources appear outside the expected source positions on the source maps of the wideband BP method, and one of them has a larger OASPL than the reconstructed source located at the actual source position, as shown in Figure 6(a). The DCS-SOMP-SVD method has better OASPL reconstruction precision than the DCS-SOMP method and the wideband BP method. When the SNR equals −20 dB, the DCS-SOMP-SVD method still locates all sources accurately, while the DCS-SOMP method and the wideband BP method fail to locate any sources. Compared with the wideband BP method and the DCS-SOMP method, our method is more robust at low SNRs.

Figure 5

Source maps and the CPU-time obtained by: a Wideband BP method (134.4 s), b DCS-SOMP method (1.256 s), c DCS-SOMP-SVD method (2.759 s) when the SNR equals 0 dB

Figure 6

Source maps and the CPU-time obtained by: a Wideband BP method (126.3 s), b DCS-SOMP method (1.342 s), c DCS-SOMP-SVD method (2.473 s) when the SNR equals − 10 dB

Figure 7

Source maps and the CPU-time obtained by: a Wideband BP method (129.8 s), b DCS-SOMP method (1.432 s), c DCS-SOMP-SVD method (3.242 s) when the SNR equals − 20 dB

Effect of Frequency Range on Sources Maps

In this section, we investigate the effect of the frequency range on the source maps. The wideband BP method requires a relatively large computational effort in Section 4.2, which limits its use over a wider frequency range. Therefore, we focus on the effect of the frequency range on the DCS-SOMP method and the DCS-SOMP-SVD method. Here, the frequency ranges are chosen as 1000‒1500 Hz, 1000‒2000 Hz, and 1000‒3000 Hz, and the SNR is set to 0 dB.

Figure 8 shows the source maps obtained by the DCS-SOMP method and the DCS-SOMP-SVD method for different frequency ranges. When the frequency range is 1000‒1500 Hz, Figure 8(b) shows only three minor errors on the source map of the DCS-SOMP-SVD method. In contrast, Figure 8(a) shows that the DCS-SOMP method can locate only one source; the position errors for the other sources exceed ten grids, so these sources are considered not located by the DCS-SOMP method. When the frequency range broadens to 1000‒2000 Hz, the position errors become smaller. All sources are located by the DCS-SOMP-SVD method, with a max error of merely one grid, as shown in Figure 8(d). Figure 8(c) shows that the DCS-SOMP method can also locate all sources, although its max error is four grids. Furthermore, when the frequency range is 1000‒3000 Hz, Figure 8(f) shows that all sources are located correctly by the DCS-SOMP-SVD method. The DCS-SOMP method also gives a satisfying result in this range, with only a one-grid error in the map shown in Figure 8(e). From the above comparison, the location results gradually improve as the frequency range broadens.

Figure 8

Source maps obtained by the DCS-SOMP method and the DCS-SOMP-SVD method as the frequency ranges equal 1000‒1500 Hz, 1000‒2000 Hz and 1000‒3000 Hz: a, c, e show the maps obtained by the DCS-SOMP method; b, d, f show the maps obtained by the DCS-SOMP-SVD method

Table 2 lists the locating errors of the DCS-SOMP method and the DCS-SOMP-SVD method for different frequency ranges. The results obtained by the DCS-SOMP-SVD method are better than those of the DCS-SOMP method over the same frequency ranges.

Table 2 The max locating error among all sources in different frequency ranges (grids)

To quantify the performance of the DCS-SOMP-SVD method after broadening the frequency range, the max OASPL error among all sources is computed for the 1000‒3000 Hz range and is merely 0.0008%. We can conclude that the DCS-SOMP-SVD method achieves better performance as the frequency range broadens.

Effect of the Target Sparsity on Source Maps

In the above study, the iterative termination condition of both the DCS-SOMP method and the DCS-SOMP-SVD method is the target sparsity, which was set equal to the actual source sparsity. In real cases, however, the source sparsity is not known in advance, so an overestimated target sparsity, larger than the actual source sparsity, must be used. It is therefore necessary to study the effect of the target sparsity on the source maps of the DCS-SOMP and DCS-SOMP-SVD methods. In this section, the performance of the two methods is investigated with the target sparsity set to 10. The frequency range and SNR are set to 2500‒3000 Hz and 0 dB, respectively.

Figure 9 shows the source maps obtained by the DCS-SOMP method and the DCS-SOMP-SVD method when the target sparsity equals 10. In this case, there are six extra sources on the map obtained by the DCS-SOMP method, shown in Figure 9(a), which hampers determining the locations of the actual sources. Figure 9(b) shows that all sources are located exactly by the DCS-SOMP-SVD method, with no extra sources on the map.
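The target sparsity acts as the greedy stopping rule: the pursuit selects one atom per iteration and halts after K selections, so an overestimated K forces extra atoms into the support. A minimal single-dictionary SOMP sketch with that stopping rule (not the authors' full DCS-SOMP-SVD implementation; all names and sizes are illustrative):

```python
import numpy as np

def somp(A, Y, k):
    """Simultaneous OMP: recover the common support of the columns of
    Y = A @ X, stopping after k atoms (the target sparsity).
    A: (m, n) dictionary; Y: (m, L) multi-channel measurements."""
    support, R = [], Y.copy()
    for _ in range(k):
        # Atom whose correlations, summed over all channels, are largest
        corr = np.abs(A.conj().T @ R).sum(axis=1)
        corr[support] = 0.0                 # never pick an atom twice
        support.append(int(np.argmax(corr)))
        # Least-squares fit on the current support, then update residual
        X_s, *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
        R = Y - A[:, support] @ X_s
    return sorted(support)

rng = np.random.default_rng(0)
A = rng.standard_normal((32, 100))
X = np.zeros((100, 4))
X[[7, 42, 90], :] = rng.standard_normal((3, 4))   # jointly 3-sparse
Y = A @ X
print(somp(A, Y, 3))    # recovered common support
```

Calling `somp(A, Y, 5)` here still returns five atoms, two of them spurious, which mirrors the extra-source behavior of the plain DCS-SOMP method discussed above.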

Figure 9

Source maps obtained by: a DCS-SOMP method, and b DCS-SOMP-SVD method as the target sparsity equals 10

Experimental Results and Analysis

In order to verify the feasibility of the DCS-SOMP-SVD method in a practical application, we conducted a gas leakage experiment for wideband acoustic imaging at Northwestern Polytechnical University.

Experiment Configurations

In the experiment, a planar array of 24 microphones, shown in Figure 10, was used to record the audio data. Figure 11 shows the experimental configuration, photographed by a camera installed at the center of the array.

Figure 10

A 24-channel microphone array

Figure 11

Gas leakage experiment configuration

The microphone array is 0.7 m away from the source plane. A 10 s recording was stored for each measurement with a sampling frequency of 44.1 kHz, yielding a long time-domain sample sequence for each microphone. The samples were divided into \(50{{\% }}\) overlapping data blocks, each containing 1024 sampling points. We applied a Hanning window to each data block and then performed the FFT, obtaining the frequency-domain measurement data of the microphone array. The observation zone, which is \(0.5708 \times 0.4280{\text{ m}}^{2}\), is discretized into \(N = 61 \times 46\) grids.
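The preprocessing described above can be sketched as follows, with a single synthetic channel standing in for a measured microphone signal (the 4 kHz tone is only an illustration):

```python
import numpy as np

# 50 % overlapping blocks of 1024 samples, a Hanning window, then an
# FFT: one frequency-domain snapshot per block and channel.
fs, block, hop = 44100, 1024, 512          # 50 % overlap -> hop = 512
t = np.arange(int(0.5 * fs)) / fs          # 0.5 s of synthetic data
x = np.sin(2 * np.pi * 4000 * t)           # stand-in for one microphone

window = np.hanning(block)
starts = range(0, len(x) - block + 1, hop)
snapshots = np.array([np.fft.rfft(window * x[s:s + block]) for s in starts])

print(snapshots.shape)                     # (number of blocks, block // 2 + 1)
```

Repeating this for every microphone gives the frequency-domain array data from which the narrowband measurement vectors are drawn.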

The spectrum of the measurement data is shown in Figure 12, where we can see that the source energy is concentrated mainly in the low and middle frequency ranges. Due to the long computation time of the wideband BP method, its experimental results are not shown. The experimental source maps of the DCS-SOMP method and the DCS-SOMP-SVD method were obtained for the frequency ranges 3500‒4000 Hz, 6500‒7000 Hz, and 11000‒11500 Hz, as shown in Figure 13, Figure 14, and Figure 15.

Figure 12

Power spectrum of the gas leakage experiment

Source Maps from the Gas Leakage Experiment

Figure 13 shows the source maps obtained by the DCS-SOMP method and the DCS-SOMP-SVD method when the frequency range equals 3500‒4000 Hz. It can be seen from Figure 12 that there is a peak near 4000 Hz, so the SNR in this range is relatively high. Here, both the DCS-SOMP method and the DCS-SOMP-SVD method can locate the two sources.

Figure 14 shows the source maps obtained by the DCS-SOMP method and the DCS-SOMP-SVD method when the frequency range equals 6500‒7000 Hz. In this range, the amplitude of the signal is further reduced, as can be seen in Figure 12. In this case, the DCS-SOMP method can locate only the left source and fails to locate the right one, while the DCS-SOMP-SVD method can still locate both sources.

Figure 15 shows the source maps obtained by the DCS-SOMP method and the DCS-SOMP-SVD method when the frequency range equals 11000‒11500 Hz. As the frequency increases, the amplitude of the signal gradually decreases. In this range, the DCS-SOMP method can locate only the right source, while the DCS-SOMP-SVD method can locate both sources. We therefore conclude that the proposed method has better localization performance than the DCS-SOMP method across different frequency ranges.

Figure 13

Source maps for the gas leakage experiment are obtained by: a DCS-SOMP method, and b DCS-SOMP-SVD method when the frequency range equals 3500‒4000 Hz

Figure 14

Source maps for the gas leakage experiment are obtained by: a DCS-SOMP method, and b DCS-SOMP-SVD method when the frequency range equals 6500‒7000 Hz

Figure 15

Source maps for the gas leakage experiment are obtained by: a DCS-SOMP method, and b DCS-SOMP-SVD method when the frequency range equals 11000‒11500 Hz

Conclusions

In this paper, we have proposed a DCS-SOMP-SVD method for wideband acoustic imaging. The performance of the proposed method has been studied through both simulations and experiments. We have also compared the source maps obtained by the DCS-SOMP-SVD method with those obtained by the wideband BP method and the DCS-SOMP method. The main conclusions are as follows.

  (1)

    The simulation results show that the DCS-SOMP-SVD method, as well as the wideband BP method and the DCS-SOMP method, can locate all sources when the SNR equals 30 dB. The CPU time of the DCS-SOMP-SVD method is 2.463 s, slightly longer than that of the DCS-SOMP method but far less than the 126.5 s of the wideband BP method.

  (2)

    When the SNR decreases to −20 dB, however, the DCS-SOMP-SVD method can still locate all sources accurately, while the DCS-SOMP method and the wideband BP method fail to locate any sources.

  (3)

    When the frequency range expands from 1000‒1500 Hz to 1000‒2000 Hz and 1000‒3000 Hz, the location results of the DCS-SOMP-SVD method gradually improve.

  (4)

    When the target sparsity equals 10, there are six extra sources on the map obtained by the DCS-SOMP method, while there are no extra reconstructed sources on the maps of the DCS-SOMP-SVD method.

  (5)

    In the gas leak experiment, the DCS-SOMP-SVD method located both leak sources in the 3500‒4000 Hz, 6500‒7000 Hz, and 11000‒11500 Hz ranges, while the DCS-SOMP method could locate both leak sources only when the SNR was high.

References

[1] P N Samarasinghe, H Chen, A Fahim, et al. Performance analysis of a planar microphone array for three dimensional soundfield analysis. IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, New Paltz, USA, October 15-18, 2017: 249-253.

[2] W Ma, X Liu. Improving the efficiency of DAMAS for sound source localization via wavelet compression computational grid. Journal of Sound and Vibration, 2017, 395: 341-353.

[3] S Zhao, T N T Nguyen, D L Jones. Large region acoustic source mapping using movable arrays. IEEE International Conference on Acoustics, Speech and Signal Processing, Brisbane, Australia, April 19-24, 2015: 2589-2593.

[4] P Sijtsma. CLEAN based on spatial source coherence. International Journal of Aeroacoustics, 2007, 6(4): 357-374.

[5] C Tuna, S Zhao, T N T Nguyen, et al. Drive-by large-region acoustic noise-source mapping via sparse beamforming tomography. The Journal of the Acoustical Society of America, 2016, 140(4): 2530-2541.

[6] J Benesty, I Cohen, J Chen. Beamforming in the time domain. Fundamentals of Signal Enhancement and Array Signal Processing. John Wiley & Sons Singapore Pte. Ltd., 2017.

[7] Y Zhao, W Liu, R J Langley. Adaptive wideband beamforming with response variation constraints. 18th European Signal Processing Conference, Aalborg, Denmark, August 23-27, 2010: 2077-2081.

[8] N Wilkins, A K Shaw, M Shaik. True time delay beamspace wideband source localization. IEEE International Conference on Acoustics, Speech and Signal Processing, Shanghai, China, March 20-25, 2016: 3161-3165.

[9] O Jaeckel. Strengths and weaknesses of calculating beamforming in the time domain. 1st Berlin Beamforming Conference, Berlin, Germany, November 22, 2006: 1-10.

[10] W M Humphreys, T F Brooks, et al. Design and use of microphone directional arrays for aeroacoustic measurements. 36th AIAA Aerospace Sciences Meeting and Exhibit, Reno, USA, January 12-15, 1998: 471.

[11] J Capon. High-resolution frequency-wavenumber spectrum analysis. Proceedings of the IEEE, 1969, 57(8): 1408-1418.

[12] J Li, P Stoica, Z Wang. On robust Capon beamforming and diagonal loading. IEEE Transactions on Signal Processing, 2003, 51(7): 1702-1715.

[13] Z Wang, J Li, T Nishida, et al. Robust Capon beamformers for wideband acoustic imaging. 9th AIAA/CEAS Aeroacoustics Conference and Exhibit, Hilton Head, USA, May 12-14, 2003: 3198.

[14] R J Kozick, C Coviello. Wideband Capon beamforming with pre-steering. 50th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, USA, November 6-9, 2016: 338-342.

[15] C Kassis, J Picheral. Wideband zero-forcing MUSIC for aeroacoustic sources localization. Proceedings of the 20th European Signal Processing Conference, Bucharest, Romania, August 27-31, 2012: 2283-2287.

[16] Y C Guo, C Wang, N Zhang. Robust nearfield wideband beamforming design based on adaptive-weighted convex optimization. Mathematical Problems in Engineering, 2017: 1-10.

[17] S He, Z W Yang, G S Liao. DOA estimation of wideband signals based on iterative spectral reconstruction. Journal of Systems Engineering and Electronics, 2017, 28(6): 1039-1045.

[18] D L Donoho. Compressed sensing. IEEE Transactions on Information Theory, 2006, 52(4): 1289-1306.

[19] E J Candès, J K Romberg, T Tao. Stable signal recovery from incomplete and inaccurate measurements. Communications on Pure and Applied Mathematics, 2006, 59(8): 1207-1223.

[20] E J Candès, J Romberg, T Tao. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory, 2006, 52(2): 489-509.

[21] E J Candès. Compressive sampling. Proceedings of the International Congress of Mathematicians, 2006, (3): 1433-1452.

[22] E J Candès, T Tao. Near-optimal signal recovery from random projections: Universal encoding strategies. IEEE Transactions on Information Theory, 2006, 52(12): 5406-5425.

[23] N Chu, A Mohammad-Djafari, J Picheral. Bayesian compressed sensing in nearfield wideband aeroacoustic imaging. Workshop on Compressed Sensing Applied to Radar, 2012.

[24] A Chaturvedi, H H Fan. Compressive wideband direction of arrival estimation. 38th International Conference on Telecommunications and Signal Processing, Prague, Czech Republic, July 9-11, 2015: 1-5.

[25] P T Boufounos, P Smaragdis, B Raj. Joint sparsity models for wideband array processing. Conference on Wavelets and Sparsity XIV, San Diego, USA, 2011: 81380.

[26] L Xu, K Zhao, J Li, et al. Wideband source localization using sparse learning via iterative minimization. Signal Processing, 2013, 93(12): 3504-3514.

[27] N Chu, J Picheral, A Mohammad-Djafari, et al. A robust superresolution approach with sparsity constraint in acoustic imaging. Applied Acoustics, 2014, 76: 197-208.

[28] B K Natarajan. Sparse approximate solutions to linear systems. SIAM Journal on Computing, 1995, 24(2): 227-234.

[29] S Muthukrishnan. Data streams: Algorithms and applications. Now Publishers Inc., 2005.

[30] D L Donoho. For most large underdetermined systems of linear equations the minimal \(l_{1}\)-norm solution is also the sparsest solution. Communications on Pure and Applied Mathematics, 2006, 59(6): 797-829.

[31] E J Candès, T Tao. Decoding by linear programming. IEEE Transactions on Information Theory, 2005, 51(12): 4203-4215.

[32] E J Candès. The restricted isometry property and its implications for compressed sensing. Comptes Rendus Mathematique, 2008, 346(9): 589-592.

[33] S S Chen, D L Donoho, M A Saunders. Atomic decomposition by basis pursuit. SIAM Journal on Scientific Computing, 1998, 20(1): 33-61.

[34] M A Figueiredo, R D Nowak, S J Wright. Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems. IEEE Journal of Selected Topics in Signal Processing, 2007, 1(4): 586-597.

[35] I Daubechies, M Defrise, C De Mol. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Communications on Pure and Applied Mathematics, 2004, 57(11): 1413-1457.

[36] S G Mallat, Z Zhang. Matching pursuits with time-frequency dictionaries. IEEE Transactions on Signal Processing, 1993, 41(12): 3397-3415.

[37] R A DeVore, V N Temlyakov. Some remarks on greedy algorithms. Advances in Computational Mathematics, 1996, 5(1): 173-187.

[38] Y C Pati, R Rezaiifar, P Krishnaprasad. Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition. Proceedings of 27th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, USA, November 1-3, 1993: 40-44.

[39] G M Davis, S G Mallat, Z Zhang. Adaptive time-frequency decompositions. Optical Engineering, 1994, 33(7): 2183-2191.

[40] T Blumensath, M E Davies. Iterative thresholding for sparse approximations. Journal of Fourier Analysis and Applications, 2008, 14(5-6): 629-654.

[41] W Dai, O Milenkovic. Subspace pursuit for compressive sensing signal reconstruction. IEEE Transactions on Information Theory, 2009, 55(5): 2230-2249.

[42] D Needell, J A Tropp. CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Applied and Computational Harmonic Analysis, 2009, 26(3): 301-321.

[43] F L Ning, Y Liu, C Zhang, et al. Acoustic imaging with compressed sensing and microphone arrays. Journal of Computational Acoustics, 2017, 25(4): 1750027.

[44] F L Ning, J Wei, L Qiu, et al. Three-dimensional acoustic imaging with planar microphone arrays and compressive sensing. Journal of Sound and Vibration, 2016, 380: 112-128.

[45] F L Ning, F Pan, C Zhang, et al. A highly efficient compressed sensing algorithm for acoustic imaging in low signal-to-noise ratio environments. Mechanical Systems and Signal Processing, 2018, 112: 113-128.

[46] P T Boufounos, P Smaragdis, B Raj. Joint sparsity models for wideband array processing. Conference on Wavelets and Sparsity XIV, San Diego, USA, 2012.

[47] J A Tropp, A C Gilbert, M J Strauss. Algorithms for simultaneous sparse approximation. Part I: Greedy pursuit. Signal Processing, 2006, 86(3): 572-588.

[48] S Sarvotham, D Baron, M Wakin, et al. Distributed compressed sensing of jointly sparse signals. Asilomar Conference on Signals, Systems, and Computers, 2005: 1537-1541.

[49] D Baron, M B Wakin, M F Duarte, et al. Distributed compressed sensing. https://www.dsp.rice.edu/~drorb/pdf/DCS112005.pdf, 2005.

[50] K Gkoktsi, A Giaralis. Assessment of sub-Nyquist deterministic and random data sampling techniques for operational modal analysis. Structural Health Monitoring, 2017, 16(5): 630-646.

[51] D L Donoho, Y Tsaig, I Drori, et al. Sparse solution of underdetermined systems of linear equations by stagewise orthogonal matching pursuit. IEEE Transactions on Information Theory, 2012, 58(2): 1094-1121.

[52] D Donoho, J Tanner. Counting faces of randomly projected polytopes when the projection radically lowers dimension. Journal of the American Mathematical Society, 2009, 22(1): 1-53.

[53] M C Grant, S P Boyd. Graph implementations for nonsmooth convex programs. Lecture Notes in Control and Information Sciences, 2008: 95-110.


Acknowledgements

Not applicable

Funding

Supported by National Natural Science Foundation of China (Grant Nos. 51675425, 52075441), Shaanxi Provincial Key Research Program Project of China (Grant No. 2020ZDLGY06-09), Dongguan Municipal Social Science and Technology Development(key) Project of China (Grant No. 20185071021600), Science and Technology on Micro-system Laboratory Foundation of China (Grant No. 6142804200405).

Author information


Contributions

FN was in charge of the whole trial; ZL revised the manuscript; JS wrote the manuscript; FP, PH, and JW assisted with sampling and laboratory analyses. All authors read and approved the final manuscript.

Authors’ Information

Fangli Ning, born in 1974, is currently a professor and a PhD candidate supervisor at School of Mechanical Engineering, Northwestern Polytechnical University, China. His current research interests include nonlinear acoustics, aeroacoustics.

Zhe Liu, born in 1991, is currently a PhD candidate at School of Mechanical Engineering, Northwestern Polytechnical University, China.

Jiahao Song, born in 1996, is currently a master candidate at School of Mechanical Engineering, Northwestern Polytechnical University, China.

Feng Pan, born in 1994, is currently a master candidate at School of Mechanical Engineering, Northwestern Polytechnical University, China.

Pengcheng Han, born in 1993, is currently a master candidate at School of Mechanical Engineering, Northwestern Polytechnical University, China.

Juan Wei, born in 1973, is currently a professor at School of Communication Engineering, Xidian University, China. She received her PhD degree from Northwestern Polytechnical University, China, in 2002.

Corresponding author

Correspondence to Fangli Ning.

Ethics declarations

Competing Interests

The authors declare no competing financial interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Ning, F., Liu, Z., Song, J. et al. A Robust and Efficient Compressed Sensing Algorithm for Wideband Acoustic Imaging. Chin. J. Mech. Eng. 33, 95 (2020). https://doi.org/10.1186/s10033-020-00504-9


Keywords

  • Wideband acoustic imaging
  • Compressed sensing
  • Singular value decomposition
  • Microphone array
  • Gas leakage