# 1 Introduction

In an acoustic transmission system, there are two important components: the speaker and the microphone. If the speaker and the microphone were completely separated, no echo would travel from the speaker to the microphone, and there would be no need for acoustic echo cancellation.

However, in practice, the speaker and the microphone are almost never completely separated. For example, when you are calling your friend A, you are the **near end** user and your friend A is the **far end** user. The speech from your friend A comes out of the speaker of your phone, and it may also be picked up by the microphone of your phone and sent back to your friend A.

As shown in Fig. 1, in this case the **far end** user hears his/her own voice with some delay. This phenomenon is known as **acoustic echo**. Although both users can most likely still understand what the other is saying, the echo is very annoying and dramatically reduces communication quality. Thus, there is a need for acoustic echo cancellation (AEC).

The goal of AEC is to remove the echo from the speaker collected by the microphone before the signal is sent to the **far end** user. The echo depends on many conditions, for example the relative positions of the speaker and the microphone, the way the near end user holds the phone, other objects around the near end user (e.g., furniture), etc. It is difficult to determine the echo (i.e., its magnitude, phase, and delay) in advance. However, for each specific call between the far end and near end users, the channel between the near end speaker and the near end microphone (\(H\) in Fig. 1) is almost fixed or slowly varying, since the environmental factors above that affect \(H\) are unlikely to change dramatically during the call.

Thus, we have a simpler problem to solve: estimate the fixed (or slowly varying) and unknown \(H\).

Since \(H\) is not deterministic, we need some way to adaptively estimate the channel from the speaker to the microphone. Once the channel is precisely estimated, the echo can be easily removed from the signal collected by the microphone.

As shown in Fig. 2, the signal from the near end microphone (\(y(n)\)) contains three components:

- \(d(n)\) is the echo (from the near end speaker to near end microphone),
- \(s(n)\) is the near end speech (the only signal that needs to be sent to the far end user),
- \(n(n)\) is the other noise besides the echo \(d(n)\) (e.g., from environment).

An adaptive filter is used to estimate the echo (\(d^\prime(n)\)), which is subtracted from the microphone signal \(y(n)\) before it is sent to the far end user. After cancellation, the signal to the far end user can be written as

\begin{equation}\label{eqn:aec_error}
e(n) = y(n) - d^\prime(n) = s(n) + n(n) + d(n) - d^\prime(n).
\end{equation}

# 2 LMS Algorithm

Fig. 2 shows the structure of the acoustic echo cancellation, and Eq. (\ref{eqn:aec_error}) shows the signal sent to the far end user after processing. If \(d(n)\approx d^\prime(n)\), then the output will be

\begin{equation}
e(n) \approx s(n) + n(n).
\end{equation}

In this case, besides the noise (e.g., from the environment), the only signal sent to the far end user is the near end speech (\(s(n)\)).

So the next question is how to estimate \(d(n)\) such that \(d^\prime(n)\approx d(n)\). One classical approach is the LMS (least mean square) algorithm. In this section, we briefly introduce the LMS algorithm [1].

**Wiener Solution.**
Suppose we want to estimate the desired signal (\(d(n)\)) from the inputs (\(x(n)\)) with a linear filter by minimizing the mean square error (MSE),

\begin{equation}
d^\prime(n) = \sum_{k=0}^{N-1} w_k\, x(n-k),
\end{equation}

where \(N\) is the order of the adaptive filter and \(w_k = a_k + jb_k\) are the filter coefficients^{1}.
And the estimation error is

\begin{equation}
e(n) = d(n) - d^\prime(n).
\end{equation}

The MSE cost function can be written as

\begin{equation}
J = E\left[e(n)e^*(n)\right] = E\left[|e(n)|^2\right].
\end{equation}

Here we ignore the near end speech \(s(n)\) and the noise \(n(n)\) for simplicity. One may claim that they will not affect \(J\) beyond a constant. Under what assumptions does such a claim hold? You may find it helpful to include \(s(n)\) and \(n(n)\) in \(J\).

When \(J\) achieves its minimum, its first derivative is zero for all \(w_k\), in particular

\begin{equation}
\frac{\partial J}{\partial a_k} = 0, \qquad \frac{\partial J}{\partial b_k} = 0.
\end{equation}

Thus, the optimal solution satisfies

\begin{equation}\label{eqn:wiener}
E\left[x(n-k)\,e^*(n)\right] = 0,
\end{equation}

where \(k = 0,1,2,\dots,N-1\).

Define the correlation matrix

\begin{equation}
\boldsymbol{R} = E\left[\boldsymbol{x}(n)\boldsymbol{x}^H(n)\right],
\end{equation}

where

\begin{equation}
\boldsymbol{x}(n) = \left[x(n), x(n-1), \dots, x(n-N+1)\right]^T.
\end{equation}

Similarly, define the cross-correlation vector between the input (\(x(n)\)) and the desired signal (\(d(n)\)),

\begin{equation}
\boldsymbol{p} = E\left[\boldsymbol{x}(n)\,d^*(n)\right],
\end{equation}

where

\begin{equation}
p_k = E\left[x(n-k)\,d^*(n)\right], \quad k = 0,1,\dots,N-1.
\end{equation}

And, let

\begin{equation}
\boldsymbol{w} = \left[w_0, w_1, \dots, w_{N-1}\right]^T.
\end{equation}

Thus Eq. (\ref{eqn:wiener}) can be written in matrix form as

\begin{equation}
\boldsymbol{R}\boldsymbol{w}_o = \boldsymbol{p}.
\end{equation}

If \(\boldsymbol{R}\) is invertible, the Wiener solution can be written as

\begin{equation}\label{eqn:wiener3}
\boldsymbol{w}_o = \boldsymbol{R}^{-1}\boldsymbol{p}.
\end{equation}

**Steepest Descent Algorithm.**
To calculate the optimal filter from the Wiener solution in Eq. (\ref{eqn:wiener3}), we need to calculate the matrix inverse \(\boldsymbol{R}^{-1}\). In practice, such an operation is expensive. Fortunately, it can be approximated by the steepest descent algorithm, which updates the filter coefficients iteratively.

Let us define the gradient of the cost function as

\begin{equation}
\nabla J(n) = \frac{\partial J}{\partial \boldsymbol{w}(n)}.
\end{equation}

And update the filter coefficients with

\begin{equation}\label{eqn:steeps}
\boldsymbol{w}(n+1) = \boldsymbol{w}(n) - \mu\,\nabla J(n),
\end{equation}

where \(\mu\) is the positive step size.

With a first-order Taylor series expansion, we get

\begin{equation}
J\left(\boldsymbol{w}(n+1)\right) \approx J\left(\boldsymbol{w}(n)\right) - \mu\left\lVert\nabla J(n)\right\rVert^2.
\end{equation}

Thus, \(J(\boldsymbol{w}(n))\) decreases after each iteration, and Eq. (\ref{eqn:steeps}) approaches the optimal solution as \(n\rightarrow\infty\). Following the same derivation as for the Wiener solution, we get

\begin{equation}\label{eqn:steeps2}
\nabla J(n) = 2\left(\boldsymbol{R}\boldsymbol{w}(n) - \boldsymbol{p}\right).
\end{equation}

With Eqs. (\ref{eqn:steeps2}) and (\ref{eqn:steeps}), we have

\begin{equation}\label{eqn:steeps3}
\boldsymbol{w}(n+1) = \boldsymbol{w}(n) + 2\mu\left(\boldsymbol{p} - \boldsymbol{R}\boldsymbol{w}(n)\right).
\end{equation}

**LMS Algorithm.**
Eq. (\ref{eqn:steeps3}) requires *a priori* statistical information about the input (\(x(n)\)) and the desired signal (\(d(n)\)). For most practical cases this is not realistic (e.g., such statistics may vary over time). One solution is to replace the statistics with instantaneous estimates. That is exactly what the LMS algorithm does:

\begin{equation}
\hat{\boldsymbol{R}}(n) = \boldsymbol{x}(n)\boldsymbol{x}^H(n), \qquad \hat{\boldsymbol{p}}(n) = \boldsymbol{x}(n)\,d^*(n),
\end{equation}

where \(\hat{\boldsymbol{R}}(n)\) and \(\hat{\boldsymbol{p}}(n)\) are the instantaneous estimates of \(\boldsymbol{R}\) and \(\boldsymbol{p}\) at time \(n\).
Thus, Eq. (\ref{eqn:steeps3}) can be simplified as

\begin{equation}\label{eqn:lms}
\boldsymbol{w}(n+1) = \boldsymbol{w}(n) + 2\mu\,\boldsymbol{x}(n)\,e^*(n),
\end{equation}

where \(e(n) = d(n) - \boldsymbol{w}^H(n)\boldsymbol{x}(n)\).

This is the iterative update equation of the LMS algorithm.
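As a concrete illustration, the update can be sketched in a few lines of Python/NumPy for real-valued signals. The channel `h`, signal length, filter order, and step size below are made up for the demonstration, and the factor 2 and the conjugate are absorbed into `mu`:

```python
import numpy as np

def lms(x, d, N=4, mu=0.02):
    """Adapt an N-tap FIR filter so that w . x(n) tracks d(n),
    using the LMS update w(n+1) = w(n) + mu * e(n) * x(n)."""
    w = np.zeros(N)
    e = np.zeros(len(x))
    for n in range(N - 1, len(x)):
        x_n = x[n - N + 1:n + 1][::-1]   # [x(n), x(n-1), ..., x(n-N+1)]
        e[n] = d[n] - w @ x_n            # error e(n) = d(n) - d'(n)
        w += mu * e[n] * x_n             # coefficient update
    return w, e

# Identify a known 4-tap echo path from a white-noise far-end signal.
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
h = np.array([0.5, -0.3, 0.2, 0.1])      # "true" channel, for the demo only
d = np.convolve(x, h)[:len(x)]           # noiseless echo d(n)
w, e = lms(x, d)
```

With a noiseless echo and a white far-end signal, `w` converges to `h` and the residual error decays toward zero.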

**Block LMS.**
Besides the above LMS method, there are many variants.
In block LMS (also called mini-batch LMS), as suggested by its name, the coefficients are not updated at every input sample; instead, they are kept fixed until \(L\) samples have been received. This algorithm is less sensitive to noise in a single input sample, but may need more time to converge.

Within the \(k\)-th block, the filter coefficients are kept unchanged,

\begin{equation}
\boldsymbol{w}(kL+i) = \boldsymbol{w}(kL), \quad i = 0,1,\dots,L-1,
\end{equation}

and updated once at the end of each block,

\begin{equation}
\boldsymbol{w}\left((k+1)L\right) = \boldsymbol{w}(kL) + \frac{2\mu}{L}\sum_{i=0}^{L-1}\boldsymbol{x}(kL+i)\,e^*(kL+i).
\end{equation}
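The block update can be sketched in Python/NumPy as follows (real-valued signals; averaging the accumulated correction over the block, and all parameter values, are assumptions for this example):

```python
import numpy as np

def block_lms(x, d, N=4, L=32, mu=0.1):
    """Block LMS: coefficients stay fixed inside each L-sample block;
    the accumulated correction is applied once per block."""
    w = np.zeros(N)
    acc = np.zeros(N)
    for n in range(N - 1, len(x)):
        x_n = x[n - N + 1:n + 1][::-1]
        e = d[n] - w @ x_n               # error with the frozen coefficients
        acc += e * x_n                   # accumulate the correction
        if (n + 1) % L == 0:             # block boundary: update once
            w += mu * acc / L            # averaged over the block
            acc[:] = 0.0
    return w

rng = np.random.default_rng(1)
x = rng.standard_normal(8000)
h = np.array([0.4, -0.2, 0.1, 0.05])     # "true" channel, for the demo only
d = np.convolve(x, h)[:len(x)]
w = block_lms(x, d)
```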

**Normalized LMS.**
As shown in Eq. (\ref{eqn:lms}), at each step the correction of \(\boldsymbol{w}\) depends on the input \(x(n)\). If the power of \(x\) changes over time, it affects the convergence; for example, if \(x(n)\) is very small, the algorithm may take longer to converge. To mitigate this effect, the normalized LMS algorithm normalizes the step size by the energy of the input signal, and it generally converges faster than the plain LMS algorithm:

\begin{equation}
\boldsymbol{w}(n+1) = \boldsymbol{w}(n) + \frac{\mu}{\sigma + \boldsymbol{x}^H(n)\boldsymbol{x}(n)}\,\boldsymbol{x}(n)\,e^*(n),
\end{equation}

where \(\sigma\) is a small number to avoid division by zero.
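A minimal real-valued sketch of the normalized update is shown below. The input amplitude switches by a factor of 50 to mimic a nonstationary far-end signal; the channel and all parameter values are illustrative:

```python
import numpy as np

def nlms(x, d, N=4, mu=0.5, sigma=1e-8):
    """Normalized LMS: the step size is divided by the input energy
    sigma + ||x_n||^2, making convergence insensitive to input power."""
    w = np.zeros(N)
    e = np.zeros(len(x))
    for n in range(N - 1, len(x)):
        x_n = x[n - N + 1:n + 1][::-1]
        e[n] = d[n] - w @ x_n
        w += mu / (sigma + x_n @ x_n) * e[n] * x_n
    return w, e

# The far-end amplitude varies strongly over time; NLMS still converges.
rng = np.random.default_rng(2)
scale = np.repeat([0.1, 5.0, 0.5], 2000)   # time-varying far-end power
x = scale * rng.standard_normal(6000)
h = np.array([0.4, -0.2, 0.1, 0.05])       # "true" channel, for the demo only
d = np.convolve(x, h)[:len(x)]
w, e = nlms(x, d)
```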

# 3 Double Talk Detector

In theory, as shown above, \(e(n)\) can be used to update the filter coefficients.

Once the filter converges, the echo is cancelled from the microphone signal. One problem we have not discussed so far is: when does \(e(n)\) contain useful information to update the filter coefficients (\(H^\prime\))? Obviously, if the far end speech is off (e.g., \(x(n) = 0\)), then \(d(n) = d^\prime(n) = 0\) no matter what the actual channel between the speaker and the microphone is. In this case, \(e(n)\) does not contain useful information to update the filter coefficients.

Even when the far end speech (\(x(n)\)) is on, \(e(n)\) may still not be useful for updating the filter coefficients. For example, when the near end speech is on (i.e., \(s(n)\neq 0\)), the calculated error \(e(n)\) includes the near end speech \(s(n)\). In theory, if \(s(n)\) is uncorrelated with the input \(x(n)\), the LMS algorithm will still converge. However, in this application the near end speech \(s(n)\) is almost always much stronger than the far end echo \(d(n)\) (in practice, the microphone is placed to collect the near end speech much better than the signal from the speaker), which causes large variations in the error \(e(n)\). Thus, if the step size \(\mu\) is large, the disturbance from \(s(n)\) may cause the algorithm to diverge; if \(\mu\) is small (smaller than needed when \(s(n)=0\)), the algorithm takes longer to converge.

Thus, as shown in Fig. 3, a block is added to detect the near end speech (in some literature it is also called a double-talk detector: if the far end speech is off, there is never a need to update the filter coefficients, so the only case we need to detect is when both the far end and near end speech are on). Once near end speech is detected, we stop updating the filter coefficients. In other words, the filter coefficients are kept constant (the echo is still estimated and cancelled, but the coefficients are not updated).

The next question is how to detect that the near end speech is on. Most algorithms are based on heuristics.

**Geigel Algorithm.**
This algorithm is based on the assumption that the echo of the far end speech \(x(n)\) is much weaker than the near end speech \(s(n)\). Thus, it compares the amplitude of the current microphone sample with the maximum amplitude of the far end speech signal within the most recent \(L_{\textrm{DT}}\) samples:

\begin{equation}
\textrm{DT}(n) = \frac{|y(n)|}{\max\left(|x(n)|, |x(n-1)|, \dots, |x(n-L_{\textrm{DT}}+1)|\right)}.
\end{equation}

And \(\textrm{DT}(n)\) is compared with a pre-defined threshold to determine whether the near end speech is on or not.

This algorithm is easy to implement since it only depends on the input signals \(x(n)\) and \(y(n)\), but not the estimated signals, e.g., \(e(n)\) and \(d^\prime(n)\).
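A minimal sketch of the Geigel decision, assuming a hypothetical threshold of 0.5 and a short buffer of recent far-end samples:

```python
def geigel_dt(y_n, x_recent, threshold=0.5):
    """Geigel detector: flag near-end speech (double talk) when
    DT(n) = |y(n)| / max_k |x(n-k)| exceeds a fixed threshold."""
    x_max = max(abs(v) for v in x_recent)   # recent far-end maximum
    dt = abs(y_n) / max(x_max, 1e-12)       # guard against far-end silence
    return dt >= threshold

# Echo only: the microphone sample is small relative to the far end.
echo_only = geigel_dt(0.1, [0.8, 0.6, 0.9])     # False
# Near-end speech dominates the microphone signal.
double_talk = geigel_dt(1.2, [0.8, 0.6, 0.9])   # True
```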

**Variance Impulse Response Algorithm [2].**
When the adaptive algorithm (e.g., LMS) has converged after some period, the estimated \(\boldsymbol{w}(n)\) approaches the actual \(\boldsymbol{w}_o(n)\). And the echo path is supposed to be stable (or pseudo-stable), which means that \(\boldsymbol{w}_o(n)\) will not change dramatically. So if we detect a large change in the estimated \(\boldsymbol{w}(n)\), we can say that it does not come from a change of the channel, but from the near end speech.
With that said, this algorithm calculates the variance of the maximum filter coefficient of the adaptive filter,

\begin{equation}
\sigma_w^2(i) = E\left[\left(w_{\textrm{max}}(i) - \bar{w}_{\textrm{max}}(i)\right)^2\right],
\end{equation}

where \(\bar{w}_{\textrm{max}}(i) = \frac{1}{i}\sum_{l=1}^i{w_{\textrm{max}}(l)}\), and \(w_{\textrm{max}}(i) = \max_k \left(w_k(i)\right)\) is the value of the largest filter coefficient.

In practice, the estimation can be approximated with the iterations

\begin{equation}
\bar{w}_{\textrm{max}}(n) = \lambda_w\,\bar{w}_{\textrm{max}}(n-1) + (1-\lambda_w)\,w_{\textrm{max}}(n),
\end{equation}

\begin{equation}
\sigma_w^2(n) = \lambda_\sigma\,\sigma_w^2(n-1) + (1-\lambda_\sigma)\left(w_{\textrm{max}}(n) - \bar{w}_{\textrm{max}}(n)\right)^2,
\end{equation}

where \(\lambda_\sigma\) and \(\lambda_w\) are forgetting factors. It can be seen from the above equations that the algorithm depends on the correct estimation of \(\boldsymbol{w}(n)\). The adaptive filter should be given some time to converge before applying the above algorithm to detect the near end speech.
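The recursive tracking can be sketched as follows (pure Python; the forgetting factors and coefficient values are illustrative):

```python
def vir_step(state, w, lam_sigma=0.95, lam_w=0.95):
    """One iteration of the recursive mean/variance tracker for the
    largest coefficient of the adaptive filter."""
    w_bar, var = state
    w_max = max(w)                                  # largest coefficient value
    w_bar = lam_w * w_bar + (1 - lam_w) * w_max
    var = lam_sigma * var + (1 - lam_sigma) * (w_max - w_bar) ** 2
    return (w_bar, var)

# A stable echo path: the tracked variance decays toward zero.
state = (0.0, 0.0)
for _ in range(300):
    state = vir_step(state, [0.5, -0.3, 0.2])
quiet_var = state[1]
# A sudden jump of the estimated coefficients (e.g., caused by near end
# speech leaking into the adaptation) makes the variance spike.
state = vir_step(state, [1.5, -0.3, 0.2])
spike_var = state[1]
```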

**Normalized Cross Correlation** [3] [4].
The variance of the signal \(y\) can be written as (ignoring the noise \(n(n)\))

\begin{equation}
\sigma_y^2 = E(y^2) = \sigma_s^2 + \boldsymbol{w}_o^T\boldsymbol{R}_{xx}\boldsymbol{w}_o,
\end{equation}

where \(\boldsymbol{R}_{xx} = E(\boldsymbol{x}\boldsymbol{x}^T)\).

And the cross-covariance vector between the signals \(\boldsymbol{x}\) and \(y\) can be written as

\begin{equation}
\boldsymbol{r}_{xy} = E(\boldsymbol{x}y) = \boldsymbol{R}_{xx}\boldsymbol{w}_o.
\end{equation}

Then, the decision statistic is defined as

\begin{equation}\label{eqn:ncc}
\textrm{DT} = \frac{\boldsymbol{w}_o^T\boldsymbol{r}_{xy}}{\sigma_y^2} = \frac{\boldsymbol{w}_o^T\boldsymbol{R}_{xx}\boldsymbol{w}_o}{\sigma_s^2 + \boldsymbol{w}_o^T\boldsymbol{R}_{xx}\boldsymbol{w}_o}.
\end{equation}

Thus, if the near end speech is off, \(\sigma_s^2=0\), and \(\textrm{DT}\) will be close to 1. Otherwise, since \( \sigma_s^2 \gg \boldsymbol{w}_o^T\boldsymbol{R}_{xx}\boldsymbol{w}_o\), \(\textrm{DT}\) will be close to 0.

Eq. (\ref{eqn:ncc}) shows that we need to know the optimal solution \(\boldsymbol{w}_o\), which is generally not available. As described in [4], the algorithm can be approximated by replacing \(\boldsymbol{w}_o\) with \(\boldsymbol{w}(n)\) (ignoring \(n(n)\) here for simplicity),

\begin{equation}
r_{ey} = E(e\,y) = \sigma_y^2 - \boldsymbol{w}^T(n)\boldsymbol{r}_{xy}.
\end{equation}

The decision statistic is defined as

\begin{equation}
\textrm{DT}(n) = 1 - \frac{r_{ey}(n)}{\sigma_y^2(n)}.
\end{equation}

In practice, \(r_{ey}\) and \(\sigma_y^2\) can be estimated by

\begin{equation}
r_{ey}(n) = \lambda_{ey}\,r_{ey}(n-1) + (1-\lambda_{ey})\,e(n)y(n),
\end{equation}

\begin{equation}
\sigma_y^2(n) = \lambda_y\,\sigma_y^2(n-1) + (1-\lambda_y)\,y^2(n),
\end{equation}

where \(\lambda_{ey}\) and \(\lambda_y\) are forgetting factors for \(r_{ey}\) and \(\sigma_y^2\) estimation, respectively.
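The practical statistic and its recursive estimates can be sketched as follows (pure Python; the forgetting factors and the toy error/microphone samples are illustrative):

```python
def ncc_step(state, e_n, y_n, lam_ey=0.9, lam_y=0.9, eps=1e-12):
    """One update of the practical statistic DT(n) = 1 - r_ey / sigma_y^2,
    with r_ey and sigma_y^2 tracked by forgetting-factor averages."""
    r_ey, var_y = state
    r_ey = lam_ey * r_ey + (1 - lam_ey) * e_n * y_n
    var_y = lam_y * var_y + (1 - lam_y) * y_n * y_n
    dt = 1.0 - r_ey / max(var_y, eps)
    return (r_ey, var_y), dt

# Near end off, filter converged: e(n) ~ 0, so DT stays near 1.
state = (0.0, 0.0)
for _ in range(100):
    state, dt_off = ncc_step(state, 0.0, 1.0)
# Near end on: e(n) ~ y(n) ~ s(n), so DT drops toward 0.
for _ in range(100):
    state, dt_on = ncc_step(state, 1.0, 1.0)
```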

Same as the variance impulse response algorithm mentioned above, the normalized cross correlation algorithm also depends on a correct estimate of \(\boldsymbol{w}\). If \(\boldsymbol{w}(n)\) has not yet converged to \(\boldsymbol{w}_o\), this decision criterion may not lead to robust near end speech detection. Thus, the adaptive filter should be given some time to converge.

**Echo Return Loss Enhancement** [5].
The decision criterion of the ERLE (echo return loss enhancement) algorithm is defined as

\begin{equation}
\textrm{ERLE}(n) = 10\log_{10}\left(\frac{P_e^2(n)}{P_y^2(n)}\right),
\end{equation}

where \(P_y^2(n)\) is the estimated power of the signal \(y\) during a short window (e.g., \(16\) ms), and \(P_e^2(n)\) is the estimated power of the error signal \(e(n)\) during the same window.

If the near end speech is off, \(P_e^2(n)\) is close to zero (after the filter has converged), and thus \(\textrm{ERLE}\) will be very small (e.g., strongly negative); otherwise, \(P_e^2(n)\) will be close to \(P_y^2(n)\), and \(\textrm{ERLE}\) will be close to 0.
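A minimal sketch of the window-based decision statistic as defined above (the window contents are made up; `eps` guards the logarithm):

```python
import math

def erle(y_win, e_win, eps=1e-12):
    """Decision statistic 10*log10(P_e^2 / P_y^2) over a short window:
    strongly negative when the echo is well cancelled (near end off),
    close to 0 dB when e(n) ~ y(n) (near end speech present)."""
    p_y = sum(v * v for v in y_win) / len(y_win)
    p_e = sum(v * v for v in e_win) / len(e_win)
    return 10.0 * math.log10((p_e + eps) / (p_y + eps))

# Near end off: the residual error is tiny compared with y(n).
near_end_off = erle([1.0, -0.9, 1.1, -1.0], [0.01, -0.01, 0.02, -0.01])
# Near end on: e(n) is dominated by the same near end speech as y(n).
near_end_on = erle([1.0, -0.9, 1.1, -1.0], [1.0, -0.9, 1.1, -1.0])
```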

# 4 Far-end Talk Detector

When the far end speech is off (\(x(n)=0\)), \(d(n) = \boldsymbol{w}_o^T\boldsymbol{x} = 0\). In this case, the calculated error \(e(n)\) only contains noise (or near end speech) and is not useful for updating the filter coefficients,

\begin{equation}
e(n) = s(n) + n(n).
\end{equation}

One straightforward way to detect the far end speech is to measure the power of the signal \(x(n)\),

\begin{equation}
\hat{\sigma}_x^2(n) = \frac{1}{N_x}\sum_{k=0}^{N_x-1}x^2(n-k).
\end{equation}

When \(\hat{\sigma}_x^2\) is small (e.g., \(<\textrm{thrd}\)), the far end speech is off; otherwise, the far end speech is on. In practice, if the window length \(N_x\) is smaller than the adaptive filter length \(N\) (so the required input samples are already stored in the filter's delay line), \(\hat{\sigma}_x^2\) can be updated recursively as

\begin{equation}
\hat{\sigma}_x^2(n) = \hat{\sigma}_x^2(n-1) + \frac{x^2(n) - x^2(n-N_x)}{N_x}.
\end{equation}

If \(N_x\) is larger than the adaptive filter length (\(N\)), the above equation can still be used to update the power estimate, but we then need extra registers to store the earlier inputs \(x\). Alternatively, a simple first-order low-pass filter can be used to simplify the estimation,

\begin{equation}
\hat{\sigma}_x^2(n) = \lambda_x\,\hat{\sigma}_x^2(n-1) + (1-\lambda_x)\,x^2(n),
\end{equation}

where \(\lambda_x\) is the forgetting factor.
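The low-pass power estimate and threshold test can be sketched as follows (the threshold and forgetting factor are illustrative):

```python
def far_end_power(var_x, x_n, lam_x=0.95):
    """First-order low-pass power estimate:
    sigma_x^2(n) = lam_x * sigma_x^2(n-1) + (1 - lam_x) * x(n)^2."""
    return lam_x * var_x + (1 - lam_x) * x_n * x_n

thrd = 1e-3                       # illustrative detection threshold
var_x = 0.0
for _ in range(200):              # far end talking: unit-amplitude samples
    var_x = far_end_power(var_x, 1.0)
far_end_on = var_x > thrd         # True
for _ in range(500):              # far end goes silent: estimate decays
    var_x = far_end_power(var_x, 0.0)
far_end_off = var_x > thrd        # False
```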