Least Squares
LS performs parameter identification by minimizing the sum of squared errors between the measured terminal voltage and the output of the battery model [27]. As shown in Figure 2, LS assumes that the measured output \(\tilde {y}\) is noisy while the input x is accurate.
For LS, the parameter vector θk is obtained by minimizing the cost function
$$J({\varvec{\theta}_k})=\sum\limits_{{i=1}}^{k} {{{[\Delta {y_i}]}^2}} = \sum\limits_{{i=1}}^{k} {{{[{{\tilde {y}}_i} - \varvec{\theta}_{k}^{{\text{T}}}{{\varvec{x}}_i}]}^2}} ,$$
(9)
where \(\Delta {y_i}\) is the measurement error and \({\tilde {y}_i}\) is the noisy output. Setting the gradient of the cost function \(J({\varvec{\theta}_k})\) to zero,
$$\partial J({\varvec{\theta}_k})/\partial {\varvec{\theta}_k}=0.$$
(10)
Then, the analytical solution of the parameter vector θk can be obtained as
$${\varvec{\theta}_k}={({\varvec{X}}_{k}^{{\text{T}}}{{\varvec{X}}_k})^{-1}}{\varvec{X}}_{k}^{{\text{T}}}{{\varvec{Y}}_k},$$
(11)
where \({{\varvec{X}}_k}={[{{\varvec{x}}_1},{{\varvec{x}}_2},\ldots,{{\varvec{x}}_k}]^{\text{T}}}\), \({{\varvec{Y}}_k}={[{y_1},{y_2},\ldots,{y_k}]^{\text{T}}}\).
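For illustration, the batch solution in Eq. (11) can be sketched in a few lines of NumPy. The data below are synthetic stand-ins for the regressor matrix \({{\varvec{X}}_k}\) and output vector \({{\varvec{Y}}_k}\); `np.linalg.lstsq` solves the same normal equations in a numerically safer way than forming \({({\varvec{X}}_{k}^{{\text{T}}}{{\varvec{X}}_k})^{-1}}\) explicitly.

```python
import numpy as np

# Minimal sketch of the batch LS solution, Eq. (11).
# The regressors, true parameters, and noise level are illustrative assumptions.
rng = np.random.default_rng(0)
n_params, n_samples = 3, 200
theta_true = np.array([1.0, -0.5, 0.2])            # reference parameters

X_k = rng.standard_normal((n_samples, n_params))    # rows are x_i^T
Y_k = X_k @ theta_true + 0.01 * rng.standard_normal(n_samples)  # noisy output

# theta_k = (X^T X)^{-1} X^T Y, computed via a least-squares solver
theta_k, *_ = np.linalg.lstsq(X_k, Y_k, rcond=None)
```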
The recursive form of LS, i.e., recursive least squares (RLS), can be further expressed as
$$\left\{ \begin{aligned} {{\varvec{K}}_k} & ={{\varvec{P}}_{k - 1}}{{\varvec{x}}_k}/(\lambda +{\varvec{x}}_{k}^{{\text{T}}}{{\varvec{P}}_{k - 1}}{{\varvec{x}}_k}), \\ {e_k} & ={y_k} - {\varvec{x}}_{k}^{{\text{T}}}{\varvec{\theta}_{k - 1}}, \\ {\varvec{\theta}_k} & = {\varvec{\theta}_{k - 1}}+{{\varvec{K}}_k}{e_k},\\ {{\varvec{P}}_k} & =({\varvec{I}} - {{\varvec{K}}_k}{\varvec{x}}_{k}^{{\text{T}}}){{\varvec{P}}_{k - 1}}/\lambda , \end{aligned} \right.$$
(12)
where Kk denotes the gain matrix; Pk is the covariance matrix; \({e_k}\) is the residual error, and λ (0.95 < λ < 1) is a user-defined forgetting factor.
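A minimal Python sketch of the recursion in Eq. (12) is given below. The class name, the initial covariance scale `p0`, and the zero initialization of θ are illustrative assumptions rather than part of the original formulation.

```python
import numpy as np

class RLS:
    """Recursive least squares with forgetting factor, Eq. (12)."""

    def __init__(self, n_params, lam=0.98, p0=1e3):
        self.theta = np.zeros(n_params)   # parameter vector theta_0 (assumed start)
        self.P = p0 * np.eye(n_params)    # covariance matrix P_0 (assumed scale)
        self.lam = lam                    # forgetting factor, 0.95 < lam < 1

    def update(self, x, y):
        Px = self.P @ x
        K = Px / (self.lam + x @ Px)                             # gain K_k
        e = y - x @ self.theta                                   # residual e_k
        self.theta = self.theta + K * e                          # parameter update
        self.P = (self.P - np.outer(K, x @ self.P)) / self.lam   # covariance update
        return e
```

Each call to `update` consumes one sample pair \(({\varvec{x}}_k, y_k)\); the residual it returns is the \({e_k}\) later used by the convergence indicator in Eq. (16).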
It should be noted that LS does not consider errors from the input x. Thus, the estimation results are easily biased by noise corruption.
Total Least Squares
Different from LS, TLS assumes that both the output \(\tilde {y}\) and the input \(\tilde {x}\) are noisy. As shown in Figure 3, TLS employs orthogonal regression to minimize the sum of squared orthogonal distances from the sampling points to the fitting line.
Similarly, TLS solves for the parameter vector θk by minimizing the cost function
$$J({\varvec{\theta}_k})=\left\|[\Delta {{\varvec{X}}_k},\; \Delta {{\varvec{Y}}_k}]\right\|_{\text{F}},$$
(13)
where \(\Delta {{\varvec{X}}_k}={[\Delta {{\varvec{x}}_1},\Delta {{\varvec{x}}_2},\ldots,\Delta {{\varvec{x}}_k}]^\text{T}}\), \(\Delta {{\varvec{Y}}_k}={[\Delta {y_1},\Delta {y_2},\ldots,\Delta {y_k}]^\text{T}}\).
The recursive form of TLS (RTLS) is expressed as
$${\varvec{\theta}_k}={\varvec{\theta}_{k - 1}}+{\alpha _k}{\tilde {{\varvec{x}}}_k},$$
(14)
where the gain factor \({\alpha _k}\) is obtained using the gradient search approach of Ref. [24]:
$$\partial J({\varvec{\theta}_{k - 1}}+{\alpha _k}{\tilde {{\varvec{x}}}_k})/\partial {\alpha _k}=0,$$
(15)
where \({\tilde {{\varvec{x}}}_k}\) is the noisy input vector.
As shown in Eq. (14), RTLS updates the parameters along the direction of \({\tilde {{\varvec{x}}}_k}\) rather than along the gradient of the cost function. Only one gain factor \({\alpha _k}\) is obtained at each iteration, which largely limits the convergence rate when multiple parameters need to be identified.
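The closed-form gradient solution of Eq. (15) from Ref. [24] is not reproduced here; as a hedged stand-in, the sketch below finds \({\alpha _k}\) by a numerical one-dimensional search over the orthogonal-distance cost accumulated so far. The normalization by \((1+{\varvec{\theta}^{\text{T}}}\varvec{\theta})\) is the standard errors-in-variables form and is an assumption about the cost in Eq. (13).

```python
import numpy as np
from scipy.optimize import minimize_scalar

def tls_cost(theta, X, Y):
    """Sum of squared orthogonal distances from the samples (rows of X,
    entries of Y) to the hyperplane y = theta^T x."""
    r = Y - X @ theta
    return (r @ r) / (1.0 + theta @ theta)

def rtls_step(theta_prev, x_k, X, Y):
    """One RTLS iteration, Eq. (14): move theta along the noisy input
    direction x_k, with alpha_k from a 1-D line search standing in for
    the gradient solution of Eq. (15)."""
    res = minimize_scalar(lambda a: tls_cost(theta_prev + a * x_k, X, Y))
    return theta_prev + res.x * x_k
```

Because only the scalar \({\alpha _k}\) is optimized per step, several iterations are needed to move a multi-parameter \(\varvec{\theta}\), which is exactly the convergence limitation noted above.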
A Comparison Between RLS and RTLS
A comparative study is carried out in this subsection to evaluate the performance of RLS and RTLS for online parameter identification.
As shown in Figure 4, RLS and RTLS each have their own merits. On the one hand, RTLS accounts for disturbances in both the input and the output, and therefore performs better under noise interference. On the other hand, RLS updates the parameters along the gradient of the cost function, which gives it a low computational cost and a fast convergence rate.
However, RLS is biased under measurement noise, while RTLS converges slowly under initial-value uncertainty. Therefore, to address both issues, this paper integrates RLS and RTLS for better parameter identification of the ECM.
The Proposed Co-estimation Method
This work proposes a co-estimation algorithm for online parameter identification that combines fast convergence with robustness against noise corruption. Meanwhile, the proposed method requires little computational effort and is suitable for online implementation. The flowchart of the proposed method is shown in Figure 5.
The variables in Figure 5 are described as follows: (i) \({e_i}\) (\(i = k - {T_L}, k - {T_L}+1, \ldots , k\)) is the residual error of the parameter identification; (ii) \({e_0}\) is a threshold used to decide the convergence of the parameters; (iii) \({T_L}\) is the time scale for determining the convergence of the parameters; (iv) \({E_k}\) is designed as a convergence indicator, expressed as the root mean square error (RMSE) of \({e_i}\),
$${E_k}=\sqrt {\frac{1}{{{T_L}}}\sum\limits_{{i=k - {T_L}}}^{k} {e_{i}^{2}} } .$$
(16)
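In code, this indicator reduces to an RMSE over a rolling window of residuals; a minimal sketch with an assumed window length is:

```python
from collections import deque
import numpy as np

T_L = 50                      # convergence time scale T_L (assumed value)
window = deque(maxlen=T_L)    # rolling buffer of the latest residuals e_i

def convergence_indicator(window):
    """E_k as the RMSE of the last T_L residuals, Eq. (16)."""
    e = np.asarray(window, dtype=float)
    return float(np.sqrt(np.mean(e**2)))
```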
The strategy of the proposed co-estimation method can be summarized in the following three parts.
- Part I. Since the initial parameter values are unavailable, \({\varvec{\theta}_k}\) is randomly initialized and updated by RLS using Eq. (12) until the parameters converge to their reference values.
- Part II. To determine whether the parameters have converged, \({E_k}\) is calculated using Eq. (16) once the number of collected residuals reaches \({T_L}\).
- Part III. When \({E_k}\) falls below the pre-set threshold \({e_0}\), the flag is set to 1, indicating that convergence has been achieved. The parameters are then updated by RTLS using Eqs. (14) and (15).
It can be seen that the proposed co-estimation method combines the merits of RLS and RTLS. RLS converges quickly to the reference values of the parameters without any prior knowledge of the initial values, while RTLS offers good accuracy and robustness against noise disturbances and can therefore be applied to update the already-converged parameters.
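Putting the pieces together, the switching logic of Figure 5 can be sketched as follows. The `RLS` class, `rtls_step`, and `convergence_indicator` are the illustrative helpers sketched above, and the window length and threshold values are assumptions.

```python
from collections import deque
import numpy as np

def co_estimate(samples, n_params, T_L=50, e0=1e-3):
    """Sketch of the co-estimation strategy: RLS until the residual RMSE
    E_k drops below e0, then RTLS for the already-converged parameters.
    `samples` is an iterable of (x_k, y_k) pairs."""
    rls = RLS(n_params)                      # Part I: arbitrary initialization
    window = deque(maxlen=T_L)
    flag = 0                                 # 0: RLS phase, 1: RTLS phase
    X_hist, Y_hist = [], []

    for x_k, y_k in samples:
        X_hist.append(x_k)
        Y_hist.append(y_k)
        if flag == 0:
            e_k = rls.update(x_k, y_k)       # Eq. (12)
            window.append(e_k)
            # Parts II/III: check E_k once the window holds T_L residuals
            if len(window) == T_L and convergence_indicator(window) < e0:
                flag = 1
        else:
            rls.theta = rtls_step(rls.theta, x_k,
                                  np.asarray(X_hist),
                                  np.asarray(Y_hist))  # Eqs. (14), (15)
    return rls.theta
```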