
Multi-Scale Convolutional Gated Recurrent Unit Networks for Tool Wear Prediction in Smart Manufacturing

Abstract

As an integrated application of modern information technologies and artificial intelligence, Prognostic and Health Management (PHM) is important for machine health monitoring. Prediction of tool wear is one of the representative applications of PHM technology in modern manufacturing systems and industry. In this paper, a Multi-scale Convolutional Gated Recurrent Unit network (MCGRU) is proposed to predict tool wear directly from raw sensory data. At the bottom of MCGRU, six parallel and independent branches with different kernel sizes form a multi-scale convolutional neural network, which augments the adaptability to features of different time scales. The features of different scales extracted from raw data are then fed into a deep Gated Recurrent Unit network to capture long-term dependencies and learn significant representations. At the top of MCGRU, a fully connected layer and a regression layer are built for cutting tool wear prediction. Two case studies are performed to verify the capability and effectiveness of the proposed MCGRU network, and the results show that MCGRU outperforms several state-of-the-art baseline models.

1 Introduction

The development of Prognostic and Health Management (PHM) has motivated research in the field of machine health monitoring to detect faults and predict a machine's future conditions [1,2,3,4]. In modern manufacturing systems, a worn tool is harmful to the metal cutting process and often causes additional costs [5]. Cutting tools gradually become blunt during the manufacturing process, as shown in Figure 1, owing to factors such as abrasion, deformation and attrition. As a result, the quality of the products degrades. It is therefore crucial to monitor and predict cutting tool wear online so as to prevent quality degradation [6].

Figure 1

Degradation of cutting tools

Aiming at monitoring the working conditions of cutting tools and predicting tool wear, many methods, direct or indirect, online or offline, have been researched. Traditionally, data about the cutting tools are acquired by performing cutting tests under different working conditions and are then analyzed with the help of optimization techniques such as the response surface methodology (RSM) and the design of experiments (DOE). This approach is time-consuming and inefficient because the number of required tests is large [7]. The finite element method (FEM) [8,9,10] has also been used in different cutting tasks [11, 12] and to predict cutting tool wear. Over the last two decades, methods based on deep learning and neural networks have begun to be used for the estimation and prediction of cutting tool wear. Ko et al. [13] designed an autoregressive model followed by a highly parallel neural network to monitor the cutting state. Özel et al. [14] used neural networks to predict cutting tool wear and surface roughness. Ghosh et al. [5] designed a neural-network-based sensor fusion model to extract and fuse features from various signals for the estimation of the cutting tool's average flank wear. Using an Adaptive Neuro Fuzzy Inference System, Sharma et al. [15] developed a method for tool wear estimation. Venkata et al. [16] fed the cutting speed, the nose radius and the volume of removed material to a multilayer perceptron model to predict vibration amplitude, surface roughness, and tool wear. Zhao et al. [17] designed a convolutional bi-directional LSTM network to monitor machine health, as well as to predict the tool wear depth.

In general, researchers divide these methods into two major categories: physics-based methods and data-driven methods. In tool wear prediction tasks, physics-based methods built on grey models and particle filters [18] have proven effective. However, these methods usually require accurate and high-quality domain knowledge, which is often unavailable under complex and noisy working conditions. Moreover, most of them cannot be updated with online data. Data-driven methods are now more attractive because they are able to address these issues. Deep learning theories and the large amounts of data collected by advanced sensors have promoted the development of data-driven online methods. Data-driven models usually involve two phases [19]: in the first phase, models are trained with collected data; in the second, the trained models are applied to online data to monitor conditions or make predictions. The key point is that deep learning enables the model to better extract features and derive representations of machine conditions hidden in the data, and therefore to make better predictions on online data. In this paper, we study data-driven methods built on deep learning to predict cutting tool wear.

Data-driven methods take single- or multi-channel sensory data as input and feed them into training models to extract features and learn representations. Online data are then fed into the well-trained models to make predictions. Data and models are the two core parts of data-driven methods; Figure 2 shows their basic framework. The raw time series collected by sensors are sequential, and previous work, which focused on extracting multi-domain features, has difficulty discovering their sequential characteristics. Such models, which extract statistical, frequency and time-frequency features, require intensive expert knowledge and feature engineering. Some models, such as Markov models and Kalman filters [20,21,22], are capable of addressing sequential data, but they are not good at capturing long-range dependencies. Capturing connections and information along the time axis is important because, in real working conditions, the features are often submerged by heavy background noise, which causes these models to fail.

Developments in the field of neural networks and deep learning offer solutions to these issues, and one of them is the Recurrent Neural Network (RNN). Traditional neural networks deal with inputs and outputs independently, which is unreasonable for some sequential tasks. RNN was proposed to make use of information in arbitrarily long sequences and to retain the computed information. However, the problem of gradient exploding and vanishing in traditional RNN weakens its power. Improved variants of RNN have been designed to solve this problem, one of which is the Long Short-Term Memory (LSTM) network. LSTM is good at problems that require information about previous events [23], which means that it is better at addressing sequential data of various lengths and capturing long-term dependencies. However, LSTM needs sufficient data to train, whereas in real working conditions there may not be sufficient labeled data. The Gated Recurrent Unit (GRU) network performs better under such situations. Proposed in 2014, GRU [24] is a more efficient variant of LSTM that shares many similar properties. With performance comparable to LSTM on sequence modeling, GRU has fewer parameters and is easier to train. Here, we adopt GRU networks as one part of our network architecture.

As a type of neural network, GRU is able to extract features and learn representations without expert or domain knowledge, but it may not be robust to the noise present in raw sensory data. Compared with GRU, the Convolutional Neural Network (CNN) is more robust to noise interference. The convolutional operations in a CNN extract abstract features by applying learnable filters that convolve with the sequential data. For this reason, in Ref. [17], Zhao et al. adopted a one-layer CNN as a local feature extractor. However, as the information hidden in sequential data is complicated and diverse, a local feature extractor with only one kernel size cannot extract all of the useful information. To address this issue, filters of different sizes are adopted to form a multi-scale convolutional layer that extracts different significant features. Here, we use this multi-scale convolutional network to extract hidden but important features, which are then concatenated into a single feature map.

Figure 2

Basic framework of data-driven methods

In this paper, we propose a model combining a multi-scale CNN and GRU, named the Multi-scale Convolutional Gated Recurrent Unit Network (MCGRU), to predict cutting tool wear. In this model, the multi-scale CNN consists of six parallel branches that are independent of each other. These branches extract local features as well as high-level abstract ones. The resulting feature maps are then merged into a single feature map. Temporal information is encoded and representations are learned in a two-layer GRU network built on top of the merge layer. We experiment on an open-source dataset from cutters of a high-speed Computer Numerical Control (CNC) milling machine, containing acoustic emission, accelerometer and dynamometer data [25]. Additionally, another CNC tool wear experiment is carried out, in which current and vibration signals and the tool wear depth are sampled. Based on these sequential data and the corresponding tool wear depths, we compare the predictive ability of our model with that of several state-of-the-art models.

This paper is organized as follows. In Section 2, we review some related work about CNN and GRU. Based on classic CNN and GRU, the MCGRU is designed and its details are presented in Section 3. Two case studies on the prediction of tool wear are conducted in Section 4. More details about the model and future steps are discussed in Section 5. Finally, conclusions are presented in Section 6.

2 Review of Related Work

2.1 Convolutional Neural Network

The Convolutional Neural Network has proven very powerful in many recognition and classification tasks [26,27,28]. It has also shown the power to address sequential data in natural language processing tasks [29,30,31]. In the convolutional layers, filters slide over sequential data to extract features, and the pooling layers retain the most salient ones. Additionally, the training process can be sped up and the model's performance improved by adding batch normalization layers [32]. The capability of a CNN can be further improved by stacking the above layers to build a "deep" CNN. Besides depth, the width of a CNN also influences its performance. In the inception module [33], parallel branches consisting of convolutional and pooling layers with different kernels are designed. This architecture allows the model to recover both local features via smaller kernels and highly abstract features via larger ones.

In our MCGRU network, an architecture of six parallel CNN branches is designed to process the input sequential data before the result is fed to the GRU units. Kernels of different sizes are adopted in different branches to extract local and abstract features at the same time. The model itself learns which features are significant enough to be chosen.

2.2 RNN, LSTM and GRU

The Recurrent Neural Network (RNN) was proposed mainly to handle dependencies in sequential data, e.g., in natural language processing tasks. The hidden states in an RNN use the outputs of the previous states as the inputs of the next states, which means that the sequential information is preserved. As weights are shared across time, an RNN can process sequential input of any length. However, the problem of gradient exploding and vanishing emerged as the major obstacle to traditional RNN's performance. To avoid this problem, the Long Short-Term Memory network was proposed by Hochreiter and Schmidhuber [34] in 1997 and later improved by Gers et al. [35]. It is able to prevent the backpropagated error from vanishing [36] and to memorize a state over different time periods with the help of the input gate, forget gate and output gate, which manage the flow of information in the network. As another variant of RNN, the Gated Recurrent Unit (GRU) was introduced to solve the same vanishing-gradient problem. With only a reset gate and an update gate, GRU achieves performance comparable to LSTM. However, GRU has fewer parameters because it lacks an output gate and has a less complex structure, which means that it is more efficient and can be used in situations where sufficient data is unavailable. Owing to this effectiveness, GRU has been increasingly widely used to learn significant representations in time series data.

In the proposed MCGRU network, a two-layer GRU is adopted to process the output of the multi-scale convolutional layers. Effective representations are learned here and used for the prediction of tool wear.

2.3 Neural Networks and Tool Wear Prediction

Neural networks have been successfully used in machine condition monitoring tasks such as tool wear prediction because of their excellent feature extraction and representation learning capabilities [37,38,39,40,41,42]. Artificial Neural Networks (ANN) were adopted first and proved to perform well in machine condition monitoring tasks. However, as interference grows and working conditions become more complex, ANNs are no longer adequate for these problems. As a result, Convolutional Neural Networks (CNN) were introduced in this field. The depth of these networks and their learning ability enable them to learn what they need for these tasks. However, most such models make use of features manually extracted and designed from raw data, ignoring the representations and the relations between different time steps hidden in the sequential data. For tool wear prediction, this information about the condition of cutting tools remains to be discovered. Our proposed MCGRU combines a multi-scale CNN with GRU to learn features and representations without the intervention of human-designed features, so that the information behind the raw sensory data can be explored as fully as possible and the prediction accuracy improved.

3 Model


Before presenting the MCGRU network, some notations used in this paper are clarified here. The task is to design a model for tool wear prediction based on multiple channels of in-process sensory data. A labeled time series dataset is given as \(D = \left\{ {\left( {{\varvec{x}}_{i} ,y_{i} } \right)} \right\}_{i = 1}^{N}\), which contains \(N\) tool conditions \({\varvec{x}}_{i}\) and their corresponding labels \(y_{i}\), i.e., for each tool condition the tool wear is measured and recorded as \(y_{i}\). Assume that in each tool condition \({\varvec{x}}_{i}\), \(q\) channels of sensory data are sampled and each channel has length \(L\). For each channel, the whole sequence is divided into \(l\) sections, i.e., \(l\) time steps. The ith cutting tool condition is

$$ {\varvec{x}}_{i} = \left[ {\begin{array}{*{20}c} {{\varvec{x}}_{i}^{1} } & {{\varvec{x}}_{i}^{2} } & \cdots & {{\varvec{x}}_{i}^{l} } \\ \end{array} } \right]^{{\text{T}}} , $$
(1)

where vector \({\varvec{x}}_{i}^{t} \in R^{d}\) contains the multiple channels of sensory data sampled at time step \(t\), i.e., the tth section, \(d = q \cdot \left( {L/l} \right)\) is the dimensionality of \({\varvec{x}}_{i}^{t}\), and \(\left( \cdot \right)^{{\text{T}}}\) denotes the transpose. The goal is to predict the tool wear \(\hat{y}_{i}\) from \({\varvec{x}}_{i}\). In our proposed Multi-scale Convolutional Gated Recurrent Unit network, the multi-scale convolutional network functions as a feature extractor and the Gated Recurrent Unit functions as a temporal encoder. Six parallel and independent branches of Convolutional Neural Network with different kernels are designed to process the raw sensory data. The extracted local and abstract features are fed into a merge layer, on top of which a two-layer GRU is designed to learn significant representations. Finally, the prediction is performed by a fully connected layer and a regression layer. The MCGRU network is shown in Figure 3.
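To make the notation concrete, the following NumPy sketch shows how one tool condition can be reshaped from \(q\) raw channels into the \(l \times d\) matrix \({\varvec{x}}_{i}\) of Eq. (1). The array names and the random input are illustrative only; the sizes match the first case study of Section 4 (q = 6 channels, 40 sections of 512 points).

```python
import numpy as np

# Illustrative sizes: q channels, raw length L, l time steps (sections).
q, L, l = 6, 20480, 40
d = q * (L // l)                       # dimensionality of each time step, d = q*(L/l)

raw = np.random.randn(q, L)            # one tool condition: q channels of length L
# Split each channel into l sections of L/l points, then stack the q channels
# at every time step, giving the l x d matrix x_i of Eq. (1).
x_i = raw.reshape(q, l, L // l).transpose(1, 0, 2).reshape(l, d)
print(x_i.shape)                       # (40, 3072)
```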

Figure 3

Structure of the proposed MCGRU

3.1 Multi-Scale CNN

In each branch of the multi-scale CNN, a five-layer CNN is adopted, consisting of two convolutional layers, one max-pooling layer and two batch normalization layers. In the first convolutional layer of each branch, the kernel size is 1. This layer not only helps extract more significant features, but also reduces the number of model parameters. For example, in one-dimensional convolution, a CNN containing one convolutional layer with kernel size 7 can have more parameters than one containing two convolutional layers with kernel sizes 1 and 7, respectively. A batch normalization layer is placed after the first convolutional layer; batch normalization in hidden layers helps accelerate training and improve prediction accuracy. The output of the batch normalization layer is fed into the second convolutional layer. The second convolutional layers, with different kernels in different branches, extract features at multiple time scales hidden in the sequential data: small kernels extract local features, while large kernels extract abstract features. Based on these multi-time-scale features, the model itself learns which ones to focus on. The last two layers are another batch normalization layer and a max-pooling layer. The max-pooling layer compresses the previous feature maps to distill more significant features. Then, in a merge layer, all of the feature maps from the different branches are concatenated into a single feature map, so all features extracted by the branches are preserved. The organization of these two kinds of structure is shown in Figure 4. Details are presented in the following subsections. We take the operations in one branch as an example; operations in the other branches are the same.
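As an illustration, a minimal Keras sketch of one branch is given below, following the convolution, batch normalization, then ReLU ordering of Eq. (8). The filter counts and the pooling configuration are assumptions: Section 4.1.2 quotes (32, 64) kernels per branch but also reports a 40×32 branch output and a 40×192 concatenation, so here each branch is set to output 32 channels, and the pooling stride is set to 1 with "same" padding to preserve the 40-step length.

```python
from tensorflow import keras
from tensorflow.keras import layers

def cnn_branch(inputs, h, n1=32, n2=32, pool=2):
    """One branch: Conv(k=1) -> BN -> ReLU -> Conv(k=h) -> BN -> ReLU -> MaxPool."""
    x = layers.Conv1D(n1, kernel_size=1)(inputs)             # first conv, kernel size 1
    x = layers.BatchNormalization()(x)                       # Eq. (6)
    x = layers.Activation("relu")(x)                         # Eq. (8)
    x = layers.Conv1D(n2, kernel_size=h, padding="same")(x)  # multi-scale conv, kernel size h
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)
    # The pooling stride/padding are not stated in the paper; stride 1 with
    # "same" padding keeps the sequence length unchanged (an assumption).
    return layers.MaxPooling1D(pool_size=pool, strides=1, padding="same")(x)
```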

Figure 4

a Structure of five of the six CNN branches, b structure of the sixth CNN branch

3.1.1 Convolution

In the convolutional layer of each branch of the multi-scale CNN, the one-dimensional convolution operation is achieved by sliding a filter (kernel) \({\varvec{v}} \in R^{h \times d}\) over \({\varvec{x}}_{i} \in R^{l \times d}\) so that it convolves with the subsection \({\varvec{x}}_{i}^{t:t + h - 1} \in R^{h \times d}\) from time step \(t\) to time step \(t + h - 1\). The subsection \({\varvec{x}}_{i}^{t:t + h - 1}\) is given by:

$$ {\varvec{x}}_{i}^{t:t + h - 1} = \left[ {\begin{array}{*{20}c} {{\varvec{x}}_{i}^{t} } & {{\varvec{x}}_{i}^{t + 1} } & \cdots & {{\varvec{x}}_{i}^{t + h - 1} } \\ \end{array} } \right]^{{\text{T}}}, $$
(2)

where \(h\) is the kernel size. Additionally, a bias term \(b\) is added to get the complete convolution operation, which can be given as:

$$ c_{j}^{t} = {\varvec{v}}_{j} \circ {\varvec{x}}_{i}^{t:t + h - 1} + b $$
(3)

where \(j\) indexes the filters, and \(\circ\) denotes element-wise (Hadamard) multiplication followed by summation over all entries.

After the filter has slid over the whole of \({\varvec{x}}_{i}\) and the convolution operation is complete, we obtain a vector \({\varvec{c}}_{j}\), which is given by:

$$ {\varvec{c}}_{j} = \left[ {\begin{array}{*{20}c} {c_{j}^{1} } & {c_{j}^{2} } & \cdots & {c_{j}^{{\left( {l - h + 2p} \right)/s + 1}} } \\ \end{array} } \right]^{{\text{T}}}, $$
(4)

where \(p\) is the amount of zero padding, \(s\) is the sliding stride of the kernel, and \(\left( {l - h + 2p} \right)/s + 1\) is the length of the output of the convolution operation. Once \(s\) and \(p\) are set, the output length depends on the kernel size \(h\), so different kernel sizes result in different output lengths. This matters because each branch of our proposed multi-scale CNN uses a different kernel size.

In particular, to concatenate the outputs of the different branches, the outputs must have the same size. Therefore, zero padding is adopted in the convolution operation: each branch uses a different amount of zero padding, chosen so that the output has the same length as the input regardless of the kernel size. As a result, the feature map can be given by:

$$ {\varvec{c}}_{j} = \left[ {\begin{array}{*{20}c} {c_{j}^{1} } & {c_{j}^{2} } & \cdots & {c_{j}^{l} } \\ \end{array} } \right]^{{\text{T}}}. $$
(5)
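The relation between kernel size and padding is easy to check numerically. The sketch below is illustrative; it assumes a unit stride (\(s = 1\)), under which \(p = (h - 1)/2\) keeps the output length equal to \(l\) for every odd kernel size used in MCGRU.

```python
def conv_output_length(l, h, p, s):
    """Output length of a 1-D convolution, (l - h + 2p)/s + 1, as in Eq. (4)."""
    return (l - h + 2 * p) // s + 1

# With stride s = 1, zero padding p = (h - 1)/2 preserves the input length l
# for any odd kernel size h, so all branch outputs can be concatenated.
for h in (1, 3, 5, 7, 9, 11):          # the six kernel sizes used in Section 4.1.2
    assert conv_output_length(40, h, (h - 1) // 2, 1) == 40
```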

3.1.2 Batch Normalization

Instead of just normalizing the input of the CNN, we adopt batch normalization [32] layers to normalize the inputs within the network, using the mean and variance of the values in the current mini-batch. The operation of the batch normalization layer can be represented as:

$$ {\varvec{bn}}_{j} = {\text{BN}}\left( {{\varvec{c}}_{j} } \right). $$
(6)

Since the batch normalization layer does not change the feature map's size, we get:

$$ {\varvec{bn}}_{j} = \left[ {\begin{array}{*{20}c} {{\text{BN}}\left( {c_{j}^{1} } \right)} & {{\text{BN}}\left( {c_{j}^{2} } \right)} & \cdots & {{\text{BN}}\left( {c_{j}^{l} } \right)} \\ \end{array} } \right]^{{\text{T}}}. $$
(7)
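For reference, \({\text{BN}}\left( \cdot \right)\) denotes the standard batch normalization transform of Ref. [32]: each value is normalized with the mini-batch statistics and then rescaled by learnable parameters,

$$ {\text{BN}}\left( c \right) = \gamma \frac{{c - \mu_{B} }}{{\sqrt {\sigma_{B}^{2} + \varepsilon } }} + \beta, $$

where \(\mu_{B}\) and \(\sigma_{B}^{2}\) are the mean and variance over the current mini-batch, \(\varepsilon\) is a small constant for numerical stability, and \(\gamma\), \(\beta\) are learnable scale and shift parameters.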

3.1.3 Activation Function

After the convolution and batch normalization operations, an activation function is added to introduce non-linearity, allowing the network to learn complex non-linear functional relationships between inputs and outputs. The convolution, batch normalization and activation operations can therefore be written together as:

$$ a_{j}^{t} = f\left( {{\text{BN}}\left( {{\varvec{v}}_{j} \circ {\varvec{x}}_{i}^{t:t + h - 1} + b} \right)} \right), $$
(8)

where \(f\left( \cdot \right)\) is an activation function. Here, we choose Rectified Linear Units (ReLU) [43] as the activation function in our proposed model.

The above three operations result in a feature map, which can be given by

$$ {\varvec{a}}_{j} = \left[ {\begin{array}{*{20}c} {a_{j}^{1} } & {a_{j}^{2} } & \cdots & {a_{j}^{l} } \\ \end{array} } \right]^{{\text{T}}}. $$
(9)

3.1.4 Max-Pooling

By introducing pooling layers in the network, the size of the previous feature maps can be further reduced and more significant and abstract features can be extracted. Here, we adopt the max-pooling operation. In one-dimensional pooling with pooling length \(k\), the max-pooling operation slides a kernel over the feature map and keeps the maximum of every \(k\) consecutive values. We let the sliding stride equal \(k\), so the output of the max-pooling operation is:

$$ {\varvec{m}}_{j} = \left[ {\begin{array}{*{20}c} {m_{j}^{1} } & {m_{j}^{2} } & \cdots & {m_{j}^{{\left( {l - k + 2p} \right)/s + 1}} } \\ \end{array} } \right]^{{\text{T}}}, $$
(10)

where \(m_{j}^{i} = \max \left( {a_{j}^{{\left( {i - 1} \right)s + 1}} ,a_{j}^{{\left( {i - 1} \right)s + 2}} , \cdots ,a_{j}^{{\left( {i - 1} \right)s + k}} } \right)\).
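A minimal NumPy sketch of this non-overlapping max-pooling (stride equal to the pooling length \(k\)) is given below; the example values are illustrative.

```python
import numpy as np

def max_pool_1d(a, k):
    """Non-overlapping 1-D max-pooling with stride k, as in Eq. (10):
    keep the maximum of every k consecutive values of the feature map a."""
    n = len(a) // k                       # number of windows that fit entirely in a
    return a[: n * k].reshape(n, k).max(axis=1)

print(max_pool_1d(np.array([1.0, 3.0, 2.0, 5.0, 4.0, 0.0]), 2))  # [3. 5. 4.]
```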

3.1.5 Concatenation

In the concatenating layer, the feature maps from the different branches are concatenated into a single feature map that merges all the local and abstract features. Suppose that the jth feature map output by the ith branch is:

$$ {\varvec{m}}_{ij} = \left[ {\begin{array}{*{20}c} {m_{ij}^{1} } & {m_{ij}^{2} } & \cdots & {m_{ij}^{{\left( {l - k + 2p} \right)/s + 1}} } \\ \end{array} } \right]^{{\text{T}}}. $$
(11)

Then, the output of the concatenation layer is given as:

$$ {\varvec{Concatenation}} = \left[ {\begin{array}{*{20}c} {{\varvec{M}}_{1} } & {{\varvec{M}}_{2} } & \cdots & {{\varvec{M}}_{N} } \\ \end{array} } \right], $$
(12)

where \(N\) is the number of branches, and \({\varvec{M}}_{i}\) can be represented as

$$ {\varvec{M}}_{i} = \left[ {\begin{array}{*{20}c} {{\varvec{m}}_{i1} } & {{\varvec{m}}_{i2} } & \cdots & {{\varvec{m}}_{{iK_{i} }} } \\ \end{array} } \right], $$
(13)

where \(K_{i}\) is the number of feature maps output by the ith branch.

To summarize, in the multi-scale CNN, the shape of the input sequence is \(n \times l \times d\), where \(n\) represents the total number of working conditions. As described above, before the concatenating layer, the output shape of the ith branch is \(n \times \left( {\left( {l - k + 2p} \right)/s + 1} \right) \times K_{i}\). Across the branches, kernel sizes from small to large extract local and abstract features. Compared to the original raw sequence, these multi-time-scale features better represent the properties of the working conditions. After these features are merged in the concatenating layer, the following GRU learns significant representations of the working conditions. The framework of the multi-scale CNN is illustrated in Figure 5.
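Putting the pieces together, a minimal sketch of the multi-scale feature extractor is shown below. It reuses the cnn_branch() helper from the sketch in Section 3.1 and merges the branch outputs on the channel axis, as in Eq. (12); the kernel sizes follow Section 4.1.2.

```python
from tensorflow.keras import layers  # cnn_branch() as defined in the Section 3.1 sketch

def multi_scale_cnn(inputs, kernel_sizes=(1, 3, 5, 7, 9, 11)):
    """Six parallel branches with different kernel sizes, concatenated into a
    single feature map (Eq. (12)); every branch sees the same input."""
    branches = [cnn_branch(inputs, h) for h in kernel_sizes]
    return layers.concatenate(branches, axis=-1)
```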

Figure 5

Framework of multi-scale CNN

3.2 Deep GRU

Under real industrial conditions, clean sample data is difficult to obtain, and compared to LSTM, GRU is better at dealing with situations where sufficient data is unavailable. Here, on top of the multi-scale CNN, a two-layer GRU network is designed to excavate vital representations from the multi-time-scale features. The deep GRU is presented as follows.

3.2.1 Gated Recurrent Unit

In a GRU, the inputs are the hidden state \({\varvec{h}}_{t - 1}\) at the previous time step \(t - 1\) and the data \({\varvec{x}}_{t}\) at the current time step \(t\), and the output is the hidden state \({\varvec{h}}_{t}\). The output \({\varvec{h}}_{t}\) depends on the previous hidden state \({\varvec{h}}_{t - 1}\), the update gate \({\varvec{z}}_{t}\), the reset gate \({\varvec{r}}_{t}\) and the candidate hidden state \(\tilde{\user2{h}}_{t}\). The reset gate \({\varvec{r}}_{t}\) enables the unit to drop any information in the hidden state that is less meaningful or irrelevant, so as to focus on the information that is more important. The update gate \({\varvec{z}}_{t}\) determines how much information from the previous and the candidate hidden states is passed to the current hidden state [23]. The governing equations are:

$$ {\varvec{z}}_{t} = \sigma \left( {{\varvec{W}}_{z} {\varvec{x}}_{t} + {\varvec{U}}_{z} {\varvec{h}}_{t - 1} + {\varvec{b}}_{z} } \right), $$
(14)
$$ {\varvec{r}}_{t} = \sigma \left( {{\varvec{W}}_{r} {\varvec{x}}_{t} + {\varvec{U}}_{r} {\varvec{h}}_{t - 1} + {\varvec{b}}_{r} } \right), $$
(15)
$$ \tilde{\user2{h}}_{t} = \tanh \left( {{\varvec{W}}_{h} {\varvec{x}}_{t} + {\varvec{U}}_{h} \left( {{\varvec{r}}_{t} \circ {\varvec{h}}_{t - 1} } \right) + {\varvec{b}}_{h} } \right), $$
(16)
$$ {\varvec{h}}_{t} = \left( {1 - {\varvec{z}}_{t} } \right) \circ {\varvec{h}}_{t - 1} + {\varvec{z}}_{t} \circ \tilde{\user2{h}}_{t} , $$
(17)

where \(\sigma\) is the sigmoid activation function, \({\varvec{W}}_{z}\), \({\varvec{U}}_{z}\), \({\varvec{W}}_{r}\), \({\varvec{U}}_{r}\), \({\varvec{W}}_{h}\) and \({\varvec{U}}_{h}\) are shared weight matrices learned during training, and \({\varvec{b}}_{z}\), \({\varvec{b}}_{r}\), \({\varvec{b}}_{h}\) are learnable biases. The basic structure of a one-layer GRU is shown in Figure 6.

Figure 6

One-layer GRU
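Eqs. (14)–(17) translate directly into code. The NumPy sketch below implements a single GRU update step; the dictionary of weights is an illustrative stand-in for parameters that would be learned during training.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, p):
    """One GRU update following Eqs. (14)-(17). `p` holds the weight matrices
    W_z, U_z, W_r, U_r, W_h, U_h and the biases b_z, b_r, b_h."""
    z = sigmoid(p["W_z"] @ x_t + p["U_z"] @ h_prev + p["b_z"])              # update gate, Eq. (14)
    r = sigmoid(p["W_r"] @ x_t + p["U_r"] @ h_prev + p["b_r"])              # reset gate, Eq. (15)
    h_tilde = np.tanh(p["W_h"] @ x_t + p["U_h"] @ (r * h_prev) + p["b_h"])  # candidate, Eq. (16)
    return (1.0 - z) * h_prev + z * h_tilde                                 # new state, Eq. (17)
```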

3.2.2 Deep GRU

As mentioned above, the capability of a neural network can be improved by "going deeper" [44, 45]. A deep neural network contains more non-linear operations, so more abstract features and representations can be learned. Inspired by this idea, we stack two GRU layers to obtain a deep architecture in which each GRU layer contains a different number of units. In the deep GRU, as shown in Figure 7, the output of each hidden state in one layer propagates through time and also serves as the input to the hidden state in the next layer. Low-level features are therefore learned and passed to the next layer to learn higher-level representations. By stacking GRU layers, the network learns essential representations at different time scales more effectively.

Figure 7

Deep GRU

3.3 Fully Connected and Linear Regression Layer

The output representation of the GRU network is flattened into h and then fed into a fully connected layer in preparation for the linear regression layer. The operation of the fully connected layer is:

$$ {\varvec{o}} = f\left( {{\varvec{Wh}} + {\varvec{b}}} \right), $$
(18)

where o is the output of the fully connected layer, W is the transformation matrix, b is the bias, and \(f\left( \cdot \right)\) is the activation function; we use ReLU here. Finally, the fully connected layer's output o is fed into a regression layer, and the tool wear of the ith working condition is predicted as

$$ \hat{y}_{i} = {\varvec{Wo}}_{i} . $$
(19)

3.4 Training and Regularization of MCGRU

The Mean Absolute Error (MAE) is adopted as the loss in the training process, which is given by:

$$ loss = MAE = \frac{1}{n}\sum\limits_{i = 1}^{n} {\left| {y_{i} - \hat{y}_{i} } \right|} , $$
(20)

where n represents the total number of samples.

The optimizer we adopt is Root Mean Square Propagation (RMSProp) [46], a robust optimizer that exploits pseudo-curvature information. RMSProp is useful for mini-batch learning because the gradients are normalized by a running average of their recent magnitudes, enabling it to handle stochastic objectives properly. RMSProp also works well for recurrent neural networks such as LSTM and GRU.

As mentioned above, GRU rather than LSTM is chosen in our proposed model because in real working conditions there is usually insufficient labeled data. When the network goes deep and data is scarce, the network may be too complex to train and overfitting may appear. To mitigate this, regularization is added within the network. Here, we adopt a Dropout [47] layer after the GRU network, as well as after the fully connected layer. The dropout layer makes the network ignore randomly selected neurons during forward propagation, so the network does not rely too heavily on particular local features. In our proposed model, dropout is applied only during training, not during testing, and the dropout ratio is set to 0.3.
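The sketch below assembles the complete model as described in Sections 3.1–3.4, using the hyperparameters quoted in Section 4.1.2 (two GRU layers of 180 and 240 units, a 120-unit fully connected layer, dropout ratio 0.3, RMSProp with the MAE loss). It reuses multi_scale_cnn() from the earlier sketch; taking the final GRU state as the representation fed to the fully connected layer is an assumption, since the paper only says the GRU output is flattened.

```python
from tensorflow import keras
from tensorflow.keras import layers  # multi_scale_cnn() as defined in the Section 3.1 sketch

def build_mcgru(l=40, d=3072):
    """MCGRU: multi-scale CNN -> two-layer GRU -> dropout -> FC -> regression."""
    inputs = keras.Input(shape=(l, d))
    x = multi_scale_cnn(inputs)                      # Section 3.1 feature extractor
    x = layers.GRU(180, return_sequences=True)(x)    # first GRU layer, Section 3.2
    x = layers.GRU(240)(x)                           # second GRU layer (final state kept)
    x = layers.Dropout(0.3)(x)                       # dropout after the GRU, Section 3.4
    x = layers.Dense(120, activation="relu")(x)      # fully connected layer, Eq. (18)
    x = layers.Dropout(0.3)(x)                       # dropout after the FC layer
    outputs = layers.Dense(1)(x)                     # linear regression layer, Eq. (19)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer=keras.optimizers.RMSprop(), loss="mae")  # MAE loss, Eq. (20)
    return model
```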

4 Experiments

4.1 Case 1: High Speed CNC Machine Tool Wear Dataset

4.1.1 Descriptions of Datasets

The first experiment uses data from a high-speed CNC machine running dry milling operations [48]. The dataset is from the "prognostic data challenge 2010" database [25]. The experimental platform and its details are shown in Figure 8. In this experiment, six cutters are used to cut an identical workpiece, and each cutter makes 315 cuts. For training and testing our model, six channels of data, comprising forces and vibrations, are used. A LEICA MZ12 microscope was utilized to measure the flank wear of each flute after cutting, and the measured wear values were taken as the target values.

Figure 8

Details of the CNC machine and the data collection system

In this dataset, six cutting tools are used in the experiment, giving six collections of data (C1, C2, \(\cdots \), C6). To compare with the results in Ref. [17], we adopt three cutting tools, i.e., the three data collections C1, C4 and C6, as our training and testing sets. Each data collection contains 315 samples, corresponding to 315 tool wear values. To make good use of this dataset, a three-fold strategy is adopted: among the three data collections, two are taken as the training set and the other as the testing set, yielding three cases. For example, when C1 is the testing set and C4, C6 are the training set, the case is denoted c1; the other two cases, c4 and c6, are defined analogously. The details of these three cases are shown in Table 1.

Table 1 Details of the three dataset cases

As the sampling frequency is high, each channel's sampled sequence is divided into sections of 512 points, and the first forty sections are used. Each original sequence is thus transformed into a datum of length 40, and at each time step the dimensionality is 3072 (512 points × 6 channels). As described above, the input shape of the network is 630×40×3072 in the training process and 315×40×3072 in the testing process.

4.1.2 Experiment Setup

The models shown in Table 2 are compared with our proposed MCGRU model. Regression models, including LR, SVR and MLP, cannot process sequential data directly, so we first extract related features. Here, ten features, covering statistical, frequency and time-frequency domains, are extracted from the raw data; details are shown in Table 3. As there are six channels of signals, the dimensionality of the input is 60. LR has no hyperparameters. In SVR, the regularization parameter is set to 0.1 and the kernel is the Radial Basis Function (RBF). For the MLP, the sizes of the three hidden layers are set to (140, 280, 900), and ReLU is chosen as the activation function.

Table 2 Details of the compared models
Table 3 Details of the extracted features

The other compared models can address sequential data directly, so their input shape is 40×3072. The five-layer CNN has the same structure as one branch of our proposed MCGRU: the kernel sizes in the two convolutional layers are 1 and 7, the numbers of kernels are 32 and 64, and the pooling size is set to 2. The settings of the MCNN (Multi-scale Convolutional Neural Network) are the same as those of our MCGRU. For the basic recurrent models, including RNN, LSTM and GRU, the number of units is set to 192; for the deep recurrent models, including Deep RNN, Deep LSTM and Deep GRU, the numbers of units in the two layers are set to (180, 240). The CBLSTM (Convolutional Bi-Directional LSTM) was proposed by Zhao et al. [17], and the same settings as in Ref. [17] are adopted here. The CGRU (Convolutional GRU) has a five-layer CNN with the same settings as the previous CNN model and a two-layer GRU with (180, 240) units.

In our proposed MCGRU, from branch 1 to branch 6, the kernel sizes of the two convolutional layers are set to (1, 1), (1, 3), (1, 5), (1, 7), (1, 9), (1, 11), and the numbers of kernels are set to (32, 64). The kernel size of the pooling layer in all branches is set to 2. Since the input shape is 40×3072 and zero padding is adopted, the output shapes of all branches are the same, namely 40×32, and the six outputs are concatenated into an output of shape 40×192. The numbers of units in the following two GRU layers are set to (180, 240), and the number of output units of the fully connected layer is set to 120. All activation functions in our model are Rectified Linear Units (ReLU).

To evaluate the capability of the above models, the Mean Absolute Error (MAE) and the Root Mean Squared Error (RMSE) are adopted. The MAE reflects the average magnitude of the errors without considering their direction, while the RMSE, as a quadratic scoring rule, gives larger errors a higher weight. The MAE is given in Eq. (20), and the RMSE is given by:

$$ RMSE = \sqrt {\frac{1}{n}\sum\limits_{i = 1}^{n} {\left( {\hat{y}_{i} - y_{i} } \right)^{2} } } , $$
(21)

where \(\hat{y}_{i}\) represents the predicted tool wear and \(y_{i}\) is the actual tool wear.
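Both metrics are straightforward to compute; the following sketch implements them with NumPy.

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean Absolute Error, Eq. (20)."""
    return np.mean(np.abs(y_true - y_pred))

def rmse(y_true, y_pred):
    """Root Mean Squared Error, Eq. (21)."""
    return np.sqrt(np.mean((y_pred - y_true) ** 2))
```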

These models are trained and tested using a Linux Server with two NVIDIA 1080Ti GPUs and a 4.2 GHz INTEL i7-7700K CPU.

4.2 Case 2: Experiment of the Reliability of CNC Machine Tool

4.2.1 Descriptions of Datasets

The second experiment is a reliability test of a CNC machine tool, carried out on the CNC machine shown in Figure 9. The cutting tool, shown in Figure 10, is used to machine a 45# steel bar, and the related parameters are given in Table 4.

Figure 9

CNC machine system for tool wear testing. a CNC machine and data acquisition system, b tool wear monitoring system

Figure 10

Tool wear: a Tool used for processing, b wear in tool

Table 4 Details of the second experimental case

As shown in Table 4, three channels of data are sampled: the vibration signal, the AE-RMS signal, and the current signal of the spindle motor. The tool wear corresponding to each working condition is also measured and recorded as the label. We performed the experiment with 9 cutting tools and obtained 840 samples in total; all of the cutting tools have the same initial tool wear, and each sample corresponds to a working condition. Unlike the first experimental case, here we randomly chose samples from all of the cutting tools for the testing and training sets, so both sets contain samples from all nine cutting tools. The training set comprises 735 of the 840 samples, and the remaining 105 are used as the testing set. Similarly, as the sampling frequency is high, each signal's sequence is divided into sections of 256 points, and the first 20 sections are selected. Hence, each data sample is transformed into a sequential datum of length 20, and at each time step the dimensionality is 768 (256 points × 3 channels). As described above, the input shape of the network is 735 × 20 × 768 in the training process and 105 × 20 × 768 in the testing process.

4.2.2 Experiment Setup

The setup of this second experiment is almost the same as that of the first. The same compared models with the same settings are adopted, and the two indices used to evaluate performance are again the MAE and the RMSE. The models are trained and tested on a Linux server with two NVIDIA 1080Ti GPUs and a 4.2 GHz INTEL i7-7700K CPU.

4.3 Results

In this section, the comparisons of the above models based on the MAE and RMSE are presented. Table 5 shows the MAE of each model in the first experimental case, while Table 6 shows the RMSE. The MAE and RMSE of each model in the second experiment are shown in Table 7.

Table 5 MAE of all the models in the first experimental case
Table 6 RMSE of all the models in the first experimental case
Table 7 MAE and RMSE of all the models in the second experimental case

As shown in the tables, the regression models LR, SVR and MLP can predict tool wear based on the features extracted from raw data, but they are not as good as the convolutional and recurrent models. Linear Regression performs worst in this task because it is inherently linear and cannot make full use of the extracted features. SVR and MLP perform better because they introduce nonlinearity, so the relationships among different features can be better explored. In SVR, by adopting different kernels, the samples can be mapped to a high-dimensional space; the RBF kernel chosen here shows its power in regression tasks. The MLP actively searches for an efficient mapping, which is effective in a way different from SVR.

Compared to the above regression models, the convolutional and recurrent models can work on raw data to learn significant features and representations, which gives them better performance. By choosing different kernels, a CNN can extract local or abstract features. In this task, MCNN performs better than CNN because it contains kernels of different sizes to extract local and abstract features at the same time; the depth and width introduced in these two models also help in predicting tool wear. Long-term dependencies in sequential data cannot be discovered by the convolution operation, but they can be captured by recurrent models. As the tables show, in this time series task the recurrent models perform slightly better than the convolutional ones. LSTM and GRU are better than the basic RNN because their gates make them more powerful at capturing long-term dependencies. Moreover, the amount of data in this task is not large, which plays to GRU's advantages when sufficient data is unavailable; indeed, GRU performs better than LSTM here. As expected, the three deep recurrent models perform better than the three shallow ones.

In Ref. [17], the proposed Convolutional Bi-Directional LSTM combines a local feature extractor (CNN) with a temporal encoder (deep bi-directional LSTM), which can excavate useful features hidden in the raw sensory data in both the forward and backward directions. The CBLSTM performs better than most of the above convolutional and recurrent models. The CGRU also performs well, as the GRU is likewise able to learn significant representations from the features extracted by the CNN. Notably, in the second experimental case, the deep GRU and CGRU perform even better than the CBLSTM, which shows the power of GRU in dealing with small amounts of data. Our proposed model, the MCGRU, performs best among all the compared models.

These results reveal that much information hidden behind raw sensory data cannot be discovered by hand-designed features, whereas the multi-scale CNN is able to filter the noise of the real working environment and explore that information as fully as possible. The deep GRU can excavate the temporal information to find a more accurate relationship between the input and the output, namely the raw sensory data and the predicted tool wear. As the network goes wider, more meaningful features at different time scales can be discovered, and as it goes deeper, abstract and significant representations can be learned. The combination of the multi-scale feature extractor (multi-scale CNN) and the temporal encoder (deep GRU) is therefore shown to perform well in the task of tool wear prediction.

More specifically, the predicted tool wear, the corresponding actual tool wear, and the error between the two are illustrated in Figures 11, 12, 13, 14. The first three figures, i.e., the results of the first experimental case, show that the trend of cutting tool degradation is robustly captured and the error is acceptable. Notably, in Figure 14 nine ascending segments can be found in the actual tool wear curve: in the second experimental case the testing set contains samples from all nine cutting tools, and each ascending segment corresponds to one tool. In this case, the results are also satisfying. Moreover, each training epoch takes about 1 s, and at test time only 0.8 s is needed to predict the tool wear of about 300 samples, which means that our proposed model is efficient enough for real-time prediction.

Figure 11

Results of the first experimental case, when c1 is the testing set: the prediction of the tool wear, the corresponding actual tool wear, and the error between these two values

Figure 12

Results of the first experimental case, when c4 is the testing set: the prediction of the tool wear, the corresponding actual tool wear, and the error between these two values

Figure 13

Results of the first experimental case, when c6 is the testing set: the prediction of the tool wear, the corresponding actual tool wear, and the error between these two values

Figure 14

Results of the second experimental case: the predicted tool wear, the actual tool wear and the error

5 Discussion

In this section, we discuss the impact of the number of the branches in the Multi-scale CNN and the influence of the depth of the GRU. Some insights and motivation for the future steps are also discussed.

(1) As we go wider by using multiple branches to extract more features, it is important to point out that this increases the number of model parameters, which makes training more difficult and raises the risk of overfitting. Based on dataset case c1, we compare six branch counts (2, 4, 6, 8, 10, 20) in MCGRU; the MAE and RMSE results are given in Table 8. As the number of branches increases, the performance of the model first improves and then remains almost the same, and with 10 or 20 branches the performance degrades. This means that blindly increasing the number of branches harms the model rather than improving it. We therefore adopt 6 CNN branches in our MCGRU.

(2) The depth of the model also affects its performance. We vary the number of GRU layers in the MCGRU to explore the impact of depth, setting it to (2, 4, 6, 8); the results are shown in Table 9. The performance of these four models is almost the same. A reasonable explanation is that in our two experiments there is not sufficient labeled data, so a shallow GRU is powerful enough to discover the information behind the data. When a large amount of data is available, a GRU with more layers could be tried to further improve the capability of the model.

(3) Robustness is also important in evaluating a model's performance. In real working environments, the quality of the sampled signals may be degraded by noise, so it is important and interesting to build a model that remains robust under heavy noise. Moreover, in our settings different signals are combined directly; designing a better way of fusing the data from different sensors would be meaningful future work.

Table 8 Comparison of different numbers of branches in the MCGRU
Table 9 Comparison of different numbers of GRU layers in the MCGRU model

6 Conclusions

(1) In this paper, we proposed a Multi-scale Convolutional Gated Recurrent Unit network (MCGRU) to address the tool wear prediction task. We interpret the structure of this model through its feature extractor, the multi-scale CNN, and its temporal encoder, the deep GRU. The multi-scale CNN extracts both local and abstract features with kernels of different sizes, and the deep GRU captures long-term dependencies and learns significant representations based on the features extracted by the multi-scale CNN.

(2) Moreover, GRU performs better when there is insufficient labeled data, as in real working conditions. Profiting from these advantages, the MCGRU makes accurate and effective tool wear predictions based on raw sensory data, without expert knowledge or feature engineering. Its satisfactory performance is further verified by two experimental cases and comparisons with other models.

References

1. T Li, Z Zhao, C Sun, et al. Multi-receptive field graph convolutional networks for machine fault diagnosis. IEEE Transactions on Industrial Electronics, 2020, https://doi.org/10.1109/TIE.2020.3040669.

2. Z Mo, J Wang, H Zhang, et al. Weighted cyclic harmonic-to-noise ratio for rolling element bearing fault diagnosis. IEEE Transactions on Instrumentation and Measurement, 2020, 69(2): 432-442.

3. L L Cui, X Wang, Y G Xu, et al. A novel switching unscented Kalman filter method for remaining useful life prediction of rolling bearing. Measurement, 2019, 135: 678-684.

4. H Wang, S Li, L Song, et al. A novel convolutional neural network based fault recognition method via image fusion of multi-vibration-signals. Computers in Industry, 2019, 105: 182-190.

5. N Ghosh, Y B Ravi, A Patra, et al. Estimation of tool wear during CNC milling using neural network-based sensor fusion. Mechanical Systems and Signal Processing, 2007, 21: 466-479.

6. D E Dimla. Sensor signals for tool-wear monitoring in metal cutting operations—A review of methods. International Journal of Machine Tools and Manufacture, 2000, 40(8): 1073-1098.

7. Y C Yen, J Söhner, B Lilly, et al. Estimation of tool wear in orthogonal cutting using the finite element analysis. Journal of Materials Processing Technology, 2004, 146(1): 82-91.

8. J S Strenkowski, J T Carroll. A finite element model of orthogonal metal cutting. Journal of Engineering for Industry, 1985, 107(4): 349-354.

9. E Ceretti, P Fallböhmer, W T Wu, et al. Application of 2D FEM to chip formation in orthogonal cutting. Journal of Materials Processing Technology, 1996, 59(1-2): 169-180.

10. I S Jawahir, O W Dillon Jr, A K Balaji, et al. Predictive modeling of machining performance in turning operations. Machining Science and Technology, 1998, 2: 253-276.

11. T Özel, M Lucchi, C A Rodríguez, et al. Prediction of chip formation and cutting forces in flat end milling: comparison of process simulations with experiments. Technical Paper - Society of Manufacturing Engineers, 1998, 98(250): 1-6.

12. M Shatla, Y C Yen, T Altan. Tool-workpiece interface in orthogonal cutting - application of FEM modeling. Transactions - North American Manufacturing Research Institution of SME, 2000: 173-178.

13. T J Ko, D W Cho. Cutting state monitoring in milling by a neural network. International Journal of Machine Tools and Manufacture, 1994, 34: 659-676.

14. T Özel, Y Karpat. Predictive modeling of surface roughness and tool wear in hard turning using regression and neural networks. International Journal of Machine Tools and Manufacture, 2005, 45: 467-479.

15. V S Sharma, S K Sharma, A K Sharma. Cutting tool wear estimation for turning. Journal of Intelligent Manufacturing, 2008, 19: 99-108.

16. K V Rao, B S N Murthy, N M Rao. Prediction of cutting tool wear, surface roughness and vibration of work piece in boring of AISI 316 steel with artificial neural network. Measurement, 2014, 51: 63-70.

17. R Zhao, R Yan, J Wang, et al. Learning to monitor machine health with convolutional bi-directional LSTM networks. Sensors, 2017, 17(2): 273.

18. J Wang, W Peng, R X Gao. Enhanced particle filter for tool wear prediction. Journal of Manufacturing Systems, 2015, 36: 35-45.

19. R Zhao, D Wang, R Yan, et al. Machine health monitoring using local feature-based gated recurrent unit networks. IEEE Transactions on Industrial Electronics, 2018, 65(2): 1539-1548.

20. J Taborri, E Scalona, E Palermo, et al. Validation of inter-subject training for hidden Markov models applied to gait phase detection in children with cerebral palsy. Sensors, 2015, 15: 24514-24529.

21. K Wei, L Wu. Mobile location with NLOS identification and mitigation based on modified Kalman filtering. Sensors, 2011, 11: 1641-1656.

22. H D Yang. Sign language recognition with the kinect sensor based on conditional random fields. Sensors, 2015, 15: 135-147.

23. J Schmidhuber. Deep learning in neural networks: An overview. Neural Networks, 2015, 61: 85-117.

24. K Cho, B van Merrienboer, C Gulcehre, et al. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv:1406.1078, 2014.

25. PHM Society. PHM data challenge 2010. https://www.phmsociety.org/competition/phm/10, 2010.

26. Y LeCun, B Boser, J S Denker, et al. Handwritten digit recognition with a back-propagation network. Advances in Neural Information Processing Systems, 1990, 2: 396-404.

27. T Li, Z Zhao, C Sun, et al. WaveletKernelNet: An interpretable deep neural network for industrial intelligent diagnosis. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2021, https://doi.org/10.1109/TSMC.2020.3048950.

28. T Li, Z Zhao, C Sun, et al. Adaptive channel weighted CNN with multisensor fusion for condition monitoring of helicopter transmission system. IEEE Sensors Journal, 2020, 20(15): 8364-8373.

29. O Abdel-Hamid, A R Mohamed, H Jiang, et al. Applying convolutional neural networks concepts to hybrid NN-HMM model for speech recognition. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2012: 4277-4280.

30. Y Kim. Convolutional neural networks for sentence classification. arXiv:1408.5882, 2014.

31. R Zhao, K Mao. Topic-aware deep compositional models for sentence classification. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2017, 25: 248-260.

32. S Ioffe, C Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. Proceedings of the 32nd International Conference on Machine Learning, 2015: 448-456.

33. C Szegedy, W Liu, Y Jia, et al. Going deeper with convolutions. 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015: 1-9, https://doi.org/10.1109/CVPR.2015.7298594.

34. S Hochreiter, J Schmidhuber. Long short-term memory. Neural Computation, 1997, 9: 1735-1780.

35. F A Gers, J Schmidhuber, F Cummins. Learning to forget: Continual prediction with LSTM. International Conference on Artificial Neural Networks, 1999: 850-855.

36. S Hochreiter. Untersuchungen zu dynamischen neuronalen Netzen. Diploma thesis, Technische Universität München, 1991.

37. R Huang, Y Liao, S Zhang, et al. Deep decoupling convolutional neural network for intelligent compound fault diagnosis. IEEE Access, 2019, 7: 1848-1858.

38. C Sun, M Ma, Z Zhao, et al. Deep transfer learning based on sparse auto-encoder for remaining useful life prediction of tool in manufacturing. IEEE Transactions on Industrial Informatics, 2019, 15(4): 2416-2425.

39. F Jia, Y Lei, J Lin, et al. Deep neural networks: A promising tool for fault characteristic mining and intelligent diagnosis of rotating machinery with massive data. Mechanical Systems and Signal Processing, 2016, 72-73: 303-315.

40. H Shao, H Jiang, H Zhang. Electric locomotive bearing fault diagnosis using a novel convolutional deep belief network. IEEE Transactions on Industrial Electronics, 2018, 65(3): 2727-2736.

41. C Sun, M Ma, Z Zhao, et al. Sparse deep stacking network for fault diagnosis of motor. IEEE Transactions on Industrial Informatics, 2018, 14: 3261-3270.

42. E O Ezugwu, S J Arthur, E L Hines. Tool-wear prediction using artificial neural networks. Journal of Materials Processing Technology, 1995, 49: 255-264.

43. V Nair, G E Hinton. Rectified linear units improve restricted Boltzmann machines. Proceedings of the 27th International Conference on Machine Learning, 2010.

44. G E Hinton. Learning multiple layers of representation. Trends in Cognitive Sciences, 2007, 11: 428-434.

45. Y Bengio. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2009.

46. T Tieleman, G Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 2012, 4: 26-31.

47. N Srivastava, G Hinton, A Krizhevsky, et al. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 2014, 15: 1929-1958.

48. X Li, B S Lim, J H Zhou, et al. Fuzzy neural network modelling for tool wear estimation in dry milling operation. Annual Conference of the Prognostics and Health Management Society, 2009, 1(1): 1-11.


Acknowledgements

The authors sincerely thank Mr. Tianfu Li for his critical discussion and reading during manuscript preparation.

Funding

Supported in part by the National Natural Science Foundation of China (Grant Nos. 51835009, 51705398), Shaanxi Province 2020 Natural Science Basic Research Plan (Grant No. 2020JQ-042), and Aeronautical Science Foundation (Grant No. 2019ZB070001).

Author information


Contributions

WX: Writing, review and editing; HM: review and discussion; ZZ: review; JL: review; CS: Revision, editing and supervision; RY: Review and supervision. All authors read and approved the final manuscript.

Authors’ Information

Weixin Xu, born in 1994, received the M.S. degree in mechanical engineering from Xi’an Jiaotong University, China, in 2020. His current research is focused on signal processing and deep learning algorithms for machinery health monitoring.

Huihui Miao, born in 1989, is currently a PhD candidate at Xi’an Jiaotong University, China. She received her bachelor’s degree from Xi’an Jiaotong University, China, in 2011. Her current research interest lies in machine learning for machinery modeling, monitoring, and diagnosis.

Jinxin Liu, born in 1988, is currently an associate professor at Xi’an Jiaotong University, China. He received the PhD degree from Xi’an Jiaotong University, China, in 2016. His current research interests include active noise and vibration control, adaptive filter and control theory, precision engineering and control, condition monitoring, and system development.

Zhibin Zhao, born in 1993, is currently a lecturer at Xi’an Jiaotong University, China. He received the PhD degree from Xi’an Jiaotong University, China, in 2020. His current research is focused on sparse signal processing and machine learning algorithms for machinery health monitoring and healthcare.

Chuang Sun, born in 1986, is currently an associate professor at Xi’an Jiaotong University, China. He received the PhD degree from Xi’an Jiaotong University, China, in 2014. His research interests include manifold learning, deep learning, sparse representation, mechanical fault diagnosis and prognosis, and remaining useful life prediction.

Ruqiang Yan, born in 1975, is currently a Professor at Xi’an Jiaotong University, China. He received the PhD degree from the University of Massachusetts Amherst, USA, in 2007. His research interests include nonlinear time-series analysis, multidomain signal processing, and energy-efficient sensing and sensor networks for the condition monitoring and health diagnosis of large-scale, complex, dynamical systems.

Corresponding author

Correspondence to Ruqiang Yan.

Ethics declarations

Competing Interests

The authors declare no competing financial interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


Cite this article

Xu, W., Miao, H., Zhao, Z. et al. Multi-Scale Convolutional Gated Recurrent Unit Networks for Tool Wear Prediction in Smart Manufacturing. Chin. J. Mech. Eng. 34, 53 (2021). https://doi.org/10.1186/s10033-021-00565-4
