
Discerning Weld Seam Profiles from Strong Arc Background for the Robotic Automated Welding Process via Visual Attention Features

Abstract

In the robotic welding process with thick steel plates, laser vision sensors are widely used to profile the weld seam and implement automatic seam tracking. Weld seam profile extraction (WSPE) is a crucial step for identifying the feature points of the extracted profile to guide the welding torch in real time. The visual information processing system may collapse when interference data points in the image survive the feature point identification phase, which results in low tracking accuracy and poor welding quality. This paper presents a visual attention feature-based method to extract the weld seam profile (WSP) from a strong arc background using clustering results. First, a binary image is obtained through the preprocessing stage. Second, all data points with a gray value of 255 are clustered with the nearest neighborhood clustering algorithm. Third, a strategy is developed to discern one cluster belonging to the WSP from the appointed candidate clusters in each loop, and a scheme is proposed to extract the entire WSP using visual continuity. Compared with previous methods, the proposed method can extract more useful details of the WSP and has better stability in terms of removing the interference data. Extensive WSPE tests with butt joints and T-joints demonstrate the anti-interference ability of the proposed method, which contributes to smoothing the welding process and shows its practical value in robotic automated welding with thick steel plates.

1 Introduction

Robotic automated arc welding processes need different types of sensors to acquire various useful information for welding state monitoring, control of the welding torch, etc. [1, 2]. Of these sensors, vision sensors are the most widely used [3], and laser vision sensors are commonly employed to detect the weld seam profile (WSP) in robotic thick-steel-plate welding. To implement multipass welding, real-time weld seam profile extraction (WSPE) is an indispensable step [4], which makes it possible to guide the welding torch using the identified feature points of the extracted WSP. There are considerable adverse factors in weld seam profile extraction, such as arc flash, fume, and spatter, which contaminate the captured images with different kinds of interference. It is thus critical to develop effective methods that extract the WSP from the interference background and overcome these adverse factors. Otherwise, the visual information processing system may provide a false tracking position for the welding torch during the information extraction process.

Different joints result in various appearances of the WSP in the captured image. A review of how to eliminate the interference data for WSPE with typical butt, fillet, and lap joints is given below. To recognize the image coordinates of the seam center, fast template matching and a fast Hough transform were presented in Ref. [5]. To extract feature points of V-shaped welding seams, an improved Otsu algorithm and a line detection algorithm were employed by Muhammad et al. [6]. Fan et al. [7] extracted the butt welding center and the laser stripe by row scanning and column scanning, respectively. To stay robust against heavy noise, a multilayer hierarchical vision processing architecture integrated with an effective bottom-up and top-down combined inference algorithm was developed in Ref. [8]. A Faster R-CNN algorithm has also been proposed to separate the WSP from the background and eliminate noise interference [9]. Liu et al. [10] presented a series of preprocessing operations for WSPE in robotic underwater welding, such as power transformation, limited contrast histogram equalization, the top-hat operation, and unidirectional structuring element cascade filtering, and the preprocessed image was further segmented by the mean shift algorithm. Du et al. [11] proposed a three-stage algorithm to extract the WSP, in which the potential WSP regions were first selected, the regions were then ranked by their corrected scene irradiance densities, and column-wise peak detection was finally performed using the ranks.

It is noted that a common scheme is used to extract the WSP: the region of interest (ROI) is first determined to reduce the computation load, various filters are then used to clear up noise, such as median filters [12, 13], Gaussian filters [14, 15], and Wiener filters [16], and thresholding and denoising are finally applied to remove the interference data points. However, no literature introduces how to further eliminate the interference data points that remain after denoising in WSPE with V-grooves.

The above research on WSPE is confined to V-grooves of butt joints, and the following review concentrates on WSPE of lap joints. To determine the region of interest, the Radon transform is applied to the captured image, and median filtering and thresholding are then also used for denoising [17, 18]. Zhang et al. [19] used a similar method to extract the WSP in laser beam welding. Gu et al. [20] proposed an image preprocessing method, including adaptive threshold segmentation and smoothing, to eliminate the impact of arc light, light reflection, and splashes on the captured image.

In our previous research, we presented a number of methods based on the visual attention mechanism to extract WSPs from strong arc backgrounds for butt and fillet joints, such as visual saliency [4, 21] and visual attention models [22]. With these methods, WSPs can be highlighted from the uneven background with strong arc regions. However, these methods can only strengthen the WSP with regard to intensity. During the subsequent feature point identification of the extracted WSP to guide the welding torch for multipass welding, thresholding is commonly used to further remove interference data points as a crucial step that simplifies data processing. An issue then arises: interference data points more or less survive, typically when the entire arc region is also in the image, which leads to wrong feature points and thus an inappropriate welding position. It is a real challenge to effectively separate the data points of the WSP from random interference data points [20]. Clustering algorithms using a designated Euclidean distance threshold are employed in Refs. [4, 21–23] to discern the data points belonging to the WSP in the binary image, which produces many clusters, and the length of each cluster is used to differentiate the segments that belong to the WSP from interference clusters.

In clustering-based methods, various clustering tests show that it is very hard to discern the clusters belonging to the WSP from the others merely by their lengths in space, because spatter can also be imaged with large spans in the captured images and occurs randomly. Thresholding is so important an operation that it is often used in the literature to lessen data-processing difficulty, but interference data points usually survive, more or less, after this operation. Currently, few studies intentionally discuss how to solve this issue effectively. This paper aims at discerning the clusters belonging to the WSP using the visual attention features with which our eyes can easily accomplish the task, and strives to keep more useful details of the WSP for more effective feature identification of the extracted WSP. Two typical kinds of WSPs, with butt joints and T-joints, are used to show the anti-interference ability of the proposed method.

2 Issue Derivation

There are cases in which interference data points survive after some preprocessing algorithms have been applied to the images at certain sampling times, particularly when thresholding is carried out as the final step (the welding system refers to Refs. [21, 22]). Figure 1 illustrates the acquisition process of the image with laser stripes, namely, WSPs. Figure 2 gives the preprocessing results of three methods presented previously in the literature, which shows that interference data points survive in most cases. In practice, these data points lead to fake feature points during the seam tracking stage (see Figure 3) and terminate the automated welding process.

Figure 1. WSP acquisition process with T-joints and butt joints: a vision sensor based on structured light for T-joints, b imaging of T-joints, c typical image, d vision sensor with butt joints, e typical image with butt joints

Figure 2. Examples of extracting the WSP using the methods proposed in the literature: a raw images captured in different welding processes, b results using the method proposed in Ref. [21], c results using the method proposed in Ref. [22], and d results using the method proposed in Ref. [12]

Figure 3. Example of how surviving interference data points influence the automated welding process: a successful case in which all interference data points have been completely removed, b unsuccessful case in which fake feature points are identified because of the surviving interference data points

The coming problem is how to automatically and effectively differentiate the data points that belong to the WSP from the surviving interference data points. As we previously presented, a clustering-based method is used to solve this problem (the clustering algorithm refers to Ref. [4], and the Euclidean distance threshold used in this algorithm is set to 2 pixels). In this paper, this method is improved with better anti-interference ability: visual attention features refined from the visual attention process of our eyes are used together with the corresponding empirical knowledge. Note that the searching sequence of data points used in the related clustering process is from left to right and from top to bottom, as shown in Figure 4 (a minimal sketch of this clustering pass follows Figure 4).

Figure 4. Illustration of the searching direction of data points used in the clustering process
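To make this step concrete, the following is a minimal sketch, not the authors' implementation, of a nearest neighborhood clustering pass over the white pixels (gray value 255) of the binary image, using the 2-pixel Euclidean distance threshold and the left-to-right, top-to-bottom search order described above. The function name and the brute-force structure are illustrative assumptions.

```python
import numpy as np

def cluster_white_pixels(binary_img, dist_thresh=2.0):
    """Group white pixels (value 255) into clusters: a pixel joins the first
    cluster that already contains a point within dist_thresh of it; otherwise
    it starts a new cluster. Pixels are visited left to right, top to bottom."""
    ys, xs = np.nonzero(binary_img == 255)
    order = np.lexsort((ys, xs))            # primary key x (columns), secondary key y (rows)
    points = np.stack([xs[order], ys[order]], axis=1).astype(float)

    clusters = []                           # each cluster is a list of (x, y) points
    for p in points:
        assigned = False
        for cluster in clusters:
            d = np.linalg.norm(np.asarray(cluster) - p, axis=1)
            if d.min() <= dist_thresh:      # nearest-neighborhood rule
                cluster.append(tuple(p))
                assigned = True
                break
        if not assigned:
            clusters.append([tuple(p)])
    return clusters
```

This brute-force loop only shows the rule; with the 2-pixel threshold, each connected stripe segment or spatter blob typically ends up as one cluster, which is the input to the discerning strategy developed in the following sections.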

3 Visual Feature Analysis and Selection for Discerning WSPs

In this section, the visual features that come from the visual attention process of our eyes and are used to build a vision-based method for extracting the WSP are analyzed. Three visual features are defined as the main factors used to discern the segments of the WSP from the interference data background. The analysis is organized as weld seam characteristic analysis, visual attention processes, and visual feature selection.

The light projected onto the workpiece by the laser commonly appears stripe-like whatever the joint is. The width of the stripe is about 5 pixels in the vertical direction of the image, and the stripe appears continuous in the horizontal direction. The intensity of the stripe may not be uniform because the projection distances differ, which produces a diverse situation in which some parts of the stripe are weak with regard to intensity and some pieces of the stripe are lost when thresholding is carried out.

Although this diverse case occurs, we can still easily recognize the regions belonging to the WSP with our eyes against the interference background. What gives us this anti-interference ability? The following aspects should be considered. The first is that the stripe extends naturally, which means that every piece of the WSP (all pieces actually exist as clusters in the clustering results) adjoins the others in space. The minimum Euclidean distance between different clusters, denoted \( D_{j} \) (subscript \( j \) means the jth cluster), should therefore be the first factor that influences the visual decision-making process of discerning the WSP. In addition, pieces of the WSP that adjoin each other have smaller slope differences than those between a piece of the WSP and an interference cluster. Figure 5 illustrates the process of calculating the slope difference \( \overline{{k_{j} }} \).

Figure 5. Illustration of the definition of the slope difference \( \overline{{k_{j} }} \): a thresholding result with the interference data, b four clusters in which the green cluster is deemed the previously discerned segment of the WSP and the rest are candidate clusters to be discerned, c the slope differences between the last discerned cluster and the candidate ones

\( \overline{{k_{j} }} \) is obtained from the following steps. Firstly, linear forms of all clusters are produced from their average positions. Secondly, the reference slope \( k_{mean}^{l} \) (\( l \) is the lth identification process), illustrated as the green line in Figure 5c, is calculated using the final part (ten points are automatically selected) of the green cluster in Figure 5b, which is taken to be the last discerned cluster belonging to the WSP. Thirdly, \( k_{ave}^{j} \) is determined using two groups of data points: the ten points mentioned before and the initial part (two to ten points, also automatically selected in the related algorithm) of every cluster to be discerned; \( k_{ave}^{j} \) is acquired by averaging the slopes. Finally, \( \overline{{k_{j} }} \) is formulated with Eq. (1):

$$ \overline{{k_{j} }} = \left| {k_{mean}^{l} - k_{ave}^{j} } \right|. $$
(1)

Figure 5c intuitively shows \( \overline{{k_{pink} }} \approx \overline{{k_{purple} }} \to 0 \), which means that the clusters marked with pink and purple should be considered segments of the WSP with regard to the slope difference.
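As a hedged illustration of Eq. (1), the sketch below estimates the reference slope \( k_{mean}^{l} \) from the last ten points of the most recently discerned cluster, averages the slopes between those points and the initial points of a candidate cluster to obtain \( k_{ave}^{j} \), and returns their absolute difference. The point counts follow the description above; the fitting and averaging details (and all names) are assumptions.

```python
import numpy as np

def slope_difference(last_cluster, candidate, n_end=10, n_init=10):
    """Approximate Eq. (1): |k_mean^l - k_ave^j| for one candidate cluster.
    Both arguments are arrays of (x, y) points ordered by x."""
    last_cluster = np.asarray(last_cluster, float)
    candidate = np.asarray(candidate, float)

    end_pts = last_cluster[-n_end:]         # final part of the last discerned cluster
    init_pts = candidate[:n_init]           # initial part of the candidate cluster

    # Reference slope k_mean^l: least-squares line through the end points.
    k_mean = np.polyfit(end_pts[:, 0], end_pts[:, 1], 1)[0]

    # k_ave^j: average of the slopes joining each end point to each initial point.
    slopes = [(y1 - y0) / (x1 - x0)
              for x0, y0 in end_pts
              for x1, y1 in init_pts
              if x1 != x0]
    k_ave = float(np.mean(slopes))

    return abs(k_mean - k_ave)              # Eq. (1)
```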

The third factor that influences the discrimination of the WSP with our eyes is the length \( L_{j} \) of every cluster, defined as the maximum Euclidean distance within the cluster. This factor is useful as a visual feature because the pieces pertaining to the WSP usually have larger spans in space than the clusters surrounding them, particularly when the adopted threshold is set small in the binarization process. Therefore, the cluster with the greater length should more naturally be regarded as part of the WSP when the other factors cannot work. Figure 6 shows a case in which cluster B should be visually discerned as part of the WSP more naturally than cluster A.

Figure 6. Example of the case in which clusters with larger sizes should be discerned as part of the WSP rather than ones with smaller sizes when other factors cannot work

This section presents the three factors \( D_{j} \), \( \overline{{k_{j} }} \), and \( L_{j} \) as the visual features for imitating the observation process of our eyes to discern the clusters belonging to the WSP.
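The other two features are straightforward to compute from the cluster coordinates. Under the same illustrative naming assumptions, a minimal sketch of \( D_{j} \) (minimum inter-cluster distance) and \( L_{j} \) (maximum intra-cluster distance) follows.

```python
import numpy as np
from scipy.spatial.distance import cdist, pdist

def min_distance(last_cluster, candidate):
    """D_j: minimum Euclidean distance between two clusters of (x, y) points."""
    return cdist(np.asarray(last_cluster, float),
                 np.asarray(candidate, float)).min()

def cluster_length(cluster):
    """L_j: maximum pairwise Euclidean distance within one cluster."""
    cluster = np.asarray(cluster, float)
    return 0.0 if len(cluster) < 2 else pdist(cluster).max()
```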

4 Scheme of Discerning WSPs

The scheme of discerning the WSP from interference clusters comprises three stages applied to the clustering results (see Figure 7). The first stage is thinning all the clusters, which means using the average heights of these clusters to represent them in the image (a sketch of this thinning step follows Figure 7). The second stage is to determine the first piece of the WSP based on the imaging characteristics of the joint profile. Because the profile features of butt and fillet joints differ, this paper presents two methods to recognize the initial part of the WSP for the two kinds of joints. Figure 8 shows the requirements that must be satisfied in these methods using the visual features of different WSPs. The last stage is the cyclic identification process, in which only one cluster is discerned as a piece of the WSP in each loop.

Figure 7. Scheme of discerning clusters that belong to the WSP from interference clusters
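The thinning stage can be read as replacing, within each cluster, all data points of an image column by a single point at their average height (row). The sketch below is one such reading, not the authors' exact code.

```python
import numpy as np

def thin_cluster(cluster):
    """Represent a cluster by one point per column: (x, mean row of that column)."""
    cluster = np.asarray(cluster, float)             # (x, y) points
    thinned = [(x, cluster[cluster[:, 0] == x, 1].mean())
               for x in np.unique(cluster[:, 0])]    # unique x values come back sorted
    return np.asarray(thinned)
```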

Figure 8. Requirements that are used to determine the first cluster of different WSPs: a three visual requirements for distinguishing the first cluster of the WSP with the V-groove, b illustration of the variables in Figure 8a, c three visual requirements for distinguishing the first cluster of the WSP with the T-joint, d illustration of the requirement of the slope for the first cluster of the WSP with the T-joint

In Figure 8a, \( Cor_{Initial} \) is the minimum horizontal coordinate of the cluster that satisfies the two requirements, and the definitions of \( H_{\text{max} } \) and \( H_{\text{min} } \) are illustrated in Figure 8b. In Figure 8c, \( H_{t}^{j} \le H_{t + 1}^{j} \) indicates that the heights of the data points gradually increase in the jth cluster (subscript \( t \) indexes the data points of every cluster). \( length(H_{t}^{j} \le H_{t + 1}^{j} ) \) is the number of data points that satisfy \( H_{t}^{j} \le H_{t + 1}^{j} \), \( length(slope_{t}^{j} \le 0) \) is the number of data points that form lines with non-positive slopes, and \( length(Cluster_{j} ) \) denotes the number of all data points in the jth cluster. Figure 8d illustrates \( slope^{j} \le 0 \).
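A rough sketch of the Figure 8c checks for the T-joint is given below. It measures the fraction of points whose heights are non-decreasing and the fraction of local slopes that are non-positive on a thinned cluster. The height convention (measured upward from the image bottom) and the 0.9 acceptance ratio are assumptions, since the paper states the requirements only graphically.

```python
import numpy as np

def is_first_tjoint_cluster(thinned, img_height, ratio=0.9):
    """Heuristic version of the Figure 8c requirements on a thinned cluster
    whose points are sorted by x. `ratio` is an assumed threshold."""
    pts = np.asarray(thinned, float)
    if len(pts) < 2:
        return False
    x, y = pts[:, 0], pts[:, 1]                      # y is the image row coordinate
    h = img_height - y                               # assumed height, measured upward
    rising = np.mean(np.diff(h) >= 0)                # fraction with H_t <= H_{t+1}
    nonpos = np.mean(np.diff(y) / np.diff(x) <= 0)   # fraction with slope_t <= 0
    return rising >= ratio and nonpos >= ratio
```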

Note that the rule of “obtain the clusters on TempCN’s right from TempCN+1 to CN” in the proposed scheme (see Figure 7) is that, each time, at least 90% of a cluster’s data points must lie on the right of the identified cluster TempCN.
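Under the reading that “on the right” is measured from the rightmost point of TempCN (an assumption), the 90% rule reduces to a one-line check per cluster:

```python
import numpy as np

def mostly_to_the_right(candidate, temp_cluster, fraction=0.9):
    """Keep `candidate` only if at least `fraction` of its points lie to the
    right of the last identified cluster TempCN."""
    cand_x = np.asarray(candidate, float)[:, 0]
    anchor_x = np.asarray(temp_cluster, float)[:, 0].max()
    return np.mean(cand_x > anchor_x) >= fraction
```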

5 Visual Features-Based Strategy for Discerning WSPs from the Interference Background

The proposed strategy using visual features contains two aspects: automatically determining the number of candidate clusters to be discerned in each loop (see Figure 9), and the rules for identifying the cluster that belongs to the WSP from the candidate clusters in terms of visual continuity using the visual features (see Figure 10). The strategy follows one rule: a candidate cluster that is farther from the last identified cluster must satisfy harsher requirements before it is selected as a segment of the WSP. For example, \( C_{i3} \) (subscript \( i \) in these requirements means the ith identification process, similarly hereinafter) is selected as a segment of the WSP only under four requirements when Selectnum is 3. This rule lessens the probability that fake clusters are determined to be part of the WSP. In Figure 10, \( \bar{k}_{i1} \approx \bar{k}_{i2} \) is defined as \( 0.8 \le \frac{{\text{min} (\bar{k}_{i1} ,\bar{k}_{i2} )}}{{\text{max} (\bar{k}_{i1} ,\bar{k}_{i2} )}} \le 1 \), and \( D_{i1} \approx D_{i2} \) means \( 0.8 \le \frac{{\text{min} (D_{i1} ,D_{i2} )}}{{\text{max} (D_{i1} ,D_{i2} )}} \le 1.5 \); \( \frac{{\bar{k}_{i2} }}{{\bar{k}_{i1} }} \le 0.5 \) denotes that \( \bar{k}_{i2} \) is better than \( \bar{k}_{i1} \) with regard to continuity; and \( \bar{k}_{i1} \le 0.5 \) is the criterion with which the last cluster is judged to be the end part of the WSP in the final identification process. This criterion results from numerous tests on the slope fluctuations of the linear segments in the image. The slope calculation method is formulated as Eq. (2):

$$ k_{j - 8} = \frac{1}{4}\sum\limits_{k = 2,4,6,8} {\frac{y(j - k) - y(j + k)}{x(j + k) - x(j - k)}}, \quad j \ge 9, $$
(2)

where \( x( \cdot ) \) and \( y( \cdot ) \) are the coordinates of the data points in the horizontal and vertical directions of the image coordinate system, respectively, and \( j \ge 9 \) means that the slope calculation covers 9 data points in space. To judge whether the only remaining candidate cluster in the final identification process belongs to the WSP, its slope and the slope of the end part of the last discerned WSP (for the V-groove or the T-joint, the end part of the WSP is always linear) are calculated and compared. Figure 11 gives an example of how to determine whether the final cluster (if it exists) is part of the WSP using the slope difference: the only candidate cluster (see the red circle in Figure 11a) should be judged as part of the WSP because the slopes of the data points in the red and pink circles are close to each other (see the green circle in Figure 11b).
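Eq. (2) is an average of four symmetric finite-difference slopes around index j. A direct transcription, assuming zero-based arrays (so the constraint \( j \ge 9 \) becomes j >= 8) and that x(·), y(·) are the ordered coordinates of a thinned cluster, is:

```python
def local_slope(x, y, j):
    """Literal transcription of Eq. (2); requires j >= 8 and j + 8 < len(x)."""
    terms = [(y[j - k] - y[j + k]) / (x[j + k] - x[j - k]) for k in (2, 4, 6, 8)]
    return sum(terms) / 4.0
```

Sliding this window along an extracted profile is presumably how slope-fluctuation curves such as the one in Figure 11b are obtained.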

Figure 9. Flow chart of automatically determining the number of candidate clusters for each identification process

Figure 10. Strategy of discerning the cluster as part of the WSP from the candidate clusters in terms of visual continuity using three kinds of visual features, in which \( C_{i1} \), \( C_{i2} \), and \( C_{i3} \) are the candidate clusters

Figure 11. Illustration of the slope fluctuations of the linear segments of the WSP: a example of the WSP containing different linear pieces, b slope fluctuations of the linear segments of the WSP

Figure 11 shows that the slope fluctuation of the linear segments in the image is less than 0.5, which is why the single requirement \( \bar{k}_{i1} \le 0.5 \) is used during the final identification process.
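For completeness, the final-loop decision can be sketched with the illustrative `slope_difference` helper from Section 3 together with the approximate-equality test of Figure 10; the 0.5 and 0.8 constants come from the paper, everything else is an assumption.

```python
def k_approx_equal(k1, k2):
    """k_bar_{i1} ~ k_bar_{i2}: the min/max ratio lies in [0.8, 1]."""
    lo, hi = sorted((k1, k2))
    return True if hi == 0 else 0.8 <= lo / hi <= 1.0

def accept_final_cluster(discerned_wsp_tail, candidate, limit=0.5):
    """Final identification: accept the only remaining candidate when its slope
    difference from the end of the discerned WSP does not exceed 0.5."""
    return slope_difference(discerned_wsp_tail, candidate) <= limit
```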

6 Experimental Results

Different images with a large amount of interference data are tested to show the effectiveness of the proposed method. The extraction results (Figure 12) show the anti-interference ability of the proposed method, and other images captured in different welding processes (see Figure 13) are used to further exhibit this performance.

Figure 12. WSPE results showing the anti-interference ability of the proposed method: a WSPE result for T-joints with manually added surviving interference data points, b WSPE result for butt joints with surviving interference data points after thresholding

Figure 13. Further validation of the anti-interference ability of the proposed method using images captured in different welding processes: a raw images with butt joints and T-joints, b thresholding results of the orientation feature maps using Gabor filtering as in Refs. [21–24], c WSPE results

Two simplified methods, derived from Refs. [21] and [22] respectively, are presented to further verify the robust anti-interference ability of the proposed method. Figure 14 shows the first simplified WSPE procedure, and Figure 15 gives the extraction results, in which the raw images with entire arc regions in Figure 2 are selected as the experimental images. Figure 16 shows the second procedure, and Figure 17 gives the corresponding WSPE results.
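The front end of both simplified pipelines is plain Gabor filtering at a few orientations followed by thresholding (Figures 14 and 16). A minimal OpenCV sketch of that front end is shown below; the orientation angles come from the figure captions (e.g., ±10°, −80°), while the kernel size, sigma, wavelength, and threshold are illustrative assumptions.

```python
import cv2
import numpy as np

def gabor_orientation_map(gray, thetas_deg=(10, -10, -80), thresh=200):
    """Filter a grayscale image with Gabor kernels at the given orientations,
    fuse the responses by a pixel-wise maximum, and binarize the result."""
    fused = np.zeros(gray.shape, dtype=np.float32)
    for theta in thetas_deg:
        kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0,
                                    theta=np.deg2rad(theta),
                                    lambd=10.0, gamma=0.5, psi=0)
        response = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kernel)
        fused = np.maximum(fused, response)
    fused = cv2.normalize(fused, None, 0, 255, cv2.NORM_MINMAX)
    _, binary = cv2.threshold(fused.astype(np.uint8), thresh, 255, cv2.THRESH_BINARY)
    return binary
```

The binarized map then feeds the clustering and discerning stages sketched in the earlier sections.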

Figure 14. The first proposed flow chart with simplified preprocessing for WSPE

Figure 15. Verification results using the first simplified method: a \( \theta = \pm 10^\circ \) for Gabor filtering, b \( \theta = \pm 10^\circ ,\; - 80^\circ \) for Gabor filtering

Figure 16. The second proposed flow chart with simplified preprocessing for WSPE

Figure 17. Verification results using the second simplified method: a filtering angles \( \theta = \pm 10^\circ ,\;0^\circ \), b \( \theta = \pm 10^\circ ,\; - 80^\circ \)

Figures 15 and 17 show that the proposed visual feature-based method can effectively discern the data points belonging to the WSP from the complicated interference background and has robust anti-interference ability. Meanwhile, the running time of the proposed method on an ordinary PC is only about 40 ms, which meets the real-time requirement.

7 Discussion

This paper investigated a method to gradually extract WSPs in binary images from a strong interference background. The method uses a strategy refined from the visual attention process of our eyes to discern the pieces belonging to two types of WSPs with typical joints from interference data points based on clustering results. The strategy uses three visual attention features, i.e., the Euclidean distance, the slope difference, and the span of clusters in the image, to implement the automatic identification process. The method can be used in automated thick-plate welding with butt joints and T-joints for high welding efficiency.

To date, various methods have been proposed to extract WSPs for the traditional automated arc welding process, such as Refs. [7, 12]. However, the workpiece thickness in these studies is generally less than 30 mm, which reduces the difficulty of effectively extracting the WSP. The method proposed in this paper expands the range of workable thicknesses.

In addition, we previously set a length threshold of 100 pixels to differentiate the clusters belonging to the WSP from other clusters and remove interference data points [22]: the clusters whose lengths (the largest Euclidean distance between two points in a cluster is defined as the cluster’s length) were less than 100 pixels were regarded as interference clusters and removed. Also, in Ref. [21], a threshold of 80 pixels determined from empirical knowledge was used to eliminate interference data points. In Ref. [4], the same operation was implemented through a two-stage strategy: in the first stage, the clusters whose lengths are less than the average length of all clusters are deemed interference clusters, and in the second stage another distance threshold produced by a genetic algorithm is used to discern the clusters belonging to the WSP. The above methods can directly remove interference data, but their anti-interference capacity strongly depends on these thresholds, and interference data points may survive because spatter can also produce clusters with large lengths. In contrast, the method in this paper indirectly eliminates interference data, and clusters with small sizes that belong to the WSP can still be kept during the identification process, which maintains more useful information about the WSP and leads to better accuracy for the subsequent feature identification of the WSP as in Ref. [22].
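For comparison, the earlier length-threshold filtering of Refs. [21, 22] amounts to a single rule per cluster, sketched here with the 100-pixel value of Ref. [22] and the illustrative `cluster_length` helper from Section 3; large spatter clusters pass this rule, which is exactly the stability issue discussed above.

```python
def length_threshold_filter(clusters, min_length=100.0):
    """Baseline of Refs. [21, 22]: keep only clusters whose length (maximum
    pairwise Euclidean distance) reaches the empirical threshold."""
    return [c for c in clusters if cluster_length(c) >= min_length]
```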

The method in this paper also helps simplify the WSP extraction process. For example, the single operation of Gabor filtering, combined with the proposed method, can extract the WSP, unlike Refs. [21, 22], which use complicated visual attention models. The proposed method also reduces time cost and material waste by smoothing the welding process. Certainly, the proposed method works better when the preprocessing algorithms are more effective.

In terms of the effectiveness of removing interference data, the difference between the method in this paper and the previous ones lies in stability: the method in this paper hardly ever keeps interference clusters with large sizes, whereas the previous methods often do.

There is still a defect in the proposed method, namely the empirical parameters used in the WSPE process. Future research will focus on overcoming this deficiency.

In addition, an appropriate implementation of the proposed method is necessary because careful programming can save a lot of running time.

8 Conclusions

In conclusion, this paper proposes a visual attention feature-based method to extract the WSP from the interference data background using clustering results. The conclusions are as follows.

(1) As the proposed method gradually discerns the segments of the WSP, it can extract more information about the WSP, which contributes to seam tracking with higher accuracy.

(2) The proposed visual attention feature-based method provides a reference for other WSPE schemes when the image has been binarized.

References

  1. Y M Zhang, R Kovacevic, L Li. Characterization and real-time measurement of geometrical appearance of the weld pool. International Journal of Machine Tools & Manufacture, 1996, 36(7): 799-816.

  2. N Lv, Y Xu, Z Zhang, et al. Audio sensing and modeling of arc dynamic characteristic during pulsed Al alloy GTAW process. Sensor Review, 2013, 33(2): 141-156.

  3. Y Xu, G Fang, S Chen, et al. Real-time image processing for vision-based weld seam tracking in robotic GMAW. International Journal of Advanced Manufacturing Technology, 2014, 73(9-12): 1413-1425.

  4. Y He, H Chen, Y Huang, et al. Parameter self-optimizing clustering for autonomous extraction of the weld seam based on orientation saliency in robotic MAG welding. Journal of Intelligent & Robotic Systems, 2016, 83(2): 219-237.

  5. S Y Liu, G R Wang, Z Hua, et al. Design of robot welding seam tracking system with structured light vision. Chinese Journal of Mechanical Engineering, 2010, 23(4): 436-442.

  6. J Muhammad, H Altun, E Abo-Serie. A robust butt welding seam finding technique for intelligent robotic welding system using active laser vision. The International Journal of Advanced Manufacturing Technology, 2018, 94(1): 13-29.

  7. J Fan, F Jing, Y Lei, et al. A precise seam tracking method for narrow butt seams based on structured light vision sensor. Optics & Laser Technology, 2019, 109(1): 616-626.

  8. Y F Gong, X Z Dai, X D Li. Structured-light based joint recognition using bottom-up and top-down combined visual processing. 2010 International Conference on Image Analysis and Signal Processing (IASP), Zhejiang, China, April 9-11, 2010: 507-512.

  9. R Xiao, Y Xu, Z Hou, et al. An adaptive feature extraction algorithm for multiple typical seam tracking based on vision sensor in robotic arc welding. Sensors and Actuators A: Physical, 2019, 297: 111533.

  10. S Liu, H Zhang, J Jia, et al. Feature recognition for underwater weld images. 29th Chinese Control Conference (CCC), Beijing, China, July 29-31, 2010: 2729-2734. (in Chinese)

  11. J Du, W Xiong, W Chen, et al. Robust laser stripe extraction using ridge segmentation and region ranking for 3D reconstruction of reflective and uneven surface. Proceedings of the IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada, September 27-30, 2015: 4912-4916.

  12. W P Gu, Z Y Xiong, W Wan. Autonomous seam acquisition and tracking system for multi-pass welding based on vision sensor. The International Journal of Advanced Manufacturing Technology, 2013, 69(1-4): 451-460.

  13. X Wang, R Bai, Z Liu. Weld seam detection and feature extraction based on laser vision. 33rd Chinese Control Conference (CCC), Nanjing, China, July 28-30, 2014: 8249-8252. (in Chinese)

  14. H-C Nguyen, B-R Lee. Laser-vision-based quality inspection system for small-bead laser welding. International Journal of Precision Engineering and Manufacturing, 2014, 15(3): 415-423.

  15. Y He, Z Yu, J Li, et al. Weld seam profile extraction using top-down visual attention and fault detection and diagnosis via EWMA for the stable robotic welding process. The International Journal of Advanced Manufacturing Technology, 2019, 104 (9): 3883-3897.

  16. Q-Q Wu, J-P Lee, M-H Park, et al. A study on development of optimal noise filter algorithm for laser vision system in GMA welding. Procedia Engineering, 2014, 97: 819-827.

  17. Z Lei, Z Mingyang, Z Lihua. Vision-based profile generation method of TWB for a new automatic laser welding line. 2007 IEEE International Conference on Automation and Logistics, Jinan, China, August 18-21, 2007: 1658-1663.

  18. Y L Xie, L Zhang, C Y Wu, et al. A method of robotic visual tracking for a new automatic laser welding line. 2008 International Conference on Computer Science and Software Engineering, Hubei, China, December 12-14, 2008: 891-894.

  19. L Zhang, C Wu, Y Zou. An on-line visual seam tracking sensor system during laser beam welding. International Conference on Information Technology and Computer Science, Kiev, Ukraine, July 25-26, 2009: 361-364.

  20. C Gu, Y Li, Q Wang, D Xu, et al. Robust features extraction for lap welding seam tracking system. 2009 IEEE Youth Conference on Information, Computing and Telecommunication (YC-ICT '09), Beijing, China, September 20-21, 2009: 319-322.

  21. Y He, Y Chen, Y Xu, et al. Autonomous detection of weld seam profiles via a model of saliency-based visual attention for robotic arc welding. Journal of Intelligent & Robotic Systems, 2016, 81(3-4): 395-406.

  22. Y He, Y Xu, Y Chen, et al. Weld seam profile detection and feature point extraction for multi-pass route planning based on visual attention model. Robotics and Computer-Integrated Manufacturing, 2016, 37: 251-261.

  23. Y He, H Zhou, J Wang, et al. Weld seam profile extraction of T-joints based on orientation saliency for path planning and seam tracking. 2016 IEEE Workshop on Advanced Robotics and its Social Impacts (ARSO), Shanghai, China, July 8-10, 2016: 110-115.

  24. Y S He, Y X Chen, D Wu, et al. A detection framework for weld seam profiles based on visual saliency. In: Tarn TJ., Chen SB., Chen XQ. (eds). Robotic Welding, Intelligence and Automation. RWIA 2014. Advances in Intelligent Systems and Computing, 2015, 363: 311-319.

Acknowledgements

The authors sincerely thank Professor Zhiwei Mao of Nanchang University for his critical discussion and reading during manuscript preparation.

Funding

Supported by National Natural Science Foundation of China (Grant Nos. 51575349, 51665037, 51575348), and State Key Laboratory of Smart Manufacturing for Special Vehicles and Transmission System (Grant No. GZ2016KF002).

Author information

Contributions

GM was in charge of the whole trial; YH wrote the manuscript; ZY, JL, and LY assisted with sampling and laboratory analyses. All authors read and approved the final manuscript.

Authors’ Information

Yinshui He, born in 1979, is currently a doctor at School of Environment and Chemical Engineering, Nanchang University, Nanchang, 330031, China. He received his doctor degree from Shanghai Jiao Tong University, China, in 2017. His research interests include intelligentized welding technologies and AI.

Zhuohua Yu, born in 1980, is a master at Institute of Technology, East China Jiao Tong University, Nanchang, 330100, China. She received her master degree from School of Mechanical Engineering, East China Jiao Tong University, China, in 2012. Her research interests include spot welding and optimization of automobile safety doors.

Jian Li, born in 1994, is currently a master candidate at School of Mechanical and Electrical Engineering, Nanchang University, China. His research interests include optimization of welding process parameters and Bayesian theory.

Lesheng Yu, born in 1996, is currently a master candidate at School of Mechanical and Electrical Engineering, Nanchang University, China. His research interests include intelligentized welding technologies and machine vision.

Guohong Ma, born in 1975, is currently a doctor at School of Mechanical Engineering, Key Laboratory of Lightweight and High Strength Structural Materials of Jiangxi Province, Nanchang University, China. He received his doctor degree from Shanghai Jiao Tong University, China, in 2002. His research interests include intelligentized welding technologies and ultrasound MIG welding with galvanized steel sheets.

Corresponding author

Correspondence to Guohong Ma.

Ethics declarations

Competing Interests

The authors declare no competing financial interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

He, Y., Yu, Z., Li, J. et al. Discerning Weld Seam Profiles from Strong Arc Background for the Robotic Automated Welding Process via Visual Attention Features. Chin. J. Mech. Eng. 33, 21 (2020). https://doi.org/10.1186/s10033-020-00438-2
