
New CNN based algorithms for the full penetration hole extraction in laser welding processes

Leonardo Nicolosi Technische Universität Dresden

Dresden, Germany [email protected]

Felix Abt FGSW Forschungsgesellschaft für Strahlwerkzeuge

Stuttgart, Germany

Ronald Tetzlaff Technische Universität Dresden

Dresden, Germany [email protected]

Heinrich Höfler, Andreas Blug, Daniel Carl Fraunhofer Institut für Physikalische Messtechnik IPM

Freiburg, Germany

Abstract — In this paper new CNN based visual algorithms for the control of welding processes are proposed. The high dynamics of laser welding in manufacturing processes ranging from automobile production to precision mechanics requires the introduction of new fast real time controls. In the last few years, analogic circuits such as Cellular Neural Networks (CNN) have taken a primary place in the development of efficient electronic devices because of their real-time signal processing properties. Furthermore, several pixel parallel CNN based architectures are now available in devices such as the family of EyeRis systems [1]. In particular, the algorithms proposed in the following have been implemented on the EyeRis system v1.2 with the aim of running at frame rates of up to 20 kHz.

I. INTRODUCTION

A. Laser Beam Welding (LBW) processes.

Welding can be summarized simply as the joining of two pieces of material into one by melting and re-solidification. Several kinds of welding methods exist. In this paper the Laser Beam Welding (LBW) process was adopted: a technique that joins multiple pieces of metal through the use of a laser. The so called keyhole welding process employs a highly focused laser beam, producing deep and slender weld seams with a minimized heat affected zone. LBW is extremely fast (velocities up to 50 m/min are common), saves energy and protects against thermal material deformations. These facts make LBW one of the most frequently used methods in manufacturing processes such as those of the automotive industry. The LBW processes treated in this paper were executed to join a stack of two iron sheets, each 0.7 mm thick, with a gap of 0.1 mm. The laser is a Yb:YAG thin disk laser with an optical output power of 6000 W at a wavelength of 1030 nm. Figure 1 shows a longitudinal section of the materials during the laser welding process and the resulting image of a coaxial process control camera. The latter represents the thermal radiation of the melt in the spectral range of 820 nm to 980 nm [2]. As soon as the beam hits the material surface, the solid materials start to melt and metal vapor is created. If an appropriate beam power is used, the hydrostatic pressure of the metal vapor causes all the plates of the welding setup to be penetrated, creating the so called full penetration. The state of full penetration is visible in the coaxial camera image as a dark zone directly behind the laser interaction zone, the so called full penetration hole. It ensures that the two materials are properly connected over the whole cross section after re-solidification and therefore represents an important quality feature which guarantees the strength of the connection. This paper focuses on the implementation of algorithms on the EyeRis system v1.2 for the detection of the full penetration hole.

Figure 1. Schematics of a welding process in an overlap-joint and nomenclature. The picture shows on top the longitudinal section of the materials during the welding process and on bottom the resulting image of a coaxial process control camera.



B. CNN based systems.

In the last few years CNN have played an important role in real time image processing, thanks to their analogic implementation and to key features such as asynchronous parallel processing, continuous-time dynamics and global interaction of network elements, which allow fast parallel image processing. A CNN is a grid of interconnected cells. Each cell interacts directly with its neighboring cells and indirectly with all other cells through the propagation effects of the continuous-time dynamics of the network. An image can likewise be seen as a grid of pixels. The pixel of the input image at position (i, j) can therefore be mapped onto the CNN cell (i, j), so that each pixel is associated with the input or initial state of a particular cell. The input image then evolves to the corresponding output image through the CNN dynamics, according to appropriate state equations. An example of a CNN cell is the Chua-Yang cell. As explained in [3], it is governed by the normalized state equation (1.1), where x_ij and u_ij represent the state voltage and the input voltage of the cell (i, j), and y_ij is the output voltage, defined through the piecewise linear expression (1.2).

$$\dot{x}_{ij} = -x_{ij} + \sum_{|k| \le r,\, |l| \le r} A_{kl}\, y_{i+k,\, j+l} + \sum_{|k| \le r,\, |l| \le r} B_{kl}\, u_{i+k,\, j+l} + I \qquad (1.1)$$

$$y_{ij} = f(x_{ij}) = \frac{1}{2}\left( \left| x_{ij} + 1 \right| - \left| x_{ij} - 1 \right| \right) \qquad (1.2)$$

Finally, r denotes the neighborhood of interaction of each cell, and the indexes i, j are such that 1 ≤ i ≤ M and 1 ≤ j ≤ N for a CNN having M x N cells arranged in M rows and N columns. The 3x3 matrices A and B are called the feedback and feedforward operators, respectively, and they allow programming the CNN in order to obtain the desired output.

Nowadays, some CNN pixel parallel architectures exist, such as the family of EyeRis systems developed by Anafocus [1]. Although they are not characterized by a Chua-Yang state equation, they can be programmed through the specification of templates in order to perform fast parallel processing. As already mentioned, the algorithms treated in this paper have been implemented for execution on the EyeRis system v1.2. It consists essentially of a Smart Image Sensor called Q-Eye, an Altera NIOS II processor and an I/O module. The Q-Eye chip is a fully programmable smart image sensor array with quarter CIF resolution, i.e. an array of 176x144 cells, and surrounding global circuitry. Its most important characteristic is that image acquisition and image processing are performed simultaneously in all pixels of the image sensor. The Altera NIOS II processor, instead, is an FPGA-synthesizable digital microprocessor used to control the execution flow and to execute post-processing tasks. The I/O module allows interfacing the EyeRis system with external devices through SPI, UART and PWM modules, GPIOs and USB 2.0 ports. The EyeRis system v1.2 is provided together with a development environment called EyeRis ADK, based on Altera's NIOS II Integrated Development Environment (IDE) and conceived for writing, compiling, executing and debugging vision applications. It includes the NIOS II IDE, EyeRis Development Tools to create, configure, execute and debug EyeRis projects, and programming languages and libraries developed by Anafocus to simplify image processing on the Q-Eye.
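For illustration, the dynamics of (1.1) and (1.2) can be simulated in software by forward Euler integration. The following minimal Python sketch assumes r = 1, images normalized to [-1, 1], a zero-padded border and a hand-picked time step; it illustrates the cell dynamics only and is not the Q-Eye implementation.

```python
import numpy as np
from scipy.ndimage import correlate

def chua_yang_step(x, u, A, B, I, dt=0.05):
    """One forward Euler step of the normalized state equation (1.1).

    x, u : M x N arrays of state and input voltages
    A, B : 3x3 feedback and feedforward templates (r = 1)
    I    : scalar bias
    """
    # Piecewise linear output (1.2): y = 0.5 * (|x + 1| - |x - 1|)
    y = 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))
    # correlate() forms the 3x3 neighborhood sums of (1.1); zero padding at the border
    dx = -x + correlate(y, A, mode='constant') + correlate(u, B, mode='constant') + I
    return x + dt * dx

def run_cnn(u, A, B, I, steps=200):
    """Iterate the network toward steady state and return the output image."""
    x = u.copy()                                  # initial state set to the input image
    for _ in range(steps):
        x = chua_yang_step(x, u, A, B, I)
    return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))
```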

II. ARCFILL ALGORITHM

The algorithms discussed in this paper for the full penetration hole detection are based on the method described in [2] and implemented on the EyeRis system v1.1. Since the image acquired by the EyeRis system v1.1 is of poor quality, showing pixelization effects, a preliminary operation is needed: a special thresholding function performs a 3x3-neighborhood average before binarizing the source image. This smooths the interaction zone edges and leads to better processing results. Afterwards, the ArcFill template is executed, which fills the full penetration hole area. Subsequently, a logic XOR operation between the thresholded image and the filled image is applied, yielding white pixels only in the hole area. Finally, a masking operation cuts away most of the noise produced along the external edges of the interaction zone. The flow chart in Figure 2 illustrates the algorithm, and a sketch of the processing chain is given below. The execution of the ArcFill algorithm on the EyeRis system v1.1 leads to a single image processing time of about 97 μs. Nevertheless, the EyeRis system v1.1 is only an experimental prototype and cannot be used in real applications, for several reasons. First, the poor performance of the sensor requires several additional software operations to improve the quality of the image, slowing the algorithm down. Furthermore, the EyeRis system v1.1 does not allow easy interfacing with external devices, which could represent a serious bottleneck for the future real time control of welding processes. The EyeRis system v1.2 introduces important new characteristics that make it faster and more efficient, improving both the sensing and the interfacing phases. For this reason, new strategies for the full penetration hole detection have been studied for the EyeRis system v1.2.
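The following minimal Python sketch mirrors this chain on a grayscale image in [0, 1]. Since the ArcFill template itself is not reproduced here, scipy's binary_fill_holes serves as a generic stand-in for the filling step, and the threshold value is an assumption.

```python
import numpy as np
from scipy import ndimage

def arcfill_like(img, thresh=0.5):
    """Sketch of the ArcFill processing chain on a grayscale image in [0, 1].

    binary_fill_holes is only a generic stand-in for the ArcFill template,
    and the threshold value is an assumption.
    """
    smoothed = ndimage.uniform_filter(img.astype(float), size=3)  # 3x3-neighborhood average
    binary = smoothed > thresh                                    # binarize the source image
    filled = ndimage.binary_fill_holes(binary)                    # fill the hole area
    hole = np.logical_xor(binary, filled)                         # white pixels only in the hole
    return hole  # a masking step would then remove noise along the interaction zone edges
```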

Figure 2. ArcFill algorithm implemented on the EyeRis system v1.1. The flow chart describes the operations performed to detect the full penetration hole.



III. EYERIS V1.2 ALGORITHMS

In the following, new algorithms for the EyeRis system v1.2 are described. The EyeRis system v1.2 is provided with different function libraries than the v1.1, which makes a function-by-function translation from the old system to the new one impossible. The first attempt, nevertheless, was to reproduce the ArcFill algorithm using the available Q-Eye function library. Unfortunately, the new operations slow the algorithm down, so alternative methods have been studied. Each algorithm was tested on different kinds of images acquired during real welding processes by the Q-Eye. In this paper only the results obtained by processing three of them are presented. In particular, images "with hole" and "without hole" were chosen.

A. HitAndMiss algorithm.

The so called "hitAndMiss algorithm" performs a binary hit-and-miss operation on the binarized image; that is, it looks for a given pattern in a digital image. The pattern is specified as a 3x3 matrix whose elements can assume the value "1" (white), "0" (black) or "DNC" (do not care). Thus, "1" and "0" mean respectively that a white or a black pixel is expected in the input image at the same position where they are placed in the 3x3 matrix. The resulting pixel is set to black if the input pixel does not match the specified pattern, and to white otherwise. Since we found in our investigation that the internal shape of the full penetration hole has an approximately constant convex hull with a constant orientation, the pattern can be built so as to obtain the desired result. By iterative use of this function, it is possible to fill the full penetration hole, obtaining results similar to those provided by a dilating operation along the two image diagonals. Each iteration must be followed by a logic OR in order to keep the white area of the source image in the resulting image. At the end, a logic XOR between the source image and the result of the iterative process extracts the filled area; a sketch of this scheme is given below. Two versions of this algorithm have been implemented. The first one performs a single iterative pattern search with a single image processing time of about 57 μs. The second one performs two different iterative pattern searches and joins the results at the end, with a total time consumption of about 66 μs per image. Figure 3 shows the results provided by the "hitAndMiss algorithm".
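The scheme can be sketched as follows in Python; the 3x3 pattern shown is hypothetical, as the actual pattern used on the Q-Eye is not given here, and scipy's binary_hit_or_miss stands in for the on-chip operation.

```python
import numpy as np
from scipy import ndimage

# Hypothetical pattern: WHITE marks positions that must be white ("1"),
# BLACK marks positions that must be black ("0"); all others are "DNC".
WHITE = np.array([[1, 0, 0],
                  [0, 0, 0],
                  [0, 0, 1]], dtype=bool)   # white pixels expected along one diagonal
BLACK = np.array([[0, 0, 0],
                  [0, 1, 0],
                  [0, 0, 0]], dtype=bool)   # the center pixel must be black

def hit_and_miss_fill(binary, max_iter=100):
    """Iterative pattern search: matched black pixels are turned white (logic OR)
    until the image no longer changes; the final XOR extracts the filled area."""
    result = binary.copy()
    for _ in range(max_iter):
        match = ndimage.binary_hit_or_miss(result, structure1=WHITE, structure2=BLACK)
        grown = np.logical_or(result, match)   # keep the white area of the source image
        if np.array_equal(grown, result):      # converged: hole filled
            break
        result = grown
    return np.logical_xor(binary, result)      # white pixels only in the filled area
```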

B. Dilation algorithm.

The "dilation algorithm" replaces the pattern search by a dilating operation. The Q-Eye dilating function allows specifying a 3x3 structuring element in order to perform the dilation in a given direction. The "dilation algorithm" has been implemented in two versions: the first one, called "1-side algorithm", performs iterative dilations along only one diagonal; the second one, called "2-sides algorithm", provides two separate results through the iterative execution of dilations along both diagonals. In both algorithms, after each dilating operation, a logical XOR and a logical AND with the dilated image are performed, keeping only the white pixels of the dilated area (see the sketch below). At the end, the two results provided by the "2-sides algorithm" are joined.
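A Python sketch of both variants under the same caveats is given below; the diagonal structuring elements are illustrative, as the exact Q-Eye elements are not specified here.

```python
import numpy as np
from scipy import ndimage

# Illustrative 3x3 structuring elements, one per image diagonal
DIAG_A = np.array([[0, 0, 1],
                   [0, 1, 0],
                   [1, 0, 0]], dtype=bool)
DIAG_B = np.array([[1, 0, 0],
                   [0, 1, 0],
                   [0, 0, 1]], dtype=bool)

def one_side_fill(binary, structure=DIAG_A, iterations=30):
    """Sketch of the "1-side algorithm": iterative dilations along one
    diagonal; XOR/AND keep only the white pixels of the dilated area."""
    grown = binary.copy()
    for _ in range(iterations):
        dilated = ndimage.binary_dilation(grown, structure=structure)
        # XOR marks the newly added pixels, AND restricts them to the dilated image
        added = np.logical_and(np.logical_xor(dilated, grown), dilated)
        grown = np.logical_or(grown, added)
    return np.logical_xor(grown, binary)       # the filled (hole) area only

def two_sides_fill(binary, iterations=30):
    """Sketch of the "2-sides algorithm": both diagonals separately, then joined."""
    return np.logical_or(one_side_fill(binary, DIAG_A, iterations),
                         one_side_fill(binary, DIAG_B, iterations))
```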

Figure 3. HitAndMiss algorithm experimental results. (a), (b) and (c) show the source images; (d), (e) and (f) show the results obtained through the execution of a single pattern search; (g), (h) and (i) are the results obtained through the double pattern search execution.

The "1-side" and "2-sides" algorithms require single image processing times of about 40 μs and 52 μs, respectively. Figure 4 illustrates the dilating operation, while Figure 5 shows the results of the "dilation algorithm" execution.

IV. DISCUSSION

In this paper new CNN based algorithms for the full penetration hole extraction have been discussed. As shown in Table I, the "hitAndMiss" and the "dilation" algorithms meet the frame rate demand of 10-20 kHz required by the high dynamics of laser welding processes. Furthermore, both allow rather simple masking operations to delete the noise due to the processing of the interaction zone edges. From the quality point of view, the "hitAndMiss" and the "dilation" algorithms provide similar results.
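These frame rates follow directly from the single image processing times reported above, since f ≈ 1/t_proc: the "1-side" dilation algorithm, for instance, with t_proc ≈ 40 μs reaches f ≈ 1/(40 × 10⁻⁶ s) = 25 kHz, in agreement with Table I.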

Figure 4. Dilation along one diagonal. (a) is the source image; (b) shows the direction of the dilation; (c) is the result of the dilating operation. At the end, logic operations between the initial image and the dilation result are applied, obtaining white pixels only in the dilated area (d).



Figure 5. Dilation algorithm. (a), (b) and (c) show the source images; (d), (e) and (f) show the results obtained by the use of the 1-side algorithm, while (g), (h) and (i) represent the results of the 2-sides algorithm execution.

Since the latter is faster than the former, the "dilation algorithm" is the candidate to be applied in future real time controls of laser welding processes. It has been tested over several sequences of images and, by the use of masks manually built offline, provides hit percentages greater than 90%. Figure 6 shows the result of the masking operation. Both the detection rate and the robustness of the "dilation algorithm" are sufficient for controlling laser welding processes.

TABLE I. ALGORITHM FRAME RATE.

Algorithms                 Frame rate [kHz]
HitAndMiss (1 pattern)     ≈ 17.5
HitAndMiss (2 patterns)    ≈ 15
Dilation (1-side)          ≈ 25
Dilation (2-sides)         ≈ 19.2

Figure 6. Dilation algorithm: masking results. (a), (b) and (c) show the source images; (d), (e) and (f) show the masking results obtained after the use of the 1-side algorithm, while (g), (h) and (i) represent the masking results after the 2-sides algorithm execution. The masks have been manually built offline.

V. CONCLUSION

The results show that the CNN technology has a high potential to evaluate and control fast processes such as the laser welding process at continuous frame rates up to 20 kHz. Next developments are focused on the introduction of a "mask builder", in order to find the mask which fits a specific optical setup. The intention is to make the algorithm able to autonomously build a mask at the beginning of the process using the information from the first acquired images. Further tests, therefore, will be done in order to determine the robustness of the "dilation algorithm" together with the mask builder.

ACKNOWLEDGMENT

The results have been obtained in the project "Analoge Bildverarbeitung mit zellularen neuronalen Netzen (CNN) zur Regelung laserbasierter Schweißprozesse (ACES)" (analog image processing with cellular neural networks (CNN) for the control of laser based welding processes), sponsored by the "Landesstiftung Baden-Württemberg" in the field "Forschung Optische Technologien 2005/2006".

REFERENCES

[1] Company Anafocus, Avd. Isaac Newton s/n, Pabellón de Italia, Ático, Parque Tecnológico Isla de la Cartuja, 41092 Sevilla, Spain.

[2] M. Geese, R. Tetzlaff, D. Carl, A. Blug, H. Höfler and F. Abt, "Feature Extraction in Laser Welding Processes", in Proc. of the 11th IEEE International Workshop on Cellular Neural Networks and their Applications, CNNA 2008, Santiago de Compostela, Spain.

[3] F. Corinto, M. Gilli and P. P. Civalleri, "On stability of full range and polynomial type CNNs", in Proc. of the 7th IEEE International Workshop on Cellular Neural Networks and their Applications, pp. 33-40.