Abstract

Blades are a critical part of steam turbines. Since they usually work under extremely harsh conditions, cracks generated during operation must be detected in time and prevented from developing into larger ones. Crack detection is therefore crucial to maintaining the structural health and operational safety of steam turbines. Today, one of the most common detection methods is manual magnetic particle flaw detection, but it depends on the subjective judgment of inspectors and offers a low level of automation. This paper presents an automated crack detection device that performs magnetic particle inspection on the blades and transfers images to a host computer for further analysis. After comparing the performance of different object detection models, yolov4 (you only look once, version 4), a fast and accurate real-time object detection algorithm, is chosen to extract subimages containing cracks on the host computer. Furthermore, an intelligent crack detection model built on image processing techniques is established, which consists of four steps: image preprocessing, edge detection, crack extraction, and crack length calculation. In the image preprocessing step, a new image pyramid method is proposed to blur the background and eliminate the texture of the metal surface while preserving the crack information to the utmost extent. An experimental study shows the reliable performance of the proposed crack detection model.

1 Introduction

According to the BP Statistical Review of World Energy, coal-fired power generation still dominates global power production: in 2020 it accounted for 35.1% of total global power generation [1]. In coal-fired power plants, a steam turbine is a turbomachine that converts steam energy into mechanical work and is often used as the prime mover. Blades, as the key parts of steam turbines, withstand the combined effects of high temperature, high pressure, huge centrifugal force, steam force, corrosion, vibration, and water droplet erosion in the wet steam region [2–4]. Since the blades work under extremely harsh conditions, various defects may appear on their surfaces, such as foreign object damage, rupture, creep, high/low cycle fatigue, oxidation, erosion, rubbing/wear, and combined failure modes. If damage to a blade is not detected or dealt with in time, it can lead to the destruction of the entire mechanical unit and cause heavy economic losses. Therefore, it is necessary to detect newly generated cracks in a timely manner and prevent them from developing into large cracks.

Typical crack detection methods include ultrasonic inspection [5], eddy current inspection [6], laser imaging detection [7], induction infrared thermography [8], X-ray detection [9], and so on. In practice, ultrasonic inspection, penetrant inspection, or magnetic particle inspection [10] is commonly used to detect cracks in blades. Magnetic particle inspection has high sensitivity and low cost, making it one of the most suitable methods for surface defect detection of ferromagnetic materials. If there are defects such as cracks on the metal surface of a blade, a leakage field forms at the defective area when a magnetic field is applied; this leakage field captures the applied magnetic powder and forms a magnetic trace that can be observed by inspectors. Furthermore, fluorescent magnetic particle inspection, which replaces the black nonfluorescent magnetic powder with fluorescent powder, was introduced to improve the efficiency and accuracy of the inspection. The reliability of these traditional methods depends on the inspector's practical experience, and fluorescent magnetic particle inspection requires ultraviolet light, which can seriously harm inspectors' eyes. Long periods of high-intensity naked-eye observation may also cause eye fatigue and thus subjective judgment errors, affecting the speed and accuracy of crack detection.

To address the problems of manual inspection, Kumar et al. [11] introduced a method to detect blade cracks by assessing the change in a measured vibration frequency of the blade; the method was found to be capable of detecting very small blade root cracks, but its computation is very complex. Computer vision technology has been widely used to analyze blade surface images. Zhang et al. [12] proposed a method for predicting the remaining useful life of turbine blades under water droplet erosion based on image recognition and machine learning. Zhang et al. [13] proposed an intelligent image recognition method that analyzes magnetic mark images following the procedure of manual crack identification, extracts fluorescent magnetic powder defects based on image morphology, establishes an expert knowledge base, and automatically corrects that knowledge base according to the operators' judgment of the recognition results. However, this kind of method relies on experts for feature design, and the selection of features and thresholds involves human subjectivity and prior knowledge, which leads to poor robustness. To overcome the drawbacks of traditional classification algorithms, deep learning has also been applied to crack detection. Recently, several papers have proposed deep-learning-based algorithms to detect cracks in gas turbine blades. Khani et al. [14] used deep learning and image processing to detect cracks in gas turbine blades; the images were preprocessed either with bilateral filtering combined with median filtering or with bilateral filtering alone. However, the first approach can blur the edges of the cracks, while the second cannot achieve a satisfactory denoising effect. Deep learning methods can detect cracks effectively, but collecting enough crack images is difficult. Aust et al. [15] developed methods to automatically detect defects on the edges of engine blades (e.g., nicks, dents, and tears), but the proposed methods could only be applied to disassembled blades.

This paper proposes an automatic magnetic particle inspection device consisting of a robot equipped with a vision module, a power supply module, a magnetization device, and a magnetic suspension drainage device. The robot completes the magnetic particle inspection on the blade, takes images, and transmits them to the host computer. This paper also establishes an image processing model with four steps: image preprocessing, edge detection, crack extraction, and crack length calculation. In the image preprocessing step, a guided filtering algorithm based on an image pyramid is proposed to remove noise while preserving the edges of the cracks. yolov4, which stands for you only look once version 4, is used to extract subimages containing cracks. After yolov4 locates the subimages containing cracks, image processing algorithms are used to analyze the subimages and extract the crack edges. Finally, a segmented crack fitting algorithm is proposed to find the longest branch of a multibranched crack and calculate its length. This crack detection method can be performed directly on the blade, without disassembling it, and offers high detection accuracy and speed.

2 Description of the Crack Detection Device

Figure 1(a) shows a robot designed by our team for crack detection of large steam turbines. It is powered by a lithium battery, and its size is 105 mm × 71 mm × 60 mm. It includes a vision module, a power supply module, a magnetization device, and a magnetic suspension drainage device. A complementary metal-oxide semiconductor camera is installed upside down on the top of the robot, facing a hole in the robot's bottom plate that allows it to take images. Beside the camera, an ultraviolet (UV) lamp provides a light source for the camera in a dark detection environment. Two white light-emitting diodes on the front of the microrobot serve as fill lights. A catheter runs inside the robot; one end is connected to a container of magnetic suspension, and the other end is mounted at the bottom of the robot, perpendicular to the blade. Driven by a pump, this end sprays the magnetic suspension onto the blades.

Fig. 1 The overall structure of the crack detection system

Figure 1(b) illustrates the overall structure of the crack detection system. The robot acts as the slave computer, and a laptop is used as the host computer. The robot contains two Raspberry Pi Zero W boards: one controls the robot's movement and instructs the vertical camera to collect surface images of the steam turbine blade; the other controls the front camera to collect the images needed for robot position perception. The drive circuit controls the water pump, motor, UV lamp, and electromagnet.

The proposed robot can perform magnetic particle inspection on the blade and transmit the acquired images to the host computer. It can be used in two application scenarios: a steam turbine factory can use the robot to test blades that are about to leave the factory and enter service, or the robot can be used on turbine blades that have already been in service and whose unit has been shut down and opened for inspection.

2.1 The Choice of Camera.

When choosing a camera, several factors should be considered, including the size of the photosensitive film, the focal length, the field of view, and the object distance. The optical model is shown in Fig. 2. The field of view is directly proportional to the object distance and inversely proportional to the focal length. The practical requirement of our application scenarios was that a crack with a minimum width of 0.1 mm should be identifiable; since the width of the magnetic mark under the UV lamp and the magnetic suspension is much larger than the actual crack width, the camera must be able to clearly capture magnetic marks with a width of at least 0.5 mm. The field of view v can be calculated by the following formula:
v = l × u / f
where l, u, and f denote the size of the photosensitive film, the object distance, and the focal length, respectively. Setting the size of the photosensitive film l = 9.407 mm, the distance from the lens to the blade surface u = 20 mm, and the lens focal length f = 2.7 mm, v is given by
v = l × u / f = (9.407 mm × 20 mm) / 2.7 mm = 69.68 mm
v / 1080 = 69.68 mm / 1080 = 0.06 mm < 0.5 mm
Fig. 2

Under these conditions, a camera with a digital sensor and a resolution of 1080p was selected.
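The check above can be reproduced in a few lines. The following is a minimal sketch of the thin-lens calculation using the values quoted in the text; it assumes the 1080 lines of the 1080p image span the computed field of view.

```python
def field_of_view_mm(l_mm: float, u_mm: float, f_mm: float) -> float:
    """v = l * u / f for the simple optical model of Fig. 2."""
    return l_mm * u_mm / f_mm

v = field_of_view_mm(9.407, 20.0, 2.7)   # about 69.68 mm
mm_per_pixel = v / 1080                  # about 0.06 mm per pixel, well below 0.5 mm
print(f"field of view: {v:.2f} mm, {mm_per_pixel:.3f} mm/pixel")
```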

2.2 The Choice of Light Source.

The UV lamp is essential for fluorescent magnetic particle inspection. Considering the size of our small robot, we purchased five kinds of small ultraviolet light-emitting diodes with wavelengths of 325 nm, 365 nm, 385 nm, 395 nm, and 405 nm. A simple experimental environment was then set up. Keeping the same test sample, the same applied magnetic field, and the same camera parameters, the experiments were carried out by fixing the different light sources inside the robot, placing the robot on a disassembled steam turbine blade with A1 test pieces stuck on it, spraying magnetic suspension on the blade with the robot, and then taking pictures. The results obtained with the different light sources are shown in Fig. 3, from which the following conclusions can be drawn.

Fig. 3 Crack images taken under different light sources
  • In Figs. 3(d) and 3(e), the UV lamps with wavelengths of 395 nm and 405 nm were not suitable for fluorescent magnetic particle inspection, since the brightness of the fluorescent cracks was very low.

  • Comparing Figs. 3(a)–3(c), the UV lamp with a wavelength of 365 nm had the best fluorescent effect, but it also enhanced the brightness of the metal texture and increased the difficulty of image processing.

The better the fluorescence effect, the clearer the cracks, which is more conducive to crack detection. On this basis, the UV lamp with a wavelength of 365 nm was chosen as the light source; its radiant flux was 900 mW. Although the brightness of the metal texture is also increased, the advanced guided filtering algorithm proposed in Sec. 3.1 effectively removes the metal texture.

3 Crack Image Processing Algorithms

Given an image of the blade surface, the objective of crack detection is to determine whether a specific pixel belongs to a crack. To solve this problem, four key steps of image processing and crack detection are implemented, as shown in Fig. 4. More specifically, after the original image is received, it is first preprocessed to blur the background and eliminate the metal surface texture while preserving the crack information to the utmost extent. Then a yolov4 model is established to classify patches with and without cracks. yolov4 is a fast and accurate real-time object detection algorithm [16] that extracts subimages containing cracks from the original images; cutting out the subimages reduces the computational work and the influence of irrelevant background regions on later processing. Next, edge detection algorithms are applied to outline the cracks. Finally, a length calculation algorithm is applied to compute the length of each crack.

Fig. 4 Key steps of image processing and crack detection

3.1 Advanced Guided Filtering Algorithm Based on Image Pyramid for Image Preprocessing.

In the blade images acquired by the miniature magnetic particle inspection robot under ultraviolet illumination, the surface of the blade is usually covered with metal texture, which easily interferes with crack detection. Therefore, in the image preprocessing step, it is desirable to remove noise while preserving the edges of the defective area as much as possible.

Commonly used filtering algorithms include the mean filter, the median filter, the Gaussian filter, and so on. These traditional filters can effectively suppress the metal texture of the blade surface, but they also smooth the edge information of the cracks, causing a loss of detail that degrades the subsequent crack segmentation. To solve this problem, He et al. [17] proposed guided image filtering, a technique that uses a guide image I to compute the output image q from an input image p. The output image is generally similar to the input image, but its texture is similar to that of the guide image. The process can be expressed as follows:
q_i = Σ_j W_ij(I) p_j
In this formula, i and j are pixel coordinates, and W_ij is the filter kernel determined by the guide image I. To ensure that, within a local area, the output image preserves an edge wherever the guide image has one, the output image and the guide image are assumed to have a local linear relationship on the filter kernel window W_k:
q_i = a_k I_i + b_k,  ∀ i ∈ W_k
For a filter kernel window W_k of a given size, there is a unique set of constant coefficients (a_k, b_k) that makes the equation hold, so obtaining the output image requires solving for (a_k, b_k). The nonedge, unsmooth content of the input image is regarded as noise n, which satisfies q_i = p_i − n_i and should be as small as possible. At the same time, a regularization parameter ε is introduced to avoid an excessively large a_k. The loss function in the filter window is then given by
E(a_k, b_k) = Σ_{i ∈ W_k} ((a_k I_i + b_k − p_i)² + ε a_k²)
The values of (a_k, b_k) are obtained by setting the partial derivatives of this loss with respect to the two parameters to zero. If a 3 × 3 filter kernel is used, each pixel is included in nine windows, so several estimates of q_i are available; |w| denotes the number of windows covering pixel i, which depends on the kernel size, and the final result is obtained by averaging all of these estimates:
q_i = (1/|w|) Σ_{k: i ∈ W_k} (a_k I_i + b_k)
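The closed-form solution above can be built entirely from box filters. The following is a minimal sketch of a gray-scale guided filter along these lines (not the authors' code); the window radius and ε are illustrative values, and the images are assumed to be float arrays scaled to [0, 1].

```python
import cv2
import numpy as np

def guided_filter(I, p, radius=8, eps=0.01):
    """Gray-scale guided filter of He et al. [17], built from box filters.

    I: guide image, p: input image, both float32 in [0, 1].
    radius: half window size of W_k; eps: regularization parameter.
    """
    ksize = (2 * radius + 1, 2 * radius + 1)
    mean = lambda x: cv2.blur(x, ksize)        # box-filter mean over each window W_k

    mean_I, mean_p = mean(I), mean(p)
    cov_Ip = mean(I * p) - mean_I * mean_p     # covariance of I and p in W_k
    var_I = mean(I * I) - mean_I * mean_I      # variance of the guide in W_k

    a = cov_Ip / (var_I + eps)                 # closed-form a_k
    b = mean_p - a * mean_I                    # closed-form b_k

    # average a_k and b_k over all windows covering each pixel (the 1/|w| sum)
    return mean(a) * I + mean(b)
```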

Figure 5 shows that both the mean filter and the median filter blur the edges of the crack, which is not conducive to crack extraction. Figure 5(d) is better than Figs. 5(b) and 5(c) because the crack edges are clearer and more detail is retained.

Fig. 5 Crack image preprocessing results of different filtering algorithms

An image pyramid is a multiscale representation of an image: a collection of images derived from the same original image, mostly used for image segmentation and fusion [18]. In Ref. [19], Lai et al. proposed the Laplacian pyramid super-resolution network to rebuild images from low resolution to high resolution. To better eliminate noise such as the metal texture on the blade while preserving crack edges, this paper proposes a guided filtering algorithm based on the image pyramid. The proposed algorithm consists of three stages, and its structure is shown in Fig. 6.

Fig. 6 Structure of the proposed algorithm based on image pyramid

3.1.1 Stage A: Constructing Guided Filtering Image Pyramid.

First, the collected original magnetic particle inspection image of the steam turbine blade was used as the first layer of the guided filtering image pyramid, denoted G1. The image was then guided filtered and down-sampled to obtain the second layer of the pyramid, denoted G2. If the resolution of the original image was 1920 × 1080, the resolution of G2 was 960 × 540. By analogy, with three down-sampling operations in total, the resolutions of G3 and G4 were 480 × 270 and 240 × 135, respectively. As shown in Fig. 6, G1, G2, G3, and G4 were the four layers of the guided filtering pyramid.

3.1.2 Stage B: Constructing Laplacian Residual Image Pyramid.

At this step, an enlarged image was obtained by up-sampling the fourth layer G4 of the guided filtering pyramid: it was expanded to twice its size in each direction, with linear interpolation used for filling. To reduce the information loss caused by scaling, Gaussian filtering was applied for blurring. The image after up-sampling and Gaussian filtering was denoted U4. A Laplacian pyramid is defined as the set of residual images obtained by subtracting the up-sampled coarser layer from the layer below it. The third layer of the Laplacian pyramid, L3 = G3 − U4, therefore had a resolution of 480 × 270. By analogy, the first two layers of the Laplacian pyramid, L2 and L1, were calculated with resolutions of 960 × 540 and 1920 × 1080, respectively.

3.1.3 Stage C: Superimposing the Image Pyramid Back to the Original Image.

First, the coarsest layer L3 of the Laplacian pyramid was bilateral filtered and then superimposed on U4; then L2 was bilateral filtered and superimposed on the result of the first step; finally, L1 was bilateral filtered and superimposed on the result of the second step, in which
L1 = G1 − U2
L2 = G2 − U3
L3 = G3 − U4
Since U_i was obtained by up-sampling G_i, U_i approximates G_i from the energy point of view, so
L1 + L2 + L3 = (G1 − U2) + (G2 − U3) + (G3 − U4) ≈ G1 − U4

Directly superimposing the residual images of the Laplacian pyramid would lose the energy of the last guided-filtered layer, so one more layer of superimposition (the up-sampled coarsest layer) was required.
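To make the three stages concrete, the following is a rough sketch of the pyramid-based preprocessing under our own reading of the description above. It reuses the guided_filter sketch given earlier and applies the filter with the image as its own guide, which is an assumption; all kernel sizes, sigmas, and ε values are illustrative, not the parameters used in the paper.

```python
import cv2
import numpy as np

def pyramid_guided_filter(img, levels=3, radius=8, eps=0.01):
    """Three-stage preprocessing sketch: guided-filter pyramid (Stage A),
    Laplacian residuals (Stage B), bilateral-filtered reconstruction (Stage C)."""
    img = img.astype(np.float32) / 255.0
    gf = lambda x: guided_filter(x, x, radius, eps)   # self-guided (assumed guide choice)

    # Stage A: guided filtering pyramid G1..G4 (for levels = 3)
    G = [gf(img)]
    for _ in range(levels):
        h, w = G[-1].shape[:2]
        G.append(gf(cv2.resize(G[-1], (w // 2, h // 2), interpolation=cv2.INTER_LINEAR)))

    # Stage B: Laplacian residuals L_i = G_i - U_{i+1}
    L, U = [], []
    for i in range(levels):
        h, w = G[i].shape[:2]
        up = cv2.GaussianBlur(cv2.resize(G[i + 1], (w, h),
                                         interpolation=cv2.INTER_LINEAR), (5, 5), 0)
        U.append(up)
        L.append(G[i] - up)

    # Stage C: bilateral-filter each residual and superimpose coarse to fine,
    # starting from the up-sampled coarsest layer so its energy is not lost
    result = U[-1]
    for i in range(levels - 1, -1, -1):
        residual = cv2.bilateralFilter(L[i], d=5, sigmaColor=0.1, sigmaSpace=5)
        size = (L[i].shape[1], L[i].shape[0])         # (width, height) at this level
        result = cv2.resize(result, size, interpolation=cv2.INTER_LINEAR) + residual
    return np.clip(result * 255.0, 0, 255).astype(np.uint8)
```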

The comparison between the plain guided filtering algorithm and the proposed algorithm is shown in Fig. 7. Compared with Figs. 7(a) and 7(b), which do not use the image pyramid, the metal texture in Fig. 7(c) is further filtered out by the Laplacian pyramid structure, and the edge information is almost completely retained. The only shortcoming is the distortion introduced by the sampling, which weakens the energy intensity of the cracks; experimental comparison showed that this loss had little effect on the subsequent steps, and the contrast limited adaptive histogram equalization algorithm can compensate for it to a certain extent.

Fig. 7 Comparison between the results of guided filtering and the proposed algorithm in this paper

3.2 Edge Detection.

The contrast limited adaptive histogram equalization (CLAHE) algorithm [20] was applied first to enhance the brightness of the cracks and reduce the brightness of the background, further differentiating the cracks from the background. The proposed pyramid-based guided filtering algorithm was then applied to eliminate noise such as the metal texture on the blade.

After these preprocessing steps, yolov4 was used to find the subimages containing cracks. Once the crack subimages were obtained, image processing algorithms were used to analyze them and extract the crack edges.

Next, the Canny edge detector was used to extract the crack edges, producing a binary image in which the pixel value of the background was 0 and that of the crack edges was 1.

Since the Canny algorithm only extracts the edges of the crack rather than the crack itself, the crack region was not yet complete. The dilation algorithm was used to fill the crack, and the erosion algorithm was then used to compensate for the widening caused by dilation. Since erosion can leave small holes inside the crack, a closing operation was used to fill them.

Because some relatively large noise spots could be mistaken for cracks, the candidate cracks were screened in three steps: connected domain extraction, connected domain sorting, and connected domain area screening. First, the point sets of all connected domains were extracted and counted, the five largest point sets were selected, and those exceeding an area threshold were considered cracks, while the remaining point sets were treated as noise. The crack area was preliminarily refined by the distance transform, and the watershed algorithm was then used to thin the extracted crack to a width of one pixel.
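The chain described above maps naturally onto standard OpenCV operations. The following is a rough sketch under that reading; the Canny thresholds, kernel size, number of retained components, and area threshold are assumptions for illustration, and the final thinning to one pixel (distance transform plus watershed in the paper) is omitted.

```python
import cv2
import numpy as np

def extract_crack_mask(sub_img):
    """Edge detection and crack extraction for a preprocessed gray-scale subimage."""
    edges = cv2.Canny(sub_img, 50, 150)                      # binary crack-edge image

    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.dilate(edges, kernel)                         # fill the region between edges
    mask = cv2.erode(mask, kernel)                           # compensate for the widening
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)   # close small holes inside

    # connected-domain screening: keep the five largest components above an area threshold
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    areas = stats[1:, cv2.CC_STAT_AREA]                      # skip the background label 0
    largest = 1 + np.argsort(areas)[::-1][:5]
    crack = np.zeros_like(mask)
    for idx in largest:
        if stats[idx, cv2.CC_STAT_AREA] > 200:               # area threshold (assumed)
            crack[labels == idx] = 255
    return crack
```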

Taking a subimage with cracks as an example, the corresponding image processing and crack information extraction steps are shown in Fig. 8.

Fig. 8 A real example of using the image processing and crack information extraction steps in this paper

3.3 The Calculation of Crack Length.

In this section, we propose a segmented crack fitting algorithm. After the crack information was extracted by the above steps, the crack subgraph was cropped as tightly as possible by calculating the minimum bounding rectangle of the connected domain, and a coordinate system was established to determine the length and width of the subgraph. When the length of the subgraph was greater than its width, the extracted crack was segmented along the y-axis; otherwise, it was segmented along the x-axis. The curved crack was thus divided into many small arcs, each small enough to be regarded as a straight line. Adjacent dividing points were connected by short straight lines, the length of each line was calculated, and the lengths were summed to fit the length of the entire crack. The accuracy of the fitting result is determined by the segmentation scale: the smaller each segment, the higher the accuracy of the final result.
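As a rough illustration of this single-branch fitting (not the authors' implementation), the sketch below slices a one-pixel-wide skeleton along its longer axis at an assumed step of a few pixels, takes the mean crack position in each slice as a dividing point, and sums the segment lengths. Multiplying the returned pixel length by the calibration factor 0.00415 reported later gives the length in centimeters.

```python
import numpy as np

def crack_pixel_length(skeleton, step=3):
    """Segmented length fitting for a binary, one-pixel-wide crack skeleton."""
    ys, xs = np.nonzero(skeleton)
    if ys.size == 0:
        return 0.0
    # segment along the longer side of the bounding rectangle
    major, minor = (ys, xs) if ys.ptp() >= xs.ptp() else (xs, ys)

    points = []
    for t in range(int(major.min()), int(major.max()) + 1, step):
        sel = (major >= t) & (major < t + step)
        if sel.any():
            points.append((major[sel].mean(), minor[sel].mean()))

    pts = np.asarray(points)
    if len(pts) < 2:
        return 0.0
    # sum the lengths of the short straight lines between consecutive dividing points
    return float(np.sum(np.hypot(np.diff(pts[:, 0]), np.diff(pts[:, 1]))))
```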

The algorithm proposed in this study can cope with cracks that have multiple branches by labeling each branch. When a new crack branch appeared, the distances between the last coordinate point of each branch list and the coordinate points found at the current scale were calculated to decide which branch each new point belonged to, and each new point was labeled accordingly; any point left unassigned belonged to a newly appeared branch and was given a new branch number. When crack branches converged, the number of coordinate points at the current scale was smaller than at the previous scale, and the same distances were calculated to decide which branch number should be assigned to each current coordinate point. A typical process of calculating the crack length is illustrated in Fig. 9.
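A simplified reading of this nearest-branch rule is sketched below, where branch ids are integers and branches maps each id to its ordered list of dividing points; this is our interpretation for illustration, not the authors' exact bookkeeping.

```python
import math

def assign_to_branches(branches, new_points):
    """Attach each dividing point found at the current scale to the nearest branch;
    leftover points open new branches, and when branches converge some branches
    simply receive no new point at this scale."""
    unmatched = list(new_points)
    for pts in branches.values():
        if not unmatched:
            break                                   # branches converged at this scale
        nearest = min(unmatched, key=lambda q: math.dist(pts[-1], q))
        pts.append(nearest)                         # continue the closest branch
        unmatched.remove(nearest)
    for q in unmatched:                             # a new branch appeared
        branches[max(branches, default=0) + 1] = [q]
    return branches
```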

Fig. 9 The algorithm divides the crack into short line segments; by calculating each segment's length and summing them, the total crack length is obtained
The algorithm we proposed allowed us to calculate the pixel length of the crack. By calculating the pixel length of the crack and comparing it with the real length of the crack, a corresponding proportional relationship could be obtained. Since the distance from the robot camera to the turbine blade was constant, this proportional relationship was relatively constant as well. The actual length of the crack could be obtained by multiplying it by the scale factor calculated from the actual calibration. After using 82 cracks for testing, we determined the scale factor to be 0.00415. For our robot, this proportional relationship was formulated as follows:
pixel length of the crack × 0.00415 = actual length of the crack (cm)

Part of the results of the crack length calculation is listed in Table 1.

Table 1 Part of the results of the calculation of the crack length

Pixel length of the crack | Calculated length of the crack (cm) | Actual length of the crack (cm) | Absolute error (cm)
174.904 | 0.726 | 0.8 | 0.074
296.754 | 1.232 | 1.2 | 0.032
158.561 | 0.715 | 0.7 | 0.015
527.670 | 2.190 | 2.2 | 0.010
383.339 | 1.591 | 1.6 | 0.009
349.613 | 1.451 | 1.4 | 0.051
362.766 | 1.505 | 1.5 | 0.005
489.661 | 2.032 | 2.0 | 0.032

4 Performance Comparison of Different Models

The training process was completed on the host computer: a workstation equipped with a graphics processing unit (NVIDIA GeForce RTX 2080 SUPER, NVIDIA, Santa Clara, CA) with a core frequency of 1890 MHz and 8 GB of video memory. The hardware environment configuration is shown in Table 2.

Table 2 The environment of the experiment

CPU | Intel i5-9400F
Graphics processing unit | NVIDIA GeForce RTX 2080 SUPER
Video memory | 8 GB
RAM | 8 GB
Programming language | Python
Deep learning framework | PyTorch

Part of the pictures in the dataset came from a steam turbine plant; the others were collected by photographing cracks on disused steam turbine blades provided by the plant. To expand the dataset, we also made some cracks on test pieces that are often used in magnetic particle inspection. The dataset contained a total of 982 images and was divided into two parts: 700 images for training and 282 images for testing. Figure 10 shows the experimental environment and a crack image collected by the robot on a steam turbine blade.

Fig. 10 Image acquisition experiment of the robot on the steam turbine's blade

For training yolov4, the initial learning rate was 0.01 and the attenuation coefficient was 0.005. At 200 and 600 training iterations, the learning rate was reduced to 0.0001 and 0.00001, respectively, to further converge the loss function. After each iteration, the weights trained in that round were used to calculate the loss on the test set to judge the quality of training.

In the first 50 iterations, the backbone network weights were frozen. Since the backbone feature extraction part of the network used weights pretrained on the visual object classes dataset, this part could be frozen at first so that more resources could be concentrated on the postprocessing part of the network, greatly saving training time. After 50 iterations, the freezing was removed, and the backbone and postprocessing parts were trained together to adapt to the magnetic particle flaw detection dataset and learn features that differ from the visual object classes dataset.
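In PyTorch, this two-phase schedule can be expressed by toggling requires_grad on the backbone parameters. The sketch below assumes the detector exposes its feature extractor as model.backbone, which depends on the yolov4 implementation actually used.

```python
import torch

def set_backbone_frozen(model: torch.nn.Module, frozen: bool) -> None:
    """Freeze (or unfreeze) the pretrained feature-extraction backbone."""
    for param in model.backbone.parameters():
        param.requires_grad = not frozen

# Two-phase schedule described in the text:
#   iterations 1-50: backbone frozen, only the postprocessing head is trained
#   iteration 51 on: the whole network is fine-tuned together
# set_backbone_frozen(model, True)
# ... train for 50 iterations ...
# set_backbone_frozen(model, False)
```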

The loss curves of the yolov4 model are shown in Fig. 11. The vertical coordinate is the loss value, and the horizontal coordinate is the number of iterations in units of 1000. Figure 11(a) shows the loss in the training phase: after about 200 iterations the model had basically stabilized, and the loss dropped to about 0.4, with a final training loss of 0.278. Figure 11(b) shows the loss in the testing phase; since the images in the validation set did not participate in training, this loss was slightly higher than the training loss.

Fig. 11 The loss curve of the yolov4 model

Figure 12 shows the prediction results of the yolov4 model with the Mish activation function; the number on each box is the confidence score, i.e., the degree of confidence that an object is actually present in the box. For longer cracks, the confidence score was above 0.9, while for small cracks it dropped to around 0.7, since small cracks are easily confused with large noise spots. To distinguish cracks from large noise, an object in a box was determined to be a crack as long as its confidence score was greater than 0.5.

Fig. 12 The prediction result of using the yolov4 model with the Mish activation function

This paper uses several evaluation indexes, including accuracy, precision (P), recall (R), frames per second (FPS), and detection time, to evaluate the performance of the selected model, and compares faster regions with convolutional neural network features (faster R-CNN), the single shot multibox detector (SSD), and three yolo variants. As shown in Table 3, the accuracy of all models was above 90%, meaning they could correctly identify most of the cracks. Although faster R-CNN performed very well in accuracy and had high precision and recall values, its detection time was too long to meet the needs of real-time detection. Models such as SSD and yolov3 had very high FPS values, but their accuracy was worse than that of yolov4. The yolov4 model with the Mish activation function outperformed the other models, offering higher accuracy and a short detection time.

Table 3 The performance of different deep learning models

Model | Accuracy (%) | P (%) | R (%) | FPS (f/s) | Detection time (ms)
Faster R-CNN | 97.6 | 58.47 | 49.4 | 0.44 | 2457.745
SSD | 92.8 | 42.08 | 29.7 | 33.28 | 30.338
yolov3 | 95.3 | 48.36 | 30.5 | 33.69 | 29.784
yolov4 (ReLU) | 96.2 | 49.89 | 35.7 | 31.94 | 31.233
yolov4 (mish) | 96.4 | 53.26 | 36.0 | 31.36 | 31.578

5 Conclusions

In this paper, we analyzed typical crack detection methods and proposed an automated crack detection system for steam turbine blades. The detection system includes two major parts: (1) a crack detection device used to acquire images based on magnetic particle inspection, and (2) a crack detection and image processing model used to identify cracks, extract their structure, and calculate their length. A new edge-preserving image pyramid was also proposed to eliminate background noise while preserving the crack information to the utmost extent. An experimental study showed reliable performance of the proposed crack detection solution, and the proposed crack detection and image analysis model can potentially be applied to crack detection in other industrial areas.

At present, the robot is designed to perform crack detection on one side of the blade only; after detecting one side, it must be manually placed on the back of the blade or on another blade. In the future, we aim to further improve the robot design for the practical working environment, e.g., by adding the ability to climb over from the blade to the blade root and then onto another blade. In addition, the image analysis algorithm proposed in this paper can currently only measure the length of a crack; we will try to use three-dimensional reconstruction techniques to achieve crack depth detection in further research.

References

1. BP p.l.c., 2021, "Statistical Review of World Energy 2021," London, UK, accessed July 6, 2021, https://www.bp.com/en/global/corporate/energy-economics/statistical-review-of-world-energy.html
2. Jonas, O., and Machemer, L., 2008, "Steam Turbine Corrosion and Deposits Problems and Solutions," Proceedings of the 37th Turbomachinery Symposium, Houston, TX, Aug. 1, pp. 211–228. 10.21423/R1P05C
3. Ziegler, D., Puccinelli, M., Bergallo, B., and Picasso, A., 2013, "Investigation of Turbine Blade Failure in a Thermal Power Plant," Eng. Fail. Anal., 1(3), pp. 192–199. 10.1016/j.csefa.2013.07.002
4. Sperry, R. E., Toney, S., and Shade, D. J., 1977, "Some Adverse Effects of Stress Corrosion in Steam Turbines," ASME J. Eng. Gas Turbines Power, 99(2), pp. 255–260. 10.1115/1.3446282
5. Ihn, J. B., and Chang, F. K., 2004, "Detection and Monitoring of Hidden Fatigue Crack Growth Using a Built-In Piezoelectric Sensor/Actuator Network: I. Diagnostics," Smart Mater. Struct., 13(3), pp. 609–620. 10.1088/0964-1726/13/3/020
6. AbdAlla, A. N., Faraj, M. A., Samsuri, F., Rifai, D., Ali, K., and Al-Douri, Y., 2019, "Challenges in Improving the Performance of Eddy Current Testing: Review," Meas. Control, 52(1–2), pp. 46–64. 10.1177/0020294018801382
7. Mohan, A., and Poobal, S., 2018, "Crack Detection Using Image Processing: A Critical Review and Analysis," Alex. Eng. J., 57(2), pp. 787–798. 10.1016/j.aej.2017.01.020
8. Yang, R. Z., He, Y. Z., Mandelis, A., Wang, N. C., Wu, X., and Huang, S. D., 2018, "Induction Infrared Thermography and Thermal-Wave-Radar Analysis for Imaging Inspection and Diagnosis of Blade Composites," IEEE Trans. Ind. Inform., 14(12), pp. 5637–5647. 10.1109/TII.2018.2834462
9. Jing, Y. H., Yang, F. B., Li, D. Q., Li, H., Zhang, L., Li, D., and Zhu, Q., 2007, "X-Ray Inspection of MIM 418 Superalloy Turbine Wheels and Defects Analysis," Rare Met. Mat. Eng., 42(2), pp. 317–321. 10.1016/S1875-5372(17)30087-5
10. He, H., Zheng, Z. B., Yang, Z. J., Wang, X. C., and Wu, Y. X., 2020, "Failure Analysis of Steam Turbine Blade Roots," Eng. Fail. Anal., 115, p. 104629. 10.1016/j.engfailanal.2020.104629
11. Kumar, M., Heinig, R., Cottrell, M., Siewert, C., Almstedt, H., Feiner, D., and Griffin, J., 2022, "Detection of Cracks in Turbomachinery Blades by Online Monitoring," ASME Paper No. GT2020-14813. 10.1115/GT2020-14813
12. Zhang, Z., Liu, T., Zhang, D., and Xie, Y., 2021, "Water Droplet Erosion Life Prediction Method for Steam Turbine Blade Materials Based on Image Recognition and Machine Learning," ASME J. Eng. Gas Turbines Power, 143(3), p. 031009. 10.1115/1.4049768
13. Zhang, J., Yang, X., Wang, H., Bu, Y., and Liang, F., 2013, "Research of Intelligent Image Recognition Technology in Fluorescent Magnetic Particle Flaw Detection," J. Mater. Sci. Technol., 29(1), pp. 82–84. 10.1016/j.jmst.2012.12.012
14. Mohtasham Khani, M., Vahidnia, S., Ghasemzadeh, L., Ozturk, Y. E., Yuvalaklioglu, M., Akin, S., and Ure, N. K., 2020, "Deep-Learning-Based Crack Detection With Applications for the Structural Health Monitoring of Gas Turbines," Struct. Health Monit., 19(5), pp. 1440–1452. 10.1177/1475921719883202
15. Aust, J., Shankland, S., Pons, D., Mukundan, R., and Mitrovic, A., 2021, "Automated Defect Detection and Decision-Support in Gas Turbine Blade Inspection," Aerospace, 8(2), p. 30. 10.3390/aerospace8020030
16. Bochkovskiy, A., Wang, C., and Liao, H. M., 2020, "YOLOv4: Optimal Speed and Accuracy of Object Detection," e-print arXiv:2004.10934. 10.48550/arXiv.2004.10934
17. He, K., Sun, J., and Tang, X., 2013, "Guided Image Filtering," IEEE Trans. Pattern Anal., 35(6), pp. 1397–1409. 10.1109/TPAMI.2012.213
18. Anderson, C. H., Bergen, J. R., Burt, P. J., and Ogden, J. M., 1984, "Pyramid Methods in Image Processing," RCA Eng., 29(6), pp. 33–41. https://wxs.ca/research/multiscale-neural-synthesis/RCA%20Eng.%201984%20Adelson.pdf
19. Lai, W. S., Huang, J. B., Ahuja, N., and Yang, M. H., 2017, "Deep Laplacian Pyramid Networks for Fast and Accurate Super-Resolution," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, July 21–26, pp. 624–632. 10.1109/CVPR.2017.618
20. Zuiderveld, K., 1994, "Contrast Limited Adaptive Histogram Equalization," Graphics Gems IV, Academic Press Professional, Association for Computing Machinery, New York, pp. 474–485.