
CN111581409A - Damage image feature database construction method and system and engine - Google Patents


Info

Publication number
CN111581409A
CN111581409A
Authority
CN
China
Prior art keywords
damage
image
engine
database
damage image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010400530.0A
Other languages
Chinese (zh)
Inventor
郑波
马昕
张小强
高会英
卢俊文
高峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Civil Aviation Flight University of China
Original Assignee
Civil Aviation Flight University of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Civil Aviation Flight University of China filed Critical Civil Aviation Flight University of China
Priority to CN202010400530.0A priority Critical patent/CN111581409A/en
Publication of CN111581409A publication Critical patent/CN111581409A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/51 Indexing; Data structures therefor; Storage structures
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30108 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of database construction and discloses a damage image feature database construction method, a construction system, and a civil aviation engine. Damage images of different structures inside a civil aviation engine are collected by non-destructive testing; digital image processing is used to preprocess the collected damage images by noise reduction, enhancement, and segmentation; digital features of the damage images are extracted on the basis of color moment features and gray-level co-occurrence matrix texture features; an engine damage image feature database is built from these digital features; the database is then used to identify unknown damage images and display the recognition results, while newly encountered damage features are added to the database. The proposed feature extraction method based on color moments and gray-level co-occurrence matrix texture features is better suited to describing engine damage images, accurately expresses engine damage characteristics, and provides a reasonable and effective damage image feature database.

Figure 202010400530

Description

A damage image feature database construction method, construction system and engine

Technical Field

The invention belongs to the technical field of database construction, and in particular relates to a damage image feature database construction method, construction system and engine.

Background Art

At present, the safe operation of civil aviation aircraft is directly related to the safety of passengers' lives and property, and ensuring safe flight is the lifeline of the civil aviation industry. As a highly integrated, precision-engineered and complex industrial product, the engine provides the power for aircraft operation and is a key system for ensuring flight safety. According to global civil aviation statistics, flight accidents caused by engines account for about 50% of all accidents, and engine maintenance accounts for about 40% of total maintenance costs. Research on efficient and accurate maintenance decision-making for civil aviation engines is therefore of great significance for ensuring flight safety, reducing maintenance costs and improving operational efficiency.

Civil aviation engines operate for long periods in the harsh environment of high temperature, high pressure and high load. Key components such as the disks of each stage, blades, turbines and fuel nozzles are easily subjected to various impact loads, producing cracks, corrosion, tearing, burns, block loss and other types of damage. This not only degrades engine performance but can also cause the engine to fail in operation and, in severe cases, threaten flight safety with serious consequences. With the help of digital image processing, operations such as noise reduction, enhancement and segmentation can be applied to damage images to increase image contrast and improve damage identification accuracy. However, existing damage identification based on digital image processing still relies on expert experience; because the engine structure is complex and the damage types and features are numerous, traditional expert-experience methods are increasingly unable to identify damage types accurately. Moreover, no feature database of engine damage images has yet been reported in the prior art.

Through the above analysis, the problems and defects of the prior art are as follows: (1) Existing damage identification based on digital image processing still relies on expert experience, but the engine structure is complex and the damage types and features are numerous, so traditional expert-experience methods are increasingly unable to identify damage types accurately.

(2) No feature database of engine damage images has yet been reported in the prior art.

The difficulty of solving the above problems and defects is as follows:

Engine damage types and damage causes are complex and the image features are diverse, so it is difficult to extract the image features of each damage mode accurately.

The significance of solving the above problems and defects is that the difficulties in extracting engine damage image features are well resolved, with potential carry-over to related recognition tasks such as vehicle license plate recognition.

Summary of the Invention

In view of the problems existing in the prior art, the present invention provides a damage image feature database construction method, construction system and engine, and in particular a damage image feature database construction method based on color features and texture features.

The present invention is implemented as a damage image feature database construction method comprising the following steps:

Step 1: collect damage images of different structures inside a civil aviation engine by non-destructive testing.

Step 2: use digital image processing to preprocess the collected engine damage images by noise reduction, enhancement and segmentation.

Step 3: extract digital features of the damage images on the basis of color moment features and gray-level co-occurrence matrix texture features.

Step 4: build an engine damage image feature database from the digital features of the damage images.

Step 5: use the engine damage image feature database to identify unknown damage images and display the recognition results, while adding new damage features to the database.
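Purely as an illustration, the five steps above can be sketched as one processing pipeline. All function names, the synthetic "acquisition" step, and the nearest-neighbor matching rule below are assumptions made for the sketch, not parts of the patent:

```python
import numpy as np

def acquire_damage_image(rng):
    """Stand-in for NDT image acquisition (Step 1): a synthetic RGB array."""
    return rng.integers(0, 256, size=(64, 64, 3)).astype(np.float64)

def preprocess(img):
    """Step 2 sketch: grayscale, crude mean-filter denoising, contrast
    stretching; segmentation is reduced to a mean threshold."""
    gray = 0.30 * img[..., 0] + 0.59 * img[..., 1] + 0.11 * img[..., 2]
    padded = np.pad(gray, 1, mode="edge")
    h, w = gray.shape
    denoised = sum(padded[dy:dy + h, dx:dx + w]
                   for dy in range(3) for dx in range(3)) / 9.0
    stretched = (denoised - denoised.min()) / (np.ptp(denoised) + 1e-12) * 255
    mask = stretched > stretched.mean()
    return stretched, mask

def extract_features(gray):
    """Step 3 sketch: only the three color moments (mu, sigma, zeta)."""
    mu = gray.mean()
    sigma = np.sqrt(((gray - mu) ** 2).mean())
    zeta = np.cbrt(((gray - mu) ** 3).mean())
    return np.array([mu, sigma, zeta])

def run_pipeline():
    rng = np.random.default_rng(0)
    database = []                                 # Step 4: feature database
    for label in ("burn-through", "ablation"):
        gray, _ = preprocess(acquire_damage_image(rng))
        database.append((label, extract_features(gray)))
    # Step 5: identify an unknown image by its nearest feature vector,
    # then fold the new feature vector back into the database
    unknown = extract_features(preprocess(acquire_damage_image(rng))[0])
    label = min(database, key=lambda e: np.linalg.norm(e[1] - unknown))[0]
    database.append((label, unknown))
    return len(database), label

print(run_pipeline())
```

The nearest-vector match here merely stands in for the recognition stage; the detailed description later names an SVM classifier for that role.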

Further, in Step 3, the method for extracting the damage image features comprises:

(1) Converting the damage image to be examined to grayscale.

(2) Extracting color moment features to obtain the first-order, second-order and third-order moments.

(3) Extracting gray-level co-occurrence matrix features, characterizing the texture reflected by the gray-level co-occurrence matrix through energy, contrast, correlation, entropy and inverse difference moment.

Further, converting the damage image to grayscale in step (1) comprises:

Let R, G and B be the red, green and blue component matrices of the engine damage image. In accordance with the sensitivity of the human eye to different colors, the grayscale image is obtained as a weighted average of the R, G and B components:

f(i,j) = 0.30·R(i,j) + 0.59·G(i,j) + 0.11·B(i,j)    (1)

where f denotes the grayscale matrix of the image.
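A one-line NumPy rendering of Eq. (1); the array used here is synthetic test data, not patent data:

```python
import numpy as np

def to_gray(img_rgb):
    """Weighted grayscale conversion of Eq. (1):
    f(i,j) = 0.30*R(i,j) + 0.59*G(i,j) + 0.11*B(i,j)."""
    r, g, b = img_rgb[..., 0], img_rgb[..., 1], img_rgb[..., 2]
    return 0.30 * r + 0.59 * g + 0.11 * b

# the weights sum to 1.0, so a uniform mid-gray pixel maps to itself
img = np.full((2, 2, 3), 128.0)
f = to_gray(img)
print(f[0, 0])  # ≈ 128.0
```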

Further, the color moment feature extraction of step (2) comprises:

After the grayscale damage image is obtained, the statistical characteristics of its color are described by computed moments. Since color distribution information is concentrated mainly in the low-order moments, the first three moments are used to represent the color distribution of the image; they are calculated as follows:

$$\mu=\frac{1}{N}\sum_{i}\sum_{j}p_{ij} \tag{2}$$

$$\sigma=\left[\frac{1}{N}\sum_{i}\sum_{j}\left(p_{ij}-\mu\right)^{2}\right]^{\frac{1}{2}} \tag{3}$$

$$\zeta=\left[\frac{1}{N}\sum_{i}\sum_{j}\left(p_{ij}-\mu\right)^{3}\right]^{\frac{1}{3}} \tag{4}$$

where N is the number of pixels in the matrix; p_ij is the pixel at row i, column j; μ is the first-order moment (the mean of the matrix), representing the average intensity of the grayscale image; σ is the second-order moment (the variance of the matrix), reflecting the non-uniformity of the grayscale image; and ζ is the third-order moment (the skewness of the matrix), characterizing the asymmetry of the grayscale image.
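The three moments can be written compactly in NumPy. This is an illustrative sketch (the sample matrix is made up); a signed cube root is used so that the third moment keeps the sign of the asymmetry:

```python
import numpy as np

def color_moments(gray):
    """First three color moments of a grayscale matrix.
    mu: mean intensity; sigma: spread (non-uniformity);
    zeta: signed cube-root third moment (asymmetry)."""
    p = gray.astype(np.float64)
    mu = p.mean()
    sigma = np.sqrt(((p - mu) ** 2).mean())
    zeta = np.cbrt(((p - mu) ** 3).mean())   # cbrt keeps the sign
    return mu, sigma, zeta

gray = np.array([[0.0, 0.0], [0.0, 4.0]])    # one bright outlier
mu, sigma, zeta = color_moments(gray)
print(round(float(mu), 3), round(float(sigma), 3), round(float(zeta), 3))
# → 1.0 1.732 1.817
```

The positive ζ correctly signals that the intensity distribution is skewed toward the single bright pixel.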

Further, the gray-level co-occurrence matrix feature extraction of step (3) comprises:

The gray-level co-occurrence matrix (GLCM) is obtained by counting, over the grayscale image, how often two pixels separated by distance d in direction θ take given pairs of gray levels. It reflects comprehensive information about the direction, interval and amplitude of gray-level variation and is the basis for analyzing local image patterns. Energy (angular second moment, ASM), contrast (CON), correlation (COR), entropy (ENT) and inverse difference moment (IDM) are used to characterize the texture features reflected by the GLCM; the calculation formulas are as follows:

$$ASM=\sum_{i}\sum_{j}P(i,j)^{2} \tag{5}$$

$$CON=\sum_{i}\sum_{j}(i-j)^{2}P(i,j) \tag{6}$$

$$COR=\frac{\sum_{i}\sum_{j}ij\,P(i,j)-\mu_{x}\mu_{y}}{\sigma_{x}\sigma_{y}} \tag{7}$$

$$ENT=-\sum_{i}\sum_{j}P(i,j)\ln P(i,j) \tag{8}$$

$$IDM=\sum_{i}\sum_{j}\frac{P(i,j)}{1+(i-j)^{2}} \tag{9}$$

where

$$\mu_{x}=\sum_{i}i\sum_{j}P(i,j),\qquad \mu_{y}=\sum_{j}j\sum_{i}P(i,j)$$

$$\sigma_{x}^{2}=\sum_{i}(i-\mu_{x})^{2}\sum_{j}P(i,j),\qquad \sigma_{y}^{2}=\sum_{j}(j-\mu_{y})^{2}\sum_{i}P(i,j)$$

Energy reflects the uniformity of the gray-level distribution and the coarseness of the texture: when the elements of the GLCM are concentrated, ASM is relatively large, indicating a fairly uniform and regular texture pattern. Contrast reflects the clarity of the grayscale image: in general, the deeper the texture grooves, the larger CON and the sharper the visual effect. Correlation measures the similarity of the row or column elements of the GLCM: when the elements are uniform and equal, COR is large, reflecting local gray-level correlation in the image. Entropy measures the random information carried by the image: when the GLCM elements are dispersed, ENT is large, indicating a non-uniform and complex image. The inverse difference moment reflects the coarseness of the image texture: IDM is large for coarse textures and small for fine ones. In total, eight features of the grayscale image (the three color moments and the five GLCM texture features) are extracted to describe engine damage images and to build the engine damage image database, providing sample support for automatic recognition.
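The five GLCM statistics can be computed from scratch in NumPy. This sketch uses common Haralick-style conventions (a symmetric, normalized GLCM at offset d = 1, θ = 0°) and is not code from the patent:

```python
import numpy as np

def glcm(gray, levels):
    """Normalized symmetric GLCM for offset (0, 1): counts how often
    gray level a sits immediately left of gray level b."""
    m = np.zeros((levels, levels))
    a, b = gray[:, :-1].ravel(), gray[:, 1:].ravel()
    np.add.at(m, (a, b), 1)
    m += m.T                      # make the matrix symmetric
    return m / m.sum()

def glcm_features(p):
    """ASM, CON, COR, ENT, IDM of a normalized GLCM p."""
    levels = p.shape[0]
    i, j = np.indices((levels, levels))
    asm = (p ** 2).sum()
    con = ((i - j) ** 2 * p).sum()
    mu_x, mu_y = (i * p).sum(), (j * p).sum()
    var_x = ((i - mu_x) ** 2 * p).sum()
    var_y = ((j - mu_y) ** 2 * p).sum()
    cor = ((i * j * p).sum() - mu_x * mu_y) / np.sqrt(var_x * var_y)
    nz = p[p > 0]                 # skip empty cells in the entropy sum
    ent = -(nz * np.log(nz)).sum()
    idm = (p / (1.0 + (i - j) ** 2)).sum()
    return asm, con, cor, ent, idm

# a perfectly regular 0/1 stripe pattern: high contrast, low entropy
gray = np.tile([0, 1], (4, 4))        # shape (4, 8): columns 0,1,0,1,...
feats = glcm_features(glcm(gray, levels=2))
print([round(float(x), 3) for x in feats])
# → [0.5, 1.0, -1.0, 0.693, 0.5]
```

The stripe pattern illustrates the interpretations above: maximal contrast for adjacent opposite levels, COR = −1 for perfect anticorrelation, and entropy ln 2 for a two-cell GLCM.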

Another object of the present invention is to provide a damage image feature database construction system comprising:

a structural damage image acquisition module, which collects damage images of different structures inside a civil aviation engine by non-destructive testing;

an engine damage image processing module, which uses digital image processing to preprocess the collected engine damage images by noise reduction, enhancement and segmentation;

a damage image digital feature extraction module, which extracts digital features of the damage images on the basis of color moment features and gray-level co-occurrence matrix texture features;

an engine damage image feature database construction module, which builds the engine damage image feature database from the digital features of the damage images;

a recognition result display module, which uses the engine damage image feature database to identify unknown damage images and display the recognition results, while adding new damage features to the database.

Another object of the present invention is to provide a program storage medium receiving user input, the stored computer program causing an electronic device to execute the damage image feature database construction method.

Another object of the present invention is to provide a computer program product stored on a computer-readable medium, comprising a computer-readable program which, when executed on an electronic device, provides a user input interface for implementing the damage image feature database construction method.

Another object of the present invention is to provide a civil aviation engine that analyzes and processes damage information of its different internal structures using the damage image feature database construction method.

Combining all of the above technical solutions, the advantages and positive effects of the present invention are as follows. The invention uses color moments and the gray-level co-occurrence matrix (GLCM) to extract digital features of engine damage images and, from images of four damage types of a certain engine model, constructs a feature database based on non-destructive testing images according to the different feature extraction methods. The proposed feature extraction method based on color moments and GLCM texture features is better suited to describing engine damage images, accurately expresses engine damage characteristics, and provides a reasonable and effective damage image feature database.

By means of non-destructive testing, the invention can accurately detect structural damage produced inside the engine and form image information. Determining the engine damage type is the key link of engine image analysis and guides the further judgment of the damage mechanism, identification of damaged parts and assessment of damage severity. With the help of digital image processing, noise reduction, enhancement and segmentation of damage images can increase image contrast and improve damage identification accuracy.

Brief Description of the Drawings

Figure 1 is a flowchart of the damage image feature database construction method based on color features and texture features provided by an embodiment of the present invention.

Figure 2 is a schematic diagram of the engine damage image recognition process provided by an embodiment of the present invention.

Figure 3 is a flowchart of image feature extraction provided by an embodiment of the present invention.

Figure 4 is a schematic diagram of damage images of a certain engine model provided by an embodiment of the present invention.

Detailed Description of the Embodiments

In order to make the objectives, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to the embodiments. It should be understood that the specific embodiments described here only explain the present invention and do not limit it.

In view of the problems existing in the prior art, the present invention provides a method for constructing a damage image feature database based on color features and texture features, described in detail below with reference to the accompanying drawings.

As shown in Figure 1, the damage image feature database construction method based on color features and texture features provided by an embodiment of the present invention comprises the following steps:

S101: collect damage images of different structures inside a civil aviation engine by non-destructive testing.

S102: use digital image processing to preprocess the collected engine damage images by noise reduction, enhancement and segmentation.

S103: extract digital features of the damage images on the basis of color moment features and gray-level co-occurrence matrix texture features.

S104: build an engine damage image feature database from the digital features of the damage images.

S105: use the engine damage image feature database to identify unknown damage images and display the recognition results, while adding new damage features to the database.
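The three preprocessing operations of S102 can each be sketched in plain NumPy. The median filter, contrast stretch, and Otsu threshold below are standard stand-ins chosen for illustration; the patent does not prescribe specific denoising, enhancement, or segmentation algorithms:

```python
import numpy as np

def denoise_median3(gray):
    """3x3 median filter: a common denoising step for noisy NDT images."""
    p = np.pad(gray, 1, mode="edge")
    stack = np.stack([p[dy:dy + gray.shape[0], dx:dx + gray.shape[1]]
                      for dy in range(3) for dx in range(3)])
    return np.median(stack, axis=0)

def enhance_stretch(gray):
    """Contrast stretching to the full 0..255 range (enhancement step)."""
    lo, hi = gray.min(), gray.max()
    return (gray - lo) / (hi - lo + 1e-12) * 255.0

def segment_otsu(gray):
    """Otsu threshold: picks the cut maximizing between-class variance."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    prob = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:t] * prob[:t]).sum() / w0
        mu1 = (levels[t:] * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return gray >= best_t

# synthetic image: dark background, bright "damage" patch, additive noise
rng = np.random.default_rng(1)
img = np.full((32, 32), 40.0)
img[10:20, 10:20] = 200.0
img += rng.normal(scale=5.0, size=img.shape)
mask = segment_otsu(enhance_stretch(denoise_median3(img)))
print(mask.sum())  # roughly the 10x10 bright patch survives thresholding
```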

As shown in Figure 2, the present invention provides a damage image feature database construction system comprising:

a structural damage image acquisition module, which collects damage images of different structures inside a civil aviation engine by non-destructive testing;

an engine damage image processing module, which uses digital image processing to preprocess the collected engine damage images by noise reduction, enhancement and segmentation;

a damage image digital feature extraction module, which extracts digital features of the damage images on the basis of color moment features and gray-level co-occurrence matrix texture features;

an engine damage image feature database construction module, which builds the engine damage image feature database from the digital features of the damage images;

a recognition result display module, which uses the engine damage image feature database to identify unknown damage images and display the recognition results, while adding new damage features to the database.

Figure 2 shows the engine damage type recognition process. After different damage images have been obtained by non-destructive testing, the key is to form a digitized image feature representation in order to build the engine damage database and provide original samples for training the SVM classifier.

The present invention is further described below in conjunction with the damage database based on color features and texture features.

1 Feature extraction based on color moments and the gray-level co-occurrence matrix (GLCM)

The analysis and extraction of low-level image features such as color, texture and shape has become a widely used approach in industry and academia and has produced a rich body of results: reference [1] proposes a color feature extraction method based on the HSV space; reference [2] uses statistics of the gray-level co-occurrence matrix (GLCM) to describe image texture features; and reference [3] fuses GLCM features with Tamura features to obtain new digital image features. In general, the color features of an image are robust and are the most intuitive and obvious features reflecting the damaged part of an engine, while texture features reflect the structural characteristics of the object surface and its surrounding context and are a property of the grayscale image. Based on the characteristics of non-destructively acquired damage images, the present invention proposes an image feature extraction method based on color moment features and GLCM texture features; Figure 3 gives the flowchart of image feature extraction.

Let R, G and B be the red, green and blue component matrices of the engine damage image. In accordance with the sensitivity of the human eye to different colors, the grayscale image is obtained as a weighted average of the R, G and B components:

f(i,j) = 0.30·R(i,j) + 0.59·G(i,j) + 0.11·B(i,j)    (1)

where f denotes the grayscale matrix of the image. After the grayscale damage image is obtained, the statistical characteristics of its color are described by computed moments. Since color distribution information is concentrated mainly in the low-order moments, the first three moments are used to represent the color distribution of the image; they are calculated as follows:

$$\mu=\frac{1}{N}\sum_{i}\sum_{j}p_{ij} \tag{2}$$

$$\sigma=\left[\frac{1}{N}\sum_{i}\sum_{j}\left(p_{ij}-\mu\right)^{2}\right]^{\frac{1}{2}} \tag{3}$$

$$\zeta=\left[\frac{1}{N}\sum_{i}\sum_{j}\left(p_{ij}-\mu\right)^{3}\right]^{\frac{1}{3}} \tag{4}$$

where N is the number of pixels in the matrix; p_ij is the pixel at row i, column j; μ is the first-order moment (the mean of the matrix), representing the average intensity of the grayscale image; σ is the second-order moment (the variance of the matrix), reflecting the non-uniformity of the grayscale image; and ζ is the third-order moment (the skewness of the matrix), characterizing the asymmetry of the grayscale image. The gray-level co-occurrence matrix (GLCM) is a common method of describing texture by studying the spatial correlation properties of gray levels, proposed by R. Haralick et al. in the early 1970s. The GLCM is obtained by counting, over the grayscale image, how often two pixels separated by distance d in direction θ take given pairs of gray levels; it reflects comprehensive information about the direction, interval and amplitude of gray-level variation and is the basis for analyzing local image patterns. For the definition and calculation of the GLCM, see references [3, 4]. In order to describe image texture features with the GLCM more intuitively, the present invention uses energy (angular second moment, ASM), contrast (CON), correlation (COR), entropy (ENT) and inverse difference moment (IDM) to characterize the texture features reflected by the GLCM; the calculation formulas are as follows:

$$ASM=\sum_{i}\sum_{j}P(i,j)^{2} \tag{5}$$

$$CON=\sum_{i}\sum_{j}(i-j)^{2}P(i,j) \tag{6}$$

$$COR=\frac{\sum_{i}\sum_{j}ij\,P(i,j)-\mu_{x}\mu_{y}}{\sigma_{x}\sigma_{y}} \tag{7}$$

$$ENT=-\sum_{i}\sum_{j}P(i,j)\ln P(i,j) \tag{8}$$

$$IDM=\sum_{i}\sum_{j}\frac{P(i,j)}{1+(i-j)^{2}} \tag{9}$$

where

$$\mu_{x}=\sum_{i}i\sum_{j}P(i,j),\qquad \mu_{y}=\sum_{j}j\sum_{i}P(i,j)$$

$$\sigma_{x}^{2}=\sum_{i}(i-\mu_{x})^{2}\sum_{j}P(i,j),\qquad \sigma_{y}^{2}=\sum_{j}(j-\mu_{y})^{2}\sum_{i}P(i,j)$$

Energy reflects the uniformity of the gray-level distribution and the coarseness of the texture: when the elements of the GLCM are concentrated, ASM is relatively large, indicating a fairly uniform and regular texture pattern. Contrast reflects the clarity of the grayscale image: in general, the deeper the texture grooves, the larger CON and the sharper the visual effect. Correlation measures the similarity of the row or column elements of the GLCM: when the elements are uniform and equal, COR is large, reflecting local gray-level correlation in the image. Entropy measures the random information carried by the image: when the GLCM elements are dispersed, ENT is large, indicating a non-uniform and complex image. The inverse difference moment reflects the coarseness of the image texture: IDM is large for coarse textures and small for fine ones. In total, eight features of the grayscale image (the three color moments and the five GLCM texture features) are extracted to describe engine damage images and to build the engine damage image database, providing sample support for automatic recognition.
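The patent does not specify how the feature database is stored. As one illustrative assumption, the eight-dimensional feature vectors can be kept in a single relational table keyed by part and damage type; the schema and column names below are invented for the sketch:

```python
import sqlite3

FEATURES = ["mu", "sigma", "zeta", "asm", "con", "cor", "ent", "idm"]

def create_db(conn):
    """One row per image: part, damage type, and the 8 extracted features."""
    cols = ", ".join(f"{f} REAL NOT NULL" for f in FEATURES)
    conn.execute(f"""CREATE TABLE damage_features (
        id INTEGER PRIMARY KEY,
        part TEXT NOT NULL,          -- e.g. 'HPT blade'
        damage_type TEXT NOT NULL,   -- e.g. 'ablation'
        {cols})""")

def insert_sample(conn, part, damage_type, vec):
    assert len(vec) == len(FEATURES)
    placeholders = ", ".join("?" * (len(FEATURES) + 2))
    conn.execute(
        f"INSERT INTO damage_features (part, damage_type, {', '.join(FEATURES)}) "
        f"VALUES ({placeholders})", (part, damage_type, *vec))

conn = sqlite3.connect(":memory:")
create_db(conn)
insert_sample(conn, "HPT blade", "ablation", [1.0] * 8)
insert_sample(conn, "combustion chamber", "perforation", [2.0] * 8)
n = conn.execute("SELECT COUNT(*) FROM damage_features").fetchone()[0]
print(n)  # 2
```

Appending a newly identified feature vector (Step 5 / S105) is then just another `insert_sample` call.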

2 Construction of the engine damage image feature database

Four types of damage images of a certain model of civil aviation engine were collected during non-destructive testing; Figure 4 shows one damage image from each type.

Figure 4(a) shows burn-through damage to the engine's fuel nozzle baffle; Figure 4(b) shows ablation damage to a high pressure turbine blade (HPT blade); Figure 4(c) shows a high pressure compressor blade trailing edge (HPC blade TE) with a missing piece and cracks; and Figure 4(d) shows a perforated combustion chamber. Following the feature extraction method described above, a test database of civil aviation engine damage image features is constructed. The basic composition of the database is shown below: 339 samples are randomly selected as training samples and the remaining 86 serve as test samples, in order to evaluate the performance of the recognition algorithm proposed by the present invention. See Tables 1 and 2.

Table 1 Civil aviation engine damage image feature database

Part | Damage type | Training samples | Test samples
--- | --- | --- | ---
fuel nozzle baffle | burn-through | 78 | 20
HPT blade | ablation | 90 | 23
HPC blade TE | missing piece | 78 | 20
combustion chamber | perforation | 93 | 23
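The 339/86 split implied by Table 1 is a per-class (stratified) random partition. The following sketch reproduces it with the class counts from the table; the random seed and the use of contiguous per-class index blocks are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)              # seed is an assumption
classes = {                                 # part: (n_train, n_test) from Table 1
    "fuel nozzle baffle": (78, 20),
    "HPT blade": (90, 23),
    "HPC blade TE": (78, 20),
    "combustion chamber": (93, 23),
}

train_idx, test_idx, labels = [], [], []
offset = 0
for name, (n_tr, n_te) in classes.items():
    n = n_tr + n_te
    perm = rng.permutation(n) + offset      # shuffle samples within the class
    train_idx.extend(perm[:n_tr])           # first n_tr shuffled samples -> train
    test_idx.extend(perm[n_tr:])            # remainder -> test
    labels.extend([name] * n)
    offset += n
```

The same split indices are then applied to the eight-dimensional feature vectors so that every class keeps its Table 1 proportions in both sets.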

Table 2 Partial image feature samples

(The body of Table 2 is reproduced only as an image in the original publication.)

The feature extraction method based on color moments and GLCM texture features proposed by the present invention is better suited to describing engine damage images, accurately expresses the engine's damage characteristics, and provides a reasonable and effective damage image feature database.

The present invention is further described below in conjunction with verification of damage image recognition performance.

To verify the effect of the color-moment and GLCM based feature extraction proposed by the present invention, and at the same time to verify the damage identification performance of the AMPSO-optimized SVM (AMPSO-SVM), the following comparative experiments were carried out.

1 Influence of different feature extraction methods on recognition accuracy

There are many image feature extraction methods. Given the characteristics of engine damage images, the feature extraction method proposed by the present invention is compared with the methods of references [1-3, 5, 6]: reference [1] proposes an HSV color-space feature extraction method; reference [2], a texture feature extraction method based on GLCM statistics; reference [5], a Tamura-based feature extraction method; reference [3], a feature extraction method fusing GLCM and Tamura features; and reference [6], a feature extraction method fusing Tamura features with local gray color (GC) features.
Using AMPSO-SVM as the recognition algorithm, the PSO population is set to 40, the maximum number of iterations to 100, and the particle search range to [0, 100]; velocities are randomly initialized in [-1, 1], and k for cross-validation is set to 5. Image features are extracted from the civil aviation engine damage images by each method, and the algorithms are trained and tested. At the same time, common knowledge-based intelligent algorithms, such as the BP (backpropagation) network, the ELM (extreme learning machine) network, and the k-NN (k-nearest neighbor) algorithm, are introduced to test the impact of each feature extraction method on recognition accuracy. The error target of the BP network is set to 0.005 and its number of iterations to 300. The ELM network is optimized by the method of reference [7]. For k-NN, the recognition accuracy with k = 1 is reported. For the detailed computation of each algorithm, see the respective references. Because they are affected by random weight initialization, the output accuracies of the BP and ELM networks are uncertain; these two methods are therefore run 50 consecutive times in the same computing environment, and the average accuracy is taken as the final output. Table 3 shows the recognition results of the different feature extraction methods, where (C, γ)_best denotes the solution output by the AMPSO search and the fitness value is the average cross-validation accuracy. Reading the recognition accuracies in Table 3 column-wise shows that, for every recognition algorithm, the feature extraction method proposed by the present invention achieves a better recognition effect than the other feature extraction methods.
Analysis of the feature extraction algorithms shows the following. References [1], [2] and [5] are single-feature methods; their description of the damage images is not comprehensive enough, so their recognition accuracy is relatively poor. The methods of references [3] and [6] are fused-feature methods that characterize damage images along multiple dimensions; compared with single-feature methods they are more conducive to damage image recognition, and reference [6], which also considers color features, likewise achieves relatively high accuracy. The method proposed by the present invention, based on color moments and GLCM feature extraction, considers both color features and texture statistics and extracts engine damage image features more objectively and comprehensively; the experimental results show that the proposed feature extraction scheme is more reasonable and effective.
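The parameter search behind (C, γ)_best can be sketched with a plain PSO loop. This is a generic PSO, not the adaptive-mutation AMPSO of the invention: the population (40), iteration budget (100), search range [0, 100] and velocity initialization in [-1, 1] follow the text, while the inertia and acceleration coefficients and the stand-in fitness (a smooth surrogate for 5-fold cross-validation accuracy) are assumptions.

```python
import numpy as np

def pso(fitness, n_particles=40, n_iter=100, lo=0.0, hi=100.0, seed=0):
    """Maximize fitness over (C, gamma) in [lo, hi]^2 with a basic PSO."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, 2))     # particle positions (C, gamma)
    v = rng.uniform(-1.0, 1.0, (n_particles, 2))  # velocities in [-1, 1]
    pbest = x.copy()
    pbest_f = np.array([fitness(p) for p in x])
    g = pbest[pbest_f.argmax()].copy()            # global best position
    w, c1, c2 = 0.7, 1.5, 1.5                     # inertia/acceleration (assumed)
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                # keep particles in range
        f = np.array([fitness(p) for p in x])
        better = f > pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmax()].copy()
    return g, pbest_f.max()

# stand-in fitness with a known optimum near (C, gamma) = (50, 10);
# in the patent this would be the 5-fold CV accuracy of the SVM
toy = lambda p: -((p[0] - 50) ** 2 + (p[1] - 10) ** 2)
best, best_f = pso(toy)
```

Swapping the toy fitness for an SVM cross-validation score yields the (C, γ)_best reported in Table 3.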

2 Comparison of the recognition performance of different recognition algorithms

The present invention proposes an SVM optimized by AMPSO, obtaining the optimal (C, γ) through cross-validation and thereby ensuring a stable, reliable SVM output. To verify the advantages of the proposed algorithm in damage image recognition, the four recognition algorithms above are compared with AMPSO-SVM. Reading the recognition accuracies in Table 3 row-wise, AMPSO-SVM is clearly the best on essentially all of the feature data.

Table 3 Comparison of recognition performance across recognition algorithms and feature extraction methods

(The body of Table 3 is reproduced only as an image in the original publication.)

Table 3 shows that the recognition ability of the AMPSO-SVM method is superior to that of BP, ELM and k-NN. Longitudinal comparison further shows that the feature extraction method of the present invention outperforms HSV, GLCM, Tamura, GLCM-Tamura and GC-Tamura.

Because different recognition algorithms rest on different principles, it is difficult to guarantee that one method is effective for every data distribution; for example, on the data extracted by the method of reference [5], the recognition accuracy of AMPSO-SVM is lower than that of the ELM network. However, AMPSO-SVM effectively overcomes the influence of randomness: it does not suffer from the output uncertainty of the BP and ELM networks, nor is it sensitive to the value of k like k-NN. AMPSO-SVM therefore not only delivers good recognition performance but also provides stable, accurate output, offering reliable technical support for identifying damage types in civil aviation engines.

References:

[1] Yang Aobo, Sheng Jiachuan, Li Yuzhi, Liu Shang, Zhao Kunyuan. Color feature extraction based on HSV space [J]. Computer Knowledge and Technology, 2017(18): 193-195.

[2] Gao Chengcheng, Hui Xiaowei. Texture feature extraction based on the gray-level co-occurrence matrix [J]. Computer Systems & Applications, 2010, 19(6): 195-198.

[3] Geng Yanping, Gao Hongbin, Ren Zhiying. An image retrieval algorithm fusing color and texture features [J]. Wireless Internet Technology, 2017, 24: 113-116.

[4] R. M. Haralick, K. Shanmugam, I. Dinstein. Textural features for image classification [J]. IEEE Transactions on Systems, Man and Cybernetics, 1973, SMC-3(6): 610-621.

[5] Y. Liu, Z. Li, Z. M. Gao. An improved texture feature extraction method for tyre tread patterns [C]. International Conference on Intelligent Science and Big Data Engineering. Springer, Berlin, Heidelberg, 2013: 705-713.

[6] Li Na, Xiong Zhiyong, Xie Jin, Peng Chuan, Ren Kai. Multimodal brain tumor MR image segmentation based on Tamura texture feature extraction and SVM [J]. Journal of South-Central University for Nationalities (Natural Science Edition), 2018, 37(03): 148-153.

[7] Su Hongjun, S. Tian, Y. Cai, et al. Optimized extreme learning machine for urban land cover classification using hyperspectral imagery [J]. Frontiers of Earth Science, 2017, 11(4): 765-773.

From the description of the above embodiments, those skilled in the art will understand that the present invention may be implemented by software together with a necessary hardware platform, or entirely in hardware. Based on this understanding, all or part of the technical solution's contribution over the background art may be embodied in the form of a software product. That computer software product may be stored on a storage medium, such as ROM/RAM, a magnetic disk or an optical disc, and includes instructions for causing a computer device (a personal computer, a server, a network device, etc.) to execute the methods described in the embodiments, or in parts of the embodiments, of the present invention.

The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited to them. Any modification, equivalent replacement or improvement made by a person skilled in the art within the technical scope disclosed by the present invention, and within its spirit and principles, shall be covered by the protection scope of the present invention.

Claims (10)

1. A method for constructing a damage image feature database, characterized by comprising the following steps: constructing an engine damage image feature database from the acquired digital features of damage images;
identifying unknown damage images using the engine damage image feature database and displaying the identification results; and, at the same time, incorporating newly encountered damage features into the database.
2. The method for constructing the database of characteristics of the damage image according to claim 1, wherein the method for extracting the characteristics of the damage image comprises:
(1) carrying out gray processing on a damage image to be detected;
(2) performing color moment feature extraction to obtain a first moment, a second moment and a third moment;
(3) extracting gray-level co-occurrence matrix features, and characterizing the texture reflected by the gray-level co-occurrence matrix through energy, contrast, correlation, entropy and inverse difference moment.
3. The method for constructing the damage image feature database according to claim 2, wherein in the step (1), the method for performing graying processing on the damage image to be detected comprises:
let R, G and B be the red, green and blue component matrices of the engine damage image; according to the sensitivity of the human eye to the different color channels, the grayscale image is obtained as a weighted average of the three R, G, B components, computed as:
f(i,j)=0.30·R(i,j)+0.59·G(i,j)+0.11·B(i,j)
where f represents the gray matrix of the image.
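The weighted-average graying of claim 3 maps directly to a numpy expression. This is an illustrative sketch only; rounding the result back to 8-bit integers is an assumption not stated in the claim.

```python
import numpy as np

def to_gray(rgb):
    """f(i,j) = 0.30*R(i,j) + 0.59*G(i,j) + 0.11*B(i,j), rounded to uint8."""
    weights = np.array([0.30, 0.59, 0.11])
    # rgb has shape (H, W, 3); the matmul contracts the channel axis
    return np.rint(rgb.astype(float) @ weights).astype(np.uint8)

# a neutral gray maps to itself because the weights sum to 1.00
sample = np.array([[[100, 100, 100]]], dtype=np.uint8)
g = to_gray(sample)
```

The resulting matrix f = to_gray(rgb) is the grayscale image from which the color moments and the GLCM of the subsequent claims are computed.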
4. The method for constructing a damage image feature database according to claim 2, wherein in the step (2), the method for extracting the color moment features comprises:
after the grayed damage image is obtained, the statistical characteristics of its color are described by computed moments; since the color distribution information is concentrated in the low-order moments, the first three moments are used to represent the color distribution of the image, computed as follows:
$$\mu=\frac{1}{N}\sum_{i}\sum_{j}p_{ij}$$

$$\sigma=\Big(\frac{1}{N}\sum_{i}\sum_{j}(p_{ij}-\mu)^{2}\Big)^{1/2}$$

$$\zeta=\Big(\frac{1}{N}\sum_{i}\sum_{j}(p_{ij}-\mu)^{3}\Big)^{1/3}$$
wherein N is the number of pixels of the matrix; p_ij is the pixel value at row i, column j; μ is the first moment, also called the mean of the matrix, and represents the average intensity of the grayscale image; σ is the second moment; and ζ is the third moment.
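The three moments of claim 4 can be written out in a few lines. The sketch below follows the formulas directly (mean, root of the mean squared deviation, cube root of the mean cubed deviation); the toy input values are illustrative only.

```python
import numpy as np

def color_moments(f):
    """First three color moments of a grayscale image f (claim 4)."""
    p = f.astype(float).ravel()
    n = p.size                                    # N, the number of pixels
    mu = p.sum() / n                              # first moment: mean intensity
    sigma = np.sqrt(((p - mu) ** 2).sum() / n)    # second moment
    zeta = np.cbrt(((p - mu) ** 3).sum() / n)     # third moment (signed cube root)
    return mu, sigma, zeta

f = np.array([[0, 0], [10, 10]])                  # toy 2x2 grayscale "image"
mu, sigma, zeta = color_moments(f)                # mu = 5.0, sigma = 5.0, zeta = 0.0
```

zeta = 0 here because the toy distribution is symmetric about its mean; skewed intensity histograms, typical of burn or ablation damage regions, yield a nonzero third moment.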
5. The method for constructing a damage image feature database according to claim 2, wherein in the step (3), the method for extracting the gray level co-occurrence matrix features comprises:
the gray-level co-occurrence matrix (GLCM) is obtained by counting how often two pixels separated by distance d in direction θ on the grayscale image take each pair of gray levels; it reflects comprehensive information about the direction, interval and amplitude variation of the image's gray levels, and is the basis for analyzing local image patterns;
the texture characteristics reflected by the GLCM are represented by adopting energy, contrast, correlation, entropy and inverse difference moment, and the specific calculation formula is as follows:
$$\mathrm{ASM}=\sum_{i}\sum_{j}p(i,j)^{2}$$

$$\mathrm{CON}=\sum_{i}\sum_{j}(i-j)^{2}\,p(i,j)$$

$$\mathrm{COR}=\frac{\sum_{i}\sum_{j}(i-\mu_{x})(j-\mu_{y})\,p(i,j)}{\sigma_{x}\sigma_{y}}$$

$$\mathrm{ENT}=-\sum_{i}\sum_{j}p(i,j)\ln p(i,j)$$

$$\mathrm{IDM}=\sum_{i}\sum_{j}\frac{p(i,j)}{1+(i-j)^{2}}$$
wherein,
$$\mu_{x}=\sum_{i}i\sum_{j}p(i,j)$$

$$\mu_{y}=\sum_{j}j\sum_{i}p(i,j)$$

$$\sigma_{x}=\Big(\sum_{i}(i-\mu_{x})^{2}\sum_{j}p(i,j)\Big)^{1/2}$$

$$\sigma_{y}=\Big(\sum_{j}(j-\mu_{y})^{2}\sum_{i}p(i,j)\Big)^{1/2}$$
6. The method for constructing a damage image feature database according to claim 1, wherein the method comprises the following steps:
acquiring damage images of different structures in a civil aircraft engine by a nondestructive testing technology;
secondly, preprocessing the acquired engine damage image by noise reduction, enhancement and segmentation by using a digital image processing technology;
extracting digital features of the damaged image based on the color moment features and the gray level co-occurrence matrix texture features;
fourthly, constructing an engine damage image characteristic database according to the digital characteristics of the damage image;
identifying unknown damage images by using an engine damage image characteristic database, and displaying identification results; at the same time, the new damage characteristics are included in the database.
7. A damage image feature database construction system for implementing the damage image feature database construction method according to any one of claims 1 to 6, wherein the damage image feature database construction system comprises:
the structure damage image acquisition module acquires different structure damage images in the civil aircraft engine through a nondestructive testing technology;
the engine damage image processing module is used for carrying out preprocessing of noise reduction, enhancement and segmentation on the acquired engine damage image by using a digital image processing technology;
the damage image digital feature extraction module is used for extracting damage image digital features based on the color moment features and the gray level co-occurrence matrix texture features;
the engine damage image characteristic database construction module is used for constructing an engine damage image characteristic database according to the damage image digital characteristics;
the recognition result display module is used for recognizing the unknown damage image by using the engine damage image characteristic database and displaying the recognition result; at the same time, the new damage characteristics are included in the database.
8. A program storage medium for receiving user input, the stored computer program causing an electronic device to execute the damage image feature database construction method according to any one of claims 1 to 6.
9. A computer program product stored on a computer-readable medium, comprising a computer-readable program which, when executed on an electronic device, provides a user input interface for implementing the damage image feature database construction method according to any one of claims 1 to 6.
10. A civil aviation engine for analyzing and processing damage information of different internal structures by using the damage image feature database construction method of any one of claims 1 to 6.
CN202010400530.0A 2020-05-13 2020-05-13 Damage image feature database construction method and system and engine Pending CN111581409A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010400530.0A CN111581409A (en) 2020-05-13 2020-05-13 Damage image feature database construction method and system and engine


Publications (1)

Publication Number Publication Date
CN111581409A true CN111581409A (en) 2020-08-25

Family

ID=72115454


Country Status (1)

Country Link
CN (1) CN111581409A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114511511A * 2022-01-14 2022-05-17 Xi'an Jiaotong University Automatic determination method and system for blade surface damage degree
CN114511511B * 2022-01-14 2024-02-02 Xi'an Jiaotong University An automatic method and system for determining the degree of blade surface damage
CN119667105A * 2025-02-20 2025-03-21 Luoyang Institute of Science and Technology A synchronous detection system for intelligent identification of wire rope surface damage and diameter measurement

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040190794A1 (en) * 1998-03-20 2004-09-30 Ken Belanger Method and apparatus for image identification and comparison
CN103984951A (en) * 2014-04-25 2014-08-13 西南科技大学 Automatic defect recognition method and system for magnetic particle testing
CN106384103A (en) * 2016-09-30 2017-02-08 王玲 Vehicle face recognition method and device
CN109034269A (en) * 2018-08-22 2018-12-18 华北水利水电大学 A kind of bollworm female male imago method of discrimination based on computer vision technique
CN110929731A (en) * 2019-11-22 2020-03-27 深圳信息职业技术学院 A medical image processing method and device based on Pathfinder intelligent search algorithm



Similar Documents

Publication Publication Date Title
Wang et al. A real-time bridge crack detection method based on an improved inception-resnet-v2 structure
Wu et al. Surface crack detection based on image stitching and transfer learning with pretrained convolutional neural network
Bo et al. Particle pollution estimation from images using convolutional neural network and weather features
Zhao et al. Real‐time fabric defect detection based on multi‐scale convolutional neural network
CN112581463A (en) Image defect detection method and device, electronic equipment, storage medium and product
CN113409314A (en) Unmanned aerial vehicle visual detection and evaluation method and system for corrosion of high-altitude steel structure
CN115376003A (en) Pavement Crack Segmentation Method Based on U-Net Network and CBAM Attention Mechanism
Gan et al. Bridge bottom crack detection and modeling based on faster R‐CNN and BIM
CN118961708A (en) A method and system for evaluating mechanical properties of ancient building wooden structures
CN111581409A (en) Damage image feature database construction method and system and engine
CN117764980A (en) Automatic identification and measurement method for defects of composite material based on infrared multi-feature fusion
Shao et al. Aircraft skin damage detection and assessment from UAV images using GLCM and cloud model
CN119649031B (en) A method, system, device and storage medium for detecting defects in a transmission line
Fan et al. Application of YOLOv5 neural network based on improved attention mechanism in recognition of Thangka image defects
CN114353666A (en) An Analytical Method for Aircraft Rolling Behavior State of Airport Runway
CN101593274A (en) Texture-based Feature Extraction Method for Transmission Line Equipment
CN117764906A (en) A method for aircraft skin surface defect detection based on dual-structure attention network
CN118552509A (en) Insulator defect identification method and system based on improved YOLOv model
CN117875549B (en) Building heritage protection evaluation system and method based on image recognition
CN111291712A (en) Forest fire identification method and device based on interpolation CN and capsule network
CN119251168A (en) A metal surface defect detection method
Jing et al. Complex crack segmentation and quantitative evaluation of engineering materials based on deep learning methods
CN118864404A (en) Steel structure bridge corrosion and crack disease measurement method and system
Chen et al. Continuous pavement crack detection using ECA-enhanced instance segmentation of video images
CN115660262B (en) Engineering intelligent quality inspection method, system and medium based on database application

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200825)