
CN119282817A - A method and system for detecting the state of an intersection hole boring tool - Google Patents

A method and system for detecting the state of an intersection hole boring tool

Info

Publication number
CN119282817A
CN119282817A
Authority
CN
China
Prior art keywords
cutter
image
tool
state
detecting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202411294167.3A
Other languages
Chinese (zh)
Inventor
程骋
杨展鹏
华丰
袁烨
胡笑楠
周子恒
金骏阳
王智慧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology filed Critical Huazhong University of Science and Technology
Priority to CN202411294167.3A priority Critical patent/CN119282817A/en
Publication of CN119282817A publication Critical patent/CN119282817A/en
Pending legal-status Critical Current

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B23MACHINE TOOLS; METAL-WORKING NOT OTHERWISE PROVIDED FOR
    • B23QDETAILS, COMPONENTS, OR ACCESSORIES FOR MACHINE TOOLS, e.g. ARRANGEMENTS FOR COPYING OR CONTROLLING; MACHINE TOOLS IN GENERAL CHARACTERISED BY THE CONSTRUCTION OF PARTICULAR DETAILS OR COMPONENTS; COMBINATIONS OR ASSOCIATIONS OF METAL-WORKING MACHINES, NOT DIRECTED TO A PARTICULAR RESULT
    • B23Q17/00Arrangements for observing, indicating or measuring on machine tools
    • B23Q17/09Arrangements for observing, indicating or measuring on machine tools for indicating or measuring cutting pressure or for determining cutting-tool condition, e.g. cutting ability, load on tool
    • B23Q17/0952Arrangements for observing, indicating or measuring on machine tools for indicating or measuring cutting pressure or for determining cutting-tool condition, e.g. cutting ability, load on tool during machining
    • B23Q17/0957Detection of tool breakage
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B23MACHINE TOOLS; METAL-WORKING NOT OTHERWISE PROVIDED FOR
    • B23QDETAILS, COMPONENTS, OR ACCESSORIES FOR MACHINE TOOLS, e.g. ARRANGEMENTS FOR COPYING OR CONTROLLING; MACHINE TOOLS IN GENERAL CHARACTERISED BY THE CONSTRUCTION OF PARTICULAR DETAILS OR COMPONENTS; COMBINATIONS OR ASSOCIATIONS OF METAL-WORKING MACHINES, NOT DIRECTED TO A PARTICULAR RESULT
    • B23Q17/00Arrangements for observing, indicating or measuring on machine tools
    • B23Q17/24Arrangements for observing, indicating or measuring on machine tools using optics or electromagnetic waves
    • B23Q17/2452Arrangements for observing, indicating or measuring on machine tools using optics or electromagnetic waves for measuring features or for detecting a condition of machine parts, tools or workpieces
    • B23Q17/2457Arrangements for observing, indicating or measuring on machine tools using optics or electromagnetic waves for measuring features or for detecting a condition of machine parts, tools or workpieces of tools
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B23MACHINE TOOLS; METAL-WORKING NOT OTHERWISE PROVIDED FOR
    • B23QDETAILS, COMPONENTS, OR ACCESSORIES FOR MACHINE TOOLS, e.g. ARRANGEMENTS FOR COPYING OR CONTROLLING; MACHINE TOOLS IN GENERAL CHARACTERISED BY THE CONSTRUCTION OF PARTICULAR DETAILS OR COMPONENTS; COMBINATIONS OR ASSOCIATIONS OF METAL-WORKING MACHINES, NOT DIRECTED TO A PARTICULAR RESULT
    • B23Q17/00Arrangements for observing, indicating or measuring on machine tools
    • B23Q17/24Arrangements for observing, indicating or measuring on machine tools using optics or electromagnetic waves
    • B23Q17/248Arrangements for observing, indicating or measuring on machine tools using optics or electromagnetic waves using special electromagnetic means or methods
    • B23Q17/249Arrangements for observing, indicating or measuring on machine tools using optics or electromagnetic waves using special electromagnetic means or methods using image analysis, e.g. for radar, infrared or array camera images
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • G06N3/0442Recurrent networks, e.g. Hopfield networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30164Workpiece; Machine component

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Mechanical Engineering (AREA)
  • Optics & Photonics (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Multimedia (AREA)
  • Geometry (AREA)
  • Quality & Reliability (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Electromagnetism (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Processing (AREA)

Abstract

The present invention discloses a method and system for detecting the state of an intersection hole boring tool, belonging to the field of industrial big data. Detection is performed on multimodal data: from the machining time-series data collected during a cutting pass and the tool rake face image captured after the pass is completed, the wear bandwidth of the tool flank face and the wear ratio of the tool rake face image are obtained, respectively, as detection indicators of the state of the intersection hole boring tool. The wear bandwidth characterizes machining accuracy: a large wear bandwidth indicates heavy flank face wear, which degrades machining accuracy. The wear ratio characterizes surface roughness: a large wear ratio indicates heavy rake face wear, which degrades the surface roughness of the workpiece. The invention considers machining accuracy and surface roughness requirements in all aspects and achieves accurate detection of the state of the intersection hole boring tool.

Description

Method and system for detecting state of boring tool for intersection point hole
Technical Field
The invention belongs to the technical field of industrial big data, and particularly relates to a method and a system for detecting the state of an intersection point hole boring cutter.
Background
The manufacturing industry is a pillar of the national economy and an important expression of comprehensive national strength. The aviation industry, as the leading edge of manufacturing, plays a leading and guiding role in national economic construction and national defense security, occupies an extremely important strategic position, and comprehensively reflects the strength and technological modernization level of a country's manufacturing industry. As one of the main products of the aviation industry, the level of aircraft manufacturing has become an important index for measuring the level of the aviation industry, owing to its high quality requirements and high technical difficulty. The aircraft manufacturing process can be roughly divided into four stages: blank manufacturing, part machining, component assembly, and testing. As the key link that directly determines the final quality and service performance of an aircraft product, assembly has become the core bottleneck of efficiency and quality in aircraft production because of its complex coordination relationships, the large number of parts and connections, and the huge workload involved.
Wing-body intersection holes are located on both sides of the fuselage and connect the fuselage with the wings. They are key, high-value-added features for aircraft assembly and connection, and directly affect the accuracy of the aircraft's outer shape, the balance of the wings, and the maneuverability of the aircraft. Existing machining methods for wing-body intersection holes generally adopt a conservative trial-cutting strategy of preheating trial cuts, tool adjustment and machining, hole diameter measurement, and hole diameter updating; more than 200 boring tools are used in the machining process, and this accounts for more than 30% of the machining time. Real-time detection of the state of the intersection hole boring tool during this process is therefore of great significance.
Because the machining of wing-body intersection holes generally requires precision boring, carried out with the part in a weakly rigid clamping state and without cutting fluid, extremely high demands are placed on the machining equipment and process capability of the production unit, in particular on machining accuracy and surface roughness: insufficient machining accuracy prevents the wing from mating with the fuselage, while excessive surface roughness shortens the service life of the wing-fuselage joint. However, existing tool state detection methods only analyze time-series data with a data-driven model from the viewpoint of machining accuracy; they cannot fully detect the wear state of the tool, cannot address the requirements of machining accuracy and surface roughness in all aspects, and therefore cannot accurately detect the state of the intersection hole boring tool.
Disclosure of Invention
In view of the above defects or improvement needs of the prior art, the present invention provides a method and system for detecting the state of an intersection hole boring tool, which are used to solve the technical problem that the prior art cannot accurately detect the state of the intersection hole boring tool.
To achieve the above object, in a first aspect, the present invention provides a method for detecting the state of an intersection hole boring tool, comprising: after each cutting pass on a workpiece is completed, performing the following detection operations based on the machining time-series data collected during the pass and the tool rake face image captured after the pass is completed:
inputting the machining time-series data into a flank wear bandwidth detection model to obtain the wear bandwidth of the tool flank face, wherein the machining time-series data comprise a tool cutting force signal, a tool vibration signal, and a machine tool motor current signal;
detecting the wear region in the tool rake face image, and calculating the ratio of the area of the wear region to the area of the tool rake face image as the wear ratio of the tool rake face;
when the obtained wear bandwidth is larger than a preset bandwidth threshold or the obtained wear ratio is larger than a preset ratio, determining that the intersection hole boring tool is currently in a damaged state; otherwise, determining that it is currently in an undamaged state;
wherein the position and viewing angle of the image acquisition device used to acquire the tool rake face image are the same relative to the tool each time.
Further preferably, the flank wear bandwidth detection model comprises an MCNN network, an attention mechanism module, a GRU network, and a perceptron connected in sequence;
the MCNN network is used to extract features of different scales from the machining time-series data;
the attention mechanism module is used to compute weights for the features of different scales based on an attention mechanism, and to perform a weighted summation of the features of different scales to obtain a fused feature;
the GRU network is used to extract temporal features from the fused feature;
the perceptron is used to map the temporal features to the corresponding tool flank wear bandwidth.
Further preferably, the GRU network is a BiGRU network.
Further preferably, the flank wear bandwidth detection model is trained by:
acquiring a first training set, wherein the first training set comprises machining time-series data of different tools during different cutting passes, and the corresponding labels are the actually measured wear bandwidths of the tool flank faces;
inputting each piece of machining time-series data in the first training set into the flank wear bandwidth detection model to obtain a detection result for the wear bandwidth of the corresponding tool flank face;
training the flank wear bandwidth detection model by minimizing the difference loss between the detection result of the tool flank wear bandwidth and the corresponding label.
It is further preferred that the image segmentation model is used to detect the wear area in the tool rake image.
Further preferably, the image segmentation model includes an image encoder, a hint encoder, and a mask decoder;
the image encoder is used for encoding the cutter front cutter face image into an image embedding vector;
The prompt encoder is used for encoding foreground priori information of the cutter front face image into a prompt embedded vector;
The mask decoder is used for carrying out mask processing on the image embedded vector and the prompt embedded vector to obtain a mask image of a wearing region in the cutter front face image, and further obtain the wearing region in the cutter front face image.
Further preferably, the image segmentation model is trained by:
The second training set comprises cutter front face images of different cutters after each cutter is processed, and corresponding labels are real mask images of abrasion areas in the cutter front face images;
Inputting each cutter front face image in the second training set into an image segmentation model to obtain a mask image of a corresponding abrasion area;
The image segmentation model is trained by minimizing the loss of difference between mask images and corresponding labels in the worn region.
In a second aspect, the invention provides a system for detecting the state of an intersection point hole boring tool, which comprises a memory and a processor, wherein the memory stores a computer program, and the processor executes the method for detecting the state of the intersection point hole boring tool provided by the first aspect of the invention when executing the computer program.
In a third aspect, the present invention provides a system for monitoring the condition of an intersection point hole boring tool, comprising:
the data acquisition module is used for collecting the machining time-series data of each cutting pass on the workpiece and the tool rake face image captured after each pass is completed, wherein the machining time-series data comprise a tool cutting force signal, a tool vibration signal, and a machine tool motor current signal;
the state detection module is used for executing the detection method of the state of the boring tool for the intersection point hole provided by the first aspect of the invention.
In a fourth aspect, the present invention also provides a computer readable storage medium, where the computer readable storage medium includes a stored computer program, where the computer program, when executed by a processor, controls a device where the storage medium is located to execute the method for detecting a state of an intersection hole boring tool provided in the first aspect of the present invention.
In general, through the above technical solutions conceived by the present invention, the following beneficial effects can be obtained:
1. The invention provides a method for detecting the state of an intersection hole boring tool that performs detection based on multimodal data. From the machining time-series data collected during a cutting pass and the tool rake face image captured after the pass is completed, the wear bandwidth of the tool flank face and the wear ratio of the tool rake face image are obtained, respectively, as detection indicators of the state of the intersection hole boring tool. The wear bandwidth characterizes machining accuracy: a large wear bandwidth indicates heavy flank face wear, which degrades machining accuracy. The wear ratio characterizes surface roughness: a large wear ratio indicates heavy rake face wear, which degrades the surface roughness of the workpiece. The invention considers the requirements of machining accuracy and surface roughness in all aspects and achieves accurate detection of the state of the intersection hole boring tool.
2. The flank wear bandwidth detection model comprises an MCNN network, an attention mechanism module, a GRU network, and a perceptron connected in sequence. After features of different scales are extracted from the machining time-series data by the MCNN network, the attention mechanism module assigns weights to the features of different scales and fuses them according to these weights; the GRU network then further extracts temporal features from the fused features. Deep feature extraction from the spatial scale to the temporal dimension is thereby realized, improving the detection accuracy of the tool flank wear bandwidth.
3. Furthermore, in the method for detecting the state of the intersection hole boring tool provided by the invention, a BiGRU network is adopted in the flank wear bandwidth detection model. Its bidirectional recurrent structure connects two hidden layers with opposite propagation directions to the same output layer, so that the hidden layers can process the input sequence both from front to back and from back to front, better capturing the context information in the sequence and further improving the detection accuracy of the flank wear bandwidth.
4. The image segmentation model comprises an image encoder, a prompt encoder, and a mask decoder. The image encoder encodes the tool rake face image into an image embedding vector, the prompt encoder encodes the foreground prior information of the tool rake face image into a prompt embedding vector, and the segmentation mask is predicted from the image embedding vector and the prompt embedding vector. The prior information and the original image information are thus fully utilized, and the tool rake face image can be segmented more flexibly and accurately.
5. The method for detecting the state of the intersection hole boring tool provided by the invention allows the flank wear bandwidth detection model to be deployed directly on local hardware without relying on cloud computing, which reduces the dependence on network transmission and meets the requirements of real-time monitoring and fault diagnosis during machining.
Drawings
FIG. 1 is a flowchart of a method for detecting the state of an intersection point hole boring tool according to an embodiment of the present invention;
Fig. 2 is a schematic structural diagram of a flank wear bandwidth detection model according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of verifying a prediction result of a flank wear bandwidth detection model according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a monitoring system for the state of an intersection point hole boring tool according to an embodiment of the present invention;
fig. 5 is a system architecture diagram of a hardware platform according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention. In addition, the technical features of the embodiments of the present invention described below may be combined with each other as long as they do not collide with each other.
To achieve the above object, in a first aspect, the present invention provides a method for detecting the state of an intersection hole boring tool, as shown in Fig. 1, comprising: after each cutting pass on a workpiece is completed, performing the following detection operations based on the machining time-series data collected during the pass and the tool rake face image captured after the pass is completed:
inputting the machining time-series data into a flank wear bandwidth detection model to obtain the wear bandwidth of the tool flank face, wherein the machining time-series data comprise a tool cutting force signal, a tool vibration signal, and a machine tool motor current signal;
detecting the wear region in the tool rake face image, and calculating the ratio of the area of the wear region to the area of the tool rake face image as the wear ratio of the tool rake face;
when the obtained wear bandwidth is larger than a preset bandwidth threshold or the obtained wear ratio is larger than a preset ratio, determining that the intersection hole boring tool is currently in a damaged state and needs to be replaced;
wherein the position and viewing angle of the image acquisition device used to acquire the tool rake face image are the same relative to the tool each time.
In general, during the machining of wing-body intersection holes, the tool is considered to need replacement if the flank wear value is greater than 0.1 mm or the rake face wear ratio is greater than 0.037. In an alternative embodiment, the preset bandwidth threshold and the preset ratio therefore take the values 0.1 mm and 0.037, respectively.
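As an illustration of the decision rule just described, the following minimal Python sketch compares the two wear indicators against their thresholds; the function and argument names are hypothetical, and the default thresholds simply reuse the example values 0.1 mm and 0.037 given above.

def tool_state(wear_bandwidth_mm: float, wear_ratio: float,
               bandwidth_threshold: float = 0.1, ratio_threshold: float = 0.037) -> str:
    """Return 'damaged' if either wear indicator exceeds its threshold, else 'intact'."""
    if wear_bandwidth_mm > bandwidth_threshold or wear_ratio > ratio_threshold:
        return "damaged"   # the boring tool should be replaced
    return "intact"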
The flank wear bandwidth detection model may be a deep learning model, for example a CNN, LSTM, GRU, or Transformer-based model.
Preferably, in an alternative embodiment, the flank wear bandwidth detection model comprises an MCNN network, an attention mechanism module, a GRU network, and a perceptron connected in sequence;
the MCNN network is used to extract features of different scales from the machining time-series data;
the attention mechanism module is used to compute weights for the features of different scales based on an attention mechanism, and to perform a weighted summation of the features of different scales to obtain a fused feature;
the GRU network is used to extract temporal features from the fused feature;
the perceptron is used to map the temporal features to the corresponding tool flank wear bandwidth.
Specifically, the MCNN network comprises several parallel feature extraction branches. In the embodiment shown in Fig. 2, the MCNN comprises an input layer, a feature extraction layer, and a pooling layer connected in sequence; the feature extraction layer comprises three parallel branches; the input layer comprises a convolution layer, a BN layer, and a pooling layer connected in sequence; each parallel branch comprises a convolution layer, a BN layer, and an activation layer connected in sequence; and the convolution kernels of the three parallel branches are 3×3, 5×5, and 7×7, respectively. In an alternative embodiment, the convolution layer uses a two-dimensional convolution layer Conv2d, the pooling layer uses an average pooling layer AvgPool, and the activation layer uses a rectified linear unit ReLU. This is only one embodiment: the pooling layer may also be a max pooling layer or a global pooling layer, and the activation layer may also be a Leaky ReLU, Sigmoid, and the like, which are not limited here.
In this embodiment, convolution kernels of three sizes, 3, 5, and 7, are selected. Since the input data are long, a large convolution kernel and a pooling layer are applied in the input layer to learn global features. The number of convolution kernels in the feature extraction layer is greater than in the input layer: in this embodiment, the input layer has 16 convolution kernels, and each convolution layer in the feature extraction layer doubles this to 32. Finally, three average pooling operations are used for fast downsampling.
The convolution layers with larger kernels obtain wide receptive fields and global features, while the convolution layers with smaller kernels learn high-level features. A batch normalization (BN) layer is added after each convolution layer to reduce internal covariate shift.
In this embodiment, a fully connected layer is further placed between the MCNN network and the attention mechanism module, with two dropout layers to avoid overfitting.
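For readers who prefer code, the following PyTorch sketch mirrors the MCNN front end just described (input layer with convolution, BN and pooling; three parallel branches with 3×3, 5×5 and 7×7 kernels; average pooling; and a fully connected block with two dropout layers). The channel counts 16 and 32 follow the text, while the input shape, pooling sizes and fully connected width are assumptions made only for illustration.

import torch
import torch.nn as nn

class MCNNFrontEnd(nn.Module):
    """Sketch of the multi-scale CNN front end. The machining time-series is assumed to
    have been reshaped into a 2D map (channels x H x W) before entering the network."""

    def __init__(self, in_channels: int = 3, fc_dim: int = 128):
        super().__init__()
        # Input layer: large kernel + pooling to capture global features of long signals
        self.input_layer = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=7, padding=3),
            nn.BatchNorm2d(16),
            nn.AvgPool2d(2),
        )
        # Three parallel feature-extraction branches with different receptive fields
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv2d(16, 32, kernel_size=k, padding=k // 2),
                          nn.BatchNorm2d(32),
                          nn.ReLU())
            for k in (3, 5, 7)
        ])
        self.pool = nn.AvgPool2d(2)               # fast downsampling after each branch
        self.fc = nn.Sequential(                  # FC block with two dropout layers
            nn.Flatten(), nn.LazyLinear(fc_dim), nn.ReLU(),
            nn.Dropout(0.5), nn.Linear(fc_dim, fc_dim), nn.Dropout(0.5),
        )

    def forward(self, x: torch.Tensor):
        x = self.input_layer(x)
        feats = [self.pool(branch(x)) for branch in self.branches]  # multi-scale features
        return [self.fc(f) for f in feats]        # one feature vector per scale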
The receptive field of a convolution kernel is locally limited, and multiple layers usually have to be stacked in order to correlate different portions of the input. To address this, an attention mechanism implemented through queries (Query), keys (Key), and values (Value) is introduced. Let {a_1, a_2, …, a_n} be the vectorized input vectors, and let q, k, v be the Query, Key, and Value vectors, respectively. The Query is an autonomous prompt, a feature vector of subjective intent; the Key is a non-autonomous prompt, a vector of the salient feature information of an object; and the Value is the feature vector representing the object itself. From this point of view, the attention mechanism distributes attention weights over the Values through the attention pooling of Queries and Keys, generating the final output. Taking the attention between a_1 and the other vectors as an example, the q vector of a_1 is multiplied with the k vectors of the other input vectors, the results are passed through a softmax layer to obtain matching scores between a_1 and the other input vectors, and these scores are used to weight the v vectors of the input vectors to obtain the output vector. This process can be formulated as
Attention(Q, K, V) = softmax(QK^T / √d_k) · V,
where Q, K, V are the matrix representations of the q, k, v vectors of all input vectors, and d_k is a scaling factor whose role is to keep the dot-product results in a region where the gradient of the softmax function is larger, thereby accelerating the training of the model.
On this basis, each input vector is position-encoded so that the network can learn the positional relationships between the inputs. There are various implementations of position encoding; this embodiment uses the classic sin/cos encoding from the original Transformer paper:
PE(pos, 2i) = sin(pos / 10000^(2i / d_model)),
PE(pos, 2i + 1) = cos(pos / 10000^(2i / d_model)),
where PE denotes the position encoding (a vector, i.e., an embedding), pos denotes the position of the input vector (token) in the whole input sequence, i indexes the element pairs in the vector, and d_model denotes the dimension of the position encoding vector.
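A compact PyTorch sketch of the scaled dot-product attention and the sin/cos position encoding described above is given below; the tensor shapes are assumptions (d_model is taken to be even), and the functions are illustrative rather than the patent's exact implementation.

import math
import torch

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / math.sqrt(d_k)   # match each query against every key
    return torch.softmax(scores, dim=-1) @ V             # attention-weighted sum of values

def sinusoidal_position_encoding(seq_len: int, d_model: int) -> torch.Tensor:
    """Classic sin/cos position encoding; d_model is assumed to be even."""
    pos = torch.arange(seq_len).unsqueeze(1).float()      # (seq_len, 1)
    i = torch.arange(0, d_model, 2).float()               # even dimension indices
    angle = pos / torch.pow(10000.0, i / d_model)         # (seq_len, d_model / 2)
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(angle)
    pe[:, 1::2] = torch.cos(angle)
    return pe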
The GRU network obtains tool state degradation information from the data by selectively retaining and using the state information h_{t-1} of the previous time step and the input information x_t of the current time step t. In an ordinary recurrent network, information is propagated only from front to back; in the boring process, however, the tool state at the current time is influenced both by historical data and by future operations, so it is closely related to information both before and after it. Therefore, in an alternative embodiment, a BiGRU network is preferably adopted. The BiGRU uses a bidirectional recurrent structure that connects two hidden layers with opposite propagation directions to the same output layer, so that the hidden layers can process the input sequence both from front to back and from back to front, better capturing the context information in the sequence.
The update of the hidden state is
h_t = (1 − z_t) ⊙ h_{t−1} + z_t ⊙ h̃_t,
where z_t is the update gate and h̃_t is the candidate hidden state at the current time step. The closer z_t is to 1, the smaller the influence of the previous state h_{t−1} on the current state h_t; the closer z_t is to 0, the less of the candidate hidden state h̃_t is retained and the more of the previous state is remembered.
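The following sketch shows one way the BiGRU and the perceptron head described above could be assembled in PyTorch; the hidden sizes and the use of the last time step for regression are assumptions made only for illustration.

import torch
import torch.nn as nn

class BiGRURegressor(nn.Module):
    """Bidirectional GRU over the fused features, followed by a perceptron head that maps
    the temporal features to a single flank wear bandwidth value."""

    def __init__(self, feature_dim: int = 128, hidden_dim: int = 64):
        super().__init__()
        self.bigru = nn.GRU(feature_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.head = nn.Sequential(nn.Linear(2 * hidden_dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, fused_seq: torch.Tensor) -> torch.Tensor:
        # fused_seq: (batch, time, feature_dim) sequence of attention-fused features
        out, _ = self.bigru(fused_seq)       # (batch, time, 2 * hidden_dim)
        return self.head(out[:, -1, :])      # wear bandwidth predicted from the last step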
In an alternative embodiment, the flank wear bandwidth detection model is trained by:
acquiring a first training set, wherein the first training set comprises machining time-series data of different tools during different cutting passes, and the corresponding labels are the actually measured wear bandwidths of the tool flank faces;
inputting each piece of machining time-series data in the first training set into the flank wear bandwidth detection model to obtain a detection result for the wear bandwidth of the corresponding tool flank face;
training the flank wear bandwidth detection model by minimizing the difference loss between the detection result of the tool flank wear bandwidth and the corresponding label.
In an alternative embodiment, data acquisition is performed in a horizontal boring scenario; the tool is a triangular boring tool and the workpiece is a titanium alloy test piece. Vibration, cutting force, and current signals are collected by sensors, and high-quality discrete boring data are then obtained through an AD conversion module. The tool rake face image is acquired by an industrial camera. All signals are acquired at a frequency of 10 kHz for 50 s. The data set is randomly divided into training, validation, and test sets, accounting for 80%, 10%, and 10% of the data, respectively. The model is initialized with random weights using Xavier initialization and optimized with the Adam optimizer.
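A minimal training loop consistent with this setup (Xavier initialization, the Adam optimizer, and a mean-squared-error difference loss between predicted and measured flank wear bandwidths) might look as follows; the data loader contents, epoch count and learning rate are assumptions, not values stated here.

import torch
import torch.nn as nn

def train_wear_model(model, train_loader, val_loader, epochs: int = 100, lr: float = 1e-3):
    """Sketch of the training procedure for the flank wear bandwidth detection model."""
    for p in model.parameters():                       # Xavier initialization of weight matrices
        if p.dim() > 1:
            nn.init.xavier_uniform_(p)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.MSELoss()                           # difference loss vs. measured wear bandwidth
    for epoch in range(epochs):
        model.train()
        for signals, wear_label in train_loader:       # machining time-series + measured flank wear
            optimizer.zero_grad()
            loss = criterion(model(signals).squeeze(-1), wear_label)
            loss.backward()
            optimizer.step()
        model.eval()
        with torch.no_grad():
            val_loss = sum(criterion(model(x).squeeze(-1), y).item()
                           for x, y in val_loader) / max(len(val_loader), 1)
        print(f"epoch {epoch}: validation MSE = {val_loss:.4f}")
    return model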
After training, the performance of the model is evaluated using the MAE, MSE, and R² metrics; the flank wear bandwidth detection model reaches 6.03, 9.01, and 9.04 on the MAE, MSE, and R² metrics, respectively, as shown in Fig. 3.
There are many ways to detect the wear region in the tool rake face image; for example, a threshold segmentation method or the GrabCut algorithm may be used.
In an alternative embodiment, an image segmentation model is used to detect the wear region in the tool rake face image. The image segmentation model may be, for example, a SAM model or a threshold-segmentation-based model.
In addition, an image segmentation algorithm such as the GrabCut algorithm can be used to detect the wear region in the tool rake face image.
In an alternative embodiment, the image segmentation model comprises an image encoder, a prompt encoder, and a mask decoder;
the image encoder is used to encode the tool rake face image into an image embedding vector;
the prompt encoder is used to encode the foreground prior information of the tool rake face image into a prompt embedding vector, where the foreground prior information may be a mask matrix, a bounding box, identification points marking the foreground region (or the background region), or text describing the foreground region (or background region);
the mask decoder is used to decode the image embedding vector and the prompt embedding vector into a mask image of the wear region in the tool rake face image, from which the wear region in the tool rake face image is obtained.
The wear ratio of the tool rake face can then be calculated as the ratio of the number of non-zero pixels in the mask image of the wear region to the total number of pixels in the mask image.
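In code, this ratio is a one-line computation over the predicted mask; the NumPy sketch below assumes the mask is a 2D array in which non-zero pixels mark the wear region.

import numpy as np

def rake_wear_ratio(mask: np.ndarray) -> float:
    """Wear ratio = non-zero pixels in the wear mask / total pixels in the mask image."""
    return float(np.count_nonzero(mask)) / mask.size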
Preferably, the image encoder is a pre-trained vision Transformer based on visual attention that encodes the tool rake face image into the image embedding vector;
the prompt encoder encodes foreground/background points, masks, bounding boxes, or text into embedding vectors in real time, producing the prompt embeddings (positions, text, and the like) that are sent, together with the image embedding, to the mask decoder;
the lightweight mask decoder predicts the segmentation mask from the embeddings produced by the image encoder and the prompt encoder: it maps the image embedding, the prompt embeddings, and an output token to a mask and outputs it. In an alternative embodiment, the mask decoder uses prompt self-attention and cross-attention in both directions (prompt-to-image and image-to-prompt). This layout makes the model efficient and flexible and allows it to learn and improve as more data become available.
It should be noted that the image segmentation model may directly adopt an existing pre-trained image segmentation model, may be used after further fine-tuning of a pre-trained model, or may be trained from scratch. Existing pre-trained image segmentation models have been trained on millions of images and on the order of a billion masks, with data sets carefully curated to cover a wide range of domains, objects, and scenes, so such a model generalizes well to different tasks and can return an effective segmentation mask for any prompt. In general, the prompt, i.e., the specification of the segmentation task, may be foreground or background points, text, and similar information; under the working conditions of the present invention it indicates the content to be segmented in the image, namely the wear region.
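If an existing pre-trained Segment Anything Model (SAM) is adopted as the image segmentation model, a prompt-based segmentation call could look like the sketch below; the checkpoint path, image file name and prompt point coordinates are placeholders, and using SAM here is only one possible embodiment.

import cv2
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Load a pre-trained SAM (checkpoint path and model size are placeholders)
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

# Tool rake face image captured after a cutting pass (file name is a placeholder)
image = cv2.cvtColor(cv2.imread("rake_face.png"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# Foreground prior: one point known to lie inside the wear region (label 1 = foreground)
masks, scores, _ = predictor.predict(point_coords=np.array([[320, 240]]),
                                     point_labels=np.array([1]),
                                     multimask_output=False)
wear_mask = masks[0]                              # boolean H x W mask of the wear region
wear_ratio = wear_mask.sum() / wear_mask.size     # rake face wear ratio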
In an alternative embodiment, the image segmentation model is trained by:
The second training set comprises cutter front face images of different cutters after each cutter is processed, and corresponding labels are real mask images of abrasion areas in the cutter front face images;
Inputting each cutter front face image in the second training set into an image segmentation model to obtain a mask image of a corresponding abrasion area;
The image segmentation model is trained by minimizing the loss of difference between mask images and corresponding labels in the worn region.
It should be noted that the foregoing training method may be used for both fine tuning and training.
The method for detecting the state of the boring cutter for the intersection point hole can be applied to boring scenes, can be deployed on notebook computers on production sites, and is used for monitoring the health state of the cutter in real time.
In a second aspect, the invention provides a system for detecting the state of an intersection point hole boring tool, which comprises a memory and a processor, wherein the memory stores a computer program, and the processor executes the method for detecting the state of the intersection point hole boring tool provided by the first aspect of the invention when executing the computer program.
The related technical solution is the same as the method for detecting the state of the boring tool for the intersection point hole provided in the first aspect of the present invention, and is not described herein.
In a third aspect, the present invention provides a monitoring system for the state of an intersection point hole boring tool, as shown in fig. 4, including:
The data acquisition module is used for acquiring processing time sequence data of each tool of the workpiece in the processing process and a tool front surface image after one tool is processed, wherein the processing time sequence data comprises a tool cutting force signal, a tool vibration signal and a machine tool motor current signal;
the state detection module is used for executing the detection method of the state of the boring tool for the intersection point hole provided by the first aspect of the invention.
In an alternative embodiment, the data acquisition module is used for acquiring multi-source data in the machining process and comprises a time sequence signal acquisition module and an image acquisition device, wherein the time sequence signal acquisition module is used for acquiring machining time sequence data in the machining process of each tool of a workpiece, and the image acquisition device is used for acquiring an image of the tool after machining of each tool is completed.
Specifically, as shown in fig. 5, the time sequence signal acquisition module comprises a sensor, a main control module and an analog-to-digital converter, wherein the sensor comprises a moment sensor, an acceleration sensor and a current sensor, the moment sensor is arranged on a workpiece and is used for acquiring cutting force signals, the acceleration sensor is arranged on a cutter handle and is used for acquiring vibration signals, and the current sensor is arranged on a machine tool motor and is used for acquiring current signals. The master control module is used for sending a start/end signal to each sensor to ensure the synchronism of the acquired data, and the analog-to-digital converter is used for converting the analog signal returned by the sensor into a digital signal, so that the computer can conveniently read and store the digital signal. The image acquisition device comprises a camera, such as an industrial area array camera.
The related technical solution is the same as the method for detecting the state of the boring tool for the intersection point hole provided in the first aspect of the present invention, and is not described herein.
In a fourth aspect, the present invention also provides a computer readable storage medium, where the computer readable storage medium includes a stored computer program, where the computer program, when executed by a processor, controls a device where the storage medium is located to execute the method for detecting a state of an intersection hole boring tool provided in the first aspect of the present invention.
The related technical solution is the same as the method for detecting the state of the boring tool for the intersection point hole provided in the first aspect of the present invention, and is not described herein.
It will be readily appreciated by those skilled in the art that the foregoing description is merely a preferred embodiment of the invention and is not intended to limit the invention, but any modifications, equivalents, improvements or alternatives falling within the spirit and principles of the invention are intended to be included within the scope of the invention.

Claims (10)

1. The method for detecting the state of the boring cutter for the intersection point hole is characterized by comprising the following detection operations based on collected processing time sequence data in the processing process of one cutter and a cutter front surface image after the processing of one cutter after each time of the processing of one cutter of a workpiece:
The processing time sequence data is input into a rear cutter face abrasion bandwidth detection model to obtain abrasion bandwidth of a rear cutter face of a cutter, wherein the processing time sequence data comprises a cutter cutting force signal, a cutter vibration signal and a machine tool motor current signal;
Detecting a wear area in the cutter front cutter face image, and calculating the area ratio of the wear area to the cutter front cutter face image to be used as the wear ratio of the cutter front cutter face;
When the abrasion bandwidth is larger than a preset bandwidth threshold value or the abrasion ratio is larger than a preset proportion, judging that the boring cutter of the intersection point hole is in a damaged state currently, otherwise, judging that the boring cutter of the intersection point hole is in an unbroken state currently;
The position and the visual angle of an image acquisition device adopted for acquiring the front cutter face image of the cutter are the same relative to the cutter.
2. The method for detecting the state of the boring tool for the intersection point hole according to claim 1, wherein the rear tool face abrasion bandwidth detection model comprises a MCNN network, an attention mechanism module, a GRU network and a perceptron which are connected in sequence;
the MCNN network is used for extracting the characteristics of different scales of the processing time sequence data;
The attention mechanism module is used for calculating the weights of the features of different scales of the processing time sequence data based on an attention mechanism, and carrying out weighted summation on the features of different scales to obtain a fusion feature;
The GRU network is used for extracting time sequence characteristics of the fusion characteristics;
The perceptron is used for mapping the time sequence characteristics into corresponding cutter rear cutter surface abrasion bandwidths.
3. The method for detecting the state of an intersection point hole boring tool according to claim 2, wherein the GRU network is a BiGRU network.
4. A method for detecting the state of an intersection point hole boring tool according to any one of claims 1 to 3, wherein the flank wear bandwidth detection model is trained by:
acquiring a first training set, wherein the first training set comprises processing time sequence data of different cutters in different cutter processing processes, and the corresponding label is the actually measured abrasion bandwidth of the rear cutter surface of the cutter;
Inputting each piece of processing time sequence data in the first training set into the rear cutter face abrasion bandwidth detection model to obtain a detection result of abrasion bandwidth of a corresponding cutter rear cutter face;
And training the tool flank wear bandwidth detection model by minimizing the difference loss between the detection result of the wear bandwidth of the tool flank and the corresponding label.
5. The method for detecting the state of an intersection hole boring tool according to claim 1, wherein the worn area in the tool rake face image is detected using an image segmentation model.
6. The method for detecting the state of an intersection hole boring tool according to claim 5, wherein the image segmentation model comprises an image encoder, a hint encoder and a mask decoder;
The image encoder is used for encoding the cutter front face image into an image embedding vector;
the prompt encoder is used for encoding foreground priori information of the cutter front face image into a prompt embedded vector;
The mask decoder is used for carrying out mask processing on the image embedded vector and the prompt embedded vector to obtain a mask image of a wearing region in the cutter front face image, and further obtain the wearing region in the cutter front face image.
7. The method for detecting the state of an intersection point hole boring tool according to claim 5 or 6, wherein the image segmentation model is trained by:
The method comprises the steps of obtaining a second training set, wherein the second training set comprises cutter front face images of different cutters after each cutter is processed, and corresponding labels are real mask images of abrasion areas in the cutter front face images;
Inputting each cutter front face image in the second training set into the image segmentation model to obtain a mask image of a corresponding abrasion area;
The image segmentation model is trained by minimizing the loss of difference between the mask image and the corresponding label of the worn region.
8. The system for detecting the state of the boring tool of the intersection point hole is characterized by comprising a memory and a processor, wherein the memory stores a computer program, and the processor executes the method for detecting the state of the boring tool of the intersection point hole according to any one of claims 1-7 when executing the computer program.
9. A system for monitoring the condition of an intersection point hole boring tool, comprising:
The device comprises a data acquisition module, a data processing module and a processing module, wherein the data acquisition module is used for acquiring processing time sequence data of each tool processing process of a workpiece and a tool front surface image after one tool processing is finished, and the processing time sequence data comprises a tool cutting force signal, a tool vibration signal and a machine tool motor current signal;
a state detection module for executing the method for detecting the state of the boring tool for the intersection point hole according to any one of claims 1 to 7.
10. A computer readable storage medium, characterized in that the computer readable storage medium comprises a stored computer program, wherein the computer program, when run by a processor, controls a device in which the storage medium is located to perform the method of detecting the state of an intersection hole boring tool according to any one of claims 1-7.
CN202411294167.3A 2024-09-14 2024-09-14 A method and system for detecting the state of an intersection hole boring tool Pending CN119282817A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411294167.3A CN119282817A (en) 2024-09-14 2024-09-14 A method and system for detecting the state of an intersection hole boring tool

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202411294167.3A CN119282817A (en) 2024-09-14 2024-09-14 A method and system for detecting the state of an intersection hole boring tool

Publications (1)

Publication Number Publication Date
CN119282817A true CN119282817A (en) 2025-01-10

Family

ID=94163746

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411294167.3A Pending CN119282817A (en) 2024-09-14 2024-09-14 A method and system for detecting the state of an intersection hole boring tool

Country Status (1)

Country Link
CN (1) CN119282817A (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4031368A (en) * 1972-04-17 1977-06-21 Verkstadsteknik Ab Adaptive control of cutting machining operations
CN102564314A (en) * 2011-12-06 2012-07-11 上海交通大学 Orthogonal vision detection system for detecting wear condition of end mill
CN108931961A (en) * 2018-07-05 2018-12-04 西安交通大学 A kind of monoblock type slotting cutter worn-off damage detection method based on machine vision
CN109514349A (en) * 2018-11-12 2019-03-26 西安交通大学 Tool wear state monitoring method based on vibration signal and Stacking integrated model
CN111360582A (en) * 2020-01-17 2020-07-03 华中科技大学 A tool wear state identification method
CN111774935A (en) * 2020-07-27 2020-10-16 上海威研精密科技有限公司 Tooth-by-tooth abrasion detector for front and rear cutter faces of rotary cutter and detection method thereof
CN112428025A (en) * 2020-11-11 2021-03-02 哈尔滨理工大学 Method for constructing two-dimensional wear graph of cutter to optimize safe cutting area
CN112818477A (en) * 2021-01-04 2021-05-18 哈尔滨理工大学 Method and system for establishing cutter failure limit diagram of integral flat-bottom end mill
CN114161227A (en) * 2021-12-28 2022-03-11 福州大学 A Tool Wear Monitoring Method Based on Fusion of Simulation Features and Signal Features
CN115393378A (en) * 2022-10-27 2022-11-25 深圳市大数据研究院 Low-cost and efficient cell nucleus image segmentation method
CN115741235A (en) * 2022-11-28 2023-03-07 营口理工学院 Wear prediction and health management method based on five-axis machining center cutter
CN116204774A (en) * 2022-12-14 2023-06-02 中国航空工业集团公司金城南京机电液压工程研究中心 Cutter abrasion stability prediction method based on hierarchical element learning
CN117862954A (en) * 2024-01-24 2024-04-12 上海交通大学 An intelligent prediction method for tool wear of high-precision jig boring machines
CN118081482A (en) * 2024-03-11 2024-05-28 南京理工大学 MDRSNet-based cutting tool wear prediction method

Similar Documents

Publication Publication Date Title
Liu et al. A rail surface defect detection method based on pyramid feature and lightweight convolutional neural network
CN115018910B (en) Method, device and computer-readable storage medium for detecting objects in point cloud data
CN104931960A (en) Trend message and radar target state information whole-track data correlation method
CN104634265B (en) A kind of mineral floating froth bed soft measurement method of thickness based on multiplex images Fusion Features
CN110766060B (en) Time series similarity calculation method, system and medium based on deep learning
Zeller et al. Gaussian radar transformer for semantic segmentation in noisy radar data
Zhang et al. Fast covariance matching with fuzzy genetic algorithm
CN108960421A (en) The unmanned surface vehicle speed of a ship or plane online forecasting method based on BP neural network of improvement
CN118570194B (en) Method and system for detecting defects of inner surface of special-shaped bushing based on three-dimensional point cloud
CN116340796A (en) Time sequence data analysis method, device, equipment and storage medium
CN111275744A (en) Non-contact vibration frequency measurement method based on deep learning and image processing
CN113762151A (en) A fault data processing method, system and fault prediction method
Jiang et al. Learning to count arbitrary industrial manufacturing workpieces
CN104361600A (en) Motion recognition method and system
CN110750876A (en) A bearing data model training and use method
CN118172787B (en) A lightweight document layout analysis method
CN119048842A (en) Domain knowledge driven multi-mode power inspection method, device and system
CN117671569B (en) Rat robot movement fluency assessment method
CN119282817A (en) A method and system for detecting the state of an intersection hole boring tool
Ma et al. Depth-guided progressive network for object detection
CN117074713A (en) Train-mounted monocular vision speed measuring method
CN111461130B (en) High-precision image semantic segmentation algorithm model and segmentation method
Yuan et al. Research approach of hand gesture recognition based on improved YOLOV3 network and Bayes classifier
Yu et al. Sensor fault diagnosis based on passive observer and hybrid CNN-GRU model for intelligent ship
CN117798654B (en) Intelligent adjusting system for center of steam turbine shafting

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination