
CN110598033B - Intelligent self-checking vehicle method and device and computer readable storage medium - Google Patents

Intelligent self-checking vehicle method and device and computer readable storage medium

Info

Publication number
CN110598033B
Authority
CN
China
Prior art keywords
vehicle
image
checking
gradient
intelligent self
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910761970.6A
Other languages
Chinese (zh)
Other versions
CN110598033A
Inventor
黎聪明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Property and Casualty Insurance Company of China Ltd
Original Assignee
Ping An Property and Casualty Insurance Company of China Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Property and Casualty Insurance Company of China Ltd filed Critical Ping An Property and Casualty Insurance Company of China Ltd
Priority to CN201910761970.6A priority Critical patent/CN110598033B/en
Publication of CN110598033A publication Critical patent/CN110598033A/en
Application granted granted Critical
Publication of CN110598033B publication Critical patent/CN110598033B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/51Indexing; Data structures therefor; Storage structures
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Library & Information Science (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to artificial intelligence technology and discloses an intelligent self-checking vehicle method, which comprises the following steps: generating a label set according to a vehicle image set of a vehicle image library; receiving a vehicle checking image set, preprocessing and segmenting the vehicle checking image set to obtain a vehicle direction gradient characteristic atlas set of the vehicle checking image set, and taking the direction gradient characteristic atlas set as a training set; training a pre-constructed intelligent self-checking vehicle model by using the training set and the label set, outputting the vehicle characteristic image with the highest matching degree with the training set, and finishing the training of the intelligent self-checking vehicle model; and identifying the vehicle inspection image uploaded by the user according to the trained intelligent self-checking vehicle model and the vehicle image set of the vehicle image library, and outputting the self-checking result of the vehicle inspection image uploaded by the user. The invention also provides an intelligent self-checking vehicle device and a computer readable storage medium. The invention realizes accurate identification of vehicle inspection images.

Description

Intelligent self-checking vehicle method and device and computer readable storage medium
Technical Field
The invention relates to the technical field of big data, in particular to an intelligent self-checking vehicle method and device based on user behaviors and a computer readable storage medium.
Background
In recent years, due to the rapid development of science and technology and the improvement of people's living standards, the number of automobiles has continued to increase and traffic accidents on roads have also continued to increase, which affects how the insurance industry examines and verifies automobile insurance claims. However, the number of claims filed nationwide every day is large, the burden on auditors is heavy, and industry risk cannot be well controlled; at the same time, manual auditing leads to errors and delays in issuing policies, and labor costs are high.
Disclosure of Invention
The invention provides an intelligent self-checking vehicle method, an intelligent self-checking vehicle device, and a computer readable storage medium, the main purpose of which is to provide the user with an efficient vehicle self-checking method when the user performs vehicle verification.
In order to achieve the purpose, the invention provides an intelligent self-checking vehicle method, which comprises the following steps:
acquiring vehicle images historically stored by a user from a vehicle image library, and establishing labels for the vehicle images in the vehicle image library to generate a label set;
receiving the vehicle checking image set of the user, and carrying out preprocessing operation on the vehicle checking image set to obtain a target vehicle checking image set;
segmenting the vehicle of the target vehicle-checking image set by an edge detection method and a threshold segmentation method to obtain a local key point image of the vehicle;
establishing a direction gradient characteristic atlas of the vehicle according to the local key point image of the vehicle, and taking the direction gradient characteristic atlas as a training set;
training a pre-constructed intelligent self-checking vehicle model by using the training set to obtain a training value, inputting the training value and the label set into a loss function of the intelligent self-checking vehicle model to obtain a loss function value, and finishing the training of the intelligent self-checking vehicle model until the loss function value is smaller than a preset threshold value;
inputting the vehicle inspection image uploaded by the user into the trained intelligent self-inspection vehicle inspection model to obtain the characteristic image with the highest matching degree of the vehicle inspection image, traversing and comparing the characteristic image with the highest matching degree with the vehicle image library by using a logic comparison program, and outputting the self-inspection result of the vehicle inspection image uploaded by the user.
Optionally, the receiving the vehicle-checking image set of the user, and performing a preprocessing operation on the vehicle-checking image set to obtain a target vehicle-checking image set includes:
converting the car inspection images in the car inspection image set into gray level images through histogram equalization; contrast enhancement is carried out on the gray level image by using a contrast stretching method; and denoising the contrast-enhanced gray level image by Gaussian filtering to obtain the target vehicle inspection image set.
Optionally, the segmenting the vehicle in the target vehicle-testing image set by using an edge detection method and a threshold segmentation method to obtain the local key point image of the vehicle includes:
interface positioning is carried out on the vehicle of the target vehicle-checking image set by using a Canny edge detection method, the amplitude and the direction of the gradient of the vehicle are calculated through the finite difference of first-order partial derivatives, and the amplitude of a non-local maximum value point in the gradient of the vehicle is set to be zero, so that a refined vehicle edge image is obtained;
the method comprises the steps of segmenting the vehicle edge image by using a double threshold method, amplifying key points in the segmented vehicle edge image by using a region growing method, and connecting the segmented vehicle edge image through edge connection so as to obtain a local key point image of the vehicle.
Optionally, the establishing a directional gradient feature atlas of the vehicle according to the local keypoint image of the vehicle includes:
calculating the gradient amplitude G (x, y) and the gradient direction sigma (x, y) of each pixel point (x, y) in the local key point image of the vehicle to form a gradient matrix of the local key point image of the vehicle, and dividing the gradient matrix into small cell units;
calculating the gradient size and direction of each pixel point in the cell unit, counting a gradient direction histogram, and calculating the sum of pixel gradients of each direction channel in the gradient direction histogram;
accumulating the sum of the pixel gradients of each direction channel to form a vector, combining the cell units into a block, normalizing the vector in the block to obtain a characteristic vector, and connecting the characteristic vectors to obtain a direction gradient characteristic map set of the vehicle.
Optionally, the training of the pre-constructed intelligent self-checking vehicle model by using the training set to obtain a training value includes:
inputting the training set to an input layer of a convolutional neural network of the intelligent self-checking vehicle model, and extracting a feature vector by performing convolution operation on the training set through presetting a group of filters in the convolutional layer of the convolutional neural network;
and performing pooling operation on the feature vector by using a pooling layer of the convolutional neural network, inputting the pooled feature vector to a full-link layer, and performing normalization processing and calculation on the pooled feature vector through an activation function of the convolutional neural network to obtain the training value.
In addition, in order to achieve the above object, the present invention further provides an intelligent self-checking vehicle device, which includes a memory and a processor, wherein the memory stores an intelligent self-checking vehicle program that can run on the processor, and when the intelligent self-checking vehicle program is executed by the processor, the following steps are implemented:
acquiring vehicle images historically stored by a user from a vehicle image library, and establishing labels for the vehicle images in the vehicle image library to generate a label set;
receiving the vehicle-checking image set of the user, and carrying out preprocessing operation on the vehicle-checking image set to obtain a target vehicle-checking image set;
segmenting the vehicle of the target vehicle-checking image set by an edge detection method and a threshold segmentation method to obtain a local key point image of the vehicle;
establishing a direction gradient characteristic atlas of the vehicle according to the local key point image of the vehicle, and taking the direction gradient characteristic atlas as a training set;
training a pre-constructed intelligent self-checking vehicle model by using the training set to obtain a training value, inputting the training value and the label set into a loss function of the intelligent self-checking vehicle model to obtain a loss function value, and finishing the training of the intelligent self-checking vehicle model until the loss function value is smaller than a preset threshold value;
inputting the vehicle inspection image uploaded by the user into the trained intelligent self-inspection vehicle model to obtain the characteristic image with the highest matching degree of the vehicle inspection image, traversing and comparing the characteristic image with the highest matching degree with the vehicle image library by utilizing a logic comparison program, and outputting the self-inspection result of the vehicle inspection image uploaded by the user.
Optionally, the receiving the vehicle-checking image set of the user, and performing a preprocessing operation on the vehicle-checking image set to obtain a target vehicle-checking image set includes:
converting the car inspection images in the car inspection image set into gray level images through histogram equalization; contrast enhancement is carried out on the gray level image by using a contrast stretching method; and denoising the contrast-enhanced gray level image by using Gaussian filtering to obtain the target car inspection image set.
Optionally, the segmenting the vehicle in the target vehicle-testing image set by using an edge detection method and a threshold segmentation method to obtain the local key point image of the vehicle includes:
interface positioning is carried out on the vehicle of the target vehicle-checking image set by using a Canny edge detection method, the amplitude and the direction of the gradient of the vehicle are calculated through the finite difference of first-order partial derivatives, and the amplitude of a non-local maximum value point in the gradient of the vehicle is set to be zero, so that a refined vehicle edge image is obtained;
the method comprises the steps of segmenting the vehicle edge image by using a double threshold method, amplifying key points in the segmented vehicle edge image by using a region growing method, and connecting the segmented vehicle edge image through edge connection so as to obtain a local key point image of the vehicle.
Optionally, the establishing a directional gradient feature atlas set of the vehicle according to the local key point image of the vehicle includes:
calculating the gradient amplitude G (x, y) and the gradient direction sigma (x, y) of each pixel point (x, y) in the local key point image of the vehicle to form a gradient matrix of the local key point image of the vehicle, and dividing the gradient matrix into small cell units;
calculating the gradient size and direction of each pixel point in the cell unit, counting a gradient direction histogram, and calculating the sum of pixel gradients of each direction channel in the gradient direction histogram;
accumulating the sum of the pixel gradients of each direction channel to form a vector, combining the cell units into a block, normalizing the vectors in the block to obtain a characteristic vector, and connecting the characteristic vectors to obtain a direction gradient characteristic atlas set of the vehicle.
In addition, to achieve the above object, the present invention also provides a computer readable storage medium having an intelligent self-checking vehicle program stored thereon, which can be executed by one or more processors to implement the steps of the intelligent self-checking vehicle method as described above.
According to the intelligent self-checking vehicle inspection method, the device and the computer readable storage medium, when the user performs vehicle inspection through the vehicle inspection image, the training of the intelligent self-checking vehicle inspection model is completed by combining the acquired vehicle inspection image set and the vehicle image set of the vehicle image library, so that an efficient self-checking vehicle inspection method is provided for the user.
Drawings
Fig. 1 is a schematic flow chart of an intelligent self-checking vehicle inspection method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of an internal structure of the intelligent self-checking vehicle device according to an embodiment of the present invention;
fig. 3 is a schematic block diagram of an intelligent self-checking vehicle-checking program in the intelligent self-checking vehicle-checking device according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The invention provides an intelligent self-checking vehicle method. Fig. 1 is a schematic flow chart of an intelligent self-checking vehicle method according to an embodiment of the present invention. The method may be performed by an apparatus, which may be implemented by software and/or hardware.
In this embodiment, the intelligent self-checking vehicle method includes:
s1, a vehicle image set stored by a user in history is obtained from a vehicle image library, and a label is established for the vehicle image set of the vehicle image library to generate a label set.
In the preferred embodiment of the present invention, the user may be an enterprise related to vehicle insurance, such as Ping An of China. The vehicle image sources in the invention mainly comprise the following two modes: in the first mode, the images are obtained through offline vehicle-insurance promotion and acquisition by Ping An of China business personnel; in the second mode, the images are acquired through online signing via the Ping An of China vehicle-insurance app and/or the Ping An of China vehicle-insurance official website. Further, the invention establishes labels for the vehicle images in the vehicle image library, thereby generating a label set. For example, tags belonging to an insured vehicle and tags not belonging to an insured vehicle are established separately based on license plate authentication of the vehicle.
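As a rough illustration of this labeling step, the following Python sketch builds a label set from a hypothetical vehicle image library; the insured-plate registry, field names, and 0/1 label encoding are assumptions for illustration and are not specified in the patent.

```python
# Illustrative sketch only: builds a label set for a vehicle image library,
# assuming each image record carries the license plate it was captured with.
# The insured-plate registry below is a placeholder, not part of the patent text.
insured_plates = {"粤B12345", "沪A67890"}          # hypothetical registry of insured vehicles

vehicle_image_library = [
    {"image_id": "img_001.jpg", "plate": "粤B12345"},
    {"image_id": "img_002.jpg", "plate": "京C11111"},
]

# 1 = belongs to an insured vehicle, 0 = does not (labels established per S1).
label_set = {
    rec["image_id"]: int(rec["plate"] in insured_plates)
    for rec in vehicle_image_library
}
print(label_set)   # {'img_001.jpg': 1, 'img_002.jpg': 0}
```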
S2, receiving the vehicle checking image set of the user, and carrying out preprocessing operation on the vehicle checking image set to obtain a target vehicle checking image set.
In a preferred embodiment of the present invention, the vehicle-checking image set is mainly derived from vehicle images uploaded by a vehicle owner. The pre-processing operations include contrast enhancement, graying, and noise reduction. The method comprises the steps that a car inspection image in a car inspection image set is converted into a gray image through histogram equalization; contrast enhancement is carried out on the gray level image by using a contrast stretching method; and denoising the contrast-enhanced gray level image by using Gaussian filtering to obtain the target car inspection image set.
In detail, the specific implementation steps of contrast enhancement, graying processing and noise reduction are as follows:
a. graying treatment:
the histogram equalization is a process of having the same number of pixel points on each gray level, and aims to distribute and homogenize the image in the whole dynamic change range of the gray level, improve the brightness distribution state of the image and enhance the visual effect of the image. In the embodiment of the present invention, the histogram equalization processing includes: counting a histogram of the vehicle inspection image set with the improved contrast; calculating new gray scale of the vehicle inspection image after transformation by adopting cumulative distribution function according to the counted histogram; and replacing the old gray scale with the new gray scale, and simultaneously combining the gray scales which are equal or approximate to each other to obtain a balanced vehicle-checking image set. Preferably, the invention converts the car inspection image containing the color image into the gray image by using the proportional methods, wherein the proportional methods are that the three components of the current pixel are respectively R, G and B, and the converted pixel component value Y is obtained by using a color conversion formula, so that the gray image of the color image is obtained. The color conversion formula is:
Y=0.3R+0.59G+0.11B
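The following Python sketch illustrates this graying step under the assumption that OpenCV and NumPy are available: it applies the weighted color conversion formula above and then equalizes the gray-level histogram. The image size and library choice are illustrative, not part of the patent.

```python
# Sketch of the graying step, assuming an RGB image loaded as a NumPy array.
# Uses the weights Y = 0.3R + 0.59G + 0.11B from the text; cv2.equalizeHist
# then equalizes the gray-level histogram as described above.
import cv2
import numpy as np

def to_equalized_gray(rgb: np.ndarray) -> np.ndarray:
    r = rgb[..., 0].astype(np.float32)
    g = rgb[..., 1].astype(np.float32)
    b = rgb[..., 2].astype(np.float32)
    gray = 0.3 * r + 0.59 * g + 0.11 * b            # color conversion formula
    gray = np.clip(gray, 0, 255).astype(np.uint8)
    return cv2.equalizeHist(gray)                   # histogram equalization

# Example (a synthetic image stands in for a real vehicle-inspection photo):
img = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
eq_gray = to_equalized_gray(img)
```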
b. contrast enhancement:
the contrast refers to the contrast between the brightness maximum and minimum in the imaging system, wherein low contrast increases the difficulty of image processing. In the preferred embodiment of the present invention, a contrast stretching method is used to achieve the purpose of enhancing the contrast of an image by increasing the dynamic range of gray scale. Furthermore, the invention performs gray scale stretching on the specific area according to the piecewise linear transformation function in the contrast stretching method, thereby further improving the contrast of the output image. When contrast stretching is performed, gray value transformation is essentially achieved. The invention realizes gray value conversion by linear stretching, wherein the linear stretching refers to pixel level operation with linear relation between input and output gray values, and a gray conversion formula is as follows:
D_b = f(D_a) = a·D_a + b
where a is the linear slope and b is the intercept on the Y-axis. When a > 1, the contrast of the output image is enhanced compared with the original image; when a < 1, the contrast of the output image is weakened compared with the original image. Here D_a represents the gray value of the input image and D_b represents the gray value of the output image.
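A minimal sketch of this linear stretch, assuming NumPy; the slope a and intercept b shown are arbitrary example values rather than values prescribed by the patent.

```python
# Minimal sketch of the linear stretch D_b = a*D_a + b.
import numpy as np

def linear_stretch(gray: np.ndarray, a: float = 1.5, b: float = -30.0) -> np.ndarray:
    out = a * gray.astype(np.float32) + b
    return np.clip(out, 0, 255).astype(np.uint8)    # keep values in the 8-bit gray range
```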
c. Noise reduction:
the Gaussian filtering is linear smooth filtering, is suitable for eliminating Gaussian noise, and is widely applied to the noise reduction process of image processing. In the invention, each pixel in the image of the vehicle-checking image set is scanned by using a template (or called convolution and mask), and the weighted average gray value of the pixels in the neighborhood determined by the template is used for replacing the value of the central pixel point of the template, so that the N-dimensional space normal distribution equation is as follows:
G(r) = (1 / (2πσ²)^(N/2)) · e^(−r² / (2σ²))
where σ is the standard deviation of the normal distribution, the larger the σ value, the more blurred (smoothed) the image, and r is the blur radius, which refers to the distance of the template element to the center of the template.
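The sketch below illustrates this Gaussian noise-reduction step for the two-dimensional case (N = 2), building the template from the normal-distribution formula and applying it as a weighted average over each pixel's neighbourhood; the kernel size and σ are assumed values.

```python
# Sketch of the Gaussian noise-reduction step: a 2-D kernel built from the
# normal-distribution formula above, normalized, and applied as a weighted
# average of each pixel's neighbourhood. Kernel size and sigma are assumptions.
import cv2
import numpy as np

def gaussian_denoise(gray: np.ndarray, ksize: int = 5, sigma: float = 1.0) -> np.ndarray:
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    r2 = x ** 2 + y ** 2                                   # squared blur radius r
    kernel = np.exp(-r2 / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)
    kernel /= kernel.sum()                                 # normalize so weights sum to 1
    return cv2.filter2D(gray, -1, kernel)                  # replace each pixel by the weighted average
```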
And S3, segmenting the vehicles in the target vehicle inspection image set through an edge detection method and a threshold segmentation method to obtain local key point images of the vehicles.
The basic idea of edge detection is to consider edge points as those pixel points in an image where the gray level of pixels has a step change or a roof change, i.e. where the derivative of the gray level is large or extremely large. In the preferred embodiment of the invention, a Canny edge detection method is used for carrying out interface positioning on the vehicles in the target vehicle-checking image set, the amplitude and the direction of the gradient of the vehicles are calculated through the finite difference of first-order partial derivatives, the amplitude of a non-local maximum point in the gradient of the vehicles is set to be zero, a thinned vehicle edge image is obtained, a dual-threshold method is used for segmenting the vehicle edge image, a region growing method is used for amplifying key points in the segmented vehicle edge image, and the segmented vehicle edge image is connected through edge connection, so that a local key point image of the vehicle is obtained.
The basic idea of the region growing method is to group pixels or sub-regions into larger regions according to a predefined criterion, starting from a set of growing points (the growing point can be a single pixel or a small region), merging adjacent pixels or regions with similar properties to the growing point with the growing point to form a new growing point, and repeating the process until the growing point cannot grow. The four corners of the segmented vehicle edge image are taken as seed growing points, the pixel values of the background part of the segmented vehicle edge image are set to be zero, the image of the local key point part of the vehicle is segmented, and the image of the local key point part of the vehicle is amplified.
Furthermore, the preferred embodiment of the present invention presets two thresholds T_1 and T_2 (T_1 < T_2) and obtains two threshold edge images N_1[i,j] and N_2[i,j]. Preferably, the double threshold method connects the interrupted edges in N_2[i,j] into a complete contour: when a break point of an edge is reached, edges that can be connected are searched for within the neighborhood of that point in N_1[i,j], until all discontinuities in N_2[i,j] are connected.
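A possible OpenCV-based sketch of this segmentation step is shown below. cv2.Canny internally performs the gradient computation, non-maximum suppression and dual-threshold hysteresis described above, and flood filling from the four corners stands in for the region-growing step that zeroes out the background; the thresholds, tolerances and image names are assumptions for illustration.

```python
# Sketch of step S3, assuming OpenCV: Canny edge detection with dual
# thresholds (t1 < t2), then background suppression grown from the four
# corner seed points, keeping the connected edge contour.
import cv2
import numpy as np

def segment_keypoints(gray: np.ndarray, t1: int = 50, t2: int = 150) -> np.ndarray:
    edges = cv2.Canny(gray, t1, t2)                        # thinned vehicle edge image
    work = gray.copy()
    mask = np.zeros((gray.shape[0] + 2, gray.shape[1] + 2), np.uint8)
    h, w = gray.shape
    for seed in [(0, 0), (w - 1, 0), (0, h - 1), (w - 1, h - 1)]:
        # grow the background region from each corner and set it to zero
        cv2.floodFill(work, mask, seed, 0, loDiff=10, upDiff=10)
    work[edges > 0] = 255                                  # keep the connected edge contour
    return work
```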
And S4, establishing a direction gradient characteristic atlas of the vehicle according to the local key point image of the vehicle, and taking the direction gradient characteristic atlas as a training set.
The directional gradient feature is a feature descriptor used for object detection in computer vision and image processing. The directional gradient feature constitutes a feature by calculating and counting a gradient direction histogram of a local region of the image. In the preferred embodiment of the invention, a gradient matrix of the local key point image of the vehicle is formed by calculating the gradient magnitude G(x, y) and the gradient direction σ(x, y) of each pixel point (x, y) of the local key point image of the vehicle, wherein each element in the gradient matrix is a vector, the first component is the gradient magnitude, and the second and third components together represent the gradient direction; the gradient matrix is divided into small cell units, wherein each cell unit is 4×4 pixels, every 2×2 cell units form a block, and the angle range of 0–180 degrees is evenly divided into 9 direction channels; the gradient magnitude and direction of each pixel point in each cell unit are calculated and a gradient direction histogram is counted, wherein the gradient direction histogram comprises the 9 direction channels, and the sum of the pixel gradients of each direction channel in the gradient direction histogram is calculated to obtain a group of vectors formed by the accumulated pixel-gradient sums of the channels; the cell units are combined into blocks, and the vectors in each block are normalized to obtain feature vectors; all the normalized feature vectors are connected to form the directional gradient feature map set of the local key point image of the vehicle.
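The following sketch computes such a directional gradient (HOG) feature vector with scikit-image, using the 4×4-pixel cells, 2×2-cell blocks and 9 direction channels described above; the per-block normalization method and input size are assumptions.

```python
# Sketch of step S4 using scikit-image's HOG implementation with the cell,
# block and channel settings described above. The input is assumed to be the
# local key-point image produced in S3.
import numpy as np
from skimage.feature import hog

def hog_feature_vector(keypoint_img: np.ndarray) -> np.ndarray:
    return hog(
        keypoint_img,
        orientations=9,              # 9 direction channels over 0-180 degrees
        pixels_per_cell=(4, 4),      # 4x4-pixel cell units
        cells_per_block=(2, 2),      # 2x2 cells per block
        block_norm="L2-Hys",         # per-block normalization of the vectors
        feature_vector=True,         # concatenate all block vectors
    )

features = hog_feature_vector(np.random.rand(64, 64))   # synthetic stand-in input
```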
S5, training a pre-constructed intelligent self-checking vehicle model by using the training set to obtain a training value, inputting the training value and the label set into a loss function of the intelligent self-checking vehicle model to obtain a loss function value, and finishing the training of the intelligent self-checking vehicle model until the loss function value is smaller than a preset threshold value.
In a preferred embodiment of the present invention, the intelligent self-checking vehicle model comprises a convolutional neural network. The convolutional neural network is a feedforward neural network, the artificial neurons of the convolutional neural network can respond to surrounding units in a part of coverage range, the basic structure of the convolutional neural network comprises two layers, one layer is a characteristic extraction layer, the input of each neuron is connected with a local receiving domain of the previous layer, and the local characteristics are extracted. Once the local feature is extracted, the position relation between the local feature and other features is determined; the other is a feature mapping layer, each calculation layer of the network is composed of a plurality of feature mappings, each feature mapping is a plane, and the weights of all neurons on the plane are equal.
In a preferred embodiment of the present invention, the convolutional neural network comprises an input layer, a convolutional layer, a pooling layer, and an output layer. The input layer of the convolutional neural network model receives the training set, and a preset group of filters in the convolutional layer performs convolution operations on the training set to extract feature vectors, where the filters may be {filter_0, filter_1}, generating a set of features on similar channels and on dissimilar channels respectively; the pooling layer performs a pooling operation on the feature vectors, the pooled feature vectors are input into a fully connected layer, normalization processing and calculation are performed on the pooled feature vectors through an activation function to obtain a training value, and the calculation result is input into the output layer. The normalization process "compresses" a K-dimensional vector containing arbitrary real numbers into another K-dimensional real vector, so that each element lies in the range (0, 1) and the sum of all elements is 1.
In the embodiment of the present invention, the activation function is a softmax function, and a calculation formula is as follows:
O_j = e^(I_j) / Σ_(i=1)^(t) e^(I_i)
where O_j represents the vehicle feature-image output value of the j-th neuron of the convolutional neural network output layer, I_j represents the input value of the j-th neuron of the output layer, t represents the total number of neurons in the output layer, and e is the base of the natural logarithm.
In a preferred embodiment of the present invention, the threshold of the predetermined loss function value is 0.01, and the loss function is a least square method:
s = (1 / 2k) · Σ_(i=1)^(k) (y_i − y'_i)²
where s is the error value between the vehicle feature image with the highest matching degree for the input directional gradient feature map and the vehicle image in the vehicle image library, k is the number of directional gradient feature maps in the set, y_i is a vehicle image of the vehicle image library, and y'_i is the vehicle feature image with the highest matching degree for the input directional gradient feature map.
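A rough PyTorch sketch of the training procedure described in this step is given below: a convolution layer with a preset group of filters, a pooling layer, a fully connected layer, softmax normalization, a squared-error loss, and training until the loss falls below the 0.01 threshold. The layer sizes, optimizer and synthetic data are assumptions for illustration, not the patent's exact model.

```python
# Minimal training sketch for the intelligent self-checking vehicle model,
# under assumed layer sizes and synthetic stand-in data.
import torch
import torch.nn as nn

class SelfCheckNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # preset group of filters
            nn.ReLU(),
            nn.MaxPool2d(2),                             # pooling layer
        )
        self.fc = nn.Linear(8 * 32 * 32, num_classes)    # fully connected layer

    def forward(self, x):
        x = self.features(x)
        x = self.fc(x.flatten(1))
        return torch.softmax(x, dim=1)                   # softmax normalization

model = SelfCheckNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.randn(16, 1, 64, 64)                      # stand-in for the training set
labels = torch.eye(2)[torch.randint(0, 2, (16,))]        # one-hot stand-in for the label set

for _ in range(200):
    optimizer.zero_grad()
    loss = 0.5 * ((model(images) - labels) ** 2).mean()  # squared-error (least-squares) loss
    loss.backward()
    optimizer.step()
    if loss.item() < 0.01:                               # preset threshold from the text
        break
```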
S6, inputting the vehicle inspection image uploaded by the user into the trained intelligent self-inspection vehicle model to obtain the characteristic image with the highest matching degree of the vehicle inspection image, traversing and comparing the characteristic image with the highest matching degree with the vehicle image library by using a logic comparison program, and outputting the self-inspection result of the vehicle inspection image uploaded by the user.
In a preferred embodiment of the present invention, the logic comparison program is written using MapReduce in Hadoop. MapReduce is a programming model for parallel operation on large-scale data sets (larger than 1 TB). According to the invention, the feature image with the highest matching degree with the vehicle inspection image is obtained through the intelligent self-checking vehicle model, and whether the vehicle image corresponding to the feature image with the highest matching degree exists in the vehicle image library is identified by the logic comparison program, so that the self-checking result of the vehicle inspection image is output. The traversal comparison compares the vehicle inspection image with the vehicle images in the vehicle image library one by one. Preferably, the invention does not further process a vehicle inspection image that passes the self-check, while a vehicle inspection image that does not pass the self-check is submitted for manual review.
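The patent performs this comparison as a MapReduce job on Hadoop; the single-machine Python sketch below only illustrates the traversal idea of comparing the best-matching feature image against each entry of the vehicle image library one by one, with an assumed cosine-similarity measure and threshold.

```python
# Single-machine stand-in for the MapReduce traversal comparison: compare the
# best-matching feature image against every vehicle-library entry one by one.
# The similarity measure and threshold are assumptions, not from the patent.
import numpy as np

def traverse_compare(best_match_vec: np.ndarray,
                     library_vecs: dict,
                     threshold: float = 0.9) -> str:
    for image_id, vec in library_vecs.items():            # one-by-one traversal
        sim = float(np.dot(best_match_vec, vec) /
                    (np.linalg.norm(best_match_vec) * np.linalg.norm(vec) + 1e-9))
        if sim >= threshold:
            return f"self-check passed: matches {image_id}"
    return "self-check failed: submit for manual review"
```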
The invention further provides an intelligent self-checking vehicle-checking device. Fig. 2 is a schematic view of an internal structure of the intelligent self-checking vehicle inspection device according to an embodiment of the present invention.
In the present embodiment, the smart self-checking vehicle device 1 may be a PC (Personal Computer), a terminal device such as a smart phone, a tablet Computer, and a mobile Computer, or may be a server. The intelligent self-checking vehicle device 1 at least comprises a memory 11, a processor 12, a communication bus 13 and a network interface 14.
The memory 11 includes at least one type of readable storage medium, which includes a flash memory, a hard disk, a multimedia card, a card type memory (e.g., SD or DX memory, etc.), a magnetic memory, a magnetic disk, an optical disk, and the like. The memory 11 may in some embodiments be an internal storage unit of the intelligent self-verifying vehicle device 1, such as a hard disk of the intelligent self-verifying vehicle device 1. The memory 11 may also be an external storage device of the Smart self-verifying vehicle device 1 in other embodiments, such as a plug-in hard disk provided on the Smart self-verifying vehicle device 1, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like. Further, the memory 11 may also include both an internal storage unit and an external storage device of the intelligent self-checking vehicle apparatus 1. The memory 11 may be used to store not only application software installed in the intelligent self-checking vehicle device 1 and various types of data, such as codes of the intelligent self-checking vehicle program 01, but also temporarily store data that has been output or is to be output.
The processor 12, which in some embodiments may be a Central Processing Unit (CPU), controller, microcontroller, microprocessor or other data processing chip, is configured to run the program code stored in the memory 11 or to process data, for example to execute the intelligent self-checking vehicle program 01.
The communication bus 13 is used to realize connection communication between these components.
The network interface 14 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), typically used to establish a communication link between the apparatus 1 and other electronic devices.
Optionally, the apparatus 1 may further comprise a user interface, which may comprise a Display (Display), an input unit such as a Keyboard (Keyboard), and optionally a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is suitable for displaying information processed in the intelligent self-verifying vehicle device 1 and for displaying a visual user interface.
While fig. 2 only shows the intelligent self-checking vehicle device 1 with components 11 to 14 and the intelligent self-checking vehicle program 01, those skilled in the art will appreciate that the structure shown in fig. 2 does not constitute a limitation of the intelligent self-checking vehicle device 1, which may include fewer or more components than shown, a combination of certain components, or a different arrangement of components.
In the embodiment of the apparatus 1 shown in fig. 2, the memory 11 stores therein an intelligent self-checking vehicle program 01; the processor 12 implements the following steps when executing the intelligent self-checking vehicle program 01 stored in the memory 11:
step one, a vehicle image set stored by a user in history is obtained from a vehicle image library, and a label is established for the vehicle image set of the vehicle image library to generate a label set.
In the preferred embodiment of the present invention, the user may be an enterprise related to vehicle insurance, such as Ping An of China. The vehicle image sources in the invention mainly comprise the following two modes: in the first mode, the images are obtained through offline vehicle-insurance promotion and acquisition by Ping An of China business personnel; in the second mode, the images are acquired through online signing via the Ping An of China vehicle-insurance app and/or the Ping An of China vehicle-insurance official website. Further, the invention establishes labels for the vehicle images in the vehicle image library, thereby generating a label set. For example, tags belonging to an insured vehicle and tags not belonging to an insured vehicle are established separately based on license plate authentication of the vehicle.
And step two, receiving the vehicle checking image set of the user, and carrying out preprocessing operation on the vehicle checking image set to obtain a target vehicle checking image set.
In a preferred embodiment of the present invention, the vehicle-checking image set is mainly derived from vehicle images uploaded by a vehicle owner. The pre-processing operations include contrast enhancement, graying, and noise reduction. The method comprises the steps that a car inspection image in a car inspection image set is converted into a gray image through histogram equalization; contrast enhancement is carried out on the gray level image by utilizing a contrast stretching method; and denoising the contrast-enhanced gray level image by using Gaussian filtering to obtain the target car inspection image set.
In detail, the specific implementation steps of contrast enhancement, graying processing and noise reduction are as follows:
d. graying treatment:
the histogram equalization is a process of having the same number of pixel points on each gray level, and aims to make the image distributed and homogenized in the whole dynamic variation range of the gray level, improve the brightness distribution state of the image and enhance the visual effect of the image. In the embodiment of the present invention, the histogram equalization processing includes: counting a histogram of the vehicle inspection image set with the improved contrast; calculating new gray scale of the vehicle inspection image after transformation by adopting cumulative distribution function according to the counted histogram; and replacing the old gray scale with the new gray scale, and simultaneously combining the gray scales which are equal or approximate to each other to obtain a balanced vehicle-checking image set. Preferably, the invention converts the car inspection image containing the color image into the gray image by using the proportional methods, wherein the proportional methods are that the three components of the current pixel are respectively R, G and B, and the converted pixel component value Y is obtained by using a color conversion formula, so that the gray image of the color image is obtained. The color conversion formula is:
Y=0.3R+0.59G+0.11B
e. contrast enhancement:
the contrast refers to the contrast between the brightness maximum and minimum in the imaging system, wherein low contrast increases the difficulty of image processing. In the preferred embodiment of the present invention, a contrast stretching method is used to enhance the contrast of the image by increasing the dynamic range of the gray scale. Furthermore, the invention performs gray scale stretching on the specific area according to the piecewise linear transformation function in the contrast stretching method, thereby further improving the contrast of the output image. When contrast stretching is performed, gray value transformation is essentially achieved. The invention realizes gray value conversion by linear stretching, wherein the linear stretching refers to pixel level operation with linear relation between input and output gray values, and a gray conversion formula is as follows:
D_b = f(D_a) = a·D_a + b
where a is the linear slope and b is the intercept on the Y-axis. When a > 1, the contrast of the output image is enhanced compared with the original image; when a < 1, the contrast of the output image is weakened compared with the original image. Here D_a represents the gray value of the input image and D_b represents the gray value of the output image.
f. Noise reduction:
the Gaussian filtering is linear smooth filtering, is suitable for eliminating Gaussian noise, and is widely applied to the noise reduction process of image processing. In the invention, each pixel in the image of the vehicle-checking image set is scanned by using a template (or called convolution and mask), and the weighted average gray value of the pixels in the neighborhood determined by the template is used for replacing the value of the central pixel point of the template, so that the N-dimensional space normal distribution equation is as follows:
G(r) = (1 / (2πσ²)^(N/2)) · e^(−r² / (2σ²))
where σ is the standard deviation of the normal distribution, the larger the σ value, the more blurred (smoothed) the image, and r is the blur radius, which refers to the distance of the template element to the center of the template.
And thirdly, segmenting the vehicles in the target vehicle inspection image set through an edge detection method and a threshold segmentation method to obtain local key point images of the vehicles.
The basic idea of edge detection is to consider edge points as those pixel points in an image where the gray level of pixels has a step change or a roof change, i.e. where the derivative of the gray level is large or extremely large. In the preferred embodiment of the invention, a Canny edge detection method is used for carrying out interface positioning on the vehicle of the target vehicle-checking image set, the amplitude and the direction of the gradient of the vehicle are calculated through the finite difference of first-order partial derivatives, the amplitude of a non-local maximum point in the gradient of the vehicle is set to be zero, a refined vehicle edge image is obtained, a dual-threshold method is used for segmenting the vehicle edge image, a region growing method is used for amplifying key points in the segmented vehicle edge image, and the segmented vehicle edge image is connected through edge connection, so that a local key point image of the vehicle is obtained.
The basic idea of the region growing method is to group pixels or sub-regions into larger regions according to a predefined criterion, starting from a set of growing points (the growing point can be a single pixel or a small region), merging adjacent pixels or regions with similar properties to the growing point with the growing point to form a new growing point, and repeating the process until the growing point cannot grow. The four corners of the segmented vehicle edge image are taken as seed growing points, the pixel values of the background part of the segmented vehicle edge image are set to be zero, the image of the local key point part of the vehicle is segmented, and the image of the local key point part of the vehicle is amplified.
Furthermore, the preferred embodiment of the present invention presets two thresholds T_1 and T_2 (T_1 < T_2) and obtains two threshold edge images N_1[i,j] and N_2[i,j]. Preferably, the double threshold method connects the interrupted edges in N_2[i,j] into a complete contour: when a break point of an edge is reached, edges that can be connected are searched for within the neighborhood of that point in N_1[i,j], until all discontinuities in N_2[i,j] are connected.
And fourthly, establishing a direction gradient characteristic atlas of the vehicle according to the local key point image of the vehicle, and taking the direction gradient characteristic atlas as a training set.
The directional gradient feature is a feature descriptor used for object detection in computer vision and image processing. The directional gradient feature constitutes a feature by calculating and counting a gradient direction histogram of a local region of the image. A gradient matrix of the local key point image of the vehicle is formed by calculating the gradient magnitude G(x, y) and the gradient direction σ(x, y) of each pixel point (x, y) of the local key point image of the vehicle, wherein each element in the gradient matrix is a vector, the first component is the gradient magnitude, and the second and third components together represent the gradient direction; the gradient matrix is divided into small cell units, wherein each cell unit is preset to be 4×4 pixels, every 2×2 cell units form a block, and the angle range of 0–180 degrees is evenly divided into 9 direction channels; the gradient magnitude and direction of each pixel point in each cell unit are calculated and a gradient direction histogram is counted, wherein the gradient direction histogram comprises the 9 direction channels, and the sum of the pixel gradients of each direction channel in the gradient direction histogram is calculated to obtain a group of vectors formed by the accumulated pixel-gradient sums of the channels; the cell units are combined into blocks, and the vectors in each block are normalized to obtain feature vectors; all the normalized feature vectors are connected to form the directional gradient feature map set of the local key point image of the vehicle.
And fifthly, training a pre-constructed intelligent self-checking vehicle model by using the training set to obtain a training value, inputting the training value and the label set into a loss function of the intelligent self-checking vehicle model to obtain a loss function value, and finishing the training of the intelligent self-checking vehicle model until the loss function value is smaller than a preset threshold value.
In a preferred embodiment of the present invention, the intelligent self-checking vehicle model comprises a convolutional neural network. The convolutional neural network is a feedforward neural network, the artificial neurons of the convolutional neural network can respond to surrounding units in a part of coverage range, the basic structure of the convolutional neural network comprises two layers, one layer is a characteristic extraction layer, the input of each neuron is connected with a local receiving domain of the previous layer, and the local characteristics are extracted. Once the local feature is extracted, the position relation between the local feature and other features is determined; the other is a feature mapping layer, each calculation layer of the network is composed of a plurality of feature mappings, each feature mapping is a plane, and the weights of all neurons on the plane are equal.
In a preferred embodiment of the present invention, the convolutional neural network comprises an input layer, a convolutional layer, a pooling layer, and an output layer. The input layer of the convolutional neural network model receives the training set, and a preset group of filters in the convolutional layer performs convolution operations on the training set to extract feature vectors, where the filters may be {filter_0, filter_1}, generating a set of features on similar channels and on dissimilar channels respectively; the pooling layer performs a pooling operation on the feature vectors, the pooled feature vectors are input into a fully connected layer, normalization processing and calculation are performed on the pooled feature vectors through an activation function to obtain a training value, and the calculation result is input into the output layer. The normalization process "compresses" a K-dimensional vector containing arbitrary real numbers into another K-dimensional real vector, so that each element lies in the range (0, 1) and the sum of all elements is 1.
In the embodiment of the present invention, the activation function is a softmax function, and a calculation formula is as follows:
O_j = e^(I_j) / Σ_(i=1)^(t) e^(I_i)
where O_j represents the vehicle feature-image output value of the j-th neuron of the convolutional neural network output layer, I_j represents the input value of the j-th neuron of the output layer, t represents the total number of neurons in the output layer, and e is the base of the natural logarithm.
In a preferred embodiment of the present invention, the threshold of the predetermined loss function value is 0.01, and the loss function is a least square method:
s = (1 / 2k) · Σ_(i=1)^(k) (y_i − y'_i)²
where s is the error value between the vehicle feature image with the highest matching degree for the input directional gradient feature map and the vehicle image in the vehicle image library, k is the number of directional gradient feature maps in the set, y_i is a vehicle image of the vehicle image library, and y'_i is the vehicle feature image with the highest matching degree for the input directional gradient feature map.
Inputting the vehicle inspection image uploaded by the user into the trained intelligent self-checking vehicle inspection model to obtain the characteristic image with the highest matching degree of the vehicle inspection image, traversing and comparing the characteristic image with the highest matching degree with the vehicle image library by using a logic comparison program, and outputting the self-checking result of the vehicle inspection image uploaded by the user.
In a preferred embodiment of the present invention, the logic comparison program is written using MapReduce in Hadoop. MapReduce is a programming model for parallel operation on large-scale data sets (larger than 1 TB). According to the invention, the feature image with the highest matching degree with the vehicle inspection image is obtained through the intelligent self-checking vehicle model, and whether the vehicle image corresponding to the feature image with the highest matching degree exists in the vehicle image library is identified by the logic comparison program, so that the self-checking result of the vehicle inspection image is output. The traversal comparison compares the vehicle inspection image with the vehicle images in the vehicle image library one by one. Preferably, the invention does not further process a vehicle inspection image that passes the self-check, while a vehicle inspection image that does not pass the self-check is submitted for manual review.
Alternatively, in other embodiments, the intelligent self-checking vehicle program may be further divided into one or more modules, and the one or more modules are stored in the memory 11 and executed by one or more processors (in this embodiment, the processor 12) to implement the present invention.
For example, referring to fig. 3, which is a schematic diagram of the program modules of the intelligent self-checking vehicle program in an embodiment of the intelligent self-checking vehicle device of the present invention, in this embodiment the intelligent self-checking vehicle program may be divided into an image acquisition module 10, an image processing module 20, a model training module 30, and a result self-checking module 40, which exemplarily operate as follows:
the image acquisition module 10 is configured to: the method comprises the steps of obtaining vehicle images stored by a user in history from a vehicle image library, receiving a vehicle checking image set of the user, and establishing labels for the vehicle images in the vehicle image library to generate a label set.
The image processing module 20 is configured to: preprocessing the vehicle checking image set to obtain a target vehicle checking image set, segmenting vehicles of the target vehicle checking image set through an edge detection method and a threshold segmentation method to obtain local key point images of the vehicles, establishing a direction gradient characteristic atlas of the vehicles according to the local key point images of the vehicles, and taking the direction gradient characteristic atlas as a training set.
The model training module 30 is configured to: training a pre-constructed intelligent self-checking vehicle model by utilizing the training set to obtain a training value, inputting the training value and the label set into a loss function of the intelligent self-checking vehicle model to obtain a loss function value, and finishing the training of the intelligent self-checking vehicle model until the loss function value is smaller than a preset threshold value.
The result self-checking module 40 is configured to: inputting the vehicle inspection image uploaded by the user into the trained intelligent self-checking vehicle model to obtain the characteristic image with the highest matching degree of the vehicle inspection image, traversing and comparing the characteristic image with the highest matching degree with the vehicle image library by utilizing a logic comparison program, and outputting the self-checking result of the vehicle inspection image uploaded by the user.
The functions or operation steps of the image acquisition module 10, the image processing module 20, the model training module 30, and the result self-checking module 40, when executed, are substantially the same as those of the above embodiments and are not repeated here.
Furthermore, an embodiment of the present invention further provides a computer-readable storage medium, where the computer-readable storage medium has an intelligent self-checking vehicle program stored thereon, and the intelligent self-checking vehicle program is executable by one or more processors to implement the following operations:
acquiring vehicle images historically stored by a user from a vehicle image library, and establishing labels for the vehicle images in the vehicle image library to generate a label set;
receiving the vehicle-checking image set of the user, and carrying out preprocessing operation on the vehicle-checking image set to obtain a target vehicle-checking image set;
segmenting the vehicle of the target vehicle-checking image set by an edge detection method and a threshold segmentation method to obtain a local key point image of the vehicle;
establishing a direction gradient characteristic atlas of the vehicle according to the local key point image of the vehicle, and taking the direction gradient characteristic atlas as a training set;
training a pre-constructed intelligent self-checking vehicle model by using the training set to obtain a training value, inputting the training value and the label set into a loss function of the intelligent self-checking vehicle model to obtain a loss function value, and completing the training of the intelligent self-checking vehicle model when the loss function value is smaller than a preset threshold value;
inputting the vehicle inspection image uploaded by the user into the trained intelligent self-checking vehicle model to obtain the characteristic image with the highest matching degree with the vehicle inspection image, traversing and comparing the characteristic image with the highest matching degree with the vehicle image library by utilizing a logic comparison program, and outputting the self-checking result of the vehicle inspection image uploaded by the user.
The specific implementation manner of the computer-readable storage medium of the present invention is substantially the same as that of the above-mentioned embodiments of the intelligent self-checking vehicle-inspecting apparatus and method, and will not be described herein again.
It should be noted that the above-mentioned numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments. The terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, apparatus, article, or method. Without further limitation, an element defined by the phrase "comprising a(n) ..." does not exclude the presence of another identical element in a process, apparatus, article, or method that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. An intelligent self-checking vehicle method, the method comprising:
acquiring a vehicle image set historically stored by a user from a vehicle image library, and establishing labels for the vehicle image set of the vehicle image library to generate a label set;
receiving the vehicle-checking image set of the user, and carrying out preprocessing operation on the vehicle-checking image set to obtain a target vehicle-checking image set, wherein the preprocessing operation comprises contrast enhancement, graying processing and noise reduction;
segmenting the vehicle of the target vehicle-checking image set by an edge detection method and a threshold segmentation method to obtain a local key point image of the vehicle;
establishing a direction gradient characteristic atlas of the vehicle according to the local key point image of the vehicle, and taking the direction gradient characteristic atlas as a training set;
training a pre-constructed intelligent self-checking vehicle model by using the training set to obtain a training value, inputting the training value and the label set into a loss function of the intelligent self-checking vehicle model to obtain a loss function value, and completing the training of the intelligent self-checking vehicle model when the loss function value is smaller than a preset threshold value;
inputting the vehicle inspection image uploaded by the user into the trained intelligent self-checking vehicle model to obtain the characteristic image with the highest matching degree with the vehicle inspection image, traversing and comparing the characteristic image with the highest matching degree with the vehicle image library by utilizing a logic comparison program, and outputting the self-checking result of the vehicle inspection image uploaded by the user.
2. The intelligent self-checking vehicle method according to claim 1, wherein the receiving the vehicle-checking image set of the user, and performing a preprocessing operation on the vehicle-checking image set to obtain a target vehicle-checking image set comprises:
converting the car inspection images in the car inspection image set into gray level images through histogram equalization; contrast enhancement is carried out on the gray level image by using a contrast stretching method; and denoising the contrast-enhanced gray level image by using Gaussian filtering to obtain the target car inspection image set.
3. The intelligent self-checking vehicle method according to claim 1, wherein the segmenting the vehicle of the target vehicle-checking image set by an edge detection method and a threshold segmentation method to obtain the local key point image of the vehicle comprises:
carrying out boundary positioning on the vehicle of the target vehicle-checking image set by using a Canny edge detection method, calculating the amplitude and the direction of the gradient of the vehicle through the finite difference of first-order partial derivatives, and setting the amplitude of non-local-maximum points in the gradient of the vehicle to zero to obtain a refined vehicle edge image;
and segmenting the vehicle edge image by using a dual threshold method, amplifying key points in the segmented vehicle edge image by using a region growing method, and connecting the segmented vehicle edge image by edge connection so as to obtain a local key point image of the vehicle.
4. The intelligent self-checking vehicle method according to any one of claims 1 to 3, wherein the establishing of the directional gradient feature atlas of the vehicle according to the local key point image of the vehicle comprises:
calculating the gradient amplitude G(x, y) and the gradient direction σ(x, y) of each pixel point (x, y) in the local key point image of the vehicle to form a gradient matrix of the local key point image of the vehicle, and dividing the gradient matrix into small cell units;
calculating the gradient size and direction of each pixel point in the cell unit, counting a gradient direction histogram, and calculating the sum of pixel gradients of each direction channel in the gradient direction histogram;
accumulating the sum of the pixel gradients of each direction channel to form a vector, combining the cell units into a block, normalizing the vector in the block to obtain a characteristic vector, and connecting the characteristic vectors to obtain a direction gradient characteristic map set of the vehicle.
5. The intelligent self-checking vehicle method of claim 1, wherein said training a pre-constructed intelligent self-checking vehicle model with said training set to obtain training values comprises:
inputting the training set into an input layer of a convolutional neural network of the intelligent self-checking vehicle model, and extracting a feature vector by performing a convolution operation on the training set with a preset group of filters in the convolutional layer of the convolutional neural network;
and performing pooling operation on the feature vector by using a pooling layer of the convolutional neural network, inputting the pooled feature vector to a full-connection layer, and performing normalization processing and calculation on the pooled feature vector through an activation function of the convolutional neural network to obtain the training value.
6. An intelligent self-checking vehicle inspection device, characterized in that the device comprises a memory and a processor, the memory stores an intelligent self-checking vehicle inspection program which can run on the processor, and when the intelligent self-checking vehicle inspection program is executed by the processor, the following steps are realized:
acquiring vehicle images historically stored by a user from a vehicle image library, and establishing labels for the vehicle images in the vehicle image library to generate a label set;
receiving the vehicle-checking image set of the user, and carrying out preprocessing operation on the vehicle-checking image set to obtain a target vehicle-checking image set, wherein the preprocessing operation comprises contrast enhancement, graying processing and noise reduction;
segmenting the vehicles in the target vehicle-checking image set by an edge detection method and a threshold segmentation method to obtain local key point images of the vehicles;
establishing a direction gradient characteristic atlas of the vehicle according to the local key point image of the vehicle, and taking the direction gradient characteristic atlas as a training set;
training a pre-constructed intelligent self-checking vehicle model by using the training set to obtain a training value, inputting the training value and the label set into a loss function of the intelligent self-checking vehicle model to obtain a loss function value, and completing the training of the intelligent self-checking vehicle model when the loss function value is smaller than a preset threshold value;
inputting the vehicle inspection image uploaded by the user into the trained intelligent self-checking vehicle model to obtain the characteristic image with the highest matching degree with the vehicle inspection image, traversing and comparing the characteristic image with the highest matching degree with the vehicle image library by using a logic comparison program, and outputting the self-checking result of the vehicle inspection image uploaded by the user.
7. The intelligent self-checking vehicle inspection device according to claim 6, wherein said receiving said user's vehicle inspection image set, and performing a preprocessing operation on said vehicle inspection image set to obtain a target vehicle inspection image set comprises:
converting the car inspection images in the car inspection image set into gray level images through histogram equalization; contrast enhancement is carried out on the gray level image by using a contrast stretching method; and denoising the contrast-enhanced gray level image by using Gaussian filtering to obtain the target car inspection image set.
8. The intelligent self-checking vehicle inspection device according to claim 6, wherein the segmenting the vehicle of the target vehicle inspection image set by an edge detection method and a threshold segmentation method to obtain the local key point image of the vehicle comprises:
boundary positioning is carried out on the vehicle of the target vehicle-checking image set by using a Canny edge detection method, the amplitude and the direction of the gradient of the vehicle are calculated through the finite difference of first-order partial derivatives, and the amplitude of non-local-maximum points in the gradient of the vehicle is set to zero, so that a refined vehicle edge image is obtained;
the method comprises the steps of segmenting the vehicle edge image by using a double threshold method, amplifying key points in the segmented vehicle edge image by using a region growing method, and connecting the segmented vehicle edge image through edge connection so as to obtain a local key point image of the vehicle.
9. The intelligent self-checking vehicle inspection device according to any one of claims 6 to 8, wherein the establishing of the directional gradient feature atlas of the vehicle according to the local key point image of the vehicle comprises:
calculating the gradient amplitude G(x, y) and the gradient direction σ(x, y) of each pixel point (x, y) in the local key point image of the vehicle to form a gradient matrix of the local key point image of the vehicle, and dividing the gradient matrix into small cell units;
calculating the gradient size and direction of each pixel point in the cell unit, counting a gradient direction histogram, and calculating the sum of pixel gradients of each direction channel in the gradient direction histogram;
accumulating the sum of the pixel gradients of each direction channel to form a vector, combining the cell units into a block, normalizing the vector in the block to obtain a characteristic vector, and connecting the characteristic vectors to obtain a direction gradient characteristic map set of the vehicle.
10. A computer-readable storage medium having stored thereon an intelligent self-checking vehicle program executable by one or more processors to perform the steps of the intelligent self-checking vehicle method as claimed in any one of claims 1 to 5.
CN201910761970.6A 2019-08-14 2019-08-14 Intelligent self-checking vehicle method and device and computer readable storage medium Active CN110598033B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910761970.6A CN110598033B (en) 2019-08-14 2019-08-14 Intelligent self-checking vehicle method and device and computer readable storage medium


Publications (2)

Publication Number Publication Date
CN110598033A CN110598033A (en) 2019-12-20
CN110598033B (en) 2023-03-28

Family

ID=68854650

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910761970.6A Active CN110598033B (en) 2019-08-14 2019-08-14 Intelligent self-checking vehicle method and device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN110598033B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111353549B (en) * 2020-03-10 2023-01-31 创新奇智(重庆)科技有限公司 Image label verification method and device, electronic equipment and storage medium
CN112132812B (en) * 2020-09-24 2023-06-30 平安科技(深圳)有限公司 Certificate verification method and device, electronic equipment and medium

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
US8311294B2 (en) * 2009-09-08 2012-11-13 Facedouble, Inc. Image classification and information retrieval over wireless digital networks and the internet
CN105787466B (en) * 2016-03-18 2019-07-16 中山大学 Method and system for fine identification of vehicle type
TWI753034B (en) * 2017-03-31 2022-01-21 香港商阿里巴巴集團服務有限公司 Method, device and electronic device for generating and searching feature vector
CN107729818B (en) * 2017-09-21 2020-09-22 北京航空航天大学 Multi-feature fusion vehicle re-identification method based on deep learning
CN109101865A (en) * 2018-05-31 2018-12-28 湖北工业大学 A kind of recognition methods again of the pedestrian based on deep learning
CN108805196B (en) * 2018-06-05 2022-02-18 西安交通大学 Automatic incremental learning method for image recognition
CN109472262A (en) * 2018-09-25 2019-03-15 平安科技(深圳)有限公司 License plate recognition method, device, computer equipment and storage medium
CN110097068B (en) * 2019-01-17 2021-07-30 北京航空航天大学 Method and device for identifying similar vehicles

Non-Patent Citations (1)

Title
Vehicle object segmentation method under weak contrast based on a visual attention mechanism; Liu Zhanwen et al.; China Journal of Highway and Transport; 2016-08-15; Vol. 29, No. 8; pp. 124-133 *

Also Published As

Publication number Publication date
CN110598033A (en) 2019-12-20

Similar Documents

Publication Publication Date Title
CN110414507B (en) License plate recognition method and device, computer equipment and storage medium
Zang et al. Vehicle license plate recognition using visual attention model and deep learning
CN110751037A (en) Method for recognizing color of vehicle body and terminal equipment
Kumar et al. Automatic vehicle number plate recognition system using machine learning
CN110619274A (en) Identity verification method and device based on seal and signature and computer equipment
CN112052845A (en) Image recognition method, device, equipment and storage medium
US9224207B2 (en) Segmentation co-clustering
CN110717497B (en) Image similarity matching method, device and computer readable storage medium
CN110287787B (en) Image recognition method, image recognition device and computer-readable storage medium
Islam et al. An efficient method for extraction and recognition of bangla characters from vehicle license plates
CN111160169A (en) Face detection method, device, equipment and computer readable storage medium
CN104282008B (en) The method and apparatus that Texture Segmentation is carried out to image
CN114494994B (en) Vehicle abnormal gathering monitoring method, device, computer equipment and storage medium
CN114444565B (en) Image tampering detection method, terminal equipment and storage medium
CN111160142A (en) Certificate bill positioning detection method based on numerical prediction regression model
CN110598033B (en) Intelligent self-checking vehicle method and device and computer readable storage medium
CN111783896A (en) Image identification method and system based on kernel method
Thaiparnit et al. Tracking vehicles system based on license plate recognition
Liu et al. A novel SVM network using HOG feature for prohibition traffic sign recognition
Bolotova et al. License plate recognition with hierarchical temporal memory model
Arsenovic et al. Deep learning driven plates recognition system
Luo et al. Seatbelt detection in road surveillance images based on improved dense residual network with two-level attention mechanism
Dhar et al. Interval type-2 fuzzy set and human vision based multi-scale geometric analysis for text-graphics segmentation
CN112800872A (en) Face recognition method and system based on deep learning
Tayo et al. Vehicle license plate recognition using edge detection and neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant