
CN114092683A - Tire deformation identification method and device based on visual feedback and depth network - Google Patents

Tire deformation identification method and device based on visual feedback and depth network

Info

Publication number
CN114092683A
Authority
CN
China
Prior art keywords
tire
area
rim
pixel points
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111319379.9A
Other languages
Chinese (zh)
Inventor
张杰
史鹏
孔烜
邓露
戴丙维
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Zhongdeng Technology Co ltd
Original Assignee
Hunan Zhongdeng Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Zhongdeng Technology Co ltd filed Critical Hunan Zhongdeng Technology Co ltd
Priority to CN202111319379.9A priority Critical patent/CN114092683A/en
Publication of CN114092683A publication Critical patent/CN114092683A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/13 - Edge detection
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/60 - Analysis of geometric attributes
    • G06T7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/60 - Analysis of geometric attributes
    • G06T7/66 - Analysis of geometric attributes of image moments or centre of gravity
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30248 - Vehicle exterior or interior
    • G06T2207/30252 - Vehicle exterior; Vicinity of vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Geometry (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract


The present application discloses a tire deformation identification method, apparatus, device and readable storage medium based on visual feedback and a depth network. The method includes: recognizing the lane of a vehicle according to an acquired vehicle image, and focusing a road side camera according to the relationship between the lane and the focal length of the road side camera; acquiring a tire image with the focused road side camera and identifying it with a pre-trained semantic segmentation algorithm to obtain a rim area and a tire area; detecting the rim area and the tire area to obtain the pixel points of the rim area and the pixel points of the tire area; calculating an image scale factor according to the pixel points of the rim area and the rim diameter; and calculating the tire deformation according to the pixel points of the tire area and the image scale factor. The technical solution disclosed in the present application identifies the tire deformation in a non-contact manner by means of the tire image, thereby improving the convenience and accuracy of tire deformation identification and reducing the identification cost.


Description

Tire deformation identification method and device based on visual feedback and depth network
Technical Field
The application relates to the technical field of intelligent transportation, and in particular to a tire deformation identification method, device, equipment and readable storage medium based on visual feedback and a depth network.
Background
With the continuous development of the intelligent transportation field, it is vital to obtain various vehicle information intelligently, efficiently and accurately. The deformation of a tire, which is the only component of a vehicle in contact with the road surface, is an important index for evaluating vehicle driving comfort, driving safety, traffic economy, contact force identification accuracy and the like.
At present, the following methods are often adopted for identifying tire deformation: tire deformation identification based on the change of the signal of a sensor inside the tire, and tire deformation identification based on the vehicle speed of a running tire. In the first method, the tire deformation is determined from the change of the signal of a longitudinal accelerometer inside the tire during tire motion, using the distance between the wave crest and the wave trough of the signal; however, such tire sensors have problems such as demanding installation requirements and limited battery life. The second method relies on the fact that the vehicle speed changes after the tire deforms during motion, so the tire deformation is back-calculated after measuring the rotation speed of the tire; specifically, the circumference of the tire is obtained from the change of vehicle speed, and it is then determined whether the tire has deformed.
In summary, how to identify the tire deformation amount without using a tire sensor is a technical problem to be solved urgently by those skilled in the art.
Disclosure of Invention
In view of the above, an object of the present application is to provide a tire deformation amount identification method, device, apparatus and readable storage medium based on visual feedback and depth network, for identifying tire deformation amount without using a tire sensor.
In order to achieve the above purpose, the present application provides the following technical solutions:
a tire deformation amount identification method based on visual feedback and a depth network comprises the following steps:
acquiring a vehicle image of a target vehicle in a pre-established virtual detection area by using a traffic camera;
recognizing the lane of the target vehicle according to the vehicle image, and focusing the road side camera according to a preset relationship between lanes and the focal length of the road side camera and according to the lane of the target vehicle;
acquiring a tire image of the target vehicle by using the focused road side camera, and identifying the tire image by using a semantic segmentation algorithm obtained by pre-training to obtain a rim area and a tire area;
detecting the rim area and the tire area to obtain pixel points of the rim area and pixel points of the tire area;
calculating an image scale factor according to the pixel points of the rim area and the diameter of the rim;
and calculating the deformation of the tire according to the pixel points of the tire area and the image scale factor.
Preferably, the detecting the rim area and the tire area to obtain the pixel points of the rim area and the pixel points of the tire area includes:
performing edge detection on the rim area and the tire area to obtain rim edge pixel points and tire edge pixel points;
and the calculating of the image scale factor according to the pixel points of the rim area and the rim diameter comprises the following steps:
selecting any three pixel points from the rim edge pixel points, and acquiring the coordinates of the selected pixel points;
calculating the coordinates of the circle center of the rim and the number of the pixel points on the diameter of the rim according to the selected coordinates of the pixel points;
and calculating a first scale factor by using the diameter of the rim and the number of pixel points on the diameter of the rim.
Preferably, calculating the tire deformation according to the pixel points of the tire area and the image scale factor includes:
determining a tire contact central point and a highest edge point of the upper edge of the tire according to the circle center of the rim;
respectively obtaining a first preset number of tire upper edge pixel points from two sides of the highest edge point, and forming a tire upper edge point group by the obtained tire upper edge pixel points and the highest edge point;
calculating the mean number of pixel points between the tire upper edge point group and the rim circle center according to the tire upper edge point group;
on the contact length between the tire and the ground, respectively acquiring a second preset number of tire lower edge pixel points from the two sides of the tire contact center point, and forming a tire lower edge point group from the acquired tire lower edge pixel points and the tire contact center;
calculating the minimum number of pixel points between the tire lower edge point group and the rim circle center according to the tire lower edge point group;
calculating the number of pixel points of the vertical deflection of the tire according to the mean number of pixel points and the minimum number of pixel points;
and obtaining the vertical deflection of the tire according to the number of the pixel points of the vertical deflection of the tire and the first scale factor.
Preferably, calculating the tire deformation according to the pixel points of the tire area and the image scale factor includes:
determining a tire contact central point according to the circle center of the rim;
respectively selecting tire lower edge pixel points from the two sides of the tire contact center point, and calculating the slope between each tire lower edge pixel point and the tire contact center point;
comparing the slope between each selected tire lower edge pixel point and the tire contact center point with a threshold, determining the tire lower edge pixel point on the left side of the tire contact center point whose slope with the tire contact center point is greater than or equal to the threshold as a left critical point, and determining the tire lower edge pixel point on the right side of the tire contact center point whose slope with the tire contact center point is greater than or equal to the threshold as a right critical point;
determining the number of pixel points between the left critical point and the right critical point, and calculating the contact length between the tire and the ground according to the number of the pixel points between the left critical point and the right critical point and the first scale factor.
Preferably, according to the pixel point of the rim area and the rim diameter, calculating an image scale factor, including:
acquiring the number of pixel points contained in the rim area;
calculating the area of the rim by using the diameter of the rim;
obtaining a second scale factor according to the area of the rim and the number of pixel points contained in the rim area;
calculating the tire deformation according to the pixel points of the tire area and the image scale factor, and the method comprises the following steps:
acquiring the number of pixel points contained in the upper half area of the tire and the number of pixel points contained in the lower half area of the tire, and calculating the difference value between the number of pixel points contained in the upper half area and the number of pixel points contained in the lower half area of the tire;
and calculating the deformation area of the tire according to the difference value and the second scale factor.
Preferably, recognizing the lane of the target vehicle from the vehicle image includes:
detecting the vehicle image by using a pre-trained target detection algorithm to obtain position information of a vehicle target detection frame corresponding to the target vehicle;
and determining the lane of the target vehicle according to the position information of the vehicle target detection frame.
Preferably, after calculating the tire deformation amount according to the pixel points of the tire area and the image scale factor, the method further includes:
and judging whether the tire deformation is in a corresponding preset range, and if not, performing early warning.
A tire deformation amount recognition apparatus based on visual feedback and a depth network, comprising:
the acquisition module is used for acquiring a vehicle image of a target vehicle in a pre-established virtual detection area by using a traffic camera;
the focusing module is used for identifying the lane of the target vehicle according to the vehicle image and focusing the road side camera according to the preset relationship between the lane and the focal length of the road side camera and the lane of the target vehicle;
the first identification module is used for acquiring a tire image of the target vehicle by using the focused road side camera, and identifying the tire image by using a semantic segmentation algorithm obtained by pre-training to obtain a rim area and a tire area;
the detection module is used for detecting the rim area and the tire area to obtain pixel points of the rim area and pixel points of the tire area;
the first calculation module is used for calculating an image scale factor according to the pixel points of the rim area and the diameter of the rim;
and the second calculation module is used for calculating the tire deformation according to the pixel points of the tire area and the image scale factor.
A tire deformation amount recognition apparatus based on visual feedback and a depth network, comprising:
a memory for storing a computer program;
a processor for executing the computer program to implement the steps of the method for identifying a tire deformation amount based on visual feedback and a depth network as claimed in any one of the above.
A readable storage medium, in which a computer program is stored, which computer program, when being executed by a processor, realizes the steps of the visual feedback and depth network-based tire deformation amount identification method according to any one of the above.
The application provides a tire deformation amount identification method, a tire deformation amount identification device, tire deformation amount identification equipment and a readable storage medium based on visual feedback and a depth network, wherein the method comprises the following steps: acquiring a vehicle image of a target vehicle in a pre-established virtual detection area by using a traffic camera; recognizing a lane of a target vehicle according to the vehicle image, and focusing a road side camera according to a preset relationship between the lane and a focal length of the road side camera and the lane of the target vehicle; acquiring a tire image of a target vehicle by using the focused road side camera, and identifying the tire image by using a semantic segmentation algorithm obtained by pre-training to obtain a rim area and a tire area; detecting a rim area and a tire area to obtain pixel points of the rim area and pixel points of the tire area; calculating an image scale factor according to pixel points of the rim area and the diameter of the rim; and calculating the deformation of the tire according to the pixel points of the tire area and the image scale factor.
According to the technical scheme, the lane of the target vehicle is recognized first, the road side camera is then automatically focused according to that lane, and the automatically focused road side camera is used to acquire a high-resolution tire image, which facilitates improving the accuracy of tire deformation identification. The acquired tire image is then identified with a pre-trained semantic segmentation algorithm to obtain the rim area and the tire area; the rim area and the tire area are detected to obtain the pixel points of the rim area and the pixel points of the tire area; an image scale factor is calculated according to the pixel points of the rim area and the rim diameter; and the tire deformation is calculated according to the pixel points of the tire area and the image scale factor. In this way the tire deformation is identified in a non-contact manner from the tire image acquired by the automatically focused road side camera, without using a tire sensor. Therefore, the convenience of tire deformation identification can be improved, the identification cost can be reduced, the tire deformation can be quantified automatically, and the accuracy of tire deformation identification can be improved by means of deep learning.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only embodiments of the present application, and for those skilled in the art, other drawings can be obtained from the provided drawings without creative effort.
Fig. 1 is a flowchart of a tire deformation amount identification method based on visual feedback and a depth network according to an embodiment of the present application;
fig. 2 is a schematic diagram of an established virtual detection area according to an embodiment of the present application;
FIG. 3 is an original tire image obtained according to an embodiment of the present application;
FIG. 4 is a recognition result obtained using the pre-trained semantic segmentation algorithm according to an embodiment of the present application;
FIG. 5 is a schematic diagram of calculating a first scaling factor according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of calculating vertical deflection of a tire according to an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of calculating a contact length of a tire with a ground according to an embodiment of the present application;
FIG. 8 is a schematic diagram of calculating a deformation area of a tire according to an embodiment of the present application;
FIG. 9 is a schematic diagram illustrating detection of all vehicles included in a vehicle image by using a pre-trained target detection algorithm according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a tire deformation amount identification device based on visual feedback and a depth network according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of a tire deformation amount identification device based on visual feedback and a depth network according to an embodiment of the present application.
Detailed Description
With the continuous development of the intelligent transportation field, it is vital to obtain various vehicle information intelligently, efficiently and accurately. The deformation of a tire, which is the only component of a vehicle in contact with the road surface, is an important index for evaluating vehicle driving comfort, driving safety, traffic economy, contact force identification accuracy and the like.
When the tire deformation is too large, the contact area and the friction force between the tire and the road surface increase, which causes problems such as vehicle deflection, shaking and steering failure, and directly affects comfort and safety. In addition, a tire with large deformation increases fuel consumption during driving, thereby raising traffic operation costs.
When the air pressure is too high, the tire deforms only slightly, the contact area and the friction force between the tire and the road surface decrease, and the braking performance of the wheel is affected; during driving, more vibration is transmitted into the vehicle body, reducing ride comfort. In addition, under the same tire pressure the deformation of the tire directly reflects the change of the tire contact force: a large tire deformation indicates a high wheel load, and a small tire deformation indicates a low wheel load.
Therefore, high-precision, automated quantification of tire deformation is an important basis for ensuring vehicle driving safety, driving comfort, traffic economy and contact force identification.
At present, the following methods are often adopted for identifying tire deformation: tire deformation identification based on the change of the signal of a sensor inside the tire, and tire deformation identification based on the vehicle speed of a running tire. In the first method, the tire deformation is determined from the change of the signal of a longitudinal accelerometer inside the tire during tire motion, using the distance between the wave crest and the wave trough of the signal; however, such tire sensors have problems such as demanding installation requirements and limited battery life. The second method relies on the fact that the vehicle speed changes after the tire deforms during motion, so the tire deformation is back-calculated after measuring the rotation speed of the tire; specifically, the circumference of the tire is obtained from the change of vehicle speed, and it is then determined whether the tire has deformed.
Therefore, the present application provides a tire deformation identification method, device, equipment and readable storage medium based on visual feedback and a depth network, so as to identify the tire deformation without a tire sensor.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 1, which shows a flowchart of a tire deformation amount identification method based on visual feedback and a depth network provided in an embodiment of the present application, a tire deformation amount identification method based on visual feedback and a depth network provided in an embodiment of the present application may include:
s11: and acquiring a vehicle image of a target vehicle in a pre-established virtual detection area by using a traffic camera.
First, a virtual detection area may be established in advance within the shooting range of a traffic camera installed on a portal frame. Referring to fig. 2, which shows a schematic diagram of the established virtual detection area provided in an embodiment of the present application, the virtual detection area may specifically be a quadrilateral area covering multiple traveling vehicles; its length may be 25 meters along the vehicle traveling direction, and its width is the sum of all detection lanes, perpendicular to the traveling direction. The lanes in the virtual detection area may be numbered so that the lane of a vehicle can be obtained; specifically, lane numbers increase from the two sides of the road towards the center of the road, and fig. 2 shows 3 lane numbers. Lane detection can be conveniently carried out on the basis of the virtual detection area, which improves the convenience and efficiency of lane detection.
When identifying the tire deformation, the traffic camera mounted on the portal frame can be used to capture a vehicle image of the target vehicle in the pre-established virtual detection area. Specifically, the traffic camera can record a driving video of vehicles entering the virtual detection area; taking the ring-road traffic cameras of Changsha as an example, the traffic camera records vehicles driving on the expressway, with a frame rate of 30 fps (frames per second) and a resolution of 1920 x 1080 pixels. The vehicle driving video can then be split into frames to obtain vehicle images, and at least one clear vehicle image containing the target vehicle is screened out, as sketched below.
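The frame-splitting step can be illustrated with a short Python sketch using OpenCV; the video path and the sampling step below are placeholders rather than values from the embodiment.

```python
import cv2

def extract_frames(video_path, step=5):
    """Split a traffic-camera video into frames, keeping every `step`-th frame.

    At 30 fps, step=5 keeps roughly six frames per second; both the path and
    the step are illustrative assumptions.
    """
    cap = cv2.VideoCapture(video_path)
    frames = []
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            frames.append(frame)
        index += 1
    cap.release()
    return frames

# Example: frames from the portal-frame camera, later screened for a clear
# image of the target vehicle inside the virtual detection area.
vehicle_frames = extract_frames("gantry_camera.mp4")
```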
S12: and recognizing the lane of the target vehicle according to the vehicle image, and focusing the road side camera according to the preset relationship between the lane and the focal length of the road side camera and the lane of the target vehicle.
After the vehicle image of the target vehicle is acquired, the lane of the target vehicle may be recognized from the vehicle image, and specifically, the lane number of the lane in which the target vehicle is located may be recognized. When the vehicle image is identified, lanes of all vehicles included in the vehicle image can be specifically identified according to the vehicle image, and lanes of the target vehicle can be obtained from the lanes.
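As an illustration of mapping the detection result to a lane number, the sketch below assigns a lane from the horizontal position of the vehicle detection frame; the pixel boundaries of the lanes are assumed calibration values, not figures from the patent.

```python
# Assumed lane boundaries (pixel x-coordinates in a 1920-pixel-wide frame);
# lane numbers increase from the road side towards the road centre, as in fig. 2.
LANE_BOUNDARIES = [0, 640, 1280, 1920]

def lane_of_vehicle(detection_box):
    """detection_box = (x_min, y_min, x_max, y_max) from the target detection algorithm."""
    x_center = (detection_box[0] + detection_box[2]) / 2.0
    for lane_number in range(1, len(LANE_BOUNDARIES)):
        if LANE_BOUNDARIES[lane_number - 1] <= x_center < LANE_BOUNDARIES[lane_number]:
            return lane_number
    return None  # vehicle centre outside the virtual detection area
```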
Then, the road side camera (specifically, a camera located on the side of the road, which may be on one side or on both sides of the road) may be automatically focused according to a preset relationship between the lanes and the focal length of the road side camera and according to the lane of the target vehicle. When the lanes in the virtual detection area are numbered, the relationship between lane numbers and the focal lengths of the road side camera can be preset; the road side camera is then automatically focused according to this relationship and the lane number corresponding to the lane of the target vehicle. In other words, the specific lane position of the vehicle is judged logically and fed back to the road side camera, so that automatic focusing of the road side camera is realized; that is, the traffic camera on the portal frame and the road side camera are controlled in linkage, and the information of the overhead camera and the road side camera is fused by means of visual feedback.
The road side camera is a camera located on the side of the road, on one side or on both sides, and is arranged outside the virtual detection area so that vehicle-related information captured by the overhead traffic camera can be transmitted to it. Specifically, the road side camera can be arranged 10 meters away from the virtual detection area, which reduces the chance that the vehicle changes lane and allows the transmission signal of the overhead traffic camera to be acquired in time.
Through the process, the focal length of the road side camera can be automatically adjusted based on the information feedback of the lane position, so that the tire image with high resolution can be automatically acquired (wherein the resolution of the tire image can reach 6000 x 4000 pixel points), and the accuracy of tire deformation identification can be improved.
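A minimal sketch of this lane-based focus feedback is shown below; the lane-to-focal-length table and the `set_focal_length` call are assumed placeholders, since no camera SDK is specified here.

```python
# Assumed pre-calibrated mapping from lane number to road-side-camera focal length (mm).
LANE_TO_FOCAL_LENGTH = {1: 25.0, 2: 35.0, 3: 50.0}

def focus_roadside_camera(camera, lane_number):
    """Feed the detected lane back to the road side camera and set its focal length.

    `camera.set_focal_length` stands in for whatever control interface the
    actual road side camera exposes.
    """
    focal_length = LANE_TO_FOCAL_LENGTH[lane_number]
    camera.set_focal_length(focal_length)
    return focal_length
```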
S13: and acquiring a tire image of the target vehicle by using the focused road side camera, and identifying the tire image by using a semantic segmentation algorithm obtained by pre-training to obtain a rim area and a tire area.
On the basis of step S12, the focused road side camera may be used to capture a tire image of the target vehicle, so that a high-resolution tire image is obtained and the tire deformation can be identified from it by computer vision and information feedback. Computer vision is the science of making machines "see": a camera and a computer are used instead of human eyes to identify, track and measure a target, and the image is further processed so that it is better suited for human observation or for transmission to an instrument for detection. Information feedback means that the control system transmits information out, returns the result of the action of that information, and in turn influences the subsequent output, thereby playing a regulating role and achieving the intended purpose.
In addition, before the tire deformation amount is identified, an initial semantic segmentation algorithm can be selected in advance and trained to finally obtain the semantic segmentation algorithm capable of identifying the tire image. The semantic segmentation algorithm is a deep learning algorithm that associates a label or class with each pixel of the picture and is used to identify the set of pixels that constitute the distinguishable class.
The process of pre-training the semantic segmentation algorithm may specifically be:
1) shooting a tire video by using a camera, performing frame processing on the tire video, establishing a data set of various tire images, performing contour marking on the tire image data set, and dividing the marked data set. Specifically, tire videos shot by cameras (particularly cameras on two sides of a road) are subjected to frame division processing, and a data set of multiple tire images is established, wherein the data set mainly comprises six types, namely car tires, SUV tires, bus tires, passenger car tires, light truck tires and heavy truck tires, so that various tire images can be identified by a semantic segmentation algorithm obtained through final training; the calibration of the tire image data set mainly comprises rim and tire edge labeling, and the labeled boundary is tightly attached to the rim contour and the tire contour, so that the identification precision of the tire edge can be improved. The labeled data set is divided, for example, to include 1500 different tire images, wherein the training set, the verification set and the test set are 900, 300 and 300, respectively, and the proportion thereof is 60%, 20% and 20%.
2) Currently, commonly used semantic segmentation algorithms include FCN, SegNet, Mask R-CNN, DeepLab, PSPNet, UNet, and the like. A semantic segmentation algorithm first uses an encoder to down-sample the input image, compressing the image scale and judging the category of each region of the image; it then uses a decoder to up-sample the reduced feature map, restoring the original image size, and finally completes the classification of all pixel points in the input image. In the present application, the Tire-UNet network framework is preferably used as the initial semantic segmentation algorithm and trained to obtain the final semantic segmentation algorithm; of course, other semantic segmentation algorithms can also be selected for training and recognition, and the process is similar to the training process of the Tire-UNet network framework and is not repeated here.
The specific implementation is as follows: the tire edge is identified with the designed Tire-UNet network framework, which can effectively identify the edge profile characteristics of the tire and improve the tire edge identification precision. The Tire-UNet network takes UNet as its backbone and adds dilated convolution and an attention mechanism module. In addition, transfer learning is used in the training of the Tire-UNet network, which reduces the required amount of sample data, accelerates network convergence and improves identification precision. The specific steps are as follows: fit the Tire-UNet network model with the training set obtained by the division, then adjust the parameters of the model with the verification set, and finally evaluate the generalization capability of the model on the test set to obtain the trained semantic segmentation algorithm. A minimal training-loop sketch under these assumptions is given below.
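The fit/tune/evaluate procedure can be sketched as a standard PyTorch segmentation training loop. `TireUNet` is only a stand-in for the Tire-UNet described above (UNet backbone with dilated convolution and an attention module), whose exact layers are not disclosed here; any three-class segmentation network could be substituted for `model`.

```python
import torch
import torch.nn as nn

def train_segmentation(model, train_loader, val_loader, epochs=50, lr=1e-3, device="cuda"):
    """Fit the model on the training set and monitor the validation loss.

    Sketch only: the architecture, transfer-learning initialisation and
    hyper-parameters are assumptions, not values from the patent.
    """
    model = model.to(device)
    criterion = nn.CrossEntropyLoss()            # per-pixel classes: background / rim / tire
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for epoch in range(epochs):
        model.train()
        for images, masks in train_loader:       # masks hold per-pixel class labels
            images, masks = images.to(device), masks.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), masks)
            loss.backward()
            optimizer.step()
        model.eval()                             # validation pass used for parameter tuning
        with torch.no_grad():
            val_loss = sum(criterion(model(x.to(device)), y.to(device)).item()
                           for x, y in val_loader) / max(len(val_loader), 1)
        print(f"epoch {epoch + 1}: val_loss = {val_loss:.4f}")
    return model
```

Generalization would then be checked once on the held-out test set, mirroring the 900/300/300 split described above.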
As described above, the present application adopts a semantic segmentation algorithm based on a Convolutional Neural Network (CNN) to perform tire image recognition. A convolutional neural network is a feedforward neural network that includes convolution calculations and has a deep structure, and it is one of the typical algorithms of deep learning. Convolutional neural networks have a feature learning ability and can perform translation-invariant classification of input information according to their hierarchical structure, for which reason they are also called "translation-invariant artificial neural networks".
After the semantic segmentation algorithm is obtained by pre-training, it can be used to identify the rim region and the tire region in the acquired tire image. The identified rim region and tire region are masked to form three regions of different colors, namely the rim region, the tire region and the background region, so that the three regions can be separated; that is, the rim region and the tire region in the tire image are obtained (the tire region referred to here is the tire area outside the rim region). Specifically, reference may be made to fig. 3 and fig. 4, where fig. 3 illustrates an original tire image obtained in an embodiment of the present application, and fig. 4 is the recognition result obtained with the pre-trained semantic segmentation algorithm.
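Once the network outputs a per-pixel class mask, the rim and tire regions can be separated as below; the class indices are an assumed labelling convention, not part of the patent.

```python
import numpy as np

RIM_CLASS, TIRE_CLASS = 1, 2   # assumed labels; 0 = background

def split_regions(mask):
    """Return boolean masks of the rim region and the tire region."""
    return mask == RIM_CLASS, mask == TIRE_CLASS

def count_pixels(region):
    """Pixel count used later for the scale factors and the deformation area."""
    return int(np.count_nonzero(region))
```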
S14: and detecting the rim area and the tire area to obtain pixel points of the rim area and pixel points of the tire area.
On the basis of step S13, the rim area and the tire area obtained by the identification may each be detected to obtain the pixel points of the rim area and the pixel points of the tire area, so as to facilitate identifying the tire deformation based on these pixel points.
S15: and calculating an image scale factor according to the pixel points of the rim area and the diameter of the rim.
Considering that during the deformation of the tire, mainly the tire region is deformed, but the rim region is not deformed, an image scale factor of the tire image, which is a scale factor between the size of the tire image and the size of the real tire, can be calculated based on the rim region.
Specifically, the rim diameter, i.e. the actual physical size of the rim diameter, may be obtained first, for example by identifying the sidewall specification in the tire image and reading the rim diameter from it. Then, the image scale factor can be calculated according to the pixel points of the rim area and the rim diameter.
S16: and calculating the deformation of the tire according to the pixel points of the tire area and the image scale factor.
Based on steps S14 and S15, the tire deformation amount may be calculated according to the pixel points of the tire area and the image scale factor, so as to realize quantification of the tire deformation amount, thereby obtaining a specific value of the tire deformation.
In addition, in the above process the tire image is acquired by the traffic camera and the road side camera, and the tire deformation is recognized from the acquired tire image without using a contact-type tire sensor, so the limitations and complexity of tire deformation identification can be reduced, its application range and convenience can be improved, and the identification cost can be reduced.
According to the technical scheme, the lane of the target vehicle is recognized first, the road side camera is then automatically focused according to that lane, and the automatically focused road side camera is used to acquire a high-resolution tire image, which facilitates improving the accuracy of tire deformation identification. The acquired tire image is then identified with a pre-trained semantic segmentation algorithm to obtain the rim area and the tire area; the rim area and the tire area are detected to obtain the pixel points of the rim area and the pixel points of the tire area; an image scale factor is calculated according to the pixel points of the rim area and the rim diameter; and the tire deformation is calculated according to the pixel points of the tire area and the image scale factor. In this way the tire deformation is identified in a non-contact manner from the tire image acquired by the automatically focused road side camera, without using a tire sensor. Therefore, the convenience of tire deformation identification can be improved, the identification cost can be reduced, the tire deformation can be quantified automatically, and the accuracy of tire deformation identification can be improved by means of deep learning.
In the tire deformation amount identification method based on visual feedback and a depth network provided by the embodiment of the application, detecting the rim area and the tire area to obtain the pixel points of the rim area and the pixel points of the tire area may include the following steps:
carrying out edge detection on a rim area and a tire area to obtain rim edge pixel points and tire edge pixel points;
according to the pixel point of the rim area and the diameter of the rim, calculating an image scale factor can include:
selecting any three pixel points from the rim edge pixel points, and acquiring the coordinates of the selected pixel points;
calculating the coordinates of the circle center of the rim and the number of the pixel points on the diameter of the rim according to the selected coordinates of the pixel points;
and calculating a first scale factor by using the diameter of the rim and the number of pixel points on the diameter of the rim.
In the application, when the rim area and the tire area are detected to obtain the pixel points of the rim area and the pixel points of the tire area, the sub-pixel extraction can be specifically carried out on the rim area and the tire area so as to accurately detect the edge outline of the rim area and the tire area. The algorithms commonly used for the sub-pixels include a difference method, a curve fitting method and a moment-based edge detection algorithm. The method and the device can utilize the sub-pixel edge detection algorithm of Gaussian fitting to carry out edge detection on the rim area and the tire area so as to obtain the rim edge pixel points and the tire edge pixel points, and the algorithm has the characteristics of high precision, short operation time and the like. According to the algorithm, firstly, Canny operators are used for roughly extracting the edge of a rim area and the edge of a tire area, and then, a one-dimensional Gaussian function fitting formula is used for further converting pixel-level information into sub-pixel-level information. After the sub-pixel edge detection algorithm of Gaussian fitting is operated, the rim edge pixel points and the tire edge pixel points (namely, the pixel points are specifically sub-pixel points) can be accurately detected.
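A simplified sketch of this coarse-then-refine edge extraction follows; the three-point Gaussian interpolation of the gradient magnitude along the x direction is an assumed stand-in for the one-dimensional Gaussian fitting formula, which is not spelled out above.

```python
import cv2
import numpy as np

def subpixel_edges(gray):
    """Coarse Canny edges refined to sub-pixel positions along x.

    `gray` is an 8-bit grayscale tire image; returns (x, y) points as floats.
    """
    edges = cv2.Canny(gray, 50, 150)                      # rough edge extraction
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1)
    mag = np.hypot(gx, gy) + 1e-9                         # gradient magnitude
    points = []
    ys, xs = np.nonzero(edges)
    for y, x in zip(ys, xs):
        if 0 < x < gray.shape[1] - 1:
            # Gaussian (log-parabola) interpolation of the magnitude profile
            g0, g1, g2 = np.log(mag[y, x - 1]), np.log(mag[y, x]), np.log(mag[y, x + 1])
            denom = g0 - 2.0 * g1 + g2
            dx = 0.5 * (g0 - g2) / denom if abs(denom) > 1e-9 else 0.0
            points.append((x + dx, float(y)))
    return points
```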
On this basis, when calculating the image scale factor according to the pixel points of the rim area and the rim diameter, any three pixel points can be selected from the rim edge pixel points and their coordinates obtained; the coordinates of the three selected pixel points can be denoted $(x_1, y_1)$, $(x_2, y_2)$ and $(x_3, y_3)$. These coordinates are expressed in a coordinate system whose origin is the upper left corner of the tire image, whose x-axis and y-axis follow the image directions, and whose unit length is one pixel point; reference may be made to fig. 5, which shows a schematic diagram of calculating the first scale factor provided in the embodiment of the present application. Then, the coordinates $(x_c, y_c)$ of the rim circle center and the number of pixel points on the rim diameter are calculated from the coordinates of the three selected pixel points, and the rim diameter is divided by the number of pixel points on the rim diameter to obtain the first scale factor, i.e. the first scale factor represents the actual length corresponding to one pixel point. In fig. 5, d is the rim diameter in the tire image; since the unit length of the coordinate system is one pixel point, d can be regarded as the number of pixel points on the rim diameter. The first scale factor can be accurately calculated through this process, and on this basis the tire deformation can be calculated from the tire edge pixel points and the first scale factor, so that the accuracy of tire deformation calculation is improved. A sketch of the circle-fitting and scale-factor computation is given below.
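The following sketch fits the rim circle through three edge points and derives the first scale factor; the function names are illustrative only.

```python
import numpy as np

def rim_circle_from_points(p1, p2, p3):
    """Circle centre (xc, yc) and diameter d (in pixels) through three rim-edge points.

    Coordinates follow the convention above: origin at the upper left corner of
    the tire image, unit length of one pixel.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a = np.array([[2.0 * (x2 - x1), 2.0 * (y2 - y1)],
                  [2.0 * (x3 - x1), 2.0 * (y3 - y1)]])
    b = np.array([x2**2 - x1**2 + y2**2 - y1**2,
                  x3**2 - x1**2 + y3**2 - y1**2])
    xc, yc = np.linalg.solve(a, b)        # assumes the three points are not collinear
    d_pixels = 2.0 * np.hypot(x1 - xc, y1 - yc)
    return (xc, yc), d_pixels

def first_scale_factor(rim_diameter_mm, d_pixels):
    """Actual length represented by one pixel point (e.g. mm per pixel)."""
    return rim_diameter_mm / d_pixels
```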
The tire deformation amount identification method based on visual feedback and depth network provided by the embodiment of the application calculates the tire deformation amount according to the pixel points and the image scale factors of the tire area, and can include the following steps:
determining a tire contact central point and a highest edge point of an upper edge of the tire according to the circle center of a rim;
respectively acquiring a first preset number of tire upper edge pixel points from two sides of the highest edge point, and forming a tire upper edge point group by the acquired tire upper edge pixel points and the highest edge point;
calculating the mean number of pixel points between the tire upper edge point group and the rim circle center according to the tire upper edge point group;
on the contact length between the tire and the ground, respectively obtaining a second preset number of tire lower edge pixel points from the two sides of the tire contact center point, and forming a tire lower edge point group from the obtained tire lower edge pixel points and the tire contact center;
calculating the minimum number of pixel points between the tire lower edge point group and the rim circle center according to the tire lower edge point group;
calculating the number of pixel points of the vertical deflection of the tire according to the mean number of pixel points and the minimum number of pixel points;
and obtaining the vertical deflection of the tire according to the number of the pixel points of the vertical deflection of the tire and the first scale factor.
On the basis of obtaining the rim edge pixel point, the rim circle center coordinate and the first scale factor, when the tire deformation is calculated according to the pixel point and the image scale factor of the tire area, the vertical deflection of the tire can be obtained through calculation.
Referring to fig. 6, which shows a schematic diagram of calculating the vertical deflection of the tire according to the embodiment of the present application, a vertical line may first be drawn through the rim circle center: the intersection point of this vertical line with the upper edge of the tire is the highest edge point of the tire upper edge (its coordinates are obtained at the same time), and the intersection point with the lower edge of the tire is the tire contact center, i.e. the center of the contact between the tire and the ground (its coordinates are also obtained). Then, following the tire outer contour and taking the highest edge point of the tire upper edge as the center, a first preset number of tire upper edge pixel points can be obtained on each of the left and right sides (their coordinates can be obtained at the same time), and all of the obtained tire upper edge pixel points together with the highest edge point form the tire upper edge point group; that is, the number of pixel points contained in the tire upper edge point group is twice the first preset number plus 1. The first preset number may be set as needed; fig. 6 illustrates a first preset number of 5, and other values may of course also be used, which is not limited in this application.
The tire upper edge point group can be used to calculate the mean distance $L_1$ between the tire upper edge point group and the rim circle center:

$$L_1 = \frac{1}{n}\sum_{i=1}^{n}\sqrt{(x_i - x_c)^2 + (y_i - y_c)^2}$$

where $n$ is the number of tire edge pixel points contained in the tire upper edge point group and $(x_i, y_i)$ are the coordinates of the i-th tire edge pixel point in the group. Since the unit length of the coordinate system is one pixel point, the mean distance $L_1$ can also be regarded as the mean number of pixel points between the tire upper edge point group and the rim circle center.
In addition, on the contact length between the tire and the ground, a second preset number of tire lower edge pixel points may be obtained on each side of the tire contact center point (their coordinates may be obtained at the same time), and all of the obtained tire lower edge pixel points together with the tire contact center form the tire lower edge point group; that is, the number of tire lower edge pixel points contained in the group is twice the second preset number plus 1. The second preset number may be equal to the first preset number, or may differ from it, which is not limited in this application. Then, the minimum distance $L_2$ between the tire lower edge point group and the rim circle center can be calculated:

$$L_2 = \min_{1 \le i \le n}\sqrt{(x_i - x_c)^2 + (y_i - y_c)^2}$$

where $n$ is the number of tire edge pixel points contained in the tire lower edge point group and $(x_i, y_i)$ are the coordinates of the i-th tire edge pixel point in the group. Since the unit length of the coordinate system is one pixel point, the minimum distance $L_2$ can also be regarded as the minimum number of pixel points between the tire lower edge point group and the rim circle center.
Then, the minimum number of pixel points between the tire lower edge point group and the rim circle center is subtracted from the mean number of pixel points between the tire upper edge point group and the rim circle center to obtain the number of pixel points of the vertical deflection of the tire, and this number is multiplied by the first scale factor to obtain the vertical deflection of the tire.
In this process, tire edge point groups are formed, and the mean number of pixel points between the tire upper edge point group and the rim circle center and the minimum number of pixel points between the tire lower edge point group and the rim circle center are calculated from these groups, so that the influence of accidental errors is avoided and the accuracy of the vertical deflection calculation is improved. A sketch of this computation is given below.
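The $L_1$/$L_2$ computation can be expressed compactly as follows; the point groups and the rim center are assumed to be available from the previous steps.

```python
import numpy as np

def vertical_deflection(upper_edge_points, lower_edge_points, rim_center, scale_factor):
    """Vertical deflection = (mean distance of the upper edge group to the rim centre
    minus minimum distance of the lower edge group) times the first scale factor."""
    xc, yc = rim_center

    def distances(points):
        pts = np.asarray(points, dtype=float)
        return np.hypot(pts[:, 0] - xc, pts[:, 1] - yc)

    l1 = distances(upper_edge_points).mean()   # mean pixel distance L1
    l2 = distances(lower_edge_points).min()    # minimum pixel distance L2
    return (l1 - l2) * scale_factor
```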
The tire deformation amount identification method based on visual feedback and depth network provided by the embodiment of the application calculates the tire deformation amount according to the pixel points and the image scale factors of the tire area, and can include the following steps:
determining a tire contact central point according to the circle center of the rim;
respectively selecting tire lower edge pixel points from the two sides of the tire contact center point, and calculating the slope between each tire lower edge pixel point and the tire contact center point;
comparing the slope between each selected tire lower edge pixel point and the tire contact center point with a threshold, determining the tire lower edge pixel point on the left side of the tire contact center point whose slope with the tire contact center point is greater than or equal to the threshold as a left critical point, and determining the tire lower edge pixel point on the right side of the tire contact center point whose slope with the tire contact center point is greater than or equal to the threshold as a right critical point;
and determining the number of pixel points between the left critical point and the right critical point, and calculating the contact length between the tire and the ground according to the number of the pixel points between the left critical point and the right critical point and the first scale factor.
In the application, on the basis of obtaining the rim edge pixel point, the rim circle center coordinate and the first scale factor, when the tire deformation is calculated according to the pixel point and the image scale factor of the tire area, the vertical deflection of the tire can be calculated, and the contact length between the tire and the ground can also be calculated.
Referring to fig. 7, which shows a schematic diagram of calculating the contact length between the tire and the ground according to an embodiment of the present application, the tire contact center point may first be determined according to the rim circle center; specifically, a vertical line is drawn through the rim circle center, and the intersection point of the vertical line and the lower edge of the tire is the tire contact center (its coordinates are obtained at the same time). Then, with the tire contact center as the initial pixel point, the critical points of the tire contact length are screened along the running direction of the tire. Specifically, tire lower edge pixel points are selected on each side of the initial pixel point (within its neighborhood). Each time a tire lower edge pixel point is selected, its coordinates are obtained, the slope between this pixel point and the tire contact center point is calculated, and the calculated slope is compared with a preset threshold. If the slope is smaller than the threshold, the selection of tire lower edge pixel points continues outward from the tire contact center point; if the slope is greater than or equal to the threshold, the tire lower edge pixel point on the left side of the tire contact center point whose slope is greater than or equal to the threshold is determined as the left critical point of the contact between the tire and the ground, and the tire lower edge pixel point on the right side of the tire contact center point whose slope is greater than or equal to the threshold is determined as the right critical point of the contact between the tire and the ground. The threshold value may specifically be tan(±1°), and may be adjusted as needed.
In order to improve the accuracy of obtaining the left and right critical points, the following rule may be used. For the left critical point: as long as the slopes between a third preset number of consecutive tire lower edge pixel points on the left side of the tire contact center point and the tire contact center point are all smaller than the threshold, the selection of tire lower edge pixel points continues along the left side of the tire contact center point; when the slopes between a fourth preset number of consecutive tire lower edge pixel points and the tire contact center point are all greater than or equal to the threshold, the tire lower edge pixel point among these that is closest to the tire contact center point is taken as the left critical point of the tire. The third preset number and the fourth preset number may both be 3, and may of course be adjusted as needed. The determination of the right critical point is similar: as long as the slopes between a third preset number of consecutive tire lower edge pixel points on the right side of the tire contact center point and the tire contact center point are all smaller than the threshold, the selection continues along the right side; when the slopes between a fourth preset number of consecutive tire lower edge pixel points and the tire contact center point are all greater than or equal to the threshold, the tire lower edge pixel point among these that is closest to the tire contact center point is taken as the right critical point of the tire. This procedure avoids the influence of accidental errors, thereby improving the accuracy of obtaining the left and right critical points and, in turn, the accuracy of calculating the contact length between the tire and the ground.
After the left and right critical points of the tire-ground contact are determined, the number of pixel points between them can be calculated from their coordinates; this is the number of pixel points corresponding to the contact length between the tire and the ground. Multiplying this pixel count by the first scale factor yields the contact length, so that the contact length between the tire and the ground is quantified without using any contact equipment.
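For illustration, the following Python sketch implements the slope-based screening described above: it scans the tire lower-edge pixels outward from the tire contact center, applies the consecutive-count rule to locate the left and right critical points, and converts the pixel count between them into a physical contact length. The function names, the list-of-coordinates representation of the lower edge and the default values (threshold tan(1°), three consecutive points) are assumptions made for this sketch, not values fixed by the application.

import math

def find_critical_point(lower_edge, center, direction,
                        slope_threshold=math.tan(math.radians(1.0)),
                        consecutive_needed=3):
    # lower_edge: list of (x, y) tire lower-edge pixels; center: (x, y) tire contact center.
    # direction: -1 scans to the left of the center, +1 scans to the right.
    cx, cy = center
    side = [p for p in lower_edge if (p[0] - cx) * direction > 0]
    side.sort(key=lambda p: abs(p[0] - cx))          # ordered outward from the center
    run = []                                         # current run of points at/above the threshold
    for (x, y) in side:
        slope = abs((y - cy) / (x - cx))
        if slope >= slope_threshold:
            run.append((x, y))
            if len(run) >= consecutive_needed:
                return run[0]                        # point of the run closest to the center
        else:
            run = []                                 # isolated outlier: restart the run
    return side[-1] if side else center

def contact_length(lower_edge, center, first_scale_factor):
    # Contact length = pixel count between the two critical points x first scale factor.
    left = find_critical_point(lower_edge, center, direction=-1)
    right = find_critical_point(lower_edge, center, direction=+1)
    pixel_count = abs(right[0] - left[0]) + 1
    return pixel_count * first_scale_factor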
In the tire deformation amount identification method based on visual feedback and the depth network provided by the embodiment of the application, calculating the image scale factor according to the pixel points of the rim area and the rim diameter may include the following steps:
acquiring the number of pixel points contained in a rim area;
calculating the area of the rim by using the diameter of the rim;
obtaining a second scale factor according to the area of the rim and the number of pixel points contained in the rim area;
calculating the tire deformation according to the pixel points of the tire area and the image scale factor, which may include:
acquiring the number of pixel points contained in the upper half area of the tire and the number of pixel points contained in the lower half area of the tire, and calculating the difference value between the number of pixel points contained in the upper half area and the number of pixel points contained in the lower half area of the tire;
and calculating the deformation area of the tire according to the difference and the second scale factor.
In the application, when the deformation amount of the tire is identified, not only the vertical deflection of the tire and the contact length between the tire and the ground can be obtained, but also the deformation area of the tire can be obtained.
Specifically, when the image scale factor is calculated from the rim-area pixel points and the rim diameter, the number of pixel points contained in the rim area can also be obtained, and the area of the rim (its actual physical area) is calculated from the rim diameter. The area of the rim is then divided by the number of pixel points contained in the rim area to obtain the second scale factor, which therefore represents the actual area corresponding to one pixel point. Obtaining the second scale factor accurately in this way improves the accuracy of tire deformation amount identification.
On the basis of the second scale factor, when the tire deformation amount is calculated from the tire-area pixel points and the image scale factor, the rim circle center is first obtained (the obtaining process may refer to the detailed description of the corresponding part, which is not repeated here), and a horizontal line is drawn through the rim circle center. Referring to fig. 8, which shows a schematic diagram of calculating the deformation area of the tire provided in the embodiment of the present application, this horizontal line divides the tire area into an upper half and a lower half; since an undeformed tire is symmetric about this line, the upper half serves as a reference for the undeformed shape. The number of pixel points contained in the upper half and the number contained in the lower half are then obtained, and the lower-half count is subtracted from the upper-half count; this difference is the number of pixel points of the tire deformation area. Multiplying the difference by the second scale factor gives the deformation area of the tire, so that the deformation area is quantified and its calculation accuracy is improved.
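Assuming the semantic-segmentation output is available as binary masks (NumPy arrays) for the rim and tire regions, the second scale factor and the deformation area described above can be computed along the following lines; the mask-based representation and the helper names are illustrative assumptions.

import math
import numpy as np

def second_scale_factor(rim_mask, rim_diameter):
    # Physical area represented by one pixel: actual rim area / rim pixel count.
    rim_area = math.pi * (rim_diameter / 2.0) ** 2
    rim_pixels = int(np.count_nonzero(rim_mask))
    return rim_area / rim_pixels

def deformation_area(tire_mask, rim_center_row, scale_factor_2):
    # Deformation area = (upper-half pixel count - lower-half pixel count) x second scale factor.
    upper_pixels = int(np.count_nonzero(tire_mask[:rim_center_row, :]))   # above the horizontal line
    lower_pixels = int(np.count_nonzero(tire_mask[rim_center_row:, :]))   # below the horizontal line
    return (upper_pixels - lower_pixels) * scale_factor_2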
The tire deformation amount identification method based on visual feedback and the depth network provided by the embodiment of the application identifies the lane of the target vehicle according to the vehicle image, and comprises the following steps:
detecting the vehicle image by using a pre-trained target detection algorithm to obtain the position information of a vehicle target detection frame corresponding to a target vehicle;
and determining the lane of the target vehicle according to the position information of the vehicle target detection frame.
In the application, when the lane of the target vehicle is identified from the vehicle image, the vehicle image can be detected with a pre-trained target detection algorithm to obtain the position information of the vehicle target detection frame corresponding to the target vehicle, specifically the coordinates of its upper left, lower left, upper right and lower right corners; the vehicle type information of the target vehicle can also be obtained. Referring to fig. 9, which shows a schematic diagram of detecting all vehicles contained in a vehicle image with a pre-trained target detection algorithm according to an embodiment of the present application, the position information and vehicle type information of the detection frame corresponding to the target vehicle can then be taken from the detection results.
The process of pre-training the target detection algorithm comprises the following steps:
1) acquiring a vehicle running video shot by a traffic camera, extracting frames from the video, and establishing a data set containing images of various vehicle types; the data set may contain 3000 vehicle type images covering cars, SUVs, buses, vans, pick-up trucks, passenger cars, light trucks, heavy trucks, special vehicles and tractor-trailers, so that the target detection algorithm obtained from the final training can detect various vehicle types;
2) marking the vehicle type images in the data set, wherein the marked rectangular frame is tightly attached to the edge of the vehicle, so that the accuracy of vehicle type identification can be improved;
3) dividing the labeled data set into a training set, a verification set and a test set, wherein the proportion is 60%, 20% and 20%, and taking 3000 vehicle type images in the data set as an example, the training set, the verification set and the test set respectively comprise 1800 vehicle type images, 600 vehicle type images and 600 vehicle type images;
4) commonly used target detection algorithms include single-stage methods such as SSD and the YOLO series, and two-stage methods such as the R-CNN, Fast R-CNN and Faster R-CNN networks. A single-stage method feeds the whole image into a deep network and directly regresses the object class and box position. A two-stage method first generates candidate regions containing target objects with a CNN, and then predicts the class and box position for each candidate region. Any of these target detection algorithms may be selected for vehicle type recognition; YOLOv5 is chosen here because its high detection speed and high detection precision meet the requirement of real-time vehicle detection. The YOLOv5 network framework includes two major parts: (1) a backbone network, which performs convolution operations on the input image and extracts image features; (2) a prediction part, which up-samples the extracted features and outputs the position and category of each object. The specific steps are as follows: fit the YOLOv5 deep learning network model with the vehicle type training set, adjust the model parameters with the validation set, and evaluate the generalization ability of the model with the test set to obtain the trained target detection algorithm, i.e., the target detection algorithm is trained in a deep-learning manner. The trained algorithm is then used to detect the selected vehicle running image to be detected, so as to obtain the position information of the vehicle target detection frame corresponding to the target vehicle. A hedged sketch of the dataset split and training step is given below.
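As an illustration of steps 3) and 4), the sketch below splits a labelled image folder 60%/20%/20% and notes how training could be launched through the public Ultralytics YOLOv5 repository; the directory layout, file names, class list and hyper-parameters are assumptions for illustration and are not taken from the application.

import random
import shutil
from pathlib import Path

def split_dataset(image_dir="dataset/images", out_dir="dataset_split", seed=0):
    # Split labelled vehicle-type images into train/val/test at 60%/20%/20%.
    images = sorted(Path(image_dir).glob("*.jpg"))
    random.Random(seed).shuffle(images)
    n = len(images)
    splits = {"train": images[:int(0.6 * n)],
              "val": images[int(0.6 * n):int(0.8 * n)],
              "test": images[int(0.8 * n):]}
    for name, files in splits.items():
        target = Path(out_dir) / name
        target.mkdir(parents=True, exist_ok=True)
        for f in files:
            shutil.copy(f, target / f.name)

# Training could then be launched from a local clone of the Ultralytics YOLOv5
# repository with a data.yaml listing the ten vehicle classes, e.g.:
#   python train.py --img 640 --batch 16 --epochs 100 --data vehicle.yaml --weights yolov5s.pt
# (command-line options as documented by that repository; the values here are placeholders)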
After the position information of the vehicle target detection frame corresponding to the target vehicle is obtained, the center point of the detection frame can be determined from the coordinates of its upper left, lower left, upper right and lower right corners: the upper left and lower right corners are connected to form a first straight line, the lower left and upper right corners are connected to form a second straight line, and the intersection of the two straight lines is the center point of the vehicle target detection frame. The lane in which this center point lies is then determined to be the lane of the target vehicle.
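The centre-point construction and lane lookup just described can be written compactly as follows; the corner ordering and the lane-boundary representation (an x-interval per lane in image coordinates) are assumptions of this sketch.

def box_center(top_left, bottom_left, top_right, bottom_right):
    # Intersection of the diagonal through (top_left, bottom_right) with the
    # diagonal through (bottom_left, top_right); for an axis-aligned detection
    # frame this coincides with the midpoint of the frame.
    (x1, y1), (x2, y2) = top_left, bottom_right
    (x3, y3), (x4, y4) = bottom_left, top_right
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / denom
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

def lane_of_vehicle(center, lane_bounds):
    # lane_bounds: dict mapping lane id -> (x_min, x_max) in image coordinates.
    cx, _ = center
    for lane_id, (x_min, x_max) in lane_bounds.items():
        if x_min <= cx < x_max:
            return lane_id
    return None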
Through the above process the lane of the target vehicle is determined in a deep-learning-based manner, which improves the accuracy of lane determination and, in turn, the accuracy of setting the focal length of the camera located on the side of the road. In addition, combined with the above process, tire deformation amount identification can be performed based on a multi-scale depth network and dual visual feedback.
In the tire deformation amount identification method based on visual feedback and the depth network provided by the embodiment of the application, after the tire deformation amount is calculated according to the pixel points of the tire area and the image scale factor, the method further comprises:
judging whether the tire deformation amount is within the corresponding preset range, and if not, giving an early warning.
In the present application, after the tire deformation amount is calculated from the tire-area pixel points and the image scale factor, it can be judged whether the calculated deformation amount lies within the corresponding preset range, where the preset range is determined according to the normal tire deformation amount. If the deformation amount lies within the range, the tire deformation is determined to be normal; if not, the deformation is too large or too small (specifically, the deformation amount can be compared with the boundaries of the preset range: a value below the lower boundary indicates that the deformation is too small, and a value above the upper boundary indicates that it is too large), and an early warning is issued. The warning can be shown on a display screen of the vehicle or on the mobile terminal of the vehicle's driver, so that the relevant personnel learn of the warning prompt in time and handle it promptly.
It should be noted that, when the tire deformation is specifically the vertical deflection of the tire, the preset range is the vertical deflection preset range corresponding to the vertical deflection of the tire; when the tire deformation is specifically the contact length between the tire and the ground, the preset range is the preset range of the ground contact length corresponding to the contact length between the tire and the ground; when the tire deformation amount is specifically the deformation area of the tire, the preset range is the deformation area preset range corresponding to the deformation area of the tire, so that comparison and determination can be conveniently carried out.
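A simple range check along the lines of this early-warning step might look as follows; the preset bounds and the warning text are placeholder assumptions, to be replaced by the ranges determined from the normal tire deformation amounts.

PRESET_RANGES = {
    # assumed example bounds, one (lower, upper) pair per deformation quantity
    "vertical_deflection": (0.0, 25.0),    # mm
    "contact_length": (100.0, 260.0),      # mm
    "deformation_area": (0.0, 4000.0),     # mm^2
}

def check_deformation(kind, value):
    # Returns None if the value is within its preset range, otherwise a warning message.
    lower, upper = PRESET_RANGES[kind]
    if value < lower:
        return f"{kind} too small ({value:.1f} < {lower:.1f}): early warning"
    if value > upper:
        return f"{kind} too large ({value:.1f} > {upper:.1f}): early warning"
    return None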
The embodiment of the present application further provides a tire deformation amount identification device based on visual feedback and a depth network, and referring to fig. 10, it shows a schematic structural diagram of a tire deformation amount identification device based on visual feedback and a depth network provided in the embodiment of the present application, and the tire deformation amount identification device may include:
the acquisition module 21 is configured to acquire a vehicle image of a target vehicle located in a pre-established virtual detection area by using a traffic camera;
the focusing module 22 is used for identifying the lane of the target vehicle according to the vehicle image and focusing the road side camera according to the preset relationship between the lane and the focal length of the road side camera and the lane of the target vehicle;
the first identification module 23 is configured to acquire a tire image of a target vehicle by using the focused road side camera, and identify the tire image by using a semantic segmentation algorithm obtained through pre-training to obtain a rim area and a tire area;
the detection module 24 is configured to detect a rim area and a tire area to obtain pixel points of the rim area and pixel points of the tire area;
the first calculation module 25 is used for calculating an image scale factor according to the pixel points of the rim area and the diameter of the rim;
and the second calculation module 26 is configured to calculate a tire deformation amount according to the pixel points of the tire area and the image scale factor.
In the tire deformation amount recognition apparatus based on visual feedback and depth network provided in the embodiment of the present application, the detection module 24 may include:
the first detection unit is used for carrying out edge detection on a rim area and a tire area to obtain rim edge pixel points and tire edge pixel points;
the first calculation module 25 may include:
the rim edge pixel acquisition device comprises a first acquisition unit, a second acquisition unit and a control unit, wherein the first acquisition unit is used for selecting any three pixel points from rim edge pixel points and acquiring the coordinates of the selected pixel points;
the first calculation unit is used for calculating the coordinates of the circle center of the rim and the number of the pixels on the diameter of the rim according to the selected coordinates of the pixels;
and the second calculation unit is used for calculating the first scale factor by utilizing the diameter of the rim and the number of pixel points on the diameter of the rim.
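For reference, the computation carried out by these units can be sketched as follows: the rim circle center is the circumcenter of any three rim-edge pixels, and the first scale factor is the physical rim diameter divided by the rim diameter expressed in pixels. The function and variable names are illustrative assumptions.

import math

def circle_from_three_points(p1, p2, p3):
    # Center and radius of the circle passing through three non-collinear pixel points.
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2.0 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    ux = ((x1**2 + y1**2) * (y2 - y3) + (x2**2 + y2**2) * (y3 - y1)
          + (x3**2 + y3**2) * (y1 - y2)) / d
    uy = ((x1**2 + y1**2) * (x3 - x2) + (x2**2 + y2**2) * (x1 - x3)
          + (x3**2 + y3**2) * (x2 - x1)) / d
    radius = math.hypot(x1 - ux, y1 - uy)
    return (ux, uy), radius

def first_scale_factor(rim_diameter, rim_diameter_pixels):
    # Physical length represented by one pixel along the rim diameter.
    return rim_diameter / rim_diameter_pixels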
In an embodiment of the present application, the tire deformation amount identification device based on visual feedback and a depth network, the second calculation module 26 may include:
the first determining unit is used for determining a tire contact center point and a highest edge point of an upper edge of the tire according to the circle center of a rim;
the second acquisition unit is used for respectively acquiring a first preset number of tire upper edge pixel points from two sides of the highest edge point, and forming the acquired tire upper edge pixel points and the highest edge point into a tire upper edge point group;
the third calculating unit is used for calculating the number of mean pixel points between the upper edge point group of the tire and the circle center of the rim according to the upper edge point group of the tire;
the third obtaining unit is used for obtaining a second preset number of lower edge pixel points of the tire from two sides of the contact center point of the tire on the contact length of the tire and the ground, and forming a lower edge point group of the tire by the obtained lower edge pixel points of the tire and the contact center of the tire;
the fourth calculation unit is used for calculating the minimum number of pixel points between the tire lower edge point group and the rim center according to the tire lower edge point group;
the fifth calculation unit is used for calculating the number of pixel points of the vertical deflection of the tire according to the number of the average pixel points and the minimum pixel points;
and obtaining a vertical deflection unit for obtaining the vertical deflection of the tire according to the number of the pixel points of the vertical deflection of the tire and the first scale factor.
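The vertical-deflection units listed above can be summarised by the sketch below, in which the point groups are lists of (x, y) pixel coordinates and distances to the rim circle center are measured in pixels; treating the "mean" and "minimum" pixel counts as mean and minimum pixel distances is an assumption of this illustration.

import math

def vertical_deflection(upper_edge_group, lower_edge_group, rim_center, scale_factor_1):
    # Vertical deflection = (mean upper-edge distance - minimum lower-edge distance) x first scale factor.
    cx, cy = rim_center
    upper = [math.hypot(x - cx, y - cy) for (x, y) in upper_edge_group]
    lower = [math.hypot(x - cx, y - cy) for (x, y) in lower_edge_group]
    mean_pixels = sum(upper) / len(upper)
    min_pixels = min(lower)
    return (mean_pixels - min_pixels) * scale_factor_1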
In an embodiment of the present application, the tire deformation amount identification device based on visual feedback and a depth network, the second calculation module 26 may include:
the second determining unit is used for determining the tire contact center point according to the circle center of the rim;
the sixth calculating unit is used for respectively selecting tire lower edge pixel points from two sides of the tire contact center point and calculating the slope between each tire lower edge pixel point and the tire contact center point;
the third determining unit is used for comparing the slope between each selected tire lower edge pixel point and the tire contact center point with a threshold, determining the tire lower edge pixel point on the left side of the tire contact center point whose slope to the tire contact center point is greater than or equal to the threshold as a left critical point, and determining the tire lower edge pixel point on the right side of the tire contact center point whose slope to the tire contact center point is greater than or equal to the threshold as a right critical point;
and the seventh calculating unit is used for determining the number of pixel points between the left critical point and the right critical point and calculating the contact length between the tire and the ground according to the number of the pixel points between the left critical point and the right critical point and the first scale factor.
In the tire deformation identification apparatus based on visual feedback and depth network provided in the embodiment of the present application, the first calculation module 25 may include:
the fourth obtaining unit is used for obtaining the number of pixel points contained in the rim area;
an eighth calculation unit for calculating an area of the rim using the rim diameter;
the ninth calculation unit is used for obtaining a second scale factor according to the area of the rim and the number of pixel points contained in the rim area;
the second calculation module 26 may include:
the fifth acquiring unit is used for acquiring the number of pixel points contained in the upper half area of the tire and the number of pixel points contained in the lower half area of the tire, and calculating the difference value between the number of pixel points contained in the upper half area and the number of pixel points contained in the lower half area of the tire;
and the tenth calculating unit is used for calculating the deformation area of the tire according to the difference value and the second scale factor.
According to the tire deformation amount recognition device based on visual feedback and depth network provided by the embodiment of the application, the focusing module 22 may include:
the second detection unit is used for detecting the vehicle image by utilizing a pre-trained target detection algorithm to obtain the position information of a vehicle target detection frame corresponding to the target vehicle;
and the fourth determining unit is used for determining the lane of the target vehicle according to the position information of the vehicle target detection frame.
The tire deformation amount recognition device based on visual feedback and the depth network provided by the embodiment of the application can further comprise:
and the judging module is used for judging whether the tire deformation is in the corresponding preset range or not after calculating the tire deformation according to the pixel points and the image scale factors of the tire area, and if not, carrying out early warning.
The embodiment of the present application further provides a tire deformation amount identification device based on visual feedback and a depth network, and referring to fig. 11, it shows a schematic structural diagram of a tire deformation amount identification device based on visual feedback and a depth network provided in the embodiment of the present application, and the tire deformation amount identification device may include:
a memory 31 for storing a computer program;
the processor 32, when executing the computer program stored in the memory 31, may implement the following steps:
acquiring a vehicle image of a target vehicle in a pre-established virtual detection area by using a traffic camera; recognizing a lane of a target vehicle according to the vehicle image, and focusing a road side camera according to a preset relationship between the lane and a focal length of the road side camera and the lane of the target vehicle; acquiring a tire image of a target vehicle by using the focused road side camera, and identifying the tire image by using a semantic segmentation algorithm obtained by pre-training to obtain a rim area and a tire area; detecting a rim area and a tire area to obtain pixel points of the rim area and pixel points of the tire area; calculating an image scale factor according to pixel points of the rim area and the diameter of the rim; and calculating the deformation of the tire according to the pixel points of the tire area and the image scale factor.
An embodiment of the present application further provides a readable storage medium, in which a computer program is stored, and when the computer program is executed by a processor, the following steps may be implemented:
acquiring a vehicle image of a target vehicle in a pre-established virtual detection area by using a traffic camera; recognizing a lane of a target vehicle according to the vehicle image, and focusing a road side camera according to a preset relationship between the lane and a focal length of the road side camera and the lane of the target vehicle; acquiring a tire image of a target vehicle by using the focused road side camera, and identifying the tire image by using a semantic segmentation algorithm obtained by pre-training to obtain a rim area and a tire area; detecting a rim area and a tire area to obtain pixel points of the rim area and pixel points of the tire area; calculating an image scale factor according to pixel points of the rim area and the diameter of the rim; and calculating the deformation of the tire according to the pixel points of the tire area and the image scale factor.
The readable storage medium may include: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
For the description of the tire deformation amount identification device, the tire deformation amount identification equipment and the relevant parts in the computer-readable storage medium based on the visual feedback and the depth network, reference may be made to the detailed description of the corresponding parts in the tire deformation amount identification method based on the visual feedback and the depth network provided in the embodiments of the present application, and details are not repeated here.
It is noted that, herein, relational terms such as first and second are used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Furthermore, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. In addition, the parts of the technical solutions provided in the embodiments of the present application that are consistent with the implementation principles of corresponding technical solutions in the prior art are not described in detail, so as to avoid redundant description.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A tire deformation amount identification method based on visual feedback and a depth network, characterized by comprising:
acquiring, with a traffic camera, a vehicle image of a target vehicle located in a pre-established virtual detection area;
identifying the lane of the target vehicle according to the vehicle image, and focusing a road side camera according to a preset relationship between lanes and the focal length of the road side camera and the lane of the target vehicle;
acquiring a tire image of the target vehicle with the focused road side camera, and identifying the tire image with a pre-trained semantic segmentation algorithm to obtain a rim area and a tire area;
detecting the rim area and the tire area to obtain pixel points of the rim area and pixel points of the tire area;
calculating an image scale factor according to the pixel points of the rim area and the rim diameter;
calculating the tire deformation amount according to the pixel points of the tire area and the image scale factor.

2. The tire deformation amount identification method based on visual feedback and a depth network according to claim 1, characterized in that detecting the rim area and the tire area to obtain the pixel points of the rim area and the pixel points of the tire area comprises:
performing edge detection on the rim area and the tire area to obtain rim edge pixel points and tire edge pixel points;
and calculating the image scale factor according to the pixel points of the rim area and the rim diameter comprises:
selecting any three pixel points from the rim edge pixel points, and obtaining the coordinates of each selected pixel point;
calculating the coordinates of the rim circle center and the number of pixel points on the rim diameter according to the coordinates of the selected pixel points;
calculating a first scale factor using the rim diameter and the number of pixel points on the rim diameter.

3. The tire deformation amount identification method based on visual feedback and a depth network according to claim 2, characterized in that calculating the tire deformation amount according to the pixel points of the tire area and the image scale factor comprises:
determining the tire contact center point and the highest edge point of the tire upper edge according to the rim circle center;
obtaining a first preset number of tire upper edge pixel points from each side of the highest edge point, and forming a tire upper edge point group from the obtained tire upper edge pixel points and the highest edge point;
calculating the mean number of pixel points between the tire upper edge point group and the rim circle center according to the tire upper edge point group;
on the contact length between the tire and the ground, obtaining a second preset number of tire lower edge pixel points from each side of the tire contact center point, and forming a tire lower edge point group from the obtained tire lower edge pixel points and the tire contact center;
calculating the minimum number of pixel points between the tire lower edge point group and the rim circle center according to the tire lower edge point group;
calculating the number of pixel points of the tire vertical deflection according to the mean number of pixel points and the minimum number of pixel points;
obtaining the vertical deflection of the tire according to the number of pixel points of the tire vertical deflection and the first scale factor.

4. The tire deformation amount identification method based on visual feedback and a depth network according to claim 2, characterized in that calculating the tire deformation amount according to the pixel points of the tire area and the image scale factor comprises:
determining the tire contact center point according to the rim circle center;
selecting tire lower edge pixel points from each side of the tire contact center point, and calculating the slope between each tire lower edge pixel point and the tire contact center point;
comparing the slope between each selected tire lower edge pixel point and the tire contact center point with a threshold, determining the tire lower edge pixel point on the left side of the tire contact center point whose slope to the tire contact center point is greater than or equal to the threshold as a left critical point, and determining the tire lower edge pixel point on the right side of the tire contact center point whose slope to the tire contact center point is greater than or equal to the threshold as a right critical point;
determining the number of pixel points between the left critical point and the right critical point, and calculating the contact length between the tire and the ground according to the number of pixel points between the left critical point and the right critical point and the first scale factor.

5. The tire deformation amount identification method based on visual feedback and a depth network according to claim 1, characterized in that calculating the image scale factor according to the pixel points of the rim area and the rim diameter comprises:
obtaining the number of pixel points contained in the rim area;
calculating the area of the rim using the rim diameter;
obtaining a second scale factor according to the area of the rim and the number of pixel points contained in the rim area;
and calculating the tire deformation amount according to the pixel points of the tire area and the image scale factor comprises:
obtaining the number of pixel points contained in the upper half area of the tire and the number of pixel points contained in the lower half area of the tire, and calculating the difference between the number of pixel points contained in the upper half area and the number of pixel points contained in the lower half area;
calculating the deformation area of the tire according to the difference and the second scale factor.

6. The tire deformation amount identification method based on visual feedback and a depth network according to any one of claims 1 to 5, characterized in that identifying the lane of the target vehicle according to the vehicle image comprises:
detecting the vehicle image with a pre-trained target detection algorithm to obtain the position information of the vehicle target detection frame corresponding to the target vehicle;
determining the lane of the target vehicle according to the position information of the vehicle target detection frame.

7. The tire deformation amount identification method based on visual feedback and a depth network according to claim 6, characterized by further comprising, after calculating the tire deformation amount according to the pixel points of the tire area and the image scale factor:
judging whether the tire deformation amount is within the corresponding preset range, and if not, giving an early warning.

8. A tire deformation amount identification device based on visual feedback and a depth network, characterized by comprising:
an acquisition module, configured to acquire, with a traffic camera, a vehicle image of a target vehicle located in a pre-established virtual detection area;
a focusing module, configured to identify the lane of the target vehicle according to the vehicle image, and to focus a road side camera according to a preset relationship between lanes and the focal length of the road side camera and the lane of the target vehicle;
a first identification module, configured to acquire a tire image of the target vehicle with the focused road side camera, and to identify the tire image with a pre-trained semantic segmentation algorithm to obtain a rim area and a tire area;
a detection module, configured to detect the rim area and the tire area to obtain pixel points of the rim area and pixel points of the tire area;
a first calculation module, configured to calculate an image scale factor according to the pixel points of the rim area and the rim diameter;
a second calculation module, configured to calculate the tire deformation amount according to the pixel points of the tire area and the image scale factor.

9. A tire deformation amount identification apparatus based on visual feedback and a depth network, characterized by comprising:
a memory, configured to store a computer program;
a processor, configured to implement the steps of the tire deformation amount identification method based on visual feedback and a depth network according to any one of claims 1 to 7 when executing the computer program.

10. A readable storage medium, characterized in that a computer program is stored in the readable storage medium, and the computer program, when executed by a processor, implements the steps of the tire deformation amount identification method based on visual feedback and a depth network according to any one of claims 1 to 7.
CN202111319379.9A 2021-11-09 2021-11-09 Tire deformation identification method and device based on visual feedback and depth network Pending CN114092683A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111319379.9A CN114092683A (en) 2021-11-09 2021-11-09 Tire deformation identification method and device based on visual feedback and depth network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111319379.9A CN114092683A (en) 2021-11-09 2021-11-09 Tire deformation identification method and device based on visual feedback and depth network

Publications (1)

Publication Number Publication Date
CN114092683A true CN114092683A (en) 2022-02-25

Family

ID=80299622

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111319379.9A Pending CN114092683A (en) 2021-11-09 2021-11-09 Tire deformation identification method and device based on visual feedback and depth network

Country Status (1)

Country Link
CN (1) CN114092683A (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106340180A (en) * 2015-07-06 2017-01-18 北京文安智能技术股份有限公司 Vehicle-mounted illegal lane occupation behavior detecting method and device
CN105172494A (en) * 2015-09-18 2015-12-23 浙江宇视科技有限公司 Tyre safety detecting method and tyre safety detecting system
CN108528448A (en) * 2017-03-02 2018-09-14 比亚迪股份有限公司 Vehicle travels autocontrol method and device
CN109696133A (en) * 2017-10-24 2019-04-30 柯尼卡美能达株式会社 Squeegee action device for calculating and its method and overload detection system
CN110553594A (en) * 2018-05-31 2019-12-10 柯尼卡美能达株式会社 Image processing apparatus, overload detection system, and medium
CN112036413A (en) * 2020-09-04 2020-12-04 湖南大学 Vehicle weight determination method, device, equipment and computer readable storage medium
CN112464773A (en) * 2020-11-19 2021-03-09 浙江吉利控股集团有限公司 Road type identification method, device and system
CN112329747A (en) * 2021-01-04 2021-02-05 湖南大学 Vehicle parameter detection method based on video identification and deep learning and related device
CN112767471A (en) * 2021-01-04 2021-05-07 湖南大学 Tire ground contact area measuring method and device based on image feature extraction
CN113286096A (en) * 2021-05-19 2021-08-20 中移(上海)信息通信科技有限公司 Video recognition method and system

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112861401A (en) * 2021-02-03 2021-05-28 湖南大学 Vehicle weight identification method, device, equipment and storage medium based on simulation analysis
CN115497058A (en) * 2022-09-02 2022-12-20 东南大学 A non-contact vehicle weighing method based on multispectral imaging technology
CN115497058B (en) * 2022-09-02 2023-09-01 东南大学 Non-contact vehicle weighing method based on multispectral imaging technology
CN115496754A (en) * 2022-11-16 2022-12-20 深圳佰维存储科技股份有限公司 Curvature detection method and device of SSD, readable storage medium and electronic equipment
CN115496754B (en) * 2022-11-16 2023-04-11 深圳佰维存储科技股份有限公司 Curvature detection method and device of SSD, readable storage medium and electronic equipment
CN119206646A (en) * 2024-11-27 2024-12-27 杭州海康威视数字技术股份有限公司 Road surface element identification method and related equipment
CN119206646B (en) * 2024-11-27 2025-04-08 杭州海康威视数字技术股份有限公司 Road surface element identification method and related equipment

Similar Documents

Publication Publication Date Title
CN114092683A (en) Tire deformation identification method and device based on visual feedback and depth network
US11989951B2 (en) Parking detection method, system, processing device and storage medium
CN107577988B (en) Method, device, storage medium and program product for realizing side vehicle positioning
US9384401B2 (en) Method for fog detection
US20180150704A1 (en) Method of detecting pedestrian and vehicle based on convolutional neural network by using stereo camera
CN105206109B (en) A kind of vehicle greasy weather identification early warning system and method based on infrared CCD
CN110298300B (en) Method for detecting vehicle illegal line pressing
CN112349144A (en) Monocular vision-based vehicle collision early warning method and system
CN107066986A (en) A kind of lane line based on monocular vision and preceding object object detecting method
CN107985189B (en) Depth warning method for driver's lane change in high-speed driving environment
CN109632037B (en) Urban waterlogging depth detection method based on image intelligent recognition
CN108830131B (en) Deep learning-based traffic target detection and ranging method
CN110544211A (en) A detection method, system, terminal and storage medium for lens attached objects
CN110991264A (en) Front vehicle detection method and device
CN108960083B (en) Automatic driving target classification method and system based on multi-sensor information fusion
CN107229906A (en) A kind of automobile overtaking's method for early warning based on units of variance model algorithm
CN108021926A (en) A kind of vehicle scratch detection method and system based on panoramic looking-around system
CN112307840A (en) Indicator light detection method, device, equipment and computer readable storage medium
US7561721B2 (en) System and method for range measurement of a preceding vehicle
CN111539279B (en) Road height limit detection method, device, equipment and storage medium
CN111950478B (en) Method for detecting S-shaped driving behavior of automobile in weighing area of dynamic flat-plate scale
Zhang et al. Noncontact measurement of tire deformation based on computer vision and Tire-Net semantic segmentation
CN110658353B (en) Method and device for measuring speed of moving object and vehicle
CN102043941B (en) Dynamic real-time relative relationship recognition method and system
CN105740801A (en) A camera-based car merging assisted driving method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination