
CN113238560A - Robot map rotating method based on line segment information - Google Patents

Robot map rotating method based on line segment information

Info

Publication number
CN113238560A
Authority
CN
China
Prior art keywords
line segment
pixel
edge
map
gradient
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110567138.XA
Other languages
Chinese (zh)
Inventor
陈柏宇
孙明
熊坤
周和文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Amicro Semiconductor Co Ltd
Original Assignee
Zhuhai Amicro Semiconductor Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Amicro Semiconductor Co Ltd filed Critical Zhuhai Amicro Semiconductor Co Ltd
Priority to CN202110567138.XA priority Critical patent/CN113238560A/en
Publication of CN113238560A publication Critical patent/CN113238560A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0276Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Image Analysis (AREA)

Abstract


Figure 202110567138

The invention discloses a method for a robot to rotate a map based on line segment information. The method includes the following steps. S1: binarize a grid map to obtain a binarized map, and mark the pixels where obstacles and non-obstacles are located. S2: obtain the gradient of each pixel on an obstacle edge from the marking information, and obtain the direction of the obstacle edge at that pixel from the direction of the gradient. S3: form several pixel sets according to the pixel gradients and the obstacle-edge directions, then extract a line segment from each set. S4: select the optimal line segment from the extracted segments, derive a rotation angle from it, and rotate the grid map by that angle. The optimal line segment is extracted from the obstacles in the binarized map and used to bring the map into a visually horizontal state, so that the robot can locate itself and plan routes quickly and accurately while working.


Description

Robot map rotating method based on line segment information
Technical Field
The invention relates to the technical field of intelligent robots, in particular to a robot map rotating method based on line segment information.
Background
Before moving, a mobile robot first builds a map and localizes itself, then moves according to the established map and its own position. Because the initial pose of a sweeping robot is random, the map produced by mapping is not guaranteed to be visually horizontal, i.e. with its longer line segments rendered horizontally or vertically; in a non-horizontal map those segments appear jagged. If the map is not visually horizontal, the robot cannot localize effectively while working, nor can it plan a route effectively.
Disclosure of Invention
To solve these problems, the invention discloses a robot map rotating method based on line segment information: the rotation angle of the map is quickly obtained from the line segment information on the map, the map is rotated into a visually horizontal state, and the robot can locate itself and plan routes quickly and accurately while working. The specific technical scheme is as follows:
A robot map rotating method based on line segment information comprises the following steps: S1: carrying out binarization processing on the grid map to obtain a binarized map, and marking the pixels where obstacles and non-obstacles are located in the binarized map; S2: acquiring the gradient of a pixel where the obstacle edge is located according to the marking information, and acquiring the direction of the obstacle edge of the pixel according to the direction of the gradient; S3: acquiring a plurality of pixel sets according to the gradient of the pixels and the direction of the obstacle edge, and then acquiring a line segment from each pixel set; S4: selecting an optimal line segment from the line segments, obtaining a rotation angle according to the optimal line segment, and rotating the grid map according to the rotation angle. Compared with the prior art, the method extracts line segments according to the gradients and edge directions of the obstacle-edge pixels in the binarized map, selects the optimal line segment among them, and rotates the map according to the optimal line segment so that the map becomes visually horizontal; the robot can then locate itself and plan a route quickly and accurately during its work.
Further, in step S1, the grid map is a probability grid map, and the pixels where obstacles and non-obstacles are located in the binarized map are marked with the maximum and minimum values of the map storage format respectively, so that a numeric gradient arises at obstacle edges. Binarizing and marking the grid map simplifies the map and keeps the computation simple.
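A minimal sketch of step S1 in Python with NumPy; the obstacle-probability threshold of 0.5 and the 0/255 storage values are assumptions for illustration, as the patent does not fix them:

```python
import numpy as np

def binarize_grid_map(prob_map, obstacle_threshold=0.5, max_val=255, min_val=0):
    """Binarize a probability grid map (step S1): cells at or above the
    assumed obstacle threshold get the storage format's maximum value,
    free cells its minimum, so obstacle edges produce a numeric gradient."""
    binary = np.where(np.asarray(prob_map) >= obstacle_threshold, max_val, min_val)
    return binary.astype(np.uint8)
```

Marking with the extreme values of the storage format maximizes the step at obstacle boundaries, which makes the gradient computation in step S2 straightforward.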
Further, in step S2, before the gradients of the obstacle-edge pixels are computed, the image is Gaussian-blurred to enhance the edge features and improve the accuracy of the data.
Further, in step S2, obtaining the gradient of the pixel where the obstacle edge is located comprises the following steps: determine the position of the obstacle-edge pixel on the grid map, and convolve over the four pixels above, below, to the left and to the right of that pixel to obtain its gradient.
Further, after the gradient of an obstacle-edge pixel is obtained, the direction of the gradient is obtained with the arctangent function, and the direction of the obstacle edge is then derived from it; the direction of the gradient is perpendicular to the direction of the obstacle edge.
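The neighbor-difference gradient and the arctangent direction described above can be sketched as follows; this is a hedged illustration rather than the patent's exact implementation, and the degree conventions are assumptions:

```python
import numpy as np

def pixel_gradient(img, y, x):
    """Gradient at interior pixel (y, x) from its four neighbors (step S2):
    gx = right - left, gy = below - above. The gradient angle comes from
    arctan2; the edge direction is perpendicular to the gradient."""
    gx = int(img[y, x + 1]) - int(img[y, x - 1])   # int() avoids uint8 wrap-around
    gy = int(img[y + 1, x]) - int(img[y - 1, x])
    magnitude = float(np.hypot(gx, gy))
    grad_dir = float(np.degrees(np.arctan2(gy, gx)))  # gradient angle, degrees
    edge_dir = (grad_dir + 90.0) % 180.0              # edge is perpendicular
    return magnitude, grad_dir, edge_dir
```

For a vertical obstacle boundary (dark on the left, bright on the right) the gradient points along x and the edge direction comes out as 90 degrees, i.e. vertical, as expected.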
Further, in step S3, acquiring several pixel sets according to the gradients of the pixels and the directions of the obstacle edges comprises the following steps: sort the gradients that exceed a set threshold from largest to smallest; then, starting from the pixel with the largest gradient and taking it as the center, form a pixel set from the pixels within a set range whose obstacle-edge directions differ from it by no more than a set value; repeating this yields several pixel sets. Sorting the pixels from largest to smallest before forming the sets prevents pixels from being missed.
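The sorting-and-grouping step can be sketched as a greedy pass; the `radius` and `angle_tol` parameters here stand in for the patent's unspecified "set range" and "set value":

```python
import numpy as np

def group_pixels(pixels, radius=3.0, angle_tol=15.0):
    """Greedy grouping (step S3): `pixels` is a list of
    (magnitude, y, x, edge_dir_deg) tuples whose magnitudes already exceed
    the gradient threshold. Starting from the strongest unused pixel,
    gather unused pixels within `radius` whose edge direction differs
    from the seed's by no more than `angle_tol` degrees."""
    pixels = sorted(pixels, reverse=True)        # largest magnitude first
    used = [False] * len(pixels)
    groups = []
    for i, (mag, y, x, d) in enumerate(pixels):
        if used[i]:
            continue
        group = [(y, x)]
        used[i] = True
        for j in range(i + 1, len(pixels)):
            if used[j]:
                continue
            mj, yj, xj, dj = pixels[j]
            close = np.hypot(yj - y, xj - x) <= radius
            diff = abs(d - dj) % 180.0           # edge directions wrap at 180
            similar = min(diff, 180.0 - diff) <= angle_tol
            if close and similar:
                group.append((yj, xj))
                used[j] = True
        groups.append(group)
    return groups
```

The quadratic inner loop is fine as a sketch; a spatial index would be the natural optimization for large maps.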
Further, in step S3, obtaining a line segment from a pixel set comprises the following steps: acquire the minimum circumscribed rectangle of the pixel set, then extract the line segment formed by the central axis along the longer side of the rectangle, and record the endpoint and angle information of the line segment.
Further, the minimum circumscribed rectangle is obtained as follows: the edge directions of the pixels in the set are averaged with weights inversely proportional to distance, which gives the length direction of the minimum circumscribed rectangle; parallel lines drawn along that length direction give the rectangle's length, and perpendiculars to the length direction give its width.
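The inverse-distance-weighted rectangle fit might look like the following sketch. Note two stated simplifications: averaging angles linearly can misbehave near the 0°/180° wrap, and the `+1.0` in the weight is an assumed guard against division by zero at the seed pixel.

```python
import numpy as np

def min_rect_from_set(pixels, seed):
    """Fit the minimum circumscribed rectangle of a pixel set (step S3
    sketch). `pixels` is a list of (y, x, edge_dir_deg); `seed` is the
    (y, x) of the strongest-gradient pixel. The length direction is the
    inverse-distance-weighted average of the edge directions; length and
    width come from projecting the pixels onto that axis and its normal."""
    pts = np.array([(y, x) for y, x, _ in pixels], dtype=float)
    dirs = np.array([d for _, _, d in pixels], dtype=float)
    dist = np.hypot(pts[:, 0] - seed[0], pts[:, 1] - seed[1])
    w = 1.0 / (dist + 1.0)                       # inverse-distance weights
    theta = np.deg2rad(np.sum(w * dirs) / np.sum(w))
    axis = np.array([np.sin(theta), np.cos(theta)])   # (dy, dx) unit vector
    normal = np.array([-axis[1], axis[0]])
    along = pts @ axis
    across = pts @ normal
    length = float(along.max() - along.min())
    width = float(across.max() - across.min())
    return length, width, float(np.rad2deg(theta))
```

The extracted line segment is then the central axis along the long side, i.e. the line joining the midpoints of the two short sides, at the returned angle.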
Further, in step S4, selecting the optimal line segment comprises the following steps.
Step S11: select one of the N obtained line segments as the current reference edge, obtain the length of the current reference edge, and enter step S12.
Step S12: project the remaining N-1 line segments onto the direction of the current reference edge to obtain the N-1 first projection values corresponding to them, project the remaining N-1 line segments onto the direction at ninety degrees to the current reference edge to obtain the N-1 second projection values corresponding to them, and enter step S13.
Step S13: for each remaining line segment, take the larger of its first and second projection values multiplied by a first preset parameter b as its third projection value, obtaining the N-1 third projection values corresponding to the remaining N-1 line segments, and enter step S14.
Step S14: take the sum of the length of the current reference edge and the N-1 third projection values as the total length corresponding to the current reference edge, and enter step S15.
Step S15: repeat steps S11 to S14 until all N line segments have served as the reference edge, obtaining the N total lengths corresponding to the N line segments, and enter step S16.
Step S16: select the line segment whose total length is the largest as the optimal line segment.
Comparing the candidates in this way makes the optimal line segment selected from the plurality of line segments more accurate.
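Steps S11 to S16 can be sketched as follows, with each segment reduced to a (length, angle) pair; the preset parameter `b` is left at an assumed default of 1.0, since the patent does not fix its value:

```python
import math

def best_segment(segments, b=1.0):
    """Select the optimal line segment (step S4 sketch). `segments` is a
    list of (length, angle_deg). For each candidate reference edge, every
    other segment contributes b * max(|projection on the reference
    direction|, |projection on the perpendicular direction|); the
    reference with the largest total length wins."""
    best_idx, best_total = -1, -math.inf
    for i, (li, ai) in enumerate(segments):
        total = li
        for j, (lj, aj) in enumerate(segments):
            if j == i:
                continue
            rel = math.radians(aj - ai)
            p1 = abs(lj * math.cos(rel))   # first projection: on reference
            p2 = abs(lj * math.sin(rel))   # second projection: on perpendicular
            total += b * max(p1, p2)
        if total > best_total:
            best_idx, best_total = i, total
    return best_idx, best_total
```

Taking the larger of the two projections rewards segments that are nearly parallel or nearly perpendicular to the reference, which is what makes the score favor the map's dominant direction.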
Further, in step S4, obtaining the rotation angle comprises: calculate the minimum included angle between the optimal line segment and the x axis or the y axis; that angle is the rotation angle of the map.
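A sketch of the rotation-angle computation, folding the optimal segment's angle into [0°, 180°) and rotating toward the nearest axis; the sign convention (a negative result meaning rotation toward a smaller angle) is an assumption:

```python
def rotation_angle(segment_angle_deg):
    """Rotation in degrees that aligns the optimal segment with whichever
    axis it is closest to (step S4): the x axis appears at 0 or 180
    degrees in the folded range, the y axis at 90 degrees."""
    a = segment_angle_deg % 180.0
    nearest_axis = min((0.0, 90.0, 180.0), key=lambda ax: abs(a - ax))
    return nearest_axis - a
```

Applying this signed angle to the grid map (e.g. with a standard 2-D rotation of the pixel coordinates) leaves the optimal segment parallel to an axis, so the map is visually horizontal.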
Drawings
Fig. 1 is a flowchart of a method for rotating a map by a robot based on line segment information according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here merely illustrate the invention and are not intended to limit the present application. All other embodiments obtained by a person of ordinary skill in the art from the embodiments provided here without inventive effort fall within the scope of protection of the present application.
Referring to fig. 1, a robot map rotating method based on line segment information comprises the following steps: S1: carrying out binarization processing on the grid map to obtain a binarized map, and marking the pixels where obstacles and non-obstacles are located in the binarized map; S2: acquiring the gradient of a pixel where the obstacle edge is located according to the marking information, and acquiring the direction of the obstacle edge of the pixel according to the direction of the gradient; S3: acquiring a plurality of pixel sets according to the gradient of the pixels and the direction of the obstacle edge, and then acquiring a line segment from each pixel set; S4: selecting an optimal line segment from the line segments, obtaining a rotation angle according to the optimal line segment, and rotating the grid map according to the rotation angle. Compared with the prior art, the method extracts line segments according to the gradients and edge directions of the obstacle-edge pixels in the binarized map, selects the optimal line segment among them, and rotates the map according to the optimal line segment so that the map becomes visually horizontal; the robot can then locate itself and plan a route quickly and accurately during its work.
As an embodiment, in step S1, the grid map is a probability grid map, and the pixels where obstacles and non-obstacles are located in the binarized map are marked with the maximum and minimum values of the map storage format respectively, so that a numeric gradient arises at obstacle edges. Binarizing and marking the grid map simplifies the map and keeps the computation simple. In step S2, before the gradients of the obstacle-edge pixels are computed, the image is Gaussian-blurred to enhance the edge features and improve the accuracy of the data. Obtaining the gradient of an obstacle-edge pixel comprises the following steps: determine the position of the obstacle-edge pixel on the grid map, and convolve over the four pixels above, below, to the left and to the right of that pixel to obtain its gradient. After the gradient is obtained, its direction is obtained with the arctangent function, and the direction of the obstacle edge is then derived from it; the gradient direction is perpendicular to the edge direction. The image gradient is the rate of change of a pixel relative to its adjacent pixels in the X and Y directions: it is a two-dimensional vector with two components, the change along the X axis and the change along the Y axis. The change along the X axis is the pixel value to the right of the current pixel (x plus 1) minus the pixel value to its left (x minus 1); similarly, the change along the Y axis is the pixel value below the current pixel (y plus 1) minus the pixel value above it (y minus 1). These two components form the two-dimensional vector that is the image gradient of the pixel, and the gradient angle is obtained by taking the arctangent (arctan).
As an embodiment, in step S3, acquiring several pixel sets according to the gradients of the pixels and the directions of the obstacle edges comprises the following steps: sort the gradients that exceed a set threshold from largest to smallest; then, starting from the pixel with the largest gradient and taking it as the center, form a pixel set from the pixels within a set range whose obstacle-edge directions differ from it by no more than a set value; repeating this yields several pixel sets. Sorting the pixels from largest to smallest before forming the sets prevents pixels from being missed.
As an embodiment, in step S3, obtaining a line segment from a pixel set comprises the following steps: acquire the minimum circumscribed rectangle of the pixel set, then extract the line segment formed by the central axis along the longer side of the rectangle, i.e. the line connecting the midpoints of the rectangle's two short sides, and record the endpoint and angle information of the line segment. The minimum circumscribed rectangle is obtained by averaging the edge directions of the pixels in the set with weights inversely proportional to distance, which gives the rectangle's length direction; parallel lines drawn along that direction give the rectangle's length, and perpendiculars to the length direction give its width. The minimum circumscribed rectangle can also be obtained with OpenCV's minAreaRect function; many methods for computing it exist in the prior art and are not described in detail here. The endpoints of the segment are the midpoints of the rectangle's short sides, whose coordinates can be read directly from their positions on the probability grid map; the segment is parallel to the rectangle's long side, and the angle information is the direction of that side.
As an embodiment, in step S4, selecting the optimal line segment comprises the following steps.
Step S11: select one of the N obtained line segments as the current reference edge, obtain the length of the current reference edge, and enter step S12.
Step S12: project the remaining N-1 line segments onto the direction of the current reference edge to obtain the N-1 first projection values corresponding to them, project the remaining N-1 line segments onto the direction at ninety degrees to the current reference edge to obtain the N-1 second projection values corresponding to them, and enter step S13.
Step S13: for each remaining line segment, take the larger of its first and second projection values multiplied by a first preset parameter b as its third projection value, obtaining the N-1 third projection values corresponding to the remaining N-1 line segments, and enter step S14.
Step S14: take the sum of the length of the current reference edge and the N-1 third projection values as the total length corresponding to the current reference edge, and enter step S15.
Step S15: repeat steps S11 to S14 until all N line segments have served as the reference edge, obtaining the N total lengths corresponding to the N line segments, and enter step S16.
Step S16: select the line segment whose total length is the largest as the optimal line segment.
Comparing the candidates in this way makes the optimal line segment selected from the plurality of line segments more accurate. The aim is to obtain a line segment that represents the dominant direction of the map: the segment itself should be as long as possible, and as many other segments as possible should lie close to its direction.
As an embodiment, in step S4, obtaining the rotation angle comprises: calculate the minimum included angle between the optimal line segment and the x axis or the y axis; that angle is the rotation angle of the map. If the angle between the optimal segment and the x axis is smaller than its angle with the y axis, that angle is calculated and the map is rotated by it so that the optimal segment becomes parallel to the x axis, completing the rotation of the map. Similarly, if the angle between the optimal segment and the y axis is smaller than its angle with the x axis, the angle with the y axis is calculated and the corresponding rotation is performed.
The features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations are described, but any combination of these features that contains no contradiction should be considered within the scope of this specification.
The above embodiments express only a few embodiments of the present invention, and their description is specific and detailed, but they are not to be construed as limiting the scope of the invention. A person skilled in the art can make several variations and improvements without departing from the concept of the present application, and these fall within its scope of protection.

Claims (10)

1. A robot map rotating method based on line segment information, characterized by comprising the following steps:
S1: carrying out binarization processing on the grid map to obtain a binarized map, and marking pixels where obstacles and non-obstacles are located in the binarized map;
S2: acquiring the gradient of a pixel where the obstacle edge is located according to the marking information, and acquiring the direction of the obstacle edge of the pixel according to the direction of the gradient;
S3: acquiring a plurality of pixel sets according to the gradient of the pixels and the direction of the obstacle edge, and then acquiring a line segment from each pixel set;
S4: selecting an optimal line segment from the line segments, obtaining a rotation angle according to the optimal line segment, and rotating the grid map according to the rotation angle.
2. The method according to claim 1, wherein in step S1, the grid map is a probability grid map, and pixels where the obstacle and the non-obstacle are located in the binary map are marked as a maximum value and a minimum value of a map storage format, respectively, so that the obstacle edge has a gradient in value.
3. The method for robot rotation map based on line segment information as claimed in claim 1, wherein in step S2, before the gradient of the pixel where the edge of the obstacle is located is obtained, the image is Gaussian-blurred to enhance the edge features.
4. The method for robot rotation map based on line segment information of claim 1, wherein the step S2 of obtaining the gradient of the pixel where the obstacle edge is located comprises the following steps:
determining the position of the pixel where the edge of the obstacle is located on the grid map, and convolving over the four pixels above, below, to the left and to the right of that pixel to obtain the gradient of the pixel where the edge of the obstacle is located.
5. The method for robot rotation map based on line segment information of claim 4, wherein after obtaining the gradient of the pixel where the obstacle edge is located, the direction of the gradient is obtained through an arctan function, and then the direction of the obstacle edge is obtained according to the direction of the gradient;
wherein the direction of the gradient is perpendicular to the direction of the obstacle edge.
6. The method for robot rotation map based on line segment information as claimed in claim 1, wherein the step S3, obtaining several pixel sets according to the gradient of the pixels and the direction of the obstacle edge comprises the following steps:
and sorting the gradients exceeding the set threshold value from large to small, and then, starting from the pixel where the maximum gradient is located, taking the pixel as the center, and forming a pixel set by the pixels within the set range and with the angle between the directions of the edges of the obstacles within the set value to obtain a plurality of pixel sets.
7. The method for robot rotation map based on line segment information as claimed in claim 1, wherein the step S3, obtaining the line segment from the pixel set comprises the following steps:
acquiring the minimum circumscribed rectangle of the pixel set, then extracting the line segment formed by the central axis along the longer side of the rectangle, and recording the endpoint and angle information of the line segment.
8. The method for robot rotation map based on line segment information of claim 7, wherein the minimum circumscribed rectangle is obtained by averaging the edge directions of the pixels in the pixel set with weights inversely proportional to distance to obtain the length direction of the minimum circumscribed rectangle, then drawing parallel lines along the length direction to obtain the length of the minimum circumscribed rectangle, and drawing perpendiculars to the length direction to obtain the width of the minimum circumscribed rectangle.
9. The method for robot rotation map based on line segment information as claimed in claim 1, wherein the step of selecting the optimal line segment in step S4 comprises the steps of:
step S11: selecting one line segment from the obtained line segments as a current reference edge, obtaining the length of the current reference edge, and entering step S12;
step S12: projecting the rest N-1 line segments in the direction of the current reference edge to obtain N-1 first projection values corresponding to the rest N-1 line segments, projecting the rest N-1 line segments in the direction of ninety degrees from the current reference edge to obtain N-1 second projection values corresponding to the rest N-1 line segments, and entering step S13;
step S13: taking the product of the larger value of the first projection value and the second projection value corresponding to the same line segment and the first preset parameter b as a third projection value corresponding to the line segment, obtaining N-1 third projection values corresponding to the rest N-1 line segments, and entering step S14;
step S14: calculating the sum of the length of the current reference edge and the N-1 third projection values corresponding to the rest of N-1 line segments, taking the sum of the length of the current reference edge and the N-1 third projection values corresponding to the rest of N-1 line segments as the total length of the line segment corresponding to the current reference edge, and entering the step S15;
step S15: repeating the steps S11 to S14 until the N line segments are traversed, acquiring the total lengths of the N line segments corresponding to the N line segments, and entering the step S16;
step S16: and selecting the line segment corresponding to the maximum value in the total length of the N line segments as the optimal line segment.
10. The method for robot rotation map based on line segment information as claimed in claim 1, wherein the step S4, the obtaining of the rotation angle includes:
and calculating the minimum included angle between the optimal line segment and the x axis or the y axis, wherein the included angle is the rotation angle of the map.
CN202110567138.XA 2021-05-24 2021-05-24 Robot map rotating method based on line segment information Pending CN113238560A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110567138.XA CN113238560A (en) 2021-05-24 2021-05-24 Robot map rotating method based on line segment information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110567138.XA CN113238560A (en) 2021-05-24 2021-05-24 Robot map rotating method based on line segment information

Publications (1)

Publication Number Publication Date
CN113238560A true CN113238560A (en) 2021-08-10

Family

ID=77138411

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110567138.XA Pending CN113238560A (en) 2021-05-24 2021-05-24 Robot map rotating method based on line segment information

Country Status (1)

Country Link
CN (1) CN113238560A (en)


Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6337925B1 (en) * 2000-05-08 2002-01-08 Adobe Systems Incorporated Method for determining a border in a complex scene with applications to image masking
US20100189360A1 (en) * 2007-11-09 2010-07-29 Okita Kunio Information processing apparatus and information processing method
US20120076420A1 (en) * 2010-09-29 2012-03-29 Olympus Corporation Image processing apparatus, image processing method, and computer-readable storage device
CN106127778A (en) * 2016-06-27 2016-11-16 安徽慧视金瞳科技有限公司 Line detection method for a projection interactive system
CN106886981A (en) * 2016-12-30 2017-06-23 中国科学院自动化研究所 Image edge enhancement method and system based on edge detection
WO2019154119A1 (en) * 2018-02-06 2019-08-15 广州科语机器人有限公司 Map creation method for mobile robot based on laser ranging sensor
CN108387234A (en) * 2018-02-06 2018-08-10 广州科语机器人有限公司 The map creating method of mobile robot based on laser range sensor
CN108717709A (en) * 2018-05-24 2018-10-30 东北大学 Image processing system and image processing method
CN109492647A (en) * 2018-06-20 2019-03-19 国网江苏省电力有限公司泰州供电分公司 Obstacle recognition method for a power grid robot
US20200081125A1 (en) * 2018-09-07 2020-03-12 Shenzhen Silver Star Intelligent Technology Co., Ltd. Method and robot of mapping
CN109541634A (en) * 2018-12-28 2019-03-29 歌尔股份有限公司 Path planning method, apparatus and mobile device
US20210090285A1 (en) * 2019-09-20 2021-03-25 Samsung Electronics Co., Ltd. Method and apparatus with location estimation
CN110956081A (en) * 2019-10-14 2020-04-03 广东星舆科技有限公司 Method and device for identifying position relation between vehicle and traffic marking and storage medium
CN112747734A (en) * 2019-10-31 2021-05-04 深圳拓邦股份有限公司 Environment map direction adjusting method, system and device
CN111754422A (en) * 2020-06-02 2020-10-09 浙江工业大学 A visual edge patrol method based on EDLines and LSM

Similar Documents

Publication Publication Date Title
WO2020134082A1 (en) Path planning method and apparatus, and mobile device
CN109859226B (en) Detection method of checkerboard corner sub-pixels for graph segmentation
CN109409163B (en) Quick QR code positioning method based on texture characteristics
CN107609510B (en) Positioning method and device for lower set of quayside container crane
CN107092871A (en) Remote sensing image building detection method based on multiple dimensioned multiple features fusion
CN109448046B (en) Fast semi-automatic road centerline extraction method based on multiple descriptors
CN106980851B (en) Method and device for positioning data matrix DM code
CN106875430B (en) Single moving target tracking method and device based on fixed form under dynamic background
CN112101205A (en) Training method and device based on multi-task network
CN112562004B (en) Image mapping parameter generation method, device and computer readable medium
CN116222381A (en) Electrode coating size measurement method and device
CN113673518A (en) Target positioning candidate position screening strategy method
CN115147400A (en) Self-adaptive identification method and system for steel bar cross points, electronic equipment and medium
CN108389177B (en) Vehicle bumper damage detection method and traffic safety early warning method
CN113238560A (en) Robot map rotating method based on line segment information
CN113807293A (en) Deceleration strip detection method, system, equipment and computer readable storage medium
JP7188336B2 (en) Attached matter detection device and attached matter detection method
CN110472538B (en) Image recognition method and storage medium of electronic drawing
JP7028099B2 (en) Candidate area estimation device, candidate area estimation method, and program
CN108805896B (en) Distance image segmentation method applied to urban environment
CN107169440A (en) Road detection approach based on a graph model
CN115717887B (en) Fast star point extraction method based on a grayscale distribution histogram
CN113888574B (en) Method for a cleaning robot to obtain the area of a cleanable region
CN110717910A (en) CT image target detection method and CT scanner
CN114037729A (en) Target tracking method, device and equipment and vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 519000 2706, No. 3000, Huandao East Road, Hengqin New Area, Zhuhai, Guangdong

Applicant after: Zhuhai Yiwei Semiconductor Co.,Ltd.

Address before: Room 105-514, No. 6 Baohua Road, Hengqin New District, Zhuhai City, Guangdong Province

Applicant before: AMICRO SEMICONDUCTOR Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20210810