
WO2019113968A1 - Image-content-based structured light projection method, depth detection method, and structured light projection apparatus - Google Patents


Info

Publication number
WO2019113968A1
WO2019113968A1 (application PCT/CN2017/116586)
Authority
WO
WIPO (PCT)
Prior art keywords
image
structured light
highlight
light
projected
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2017/116586
Other languages
English (en)
Chinese (zh)
Inventor
阳光
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen A&E Intelligent Technology Institute Co Ltd
Original Assignee
Shenzhen A&E Intelligent Technology Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen A&E Intelligent Technology Institute Co Ltd filed Critical Shenzhen A&E Intelligent Technology Institute Co Ltd
Priority to PCT/CN2017/116586 priority Critical patent/WO2019113968A1/fr
Priority to CN201780034793.0A priority patent/CN109661683B/zh
Publication of WO2019113968A1 publication Critical patent/WO2019113968A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/529Depth or shape recovery from texture
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/586Depth or shape recovery from multiple images from multiple light sources, e.g. photometric stereo

Definitions

  • The present application relates to the fields of image processing and machine vision, and in particular to an image-content-based structured light projection method and depth detection method.
  • Vision is the most direct and important way in which humans observe and understand the world. We live in a three-dimensional world: human vision perceives not only the brightness, color, texture, and motion of object surfaces, but also their shape and spatial position (depth, distance). Enabling machine vision to acquire high-precision 3D depth information in real time, and thereby raising the intelligence level of machines, is a central difficulty in current machine vision research.
  • In the industrial field, high-resolution, high-precision 3D depth information is in wide demand for automotive assisted safe driving, high-speed machine tool processing, industrial modeling, 3D printing, medical imaging, and 3D visual perception for the Internet of Things. In consumer electronics, depth sensing technology and devices help raise the intelligence level and interaction capability of electronic products and bring users new human-computer interaction experiences, enabling innovative applications in smart TVs, smartphones, home appliances, tablet PCs, and so on.
  • Depth sensing technology can be roughly divided into passive and active.
  • Traditional binocular stereo vision ranging is a passive method; it is strongly affected by ambient light, and its stereo matching process is complicated.
  • Active ranging methods mainly include structured light coding and time-of-flight (ToF).
  • Among these, the active vision approach based on structured light coding can acquire image depth information more accurately.
  • The principle of detecting a depth image by projecting structured light is shown in FIG. 1. The structured light projection module 110 projects the structured light; the reflected light enters through the lens 120 and is detected by the CCD photosensitive element 130. Taking the n-th ray 101 of the structured light as an example: its exit angle a1 is known, and the distance d between the reference plane and the lens 120 is known. The CCD photosensitive element 130 detects the incident point x of the ray reflected from the reference plane and the incident point x' of the ray reflected from the measured object; from x' the incidence angle a2 at which the reflected ray enters the lens 120 can be obtained, and the distance d' of the measured object can then be calculated.
  • The above is the basic principle of measuring object depth by structured light projection: by analyzing the structured light ID (number) of each light stripe or speckle, the exit angle of the corresponding ray is known, and the depth (distance) of the object surface that reflected it can be calculated by triangulation. The concrete implementation differs depending on the structured light used, the lens, the photosensitive element, and so on.
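  • To make the triangulation concrete, the sketch below computes d' from the two angles and the projector-to-lens baseline. This is our illustration, not code from the patent: the function names, the example numbers, and the assumption that the optical axis is perpendicular to the baseline are all ours.

```python
import math

def incidence_angle(x_px: float, focal_px: float) -> float:
    """Angle a2 (radians, measured from the baseline) of a reflected ray
    whose incident point lies x_px pixels from the principal point,
    assuming the optical axis is perpendicular to the baseline."""
    return math.pi / 2 - math.atan2(x_px, focal_px)

def depth_by_triangulation(a1: float, a2: float, baseline: float) -> float:
    """Perpendicular distance of the reflecting surface point, from the
    known exit angle a1, the measured incidence angle a2, and the
    projector-to-lens baseline (standard triangle geometry)."""
    return baseline * math.tan(a1) * math.tan(a2) / (math.tan(a1) + math.tan(a2))

# Ray n: exit angle a1 is known from its decoded ID; a2 follows from the
# detected incident point x' on the CCD (all values here are made up).
a1 = math.radians(75.0)
a2 = incidence_angle(x_px=120.0, focal_px=1400.0)
print(f"d' = {depth_by_triangulation(a1, a2, baseline=0.08):.3f} m")
```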
  • Projected stripe structured light has a certain width; the stripes should be as narrow as possible so that finer depth variation can be captured. The problem is that narrower stripes are denser and harder to distinguish.
  • The usual practice is to project multiple frames, from coarse to fine, as with Gray code patterns.
  • This greatly extends the detection period, and the accuracy still does not reach the pixel level.
  • There are also methods that scan the object under inspection with gradient stripes, but these are highly susceptible to external light interference: when other background light is present, the ID analysis described above is easily disturbed and the solved depth becomes unreliable. Because the structured light is often disturbed by background light, its matching performance is greatly reduced.
  • The purpose of the present application is to provide an image-content-based structured light projection method, a depth detection method, and a structured light projection device that reduce background light interference.
  • The present application provides an image-content-based structured light projection method, including:
  • acquiring an image of the object that is not affected by external light, and analyzing it to obtain the object edges and the non-edge regions of the object;
  • projecting first structured light, which includes projecting a first grayscale gradient strip onto the non-edge regions of the object and a second grayscale gradient strip onto the edge regions.
  • The image-content-based structured light projection method of the present application first acquires an image of the object that is not affected by external light; from that image the object edges and non-edge regions are obtained, and the corresponding first and second grayscale gradient strips are projected.
  • With this method very fine structured light can be obtained: because the projection is based on the image content it is not affected by external light, decoding the structured light is not difficult, few projected frames are needed, and resistance to external light interference is good.
  • The present application also provides a depth detection method using the image-content-based structured light projection method, including:
  • acquiring an image of the object that is not affected by external light, and analyzing it to obtain the object edges and the non-edge regions of the object;
  • projecting first structured light, including respectively projecting a first grayscale gradient strip onto the non-edge regions of the object and a second grayscale gradient strip onto the edge regions;
  • The depth detection method uses different structured light for the object edges and for the non-edge regions, so that the depth variation of the non-edge regions and the depth contour of the edges can both be obtained as far as possible. The resulting depth image is more accurate and easier to solve, and the acquired coded image blocks can be classified during solving.
  • In a preferred embodiment, analyzing the image of the object further comprises obtaining the highlight regions of the image. In the projected first structured light, the first grayscale gradient strip projected onto the non-edge regions comprises highlight-portion structured light projected onto the highlight regions and object-portion structured light projected onto the non-highlight regions, where the highlight-portion structured light is darker than the light projected onto the highlight regions the previous time.
  • In this way, pixel saturation in the highlight regions can be reduced; how much darker the light is made may differ between embodiments.
  • In a preferred embodiment, the highlight-portion structured light projected onto a highlight region is darker than the light projected onto that region the previous time, for example half (or less than half) the brightness of the previously projected light. For instance, if the image obtained after the previous projection of structured light contains a highlight region, the brightness of the highlight-portion light projected onto it is reduced to 128; if the acquired image still contains pixels of brightness 255, the brightness of the highlight-portion structured light is further reduced to 64, and so on, until the detected image brightness falls below 255, after which the projected brightness is no longer lowered.
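  • A minimal sketch of this feedback loop follows. The `project` and `capture` callables and the `highlight_mask` array are our hypothetical stand-ins for the projector and camera (the patent defines no such API); the halving mirrors the 255 → 128 → 64 example above.

```python
import numpy as np

def adapt_highlight_brightness(project, capture, highlight_mask, pattern,
                               start_brightness=255):
    """Repeatedly project the highlight-portion pattern, halving its peak
    brightness (roughly 255 -> 128 -> 64, as in the example above) until
    the captured image no longer saturates inside the highlight region."""
    brightness = start_brightness
    while True:
        scaled = pattern.astype(np.float32) * (brightness / 255.0)
        project(scaled.astype(np.uint8))
        captured = capture()
        if captured[highlight_mask].max() < 255 or brightness <= 1:
            break                      # no blooming left: stop lowering
        brightness //= 2               # halve, per the 128 -> 64 example
    return brightness
```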
  • In a preferred embodiment, the second grayscale gradient strip projected onto the object edges and the object-portion structured light of the first grayscale gradient strip projected onto the non-edge regions may have their brightness reduced at the same time, or may keep the same brightness as before; either is acceptable.
  • In a preferred embodiment, the step of analyzing the image of the object that is not affected by external light to obtain the highlight regions of the image includes highlight overflow (blooming) detection; the detection may use several criteria, set according to requirements.
  • In a preferred embodiment, blooming detection includes: examining the gray values of the pixels, and when the number n of adjacent pixels with gray value 255 is greater than or equal to a preset threshold x, determining that the area occupied by those adjacent pixels of gray value 255 is a highlight region.
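  • A sketch of this criterion (plus the ratio-based variant used later in the text), with `scipy.ndimage.label` doing the adjacency grouping; the values of the thresholds x and y are illustrative placeholders for the preset values the text leaves open.

```python
import numpy as np
from scipy import ndimage

def detect_highlight_regions(gray, x=50, y=0.05):
    """Return boolean masks of highlight regions: connected groups of
    saturated (gray value 255) pixels whose pixel count n reaches the
    threshold x, or whose share of all pixels exceeds the ratio y."""
    labels, n_regions = ndimage.label(gray == 255)
    total = gray.size
    masks = []
    for region in range(1, n_regions + 1):
        mask = labels == region
        n = int(mask.sum())
        if n >= x or n / total > y:
            masks.append(mask)
    return masks
```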
  • The depth detection method of the present application, which uses the image-content-based structured light projection method, analyzes the object image that is not affected by external light to obtain the object edges, the non-edge regions, and the highlight regions, and projects structured light accordingly. The projected structured light includes a first grayscale gradient strip projected onto the non-edge regions of the object and a second grayscale gradient strip projected onto the edge regions, where the first grayscale gradient strip comprises highlight-portion structured light projected onto the highlight regions and object-portion structured light projected onto the non-highlight regions, the highlight-portion light being darker than the light projected onto the highlight regions the previous time.
  • The object image obtained by this method is far less affected by external light, and the separate handling of edges, highlights, and the like greatly reduces interference.
  • In a preferred embodiment, after the first structured light has been projected, the method further comprises: projecting a first grayscale gradient strip onto the non-edge regions of the object, wherein the first grayscale gradient strip comprises highlight-portion structured light projected onto the highlight regions and object-portion structured light projected onto the non-highlight regions, the highlight-portion light being darker than the light projected onto the highlight regions the previous time.
  • The depth detection method of the present application, using the image-content-based structured light projection method, analyzes the current image and then adjusts the projected structured light and the number of projections strategically: a first grayscale gradient strip is projected onto the non-edge regions of the object and a second grayscale gradient strip onto the edge regions, where the first grayscale gradient strip comprises highlight-portion structured light projected onto the highlight regions and object-portion structured light projected onto the non-highlight regions, the highlight-portion light being darker than the light projected onto the highlight regions the previous time.
  • The structured light projection may also be implemented in other ways. For example, if analysis of the current image shows that the object edges are extensive and complicated, a frame containing only the second grayscale gradient strip may be projected onto the edges, or the second grayscale gradient strip may be projected onto the edges several times and the resulting multi-frame images superimposed to solve the edge depth.
  • The number of structured light projections may be decided from the condition of the previous frame image: multiple frames of structured light may be projected, each frame darker than the previous one, to obtain a more accurate object image.
  • In a preferred embodiment, the first and second grayscale gradient stripes occupy different gray value ranges: for example, the gray values of the second stripe lie in 128-256 and those of the first stripe in 0-128, or other distinct value ranges may be used.
  • In a preferred embodiment, the first and second grayscale gradient strips use different gradient stripe arrangements, and the pixel gray levels in the regions where the two strips adjoin do not coincide. The two strips can therefore be distinguished from each other, so that the object edge depth and the object surface depth can both be computed more accurately.
  • In a preferred embodiment, analyzing the acquired object image and determining whether it contains a highlight region includes: examining the gray values of the pixels, and when the number n of adjacent pixels with gray value 255 is greater than or equal to a preset threshold x, determining that the area occupied by those adjacent pixels of gray value 255 is a highlight region; or, examining the gray values of the pixels, and when the ratio of the number n of adjacent pixels with gray value 255 to the total number of pixels in the object image exceeds a preset threshold y, determining that the area occupied by those adjacent pixels of gray value 255 is a highlight region.
  • In a preferred embodiment, acquiring the image of the object that is not affected by external light includes: subtracting the two acquired frames of images to obtain an image of the object that is not affected by external light.
  • In a preferred embodiment, subtracting the two acquired frames of images to obtain an object image that is not affected by external light includes: subtracting the pixel-by-pixel grayscale of the first frame image from the pixel-by-pixel grayscale of the second frame image; the result is the subtracted pixel grayscale.
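  • A sketch of this pixel-by-pixel subtraction, under our assumption that one frame is captured with the structured light projected and one without; widening to int16 avoids uint8 wrap-around.

```python
import numpy as np

def remove_external_light(frame_lit: np.ndarray, frame_unlit: np.ndarray) -> np.ndarray:
    """Subtract two grayscale frames pixel by pixel; the difference keeps
    only the contribution of the projected structured light, yielding an
    object image unaffected by external light."""
    diff = frame_lit.astype(np.int16) - frame_unlit.astype(np.int16)
    return np.clip(diff, 0, 255).astype(np.uint8)
```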
  • In a preferred embodiment, the first grayscale gradient strip is projected onto the non-edge regions of the object, and the entire strip has no repeating texture along the grayscale gradient direction: the gray levels of adjacent pixels along the gradient direction differ, increasing or decreasing monotonically.
  • In a preferred embodiment, the second grayscale gradient strip is projected along the object edges; its width is D, spanning the object edge, and its grayscale gradient direction is consistent with the edge. The entire strip likewise has no repeating texture along the gradient direction: the gray levels of adjacent pixels differ, increasing or decreasing monotonically.
  • In a preferred embodiment, the pixel gray levels of the first and second grayscale gradient stripes do not coincide in the regions where they adjoin.
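  • The following sketch generates two such strips (ours; the 0-128 and 128-256 ranges follow the example above, and the sizes are arbitrary). Strict monotonicity guarantees that no gray value repeats along the gradient direction, so each pixel's position within the strip is uniquely decodable.

```python
import numpy as np

def gradient_strip(length: int, lo: int, hi: int) -> np.ndarray:
    """1-D grayscale ramp over [lo, hi): strictly increasing, so adjacent
    pixels always differ and the texture never repeats. Requires
    length <= hi - lo (at least one gray level per pixel)."""
    assert length <= hi - lo, "not enough gray levels for a monotonic ramp"
    ramp = lo + np.round(np.linspace(0, hi - lo - 1, num=length))
    return ramp.astype(np.uint8)

# First strip (non-edge regions): gray values in 0-128.
first_strip = np.tile(gradient_strip(128, 0, 128), (480, 1))
# Second strip (edge regions): width D across the edge, gradient running
# along the edge direction, gray values in 128-256.
D = 16
second_strip = np.tile(gradient_strip(128, 128, 256), (D, 1))

assert np.all(np.diff(first_strip[0].astype(int)) > 0)  # strictly monotonic
```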
  • The image-content-based structured light projection method and depth detection method of the present application may also be applied to image detection for object tracking: after the first structured light is projected and the object image acquired, the first structured light is projected onto the object again and another image acquired; by repeatedly projecting the first structured light and acquiring multiple object images, a continuously tracked image sequence of the object is obtained.
  • The present application also provides a structured light projection device, comprising:
  • a processor, and a storage device adapted to store a plurality of instructions, the instructions being adapted to be loaded and executed by the processor to perform:
  • acquiring an image of the object that is not affected by external light, and analyzing it to obtain the object edges and the non-edge regions of the object;
  • projecting first structured light, which includes projecting a first grayscale gradient strip onto the non-edge regions of the object and a second grayscale gradient strip onto the edge regions.
  • In a preferred embodiment, acquiring the image of the object that is not affected by external light includes: subtracting the two acquired frames of images to obtain an image of the object that is not affected by external light.
  • In a preferred embodiment, analyzing the image of the object that is not affected by external light further comprises obtaining the highlight regions of the image, wherein in the projected first structured light, the first grayscale gradient strip projected onto the non-edge regions of the object comprises highlight-portion structured light projected onto the highlight regions and object-portion structured light projected onto the non-highlight regions, the highlight-portion structured light being darker than the light previously projected onto the highlight regions.
  • In a preferred embodiment, the highlight-portion structured light projected onto a highlight region is darker than the light projected onto that region the previous time, namely half the brightness of the previously projected light.
  • In a preferred embodiment, the step of analyzing the image of the object that is not affected by external light to obtain its highlight regions includes blooming detection: examining the gray values of the pixels, and when the number n of adjacent pixels with gray value 255 is greater than or equal to a preset threshold x, determining that the area occupied by those adjacent pixels of gray value 255 is a highlight region; or, when the ratio of the number n of adjacent pixels with gray value 255 to the total number of pixels in the object image exceeds a preset threshold y, determining that the area occupied by those adjacent pixels of gray value 255 is a highlight region.
  • In a preferred embodiment, the instructions stored by the storage device further include, after the first structured light is projected: projecting a first grayscale gradient strip onto the non-edge regions of the object and a second grayscale gradient strip onto the edge regions, wherein the first grayscale gradient strip comprises highlight-portion structured light projected onto the highlight regions and object-portion structured light projected onto the non-highlight regions, the highlight-portion light being darker than the light projected onto the highlight regions the previous time.
  • In a preferred embodiment, the step of determining, from the object image acquired after the previous frame of structured light, whether the object image contains a highlight region includes: examining the gray values of the pixels, and when the number n of adjacent pixels with gray value 255 is greater than or equal to a preset threshold x, determining that the area occupied by those adjacent pixels of gray value 255 is a highlight region; or, when the ratio of the number of adjacent pixels with gray value 255 to the total number of pixels in the object image exceeds a preset threshold y, determining that the area occupied by those adjacent pixels of gray value 255 is a highlight region.
  • In a preferred embodiment, subtracting the two acquired frames of images to obtain an object image that is not affected by external light includes: subtracting the pixel-by-pixel grayscale of the first frame image from the pixel-by-pixel grayscale of the second frame image to obtain the subtracted pixel grayscale.
  • In a preferred embodiment, the first grayscale gradient strip is projected onto the non-edge regions of the object, and the entire strip has no repeating texture along the grayscale gradient direction: the gray levels of adjacent pixels along the gradient direction differ, increasing or decreasing monotonically.
  • In a preferred embodiment, the second grayscale gradient strip is projected along the object edges; its width is D, spanning the object edge, its grayscale gradient direction is consistent with the edge, and the entire strip has no repeating texture along the gradient direction: the gray levels of adjacent pixels differ, increasing or decreasing monotonically.
  • In a preferred embodiment, the first and second grayscale gradient stripes use different gradient stripe arrangements, and their pixel gray levels do not coincide in the regions where they adjoin.
  • In summary, the image-content-based structured light projection method and depth detection method of the present application first acquire an object image that is not affected by external light and use different structured light for the object edges and the non-edge regions. After the first structured light is projected, an image of the object is acquired; the depth variation of the non-edge regions and the depth contour of the edges are obtained as far as possible, the resulting depth image is more accurate, and the acquired coded image blocks can conveniently be classified during solving.
  • The method of the present application can obtain very fine structured light; decoding the structured light is not difficult, few projection frames are needed, and interference from external light can be effectively eliminated.
  • FIG. 1 is a basic schematic diagram of depth detection;
  • FIG. 2 is a schematic diagram of an embodiment of the image-content-based structured light projection method of the present application;
  • FIG. 3 is a schematic comparison between the projected light and the image obtained after projection in the present application.
  • An embodiment of the image-content-based structured light projection method of the present application includes projecting the first grayscale gradient strip onto the non-edge regions of the object, where the strip comprises highlight-portion structured light projected onto the highlight regions and object-portion structured light projected onto the non-highlight regions, the highlight-portion light being darker than the light previously projected onto the highlight regions.
  • In this embodiment, the highlight-portion structured light projected onto the highlight regions is darker than the globally projected white light, that is, darker than the brightness of the previously projected light.
  • Making the highlight-portion structured light projected onto the highlight regions darker than the previous projection brightness yields a more accurate depth image after the structured light is projected, and effectively eliminates interference from external light.
  • In this embodiment, for pixels in a highlight region whose gray value is 255, highlight-portion structured light of brightness 128 is projected onto the region.
  • In this embodiment, edge detection uses the Canny edge detection algorithm (the Canny algorithm for short).
  • The Canny algorithm is a multi-stage edge detection algorithm developed by John F. Canny in 1986.
  • The purpose of edge detection is to significantly reduce the data size of an image while preserving its original structural attributes.
  • Although the Canny algorithm is long established, it can be regarded as the standard edge detection algorithm and is still widely used in research.
  • Canny's goal was to find an optimal edge detection algorithm, where optimal edge detection means:
  • Optimal detection: the algorithm identifies as many of the actual edges in the image as possible, with the probability of missing a true edge and the probability of falsely detecting a non-edge both as small as possible;
  • Optimal localization: the detected edge points lie as close as possible to the actual edge points, i.e. the extent to which the detected edge deviates from the true edge of the object under the influence of noise is minimal;
  • One-to-one correspondence between detection points and edge points: each edge point detected by the operator should correspond to exactly one actual edge point.
  • To satisfy these requirements, Canny used the calculus of variations, a method for finding the function that optimizes a given functional.
  • The optimal detector is expressed as the sum of four exponential terms, but it closely approximates the first derivative of a Gaussian.
  • The Canny edge detection algorithm can be divided into the following five steps: (1) smoothing the image with a Gaussian filter; (2) computing the gradient magnitude and direction; (3) applying non-maximum suppression to the gradient magnitude; (4) double thresholding to identify strong and weak edge candidates; (5) tracking edges by hysteresis, keeping weak edges only where they connect to strong ones.
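  • As a concrete illustration (our example; OpenCV is not mentioned in the patent), `cv2.Canny` bundles steps 2-5 behind a single call, with the Gaussian smoothing of step 1 done beforehand:

```python
import cv2

# Step 1: Gaussian filtering to smooth the object image.
gray = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)
blurred = cv2.GaussianBlur(gray, (5, 5), sigmaX=1.4)

# Steps 2-5: gradient computation, non-maximum suppression, double
# thresholding, and hysteresis edge tracking, all inside cv2.Canny.
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)
cv2.imwrite("edges.png", edges)
```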
  • In this embodiment, the step of analyzing the object image to obtain the object edges and highlight regions includes performing edge detection and blooming (highlight overflow) detection.
  • In this embodiment, blooming detection includes: examining the gray values of the pixels, and when the number n of adjacent pixels with gray value 255 is greater than or equal to a preset threshold x, determining that the area occupied by those adjacent pixels of gray value 255 is a highlight region.
  • In this embodiment, the preset threshold y is 5%; other values may also be used.
  • In this embodiment, the first and second grayscale gradient strips are both grayscale structured light strips.
  • In this embodiment, the first grayscale gradient strip is projected onto the non-edge regions of the object, and the entire strip has no repeating texture along the grayscale gradient direction; that is, along the gradient direction no two pixels share the same gray level, adjacent pixels differ in gray level, and the difference takes the form of gray levels that increase or decrease monotonically along the gradient direction.
  • In this embodiment, the second grayscale gradient strip is projected along the object edges; its width is D, spanning the object edge, and its grayscale gradient direction is consistent with the direction in which the edge extends. The entire strip has no repeating texture along the gradient direction; that is, no two pixels share the same gray level, adjacent pixels differ, and the difference takes the form of gray levels that increase or decrease monotonically along the gradient direction.
  • In this embodiment, the first and second grayscale gradient strips occupy different gray value ranges: the gray values of the second strip lie in 128-256 and those of the first strip in 0-128, or other distinct value ranges may be used.
  • In this embodiment, the first and second grayscale gradient strips use different gradient stripe arrangements, and their pixel gray levels do not coincide in the regions where they adjoin; the two strips can therefore be distinguished from each other, so that the object edge depth and the object surface depth can both be computed more accurately.
  • Step (5) above may be repeated, projecting the first structured light several times to obtain multiple images of the object at different points in time.
  • When the application is used for high-precision depth detection of an object, the structured light may be further optimized on the basis of step (5) and projected again, acquiring further images; the multiple images obtained are then superimposed to solve for the object depth.
  • The number of structured light projections can be decided from the image obtained from the previous projection; a specific implementation follows.
  • Step (5) acquires the third frame image; the method further comprises deciding, from the third frame image, whether to project second structured light and acquire a fourth frame image.
  • Deciding from the third frame image whether to project second structured light and acquire a fourth frame image includes determining whether the third frame image contains a highlight region. If it does, second structured light is projected, comprising: a first grayscale gradient strip projected onto the non-edge regions of the object and a second grayscale gradient strip projected onto the edge regions, where the first grayscale gradient strip comprises highlight-portion structured light projected onto the highlight regions and object-portion structured light projected onto the non-highlight regions, the highlight-portion structured light being darker than the highlight-portion structured light that the first structured light projected onto the highlight regions.
  • In this embodiment, the highlight portion of the second structured light is projected at brightness 128.
  • Determining whether the third frame image contains a highlight region includes determining whether it has pixels with gray value 255. It may be judged to contain a highlight region when the number n of such pixels is greater than 1; or, when the number n of adjacent pixels with gray value 255 is greater than or equal to a preset threshold x, the area occupied by those pixels is determined to be a highlight region; or, when the ratio of the number n of adjacent pixels with gray value 255 in the third frame image to the total number of pixels in the object image exceeds a preset threshold y, the area occupied by those adjacent pixels of gray value 255 is determined to be a highlight region.
  • In this embodiment, the preset threshold y is 5%; other values may also be used.
  • After the second structured light is projected, the method further includes deciding, from the fourth frame image, whether to project third structured light and acquire a fifth frame image.
  • Deciding from the fourth frame image whether to project third structured light and acquire a fifth frame image includes determining whether the fourth frame image contains a highlight region. If it does, third structured light is projected, comprising: a first grayscale gradient strip projected onto the non-edge regions of the object and a second grayscale gradient strip projected onto the edge regions, where the first grayscale gradient strip comprises highlight-portion structured light projected onto the highlight regions and object-portion structured light projected onto the non-highlight regions, the highlight-portion structured light being darker than the highlight-portion structured light that the second structured light projected onto the highlight regions.
  • In this embodiment, the highlight portion of the third structured light is projected at brightness 64.
  • Determining whether the fourth frame image contains a highlight region includes determining whether it has pixels with gray value 255. It may be judged to contain a highlight region when the number n of such pixels is greater than 1; or, when the number n of adjacent pixels with gray value 255 is greater than or equal to a preset threshold x, the area occupied by those pixels is determined to be a highlight region; or, when the ratio of the number n of adjacent pixels with gray value 255 in the fourth frame image to the total number of pixels in the object image exceeds a preset threshold y, the area occupied by those adjacent pixels of gray value 255 is determined to be a highlight region.
  • In this embodiment, the preset threshold y is 5%; other values may also be used.
  • Once the captured image no longer contains a highlight region, the projected brightness of the highlight portion need not be reduced further.
  • The method of the present application can obtain very fine structured light; decoding the structured light is not difficult, few frames are projected, and resistance to external light interference is good.
  • The projection light source is controllable (i.e., it can be made to output any pattern); most structured light sensors adopt this solution, and the projection light source is generally a common projector such as a laser projector or a DMD projector.
  • The present application also provides a structured light projection device, comprising:
  • a processor, and a storage device adapted to store a plurality of instructions, the instructions being adapted to be loaded and executed by the processor to perform:
  • acquiring an image of the object that is not affected by external light, and analyzing it to obtain the object edges and the non-edge regions of the object;
  • projecting first structured light, which includes projecting a first grayscale gradient strip onto the non-edge regions of the object and a second grayscale gradient strip onto the edge regions.
  • The present application also provides a depth detection device, comprising:
  • a processor, and a storage device adapted to store a plurality of instructions, the instructions being adapted to be loaded and executed by the processor to perform:
  • acquiring an image of the object that is not affected by external light, and analyzing it to obtain the object edges and the non-edge regions of the object;
  • projecting first structured light, including respectively projecting a first grayscale gradient strip onto the non-edge regions of the object and a second grayscale gradient strip onto the edge regions;

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to an image-content-based structured light projection method, comprising: first, acquiring an image of an object that is not affected by external light; projecting first structured light, which comprises projecting a first grayscale gradient strip onto a non-edge region of the object and projecting a second grayscale gradient strip onto an edge region; and acquiring an image of the object after the first structured light has been projected. The depth variation of the non-edge region of the object and the depth contour of the object edge can be obtained, and the resulting depth image is more accurate, which also makes it possible to classify the coded image blocks obtained during computation. The invention further relates to a depth detection method and a structured light projection apparatus using same. With this method, very fine structured light can be obtained, the computation relating to the structured light is not difficult, there are not many projection frames, and interference caused by external light can be effectively eliminated.
PCT/CN2017/116586 2017-12-15 2017-12-15 Image-content-based structured light projection method, depth detection method, and structured light projection apparatus Ceased WO2019113968A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2017/116586 WO2019113968A1 (fr) 2017-12-15 2017-12-15 Image-content-based structured light projection method, depth detection method, and structured light projection apparatus
CN201780034793.0A CN109661683B (zh) 2017-12-15 2017-12-15 Image-content-based projected structured light method, depth detection method, and structured light projection device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/116586 WO2019113968A1 (fr) 2017-12-15 2017-12-15 Image-content-based structured light projection method, depth detection method, and structured light projection apparatus

Publications (1)

Publication Number Publication Date
WO2019113968A1 true WO2019113968A1 (fr) 2019-06-20

Family

ID=66110528

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/116586 Ceased WO2019113968A1 (fr) 2017-12-15 2017-12-15 Image-content-based structured light projection method, depth detection method, and structured light projection apparatus

Country Status (2)

Country Link
CN (1) CN109661683B (fr)
WO (1) WO2019113968A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115359175A (zh) * 2022-06-30 2022-11-18 Nanjing University of Science and Technology Intelligent Computational Imaging Research Institute Co., Ltd. Adaptive projection method based on three-dimensional remapping of highlight boundaries

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111526303B (zh) * 2020-04-30 2022-05-24 Changchun Changguang Chenxin Optoelectronics Technology Co., Ltd. Method for removing background light in structured light imaging
CN113112432A (zh) * 2021-05-13 2021-07-13 Guangzhou Daoyi Science and Technology Co., Ltd. Method for automatically identifying image stripes

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104581124A (zh) * 2013-10-29 2015-04-29 Thomson Licensing Method and apparatus for generating a depth map of a scene
CN104809698A (zh) * 2015-03-18 2015-07-29 Harbin Engineering University Kinect depth image restoration method based on improved trilateral filtering
US20160247287A1 (en) * 2015-02-23 2016-08-25 Canon Kabushiki Kaisha Information processing apparatus, information processing method, and storage medium
CN106504284A (zh) * 2016-10-24 2017-03-15 Chengdu Tongjia Youbo Technology Co., Ltd. Depth map acquisition method combining stereo matching and structured light
CN106651941A (zh) * 2016-09-19 2017-05-10 Shenzhen Orbbec Co., Ltd. Depth information acquisition method and depth measurement system
CN106651938A (zh) * 2017-01-17 2017-05-10 Hunan Youxiang Technology Co., Ltd. Depth map enhancement method fusing a high-resolution color image
CN106875435A (zh) * 2016-12-14 2017-06-20 Shenzhen Orbbec Co., Ltd. Method and system for acquiring a depth image
CN107424186A (zh) * 2016-05-19 2017-12-01 Wistron Corporation Depth information measurement method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101087413B (zh) * 2006-06-07 2010-05-12 ZTE Corporation Method for segmenting moving objects in a video sequence

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104581124A (zh) * 2013-10-29 2015-04-29 Thomson Licensing Method and apparatus for generating a depth map of a scene
US20160247287A1 (en) * 2015-02-23 2016-08-25 Canon Kabushiki Kaisha Information processing apparatus, information processing method, and storage medium
CN104809698A (zh) * 2015-03-18 2015-07-29 Harbin Engineering University Kinect depth image restoration method based on improved trilateral filtering
CN107424186A (zh) * 2016-05-19 2017-12-01 Wistron Corporation Depth information measurement method and device
CN106651941A (zh) * 2016-09-19 2017-05-10 Shenzhen Orbbec Co., Ltd. Depth information acquisition method and depth measurement system
CN106504284A (zh) * 2016-10-24 2017-03-15 Chengdu Tongjia Youbo Technology Co., Ltd. Depth map acquisition method combining stereo matching and structured light
CN106875435A (zh) * 2016-12-14 2017-06-20 Shenzhen Orbbec Co., Ltd. Method and system for acquiring a depth image
CN106651938A (zh) * 2017-01-17 2017-05-10 Hunan Youxiang Technology Co., Ltd. Depth map enhancement method fusing a high-resolution color image

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115359175A (zh) * 2022-06-30 2022-11-18 Nanjing University of Science and Technology Intelligent Computational Imaging Research Institute Co., Ltd. Adaptive projection method based on three-dimensional remapping of highlight boundaries

Also Published As

Publication number Publication date
CN109661683A (zh) 2019-04-19
CN109661683B (zh) 2020-09-15

Similar Documents

Publication Publication Date Title
US10142612B2 (en) One method of binocular depth perception based on active structured light
US20070176927A1 (en) Image Processing method and image processor
JPH11288459A (ja) 顔のような領域を検出する方法および装置、ならびに観察者トラッキングディスプレイ
CN108846819B (zh) 激光切割参数获取方法及装置、电子设备、存储介质
WO2014058248A1 (fr) Appareil de contrôle d'images pour estimer la pente d'un singleton, et procédé à cet effet
WO2011000225A1 (fr) Procédé et appareil de détection de cible et dispositif d'acquisition d'image
WO2011065671A2 (fr) Appareil et procédé de détection d'un sommet d'une image
JP5342413B2 (ja) 画像処理方法
WO2019113968A1 (fr) Procédé de projection de lumière structurée à base de contenu d'image, procédé de détection de profondeur et appareil de projection de lumière structurée
EP2793172B1 (fr) Appareil de traitement d'images, procédé et programme de traitement d'images
WO2016180246A1 (fr) Procédé et dispositif d'usinage au laser pour des saphirs et support d'informations
JP2005172559A (ja) パネルの線欠陥検出方法及び装置
CN100425062C (zh) 空间信息检测设备
WO2010008134A2 (fr) Procédé de traitement d'image
JP3570198B2 (ja) 画像処理方法およびその装置
KR20040100963A (ko) 화상 처리 장치
WO2018072172A1 (fr) Procédé et appareil d'identification de formes dans des images, dispositif et support de stockage informatique
CN112203390A (zh) 一种提取激光等离子体轮廓的方法
CN111536895B (zh) 外形识别装置、外形识别系统以及外形识别方法
WO2017034323A1 (fr) Dispositif et procédé de traitement d'image pour améliorer de manière adaptative un faible niveau d'éclairage, et dispositif de détection d'objet l'utilisant
JP7007324B2 (ja) 画像処理装置、画像処理方法、及びロボットシステム
KR102855229B1 (ko) 광학적 객체 인식을 이용한 반셔터 기능 수행 방법 및 이를 이용한 영상 촬영 방법
JP2565585B2 (ja) スクライブライン交差領域の位置検出方法
US11943422B2 (en) Three-dimensional image-capturing device and image-capturing condition adjusting method
JPH01315884A (ja) パターン追跡方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17934965

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17934965

Country of ref document: EP

Kind code of ref document: A1