
WO2019080061A1 - 基于摄像设备的遮挡检测修复装置及其遮挡检测修复方法 - Google Patents

基于摄像设备的遮挡检测修复装置及其遮挡检测修复方法

Info

Publication number
WO2019080061A1
WO2019080061A1 (PCT/CN2017/107875)
Authority
WO
WIPO (PCT)
Prior art keywords
image
occlusion
camera
color
repair
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2017/107875
Other languages
English (en)
French (fr)
Inventor
谢俊
赵聪
杨松龄
陈爽新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Royole Technologies Co Ltd
Original Assignee
Shenzhen Royole Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Royole Technologies Co Ltd filed Critical Shenzhen Royole Technologies Co Ltd
Priority to CN201780092103.7A priority Critical patent/CN110770786A/zh
Priority to PCT/CN2017/107875 priority patent/WO2019080061A1/zh
Publication of WO2019080061A1 publication Critical patent/WO2019080061A1/zh
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/77Retouching; Inpainting; Scratch removal
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour

Definitions

  • the present invention relates to an image pickup apparatus, and more particularly to an occlusion detection and repair apparatus based on an image pickup apparatus and an occlusion detection and repair method thereof.
  • The camera quality of an electronic device is one of the main considerations for consumers choosing to purchase one; in other words, excellent image quality becomes a major selling point of the device. However, when an existing electronic device takes a picture, if a target object such as a finger or a stain blocks its camera, the captured image will contain a dark area, and the rate of usable shots is low.
  • The embodiment of the invention discloses an occlusion detection and repair device and an occlusion detection and repair method thereof, which can detect occlusion and repair the occluded region of the captured image, effectively improving the rate of usable shots with a good repair effect.
  • The occlusion detection and repair device based on an imaging device disclosed in an embodiment of the invention includes: an image capturing unit that captures a first image; a detecting module that detects whether a target object within a preset distance of the camera unit is present in the framing range of the camera unit; a memory storing the framing range, the preset distance, and a preset matching degree, the memory further storing a plurality of second images captured by the camera unit; and a processor to which the camera unit, the detection module, and the memory are each electrically connected. The processor is configured to: calculate an occlusion region of the first image when the detection module detects a target object within the preset distance inside the framing range of the camera unit; extract first feature points of the first image; acquire the plurality of second images and extract from each a second feature point corresponding to the first feature points; calculate a matching degree between the second feature points of each second image and the first feature points of the first image, and select one second image whose matching degree satisfies the preset matching degree as a repair source image; and calculate the position in the repair source image corresponding to the occlusion region, and repair the occlusion region of the first image using the image of that region in the repair source image.
  • The occlusion processing method based on an imaging device disclosed in an embodiment of the present invention includes the steps of: capturing a first image; detecting whether a target object within a preset distance of an image capturing unit is present in its framing range; calculating an occlusion region of the first image when such a target object is detected; extracting first feature points of the first image; acquiring a plurality of second images, and extracting from each second image a second feature point corresponding to the first feature points; calculating a matching degree between the second feature points of each second image and the first feature points of the first image, and selecting one second image whose matching degree satisfies the preset matching degree as a repair source image; and calculating the position in the repair source image corresponding to the occlusion region, and repairing the occlusion region of the first image using the image of the region in the repair source image that corresponds to it.
  • A computer readable storage medium stores a plurality of program instructions which, when executed by a processor, perform the steps of: capturing a first image; detecting whether a target object within a preset distance of the camera unit is present in its framing range; calculating an occlusion region of the first image when such a target object is detected; extracting first feature points of the first image; acquiring a plurality of second images, and extracting from each second image a second feature point corresponding to the first feature points; calculating a matching degree between the second feature points of each second image and the first feature points of the first image, and selecting one second image whose matching degree satisfies the preset matching degree as a repair source image; and calculating the position in the repair source image corresponding to the occlusion region, and repairing the occlusion region of the first image using the image of the region in the repair source image that corresponds to it.
  • With the occlusion detection and repair device and method of the present invention, when the detection unit detects that the camera unit is occluded, images previously captured by the camera unit are used to repair the occluded region. Occlusion can thus be detected in time to avoid its continuation, and even when occlusion does occur it can be repaired promptly, improving the rate of usable shots with a good repair effect.
  • FIG. 1 is a structural block diagram of an occlusion detection and repair apparatus according to an embodiment of the present invention.
  • FIG. 2 is a schematic diagram of an imaging unit and a detection module of an occlusion detection and repair device according to an embodiment of the invention.
  • FIG. 3 is a schematic diagram of an imaging unit and a detection module of an occlusion detection and repair device according to another embodiment of the present invention.
  • FIG. 4 is a schematic diagram of a minimum shooting distance, a common portion, and a non-common portion when an image is taken when the camera unit is a binocular camera according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of image comparison of the detection module of the occlusion detection and repair device when the detection module is an image detection unit according to an embodiment of the invention.
  • FIG. 6 is a flowchart of an occlusion detection and repair method according to an embodiment of the present invention.
  • Figure 7 is a sub-flow diagram of step S602 of Figure 6 in an embodiment.
  • Figure 8 is a sub-flow diagram of step S602 of Figure 6 in another embodiment.
  • FIG. 1 is a structural block diagram of an occlusion detection and repair device 100 based on an imaging device according to an embodiment of the present invention.
  • the occlusion detection repair device 100 is applied to an electronic device.
  • The electronic device includes but is not limited to a camera, a mobile phone, a tablet computer, a notebook computer, or a desktop computer, and may also be a wearable device such as a smart helmet or smart glasses.
  • the occlusion detection and repair device 100 includes a processor 10, a memory 20, an imaging unit 30, and a detection module 40.
  • the memory 20, the camera unit 30, and the detection module 40 are electrically connected to the processor 10, respectively.
  • the camera unit 30 is configured to take a photo or video for a shooting scene to obtain image or video information of the shooting scene. Specifically, the camera unit 30 is configured to capture a first image.
  • the camera unit 30 may include at least one camera 31.
  • the camera unit 30 includes a camera 31.
  • In other embodiments, the camera unit 30 includes two cameras 31, i.e., a binocular camera. It can be understood that in still other embodiments the camera unit 30 can include three or more cameras 31, set according to actual needs.
  • the memory 20 is configured to store a framing range, a preset distance, and a preset matching degree.
  • the framing range is a framing range of the imaging unit 30 at the time of shooting.
  • the preset distance is a preset distance between the camera unit 30 and the target when the target object is photographed.
  • The matching degree is the similarity between two images: the higher the similarity, the higher the matching degree, and the lower the similarity, the lower the matching degree.
  • The preset matching degree is satisfied when the similarity between the two images reaches a predetermined level.
  • the memory also stores a plurality of second images.
  • the plurality of second images are images captured by the imaging unit 30.
  • The detecting module 40 is configured to detect whether a target object in the framing range of the image capturing unit 30 is blocking or about to block a camera 31 of the camera unit 30.
  • the processor 10 is configured to calculate an occlusion region of the first image when the detection module 40 detects the target within the framing range of the camera unit 30.
  • the detection module 40 includes an inductive detection unit 41.
  • the sensing detection unit 41 is electrically connected to the processor 10 .
  • The sensing detection unit 41 is configured to generate a sensing signal containing occlusion position information when it detects that a target object is approaching the imaging unit 30 or has already blocked all or part of a camera 31 of the imaging unit 30.
  • the processor 10 is configured to calculate an occlusion region of the first image according to occlusion position information in the sensing signal.
  • the sensing detection unit 41 includes at least one proximity sensor 411.
  • the proximity sensor 411 can also be a distance sensor.
  • the at least one proximity sensor 411 is disposed within a preset distance range around the camera 31.
  • the proximity sensor 411 is disposed between the two cameras 31.
  • The processor 10 calculates the occlusion region of the first image from the position information of the proximity sensor 411 that sensed the target object. Specifically, when the imaging unit 30 includes one camera 31, the at least one proximity sensor 411 is disposed within a preset distance range around that camera 31; when the imaging unit 30 includes two cameras 31, the at least one proximity sensor 411 may also be disposed between the two adjacent cameras 31.
  • the sensing detection unit 41 includes a touch module 413 mounted on the at least one camera 31 of the camera unit 30 .
  • the touch module 413 can be a flexible touch unit.
  • The touch module 413 generates a sensing signal containing touch position coordinates when it senses a target object in contact with it.
  • the processor 10 calculates an occlusion region of the first image according to the touch position coordinates in the sensing signal.
  • the touch module 413 is also disposed within a preset distance range around the camera 31 of the camera unit 30.
  • The touch module 413 generates a sensing signal containing touch position coordinates when it senses a target object within the preset distance range around the camera 31.
  • the processor 10 determines that the camera unit 30 is about to be blocked according to the touch position coordinates, and issues an occlusion reminder.
  • the detection module 40 further includes an imaging detection unit 43.
  • the imaging detecting unit 43 is the imaging unit 30 itself.
  • the imaging unit 30 captures a third image.
  • the third image is an image captured by the imaging unit 30 prior to the first image and stored in the memory 20.
  • the memory 20 also stores a color threshold, a first difference threshold, and a connected number.
  • The color threshold is the maximum color value of an image captured while the camera 31 is blocked.
  • the first difference threshold is a difference between color values of corresponding positions of the two images.
  • The connected count is the number of mutually connected blocks whose difference is greater than or equal to the first difference threshold.
  • The processor 10 is further configured to cut the first image into a plurality of first small blocks M of a predetermined size, and to cut the third image into a plurality of second small blocks N of the same size, each first small block M corresponding to one second small block N; that is, a first small block M and its corresponding second small block N are images taken by the camera 31 at the same shooting position.
  • the processor 10 is further configured to calculate a color average of each of the first small block M and each of the second small blocks N.
  • the processor 10 is further configured to determine whether a color average value of each of the first small blocks M is smaller than the color threshold, that is, whether the color of the first small block M is dark.
  • The processor 10 is further configured to determine whether the difference between the color average of each first small block M and that of its corresponding second small block N is smaller than the first difference threshold, i.e., whether the first small block M is close in color to its corresponding second small block N; where the color change is small, the area may be occluded.
  • The processor 10 is further configured to mark the first small block M when its color average is smaller than the color threshold and its corresponding difference is smaller than the first difference threshold.
  • The processor 10 is further configured to determine, when the number of marked and mutually connected first small blocks M is greater than or equal to the connected count, that the area where these first small blocks M are located is the occlusion region, which forms a dark area. It can be understood that two marked first small blocks M are connected when they are adjacent, and that the first image and the third image described above are images taken by the same camera 31 of the imaging unit 30.
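The block-comparison scheme described above can be expressed in code. The following is an illustrative sketch only, not the patent's implementation: the block size, color threshold, first difference threshold, and connected count are assumed values, and grayscale arrays stand in for the color averages of the text.

```python
import numpy as np

def detect_occlusion(first, third, block=16, color_thresh=60.0,
                     diff_thresh=10.0, min_connected=4):
    """Return a per-block boolean mask of the occlusion region.

    `first` and `third` are grayscale (H, W) frames from the same camera.
    A block is marked when it is dark (mean below color_thresh) and its
    color barely changed versus the earlier third image; marked blocks
    are then grouped by 4-connectivity, and only components at least
    min_connected blocks large count as the occlusion region.
    """
    h, w = first.shape
    bh, bw = h // block, w // block
    marked = np.zeros((bh, bw), dtype=bool)
    for i in range(bh):
        for j in range(bw):
            m = first[i*block:(i+1)*block, j*block:(j+1)*block].mean()
            n = third[i*block:(i+1)*block, j*block:(j+1)*block].mean()
            # dark block whose color is close to the corresponding block
            if m < color_thresh and abs(m - n) < diff_thresh:
                marked[i, j] = True
    occluded = np.zeros_like(marked)
    seen = np.zeros_like(marked)
    for i in range(bh):
        for j in range(bw):
            if marked[i, j] and not seen[i, j]:
                stack, comp = [(i, j)], []
                seen[i, j] = True
                while stack:  # flood-fill one connected component
                    y, x = stack.pop()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < bh and 0 <= nx < bw
                                and marked[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                if len(comp) >= min_connected:
                    for y, x in comp:
                        occluded[y, x] = True
    return occluded
```

A stable dark corner present in both frames is reported as occluded, while bright blocks are not.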
  • the camera unit 30 is a binocular camera.
  • In this case the third image and the first image are images taken simultaneously by the binocular camera of the same shooting scene, and the first image and the third image are each divided into a common portion and a non-common portion according to a minimum shooting distance.
  • The occlusion region of the non-common portion is calculated as in the above embodiment, i.e., using an image previously captured by the camera 31 that captured the first image. For the occlusion region of the common portion, however, the following manner can be adopted, described in detail below.
  • the memory 20 also stores a second difference threshold.
  • The processor 10 is further configured to cut the common portion of the first image into a plurality of first small blocks M of a predetermined size, and to cut the common portion of the third image into second small blocks N of the same size, each first small block M corresponding to one second small block N; that is, each first small block M and its corresponding second small block N are images taken by the binocular camera of the same target.
  • the processor 10 is further configured to calculate a color average of each of the first small block M and each of the second small blocks N.
  • the processor 10 is further configured to determine whether a color average value of each of the first small blocks M is smaller than the color threshold, that is, whether the color of the first small block M is dark.
  • The processor 10 is further configured to determine whether the difference between the color average of each first small block M and that of its corresponding second small block N is greater than the second difference threshold, i.e., whether the images taken by the binocular camera of the same target differ greatly, in which case one of the cameras 31 may be blocked.
  • The processor 10 is further configured to mark the first small block M when its color average is smaller than the color threshold and the difference between it and its corresponding second small block N is greater than the second difference threshold.
  • The processor 10 is further configured to determine, when the number of marked and mutually connected first small blocks M is greater than or equal to the connected count, that the area where these first small blocks M are located is the occlusion region.
  • the processor 10 is further configured to repair an occlusion region in the first image.
  • the processor 10 is configured to extract a first feature point of the first image.
  • the processor 10 is configured to extract a first feature point of the first image other than the occlusion region.
  • the processor 10 is configured to acquire the plurality of second images and extract a second feature point of each of the second images.
  • The first feature points and second feature points may be SIFT (Scale Invariant Feature Transform), FAST (Features from Accelerated Segment Test), or ORB (Oriented FAST and Rotated BRIEF) features; at least one may be selected according to actual needs.
  • SIFT Scale Invariant Feature Transform
  • FAST Features from Accelerated Segment Test
  • ORB Oriented FAST and Rotated BRIEF
  • The processor 10 is configured to calculate the matching degree between the second feature points of each second image and the corresponding first feature points of the first image, and to select the second image whose matching degree satisfies the preset matching degree as the repair source image.
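The repair-source selection step can be sketched as follows. This is an illustrative stand-in, not the patent's code: the matching degree is modeled as the fraction of first-image feature descriptors that find a close nearest neighbour among a candidate second image's descriptors, and the descriptor vectors, distance threshold, and preset matching degree are all assumed substitutes for real SIFT/FAST/ORB matching.

```python
import numpy as np

def matching_degree(feat1, feat2, dist_thresh=0.5):
    """Fraction of first-image descriptors (rows of feat1) whose nearest
    neighbour in feat2 lies closer than dist_thresh (Euclidean)."""
    matched = 0
    for f in feat1:
        if np.linalg.norm(feat2 - f, axis=1).min() < dist_thresh:
            matched += 1
    return matched / len(feat1)

def select_repair_source(feat1, candidates, preset=0.8):
    """Index of the candidate second image whose matching degree both
    satisfies the preset matching degree and is highest; None if no
    candidate reaches the preset."""
    best, best_deg = None, preset
    for idx, feat2 in enumerate(candidates):
        deg = matching_degree(feat1, feat2)
        if deg >= best_deg:
            best, best_deg = idx, deg
    return best
```

A candidate whose descriptors nearly coincide with the first image's is chosen; one far away in descriptor space is rejected.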
  • The processor 10 is configured to calculate the position in the repair source image corresponding to the occlusion region, and to repair the occlusion region of the first image using the image of the region in the repair source image that corresponds to it.
  • the processor 10 is configured to perform matrix transformation on an image corresponding to the occlusion region in the repair source image.
  • The matrix transformation describes the conversion relationship between the image corresponding to the occlusion region in the repair source image and the corresponding pixels of the occlusion region, and is equivalent to a perspective transformation.
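The perspective transformation above can be illustrated with a small sketch. It is hedged: the 3x3 matrix H is assumed to be already estimated (in practice it would come from the matched feature points), and only the mapping of occlusion-region pixel coordinates into the repair source image is shown.

```python
import numpy as np

def warp_points(H, pts):
    """Apply a 3x3 perspective (homography) matrix H to Nx2 pixel
    coordinates, mapping occlusion-region pixels of the first image to
    their positions in the repair source image (row-vector convention)."""
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # back to Cartesian
```

For a pure translation homography the mapped coordinates are simply shifted, which is a quick sanity check on the convention used.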
  • The processor 10 is further configured to perform color adjustment on the repair source image before calculating the position of the occlusion region in it, so that the colors of the repair source image and the first image are consistent; the color adjustment includes color gamut adjustment, brightness adjustment, and the like. Because the white balance parameters of the first image and the repair source image may differ, there can be a slight difference between the two images; adjusting the color of the repair source image before repairing the occlusion region therefore yields a better repair.
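One simple way to make the repair source image's color consistent with the first image, as the paragraph above describes, is to match per-channel mean and standard deviation. This is an assumed stand-in for the patent's gamut and brightness adjustment, not its actual procedure.

```python
import numpy as np

def adjust_color(source, reference, eps=1e-6):
    """Shift and scale the repair source image so each channel's mean and
    standard deviation match the reference (first) image."""
    src = source.astype(float)
    ref = reference.astype(float)
    out = np.empty_like(src)
    for c in range(src.shape[-1]):
        s_mu, s_sd = src[..., c].mean(), src[..., c].std()
        r_mu, r_sd = ref[..., c].mean(), ref[..., c].std()
        # normalize the source channel, then rescale to reference statistics
        out[..., c] = (src[..., c] - s_mu) * (r_sd / (s_sd + eps)) + r_mu
    return out
```

After adjustment the repair source's channel statistics match the first image's, so pixels copied into the occlusion region blend without a visible brightness seam.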
  • In some embodiments, the processor 10 is further configured to select, as the repair source image, the second image whose matching degree both satisfies the preset matching degree and is the highest among the second images.
  • The processor 10 is further configured to preferentially select the second image from images previously captured by the camera 31 of the binocular camera that captured the first image; when none of the images previously captured by that camera satisfies the preset matching degree, the second image is selected from images captured by the other camera 31 of the binocular camera.
  • the processor 10 can be a microcontroller, a microprocessor, a single chip, a digital signal processor, or the like.
  • the memory 20 can be a computer readable storage medium such as a memory card, a solid state memory, a micro hard disk, an optical disk, or the like. In some embodiments, the memory 20 stores a number of program instructions that can be executed by the processor 10 to perform the aforementioned functions.
  • FIG. 6 is a flowchart of an occlusion detection and repair method in an embodiment of the present invention.
  • The occlusion detection and repair method is applied to the occlusion detection and repair apparatus 100 described above; the order of execution is not limited to the order shown in FIG. 6.
  • the method includes the steps of:
  • In step S601, it is detected whether a target object is blocking or about to block a camera of the camera unit 30. If yes, the process proceeds to step S602; otherwise, the process ends.
  • In some embodiments, the sensing detection unit 41 generates a sensing signal containing occlusion position information when it detects that a target object is approaching the imaging unit 30 or has already blocked all or part of a camera 31 of the imaging unit 30.
  • The sensing detection unit 41 may be a proximity sensor (or distance sensor) 411 disposed within a preset distance range around the at least one camera 31 and/or between two adjacent cameras 31.
  • the sensing detection unit 41 can also be a touch module 413 mounted on at least one camera 31.
  • the camera unit 30 itself is used to detect whether there is a target that is approaching the camera unit 30 or has partially or completely blocked the camera 31 of the camera unit 30.
  • When it is detected that a target object within the preset distance of the image capturing unit 30 is present in its viewing range, it is determined that the target object is blocking or about to block a camera of the image capturing unit 30.
  • Step S602 calculating an occlusion area of the first image.
  • the processor 10 is configured to calculate an occlusion region of the first image according to occlusion position information in the sensing signal. In other embodiments, the processor 10 is configured to calculate an occlusion region according to the first image captured by the imaging unit 30 and the third image captured previously.
  • Step S603 the processor 10 repairs an occlusion area in the first image.
  • Specifically, the processor 10 extracts first feature points of the first image, acquires the plurality of second images, and extracts from each second image a second feature point corresponding to the first feature points.
  • The processor 10 calculates the matching degree between the second feature points of each second image and the corresponding first feature points of the first image, and selects the second image whose matching degree satisfies the preset matching degree as the repair source image.
  • the processor 10 calculates a position corresponding to the occlusion area in the repair source image, and repairs an occlusion area of the first image by using an image of the occlusion area corresponding to the first image in the repair source image.
  • step S602 includes:
  • step S6021 the processor 10 cuts the first image into a plurality of first small blocks M according to a predetermined size.
  • In step S6022, the processor 10 acquires a third image and cuts it into a plurality of second small blocks N according to the predetermined size, each first small block M corresponding to one second small block N.
  • the third image is an image captured by the imaging unit 30 prior to the first image and stored in the memory 20.
  • the memory 20 also stores a color threshold, a first difference threshold, and a connected number.
  • step S6023 the processor 10 calculates a color average value of each of the first small block M and each of the second small blocks N.
  • step S6024 the processor 10 determines whether the average value of the color of each of the first small blocks M is smaller than the color threshold, and if yes, proceeds to step S6025, otherwise, ends.
  • Step S6025 the processor 10 determines whether the difference between the color average value of each of the first small blocks M and the corresponding color average value of the second small block N is smaller than the first difference threshold. If yes, go to step S6026, otherwise, end.
  • step S6026 the processor 10 marks the first small block M.
  • In step S6027, when the number of marked and mutually connected first small blocks M is greater than or equal to the connected count, the processor 10 determines that the area where these first small blocks M are located is the occlusion region.
  • When the camera unit 30 is a binocular camera, the first image and the third image are images simultaneously captured by the binocular camera of the same shooting scene, and each image is divided into a common portion and a non-common portion according to a minimum shooting distance.
  • the calculation of the occlusion region of the non-common portion is the same as the above embodiment, that is, the calculation of the occlusion region is performed using the image previously captured by the camera 31 that captured the first image.
  • For the occlusion region of the common portion, the manner shown in FIG. 8 can be adopted, described in detail below.
  • step S602 includes:
  • In step S6021', the processor 10 cuts the common portion of the first image into a plurality of first small blocks M according to a predetermined size.
  • In step S6022', the processor 10 cuts the common portion of the third image into a plurality of second small blocks N according to the predetermined size, each first small block M corresponding to one second small block N; that is, each first small block M and its corresponding second small block N are images taken by the binocular camera of the same target.
  • step S6023' the processor 10 calculates a color average of each of the first small block M and each of the second small blocks N.
  • step S6024' the processor 10 determines whether the average value of the color of each of the first small blocks M is smaller than the color threshold, and if so, proceeds to step S6025', otherwise, ends.
  • In step S6025', the processor 10 determines whether the difference between the color average of each first small block M and that of its corresponding second small block N is greater than the second difference threshold; if yes, the process proceeds to step S6026', otherwise it ends.
  • step S6026' the processor 10 marks the first small block M.
  • In step S6027', when the number of marked and mutually connected first small blocks M is greater than or equal to the connected count, the processor determines that the area where these first small blocks M are located is the occlusion region.
  • The present invention further provides a computer readable storage medium storing a plurality of program instructions that are called and executed by the processor 10 to perform the steps of any of the methods of FIGS. 6-8, including calculating the occlusion region in the first image and repairing it.
  • The computer storage medium is the memory 20, and may be any storage device capable of storing information, such as a memory card, solid-state memory, micro hard disk, or optical disk.
  • The occlusion detection and repair device based on an imaging device of the present invention, and its occlusion detection and repair method, can perform occlusion detection when capturing an image and repair the occluded region of the captured image, improving the rate of usable shots with a good repair effect.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

The present application discloses an occlusion detection and repair method based on an imaging device, including the steps of: capturing a first image; detecting whether a target object within a preset distance of a camera unit is present in the framing range of the camera unit; if so, calculating an occlusion region of the first image; extracting first feature points of the first image; acquiring a plurality of second images captured by the camera unit, and extracting from each second image a second feature point corresponding to the first feature points; calculating the matching degree between the second feature points of each second image and the corresponding first feature points of the first image, and selecting one second image whose matching degree satisfies a preset matching degree as a repair source image; and calculating the position in the repair source image corresponding to the occlusion region, and repairing the occlusion region of the first image using the image of the region in the repair source image that corresponds to it. The present application performs occlusion detection and image repair effectively, with a good repair effect and a high rate of usable shots.

Description

基于摄像设备的遮挡检测修复装置及其遮挡检测修复方法 技术领域
本发明涉及一种摄像设备,尤其涉及一种基于摄像设备的遮挡检测修复装置及其遮挡检测修复方法。
背景技术
电子设备的摄像品质是消费者在选择购买电子设备时的主要考虑因素之一。换句话说,如果电子设备具有卓越的摄像品质,将成为所述电子设备的一大卖点。然,现有的电子设备在摄像时,如果有目标物,例如手指、污渍等挡住电子设备的摄像头时,所拍摄的影片将会出现暗区,成片率低。
发明内容
本发明实施例公开一种遮挡检测修复装置及其遮挡检测修复方法,能够检测遮挡及对影片的遮挡区域修复,可有效提高成片率,修复效果好。
本发明实施例公开的基于摄像设备的遮挡检测修复装置。所述遮挡检测修复装置包括:摄像单元,所述摄像单元拍摄第一图像;侦测模组,所述侦测模组侦测在所述摄像单元的取景范围内是否存在与所述摄像单元在预设距离内的目标物;存储器,所述存储器存储所述取景范围、所述预设距离及预设匹配度,所述存储器还存储多个第二图像,所述多个第二图像为所述摄像单元拍摄的图像;及处理器,所述摄像单元、所述侦测模组及所述存储器分别与所述处理器电性连接,所述处理器用于:在所述侦测模组侦测到所述摄像单元的所述取景范围内存在与所述摄像单元在所述预设距离内的目标物时,计算所述第一图像的遮挡区域;提取所述第一图像的第一特征点;获取所述多个第二图像,并从每个所述第二图像中提取与所述第一特征点相对应的第二特征点;计算每个所述第二图像的第二特征点与所述第一图像的第一特征点之间的匹配度,并选择其中一个所述匹配度满足所述预设匹配度的所述第二图像作为修复源图像;及计算所述遮挡区域对应在所述修复源图像中的位置,并采用所述修复源 图像中对应所述第一图像的遮挡区域的图像修复所述第一图像的遮挡区域。
本发明实施例公开的基于摄像设备的遮挡处理方法。所述遮挡处理方法包括步骤:拍摄第一图像;侦测在一摄像单元的取景范围内是否存在与所述摄像单元在预设距离内的目标物;在侦测到所述摄像单元的所述取景范围内存在与所述摄像单元在所述预设距离内的目标物时,计算所述第一图像的遮挡区域;提取所述第一图像的第一特征点;获取多个第二图像,并从每个所述第二图像中提取与所述第一特征点相对应的第二特征点;计算每个所述第二图像的第二特征点与所述第一图像的第一特征点之间的匹配度,并选择其中一个所述匹配度满足所述预设匹配度的所述第二图像作为修复源图像;及计算所述遮挡区域对应在所述修复源图像中的位置,并采用所述修复源图像中对应所述第一图像的遮挡区域的图像修复所述第一图像的遮挡区域。
一种计算机可读存储介质,所述计算机可读存储介质中存储有若干程序指令,所述若干程序指令供处理器调用执行后,执行步骤:拍摄第一图像;侦测在一摄像单元的取景范围内是否存在与所述摄像单元在预设距离内的目标物;在侦测到所述摄像单元的所述取景范围内存在与所述摄像单元在所述预设距离内的目标物时,计算所述第一图像的遮挡区域;提取所述第一图像的第一特征点;获取多个第二图像,并从每个所述第二图像中提取与所述第一特征点相对应的第二特征点;计算每个所述第二图像的第二特征点与所述第一图像的第一特征点之间的匹配度,并选择其中一个所述匹配度满足所述预设匹配度的所述第二图像作为修复源图像;及计算所述遮挡区域对应在所述修复源图像中的位置,并采用所述修复源图像中对应所述第一图像的遮挡区域的图像修复所述第一图像的遮挡区域。
本发明的遮挡检测修复装置及其遮挡检测修复方法,在通过侦测单元侦测到摄像单元有遮挡时,采用所述摄像单元之前拍摄的图像进行遮挡区域修复,能够及时检测到遮挡,避免持续遮挡,并且即使出现遮挡,也能够及时修复,提高成片率,修复效果好。
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for the embodiments are briefly introduced below. Obviously, the drawings described below are merely some embodiments of the present invention, and those of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a structural block diagram of an occlusion detection and repair apparatus according to an embodiment of the present invention.
FIG. 2 is a schematic diagram of the camera unit and the detection module of the occlusion detection and repair apparatus according to an embodiment of the present invention.
FIG. 3 is a schematic diagram of the camera unit and the detection module of the occlusion detection and repair apparatus according to another embodiment of the present invention.
FIG. 4 is a schematic diagram of the minimum shooting distance, the common portion, and the non-common portions of captured images when the camera unit is a binocular camera, according to an embodiment of the present invention.
FIG. 5 is a schematic diagram of image comparison when the detection module of the occlusion detection and repair apparatus is a camera detection unit, according to an embodiment of the present invention.
FIG. 6 is a flowchart of an occlusion detection and repair method according to an embodiment of the present invention.
FIG. 7 is a sub-flowchart of step S602 in FIG. 6 according to an embodiment.
FIG. 8 is a sub-flowchart of step S602 in FIG. 6 according to another embodiment.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
Referring to FIG. 1, a structural block diagram of an occlusion detection and repair apparatus 100 based on a camera device according to an embodiment of the present invention is shown. The occlusion detection and repair apparatus 100 is applied to an electronic device. The electronic device includes, but is not limited to, a video camera, a mobile phone, a tablet computer, a notebook computer, a desktop computer, and the like, and may also be a wearable device such as a smart helmet or smart glasses. The occlusion detection and repair apparatus 100 includes a processor 10, a memory 20, a camera unit 30, and a detection module 40, the memory 20, the camera unit 30, and the detection module 40 each being electrically connected to the processor 10.
The camera unit 30 is configured to photograph or record a shooting scene to acquire image or video information of that scene. Specifically, the camera unit 30 captures the first image. Referring also to FIG. 2, the camera unit 30 may include at least one camera 31. In some embodiments, the camera unit 30 includes one camera 31. In other embodiments, the camera unit 30 includes two cameras 31, i.e., a binocular camera. It will be understood that in still other embodiments the camera unit 30 may include three or more cameras 31, as actual needs dictate.
The memory 20 stores the framing range, the preset distance, and the preset matching degree. The framing range is the framing range of the camera unit 30 when shooting. The preset distance is the preset distance between the camera unit 30 and a target object when the target object is photographed. The matching degree is the similarity between two images: the higher the similarity, the higher the matching degree, and conversely, the lower the similarity, the lower the matching degree. The preset matching degree means that the similarity between two images reaches a predetermined level. The memory also stores a plurality of second images, which are images captured by the camera unit 30.
The detection module 40 detects whether, within the framing range of the camera unit 30, a target object is occluding or about to occlude the camera 31 of the camera unit 30. When the detection module 40 detects a target object within the framing range of the camera unit 30, the processor 10 calculates the occlusion region of the first image.
In some embodiments, the detection module 40 includes a sensing detection unit 41 electrically connected to the processor 10. When the sensing detection unit 41 detects that a target object is about to approach the camera unit 30 or has already covered some or all of the cameras 31 of the camera unit 30, it generates a sensing signal containing occlusion position information. The processor 10 calculates the occlusion region of the first image from the occlusion position information in the sensing signal.
Referring also to FIG. 2, in some embodiments the sensing detection unit 41 includes at least one proximity sensor 411, which may also be a distance sensor. The at least one proximity sensor 411 is arranged within a preset distance range around the camera 31. In another embodiment, the proximity sensor 411 is arranged between two cameras 31. The processor 10 calculates the occlusion region of the first image from the position information of the proximity sensor 411 that senses the target object. Specifically, when the camera unit 30 includes one camera 31, the at least one proximity sensor 411 is arranged within the preset distance range around that camera 31; when the camera unit 30 includes two cameras 31, the at least one proximity sensor 411 may also be arranged between two adjacent cameras 31.
In some embodiments, referring also to FIG. 3, the sensing detection unit 41 includes a touch module 413 mounted on the at least one camera 31 of the camera unit 30. It will be understood that the touch module 413 may be a flexible touch unit. When the touch module 413 senses a target object in contact with it, it generates a sensing signal containing touch position coordinates. The processor 10 calculates the occlusion region of the first image from the touch position coordinates in the sensing signal.
In some embodiments, the touch module 413 is also arranged within the preset distance range around the camera 31 of the camera unit 30. When the touch module 413 senses a target object within the preset distance range around the camera 31, it generates a sensing signal containing touch position coordinates. The processor 10 determines from the touch position coordinates that the camera unit 30 is about to be occluded, and issues an occlusion alert.
In some embodiments, the detection module 40 further includes a camera detection unit 43, which is the camera unit 30 itself. The camera unit 30 captures a third image, which is an image captured by the camera unit 30 before the first image and stored in the memory 20. It will be understood that the memory 20 also stores a color threshold, a first difference threshold, and a connected count. The color threshold is the maximum color value of an image captured when the camera 31 is occluded. The first difference threshold is a threshold on the difference between color values at corresponding positions of two images. The connected count is a threshold on the number of mutually connected blocks that satisfy the marking conditions below.
Referring also to FIGS. 4 and 5, the processor 10 cuts the first image into a number of first small blocks M of a predetermined size, and cuts the third image into a number of second small blocks N of the same predetermined size, each first small block M corresponding to one second small block N; that is, a first small block M and its corresponding second small block N are images captured by the camera 31 at the same shooting position. The processor 10 calculates the average color value of each first small block M and each second small block N. The processor 10 then determines whether the average color value of each first small block M is smaller than the color threshold, i.e., whether the first small block M is dark. The processor 10 also determines whether the difference between the average color value of each first small block M and that of the corresponding second small block N is smaller than the first difference threshold: when the colors of the first small block M and the corresponding second small block N are close, with little color change, that area may be occluded. When the average color value of a first small block M is smaller than the color threshold and the corresponding difference is smaller than the first difference threshold, the processor 10 marks that first small block M. When the number of marked, mutually connected first small blocks M is greater than or equal to the connected count, the processor 10 determines that the region occupied by those first small blocks M is the occlusion region, i.e., a dark region has formed. It will be understood that two marked first small blocks M are connected when they are adjacent, and that the above first and third images are captured by the same camera 31 of the camera unit 30.
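The block-comparison procedure described above can be sketched in plain Python. This is a minimal illustration, not the application's implementation: the greyscale representation, block size, thresholds, and 4-neighbour connectivity rule are illustrative assumptions, and all function names are hypothetical.

```python
def mean_color(image, x0, y0, size):
    """Average grey value of one size x size block of a 2D image (list of rows)."""
    total = 0
    for y in range(y0, y0 + size):
        for x in range(x0, x0 + size):
            total += image[y][x]
    return total / (size * size)

def occluded_regions(first, third, size, color_th, diff_th, min_count):
    """Return connected groups of block coordinates forming occlusion regions.

    A block is marked when it is dark (mean < color_th) and barely changed
    versus the earlier frame (|diff| < diff_th); a marked, 4-connected group
    of at least min_count blocks is reported as an occlusion region.
    """
    h, w = len(first), len(first[0])
    marked = set()
    for y0 in range(0, h, size):
        for x0 in range(0, w, size):
            m1 = mean_color(first, x0, y0, size)
            m2 = mean_color(third, x0, y0, size)
            if m1 < color_th and abs(m1 - m2) < diff_th:
                marked.add((x0 // size, y0 // size))
    # Flood-fill over marked blocks to collect 4-connected components.
    seen, regions = set(), []
    for start in marked:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            bx, by = stack.pop()
            if (bx, by) in seen:
                continue
            seen.add((bx, by))
            comp.add((bx, by))
            for nb in ((bx + 1, by), (bx - 1, by), (bx, by + 1), (bx, by - 1)):
                if nb in marked and nb not in seen:
                    stack.append(nb)
        if len(comp) >= min_count:
            regions.append(comp)
    return regions
```

A covered corner of the frame would thus show up as one connected group of dark, unchanged blocks, while ordinary dark scene content that changes between frames would not be marked.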
In some embodiments, the camera unit 30 is a binocular camera. The third image captured by the camera unit 30 and the first image are images captured simultaneously by the binocular camera of the same shooting scene, and the first image and the third image are divided, according to a minimum shooting distance, into a common portion and non-common portions. For the non-common portions, the occlusion region is calculated as in the above embodiment, i.e., using an image previously captured by the camera 31 that captured the first image. For the common portion, however, the occlusion region may be calculated as follows.
The memory 20 also stores a second difference threshold. The processor 10 cuts the common portion of the first image into a number of first small blocks M of a predetermined size, and cuts the common portion of the third image into a number of second small blocks N of the same predetermined size, each first small block M corresponding to one second small block N; that is, each first small block M and its corresponding second small block N are images of the same target captured by the binocular camera. The processor 10 calculates the average color value of each first small block M and each second small block N. The processor 10 then determines whether the average color value of each first small block M is smaller than the color threshold, i.e., whether the first small block M is dark. The processor 10 also determines whether the difference between the average color value of each first small block M and that of the corresponding second small block N is greater than the second difference threshold: when the images of the same target captured by the two cameras differ greatly, one of the cameras 31 may be occluded. When the average color value of a first small block M is smaller than the color threshold, and the difference between the first small block M and the corresponding second small block N is greater than the second difference threshold, the processor 10 marks that first small block M. When the number of marked, mutually connected first small blocks M is greater than or equal to the connected count, the processor 10 determines that the region occupied by those first small blocks M is the occlusion region.
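The binocular criterion can be contrasted with the single-camera one in a one-line predicate. This is a sketch under the same illustrative assumptions as before (hypothetical names; block means are greyscale averages): here a block is suspicious when it is dark and disagrees strongly with the other camera's simultaneous view of the same region.

```python
def stereo_block_marked(mean_first, mean_third, color_th, diff2_th):
    """True when one camera's block is dark while the other camera sees the
    same scene region very differently, suggesting that camera is covered."""
    return mean_first < color_th and abs(mean_first - mean_third) > diff2_th
```

Note the reversed inequality on the difference compared with the single-camera check: a covered camera now *disagrees* with its uncovered partner, whereas a covered camera *agrees* with its own earlier frame.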
The processor 10 is further configured to repair the occlusion region in the first image.
Specifically, the processor 10 extracts first feature points of the first image. Preferably, the processor 10 extracts first feature points of the first image excluding the occlusion region. The processor 10 acquires the plurality of second images and extracts second feature points of each second image. It will be understood that the first feature points and the second feature points may be SIFT (Scale-Invariant Feature Transform), FAST (Features from Accelerated Segment Test), or ORB (Oriented FAST and Rotated BRIEF) features, among others; at least one type may be selected according to actual needs.
The processor 10 calculates the matching degree between the second feature points of each second image and the corresponding first feature points of the first image, and selects one of the second images whose matching degree satisfies the preset matching degree as the repair source image.
The processor 10 calculates the position in the repair source image corresponding to the occlusion region, and repairs the occlusion region of the first image with the image in the repair source image corresponding to the occlusion region of the first image. Specifically, the processor 10 applies a matrix transformation to the image in the repair source image corresponding to the occlusion region. The matrix transformation describes the mapping between corresponding pixels of the image in the repair source image and of the occlusion region, and is equivalent to a perspective transformation.
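The selection of a repair source image by feature matching can be illustrated with a heavily simplified sketch. Real systems would use SIFT/FAST/ORB descriptors, a proper distance metric, and a perspective (homography) warp of the matched region; here descriptors are reduced to plain coordinate tuples, the matching degree to the fraction of matched points, and all names and thresholds are hypothetical.

```python
def match_ratio(first_feats, second_feats, max_dist=1.0):
    """Fraction of first-image features with a close match in a candidate image."""
    if not first_feats:
        return 0.0
    hits = 0
    for f in first_feats:
        for s in second_feats:
            dist = sum((a - b) ** 2 for a, b in zip(f, s)) ** 0.5
            if dist <= max_dist:
                hits += 1
                break
    return hits / len(first_feats)

def pick_repair_source(first_feats, candidates, preset):
    """Return (index, ratio) of the best candidate whose match ratio reaches
    the preset matching degree, or None when no candidate qualifies."""
    best = None
    for i, feats in enumerate(candidates):
        r = match_ratio(first_feats, feats)
        if r >= preset and (best is None or r > best[1]):
            best = (i, r)
    return best
```

Selecting the candidate with the highest qualifying ratio mirrors the preferred behaviour described later, where the second image with the highest matching degree above the preset threshold becomes the repair source.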
In some embodiments, before calculating the position in the repair source image corresponding to the occlusion region, the processor 10 performs color adjustment on the repair source image so that the colors of the repair source image are consistent with those of the first image, the color adjustment including color-gamut adjustment and brightness adjustment. Since the white-balance parameters of the first image and the repair source image may not be identical, subtle differences may exist between the two; performing color adjustment on the repair source image before repairing the occlusion region therefore produces a better repair.
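A minimal sketch of the brightness side of this color adjustment, under the assumption of a single-channel (greyscale) image and a gain-only correction: the repair source is scaled so that its mean matches the first image's mean. A full implementation would also align white balance per color channel; the function name is hypothetical.

```python
def align_brightness(source, reference):
    """Scale source pixels (2D list) so their mean equals the reference mean,
    clamping to the 8-bit ceiling of 255."""
    flat_s = [v for row in source for v in row]
    flat_r = [v for row in reference for v in row]
    gain = (sum(flat_r) / len(flat_r)) / (sum(flat_s) / len(flat_s))
    return [[min(255, v * gain) for v in row] for row in source]
```

After this step, the patch copied from the repair source blends into the first image without a visible brightness seam at the boundary of the repaired region.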
In some embodiments, the processor 10 is further configured to select, as the repair source image, the second image whose matching degree satisfies the preset matching degree and is the highest.
In some embodiments, when the camera unit 30 is a binocular camera, the processor 10 is further configured to preferentially select the second image from images previously captured by the camera 31 of the binocular camera that captured the first image, and, when none of the images previously captured by that camera 31 satisfies the preset matching degree, to select the second image from images captured by the other camera 31 of the binocular camera.
The processor 10 may be a microcontroller, a microprocessor, a single-chip microcomputer, a digital signal processor, or the like.
The memory 20 may be a computer-readable storage medium such as a memory card, solid-state memory, micro hard disk, or optical disc. In some embodiments, the memory 20 stores a number of program instructions which can be called by the processor 10 to perform the functions described above.
Referring to FIG. 6, a flowchart of the occlusion detection and repair method according to an embodiment of the present invention is shown. The occlusion detection and repair method is applied in the aforementioned occlusion detection and repair apparatus 100; the execution order is not limited to that shown in FIG. 6. The method includes the steps:
Step S601: detect whether a target object is occluding or about to occlude the camera of the camera unit 30; if so, proceed to step S602, otherwise end. In some embodiments, when the sensing detection unit 41 detects that a target object is about to approach the camera unit 30 or has already covered some or all of the cameras 31 of the camera unit 30, it generates a sensing signal containing occlusion position information. The sensing detection unit 41 may be a proximity sensor (or distance sensor) 411 arranged within a preset distance range around the at least one camera 31 and/or between the at least one camera 31, or a touch module 413 mounted on at least one camera 31. In other embodiments, the camera unit 30 itself is used to detect whether a target object is about to approach the camera unit 30 or has already covered some or all of its cameras 31.
Specifically, when it is detected that a target object within the preset distance of the camera unit 30 exists within the framing range of the camera unit 30, it is determined that the target object is occluding or about to occlude the camera of the camera unit 30.
Step S602: calculate the occlusion region of the first image. In some embodiments, the processor 10 calculates the occlusion region of the first image from the occlusion position information in the sensing signal. In other embodiments, the processor 10 calculates the occlusion region from the first image captured by the camera unit 30 and a previously captured third image.
Step S603: the processor 10 repairs the occlusion region in the first image. Specifically, the processor 10 extracts first feature points of the first image, acquires the plurality of second images, and extracts from each second image second feature points corresponding to the first feature points. The processor 10 calculates the matching degree between the second feature points of each second image and the corresponding first feature points of the first image, and selects one of the second images whose matching degree satisfies the preset matching degree as the repair source image. The processor 10 calculates the position in the repair source image corresponding to the occlusion region, and repairs the occlusion region of the first image with the image in the repair source image corresponding to the occlusion region of the first image.
Referring to FIG. 7, a sub-flowchart of step S602 in some embodiments is shown. As shown in FIG. 7, step S602 includes:
Step S6021: the processor 10 cuts the first image into a number of first small blocks M of a predetermined size.
Step S6022: the processor 10 acquires the third image and cuts it into a number of second small blocks N of the predetermined size, each first small block M corresponding to one of the second small blocks N. Specifically, the third image is an image captured by the camera unit 30 before the first image and stored in the memory 20. The memory 20 also stores the color threshold, the first difference threshold, and the connected count.
Step S6023: the processor 10 calculates the average color value of each first small block M and each second small block N.
Step S6024: the processor 10 determines whether the average color value of each first small block M is smaller than the color threshold; if so, proceed to step S6025, otherwise end.
Step S6025: the processor 10 determines whether the difference between the average color value of each first small block M and that of the corresponding second small block N is smaller than the first difference threshold; if so, proceed to step S6026, otherwise end.
Step S6026: the processor 10 marks the first small block M.
Step S6027: when the number of marked, mutually connected first small blocks M is greater than or equal to the connected count, the processor 10 determines that the region occupied by those first small blocks M is the occlusion region.
It will be understood that, when the camera unit 30 is a binocular camera, the first image and the third image are images captured simultaneously by the binocular camera of the same shooting scene, and are divided, according to a minimum shooting distance, into a common portion and non-common portions. For the non-common portions, the occlusion region is calculated as in the above embodiment, i.e., using an image previously captured by the camera 31 that captured the first image. For the common portion, however, the occlusion region may be calculated in the manner shown in FIG. 8, detailed below.
Referring to FIG. 8, a sub-flowchart of step S602 in other embodiments is shown. As shown in FIG. 8, step S602 includes:
Step S6021': the processor 10 cuts the common portion of the first image into a number of first small blocks M of a predetermined size.
Step S6022': the processor 10 cuts the common portion of the third image into a number of second small blocks N of the predetermined size, each first small block M corresponding to one of the second small blocks N; that is, each first small block M and its corresponding second small block N are images of the same target captured by the binocular camera.
Step S6023': the processor 10 calculates the average color value of each first small block M and each second small block N.
Step S6024': the processor 10 determines whether the average color value of each first small block M is smaller than the color threshold; if so, proceed to step S6025', otherwise end.
Step S6025': the processor 10 determines whether the difference between the average color value of each first small block M and that of the corresponding second small block N is greater than the second difference threshold; if so, proceed to step S6026', otherwise end.
Step S6026': the processor 10 marks the first small block M.
Step S6027': when the number of marked, mutually connected first small blocks M is greater than or equal to the connected count, the processor determines that the region occupied by those first small blocks M is the occlusion region.
When a number of program instructions are stored in the memory 20, those program instructions are called and executed by the processor 10 to perform the steps of any of the methods of FIGS. 6-8.
In some embodiments, the present invention also provides a computer-readable storage medium storing a number of program instructions which, after being called and executed by the processor 10, perform any of the method steps of FIGS. 6-8, thereby calculating the occlusion region in the first image and repairing the occlusion region in the first image. In some embodiments, the computer storage medium is the memory 20, which may be any storage device capable of storing information, such as a memory card, solid-state memory, micro hard disk, or optical disc.
Thus, the camera-device-based occlusion detection and repair apparatus and occlusion detection and repair method of the present invention can perform occlusion detection while capturing images and repair the occlusion region of already-captured images, improving the yield of usable shots with a good repair effect.
The above are preferred embodiments of the present invention. It should be noted that those of ordinary skill in the art may make several improvements and refinements without departing from the principles of the present invention, and such improvements and refinements are also regarded as falling within the scope of protection of the present invention.

Claims (20)

  1. An occlusion detection and repair apparatus based on a camera device, characterized in that the occlusion detection and repair apparatus comprises:
    a camera unit that captures a first image;
    a detection module that detects whether a target object within a preset distance of the camera unit exists within the framing range of the camera unit;
    a memory that stores the framing range, the preset distance, and a preset matching degree, and further stores a plurality of second images, the plurality of second images being images captured by the camera unit; and
    a processor, to which the camera unit, the detection module, and the memory are each electrically connected, the processor being configured to:
    when the detection module detects that a target object within the preset distance of the camera unit exists within the framing range of the camera unit, calculate an occlusion region of the first image;
    extract first feature points of the first image;
    acquire the plurality of second images, and extract from each of the second images second feature points corresponding to the first feature points;
    calculate a matching degree between the second feature points of each second image and the first feature points of the first image, and select one of the second images whose matching degree satisfies the preset matching degree as a repair source image; and
    calculate the position in the repair source image corresponding to the occlusion region, and repair the occlusion region of the first image with the image in the repair source image corresponding to the occlusion region of the first image.
  2. The occlusion detection and repair apparatus of claim 1, wherein the detection module comprises a sensing detection unit electrically connected to the processor; when the sensing detection unit detects that a target object is about to approach the camera unit or has covered some or all of the cameras of the camera unit, it generates a sensing signal containing occlusion position information, and the processor calculates the occlusion region of the first image from the occlusion position information in the sensing signal.
  3. The occlusion detection and repair apparatus of claim 2, wherein the camera unit comprises at least one camera, the sensing detection unit comprises at least one proximity sensor or distance sensor arranged within a preset distance range around the at least one camera, and the processor calculates the occlusion region of the first image from the position information of the proximity sensor or distance sensor that senses the target object.
  4. The occlusion detection and repair apparatus of claim 2, wherein the camera unit comprises at least one camera, the sensing detection unit comprises a touch module mounted on the at least one camera, the touch module generates a sensing signal containing touch position coordinates when it senses a target object in contact with it, and the processor calculates the occlusion region of the first image from the touch position coordinates in the sensing signal.
  5. The occlusion detection and repair apparatus of claim 4, wherein the touch module is also arranged within a preset distance range around the at least one camera; when the touch module senses a target object within the preset distance range around the camera, it generates a sensing signal containing touch position coordinates, and the processor determines from the touch position coordinates that the camera unit is about to be occluded and issues an occlusion alert.
  6. The occlusion detection and repair apparatus of claim 1, wherein the camera unit captures a third image, the third image being an image captured by the camera unit before the first image, the memory further stores a color threshold, a first difference threshold, and a connected count, and the processor is further configured to:
    cut the first image into a number of first small blocks of a predetermined size;
    cut the third image into a number of second small blocks of the predetermined size, each first small block corresponding to one of the second small blocks;
    calculate the average color value of each first small block and each second small block;
    determine whether the average color value of each first small block is smaller than the color threshold, and whether the difference between the average color value of each first small block and that of the corresponding second small block is smaller than the first difference threshold;
    when the average color value of a first small block is smaller than the color threshold and the corresponding difference is smaller than the first difference threshold, mark that first small block; and
    when the number of marked, mutually connected first small blocks is greater than or equal to the connected count, determine that the region occupied by those first small blocks is the occlusion region.
  7. The occlusion detection and repair apparatus of claim 1, wherein the camera unit is a binocular camera and captures a third image, the first image and the third image being images captured at the same moment of the same shooting scene by the two cameras of the binocular camera, the first image and the third image being divided, according to a minimum shooting distance, into a common portion and non-common portions; the memory further stores a color threshold, a second difference threshold, and a connected count, and the processor is further configured to:
    cut the common portion of the first image into a number of first small blocks of a predetermined size;
    cut the common portion of the third image into a number of second small blocks of the predetermined size, each first small block corresponding to one of the second small blocks;
    calculate the average color value of each first small block and each second small block;
    determine whether the average color value of each first small block is smaller than the color threshold, and whether the difference between the average color value of each first small block and that of the corresponding second small block is greater than the second difference threshold;
    when the average color value of a first small block is smaller than the color threshold and the corresponding difference is greater than the second difference threshold, mark that first small block; and
    when the number of marked, mutually connected first small blocks is greater than or equal to the connected count, determine that the region occupied by those first small blocks is the occlusion region.
  8. The occlusion detection and repair apparatus of claim 1, wherein the processor is further configured to: before calculating the position in the repair source image corresponding to the occlusion region, perform color adjustment on the repair source image so that the colors of the repair source image are consistent with those of the first image, the color adjustment including color-gamut adjustment and brightness adjustment.
  9. The occlusion detection and repair apparatus of claim 1, wherein the processor is further configured to select, as the repair source image, the second image whose matching degree satisfies the preset matching degree and is the highest.
  10. The occlusion detection and repair apparatus of claim 1, wherein, when the camera unit is a binocular camera, the processor is further configured to preferentially select the second image from images previously captured by the camera of the binocular camera that captured the first image, and, when none of the images previously captured by that camera satisfies the preset matching degree, to select the second image from images captured by the other camera of the binocular camera.
  11. An occlusion processing method based on a camera device, characterized in that the occlusion processing method comprises the steps of:
    capturing a first image;
    detecting whether a target object within a preset distance of a camera unit exists within the framing range of the camera unit;
    when it is detected that a target object within the preset distance of the camera unit exists within the framing range of the camera unit, calculating an occlusion region of the first image;
    extracting first feature points of the first image;
    acquiring a plurality of second images, and extracting from each of the second images second feature points corresponding to the first feature points;
    calculating a matching degree between the second feature points of each second image and the first feature points of the first image, and selecting one of the second images whose matching degree satisfies a preset matching degree as a repair source image; and
    calculating the position in the repair source image corresponding to the occlusion region, and repairing the occlusion region of the first image with the image in the repair source image corresponding to the occlusion region of the first image.
  12. The occlusion processing method of claim 11, further comprising the steps of:
    when it is detected that a target object is about to approach the camera unit or has covered some or all of the cameras of the camera unit, generating a sensing signal containing occlusion position information; and
    calculating the occlusion region of the first image from the occlusion position information in the sensing signal.
  13. The occlusion processing method of claim 11, wherein the camera unit comprises at least one camera, the occlusion processing method further comprising the steps of:
    arranging at least one proximity sensor or distance sensor within a preset distance range around the at least one camera;
    when the at least one proximity sensor or distance sensor detects that a target object is about to approach the at least one camera or has covered some or all of the at least one camera, generating a sensing signal containing occlusion position information; and
    calculating the occlusion region of the first image from the occlusion position information in the sensing signal.
  14. The occlusion processing method of claim 12, wherein the camera unit comprises at least one camera, the occlusion processing method further comprising the steps of:
    generating, by a touch module mounted on the at least one camera of the camera unit, a sensing signal containing touch position coordinates when the touch module senses a target object in contact with it; and
    calculating the occlusion region of the first image from the touch position coordinates in the sensing signal.
  15. The occlusion processing method of claim 14, further comprising the steps of:
    generating, by a touch module arranged within a preset distance range around the at least one camera of the camera unit, a sensing signal containing touch position coordinates when it senses a target object; and
    determining from the touch position coordinates that the camera unit is about to be occluded, and issuing an occlusion alert.
  16. The occlusion processing method of claim 11, further comprising the steps of:
    capturing a third image, the third image being an image captured by the camera unit before the first image;
    cutting the first image into a number of first small blocks of a predetermined size;
    cutting the third image into a number of second small blocks of the predetermined size, each first small block corresponding to one of the second small blocks;
    calculating the average color value of each first small block and each second small block;
    determining whether the average color value of each first small block is smaller than a color threshold, and whether the difference between the average color value of each first small block and that of the corresponding second small block is smaller than a first difference threshold;
    when the average color value of a first small block is smaller than the color threshold and the corresponding difference is smaller than the first difference threshold, marking that first small block; and
    when the number of marked, mutually connected first small blocks is greater than or equal to a connected count, determining that the region occupied by those first small blocks is the occlusion region.
  17. The occlusion processing method of claim 11, wherein the camera unit is a binocular camera, the occlusion processing method further comprising the steps of:
    capturing, by the camera unit, a third image, the first image and the third image being images captured at the same moment of the same shooting scene by the two cameras of the binocular camera, the first image and the third image being divided, according to a minimum shooting distance, into a common portion and non-common portions;
    cutting the common portion of the first image into a number of first small blocks of a predetermined size;
    cutting the common portion of the third image into a number of second small blocks of the predetermined size, each first small block corresponding to one of the second small blocks;
    calculating the average color value of each first small block and each second small block;
    determining whether the average color value of each first small block is smaller than a color threshold, and whether the difference between the average color value of each first small block and that of the corresponding second small block is greater than a second difference threshold;
    when the average color value of a first small block is smaller than the color threshold and the corresponding difference is greater than the second difference threshold, marking that first small block; and
    when the number of marked, mutually connected first small blocks is greater than or equal to a connected count, determining that the region occupied by those first small blocks is the occlusion region.
  18. The occlusion processing method of claim 11, wherein, before calculating the position in the repair source image corresponding to the occlusion region, the occlusion processing method further comprises the step of:
    performing color adjustment on the repair source image so that the colors of the repair source image are consistent with those of the first image, the color adjustment including color-gamut adjustment and brightness adjustment.
  19. The occlusion processing method of claim 11, wherein the camera unit is a binocular camera, the occlusion processing method further comprising the steps of:
    preferentially selecting the second image from images previously captured by the camera of the binocular camera that captured the first image; and
    when none of the images previously captured by the camera of the binocular camera that captured the first image satisfies the preset matching degree, selecting the second image from images captured by the other camera of the binocular camera.
  20. A computer-readable storage medium storing a number of program instructions which, after being called and executed by a processor, perform the steps of:
    capturing a first image;
    detecting whether a target object within a preset distance of a camera unit exists within the framing range of the camera unit;
    when it is detected that a target object within the preset distance of the camera unit exists within the framing range of the camera unit, calculating an occlusion region of the first image;
    extracting first feature points of the first image;
    acquiring a plurality of second images, and extracting from each of the second images second feature points corresponding to the first feature points;
    calculating a matching degree between the second feature points of each second image and the first feature points of the first image, and selecting one of the second images whose matching degree satisfies a preset matching degree as a repair source image; and
    calculating the position in the repair source image corresponding to the occlusion region, and repairing the occlusion region of the first image with the image in the repair source image corresponding to the occlusion region of the first image.
PCT/CN2017/107875 2017-10-26 2017-10-26 Occlusion detection and repair apparatus based on camera device and occlusion detection and repair method thereof Ceased WO2019080061A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201780092103.7A CN110770786A (zh) 2017-10-26 2017-10-26 Occlusion detection and repair apparatus based on camera device and occlusion detection and repair method thereof
PCT/CN2017/107875 WO2019080061A1 (zh) 2017-10-26 2017-10-26 Occlusion detection and repair apparatus based on camera device and occlusion detection and repair method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/107875 2017-10-26 2017-10-26 Occlusion detection and repair apparatus based on camera device and occlusion detection and repair method thereof

Publications (1)

Publication Number Publication Date
WO2019080061A1 true WO2019080061A1 (zh) 2019-05-02

Family

ID=66246747

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/107875 Ceased WO2019080061A1 (zh) Occlusion detection and repair apparatus based on camera device and occlusion detection and repair method thereof

Country Status (2)

Country Link
CN (1) CN110770786A (zh)
WO (1) WO2019080061A1 (zh)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112989878A (zh) * 2019-12-13 2021-06-18 Oppo广东移动通信有限公司 瞳孔检测方法及相关产品
CN113902677A (zh) * 2021-09-08 2022-01-07 九天创新(广东)智能科技有限公司 一种摄像头遮挡检测方法、装置及智能机器人
CN114299451A (zh) * 2021-12-30 2022-04-08 山东土地集团数字科技有限公司 一种深度学习监控视频遮挡的识别系统及方法
CN115731206A (zh) * 2022-11-29 2023-03-03 杭州励飞软件技术有限公司 监控镜头监测方法、装置、设备及存储介质
CN115880239A (zh) * 2022-12-01 2023-03-31 深圳一清创新科技有限公司 相机视野干扰的检测方法、装置、智能机器人及存储介质
CN117315228A (zh) * 2023-09-22 2023-12-29 广东电网有限责任公司 拍摄装置的脏污检测方法、装置、设备和存储介质

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113298808B (zh) * 2021-06-22 2022-03-18 哈尔滨工程大学 一种面向倾斜遥感图像中建筑物遮挡信息的修复方法
CN113592781B (zh) * 2021-07-06 2024-09-27 北京爱笔科技有限公司 背景图像的生成方法、装置、计算机设备和存储介质
CN114419148B (zh) * 2021-12-08 2024-12-17 科大讯飞股份有限公司 触碰检测方法、装置、设备和计算机可读存储介质
CN116631044A (zh) * 2022-02-11 2023-08-22 宏碁股份有限公司 特征点位置检测方法及电子装置
CN115870237A (zh) * 2022-12-19 2023-03-31 北京航空航天大学杭州创新研究院 一种流水线产品质量检测并剔除缺陷产品的系统及方法

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003044996A (ja) * 2001-07-31 2003-02-14 Matsushita Electric Ind Co Ltd 障害物検出装置
CN101266685A (zh) * 2007-03-14 2008-09-17 中国科学院自动化研究所 一种基于多幅照片去除无关图像的方法
CN101482968A (zh) * 2008-01-07 2009-07-15 日电(中国)有限公司 图像处理方法和设备
JP2010237798A (ja) * 2009-03-30 2010-10-21 Equos Research Co Ltd 画像処理装置および画像処理プログラム
CN103679749A (zh) * 2013-11-22 2014-03-26 北京奇虎科技有限公司 一种基于运动目标跟踪的图像处理方法及装置
CN104657993A (zh) * 2015-02-12 2015-05-27 北京格灵深瞳信息技术有限公司 一种镜头遮挡检测方法及装置
CN105827952A (zh) * 2016-02-01 2016-08-03 维沃移动通信有限公司 一种去除指定对象的拍照方法及移动终端

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2884460B1 (en) * 2013-12-13 2020-01-01 Panasonic Intellectual Property Management Co., Ltd. Image capturing apparatus, monitoring system, image processing apparatus, image capturing method, and non-transitory computer readable recording medium
CN103731658B (zh) * 2013-12-25 2015-09-30 深圳市墨克瑞光电子研究院 双目摄像机复位方法和双目摄像机复位装置
CN106331460A (zh) * 2015-06-19 2017-01-11 宇龙计算机通信科技(深圳)有限公司 一种图像处理方法、装置及终端

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003044996A (ja) * 2001-07-31 2003-02-14 Matsushita Electric Ind Co Ltd 障害物検出装置
CN101266685A (zh) * 2007-03-14 2008-09-17 中国科学院自动化研究所 一种基于多幅照片去除无关图像的方法
CN101482968A (zh) * 2008-01-07 2009-07-15 日电(中国)有限公司 图像处理方法和设备
JP2010237798A (ja) * 2009-03-30 2010-10-21 Equos Research Co Ltd 画像処理装置および画像処理プログラム
CN103679749A (zh) * 2013-11-22 2014-03-26 北京奇虎科技有限公司 一种基于运动目标跟踪的图像处理方法及装置
CN104657993A (zh) * 2015-02-12 2015-05-27 北京格灵深瞳信息技术有限公司 一种镜头遮挡检测方法及装置
CN105827952A (zh) * 2016-02-01 2016-08-03 维沃移动通信有限公司 一种去除指定对象的拍照方法及移动终端

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112989878A (zh) * 2019-12-13 2021-06-18 Oppo广东移动通信有限公司 瞳孔检测方法及相关产品
CN112989878B (zh) * 2019-12-13 2024-11-29 Oppo广东移动通信有限公司 瞳孔检测方法及相关产品
CN113902677A (zh) * 2021-09-08 2022-01-07 九天创新(广东)智能科技有限公司 一种摄像头遮挡检测方法、装置及智能机器人
CN114299451A (zh) * 2021-12-30 2022-04-08 山东土地集团数字科技有限公司 一种深度学习监控视频遮挡的识别系统及方法
CN115731206A (zh) * 2022-11-29 2023-03-03 杭州励飞软件技术有限公司 监控镜头监测方法、装置、设备及存储介质
CN115880239A (zh) * 2022-12-01 2023-03-31 深圳一清创新科技有限公司 相机视野干扰的检测方法、装置、智能机器人及存储介质
CN117315228A (zh) * 2023-09-22 2023-12-29 广东电网有限责任公司 拍摄装置的脏污检测方法、装置、设备和存储介质

Also Published As

Publication number Publication date
CN110770786A (zh) 2020-02-07

Similar Documents

Publication Publication Date Title
WO2019080061A1 (zh) Occlusion detection and repair apparatus based on camera device and occlusion detection and repair method thereof
US9325899B1 (en) Image capturing device and digital zooming method thereof
CN111028205B (zh) 一种基于双目测距的眼睛瞳孔定位方法及装置
US10389948B2 (en) Depth-based zoom function using multiple cameras
CN106683071B (zh) 图像的拼接方法和装置
US10915998B2 (en) Image processing method and device
TWI424361B (zh) 物件追蹤方法
US9100563B2 (en) Apparatus, method and computer-readable medium imaging through at least one aperture of each pixel of display panel
WO2018201809A1 (zh) 基于双摄像头的图像处理装置及方法
CN107113415A (zh) 用于多技术深度图获取和融合的方法和设备
WO2021136386A1 (zh) 数据处理方法、终端和服务器
TW201316760A (zh) 攝像裝置、及其之控制方法
CN112261292B (zh) 图像获取方法、终端、芯片及存储介质
CN104361569A (zh) 图像拼接的方法及装置
CN104363377A (zh) 对焦框的显示方法、装置及终端
CN107113421B (zh) 一种光学系统成像质量的检测方法和装置
WO2021129806A1 (zh) 图像处理方法、装置、电子设备及可读存储介质
CN104299188A (zh) 图像修正方法及系统
CN107995476B (zh) 一种图像处理方法及装置
CN108198189B (zh) 图片清晰度的获取方法、装置、存储介质及电子设备
TWI749370B (zh) 臉部辨識方法及其相關電腦系統
CN102467742A (zh) 对象追踪方法
CN107622192A (zh) 视频画面处理方法、装置和移动终端
TWI892360B (zh) 投影圖像修正方法、裝置、投影設備、採集設備及可讀存儲介質
CN107633498A (zh) 图像暗态增强方法、装置及电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17929912

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17929912

Country of ref document: EP

Kind code of ref document: A1