
WO2018201809A1 - Image processing device and method based on dual cameras - Google Patents

Image processing device and method based on dual cameras

Info

Publication number
WO2018201809A1
WO2018201809A1 PCT/CN2018/079230 CN2018079230W WO2018201809A1 WO 2018201809 A1 WO2018201809 A1 WO 2018201809A1 CN 2018079230 W CN2018079230 W CN 2018079230W WO 2018201809 A1 WO2018201809 A1 WO 2018201809A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
camera
information
image processing
overall
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2018/079230
Other languages
English (en)
Chinese (zh)
Inventor
韩银和
许浩博
王颖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Computing Technology of CAS
Original Assignee
Institute of Computing Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Computing Technology of CAS filed Critical Institute of Computing Technology of CAS
Publication of WO2018201809A1 publication Critical patent/WO2018201809A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums

Definitions

  • the present invention relates to the field of digital image processing technologies, and in particular, to an image processing apparatus and method based on a dual camera.
  • the traditional camera device mostly uses a single camera to perform photographing operations. Since a single camera is limited by focal length, aperture size, shutter speed, and metering mode, its shooting mode is inflexible and its imaging effect is limited, which cannot meet the application requirements of high-definition images.
  • to address this, photographing devices designed with dual cameras, a main camera and a sub camera, have been used.
  • such a camera system utilizes the differences between the two cameras in focal length and aperture, and selects the more appropriate camera for shooting to improve imaging quality.
  • however, the above-mentioned devices are still insufficient in image fusion, and it is difficult to ensure global sharpness for images with complicated content and a large depth of field.
  • the present invention provides a dual camera-based image processing apparatus including a first camera, a second camera, and a control module, wherein the first camera is used to capture an overall image; the control module is configured to send a photographing instruction to the second camera; and the second camera is configured to capture a partial image according to the received photographing instruction, wherein the photographing instruction includes photographing information of the first camera and image information of the overall image.
  • the image information of the overall image includes location information of a specified target in the overall image, wherein the partial image is associated with the specified target.
  • the image information of the overall image is information of a region graphic containing a specified target in the overall image, wherein the partial image is associated with the specified target.
  • the image processing apparatus further includes a processing module for identifying information of the area graphic.
  • the information of the area graphic is the central position information of the area graphic detected by the processing module using a deep neural network-based image recognition algorithm.
  • the processing module is further configured to perform image fusion on the entire image and the partial image.
  • there may be one or more partial images.
  • a dual camera based image processing method comprising the steps of:
  • a partial image is captured by the second camera based on the photographing information and the image information.
  • detecting the overall image comprises:
  • a target detection algorithm is employed to detect the center position coordinates of the area graphic containing the specified target in the overall image.
  • the image processing method further includes: fusing the entire image and the partial image into a complete image.
  • the dual camera-based image processing apparatus and method detect a specified target in an original image captured by camera 1 using a target detection algorithm, perform secondary shooting of the specified target with camera 2 according to the detection result and the shooting position of camera 1, and process the original image and the secondarily captured image with an image fusion algorithm. This compensates for problems such as poor imaging caused by partial underexposure or focus differences in the original image, improves the imaging performance of the specified target, and enhances the detail rendering of the entire image.
  • FIG. 1 is a block diagram showing an embodiment of an image processing apparatus according to the present invention.
  • FIG. 2 is a flow chart of a method of performing image processing using the image processing apparatus shown in FIG. 1.
  • lenses are generally divided into wide-angle lenses and telephoto lenses.
  • a wide-angle lens has a wide viewing angle, covering a wider range than the human eye can see, so it is especially suitable for wide-range overall shooting.
  • a telephoto lens works on a principle similar to a telescope and is suitable for shooting distant, inaccessible objects; however, its field of view is far smaller than the range a person can see.
  • accordingly, a wide-angle lens can be used to capture the whole scene,
  • a telephoto lens is used to capture a local target,
  • and an image processing method is used to merge the images captured by the two, to obtain an image that is wide overall and clear locally.
  • an image processing apparatus based on a dual camera
  • camera 1 is a short-focal-length wide-angle lens serving as the first camera for capturing an overall image with a wide viewing angle
  • the camera 2 is a telephoto lens as a second camera for capturing a partial clear image of a specified target
  • the control module is configured to control and schedule each of the other modules according to user instructions
  • the storage module is used to store image data collected by the camera 1 and the camera 2 and related software programs
  • the processing module is used for data operations and image processing.
  • FIG. 2 is a flowchart of a method for performing image processing by using the image processing apparatus shown in FIG. 1.
  • an image processing method based on a dual camera is provided, and the method specifically includes the following steps:
  • the control module controls the motor to adjust the shooting position of the camera 1 according to the instruction, sends the photographing instruction 1 to the camera 1, and controls the camera 1 to take a picture.
  • the photographing instruction 1 includes shooting parameters such as a camera position, a focus point, and an aperture size, and the shooting parameters may be user-defined settings or default automatic settings.
  • camera 1 receives the photographing instruction 1 from the control module and enters shooting mode. According to the received shooting parameters, camera 1 adjusts focus and aperture, performs the photographing operation to obtain the overall image 1, and can send a photographing-complete notification to the control module.
  • an image processing algorithm can be used to set index parameters. For example, the amount of incoming light can be evaluated by parameters such as image brightness and exposure; sharpness can be evaluated by parameters such as an edge width function, an entropy function, and a gradient function, which measure image edge width, edge peaks, and the grayscale change rate.
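As an illustration of such index parameters, the sketch below computes image brightness, a gradient-based sharpness measure, and a histogram entropy with NumPy. The function names are illustrative, not taken from the patent, and the measures are generic stand-ins for the edge width, entropy, and gradient functions the text mentions.

```python
import numpy as np

def brightness(img):
    """Mean pixel intensity as a rough proxy for the amount of incoming light."""
    return float(img.mean())

def gradient_sharpness(img):
    """Mean gradient magnitude (grayscale change rate) as a sharpness index."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

def histogram_entropy(img):
    """Shannon entropy of the grayscale histogram; higher often means more detail."""
    hist, _ = np.histogram(img, bins=256, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```

A sharply focused image yields a higher gradient score than a flat or defocused one, which is the sense in which such parameters can rank two shots of the same scene.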
  • the control module can store the overall image 1 in binary form in the storage module, and record the shooting information of camera 1 when overall image 1 was captured (for example, lens position, focus position, aperture size, exposure time, image brightness, exposure, etc.).
  • the user can select the designated target D in the obtained overall image 1.
  • the user can select a fixed area and set the image in that area as the specified target D, or specify targets by their characteristics and use an image recognition algorithm to set all targets sharing the same type of features as the specified target D, such as faces.
  • the processing module can detect the position information of the specified target D (for example, the center position coordinate of the target) from the overall image 1 captured by the camera 1, and transmit the position information to the control module.
  • the processing module can perform the detection using a target detection algorithm, for example an image algorithm based on pattern recognition or an image recognition algorithm based on a deep neural network. An image recognition algorithm based on a deep neural network is described below as an example.
  • in step S203, the n-1 regions obtained in step S202 are adjusted to region graphics A of n*n pixels. If a region is an irregular shape, it may be padded to a size of n*n, where the filled pixel value can be the grayscale average of the region;
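A minimal sketch of the padding described for step S203, assuming the region is supplied as a rectangular grayscale array and is placed in the top-left corner of the n*n output. The corner placement and the function name are assumptions for illustration; the patent specifies only that the fill value is the grayscale average of the region.

```python
import numpy as np

def pad_region_to_square(region, n):
    """Pad a region graphic to n x n pixels, filling with its grayscale average."""
    h, w = region.shape
    if h > n or w > n:
        raise ValueError("region larger than target size")
    fill = int(round(region.mean()))      # grayscale average of the region
    out = np.full((n, n), fill, dtype=region.dtype)
    out[:h, :w] = region                  # place the region in the corner
    return out
```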
  • the deep neural network is used to identify whether the region graphic A obtained in step S203 contains the features of the specified target D. For example, it can be set that if the coincidence degree IOU of region graphic A and the specified target D is greater than 60%, region graphic A is determined to contain the specified target, and the center-point coordinates of region graphic A are calculated.
  • the above coincidence degree IOU can be defined as IOU = S_coincide / (S_A + S_D - S_coincide), where S_coincide represents the area of the coincident portion, S_A represents the area of region graphic A, and S_D represents the area of the specified target D.
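Under the usual intersection-over-union reading of the coincidence degree, and approximating region graphic A and target D by axis-aligned boxes (an assumption for illustration; the function and parameter names are not from the patent), the computation can be sketched as:

```python
def iou_from_boxes(box_a, box_d):
    """Coincidence degree IOU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    dx1, dy1, dx2, dy2 = box_d
    # area of the coincident portion
    ix = max(0, min(ax2, dx2) - max(ax1, dx1))
    iy = max(0, min(ay2, dy2) - max(ay1, dy1))
    s_coincide = ix * iy
    s_a = (ax2 - ax1) * (ay2 - ay1)   # area of region graphic A
    s_d = (dx2 - dx1) * (dy2 - dy1)   # area of specified target D
    union = s_a + s_d - s_coincide
    return s_coincide / union if union else 0.0

def contains_target(box_a, box_d, threshold=0.6):
    """The example rule above: region A contains target D if IOU exceeds 60%."""
    return iou_from_boxes(box_a, box_d) > threshold
```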
  • the control module generates a photographing instruction 2 according to the photographing information recorded when camera 1 captured the overall image 1 in step S10 (for example, lens position, focus position, aperture size, exposure time, image brightness, exposure, etc.) and the position information of the specified target D obtained in step S20 (for example, the center position coordinates), and transmits it to camera 2; the photographing instruction 2 may include photographing parameters such as camera position, focus point, and aperture size.
  • some of these shooting parameters may be user-defined, while others may be set automatically by the control module based on the shooting information and the position information.
  • camera 2 receives the photographing instruction 2 from the control module and enters shooting mode. According to the received shooting parameters, camera 2 adjusts focus and aperture, performs the photographing operation to obtain the partial image 2 containing the specified target D′, and can send a shooting-complete notification to the control module. The specified target D′ in partial image 2 and the specified target D in the overall image 1 obtained in step S10 are the same object, but its appearance in overall image 1 and in partial image 2 differs.
  • an image processing algorithm can be used to set index parameters. For example, the amount of incoming light can be evaluated by parameters such as image brightness and exposure; sharpness can be evaluated by parameters such as an edge width function, an entropy function, and a gradient function, which measure image edge width, edge peaks, and the grayscale change rate.
  • after camera 2 finishes photographing, the control module stores the partial image 2 in binary form in the storage module, and can transmit an image processing instruction to the processing module.
  • after receiving the image processing instruction, the processing module processes the overall image 1 captured by camera 1 and the partial image 2 captured by camera 2 using image processing methods.
  • the image processing method here can provide various options according to user requirements; pixel-level image fusion is taken as an example below.
  • pixel-level image fusion here refers to replacing the specified target D in overall image 1 with the specified target D′ in partial image 2; that is, the specified target D is extracted from overall image 1 and the specified target D′ in partial image 2 is filled into overall image 1, thereby obtaining a complete image that contains a clear locally specified target and has a wide viewing angle.
  • since the image parameters of the specified target D in overall image 1 and of the specified target D′ in partial image 2 differ, image fusion requires an image fusion algorithm to adjust some pixel points of the specified target D′ so that it matches overall image 1, for example a simple weighted fusion algorithm, a Laplacian pyramid fusion algorithm, a contrast fusion algorithm, a gradient fusion algorithm, or a wavelet fusion algorithm; in addition, when performing image filling, the image mosaic edges should be optimized to improve detail rendering.
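A minimal sketch of the pixel-level replacement with simple weighted fusion described above, assuming grayscale images and a rectangular target region. `fuse_target`, its parameters, and the per-pixel `alpha` weight map are illustrative names and assumptions, not taken from the patent; a soft alpha near the region borders is one way to reduce visible mosaic edges.

```python
import numpy as np

def fuse_target(overall, partial, top_left, alpha=None):
    """Paste the clear target D' from the partial image into the overall image.

    alpha is a per-pixel weight in [0, 1] for the partial image; a uniform
    weight of 1 means plain replacement, while intermediate values give a
    simple weighted fusion of the two images in the target region.
    """
    fused = overall.astype(float).copy()
    y, x = top_left
    h, w = partial.shape[:2]
    if alpha is None:
        alpha = np.ones((h, w))           # plain replacement by default
    region = fused[y:y+h, x:x+w]
    fused[y:y+h, x:x+w] = alpha * partial + (1 - alpha) * region
    return fused.astype(overall.dtype)
```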
  • similar image processing methods are numerous and will not be repeated here.
  • camera 1 and camera 2 may use photosensitive sensors of the charge coupled device (CCD) type, or may use metal oxide semiconductor material as the photosensitive sensor.
  • the camera 1 and the camera 2 are located on the same plane, and the focusing operation is performed in a motor-driven manner.
  • the control module may be a central processing unit, a microcontroller unit, a programmable gate array, or the like.
  • the processing module may be a digital signal processing unit, a dedicated graphics processing circuit, or a deep learning-based neural network processor.
  • the storage module may be a storage medium such as a memory, an external hard disk, or a flash memory card.
  • the user can specify a plurality of targets in the overall image 1 taken by the camera 1 as needed.
  • the processing module sequentially detects the specified targets D1, D2, D3, ... from the overall image 1 containing the plurality of specified targets captured by camera 1 in step S10, using the target detection algorithm.
  • camera 2 receives the plurality of shooting instructions transmitted from the control module and photographs the designated targets D1, D2, D3, ... respectively, obtaining partial image 21 containing specified target D1′, partial image 22 containing specified target D2′, partial image 23 containing specified target D3′, and so on.
  • using the image fusion algorithm, the processing module replaces the designated targets D1, D2, D3, ... in overall image 1 with the specified target D1′ in partial image 21, the specified target D2′ in partial image 22, the specified target D3′ in partial image 23, ..., thereby obtaining a complete image containing a plurality of clear locally specified targets and having a wide viewing angle.
  • in step S40, sending the image processing instruction to the processing module is a user-selectable operation; that is, after camera 1 and camera 2 have completed their respective shots and overall image 1 and one or more partial images have been captured, the user can perform the image processing of step S40 as needed, output the images directly, or select several of the partial images for step S40.
  • the user may also select one or more specified targets D that need to be partially photographed while framing, before camera 1 captures overall image 1 (for example, by manually selecting one or more designated targets D in the framing frame); in this way, steps S10-S30 or steps S10-S40 are performed automatically by the apparatus provided by the present invention, and the user hardly perceives any difference in shooting time or any image processing delay.
  • although in this embodiment camera 1 is a short-focus lens and camera 2 is a telephoto lens, the focal length here is a relative evaluation: camera 2 is telephoto compared with camera 1, and camera 1 is a short-focus lens compared with camera 2.
  • although the processing of the overall image and the partial image has been described by taking pixel-level image fusion as an example, those skilled in the art should understand that other image processing methods, such as feature-level image fusion or decision-level image fusion, may be adopted in other embodiments to implement the dual camera-based image processing apparatus and method of the present invention.
  • the dual camera-based image processing apparatus and method combine the advantages of the wide-angle camera and the telephoto camera, and use image processing algorithms to analyze, extract, and fuse the captured image information, so that multiple images complement each other, thereby improving the overall imaging performance.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to a dual camera-based image processing device, comprising first and second cameras and a control module. The first camera is used to capture an overall image. The control module is used to send a photographing instruction to the second camera, and the second camera captures a partial image according to the received photographing instruction. The device is characterized in that the photographing instruction includes photographing information of the first camera and image information of the overall image. With the present invention, the imaging effect of a local object can be improved and the detail rendering of the entire image can be enhanced.
PCT/CN2018/079230 2017-05-05 2018-03-16 Dispositif et procédé de traitement d'image basé sur des caméras doubles Ceased WO2018201809A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710312832.0 2017-05-05
CN201710312832.0A CN107087107B (zh) 2017-05-05 2017-05-05 基于双摄像头的图像处理装置及方法

Publications (1)

Publication Number Publication Date
WO2018201809A1 true WO2018201809A1 (fr) 2018-11-08

Family

ID=59612636

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/079230 Ceased WO2018201809A1 (fr) 2017-05-05 2018-03-16 Dispositif et procédé de traitement d'image basé sur des caméras doubles

Country Status (2)

Country Link
CN (1) CN107087107B (fr)
WO (1) WO2018201809A1 (fr)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111311623A (zh) * 2020-02-26 2020-06-19 歌尔股份有限公司 图像分界方法、装置、设备及存储介质
CN111563552A (zh) * 2020-05-06 2020-08-21 浙江大华技术股份有限公司 图像融合方法以及相关设备、装置
CN113506214A (zh) * 2021-05-24 2021-10-15 南京莱斯信息技术股份有限公司 一种多路视频图像拼接方法
CN114689582A (zh) * 2022-03-15 2022-07-01 郑州凯雪冷链股份有限公司 立式风幕柜的商品纯净度检测方法
CN115393330A (zh) * 2022-08-30 2022-11-25 深圳市震有软件科技有限公司 摄像头图像模糊检测方法、装置、计算机设备及存储介质
CN116087201A (zh) * 2022-12-27 2023-05-09 广东尚菱视界科技有限公司 一种工业视觉检测系统以及检测方法
CN118898822A (zh) * 2024-07-17 2024-11-05 金龙联合汽车工业(苏州)有限公司 基于双目摄像头的红绿灯识别方法、系统及存储介质

Families Citing this family (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107087107B (zh) * 2017-05-05 2019-11-29 中国科学院计算技术研究所 基于双摄像头的图像处理装置及方法
CN107749944A (zh) * 2017-09-22 2018-03-02 华勤通讯技术有限公司 一种拍摄方法及装置
JP7043219B2 (ja) * 2017-10-26 2022-03-29 キヤノン株式会社 撮像装置、撮像装置の制御方法、及びプログラム
CN108377341A (zh) * 2018-05-14 2018-08-07 Oppo广东移动通信有限公司 拍照方法、装置、终端及存储介质
CN110536048B (zh) * 2018-05-25 2024-11-12 上海翌视信息技术有限公司 一种具有偏置构成的相机
CN108898171B (zh) * 2018-06-20 2022-07-22 深圳市易成自动驾驶技术有限公司 图像识别处理方法、系统及计算机可读存储介质
CN108960109B (zh) * 2018-06-26 2020-01-21 哈尔滨拓博科技有限公司 一种基于两个单目摄像头的空间手势定位装置及定位方法
CN108874142B (zh) * 2018-06-26 2019-08-06 哈尔滨拓博科技有限公司 一种基于手势的无线智能控制装置及其控制方法
CN111630840B (zh) * 2018-08-23 2021-12-03 深圳配天智能技术研究院有限公司 一种超分辨图像的获取方法及获取装置、图像传感器
CN109348101A (zh) * 2018-10-17 2019-02-15 浙江舜宇光学有限公司 基于双摄镜头组的拍摄装置及方法
CN109547707A (zh) * 2018-12-05 2019-03-29 成都泰盟软件有限公司 带双摄像头的信号采集控制系统
CN109379522A (zh) * 2018-12-06 2019-02-22 Oppo广东移动通信有限公司 成像方法、成像装置、电子装置及介质
WO2020124408A1 (fr) * 2018-12-19 2020-06-25 陈加志 Dispositif d'observation astronomique à lentilles multiples et son procédé d'imagerie
CN109580645A (zh) * 2018-12-20 2019-04-05 深圳灵图慧视科技有限公司 疵点识别设备
CN112954218A (zh) 2019-03-18 2021-06-11 荣耀终端有限公司 一种多路录像方法及设备
CN110072058B (zh) * 2019-05-28 2021-05-25 珠海格力电器股份有限公司 图像拍摄装置、方法及终端
CN110217271A (zh) * 2019-05-30 2019-09-10 成都希格玛光电科技有限公司 基于图像视觉的快速轨道侵限识别监测系统及方法
CN110430359B (zh) * 2019-07-31 2021-07-09 北京迈格威科技有限公司 拍摄辅助方法、装置、计算机设备和存储介质
CN110430360A (zh) * 2019-08-01 2019-11-08 珠海格力电器股份有限公司 一种全景图像拍摄方法及装置、存储介质
CN112584034B (zh) 2019-09-30 2023-04-07 虹软科技股份有限公司 图像处理方法、图像处理装置及应用其的电子设备
CN110855883B (zh) * 2019-11-05 2021-07-20 浙江大华技术股份有限公司 一种图像处理系统、方法、装置设备及存储介质
CN110913131B (zh) * 2019-11-21 2021-05-11 维沃移动通信有限公司 一种月亮拍摄方法及电子设备
CN111050083B (zh) * 2019-12-31 2022-02-18 联想(北京)有限公司 一种电子设备及处理方法
CN113570617B (zh) * 2021-06-24 2022-08-23 荣耀终端有限公司 图像处理方法、装置和电子设备
CN113592751B (zh) * 2021-06-24 2024-05-07 荣耀终端有限公司 图像处理方法、装置和电子设备
CN113923367B (zh) * 2021-11-24 2024-04-12 维沃移动通信有限公司 拍摄方法、拍摄装置
CN117245229B (zh) * 2023-11-20 2024-03-08 广东码清激光智能装备有限公司 全自动上下料打标机以及全自动打标方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110058053A1 (en) * 2009-09-08 2011-03-10 Pantech Co., Ltd. Mobile terminal with multiple cameras and method for image processing using the same
JP2011211552A (ja) * 2010-03-30 2011-10-20 Fujifilm Corp 撮像装置、方法およびプログラム
WO2015081556A1 (fr) * 2013-12-06 2015-06-11 华为终端有限公司 Procédé de photographie destiné à un dispositif à double appareil photo et dispositif à double appareil photo
CN104935866A (zh) * 2014-03-19 2015-09-23 华为技术有限公司 实现视频会议的方法、合成设备和系统
CN107087107A (zh) * 2017-05-05 2017-08-22 中国科学院计算技术研究所 基于双摄像头的图像处理装置及方法

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100419783C (zh) * 2006-10-09 2008-09-17 武汉大学 一种遥感图像空间形状特征提取与分类方法
CN102113014A (zh) * 2008-07-31 2011-06-29 惠普开发有限公司 图像的感知分割
CN103247042B (zh) * 2013-05-24 2015-11-11 厦门大学 一种基于相似块的图像融合方法
CN103780840B (zh) * 2014-01-21 2016-06-08 上海果壳电子有限公司 一种高品质成像的双摄像成像装置及其方法
CN104052931A (zh) * 2014-06-27 2014-09-17 宇龙计算机通信科技(深圳)有限公司 一种图像拍摄装置、方法及终端
CN104333703A (zh) * 2014-11-28 2015-02-04 广东欧珀移动通信有限公司 使用双摄像头拍照的方法和终端
US11120478B2 (en) * 2015-01-12 2021-09-14 Ebay Inc. Joint-based item recognition
US9860445B2 (en) * 2015-06-15 2018-01-02 Bendix Commercial Vehicle Systems Llc Dual node composite image system architecture
CN105701762B (zh) * 2015-12-30 2020-03-24 联想(北京)有限公司 一种图片处理方法和电子设备
CN106131449B (zh) * 2016-07-27 2019-11-29 维沃移动通信有限公司 一种拍照方法及移动终端
CN106454121B (zh) * 2016-11-11 2020-02-07 努比亚技术有限公司 双摄像头拍照方法及装置


Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111311623A (zh) * 2020-02-26 2020-06-19 歌尔股份有限公司 图像分界方法、装置、设备及存储介质
CN111563552A (zh) * 2020-05-06 2020-08-21 浙江大华技术股份有限公司 图像融合方法以及相关设备、装置
CN111563552B (zh) * 2020-05-06 2023-09-05 浙江大华技术股份有限公司 图像融合方法以及相关设备、装置
CN113506214A (zh) * 2021-05-24 2021-10-15 南京莱斯信息技术股份有限公司 一种多路视频图像拼接方法
CN113506214B (zh) * 2021-05-24 2023-07-21 南京莱斯信息技术股份有限公司 一种多路视频图像拼接方法
CN114689582A (zh) * 2022-03-15 2022-07-01 郑州凯雪冷链股份有限公司 立式风幕柜的商品纯净度检测方法
CN115393330A (zh) * 2022-08-30 2022-11-25 深圳市震有软件科技有限公司 摄像头图像模糊检测方法、装置、计算机设备及存储介质
CN116087201A (zh) * 2022-12-27 2023-05-09 广东尚菱视界科技有限公司 一种工业视觉检测系统以及检测方法
CN116087201B (zh) * 2022-12-27 2023-09-05 广东尚菱视界科技有限公司 一种工业视觉检测系统以及检测方法
CN118898822A (zh) * 2024-07-17 2024-11-05 金龙联合汽车工业(苏州)有限公司 基于双目摄像头的红绿灯识别方法、系统及存储介质

Also Published As

Publication number Publication date
CN107087107A (zh) 2017-08-22
CN107087107B (zh) 2019-11-29

Similar Documents

Publication Publication Date Title
WO2018201809A1 (fr) Dispositif et procédé de traitement d'image basé sur des caméras doubles
US10997696B2 (en) Image processing method, apparatus and device
CN107977940B (zh) 背景虚化处理方法、装置及设备
JP6935587B2 (ja) 画像処理のための方法および装置
TWI899424B (zh) 用於針對具有多個深度處的目標的場景的影像融合的方法、設備,和非暫時性電腦可讀取媒體
US10269130B2 (en) Methods and apparatus for control of light field capture object distance adjustment range via adjusting bending degree of sensor imaging zone
US8885091B2 (en) Imaging device and distance information detecting method
CN108076278B (zh) 一种自动对焦方法、装置及电子设备
WO2019105214A1 (fr) Procédé et appareil de floutage d'image, terminal mobile et support de stockage
CN105657238B (zh) 跟踪对焦方法及装置
CN108024054A (zh) 图像处理方法、装置及设备
WO2021145913A1 (fr) Estimation de la profondeur basée sur la taille de l'iris
CN108337447A (zh) 高动态范围图像曝光补偿值获取方法、装置、设备及介质
CN108053363A (zh) 背景虚化处理方法、装置及设备
CN112261292B (zh) 图像获取方法、终端、芯片及存储介质
CN108024058B (zh) 图像虚化处理方法、装置、移动终端和存储介质
CN108154514A (zh) 图像处理方法、装置及设备
CN110650288B (zh) 对焦控制方法和装置、电子设备、计算机可读存储介质
CN108156369A (zh) 图像处理方法和装置
CN107133982A (zh) 深度图构建方法、装置及拍摄设备、终端设备
JP2020053774A (ja) 撮像装置および画像記録方法
CN108052883A (zh) 用户拍照方法、装置及设备
WO2017208991A1 (fr) Dispositif de capture et de traitement d'image, instrument électronique, procédé de capture et de traitement d'image, et programme de commande de dispositif de capture et de traitement d'image
WO2015141185A1 (fr) Dispositif de commande d'imagerie, procédé de commande d'imagerie, et support d'informations
CN114222059B (zh) 拍照、拍照处理方法、系统、设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18794246

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18794246

Country of ref document: EP

Kind code of ref document: A1