WO2018228467A1 - Image exposure method and device, photographic device, and storage medium - Google Patents
Image exposure method and device, photographic device, and storage medium
- Publication number
- WO2018228467A1 WO2018228467A1 PCT/CN2018/091228 CN2018091228W WO2018228467A1 WO 2018228467 A1 WO2018228467 A1 WO 2018228467A1 CN 2018091228 W CN2018091228 W CN 2018091228W WO 2018228467 A1 WO2018228467 A1 WO 2018228467A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- exposure
- portrait
- shooting scene
- current shooting
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Links
Images
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/71—Circuitry for evaluating the brightness variation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/76—Circuitry for compensating brightness variation in the scene by influencing the image signals
Definitions
- the present application relates to the field of image processing technologies, and in particular, to an image exposure method, apparatus, imaging apparatus, and storage medium.
- controlling the camera to perform proper exposure with appropriate exposure compensation is an essential condition for obtaining a high-quality image during image capture or viewfinder preview.
- most current camera devices (such as mobile terminals) support manual adjustment of exposure. Manual adjustment can be performed by the user tapping a setting button so that the camera displays an exposure bar on the screen, and then sliding a cursor along the exposure bar.
- in this way, the exposure compensation used when the image is shot can be set, so as to adjust the exposure effect.
- however, when performing exposure processing, an image capturing device (such as a mobile terminal) applies the same exposure compensation to the entire image. That is, if the user wants to adjust the exposure effect of one area in the image, the other areas of the image will be adjusted to the same exposure as well. It can be seen that the existing manual exposure adjustment scheme cannot comprehensively consider the exposure effect of a specific area in the image, so appropriate exposure compensation cannot be obtained, a high-quality exposure image cannot be obtained, and the user experience is degraded.
- the purpose of the present application is to solve at least one of the above technical problems to some extent.
- the first object of the present application is to propose an image exposure method.
- the method realizes auto-exposure based on multi-frame fusion and comprehensively considers the exposure effect of each specific region in the captured image, so that the entire shot receives appropriate exposure compensation, thereby obtaining a high-quality exposure image and improving the user experience.
- a second object of the present application is to provide an image exposure apparatus.
- a third object of the present application is to propose an image pickup apparatus.
- a fourth object of the present application is to propose a storage medium.
- the image exposure method of the first aspect of the present application includes: when a portrait is detected in the current shooting scene, extracting a portrait contour and a background portion of the current shooting scene based on the depth information of the current shooting scene; acquiring a face region of the portrait, and locating the body region of the portrait according to the face region and the portrait contour; detecting the brightness of the face region, the body region, and the background portion, respectively, to obtain a corresponding first photometric value, second photometric value, and third photometric value; performing exposure control and shooting on the face region, the body region, and the background portion according to the first photometric value, the second photometric value, and the third photometric value, respectively, to obtain a corresponding first exposure image, second exposure image, and third exposure image; and performing fusion processing on the first exposure image, the second exposure image, and the third exposure image to obtain a fused target image.
- in this way, the portrait contour and the background portion of the current shooting scene can be extracted based on the depth information of the current shooting scene, and the brightness of the face region, the body region, and the background portion can be detected respectively to obtain the corresponding first photometric value, second photometric value, and third photometric value. Exposure control and shooting are then performed according to the first photometric value, the second photometric value, and the third photometric value, respectively, to obtain the corresponding three images.
- finally, the three differently exposed images are fused, so that the face, the portrait contour, and the background portion in the resulting photo are all properly exposed. This realizes auto-exposure based on multi-frame fusion, takes into account the exposure effect of each specific area in the captured image, and ensures that the entire shot receives appropriate exposure compensation, so that a high-quality exposure image is obtained and the user experience is improved.
- an image exposure apparatus includes: a first acquisition module, configured to acquire depth information of a current shooting scene; an extraction module, configured to extract a portrait contour and a background portion of the current shooting scene based on the depth information when a portrait is detected in the current shooting scene; a second acquisition module, configured to acquire a face region of the portrait; a positioning module, configured to locate the body region of the portrait according to the face region and the portrait contour; a detection module, configured to respectively detect the brightness of the face region, the body region, and the background portion to obtain a corresponding first photometric value, second photometric value, and third photometric value; a control module, configured to perform exposure control and shooting on the face region, the body region, and the background portion according to the first photometric value, the second photometric value, and the third photometric value, respectively, to obtain a corresponding first exposure image, second exposure image, and third exposure image; and a fusion module, configured to perform fusion processing on the first exposure image, the second exposure image, and the third exposure image to obtain a fused target image.
- with this apparatus, the portrait contour and the background portion of the current shooting scene can be extracted based on the depth information of the current shooting scene; the detection module respectively detects the brightness of the face region, the body region, and the background portion to obtain the corresponding first photometric value, second photometric value, and third photometric value; the control module performs exposure control and shooting according to the first photometric value, the second photometric value, and the third photometric value, respectively, to obtain the corresponding three images; and the fusion module performs fusion processing on the three differently exposed images, so that the face, the portrait contour, and the background portion in the final photo are all properly exposed.
- an image pickup apparatus includes a memory, a processor, and a computer program stored on the memory and executable on the processor; when the processor executes the program, the image exposure method described in the first aspect of the present application is implemented.
- a non-transitory computer-readable storage medium has a computer program stored thereon; when the program is executed by a processor, the image exposure method described in the first aspect of the present application is implemented.
- FIG. 1 is a flow chart of an image exposure method according to an embodiment of the present application.
- FIG. 2 is a schematic structural diagram of an image exposure apparatus according to an embodiment of the present application.
- FIG. 3 is a schematic structural diagram of an image exposure apparatus according to an embodiment of the present application.
- FIG. 4 is a schematic structural diagram of an image pickup apparatus according to an embodiment of the present application.
- the camera device may be a device having a photographing function, for example, a mobile terminal with any of various operating systems (such as a mobile phone or a tablet computer), a digital camera, and the like.
- the image exposure method may include:
- the portrait contour and the background portion in the current shooting scene are extracted based on the depth information of the current shooting scene.
- the depth information of the current shooting scene may be acquired first.
- the depth of field refers to the range of distances in front of and behind the subject within which a sharp image can be obtained by the camera lens or other imager. After focusing is completed, a clear image can be formed within a certain range in front of and behind the focus point; this range of distances is called the depth of field.
- put differently, the length of the space in which the subject can be located while still forming an acceptable image is the depth of field: for any point within this space, the blurred image it forms on the film (sensor) surface stays within the permissible circle of confusion, and the length of this space is the depth of field.
- the order of the step of acquiring the depth information and the step of detecting whether the current shooting scene includes a portrait is not specifically limited. As one example, the depth information of the current shooting scene may be acquired first, and then face recognition technology may be used to detect whether a portrait is included in the current shooting scene. As another example, face recognition technology may first be used to detect whether a portrait is included in the current shooting scene, and the depth information of the current shooting scene is acquired only when a portrait is detected.
- the depth information of the current shooting scene may be acquired by a dual camera or by a depth RGBD camera (RGB + depth, i.e., a color-depth image containing both color information and distance/depth information).
- when a dual camera is used, the specific process of acquiring the depth information of the current shooting scene may be as follows: a first angle θ1 between the object to be photographed and the left camera and a second angle θ2 between the object to be photographed and the right camera are calculated by an algorithm; then, from the center distance between the left camera and the right camera (a fixed value), the first angle θ1, and the second angle θ2, the distance between the object and the lens, which is the depth information of the current shooting scene, can be calculated by the triangle principle.
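- purely as an illustration (not from the patent text), the triangle-principle calculation above can be sketched as follows; the function name, the angle convention (each angle measured between the camera baseline and that camera's line of sight to the object), and the example values are assumptions.

```python
import math

def depth_from_dual_camera(baseline_mm: float, angle1_deg: float, angle2_deg: float) -> float:
    """Estimate the subject distance by triangulation from two cameras.

    baseline_mm : fixed center distance between the left and right cameras.
    angle1_deg  : angle theta1 between the baseline and the left camera's line of sight.
    angle2_deg  : angle theta2 between the baseline and the right camera's line of sight.

    Returns the perpendicular distance from the subject to the camera baseline,
    used here as the depth (distance between the object and the lens).
    """
    t1 = math.radians(angle1_deg)
    t2 = math.radians(angle2_deg)
    # In the triangle (left camera, right camera, subject), the law of sines gives
    # the height over the baseline as baseline * sin(t1) * sin(t2) / sin(t1 + t2).
    return baseline_mm * math.sin(t1) * math.sin(t2) / math.sin(t1 + t2)

# Example: 12 mm baseline, lines of sight at 89.0 and 89.5 degrees to the baseline.
print(depth_from_dual_camera(12.0, 89.0, 89.5))  # ~458 mm, i.e. about 0.46 m
```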
- when a depth RGBD camera is used, the specific process of acquiring the depth information of the current shooting scene may be as follows: a depth detector (for example, an infrared sensor) in the RGBD camera detects the distance between the photographed object and the camera, which is the depth information of the current shooting scene.
- when a portrait is detected in the current shooting scene and the depth information of the current shooting scene has been acquired, face detection technology is used to calculate the distance between the face and the lens according to the depth information, the entire portrait contour is found according to this distance, and the background portion is extracted from the current shooting scene based on the distance-difference separation technique and the portrait contour.
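- a minimal sketch of the distance-difference separation described above, assuming a per-pixel depth map is available (for example from the dual camera or the RGBD camera); the function name and the tolerance parameter are illustrative, not taken from the patent.

```python
import numpy as np

def split_portrait_and_background(depth_map: np.ndarray,
                                  face_distance: float,
                                  tolerance: float = 0.5):
    """Separate the portrait from the background using depth differences.

    depth_map     : per-pixel distance to the lens (same unit as face_distance).
    face_distance : distance between the detected face and the lens.
    tolerance     : pixels whose depth differs from the face distance by less than
                    this amount are treated as part of the portrait contour.

    Returns (portrait_mask, background_mask) as boolean arrays.
    """
    portrait_mask = np.abs(depth_map - face_distance) < tolerance
    background_mask = ~portrait_mask
    return portrait_mask, background_mask
```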
- in the above formula (1):
- ΔL is the depth of field of the current shooting scene;
- f is the focal length of the lens;
- F is the aperture value used when shooting;
- δ is the diameter of the permissible circle of confusion;
- L is the distance between the face and the lens.
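- for illustration only, assuming the standard depth-of-field relation ΔL = 2·f²·F·δ·L² / (f⁴ − F²·δ²·L²), which is consistent with the symbols defined above (the patent page does not reproduce formula (1) itself), the face-to-lens distance L can be recovered from a measured depth of field by algebraic rearrangement; the function and the example numbers below are assumptions.

```python
import math

def face_distance_from_depth_of_field(delta_L: float, f: float, F: float, delta: float) -> float:
    """Invert Delta_L = 2*f^2*F*delta*L^2 / (f^4 - F^2*delta^2*L^2) for L.

    delta_L : depth of field of the current shooting scene.
    f       : focal length of the lens (same length unit as delta_L and delta).
    F       : aperture value (f-number) used for the shot.
    delta   : diameter of the permissible circle of confusion.
    """
    # Rearranging: L^2 = delta_L * f^4 / (2*f^2*F*delta + delta_L*F^2*delta^2)
    return f * f * math.sqrt(delta_L / (2 * f * f * F * delta + delta_L * F * F * delta * delta))

# Example (all lengths in mm): f = 4.0, F = 2.0, delta = 0.002, measured Delta_L = 300.
print(face_distance_from_depth_of_field(300.0, 4.0, 2.0, 0.002))  # ~760 mm
```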
- a face region of the portrait is acquired, and the body region of the portrait is located according to the face region and the portrait contour.
- the face recognition technology may be used to acquire the face region of the portrait, for example the position and size information of the face region; then, according to the face region and the portrait contour (i.e., the overall outline of the portrait), the body region of the portrait can be determined. That is, the body region of the portrait is the part of the portrait contour that remains after the face region is removed.
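- a minimal sketch of this step, assuming the portrait contour is available as a boolean mask and the face region as a bounding box from face recognition; the helper and its parameters are illustrative.

```python
import numpy as np

def locate_body_region(portrait_mask: np.ndarray, face_box: tuple) -> np.ndarray:
    """Locate the body region as the portrait contour with the face region removed.

    portrait_mask : boolean mask of the whole portrait (from the depth-based extraction).
    face_box      : (x, y, width, height) of the face region from face recognition.
    """
    x, y, w, h = face_box
    face_mask = np.zeros_like(portrait_mask, dtype=bool)
    face_mask[y:y + h, x:x + w] = True
    # The body region is what remains of the portrait outline once the face is removed.
    return portrait_mask & ~face_mask
```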
- at block 130, the brightness of the face region, the body region, and the background portion is detected respectively to obtain the corresponding first photometric value, second photometric value, and third photometric value.
- the brightness value of the face region can be detected to obtain the corresponding first photometric value, the brightness value of the body region can be detected to obtain the corresponding second photometric value, and the brightness value of the background portion can be detected to obtain the corresponding third photometric value.
- specifically, the brightness value of each pixel in the face region may be detected, the brightness values of these pixels are averaged, the average is taken as the brightness value of the entire face region, and this brightness value of the entire face region is used as the first photometric value of the face region.
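- a minimal sketch of such region metering, assuming the photometric value of a region is simply the mean luminance of its pixels; the Rec. 601 luminance weights are an assumed choice, not specified by the patent.

```python
import numpy as np

def photometric_value(image_rgb: np.ndarray, region_mask: np.ndarray) -> float:
    """Average brightness of the pixels inside a region, used as its photometric value.

    image_rgb   : H x W x 3 image.
    region_mask : boolean mask selecting the region (face, body, or background).
    """
    # Per-pixel luminance (Rec. 601 weighting of the RGB channels, an assumed choice).
    luminance = (0.299 * image_rgb[..., 0]
                 + 0.587 * image_rgb[..., 1]
                 + 0.114 * image_rgb[..., 2])
    return float(luminance[region_mask].mean())
```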
- at block 140, exposure control and shooting are performed on the face region, the body region, and the background portion according to the first photometric value, the second photometric value, and the third photometric value, respectively, to obtain a corresponding first exposure image, second exposure image, and third exposure image.
- specifically, the face region may be subjected to exposure control and shooting according to the first photometric value to obtain the first exposure image, the body region of the portrait may be subjected to exposure control and shooting according to the second photometric value to obtain the second exposure image, and the background portion may be subjected to exposure control and shooting according to the third photometric value to obtain the third exposure image.
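- the patent does not prescribe how a photometric value maps to concrete exposure settings; one common approach, sketched here purely as an assumption, is to compute an EV offset that brings the metered brightness of each region toward a mid-tone target before that region's exposure is captured.

```python
import math

def exposure_compensation_ev(metered_value: float, target_value: float = 118.0) -> float:
    """Exposure compensation (in EV) that brings a region's metered brightness
    to a target mid-tone level.

    metered_value : average brightness of the region on a 0-255 scale.
    target_value  : desired mid-tone brightness; 118 is an assumed 8-bit middle grey.
    """
    # Doubling the exposure roughly doubles the measured brightness,
    # so the required compensation is the base-2 log of the ratio.
    return math.log2(target_value / metered_value)

# Example: a backlit face metering at 60 needs roughly +1 EV.
print(round(exposure_compensation_ev(60.0), 2))  # ~0.98
```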
- a fusion process is performed on the first exposure image, the second exposure image, and the third exposure image to obtain a fused target image.
- specifically, the face region in the first exposure image, the body region in the second exposure image, and the background portion in the third exposure image may be spliced together, and a smoothing filter may be applied to eliminate the visible boundaries at the seams, so as to obtain the fused target image.
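- a minimal sketch of such splicing and seam smoothing, assuming the three exposures are already aligned and each region is available as a boolean mask; blending with Gaussian-feathered mask weights is an assumed concrete choice for the smoothing filter.

```python
import cv2
import numpy as np

def fuse_exposures(face_img, body_img, background_img,
                   face_mask, body_mask, background_mask,
                   feather_ksize: int = 31):
    """Splice three differently exposed images and smooth the seams.

    Each *_img is an H x W x 3 uint8 image exposed for its own region, and each
    *_mask is the matching boolean region mask. The masks are feathered with a
    Gaussian blur so that transitions at the seams are gradual rather than hard.
    """
    def feather(mask):
        return cv2.GaussianBlur(mask.astype(np.float32), (feather_ksize, feather_ksize), 0)

    w_face, w_body, w_bg = feather(face_mask), feather(body_mask), feather(background_mask)
    total = w_face + w_body + w_bg + 1e-6            # avoid division by zero
    weights = [w / total for w in (w_face, w_body, w_bg)]

    fused = np.zeros_like(face_img, dtype=np.float32)
    for img, w in zip((face_img, body_img, background_img), weights):
        fused += img.astype(np.float32) * w[..., None]   # broadcast the weight over channels
    return np.clip(fused, 0, 255).astype(np.uint8)
```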
- in the finally generated photo, the face, the portrait contour, and the background portion are all properly exposed.
- in this way, auto-exposure based on multi-frame fusion is realized.
- in this way, the portrait contour and the background portion of the current shooting scene can be extracted based on the depth information of the current shooting scene, and the brightness of the face region, the body region, and the background portion can be detected respectively to obtain the corresponding first photometric value, second photometric value, and third photometric value. Exposure control and shooting are then performed according to the first photometric value, the second photometric value, and the third photometric value, respectively, to obtain the corresponding three images.
- finally, the three differently exposed images are fused, so that the face, the portrait contour, and the background portion in the resulting photo are all properly exposed. This realizes auto-exposure based on multi-frame fusion, takes into account the exposure effect of each specific area in the captured image, and ensures that the entire shot receives appropriate exposure compensation, so that a high-quality exposure image is obtained and the user experience is improved.
- an embodiment of the present application further provides an image exposure apparatus. Since the image exposure apparatus provided by the embodiments of the present application corresponds to the image exposure method provided by the above embodiments, the foregoing description of the image exposure method is also applicable to the image exposure apparatus provided in this embodiment and will not be repeated in detail here.
- FIG. 2 is a schematic structural view of an image exposure apparatus according to an embodiment of the present application.
- the image exposing device may include: a first acquiring module 210 , an extracting module 220 , a second acquiring module 230 , a positioning module 240 , a detecting module 250 , a control module 260 , and a fusion module 270 .
- the first acquiring module 210 is configured to acquire the depth information of the current shooting scene. Specifically, in an embodiment of the present application, the first acquiring module 210 may acquire the depth information of the current shooting scene through a dual camera or a depth RGBD camera.
- the extracting module 220 is configured to extract a portrait contour and a background portion in the current shooting scene based on the depth information when detecting the portrait in the current shooting scene.
- the extraction module 220 may include: a calculation unit 221 and an extraction unit 222.
- the calculation unit 221 is configured to calculate the distance between the face and the lens according to the depth information of the current shooting scene by using a face detection technology.
- the extracting unit 222 is configured to find a portrait contour according to the distance, and extract a background portion from the current photographing scene according to the distance difference separating technique and the portrait contour.
- the second obtaining module 230 is configured to acquire a face area of the portrait.
- the positioning module 240 is configured to locate a body region of the portrait according to the face region and the silhouette of the portrait.
- the detecting module 250 is configured to respectively detect brightness of the face area, the body area, and the background portion to obtain corresponding first photometric values, second photometric values, and third photometric values.
- the control module 260 is configured to perform exposure control and shooting on the face region, the body region, and the background portion according to the first photometric value, the second photometric value, and the third photometric value, respectively, to obtain a corresponding first exposure image, second exposure image, and third exposure image.
- the fusion module 270 is configured to perform fusion processing on the first exposure image, the second exposure image, and the third exposure image to obtain a fused target image. Specifically, in an embodiment of the present application, the fusion module 270 may splice the face region in the first exposure image, the body region in the second exposure image, and the background portion in the third exposure image, and apply a smoothing filter to eliminate the boundaries at the seams, so as to obtain the fused target image.
- with this apparatus, the portrait contour and the background portion of the current shooting scene can be extracted based on the depth information of the current shooting scene; the detecting module respectively detects the brightness of the face region, the body region, and the background portion to obtain the corresponding first photometric value, second photometric value, and third photometric value; the control module performs exposure control and shooting according to the first photometric value, the second photometric value, and the third photometric value, respectively, to obtain the corresponding three images; and the fusion module performs fusion processing on the three differently exposed images, so that the face, the portrait contour, and the background portion in the final photo are all properly exposed.
- the present application also proposes an imaging apparatus.
- FIG. 4 is a schematic structural diagram of an image pickup apparatus according to an embodiment of the present application.
- the imaging device may be a device having a shooting function, for example, a mobile terminal with any of various operating systems (such as a mobile phone or a tablet computer), a digital camera, and the like.
- the image pickup apparatus 40 may include a memory 41, a processor 42, and a computer program 43 stored on the memory 41 and executable on the processor 42; when the processor 42 executes the computer program 43, the image exposure method described in any of the above embodiments is implemented.
- the present application further provides a non-transitory computer-readable storage medium having a computer program stored thereon; when the program is executed by a processor, the image exposure method described in any of the above embodiments of the present application is implemented.
- the present application also provides a computer program product; when instructions in the computer program product are executed by a processor, an image exposure method is performed, the method comprising the following steps:
- when a portrait is detected in the current shooting scene, the portrait contour and the background portion of the current shooting scene are extracted based on the depth information of the current shooting scene.
- the face region of the portrait is acquired, and the body region of the portrait is located according to the face region and the portrait contour.
- the brightness of the face region, the body region, and the background portion is detected respectively to obtain the corresponding first photometric value, second photometric value, and third photometric value.
- exposure control and shooting are performed on the face region, the body region, and the background portion according to the first photometric value, the second photometric value, and the third photometric value, respectively, to obtain the corresponding first exposure image, second exposure image, and third exposure image.
- the first exposure image, the second exposure image, and the third exposure image are subjected to fusion processing to obtain a fused target image.
- the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated.
- a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature.
- the meaning of "a plurality” is at least two, such as two, three, etc., unless specifically defined otherwise.
- a "computer-readable medium” can be any apparatus that can contain, store, communicate, propagate, or transport a program for use in an instruction execution system, apparatus, or device, or in conjunction with the instruction execution system, apparatus, or device.
- more specific examples of the computer-readable medium include: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device, and a portable compact disc read-only memory (CD-ROM).
- the computer-readable medium may even be paper or another suitable medium on which the program is printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
- portions of the application can be implemented in hardware, software, firmware, or a combination thereof.
- multiple steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system.
- for example, if implemented in hardware, as in another embodiment, the steps or methods may be implemented by any one of, or a combination of, the following techniques well known in the art: a discrete logic circuit having logic gates for implementing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gates, a programmable gate array (PGA), a field-programmable gate array (FPGA), and the like.
- each functional unit in each embodiment of the present application may be integrated into one processing module, or each unit may exist physically separately, or two or more units may be integrated into one module.
- the above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
- the integrated modules, if implemented in the form of software functional modules and sold or used as stand-alone products, may also be stored in a computer readable storage medium.
- the above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.
- while the embodiments of the present application have been shown and described above, it should be understood that the above embodiments are illustrative and are not to be construed as limiting the scope of the present application; the above embodiments are subject to changes, modifications, substitutions, and variations within the scope of the present application.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Studio Devices (AREA)
- Image Processing (AREA)
Abstract
The present invention relates to an image exposure method and device, a photographic device, and a storage medium. The method comprises: when it is detected that a human figure is contained in a current shooting scene, extracting a human figure contour and a background portion in the current shooting scene on the basis of depth-of-field information about the current shooting scene; acquiring a human face area of the human figure, and locating a body area of the human figure according to the human face area and the human figure contour; respectively detecting the brightness of the human face area, the body area, and the background portion so as to obtain a corresponding first light value, second light value, and third light value; respectively performing exposure control and photography according to the first light value, the second light value, and the third light value so as to obtain a corresponding first exposed image, second exposed image, and third exposed image; and fusing the three differently exposed images so as to obtain a fused target image. The method according to the invention makes it possible to obtain appropriate exposure compensation for the entire photograph, as well as a high-quality exposed image.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201710458977.1 | 2017-06-16 | ||
| CN201710458977.1A CN107241557A (zh) | 2017-06-16 | 2017-06-16 | 图像曝光方法、装置、摄像设备及存储介质 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2018228467A1 true WO2018228467A1 (fr) | 2018-12-20 |
Family
ID=59986386
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2018/091228 Ceased WO2018228467A1 (fr) | 2017-06-16 | 2018-06-14 | Procédé et dispositif d'exposition d'image, dispositif photographique, et support de stockage |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN107241557A (fr) |
| WO (1) | WO2018228467A1 (fr) |
Cited By (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111553231A (zh) * | 2020-04-21 | 2020-08-18 | 上海锘科智能科技有限公司 | 基于信息融合的人脸抓拍与去重系统、方法、终端及介质 |
| CN111582171A (zh) * | 2020-05-08 | 2020-08-25 | 济南博观智能科技有限公司 | 一种行人闯红灯监测方法、装置、系统及可读存储介质 |
| CN112053389A (zh) * | 2020-07-28 | 2020-12-08 | 北京迈格威科技有限公司 | 人像处理方法、装置、电子设备及可读存储介质 |
| CN112085686A (zh) * | 2020-08-21 | 2020-12-15 | 北京迈格威科技有限公司 | 图像处理方法、装置、电子设备及计算机可读存储介质 |
| CN112887612A (zh) * | 2021-01-27 | 2021-06-01 | 维沃移动通信有限公司 | 一种拍摄方法、装置和电子设备 |
| EP3883236A1 (fr) * | 2020-03-16 | 2021-09-22 | Canon Kabushiki Kaisha | Appareil de traitement d'informations, appareil d'imagerie, procédé et support d'informations |
| CN113903060A (zh) * | 2021-09-17 | 2022-01-07 | 北京极豪科技有限公司 | 一种图像处理方法、装置、设备以及存储介质 |
| CN114067189A (zh) * | 2021-12-01 | 2022-02-18 | 厦门航天思尔特机器人系统股份公司 | 一种工件识别方法、装置、设备和存储介质 |
| CN114244965A (zh) * | 2021-11-22 | 2022-03-25 | 浪潮金融信息技术有限公司 | 一种应用于高拍仪的高精准曝光度调控方法、系统及介质 |
| CN115115559A (zh) * | 2021-03-23 | 2022-09-27 | 深圳市万普拉斯科技有限公司 | 一种图像处理方法、装置及电子设备 |
| CN116112657A (zh) * | 2023-01-11 | 2023-05-12 | 网易(杭州)网络有限公司 | 图像处理方法、装置、计算机可读存储介质及电子装置 |
Families Citing this family (20)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107241557A (zh) * | 2017-06-16 | 2017-10-10 | 广东欧珀移动通信有限公司 | 图像曝光方法、装置、摄像设备及存储介质 |
| CN107592468B (zh) * | 2017-10-23 | 2019-12-03 | 维沃移动通信有限公司 | 一种拍摄参数调整方法及移动终端 |
| CN107623818B (zh) * | 2017-10-30 | 2020-04-17 | 维沃移动通信有限公司 | 一种图像曝光方法和移动终端 |
| CN107948519B (zh) * | 2017-11-30 | 2020-03-27 | Oppo广东移动通信有限公司 | 图像处理方法、装置及设备 |
| CN107995425B (zh) * | 2017-12-11 | 2019-08-20 | 维沃移动通信有限公司 | 一种图像处理方法及移动终端 |
| CN109981992B (zh) * | 2017-12-28 | 2021-02-23 | 周秦娜 | 一种在高环境光变化下提升测距准确度的控制方法及装置 |
| CN108616689B (zh) * | 2018-04-12 | 2020-10-02 | Oppo广东移动通信有限公司 | 基于人像的高动态范围图像获取方法、装置及设备 |
| CN108650466A (zh) * | 2018-05-24 | 2018-10-12 | 努比亚技术有限公司 | 一种强光或逆光拍摄人像时提升照片宽容度的方法及电子设备 |
| CN108683862B (zh) * | 2018-08-13 | 2020-01-10 | Oppo广东移动通信有限公司 | 成像控制方法、装置、电子设备及计算机可读存储介质 |
| CN109242794B (zh) * | 2018-08-29 | 2021-05-11 | Oppo广东移动通信有限公司 | 图像处理方法、装置、电子设备及计算机可读存储介质 |
| CN109068060B (zh) * | 2018-09-05 | 2021-06-08 | Oppo广东移动通信有限公司 | 图像处理方法和装置、终端设备、计算机可读存储介质 |
| CN108833804A (zh) * | 2018-09-20 | 2018-11-16 | Oppo广东移动通信有限公司 | 成像方法、装置和电子设备 |
| CN108881701B (zh) * | 2018-09-30 | 2021-04-02 | 华勤技术股份有限公司 | 拍摄方法、摄像头、终端设备及计算机可读存储介质 |
| CN109360176B (zh) * | 2018-10-15 | 2021-03-02 | Oppo广东移动通信有限公司 | 图像处理方法、装置、电子设备和计算机可读存储介质 |
| CN109819176A (zh) * | 2019-01-31 | 2019-05-28 | 深圳达闼科技控股有限公司 | 一种拍摄方法、系统、装置、电子设备及存储介质 |
| CN110211024A (zh) * | 2019-03-14 | 2019-09-06 | 厦门启尚科技有限公司 | 一种图像智能退底的方法 |
| CN111402135B (zh) * | 2020-03-17 | 2023-06-20 | Oppo广东移动通信有限公司 | 图像处理方法、装置、电子设备及计算机可读存储介质 |
| JP7669128B2 (ja) * | 2020-09-04 | 2025-04-28 | キヤノン株式会社 | 情報処理装置、方法、プログラム及び記憶媒体 |
| CN112819722B (zh) * | 2021-02-03 | 2024-09-20 | 东莞埃科思科技有限公司 | 一种红外图像人脸曝光方法、装置、设备及存储介质 |
| CN113347369B (zh) * | 2021-06-01 | 2022-08-19 | 中国科学院光电技术研究所 | 一种深空探测相机曝光调节方法、调节系统及其调节装置 |
Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6654062B1 (en) * | 1997-11-13 | 2003-11-25 | Casio Computer Co., Ltd. | Electronic camera |
| JP2005109757A (ja) * | 2003-09-29 | 2005-04-21 | Fuji Photo Film Co Ltd | 画像撮像装置、画像処理装置、画像撮像方法、及びプログラム |
| CN104092955A (zh) * | 2014-07-31 | 2014-10-08 | 北京智谷睿拓技术服务有限公司 | 闪光控制方法及控制装置、图像采集方法及采集设备 |
| CN104092954A (zh) * | 2014-07-25 | 2014-10-08 | 北京智谷睿拓技术服务有限公司 | 闪光控制方法及控制装置、图像采集方法及采集装置 |
| CN106161980A (zh) * | 2016-07-29 | 2016-11-23 | 宇龙计算机通信科技(深圳)有限公司 | 基于双摄像头的拍照方法及系统 |
| CN106851123A (zh) * | 2017-03-09 | 2017-06-13 | 广东欧珀移动通信有限公司 | 曝光控制方法、曝光控制装置及电子装置 |
| CN106851124A (zh) * | 2017-03-09 | 2017-06-13 | 广东欧珀移动通信有限公司 | 基于景深的图像处理方法、处理装置和电子装置 |
| CN107241557A (zh) * | 2017-06-16 | 2017-10-10 | 广东欧珀移动通信有限公司 | 图像曝光方法、装置、摄像设备及存储介质 |
Family Cites Families (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP5386793B2 (ja) * | 2006-12-11 | 2014-01-15 | 株式会社リコー | 撮像装置および撮像装置の露出制御方法 |
| CN106303250A (zh) * | 2016-08-26 | 2017-01-04 | 维沃移动通信有限公司 | 一种图像处理方法及移动终端 |
| CN106331510B (zh) * | 2016-10-31 | 2019-10-15 | 维沃移动通信有限公司 | 一种逆光拍照方法及移动终端 |
- 2017-06-16: priority application CN201710458977.1A filed in China (published as CN107241557A, status: Pending)
- 2018-06-14: international application PCT/CN2018/091228 filed (published as WO2018228467A1, status: Ceased)
Patent Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6654062B1 (en) * | 1997-11-13 | 2003-11-25 | Casio Computer Co., Ltd. | Electronic camera |
| JP2005109757A (ja) * | 2003-09-29 | 2005-04-21 | Fuji Photo Film Co Ltd | 画像撮像装置、画像処理装置、画像撮像方法、及びプログラム |
| CN104092954A (zh) * | 2014-07-25 | 2014-10-08 | 北京智谷睿拓技术服务有限公司 | 闪光控制方法及控制装置、图像采集方法及采集装置 |
| CN104092955A (zh) * | 2014-07-31 | 2014-10-08 | 北京智谷睿拓技术服务有限公司 | 闪光控制方法及控制装置、图像采集方法及采集设备 |
| CN106161980A (zh) * | 2016-07-29 | 2016-11-23 | 宇龙计算机通信科技(深圳)有限公司 | 基于双摄像头的拍照方法及系统 |
| CN106851123A (zh) * | 2017-03-09 | 2017-06-13 | 广东欧珀移动通信有限公司 | 曝光控制方法、曝光控制装置及电子装置 |
| CN106851124A (zh) * | 2017-03-09 | 2017-06-13 | 广东欧珀移动通信有限公司 | 基于景深的图像处理方法、处理装置和电子装置 |
| CN107241557A (zh) * | 2017-06-16 | 2017-10-10 | 广东欧珀移动通信有限公司 | 图像曝光方法、装置、摄像设备及存储介质 |
Cited By (16)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US11575841B2 (en) | 2020-03-16 | 2023-02-07 | Canon Kabushiki Kaisha | Information processing apparatus, imaging apparatus, method, and storage medium |
| EP3883236A1 (fr) * | 2020-03-16 | 2021-09-22 | Canon Kabushiki Kaisha | Appareil de traitement d'informations, appareil d'imagerie, procédé et support d'informations |
| CN111553231A (zh) * | 2020-04-21 | 2020-08-18 | 上海锘科智能科技有限公司 | 基于信息融合的人脸抓拍与去重系统、方法、终端及介质 |
| CN111553231B (zh) * | 2020-04-21 | 2023-04-28 | 上海锘科智能科技有限公司 | 基于信息融合的人脸抓拍与去重系统、方法、终端及介质 |
| CN111582171A (zh) * | 2020-05-08 | 2020-08-25 | 济南博观智能科技有限公司 | 一种行人闯红灯监测方法、装置、系统及可读存储介质 |
| CN111582171B (zh) * | 2020-05-08 | 2024-04-09 | 济南博观智能科技有限公司 | 一种行人闯红灯监测方法、装置、系统及可读存储介质 |
| CN112053389A (zh) * | 2020-07-28 | 2020-12-08 | 北京迈格威科技有限公司 | 人像处理方法、装置、电子设备及可读存储介质 |
| CN112085686A (zh) * | 2020-08-21 | 2020-12-15 | 北京迈格威科技有限公司 | 图像处理方法、装置、电子设备及计算机可读存储介质 |
| CN112887612A (zh) * | 2021-01-27 | 2021-06-01 | 维沃移动通信有限公司 | 一种拍摄方法、装置和电子设备 |
| CN112887612B (zh) * | 2021-01-27 | 2022-10-04 | 维沃移动通信有限公司 | 一种拍摄方法、装置和电子设备 |
| CN115115559A (zh) * | 2021-03-23 | 2022-09-27 | 深圳市万普拉斯科技有限公司 | 一种图像处理方法、装置及电子设备 |
| CN113903060A (zh) * | 2021-09-17 | 2022-01-07 | 北京极豪科技有限公司 | 一种图像处理方法、装置、设备以及存储介质 |
| CN114244965A (zh) * | 2021-11-22 | 2022-03-25 | 浪潮金融信息技术有限公司 | 一种应用于高拍仪的高精准曝光度调控方法、系统及介质 |
| CN114067189A (zh) * | 2021-12-01 | 2022-02-18 | 厦门航天思尔特机器人系统股份公司 | 一种工件识别方法、装置、设备和存储介质 |
| CN116112657A (zh) * | 2023-01-11 | 2023-05-12 | 网易(杭州)网络有限公司 | 图像处理方法、装置、计算机可读存储介质及电子装置 |
| CN116112657B (zh) * | 2023-01-11 | 2024-05-28 | 网易(杭州)网络有限公司 | 图像处理方法、装置、计算机可读存储介质及电子装置 |
Also Published As
| Publication number | Publication date |
|---|---|
| CN107241557A (zh) | 2017-10-10 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2018228467A1 (fr) | Procédé et dispositif d'exposition d'image, dispositif photographique, et support de stockage | |
| CN107241559B (zh) | 人像拍照方法、装置以及摄像设备 | |
| CN107977940B (zh) | 背景虚化处理方法、装置及设备 | |
| CN107948519B (zh) | 图像处理方法、装置及设备 | |
| CN109089047B (zh) | 控制对焦的方法和装置、存储介质、电子设备 | |
| CN100587538C (zh) | 成像设备、成像设备的控制方法 | |
| CN103197491B (zh) | 快速自动聚焦的方法和图像采集装置 | |
| CN107945105B (zh) | 背景虚化处理方法、装置及设备 | |
| WO2019011147A1 (fr) | Procédé et appareil de traitement de région de visage humain dans une scène de rétroéclairage | |
| CN108024057B (zh) | 背景虚化处理方法、装置及设备 | |
| CN105227838B (zh) | 一种图像处理方法及移动终端 | |
| CN108605087B (zh) | 终端的拍照方法、拍照装置和终端 | |
| WO2018201809A1 (fr) | Dispositif et procédé de traitement d'image basé sur des caméras doubles | |
| WO2019105214A1 (fr) | Procédé et appareil de floutage d'image, terminal mobile et support de stockage | |
| WO2021136078A1 (fr) | Procédé de traitement d'image, système de traitement d'image, support lisible par ordinateur et appareil électronique | |
| CN110708463B (zh) | 对焦方法、装置、存储介质及电子设备 | |
| CN105100620B (zh) | 拍摄方法及装置 | |
| CN107948500A (zh) | 图像处理方法和装置 | |
| CN101764925A (zh) | 数字图像的浅景深模拟方法 | |
| CN104333710A (zh) | 相机曝光方法、装置及设备 | |
| CN106878605A (zh) | 一种基于电子设备的图像生成的方法和电子设备 | |
| WO2018228466A1 (fr) | Procédé et appareil d'affichage de région de mise au point, et dispositif terminal | |
| WO2019105260A1 (fr) | Procédé, appareil et dispositif d'obtention de profondeur de champ | |
| CN108289170B (zh) | 能够检测计量区域的拍照装置、方法及计算机可读介质 | |
| WO2019011110A1 (fr) | Procédé et appareil de traitement de région de visage humain dans une scène de rétroéclairage |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18817105; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 18817105; Country of ref document: EP; Kind code of ref document: A1 |