
CN120812394A - Image shooting method and device - Google Patents

Image shooting method and device

Info

Publication number
CN120812394A
CN120812394A (application CN202511169843.9A)
Authority
CN
China
Prior art keywords
image
shooting
preview interface
focusing
electronic device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202511169843.9A
Other languages
Chinese (zh)
Inventor
黄焕思
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202511169843.9A priority Critical patent/CN120812394A/en
Publication of CN120812394A publication Critical patent/CN120812394A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The present application discloses an image shooting method and device, belonging to the field of camera technology. The method comprises: receiving a first input to a shooting preview interface, the shooting preview interface including at least one shooting object; and shooting a first image in response to the first input; wherein the sharpness value of the at least one shooting object in the first image is determined based on the position and number of the at least one shooting object in the shooting preview interface.

Description

Image shooting method and device
Technical Field
The application belongs to the technical field of image shooting, and particularly relates to an image shooting method and device.
Background
Camera modules can generally be classified into two types, Fixed Focus (FF) and Auto Focus (AF), based on whether they have a focusing function. When an electronic device captures a person image with an AF camera, the sharpness of the image generally depends on the camera's hardware capabilities and on the electronic device's back-end software processing algorithms.
However, because image sharpness is limited by the sharpness characteristics and consistency of the camera hardware, the same sharpness at every position of the image cannot be guaranteed during shooting. The subject may therefore fall at a position where hardware sharpness is poor, resulting in a poor display effect of the subject in the image.
Disclosure of Invention
The embodiments of the application aim to provide an image shooting method and device that can improve the sharpness of a shooting object in an image.
In a first aspect, an embodiment of the application provides an image shooting method, executed by an electronic device, comprising: receiving a first input to a shooting preview interface, where the shooting preview interface includes at least one shooting object; and shooting a first image in response to the first input, where the sharpness value of the at least one shooting object in the first image is determined based on the position and number of the at least one shooting object in the shooting preview interface.
In a second aspect, an embodiment of the application provides an image shooting device, applied to an electronic device, comprising a receiving module and a shooting module. The receiving module is configured to receive a first input to a shooting preview interface, where the shooting preview interface includes at least one shooting object. The shooting module is configured to shoot a first image in response to the first input received by the receiving module, where the sharpness value of the at least one shooting object in the first image is determined based on the position and number of the at least one shooting object in the shooting preview interface.
In a third aspect, an embodiment of the present application provides an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the method as described in the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor perform the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program/program product stored in a storage medium, the program/program product being executable by at least one processor to implement the method according to the first aspect.
In the embodiments of the application, a first input to a shooting preview interface is received, where the shooting preview interface includes at least one shooting object; a first image is obtained by shooting in response to the first input; and the sharpness value of the at least one shooting object in the first image is determined based on the position and number of the at least one shooting object in the shooting preview interface. In this solution, when the user wants to capture an image including at least one shooting object, and the shooting preview interface includes at least one shooting object that meets the user's requirement, the user may trigger the electronic device to shoot and obtain a first image, where the sharpness value of the at least one shooting object in the first image may be determined based on the position and number of the at least one shooting object in the shooting preview interface. That is, while the electronic device captures the first image, the focusing position is automatically adjusted based on the position and number of the shooting objects in the image, so the sharpness of the shooting objects presented in the image is adjusted accordingly. In other words, the electronic device can focus on a more accurate position, so that the sharpness of the at least one shooting object presented in the first image better meets the user's requirements and the display effect of the shooting objects in the image is improved.
Drawings
FIG. 1 is a flowchart of an image capturing method provided by some embodiments of the present application;
FIG. 2 is an example schematic diagram of a method of determining a position of a face of a subject provided by some embodiments of the present application;
FIG. 3A is an example schematic diagram of a split shot preview interface provided by some embodiments of the present application;
FIG. 3B is an example schematic diagram of a split shot preview interface provided by some embodiments of the present application;
FIG. 4 is a flow chart of an image capture method provided by some embodiments of the present application;
FIG. 5 is a flow chart of an image capture method provided by some embodiments of the present application;
FIG. 6A is an example schematic diagram of a shooting preview interface provided by some embodiments of the present application;
FIG. 6B is an example schematic diagram of a shooting preview interface provided by some embodiments of the present application;
FIG. 7A is an example schematic illustration of a shooting preview interface provided by some embodiments of the present application;
FIG. 7B is an example schematic illustration of a shooting preview interface provided by some embodiments of the present application;
FIG. 8A is an example schematic diagram of a shooting preview interface provided by some embodiments of the present application;
FIG. 8B is an example schematic diagram of a shooting preview interface provided by some embodiments of the present application;
FIG. 9A is an example schematic diagram of a shooting preview interface provided by some embodiments of the present application;
FIG. 9B is an example schematic diagram of a shooting preview interface provided by some embodiments of the present application;
FIG. 10A is a schematic illustration of an example of a background image provided by some embodiments of the application;
FIG. 10B is an example schematic illustration of a facial image provided by some embodiments of the application;
FIG. 10C is a schematic illustration of an example of a background image provided by some embodiments of the application;
FIG. 10D is an example schematic illustration of a facial image provided by some embodiments of the present application;
FIG. 10E is a schematic illustration of an example of an image generation method provided by some embodiments of the application;
FIG. 11A is an example schematic illustration of a background image provided by some embodiments of the application;
FIG. 11B is an example schematic illustration of a facial image provided by some embodiments of the application;
FIG. 11C is an example schematic illustration of a background image provided by some embodiments of the application;
FIG. 11D is an example schematic illustration of a facial image provided by some embodiments of the present application;
FIG. 11E is a schematic illustration of an example of an image generation method provided by some embodiments of the application;
FIG. 12 is a flow chart of an image capture method provided by some embodiments of the present application;
FIG. 13A is an example schematic illustration of a shooting preview interface provided by some embodiments of the present application;
FIG. 13B is an example schematic illustration of a shooting preview interface provided by some embodiments of the present application;
FIG. 14A is an example schematic view of a shooting preview interface provided by some embodiments of the present application;
FIG. 14B is an example schematic view of a shooting preview interface provided by some embodiments of the present application;
FIG. 15A is an example schematic view of a shooting preview interface provided by some embodiments of the present application;
FIG. 15B is an example schematic view of a shooting preview interface provided by some embodiments of the present application;
FIG. 16 is a schematic structural view of an image capturing device according to some embodiments of the present application;
FIG. 17 is a schematic diagram of a hardware configuration of an electronic device according to some embodiments of the present application;
FIG. 18 is a schematic diagram of a hardware structure of an electronic device according to some embodiments of the present application.
Detailed Description
The technical solutions of the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which are obtained by a person skilled in the art based on the embodiments of the present application, fall within the scope of protection of the present application.
The terms "first," "second," and the like in the description of the present application, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances such that the embodiments of the application are capable of operation in sequences other than those illustrated or otherwise described herein, and that the objects identified by "first," "second," etc. are generally of a type not limited to the number of objects, such as the first object may be one or more, where the plurality refers to at least two. In addition, "and/or" in the specification means at least one of the connected objects, and the character "/", generally means a relationship in which the associated objects are one kind of "or".
The terms "at least one", and the like in the description of the present application mean that they encompass any one, any two, or a combination of two or more of the objects. For example, at least one of a, b, c (item) may represent "a", "b", "c", "a and b", "a and c", "b and c" and "a, b and c", wherein a, b, c may be single or plural, wherein plural means at least two. Similarly, the term "at least two" means two or more, and the meaning of the expression is similar to the term "at least one".
The terminology used in the description of the embodiments of the application herein is for the purpose of describing particular embodiments of the application only and is not intended to be limiting of the application. The following is a description of terms related to embodiments of the present application.
Focusing is a core operation in photography and image capture, and it determines the sharpness of a subject in the picture. In essence, focusing adjusts the distance between the lens and the image sensor (or film) so that light from the shooting object converges precisely on the sensor to form a sharp image. This is accomplished by changing the focal length of the lens or by moving the lens group. Common focus types include Fixed Focus (FF) and Auto Focus (AF).
FF is an imaging technique that requires no focal-length adjustment: the lens position is fixed, and the optical design guarantees sharp imaging of objects within a specific distance range. This means the lens shoots at the same focal setting no matter how far the subject is from the camera. The typical approach is to use a small aperture to increase the depth of field and to set the focus at the hyperfocal distance. A sufficiently sharp image can then be obtained within a certain range, such as from 1 meter to infinity: objects beyond a certain distance remain sharp, while objects closer than that distance become progressively blurred.
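The hyperfocal-distance idea above can be made concrete with the standard depth-of-field formula H = f²/(N·c) + f, where f is the focal length, N the f-number, and c the circle of confusion. The numbers below are illustrative assumptions (a typical phone-camera lens), not values taken from this application:

```python
def hyperfocal_distance_mm(focal_length_mm: float, f_number: float,
                           circle_of_confusion_mm: float) -> float:
    """Hyperfocal distance H = f^2 / (N * c) + f (standard optics formula)."""
    return focal_length_mm ** 2 / (f_number * circle_of_confusion_mm) + focal_length_mm

# Assumed example values: a 4 mm lens at f/2.0, 0.002 mm circle of confusion.
H = hyperfocal_distance_mm(4.0, 2.0, 0.002)  # 4004 mm, i.e. about 4 m
```

Focusing the fixed lens at H keeps everything from roughly H/2 to infinity acceptably sharp, which is why FF modules can cover "1 meter to infinity" with no moving parts.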
AF is a technique that achieves sharp imaging of a subject by automatically adjusting the focus of the lens, and it is widely used in photographic imaging devices. AF comes in two forms: active and passive. Active AF measures distance by emitting infrared rays or ultrasonic waves; it works in low-light environments but is limited by smooth surfaces and distant objects. Passive AF analyzes phase differences or contrast in the image; it works well for static scenes but is weak in low-light or low-contrast environments.
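The passive, contrast-based AF mentioned above can be sketched as a search over candidate lens positions, scoring each captured frame with a contrast metric such as the variance of a discrete Laplacian. This is a minimal illustration, not a production algorithm; `capture_at` is a hypothetical callback standing in for the real camera pipeline:

```python
import numpy as np

def sharpness(gray: np.ndarray) -> float:
    """Contrast metric: variance of a discrete Laplacian (higher = sharper)."""
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

def coarse_autofocus(capture_at, codes):
    """Scan candidate lens positions (codes) and return the one whose
    captured frame maximizes the contrast metric."""
    return max(codes, key=lambda c: sharpness(capture_at(c)))
```

A real implementation would hill-climb or binary-search rather than scan every code, but the principle — sharpness peaks at the in-focus lens position — is the same.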
The camera module is the core component of an imaging device and is responsible for converting light into digital image or video signals. It integrates optical, electronic, mechanical, and other technologies, and is widely used in devices such as mobile phones, cameras, surveillance cameras, drones, vehicle-mounted cameras, and medical endoscopes. A camera module generally includes a Lens, a Filter, an Image Sensor, an AF module, an FF module, Electronic Image Stabilization (EIS), an Optical Image Stabilization driver (OIS driver), a Voice Coil Motor (VCM), and so on.
OIS refers to the use of optical components, such as lenses, in a camera or similar imaging device to avoid or reduce the effect of device shake while the optical signal is captured, thereby improving imaging quality. The OIS driver and the VCM are the core components that realize the OIS and AF functions in the camera module; working together, they improve shooting stability and imaging quality.
A convolutional neural network (Convolutional Neural Network, CNN), a core model in deep learning, automatically extracts image features through a combination of convolutional layers, pooling layers, and fully connected layers, and performs excellently in tasks such as image classification and object detection.
The real-time object detection algorithm YOLO (You Only Look Once) completes object detection in a single forward pass, achieving real-time performance, and is widely used in fields such as autonomous driving and video surveillance. Its name derives from its design concept: detection is completed by looking at the image only once.
The Single Shot MultiBox Detector (SSD), also referred to as a "single-shot detector," achieves a balance of speed and accuracy by predicting target locations and classes on feature maps of different scales.
An interface is the medium through which a user interacts with an electronic device. The interface allows a user to send instructions to the system via an input device and to receive feedback via an output device. Input devices include keyboards, mice, and touch screens; output devices include displays and loudspeakers.
A control is an element in the graphical user interface that performs a corresponding process or displays related data in response to user input. Controls may include, but are not limited to, virtual buttons, sliders, progress bars, and check boxes.
In a camera autofocus system, the code value, also called the DAC code or driving code, is the core digital coding parameter that controls the lens position and directly determines the precise position to which the lens moves. In essence, it is the digital control signal for a device such as a stepping motor or voice coil motor in the driving circuit, used to physically adjust the distance between the lens and the image sensor so that sharp imaging is achieved.
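As a rough illustration of how a code value maps to lens motion, the sketch below assumes an idealized linear 10-bit DAC-to-travel relationship. Real VCM modules are nonlinear and calibrated per unit, and the 350 µm full-scale travel here is an assumed figure, not one from this application:

```python
def code_to_displacement_um(code: int, bits: int = 10,
                            full_scale_um: float = 350.0) -> float:
    """Map a DAC code (e.g. 10-bit, 0-1023) to lens travel in micrometers,
    assuming an idealized linear VCM response (real modules are calibrated)."""
    max_code = (1 << bits) - 1
    if not 0 <= code <= max_code:
        raise ValueError(f"code must be in [0, {max_code}]")
    return full_scale_um * code / max_code
```

In practice the autofocus routine searches over code values (as in the contrast-AF sketch earlier in this section) rather than computing displacement directly, but the code value is what physically positions the lens.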
The image capturing method provided by the embodiment of the application is described in detail below through specific embodiments and application scenes thereof with reference to the accompanying drawings.
The image shooting method provided by the embodiments of the present application can be applied to scenes of shooting images that include people. The method is exemplarily described below through some specific scenes.
Scene 1: when a user takes an image of a single other user, the user may launch a camera application in the electronic device. If the image content in the shooting preview interface displayed by the camera application meets the requirements, for example, it includes that single other user, the user may trigger the electronic device to perform a shooting task to obtain an image including that user.
Scene 2: when a user takes an image of a plurality of other users, the user may launch a camera application in the electronic device. If the image content in the shooting preview interface displayed by the camera application meets the requirements, for example, it includes those other users, the user may trigger the electronic device to perform a shooting task to obtain an image including the plurality of other users.
It should be noted that scene 1 and scene 2 above are only examples of scenarios to which the embodiments of the present application may be applied. In practical implementation, the embodiments may also be applied to any other possible scenario of shooting an image including at least one shooting object, such as an animal or an object, which is not limited herein.
The execution body of the image capturing method provided by the embodiments of the present application is an image capturing device, which may be an electronic device or a functional module or entity in the electronic device. The image capturing method is exemplarily described below with an electronic device as the execution body.
An embodiment of the application provides an image shooting method, and fig. 1 shows a flowchart of the image shooting method provided by the embodiment of the application. As shown in fig. 1, the image capturing method provided by the embodiment of the present application may include the following steps 201 and 202.
Step 201, the electronic device receives a first input to a shooting preview interface.
In some embodiments of the present application, the shooting preview interface may include at least one shooting object.
In some embodiments of the present application, the subject may include, but is not limited to, at least one of a person, an animal, and an object.
In some embodiments of the present application, the shooting preview interface may be an interface of any one of camera applications in the electronic device.
In some embodiments of the application, the camera application may include a system camera application or a third party camera application.
An electronic device is exemplified as a mobile phone. When the user wants to capture an image or a video, the user may click the icon of the camera application displayed on the desktop to trigger the mobile phone to start running the camera application. The mobile phone may then display the image captured in real time, for example an image including at least one shooting object, in the shooting preview interface of the camera application.
In some embodiments of the present application, the first input may be an input triggering the electronic device to capture an image. For example, the first input may be an input by the user to a shooting control displayed on the shooting preview interface.
In some embodiments of the present application, the first input may include, but is not limited to, any of a click input, a voice input, a gesture input, or other feasibility input, which embodiments of the present application do not limit.
In some embodiments of the present application, the gesture input may include, but is not limited to, at least one of a click gesture, a slide gesture, a long press gesture, or other possible gesture inputs, and a specific gesture input form may be determined according to actual requirements, which is not limited in some embodiments.
In some embodiments of the application, the first input may comprise a space-apart gesture input.
In some embodiments of the present application, the electronic device may determine, based on the input parameters of the space-apart gesture input, the control selected by that input in the shooting preview interface, or the electronic device may directly perform the task corresponding to the space-apart gesture based on those input parameters.
The first input may be, for example, a space click input of the shooting control by the user.
In some embodiments of the application, the first input includes a user input to a physical key on the electronic device.
In some embodiments of the present application, the physical key on the electronic device may be a volume up key, a volume down key, a power key, or any physical key in the electronic device.
In some embodiments of the present application, the "user input to a physical key on an electronic device" may be user input to one physical key on an electronic device, user input to a plurality of physical keys on an electronic device, or user input to a combination of physical keys.
For example, the first input may include the user's press of a volume-down key. When the electronic device displays the shooting preview interface and the user wants to trigger a shooting task, pressing the volume-down key triggers the electronic device to directly perform the shooting task and capture the corresponding image.
Step 202, the electronic device responds to a first input and shoots to obtain a first image.
In some embodiments of the present application, the first image may include at least one photographic subject.
In some embodiments of the present application, the sharpness value of the at least one photographic subject in the first image may be determined based on the position and the number of the at least one photographic subject in the photographic preview interface.
In some embodiments of the present application, the manner in which the electronic device determines the location and number of the at least one photographic subject in the photographic preview interface may include, but is not limited to:
Automatically extracting key features, such as edges, textures, shapes, and color distribution, from the preview image displayed in the shooting preview interface through a deep learning model such as a convolutional neural network;
locating the target object in the image and marking its location information, such as bounding box coordinates, by a target detection algorithm based on the extracted features, wherein the target detection algorithm may include, but is not limited to, any of YOLO and SSD;
for the detected target area, determining the specific category of the photographic subject, for example "person", "cat", or "car", through a classification algorithm.
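Given detector output of the kind described above, the position and number of the subjects of interest can be derived directly from the bounding boxes. The tuple-based detection format below is an illustrative assumption, not the output format of any particular detector:

```python
def subjects_position_and_count(detections, wanted=("person",)):
    """From detector output [(label, (x1, y1, x2, y2)), ...], collect the
    bounding-box centers and the count of the subjects of interest."""
    centers = [((x1 + x2) / 2, (y1 + y2) / 2)
               for label, (x1, y1, x2, y2) in detections
               if label in wanted]
    return centers, len(centers)
```

These centers and the count are exactly the "position and number of the at least one shooting object" on which the sharpness value of the first image is based.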
In some embodiments of the present application, after the electronic device determines the position of the shooting object through a bounding box, the electronic device may obtain the coordinates of the four vertices of the bounding box and use the vertex coordinate set to represent the position of the shooting object, or the electronic device may obtain the coordinates of the intersection of the diagonals of the bounding box and use those intersection coordinates to represent the position of the shooting object.
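Both representations just described are simple to compute from the bounding-box corner coordinates. A minimal sketch, assuming the top-left-origin pixel coordinate system (x right, y down) described later in this application:

```python
def bbox_vertices(x1, y1, x2, y2):
    """The four vertex coordinates of an axis-aligned bounding box."""
    return [(x1, y1), (x2, y1), (x2, y2), (x1, y2)]

def bbox_center(x1, y1, x2, y2):
    """The intersection of the bounding box diagonals, i.e. its center."""
    return ((x1 + x2) / 2, (y1 + y2) / 2)
```

The diagonal intersection gives a single point per subject, which is convenient for the region-lookup step described below; the vertex set preserves the subject's extent as well.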
In some embodiments of the present application, the electronic device may determine the location of the at least one photographic subject in the photographic preview interface via a face detection algorithm.
It should be noted that when a user photographs a person or an animal, that is, when the shooting object is a person or an animal, the face image of the shooting object usually needs to be sharp. The electronic device may therefore use the position of the face or head of the shooting object as the position of the shooting object, so as to ensure that the face image of the shooting object is sharp in the final captured image.
In some embodiments of the present application, after the electronic device determines the position of the shooting object in the shooting preview interface, the electronic device may further combine that position with the plurality of regions into which the shooting preview interface is divided to determine the region of the shooting object in the shooting preview interface.
In some embodiments of the present application, the electronic device may perform dynamic position detection directly on the face image displayed in real time in the shooting preview interface to obtain the face position of the shooting object, or the electronic device may capture an image corresponding to the shooting preview interface and perform image detection on the captured image to obtain the face position of the shooting object.
In some embodiments of the present application, the electronic device may obtain the coordinates of the four vertices of the rectangular frame corresponding to the face image and use the vertex coordinate set to represent the face position of the shooting object, or may obtain the coordinates of the intersection of the diagonals of the rectangular frame and use those intersection coordinates to represent the face position of the shooting object.
In some embodiments of the present application, the rectangular frame may include, but is not limited to, at least one of: images of facial organs such as the eyes, nose, mouth, and eyebrows, and images of the facial contour or head such as the chin, cheeks, and forehead.
For example, as shown in fig. 2, the rectangular frame may contain images of the eyes, nose, mouth, eyebrows, ears, forehead, and chin of the shooting object. The electronic device may determine the intersection of the diagonals of the rectangular frame and then use the coordinates of that intersection to represent the face position of the shooting object.
In some embodiments of the present application, the electronic device may construct a planar coordinate system with any vertex of the captured preview interface or image as an origin, a horizontal direction as an x-axis, and a vertical direction as a y-axis, so as to obtain the coordinates.
Illustratively, the cell phone may take the top left vertex of the preview interface or image as the origin, with the horizontal right being the positive half-axis of the x-axis, and the vertical down being the positive half-axis of the y-axis.
In some embodiments of the present application, the region of the at least one shooting object in the shooting preview interface may be obtained as follows: the electronic device obtains region segmentation information that divides the shooting preview interface into a plurality of regions, determines the position of the at least one shooting object through a face detection algorithm, and then combines this position with the region segmentation information to determine the region of the at least one shooting object in the shooting preview interface.
In some embodiments of the present application, the above-mentioned division manner corresponding to the region division information may include, but is not limited to, any one of annular region division and m×n grid region division.
Wherein m and n are positive integers, and m and n may be equal or unequal.
For example, when the division manner corresponding to the region segmentation information is annular region division, as shown in fig. 3A, the mobile phone may define the full field of view, i.e. 1.0 Field, according to the diagonal size of the shooting preview interface 20. The phone may then divide the shooting preview interface 20 into annular regions at intervals of 0.2 Field, obtaining five regions, regions A to E. Region A covers the center to 0.2 Field, i.e., (0 Field, 0.2 Field); region B covers 0.2 Field to 0.4 Field, i.e., (0.2 Field, 0.4 Field); region C covers 0.4 Field to 0.6 Field, i.e., (0.4 Field, 0.6 Field); region D covers 0.6 Field to 0.8 Field, i.e., (0.6 Field, 0.8 Field); and region E covers 0.8 Field to the boundary, i.e., (0.8 Field, 1.0 Field).
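The annular division above can be sketched as follows: a point's Field value is its radial distance from the center of the preview interface normalized by the half-diagonal, and 0.2-Field bands map to regions A through E. This is an illustrative reconstruction of the scheme in fig. 3A, not code from the application:

```python
import math

def field_region(x, y, width, height):
    """Classify a point into annular regions A-E by its normalized field:
    radial distance from the image center divided by the half-diagonal,
    so the center is 0.0 Field and the corners are 1.0 Field."""
    cx, cy = width / 2, height / 2
    field = math.hypot(x - cx, y - cy) / math.hypot(cx, cy)
    return "ABCDE"[min(int(field / 0.2), 4)]  # clamp 1.0 Field onto region E
```

With a face position computed as a bounding-box center, this lookup directly yields the region (A to E) in which the shooting object lies.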
For example, in the case where the division manner corresponding to the above-mentioned region division information is m×n grid division, assuming that m=7 and n=5, as shown in fig. 3B, the mobile phone may divide the shooting preview interface 20 into 35 areas, such as an area a0, areas b0 to b8, areas c0 to c15, and areas d0 to d9. The area a0 may be understood as the central area of the shooting preview interface 20, and the areas b0 to b8, c0 to c15, and d0 to d9 are multi-layer areas surrounding the area a0.
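A point can be mapped to its grid cell and ring label as sketched below. This is an illustrative Python sketch, not part of the original disclosure; labeling cells by their Chebyshev ring distance from the central cell is an assumption, but it matches the cell counts in fig. 3B (1 central a cell, 8 b cells, 16 c cells, 10 d cells for a 7×5 grid):

```python
def grid_region(x, y, width, height, m=7, n=5):
    """Map a point to its cell in an m-row by n-column grid.

    Returns (row, col, ring_label), where the ring label is 'a' for the
    central cell, then 'b', 'c', 'd' for successive surrounding rings.
    """
    row = min(int(y * m / height), m - 1)
    col = min(int(x * n / width), n - 1)
    ring = max(abs(row - m // 2), abs(col - n // 2))  # Chebyshev distance
    return row, col, "abcd"[ring]
```

For a 3072×4096 preview this reproduces the later multi-subject example: the face at (864, 1472) lands in a b-ring cell, the faces at (1500, 2208) and (1600, 1888) in the central a0 cell, and the face at (2656, 1568) in a c-ring cell.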
It should be noted that the shooting preview interface 20 shown in fig. 3A or fig. 3B may also include at least one control, where the at least one control includes, in order from top to bottom and from left to right, a flash control, a setting control, a "night scene" shooting mode control, a "beauty" shooting mode control, a "photo" shooting mode control, a "video" shooting mode control, a "professional" shooting mode control, an "album" control, a shooting control, and a lens conversion control.
The flash control can be used to turn the flash on or off. The setting control can trigger the mobile phone to display at least one setting option, such as image aspect ratio or delayed shooting. The "night scene" control can switch the shooting mode to the night-scene shooting mode, the "beauty" control can switch the shooting mode to the portrait-beautification shooting mode, the "photo" control can switch the shooting mode to the ordinary photo mode, the "video" control can switch the shooting mode to the video recording mode, and the "professional" control can switch the shooting mode to the professional shooting mode, in which controls for adjusting shooting parameters may be overlaid on the shooting preview interface. The "album" control can jump to the album application interface so that the user can quickly view a captured image or recorded video, the shooting control can trigger the mobile phone to capture an image, and the lens conversion control can switch the camera used for shooting, such as the front camera or the rear camera.
In some embodiments of the present application, in a case where the electronic device divides the shooting preview interface into a plurality of areas according to the m×n grid division, the sizes of the areas may be the same or different, and each area may be a region divided in units of pixels.
In some embodiments of the present application, in the case of m×n grid division, the electronic device may first perform preliminary cropping on the frame acquired for the shooting preview interface, so that the frame can be divided into a plurality of areas of the same size.
It should be noted that, since the importance of the image content displayed in the edge area is generally low, the above-mentioned preliminary cropping may crop the edge area.
In some embodiments of the present application, after the electronic device obtains the coordinates representing the face position of each face in the at least one photographic subject, the electronic device may determine, according to the coordinate range corresponding to each of the plurality of regions, the region in which each coordinate is located, and thereby determine the region of each face of the at least one photographic subject in the shooting preview interface.
It should be noted that, in the case where the coordinates corresponding to a certain face are located at the boundary between two regions, the method for determining, by the electronic device, the region to which the face belongs may include, but is not limited to, any one of the following:
Determining the area to which the face belongs as two areas, namely that the face is in the range of the two areas at the same time;
Determining a region to which the face belongs as any one of the two regions;
Calculating the proportion of the face's area falling in each of the two areas and determining the area with the larger proportion as the area to which the face belongs; for example, if 60% of the face is in area 1 and 40% is in area 2, the face is determined to belong to area 1;
and determining the area to which the face belongs as an area which is close to the center point of the shooting preview interface in the two areas.
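The area-ratio strategy among the options above can be sketched as follows. This is an illustrative Python sketch, not part of the original disclosure; it assumes areas and face boxes are axis-aligned rectangles given as (x1, y1, x2, y2):

```python
def overlap_area(rect_a, rect_b):
    """Intersection area of two (x1, y1, x2, y2) rectangles."""
    ax1, ay1, ax2, ay2 = rect_a
    bx1, by1, bx2, by2 = rect_b
    w = min(ax2, bx2) - max(ax1, bx1)
    h = min(ay2, by2) - max(ay1, by1)
    return max(w, 0) * max(h, 0)

def assign_by_area(face_rect, regions):
    """Assign a boundary-straddling face to the region holding most of
    its area; `regions` maps region names to rectangles."""
    return max(regions, key=lambda name: overlap_area(face_rect, regions[name]))
```

With a face box split 60%/40% across two regions, as in the example above, the face is assigned to the region holding 60% of its area.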
In some embodiments of the present application, as shown in fig. 4 in conjunction with fig. 1, the above step 202 may be implemented specifically by the following step 202 a.
In step 202a, the electronic device responds to the first input, and controls the camera of the electronic device to move to a first focusing position, so as to obtain a first image through shooting.
In some embodiments of the present application, the electronic device may drive the camera in the camera module to move by controlling the motor in the camera module.
In some embodiments of the present application, the first focusing position may be determined based on a position and a number of at least one photographic subject in the photographic preview interface.
In some embodiments of the present application, in a case where at least two photographic subjects are included in the shooting preview interface, if the camera moves to the first focusing position, the difference in the sharpness values of the at least two photographic subjects included in the shooting preview interface may be reduced, or the sharpness values of the at least two photographic subjects may become the same.
It can be understood that the electronic device may automatically determine the first focusing position according to the position and the number of at least one shooting object in the shooting preview interface, so as to control the camera to move to the first focusing position, and further shoot to obtain the first image.
In some embodiments of the present application, the method for determining the first focusing position may be specifically described in the following embodiments, which are not described herein.
In some embodiments of the present application, the moving direction of the camera may include, but is not limited to, at least one of a horizontal direction and a vertical direction.
Therefore, in the process of shooting the first image, the electronic device can automatically adjust the focusing position based on the positions and the number of the faces included in the image and move the camera accordingly, so that the sharpness of the faces presented in the image is adjusted correspondingly. That is, the electronic device can focus based on a more accurate position, so that the sharpness of the at least one photographic subject presented in the first image better meets the user's requirements, and the display effect of faces in the image is improved.
In the image capturing method provided by the embodiment of the present application, when the user needs to capture an image including at least one photographic subject and the shooting preview interface includes at least one photographic subject that meets the user's needs, the user may trigger the electronic device to capture the image to obtain a first image, where the sharpness value of the at least one photographic subject in the first image may be determined based on the position and the number of the at least one photographic subject in the shooting preview interface. That is, in the process of capturing the first image, the electronic device automatically adjusts the focusing position based on the positions and the number of the photographic subjects included in the image, so that the sharpness of the photographic subjects presented in the image is adjusted correspondingly; in other words, the electronic device can focus based on a more accurate position, so that the sharpness of the at least one photographic subject presented in the first image better meets the user's requirements, and the display effect of the photographic subjects in the image is improved.
In some embodiments of the present application, the shooting preview interface includes a shooting object, and in conjunction with fig. 1, as shown in fig. 5, the step 202 may be specifically implemented by the following steps 301 to 303.
In step 301, the electronic device captures a second image at a second focusing position in response to the first input.
In some embodiments of the present application, the second image may include the one subject.
In some embodiments of the present application, the second focusing position may be a focusing position selected by the electronic device by default or manually by a user.
In some embodiments of the present application, after the electronic device captures the second image, the electronic device may acquire a position of the face of the one capturing object in the second image according to the second image.
Illustratively, in scene 1, in a scenario in which the user captures an image of a single other user, the user may start the camera application in the electronic device and adjust the framing until the image content in the shooting preview interface displayed by the camera application meets the requirements. As shown in fig. 6A, in a case in which the shooting preview interface 20 includes the single user, the user may click the shooting control, that is, the first input described above, so that the mobile phone captures the person preview image 21, determines a rectangular frame around the face of the photographic subject in the person preview image 21, and acquires the coordinates of the diagonal intersection of the rectangular frame. For example, the resolution of the person preview image 21 is 3072×4096, and the coordinates of the face position in the person preview image 21 are (1210, 1340). Then, as shown in fig. 6B, the mobile phone can determine, based on the annular region division information and the face rectangular frame, that the face rectangular frame included in the person preview image 21 is located in region B, that is, the diagonal intersection of the user's face rectangular frame is located in (0.2 Field, 0.4 Field).
Note that, since the face in the person preview image 21 shown in fig. 6A and 6B is located at the edge of region B, that is, at a position of region B where the sharpness is poor, the person preview image 21 may exhibit a certain degree of image blur.
Illustratively, in scene 1, in a scenario in which the user captures an image of a single other user, the user may start the camera application in the electronic device and adjust the framing until the image content in the shooting preview interface displayed by the camera application meets the requirements. As shown in fig. 7A, in a case in which the shooting preview interface 20 includes the single user, the user may click the shooting control, that is, the first input described above, so that the mobile phone captures the person preview image 22, determines a rectangular frame around the face of the photographic subject in the person preview image 22, and acquires the coordinates of the diagonal intersection of the rectangular frame. For example, the resolution of the person preview image 22 is 3072×4096, and the coordinates of the face position in the person preview image 22 are (1400, 2048). Then, as shown in fig. 7B, the mobile phone can determine, based on the annular region division information and the face rectangular frame, that the face rectangular frame included in the person preview image 22 is located in region A, that is, the diagonal intersection of the user's face rectangular frame is located in (0 Field, 0.2 Field).
Note that, since the face in the person preview image 22 shown in fig. 7A and 7B is not at the center position of region A, that is, it is at a position of region A where the sharpness is poor, the person preview image 22 may exhibit a certain degree of image blur.
Step 302, the electronic device controls the camera of the electronic device to move to the first focusing position, and a third image is obtained through shooting.
In some embodiments of the present application, after the electronic device obtains the position of the above-mentioned one photographic subject in the second image, the electronic device may determine, according to the face position of the photographic subject, whether the face position falls within a calibrated worst-sharpness position; if it does, the electronic device may control the camera to move so that the face position avoids the worst-sharpness range as far as possible.
When the electronic device obtains the region division information, the electronic device may perform calibration for the plurality of regions corresponding to the region division information, so as to calibrate the position with the worst resolving power in each region, that is, the worst-sharpness position mentioned above.
In some embodiments of the present application, the influence factors of the resolving power of the different positions in each region include, but are not limited to, at least one of a distance from a center position of the photographed preview screen, a distance from a center position of the region, and a hardware capability of the camera.
In some embodiments of the present application, the method for determining the first focusing position may be specifically described in the following step 302a, and the embodiments of the present application are not described herein.
In some embodiments of the present application, the movement of the camera may be determined by a hardware parameter of a motor to which the camera is connected.
For example, the maximum movement amount of the motor in the horizontal direction and the vertical direction may be 150 pixels, so that the maximum movement amount corresponding to the camera is also 150 pixels.
In some embodiments of the present application, the moving distance of the camera in the horizontal direction and the moving distance of the camera in the vertical direction may be the same or different.
In some embodiments of the present application, the third image may include the one face.
In some embodiments of the application, the sharpness value of the one face in the third image is higher than the sharpness value of the one face in the second image.
In some embodiments of the present application, the above step 302 may be specifically implemented by the following step 302 a.
In step 302a, the electronic device controls the camera of the electronic device to move to the first focusing position based on the region of the face in the shooting preview interface, and a third image is obtained through shooting.
In some embodiments of the present application, after the camera moves to the first focusing position, one face in the shooting preview interface may be in a central focusing area in the shooting preview interface.
In some embodiments of the present application, the above-mentioned central focusing area may be understood as a central area of the photographing preview interface, or may be understood as a central area corresponding to each of a plurality of areas divided by the photographing preview interface.
It can be understood that, after the camera moves to the first focusing position, the face of the photographic subject in the shooting preview interface is in the central focusing area of the shooting preview interface, that is, the face position has moved toward the center, so the sharpness value of the face in the third image may be higher than the sharpness value of the face in the second image.
In some embodiments of the present application, the first focusing position may be determined by a position of the face of the one photographic subject in the photographic preview interface and a movable amount of the camera.
For example, in conjunction with fig. 6A, in the case where the coordinates of the face position in the person preview image 21 are (1210, 1340), if the maximum movement amount of the camera in the horizontal and vertical directions is 150 pixels, the camera can be moved horizontally to the left by 150 pixels and vertically upward by 150 pixels, that is, to the first focusing position, so that the face can be moved toward the center position. As shown in fig. 8A or 8B, the coordinates of the face position in the moved person preview image 31 are (1360, 1490), and the mobile phone can thus capture the third image.
For example, in conjunction with fig. 7A, in a case where the coordinates corresponding to the face position of the photographic subject in the person preview image 22 are (1400, 2048), if the maximum movement amount of the camera in the horizontal and vertical directions is 150 pixels, the camera can be moved horizontally to the left by 136 pixels, that is, to the above first focusing position, so that the face can be moved to the center position (1536, 2048). As shown in fig. 9A or 9B, the coordinates of the face position in the moved person preview image 32 are (1536, 2048), and the mobile phone can thus capture the third image.
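The two examples above can be reproduced with the sketch below. This is an illustrative Python sketch, not part of the original disclosure; it computes the image-space shift needed to bring the face to the frame center, clamped to an assumed 150-pixel maximum motor travel per axis, and leaves the mapping of a positive shift to a physical motor direction as hardware-specific:

```python
def camera_shift(face_x, face_y, width, height, max_shift=150):
    """Per-axis shift (pixels) that moves the face toward the image
    center, clamped to the motor's maximum travel."""
    def clamp(v):
        return max(-max_shift, min(max_shift, v))
    cx, cy = width / 2, height / 2
    return clamp(cx - face_x), clamp(cy - face_y)
```

For a 3072×4096 frame, the face at (1210, 1340) needs (+326, +708) but is clamped to (+150, +150), giving the moved position (1360, 1490); the face at (1400, 2048) needs only (+136, 0) and reaches the center (1536, 2048), matching both examples.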
Therefore, the electronic device can control its camera to move according to the position of the photographic subject, so that the one photographic subject displayed in the shooting preview interface moves toward the central focusing area; that is, the face position of the photographic subject in the third image is closer to the center of the image than its position in the second image, so the sharpness value of the photographic subject in the third image can be higher than that in the second image. As a result, the sharpness value of the photographic subject in the first image, which the electronic device subsequently obtains from the background image of the second image and the photographic subject in the third image, can be significantly improved compared with the sharpness value in the directly captured second image, so that the sharpness of the photographic subject presented in the first image better meets the user's requirements, and the display effect of the photographic subject in the image is improved.
Step 303, the electronic device obtains the first image based on the background image area of the second image and the image area corresponding to the shooting object in the third image.
In some embodiments of the present application, the sharpness of the subject in the second image is smaller than the sharpness in the third image.
It will be appreciated that the electronic device may replace the image area corresponding to the object in the second image with the image area corresponding to the object in the third image, so that the electronic device may obtain a clearer image of the object, i.e. the first image, without changing the composition of the second image.
It should be noted that, for the specific step of obtaining the first image by the electronic device based on the background image area of the second image and the image area corresponding to the shooting object in the third image, refer to the description related to the steps A1 to A3 in the following embodiments, which are not described herein in detail.
Therefore, the definition of the shooting object in the second image is smaller than that in the third image, so that the definition of the shooting object in the first image obtained by the electronic equipment based on the background image area of the second image and the image area corresponding to the shooting object in the third image can be obviously improved compared with the definition of the shooting object in the second image obtained by direct shooting, the definition of the shooting object presented by the first image can meet the requirements of users more, and the display effect of the shooting object in the image is improved.
In some embodiments of the present application, in a case where a plurality of subjects are included in the photographing preview interface, but a distance between positions of each two subjects in the photographing preview interface is smaller than a preset distance threshold, the electronic device may take the plurality of subjects as one subject to perform steps similar to steps 301 to 303 described above.
In some embodiments of the present application, a distance between the positions of two photographic subjects being less than the above-mentioned preset distance threshold may indicate that the two faces are in close contact, for example, the photographic subjects are leaning against or clinging to each other, or their faces are pressed close together.
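Treating nearby subjects as one subject can be sketched as a distance-threshold clustering. This is an illustrative Python sketch, not part of the original disclosure; it assumes face positions are given as (x, y) coordinate pairs and uses a simple union-find to merge any faces closer than the threshold:

```python
import math

def group_nearby(faces, threshold):
    """Merge faces whose pairwise distance is below `threshold` into
    groups; each group can then be treated as one photographic subject."""
    parent = list(range(len(faces)))

    def find(i):
        # Path-halving union-find lookup.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(len(faces)):
        for j in range(i + 1, len(faces)):
            if math.dist(faces[i], faces[j]) < threshold:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(len(faces)):
        groups.setdefault(find(i), []).append(faces[i])
    return list(groups.values())
```

For example, with faces at (0, 0), (10, 0), and (500, 500) and a 50-pixel threshold, the first two faces form one group and the third face stands alone.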
In some embodiments of the present application, before the step 303, the image capturing method provided in the embodiment of the present application may further include the following step A1 and step A2, and the step 303 may be specifically implemented by the following step A3.
And A1, carrying out matting processing on the face in the second image by the electronic equipment to obtain a background image area and a face blank area of the second image.
And A2, the electronic equipment performs matting processing on the face in the third image to obtain the face in the third image.
In some embodiments of the present application, the above-described method of matting processing may include, but is not limited to, any of matting processing by a camera application, matting processing by a third party image processing application, matting processing by an AI function.
In some embodiments of the present application, the specific implementation method of the matting processing may refer to an implementation manner in the related art, and the embodiments of the present application are not described herein.
And A3, the electronic equipment fills the face in the third image obtained by the matting into a face blank area and synthesizes the face blank area with a background image area of the second image to obtain a first image.
For example, in conjunction with fig. 6A, in the case where the mobile phone acquires the image corresponding to the person preview image 21, that is, the above-mentioned second image, the mobile phone may perform a matting process on the image area corresponding to the shooting object in the image corresponding to the person preview image 21, so as to obtain a background image area 211 as shown in fig. 10A and a face image 212 as shown in fig. 10B. Here, the face position in the background image area 211, i.e., the black area in fig. 10A, may not display any image content, i.e., the above-described face blank area. Then, in connection with fig. 8A, in the case where the mobile phone acquires the image corresponding to the person preview image 31, that is, the above-mentioned third image, the mobile phone may perform a matting process on the face in the image corresponding to the person preview image 31 to obtain a background image area 311 shown in fig. 10C and a face image 312 shown in fig. 10D. Here, the face position in the background image area 311, i.e., the black area in fig. 10C, may not display any image content, i.e., the above-described face blank area. Then, as shown in fig. 10E, the electronic device may fill the face image 312 into the face blank area of the background image area 211, and synthesize the face image with the background image area 211 to obtain the face image 41, that is, the first image.
It should be noted that the black areas in fig. 10A and 10C are only for illustration, and the black areas may be displayed in a transparent form during actual use.
For example, in conjunction with fig. 7A, in the case where the mobile phone acquires the image corresponding to the person preview image 22, that is, the above-mentioned second image, the mobile phone may perform a matting process on the face in the image corresponding to the person preview image 22, so as to obtain a background image area 221 shown in fig. 11A and a face image 222 shown in fig. 11B. Here, the face position in the background image area 221, i.e., the black area in fig. 11A, may not display any image content, i.e., the above-described face blank area. Then, in conjunction with fig. 9A, when the mobile phone acquires the image corresponding to the person preview image 32, that is, the third image, the mobile phone may perform a matting process on the face in the image corresponding to the person preview image 32, so as to obtain a background image area 321 shown in fig. 11C and a face image 322 shown in fig. 11D. Here, the face position in the background image area 321, i.e., the black area in fig. 11C, may not display any image content, i.e., the above-described face blank area. Then, as shown in fig. 11E, the electronic device may fill the face image 322 into the face blank area of the background image area 221, and synthesize the face image with the background image area 221 to obtain the face image 42, that is, the first image.
It should be noted that the black areas in fig. 11A and 11C are only used for illustration, and the black areas may be displayed in a transparent form during actual use.
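The fill-and-synthesize operation of steps A1 to A3 can be sketched as a masked pixel merge. This is an illustrative Python sketch, not part of the original disclosure; it represents images as row-major lists of pixel values and elides the matting itself and any re-projection needed to compensate for the camera movement between the second and third images:

```python
def composite(second_image, third_image, face_mask):
    """Fill the face blank area of the second image's background with
    the face pixels cut out of the third image.

    face_mask[r][c] is True where the (re-centred) face from the third
    image should replace the second image's pixel.
    """
    return [
        [third_image[r][c] if face_mask[r][c] else second_image[r][c]
         for c in range(len(second_image[0]))]
        for r in range(len(second_image))
    ]
```

In practice the mask would come from the matting step, and the face region of the third image would first be aligned back to the face blank area of the second image so the composition of the second image is preserved.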
Therefore, the electronic equipment can replace the face with the lower definition value in the second image through the face with the higher definition value, so that the first image is synthesized, namely, the electronic equipment can improve the definition of the face in the image under the condition that the composition of the second image is not changed, so that the definition of the face presented by the first image can meet the requirements of users, and the display effect of the face in the image is improved.
In some embodiments of the present application, the shooting preview interface includes at least two shooting objects, and in conjunction with fig. 1, as shown in fig. 12, the step 202 may be specifically implemented by the following step 401.
In step 401, the electronic device responds to a first input, and controls a camera of the electronic device to move to a first focusing position based on focusing parameters corresponding to an area of each of at least two shooting objects in a shooting preview interface, so as to obtain a first image through shooting.
In some embodiments of the present application, the first focusing position is determined based on focusing parameters corresponding to areas where at least two photographing objects are located.
In some embodiments of the present application, if, before the first input, the electronic device has not received a user input that triggers the electronic device to determine the focusing position, for example, the user touching a certain image area displayed on the shooting preview interface, the electronic device may automatically acquire the focusing parameters after receiving the first input and control the camera to move to the first focusing position to obtain the first image.
In some embodiments of the present application, the first image may include at least two photographed objects.
In some embodiments of the present application, the sharpness values of the at least two photographic subjects included in the first image have a small difference, or are the same.
It should be noted that, the method for the electronic device to obtain the above focusing parameters may be specifically described in the following steps 601 to 603, which is not repeated herein.
It should be noted that, the method for controlling the camera to move to the first focusing position by the electronic device based on the focusing parameter may be specifically referred to the following description of step 501 and step 502, and the embodiments of the present application are not described herein.
Therefore, in a case where the shooting preview interface includes at least two photographic subjects, the electronic device can determine the position to which the camera needs to move based on the focusing parameters corresponding to the areas of the at least two photographic subjects in the shooting preview interface. That is, the electronic device can balance the sharpness of the at least two photographic subjects in the first image by adjusting the focusing position, thereby improving the consistency of the sharpness of the photographic subjects in the image and avoiding obvious sharpness differences caused by the uneven sharpness characteristics of the hardware, so that the sharpness of the at least one photographic subject presented in the first image better meets the user's requirements, and the display effect of the photographic subjects in the image is improved.
In some embodiments of the present application, before "controlling the camera of the electronic device to move to the first focusing position and take the first image based on the focusing parameters corresponding to the region of each of the at least two subjects in the shooting preview interface" in the above step 401, the image taking method provided by the embodiment of the present application may further include the following steps 601 to 603.
In step 601, the electronic device obtains a fourth image by shooting in response to the first input.
In some embodiments of the present application, the fourth image may include at least two photographed objects.
In some embodiments of the present application, the fourth image may be an image captured in the background by the electronic device; that is, the image finally stored in the electronic device and displayed to the user is not the fourth image.
In some embodiments of the present application, after the electronic device acquires the fourth image, the electronic device may determine a position of the at least one photographic subject in the photographic preview interface through a face detection algorithm. Wherein the coordinates of the electronic device's determined location may be expressed in the form of (x, y).
Step 602, the electronic device determines, based on the positions of at least two shooting objects in the fourth image, an area of each of the at least two shooting objects in the shooting preview interface.
In some embodiments of the present application, the position of each of the above-mentioned objects may be represented by the vertex coordinates or the coordinates of the diagonal intersections of the rectangular frame of the image corresponding to each of the objects. For example, the vertex coordinates or diagonal intersection coordinates of a rectangular frame of the face image.
In some embodiments of the present application, the electronic device may determine the region of each photographic subject in the photographic preview interface based on the position of the photographic subject and the plurality of regions corresponding to the region segmentation information. For example, the electronic apparatus may determine an area to which the face of the photographic subject belongs or is adjacent as an area of the photographic subject in the photographic preview interface, based on a numerical value that may represent coordinates of the face.
Illustratively, in scene 2, in a scenario where the user captures images of a plurality of other users, the user may start the camera application in the mobile phone and adjust the framing until the image content in the shooting preview interface displayed by the camera application meets the requirements. As shown in fig. 13A, in a case where the person preview image 23 displayed in the shooting preview interface 20 includes 4 users, namely girl 1, girl 2, boy 1 and boy 2 in order from left to right, the user may trigger the mobile phone to execute the shooting task so as to capture the image corresponding to the person preview image 23, that is, the fourth image. The mobile phone can then determine the positions corresponding to the faces of the 4 users as the positions corresponding to the 4 users respectively, such as the face coordinates of girl 1 being (864, 1472), the face coordinates of girl 2 being (1500, 2208), the face coordinates of boy 1 being (1600, 1888), and the face coordinates of boy 2 being (2656, 1568). Then, as shown in fig. 13B, the mobile phone can determine the region corresponding to each face in combination with the grid division shown in fig. 3B: the region corresponding to the face of girl 1 is region b0, the region corresponding to the face of girl 2 is region a0, the region corresponding to the face of boy 1 is region a0, and the region corresponding to the face of boy 2 is region c5. That is, the faces of the 4 users in the person preview image 23 correspond to 3 regions in total, namely region b0, region a0, and region c5.
In some embodiments of the present application, in a case where a position of a certain shooting object is just at a junction of a plurality of regions, the electronic device may use the plurality of regions as a region corresponding to the shooting object in common, or the electronic device may determine a region including the most shooting object area as a region corresponding to the shooting object based on an area distribution of the shooting object in the plurality of regions.
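As an illustrative sketch of the two strategies just described (reporting every adjoining region, or picking the region holding the largest share of the subject), the following Python fragment maps a face rectangle onto a hypothetical 4×4 grid. The grid granularity, function names, and coordinate convention are assumptions for illustration only, not the actual division of fig. 3B.

```python
FRAME_W, FRAME_H = 3072, 4096   # preview resolution given in the description
COLS, ROWS = 4, 4               # hypothetical grid granularity

def cell_of(x, y):
    """Return the (col, row) grid cell containing point (x, y)."""
    cw, ch = FRAME_W / COLS, FRAME_H / ROWS
    return (min(int(x // cw), COLS - 1), min(int(y // ch), ROWS - 1))

def regions_of_face(box):
    """Return every cell the face rectangle (x0, y0, x1, y1) overlaps; a face
    sitting on a boundary is reported under all adjoining cells, matching the
    first strategy described above."""
    x0, y0, x1, y1 = box
    c0, r0 = cell_of(x0, y0)
    c1, r1 = cell_of(x1, y1)
    return {(c, r) for c in range(c0, c1 + 1) for r in range(r0, r1 + 1)}

def dominant_region(box):
    """Second strategy: pick the single cell holding the largest share of the
    face rectangle's area."""
    cw, ch = FRAME_W / COLS, FRAME_H / ROWS
    best, best_area = None, -1.0
    for (c, r) in regions_of_face(box):
        # overlap of the face box with cell (c, r)
        ox = min(box[2], (c + 1) * cw) - max(box[0], c * cw)
        oy = min(box[3], (r + 1) * ch) - max(box[1], r * ch)
        area = max(ox, 0.0) * max(oy, 0.0)
        if area > best_area:
            best, best_area = (c, r), area
    return best
```

A face rectangle lying entirely inside one cell yields a single region, while one straddling a cell boundary yields several, from which `dominant_region` picks the cell with the largest overlap.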
In the scene 2, for example, in a scenario where the user shoots images for a plurality of other users, the user may start the camera application program in the mobile phone so that the image content in the shooting preview interface displayed by the camera application program meets the requirement. As shown in fig. 14A, in a case where the person preview image 24 displayed by the shooting preview interface 20 includes 3 users, namely a girl, a boy and a female teacher in sequence from left to right, the user may trigger the mobile phone to execute a shooting task so as to shoot and obtain the image corresponding to the person preview image 24, that is, the fourth image. Then, the mobile phone may determine the positions corresponding to the faces of the 3 users as the positions corresponding to the 3 users respectively, such as the face coordinates of the girl being (1000,2700), the face coordinates of the boy being (1500,2550), and the face coordinates of the female teacher being (2456,1345). Then, as shown in fig. 14B, the mobile phone may determine the region corresponding to each face in combination with the grid region division shown in fig. 3B, such as the region corresponding to the face of the girl being the region b6, the region corresponding to the face of the boy being the region b5, and the regions corresponding to the face of the female teacher being the regions b2 and c5. That is, the faces of the 3 users in the person preview image 24 correspond to 4 areas in total, that is, the area b6, the area b5, the area b2, and the area c5.
Since the position of the facial image of the female teacher is just located at the boundary between the region b2 and the region c5, the electronic device may use the region b2 and the region c5 together as the region corresponding to the female teacher's face.
Note that, since the person preview image 23 shown in fig. 13A or 13B and the person preview image 24 shown in fig. 14A or 14B each include a plurality of faces, and the electronic device can normally focus on only one area at a time, the sharpness values of the faces included in the person preview images shown in fig. 13A, 13B, 14A, and 14B may differ from one another.
For example, in the person preview image 23 shown in fig. 13A or 13B, since girl 2 and boy 1 are in the central area while girl 1 and boy 2 are in relatively marginal areas, the sharpness values of the faces of girl 2 and boy 1 may be higher than those of girl 1 and boy 2; and in the person preview image 24 shown in fig. 14A or 14B, since the girl and the boy are located closer to the central area than the female teacher, the sharpness values of the faces of the girl and the boy may be higher than that of the female teacher.
Step 603, the electronic device sequentially focuses on the region of each shooting object in the shooting preview interface, so as to obtain focusing parameters corresponding to the region of each shooting object in the shooting preview interface.
In some embodiments of the application, the electronic device may perform the above-described focusing tasks in the background.
In some embodiments of the present application, the above-described focusing parameter may be a digital code, i.e., code value, of the drive motor position.
It can be understood that, in the focusing process, the electronic device can calculate the required code value through the focusing algorithm according to the image sharpness evaluation result.
It should be noted that, for the specific method by which the electronic device obtains the code value in the focusing process, reference may be made to the method of obtaining the code value in the related art, and details are not repeated here in the embodiments of the present application.
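As a rough, non-authoritative sketch of how a contrast-based focusing routine in the related art might derive such a code value from a sharpness evaluation, the fragment below sweeps candidate motor codes and keeps the one producing the sharpest frame. The sharpness metric, code range, and capture callback are assumptions for illustration only.

```python
def sharpness(gray):
    """Variance-of-Laplacian style metric over a 2D list of gray values
    (a common sharpness proxy; used here only for illustration)."""
    total, count = 0.0, 0
    h, w = len(gray), len(gray[0])
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (gray[y - 1][x] + gray[y + 1][x] + gray[y][x - 1]
                   + gray[y][x + 1] - 4 * gray[y][x])
            total += lap * lap
            count += 1
    return total / count

def best_code(capture_at, codes=range(1000, 2001, 100)):
    """Sweep candidate motor codes, capture a frame at each position via the
    caller-supplied `capture_at` callback, and keep the sharpest one."""
    return max(codes, key=lambda code: sharpness(capture_at(code)))
```

In practice the driver would move the voice-coil motor to each code and grab a real frame; here `capture_at` is left as a hypothetical callback so the search logic stands alone.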
For example, in conjunction with fig. 13B, in a case where the mobile phone determines that the faces of 4 users in the person preview image 23 correspond to 3 regions, such as the region B0, the region a0, and the region c5, in the shooting preview interface, the mobile phone may focus on the 3 regions respectively to obtain focusing parameters corresponding to each region, such as af_b0=1400, af_a0=1500, and af_c5=1300.
For example, in conjunction with fig. 14B, in a case where the mobile phone determines that the faces of 3 users in the person preview image 24 correspond to 4 areas in the shooting preview interface, that is, the area B6, the area B5, the area B2, and the area c5, the mobile phone may focus on the 4 areas respectively to obtain focusing parameters corresponding to each area, such as af_b6=1350, af_b5=1450, af_b2=1400, and af_c5=1300.
Therefore, the electronic device can determine the areas corresponding to the at least two shooting objects in the shooting preview interface based on the fourth image, and focus on these areas in sequence to acquire the focusing parameter corresponding to each area. That is, the focusing parameters acquired by the electronic device are related to the distribution of the shooting objects in the image, so that the subsequent calculation performed through these focusing parameters is more accurate, and the electronic device can acquire the focusing parameter corresponding to the position that actually needs to be focused. In this way, the sharpness values of the plurality of shooting objects can be balanced, and the display effect of the shooting objects in the image is improved.
In some embodiments of the present application, before the "controlling the camera of the electronic device to move to the first focusing position" in the above step 401, the image capturing method provided in the embodiment of the present application may further include the following step 501 or step 502.
In step 501, in a case where the areas of the at least two photographic subjects in the shooting preview interface do not include the central area, the electronic device determines the average value of the focusing parameters corresponding to the area of each photographic subject in the shooting preview interface as the target focusing parameter.
In some embodiments of the present application, the target focus parameter is used to determine the first focus position.
As an example, in conjunction with fig. 14B, since the face images of 3 users in the person preview image 24 correspond to 4 areas in total, such as area B6, area B5, area B2, and area c5, and af_b6=1350, af_b5=1450, af_b2=1400, af_c5=1300, the mobile phone can calculate an average value to obtain the above-mentioned target focus parameter af_end, that is, af_end=average (af_b6+af_b5+af_b2+af_c5) = (1350+1450+1400+1300)/4=1375. Then, the subsequent mobile phone may control the camera to move based on the target focusing parameter, so that the camera focuses to a position right below the central area, that is, the first focusing position, so as to obtain an image corresponding to the person preview interface 43 shown in fig. 15A, that is, the first image. At this time, the difference between the sharpness values of the faces of the 3 users in the first image is reduced.
Step 502, in a case where the areas of the at least two shooting objects in the shooting preview interface include the central area, the electronic device determines a weighted average of the focusing parameters corresponding to the area of each shooting object in the shooting preview interface as the target focusing parameter.
In some embodiments of the present application, the focus parameter corresponding to the central area has a larger weight than the focus parameters corresponding to the other areas.
In some embodiments of the present application, the target focus parameter is used to determine the first focus position.
It can be appreciated that, since the areas of the at least two faces in the shooting preview interface include the central area, and the user's requirement for the sharpness of the center of the image frame is generally high, the electronic device increases the weight of the central area in the calculation of the target focusing parameter, so as to perform a weighted summation.
In some embodiments of the present application, the electronic device may calculate the above target focusing parameter by the formula af_end = (focusing parameter of the central area a0 × s + sum of the focusing parameters of the remaining areas)/(2s-1), where s is the number of the areas of the at least two shooting objects in the shooting preview interface.
As an example, in conjunction with fig. 13B, since the face images of 4 users in the person preview image 23 correspond to 3 areas in total, such as area B0, area a0, and area c5, and such as af_b0=1400, af_a0=1500, af_c5=1300, the mobile phone can calculate the target focusing parameter, that is, af_end= (1500×3+1400+1300)/(2×3-1) =1440 by the formula af_end= (af_a0×3+af_b0+af_c5)/(2×3-1). Then, the subsequent mobile phone may control the camera to move based on the target focusing parameter, so that the camera focuses to the position to the right of the central area, that is, the first focusing position, so as to obtain an image corresponding to the person preview interface 44 shown in fig. 15B, that is, the first image. At this time, the difference between the sharpness values of the faces of the 4 users in the first image is reduced.
Note that, in a case where the areas of the at least two faces in the shooting preview interface include the central area, if the target focusing parameter were calculated directly as an average value, the obtained target focusing parameter would be 1400. That is, the value 1440 obtained by the weighted summation is closer to af_a0=1500 than 1400, i.e., the position corresponding to the target focusing parameter is closer to the central area, i.e., the image of the central area is more in focus.
It will be appreciated that, since the requirement for sharpness of the center of the image frame is generally high, in the process of calculating the target focus parameter by the electronic device, it may be first determined whether the region includes a center region, so as to perform calculation in different manners.
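The two branches of steps 501 and 502 can be sketched in a few lines. The function and label names below are illustrative, and the example values are those of figs. 13B and 14B.

```python
CENTER = "a0"  # label of the central region in the grid of fig. 3B

def target_focus(params):
    """params: dict mapping region label -> focusing parameter (code value).
    Plain average when no region is central (step 501); weighted average
    giving the central region weight s (step 502), i.e.
    af_end = (af_center * s + sum(others)) / (2s - 1)."""
    s = len(params)
    if CENTER not in params:
        return sum(params.values()) / s
    others = sum(v for k, v in params.items() if k != CENTER)
    return (params[CENTER] * s + others) / (2 * s - 1)

# Example of fig. 14B (no central region): plain average gives 1375
assert target_focus({"b6": 1350, "b5": 1450, "b2": 1400, "c5": 1300}) == 1375
# Example of fig. 13B (central region a0 present): weighted average gives 1440
assert target_focus({"b0": 1400, "a0": 1500, "c5": 1300}) == 1440
```

Both assertions reproduce the worked numbers in the description, confirming that the weighted branch pulls the result toward the central region's focusing parameter.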
Therefore, the electronic device can calculate the target focusing parameter from the focusing parameters respectively corresponding to the multiple areas, so that the subsequent shooting is performed based on the position corresponding to the target focusing parameter. In this case, the difference between the sharpness values of the shooting objects in the shot image can be reduced, the consistency of the sharpness of the shooting objects in the image can be improved, and the problem of an obvious sharpness difference introduced by uneven hardware sharpness characteristics is avoided, so that the sharpness of the at least one shooting object presented in the first image can better meet the requirements of the user, and the display effect of the shooting objects in the image is improved.
Specific examples are given below for each scenario to which the embodiment of the present application is applicable, in combination with the implementation schemes of the embodiment of the present application, to describe the implementation procedure in each scenario. The following is illustrated by taking the electronic device as a mobile phone, with the resolution of the shooting preview interface being 3072×4096.
Scene 1: a user takes an image for a single other user to obtain an image comprising the single other user.
In example 1, the user may launch the camera application in the electronic device so that the image content in the shooting preview interface displayed by the camera application meets the requirement. As shown in fig. 6A, in a case where the shooting preview interface 20 includes the single user, the user may click on the shooting control, that is, perform the first input described above, so that the mobile phone shoots the person preview image 21, determines a rectangular frame of the face of the shooting object in the person preview image 21, and obtains the coordinates of the diagonal intersection point of the rectangular frame, such as (1210,1340), to represent the position of the single shooting object. Then, the mobile phone can determine that the face included in the person preview image 21 is located in the region B based on the annular region division information and the face rectangular frame.
Then, the camera may be horizontally moved to the left by 150 pixels and vertically moved up by 150 pixels, i.e., moved to the first focusing position, so that the face image in the person preview image 31 is moved toward the center position. As shown in fig. 8A or 8B, the coordinates of the face position in the moved person preview image 31 may be (1360,1490), so that the third image may be shot by the mobile phone.
Then, the mobile phone may perform a matting process on the face image in the image corresponding to the person preview image 21 in fig. 6A to obtain a background image 211 as shown in fig. 10A and a face image 212 as shown in fig. 10B. And performs matting processing on the face image in the image corresponding to the person preview image 31 in fig. 8A to obtain a background image 311 shown in fig. 10C and a face image 312 shown in fig. 10D. Then, as shown in fig. 10E, the electronic device may fill the face image 312 into the face blank area of the background image 211, and synthesize the face image with the background image 211 to obtain the face image 41, that is, the first image.
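The filling-and-synthesis step described above can be sketched as follows. Plain nested lists stand in for real image buffers, and a rectangular cut region stands in for the matting mask, so this is an assumption-laden illustration rather than the actual compositing pipeline.

```python
def composite(background, sharp, face_box):
    """Paste the pixels of `sharp` inside face_box (x0, y0, x1, y1) over the
    face blank area of `background`, returning the fused first image."""
    out = [row[:] for row in background]   # keep the original untouched
    x0, y0, x1, y1 = face_box
    for y in range(y0, y1):
        for x in range(x0, x1):
            out[y][x] = sharp[y][x]
    return out
```

In a real implementation the face region would come from the matting result (an alpha mask following the face contour) rather than a rectangle, but the fill-then-merge structure is the same.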
In example 2, the user may launch the camera application in the electronic device so that the image content in the shooting preview interface displayed by the camera application meets the requirement. As shown in fig. 7A, in a case where the shooting preview interface 20 includes the single user, the user may click on the shooting control, i.e., perform the first input described above, so that the mobile phone shoots the person preview image 22, determines a rectangular frame of the face of the shooting object in the person preview image 22, and determines the coordinates of the diagonal intersection point of the rectangular frame, such as (1400,2048), to represent the position of the single shooting object. The mobile phone may then determine that the face included in the person preview image 22 is located in the area A based on the annular region division information and the face rectangular frame.
Then, the camera may be moved 136 pixels horizontally to the left, i.e., to the first focus position, so that the face image in the person preview image 22 may be moved to the center position (1536,2048), so that a third image may be photographed by the mobile phone.
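The 136-pixel shift of this example is consistent with simply computing the offset from the face position to the frame centre. A minimal sketch, assuming the 3072×4096 preview resolution stated above (the function name is illustrative):

```python
def centering_shift(face_xy, frame=(3072, 4096)):
    """Per-axis pixel offset that would bring the face position face_xy
    to the centre of the frame."""
    cx, cy = frame[0] / 2, frame[1] / 2
    return (cx - face_xy[0], cy - face_xy[1])
```

For the face at (1400,2048) this yields a shift of (136.0, 0.0): 136 pixels horizontally and none vertically, matching the movement described in this example.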
Then, the mobile phone may perform a matting process on the face image in the image corresponding to the person preview image 22 in fig. 7A to obtain a background image 221 as shown in fig. 11A and a face image 222 as shown in fig. 11B. And performs matting processing on the face image in the image corresponding to the person preview image 32 in fig. 9A to obtain a background image 321 as shown in fig. 11C and a face image 322 as shown in fig. 11D. Then, as shown in fig. 11E, the electronic device may fill the face image 322 into the face blank area of the background image 221, and synthesize the face image with the background image 221 to obtain the face image 42, that is, the first image.
Scene 2. A user takes images for a plurality of other users to obtain images comprising the plurality of other users.
In example 1, the user may start the camera application program in the mobile phone so that the image content in the shooting preview interface displayed by the camera application program meets the requirement. As shown in fig. 13A, in a case where the person preview image 23 displayed by the shooting preview interface 20 includes 4 users, namely girl 1, girl 2, boy 1 and boy 2 in sequence from left to right, the user may trigger the mobile phone to execute the shooting task, so as to obtain the image corresponding to the person preview image 23, that is, the fourth image. Then, the mobile phone may acquire the positions corresponding to the faces of the 4 users to represent the positions of the users, such as the face image coordinates of girl 1 being (864,1472), the face image coordinates of girl 2 being (1500,2208), the face image coordinates of boy 1 being (1600,1888), and the face image coordinates of boy 2 being (2656,1568). Then, as shown in fig. 13B, the mobile phone may determine the region corresponding to each face in combination with the grid division shown in fig. 3B, such as the region corresponding to the face image of girl 1 being the region b0, the region corresponding to the face image of girl 2 being the region a0, the region corresponding to the face image of boy 1 being the region a0, and the region corresponding to the face image of boy 2 being the region c5. That is, the face images of the 4 users in the person preview image 23 correspond to 3 areas in total, namely, the area b0, the area a0, and the area c5. Since girl 2 and boy 1 are in the central area while girl 1 and boy 2 are in relatively marginal areas, the sharpness values of the face images of girl 2 and boy 1 will be higher than those of girl 1 and boy 2.
Then, the mobile phone can focus on the 3 areas respectively to obtain focusing parameters corresponding to the areas, such as af_b0=1400, af_a0=1500, and af_c5=1300. And the target focusing parameter is calculated by the formula af_end= (af_a0×3+af_b0+af_c5)/(2×3-1), that is, af_end= (1500×3+1400+1300)/(2×3-1) =1440. Then, based on the target focusing parameter, the mobile phone may control the camera to move so that the camera focuses to the position of the center area to the right, that is, the first focusing position, so as to obtain an image corresponding to the person preview interface 44 shown in fig. 15B, that is, the first image. At this time, the difference in sharpness values of the face images of the 4 users in the first image is reduced.
In example 2, the user may start the camera application in the mobile phone so that the image content in the shooting preview interface displayed by the camera application meets the requirement. As shown in fig. 14A, in a case where the person preview image 24 displayed by the shooting preview interface 20 includes 3 users, namely a girl, a boy and a female teacher in sequence from left to right, the user may trigger the mobile phone to execute the shooting task, so as to obtain the image corresponding to the person preview image 24, that is, the fourth image. Then, the mobile phone may determine the positions corresponding to the face images of the 3 users to represent the positions of the users, such as the face image coordinates of the girl being (1000,2700), the face image coordinates of the boy being (1500,2550), and the face image coordinates of the female teacher being (2456,1345). Then, as shown in fig. 14B, the mobile phone may determine the region corresponding to each face image in combination with the grid region division shown in fig. 3B, such as the region corresponding to the face image of the girl being the region b6, the region corresponding to the face image of the boy being the region b5, and the regions corresponding to the face image of the female teacher being the regions b2 and c5. That is, the face images of the 3 users in the person preview image 24 correspond to 4 areas in total, that is, the area b6, the area b5, the area b2, and the area c5. Since the girl and the boy are located closer to the central area than the female teacher, the sharpness values of the face images of the girl and the boy may be higher than that of the female teacher.
Then, the mobile phone can focus on the 4 areas respectively to obtain focusing parameters corresponding to the areas, such as af_b6=1350, af_b5=1450, af_b2=1400, af_c5=1300. And calculates an average value to obtain the target focus parameter af_end, that is, af_end=average (af_b6+af_b5+af_b2+af_c5) = (1350+1450+1400+1300)/4=1375. Then, based on the target focusing parameter, the mobile phone may control the camera to move so that the camera focuses to a position of the center area right and lower, that is, the first focusing position, so as to obtain an image corresponding to the person preview interface 43 shown in fig. 15A, that is, the first image. At this time, the difference in sharpness values of the face images of 3 users in the first image is reduced.
It should be noted that, the foregoing method embodiments, or various possible implementation manners in the method embodiments may be executed separately, or may be executed in combination with each other on the premise that no contradiction exists, and may be specifically determined according to actual use requirements, which is not limited by the embodiment of the present application.
It should be noted that, in the image capturing method provided by the embodiment of the present application, the execution subject may be an image capturing device. In the embodiment of the application, an image capturing device is taken as an example to execute an image capturing method by using the image capturing device, and the image capturing device provided by the embodiment of the application is described.
Fig. 16 shows a schematic diagram of one possible configuration of an image capturing apparatus according to an embodiment of the present application. As shown in fig. 16, the image photographing device 70 may include a receiving module 71 and a photographing module 72;
the receiving module 71 is configured to receive a first input to a shooting preview interface, where the shooting preview interface includes at least one shooting object;
And a shooting module 72, configured to obtain a first image by shooting in response to the first input received by the receiving module 71, where the sharpness value of at least one shooting object in the first image is determined based on the position and the number of the at least one shooting object in the shooting preview interface.
In a possible implementation manner, the shooting module 72 is specifically configured to control, in response to the first input received by the receiving module 71, the camera of the electronic device to move to a first focusing position, and shoot to obtain a first image, where the first focusing position is determined based on a position and number of at least one shooting object in a shooting preview interface.
In one possible implementation manner, the shooting preview interface includes a shooting object, the shooting module 72 is specifically configured to respond to a first input and shoot at a second focusing position to obtain a second image, control a camera of the electronic device to move to the first focusing position and shoot to obtain a third image, and obtain a first image based on a background image area of the second image and an image area corresponding to the shooting object in the third image, where the sharpness of the shooting object in the second image is smaller than that in the third image.
In one possible implementation manner, the shooting preview interface includes at least two shooting objects;
The shooting module 72 is specifically configured to, in response to a first input, control a camera of the electronic device to move to a first focusing position to obtain a first image based on a focusing parameter corresponding to an area of each of the at least two shooting objects in the shooting preview interface, where the first focusing position is determined based on the focusing parameters corresponding to the areas of the at least two shooting objects.
In a possible implementation manner, the image capturing apparatus 70 provided by the embodiment of the present application further includes a determining module, configured to determine, before the capturing module 72 controls the camera of the electronic device to move to the first focusing position, an average value of focusing parameters corresponding to an area of each of the at least two capturing objects in the capturing preview interface as a target focusing parameter if the area of the at least two capturing objects in the capturing preview interface does not include a central area, or determine, as a target focusing parameter, a weighted average value of focusing parameters corresponding to an area of each of the at least two capturing objects in the capturing preview interface if the area of the at least two capturing objects in the capturing preview interface includes a central area, where the weight of the focusing parameter corresponding to the central area is greater than the weight of the focusing parameters corresponding to other areas, and the target focusing parameter is used to determine the first focusing position.
In a possible implementation manner, the image capturing apparatus 70 provided by the embodiment of the present application further includes a determining module, where the determining module is further configured to, before capturing a first image by controlling the camera of the electronic device to move to the first focusing position based on the focusing parameter corresponding to the region of each of the at least two capturing objects in the capturing preview interface, respond to the first input, capture a fourth image, determine, based on the positions of the at least two capturing objects in the fourth image, the region of each of the at least two capturing objects in the capturing preview interface, and the capturing module 72 is further configured to focus, in turn, the region of each of the at least two capturing objects in the capturing preview interface, and obtain the focusing parameter corresponding to the region of each of the capturing objects in the capturing preview interface.
In the image capturing device provided by the embodiment of the present application, if the user needs to capture an image including at least one capturing object, if the capturing preview interface includes at least one capturing object that meets the user's needs, the user may trigger the image capturing device to capture the image so as to obtain a first image, where the sharpness value of at least one capturing object in the first image may be determined based on the position and the number of at least one capturing object in the capturing preview interface. That is, in the process of capturing the first image by the image capturing device, the focusing position of the capturing is automatically adjusted based on the position and the number of the capturing objects included in the image, so that the definition of the capturing objects presented by the image can be correspondingly adjusted, that is, the image capturing device can focus based on a more accurate position, so that the definition of at least one capturing object presented by the first image can more meet the requirements of the user, and the display effect of the capturing objects in the image can be improved.
The image capturing device in the embodiment of the application may be an electronic device, or may be a component in the electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. For example, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted electronic device, a mobile internet device (MID), an augmented reality (AR)/virtual reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a personal digital assistant (PDA), etc., and may also be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine, etc., which is not particularly limited in the embodiments of the present application.
The image capturing device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or other possible operating systems, and the embodiment of the present application is not limited specifically.
The image capturing device provided by the embodiment of the present application can implement each process implemented by the above method embodiment, and in order to avoid repetition, details are not repeated here.
Optionally, as shown in fig. 17, the embodiment of the present application further provides an electronic device 90, which includes a processor 91 and a memory 92, where a program or an instruction that can be executed on the processor 91 is stored in the memory 92, and the program or the instruction when executed by the processor 91 implements each step of the embodiment of the image capturing method, and the steps can achieve the same technical effect, so that repetition is avoided, and no further description is given here.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device.
Fig. 18 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 100 includes, but is not limited to, a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, and a processor 110.
Those skilled in the art will appreciate that the electronic device 100 may further include a power source (e.g., a battery) for powering the various components, and that the power source may be logically coupled to the processor 110 via a power management system to perform functions such as managing charging, discharging, and power consumption via the power management system. The electronic device structure shown in fig. 18 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown in the drawings, or may combine some components, or may be arranged in different components, which will not be described in detail herein.
Wherein, the user input unit 107 is configured to receive a first input to a shooting preview interface, where the shooting preview interface includes at least one shooting object;
And a processor 110 for capturing a first image in response to the first input, wherein a sharpness value of at least one subject in the first image is determined based on the location and number of the at least one subject in the capture preview interface.
Optionally, the processor 110 is specifically configured to control the camera of the electronic device to move to a first focusing position in response to the first input, and take a first image, where the first focusing position is determined based on the position and the number of at least one shooting object in the shooting preview interface.
Optionally, the shooting preview interface includes a shooting object, a processor 110 specifically configured to respond to a first input and shoot at a second focusing position to obtain a second image, and control a camera of the electronic device to move to the first focusing position and shoot to obtain a third image, and obtain the first image based on a background image area of the second image and an image area corresponding to the shooting object in the third image, where the sharpness of the shooting object in the second image is smaller than the sharpness in the third image.
Optionally, the shooting preview interface includes at least two shooting objects, and the processor 110 is specifically configured to respond to a first input, and control a camera of the electronic device to move to a first focusing position based on focusing parameters corresponding to an area of each of the at least two shooting objects in the shooting preview interface, so as to obtain a first image, where the first focusing position is determined based on focusing parameters corresponding to the area where the at least two shooting objects are located.
Optionally, the processor 110 is further configured to, before controlling the camera of the electronic device to move to the first focusing position: if the regions of the at least two shooting objects in the shooting preview interface do not include the central area, determine the average value of the focusing parameters corresponding to the region of each shooting object in the shooting preview interface as a target focusing parameter; or, if the regions of the at least two shooting objects in the shooting preview interface include the central area, determine a weighted average of the focusing parameters corresponding to the region of each shooting object in the shooting preview interface as the target focusing parameter, where the weight of the focusing parameter corresponding to the central area is greater than the weights of the focusing parameters corresponding to the other regions. The target focusing parameter is used to determine the first focusing position.
Optionally, the processor 110 is further configured to, before controlling the camera of the electronic device to move to the first focusing position based on the focusing parameters and shooting the first image: shoot a fourth image in response to the first input; determine the region of each of the at least two shooting objects in the shooting preview interface based on the positions of the at least two shooting objects in the fourth image; and focus on the region of each shooting object in the shooting preview interface in turn to obtain the focusing parameter corresponding to each region.
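The sequential step above — deriving one region per object and focusing on each region in turn to collect its focusing parameter — can be sketched with a hypothetical camera callback. The `Region` type, the `focus_on_region` callback, and the stub focus model are all assumptions of this sketch, not interfaces from the application:

```python
from dataclasses import dataclass

@dataclass
class Region:
    """A shooting object's region in the preview interface (pixels)."""
    x: int
    y: int
    w: int
    h: int

def collect_focus_parameters(detected_regions, focus_on_region):
    """Focus on each detected object's region in turn and record the
    focusing parameter the camera reports for it.

    `detected_regions` stands in for the regions derived from the
    object positions in the fourth image; `focus_on_region` is a
    hypothetical camera callback that drives focus onto a region and
    returns the resulting focusing parameter (e.g., a lens motor
    position).
    """
    return {i: focus_on_region(region)  # one focusing pass per object
            for i, region in enumerate(detected_regions)}

# Stub camera: pretend the focusing parameter grows with the region's
# vertical position (objects lower in the frame being closer).
regions = [Region(10, 5, 40, 60), Region(120, 80, 30, 50)]
params = collect_focus_parameters(regions, lambda r: r.y * 1.5)
print(params)  # {0: 7.5, 1: 120.0}
```

The collected per-region focusing parameters are exactly the inputs the averaging rule of the previous paragraph consumes when computing the target focusing parameter.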
In the electronic device provided by the embodiment of the application, when a user wants to shoot an image including at least one shooting object and the shooting preview interface includes at least one shooting object meeting the user's requirement, the user may trigger the electronic device to shoot a first image, where the sharpness value of the at least one shooting object in the first image is determined based on the position and number of the at least one shooting object in the shooting preview interface. That is, in the process of capturing the first image, the electronic device automatically adjusts the focusing position based on the position and number of the shooting objects in the image, so that the sharpness with which the shooting objects are presented is adjusted accordingly. In other words, the electronic device focuses at a more accurate position, so that the sharpness of the at least one shooting object in the first image better meets the user's requirements, improving the display effect of the shooting objects in the image.
The electronic device provided by the embodiment of the application can realize each process realized by the embodiment of the method and can achieve the same technical effect, and in order to avoid repetition, the description is omitted here.
The beneficial effects of the various implementation manners in this embodiment may be specifically referred to the beneficial effects of the corresponding implementation manners in the foregoing method embodiment, and in order to avoid repetition, the description is omitted here.
It should be appreciated that, in embodiments of the present application, the input unit 104 may include a graphics processor (Graphics Processing Unit, GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of still pictures or videos obtained by an image capturing device (e.g., a camera) in a video capture mode or an image capture mode. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display, an organic light-emitting diode display, or the like. The user input unit 107 includes at least one of a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may include a touch detection device and a touch controller. Other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a first memory area storing programs or instructions and a second memory area storing data, where the first memory area may store an operating system, and application programs or instructions (such as a sound playing function, an image playing function, etc.) required for at least one function. Further, the memory 109 may include volatile memory or nonvolatile memory, or the memory 109 may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable ROM (Programmable ROM, PROM), an erasable PROM (Erasable PROM, EPROM), an electrically erasable PROM (Electrically EPROM, EEPROM), or a flash memory. The volatile memory may be a random access memory (Random Access Memory, RAM), a static RAM (Static RAM, SRAM), a dynamic RAM (Dynamic RAM, DRAM), a synchronous DRAM (Synchronous DRAM, SDRAM), a double data rate SDRAM (Double Data Rate SDRAM, DDR SDRAM), an enhanced SDRAM (Enhanced SDRAM, ESDRAM), a synch link DRAM (Synch Link DRAM, SLDRAM), or a direct Rambus RAM (Direct Rambus RAM, DRRAM). Memory 109 in embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
The processor 110 may include one or more processing units. Optionally, the processor 110 integrates an application processor, which primarily handles operations involving the operating system, user interface, application programs, and the like, and a modem processor, such as a baseband processor, which primarily handles wireless communication signals. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The embodiment of the application also provides a readable storage medium, on which a program or an instruction is stored; when the program or instruction is executed by a processor, each process of the above method embodiment is implemented and the same technical effects can be achieved, which will not be repeated here to avoid repetition.
Wherein the processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer-readable storage medium such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the application further provides a chip, which includes a processor and a communication interface, where the communication interface is coupled with the processor, and the processor is configured to run programs or instructions to implement each process of the above method embodiment and achieve the same technical effects, which will not be repeated here to avoid repetition.
It should be understood that the chip referred to in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip, etc.
Embodiments of the present application provide a computer program product stored in a storage medium, where the program product is executed by at least one processor to implement the respective processes of the above method embodiments, and achieve the same technical effects, and for avoiding repetition, a detailed description is omitted herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; depending on the functions involved, the functions may also be performed in a substantially simultaneous manner or in a reverse order. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment methods may be implemented by means of software plus a necessary general hardware platform, or by means of hardware, but in many cases the former is preferred. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a computer software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk), comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the methods according to the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive. Many variations may be made by those of ordinary skill in the art, in light of the present application, without departing from the spirit of the present application and the scope of the claims, and all such variations fall within the protection of the present application.

Claims (10)

1. An image capturing method performed by an electronic device, the method comprising:
Receiving a first input to a shooting preview interface, wherein the shooting preview interface comprises at least one shooting object;
And responding to the first input, shooting to obtain a first image, wherein the definition value of the at least one shooting object in the first image is determined based on the position and the number of the at least one shooting object in the shooting preview interface.
2. The method of claim 1, wherein capturing a first image in response to the first input comprises:
and responding to the first input, controlling a camera of the electronic equipment to move to a first focusing position, and shooting to obtain the first image, wherein the first focusing position is determined based on the position and the number of the at least one shooting object in the shooting preview interface.
3. The method of claim 2, wherein the shooting preview interface includes one shooting object;
and the shooting, in response to the first input, to obtain a first image comprises:
shooting at a second focusing position to obtain a second image in response to the first input;
Controlling a camera of the electronic equipment to move to the first focusing position, and shooting to obtain a third image;
Obtaining the first image based on a background image area of the second image and an image area corresponding to a shooting object in the third image;
wherein the sharpness of the shooting object in the second image is lower than its sharpness in the third image.
4. The method of claim 2, wherein the shooting preview interface includes at least two shooting objects;
and the shooting, in response to the first input, to obtain a first image comprises:
Responding to the first input, and controlling a camera of the electronic equipment to move to the first focusing position based on focusing parameters corresponding to the region of each shooting object in the shooting preview interface, so as to obtain a first image through shooting;
the first focusing position is determined based on the focusing parameters corresponding to the areas where the at least two shooting objects are located.
5. The method of claim 4, wherein prior to the controlling the camera of the electronic device to move to the first focus position, the method further comprises:
Determining an average value of focusing parameters corresponding to the region of each shooting object in the shooting preview interface as a target focusing parameter under the condition that the region of the at least two shooting objects in the shooting preview interface does not comprise a central region;
Under the condition that the at least two shooting objects comprise a central area in the shooting preview interface, determining a weighted average value of focusing parameters corresponding to the area of each shooting object in the shooting preview interface as a target focusing parameter, wherein the weight of the focusing parameter corresponding to the central area is larger than that of the focusing parameters corresponding to other areas;
wherein the target focus parameter is used to determine the first focus position.
6. The method of claim 4, wherein before the controlling the camera of the electronic device to move to the first focusing position based on the focusing parameter corresponding to the region of each of the at least two shooting objects in the shooting preview interface and shooting to obtain the first image, the method further comprises:
Shooting to obtain a fourth image in response to the first input;
Determining the region of each of the at least two shooting objects in the shooting preview interface based on the positions of the at least two shooting objects in the fourth image;
and focusing the region of each shooting object in the shooting preview interface in sequence to obtain focusing parameters corresponding to the region of each shooting object in the shooting preview interface.
7. An image capturing apparatus applied to an electronic device, the image capturing apparatus comprising:
The receiving module is used for receiving a first input to a shooting preview interface, wherein the shooting preview interface comprises at least one shooting object;
And the shooting module is used for responding to the first input received by the receiving module to shoot and obtain a first image, wherein the definition value of the at least one shooting object in the first image is determined based on the position and the number of the at least one shooting object in the shooting preview interface.
8. The apparatus of claim 7, wherein the photographing module is configured to control a camera of the electronic device to move to a first focus position in response to the first input received by the receiving module, and to photograph the first image, wherein the first focus position is determined based on a position and a number of the at least one photographic subject in the photographing preview interface.
9. The apparatus of claim 8, wherein the shooting preview interface includes one shooting object;
the shooting module is specifically configured to:
shooting at a second focusing position to obtain a second image in response to the first input;
Controlling a camera of the electronic equipment to move to the first focusing position, and shooting to obtain a third image;
Obtaining the first image based on a background image area of the second image and an image area corresponding to a shooting object in the third image;
wherein the sharpness of the shooting object in the second image is lower than its sharpness in the third image.
10. The apparatus of claim 8, wherein the shooting preview interface includes at least two shooting objects;
the shooting module is specifically configured to:
Responding to the first input, and controlling a camera of the electronic equipment to move to the first focusing position based on focusing parameters corresponding to the region of each shooting object in the shooting preview interface, so as to obtain a first image through shooting;
the first focusing position is determined based on the focusing parameters corresponding to the areas where the at least two shooting objects are located.
CN202511169843.9A 2025-08-20 2025-08-20 Image shooting method and device Pending CN120812394A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202511169843.9A CN120812394A (en) 2025-08-20 2025-08-20 Image shooting method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202511169843.9A CN120812394A (en) 2025-08-20 2025-08-20 Image shooting method and device

Publications (1)

Publication Number Publication Date
CN120812394A true CN120812394A (en) 2025-10-17

Family

ID=97324917

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202511169843.9A Pending CN120812394A (en) 2025-08-20 2025-08-20 Image shooting method and device

Country Status (1)

Country Link
CN (1) CN120812394A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination