Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the description and in the claims of the present application are used to distinguish between similar elements and not necessarily to describe a particular sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be implemented in orders other than those illustrated or described herein. In addition, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects before and after it.
The shooting method provided by the embodiments of the present application is described in detail below through specific embodiments and their application scenarios, with reference to the accompanying drawings.
As shown in Fig. 1, an embodiment of the present application provides a shooting method, which is performed by an electronic device and includes:
Step 101: acquiring proportion information of a first image in a photographing preview interface, wherein the first image is an image of a target position of a photographed object.
In this step, when the photographed object is a person, the image of the target position may be a face image. When the photographed object is an object, the image of the target position may be an image of the entire object or an image of another position of the object. For example, if the photographed object is a person, this step acquires the proportion information of the face image of the person in the photographing preview interface. The proportion information here may be the ratio of the area of the face image to the area of the photographing preview interface.
Illustratively, when shooting is performed using a rear camera of the electronic device, the proportion information of the face image of the photographed object in the photographing preview interface is calculated.
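As a minimal illustration of step 101, the sketch below computes such a ratio from a face bounding box returned by a face detector; the function name, the (x, y, w, h) box format, and the detector itself are assumptions made for illustration and are not prescribed by the embodiment.

    # Illustrative sketch of step 101; all names and the box format are assumptions.
    def face_area_ratio(face_box, preview_width, preview_height):
        x, y, w, h = face_box  # the top-left corner (x, y) is not needed for the ratio
        face_area = w * h
        preview_area = preview_width * preview_height
        return face_area / preview_area

    # Example: a 300 x 400 pixel face box in a 1080 x 1920 pixel preview
    ratio = face_area_ratio((100, 200, 300, 400), 1080, 1920)  # about 0.058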
Step 102: determining a target composition scene type according to the relationship between the proportion information and at least two preset value ranges, wherein each preset value range corresponds to one composition scene type.
Here, a person is taken as the photographed object for illustration. As shown in Fig. 2, the composition scenes include a close-up, a bust, a half-body, a large half-body, a full-body, and a panorama. In the embodiment of the present application, a preset value range corresponding to each composition scene may be set in advance. The proportion information of the face image in each composition scene is, in descending order: close-up, bust, half-body, large half-body, full-body, panorama.
For example, the preset value range corresponding to the close-up scene is 0.1404-0.6833; the preset value range corresponding to the bust scene is 0.0371-0.1065; the preset value range corresponding to the half-body scene is 0.0231-0.0353; the preset value range corresponding to the large half-body scene is 0.0115-0.0168; the preset value range corresponding to the full-body scene is 0.0024-0.0113; and the preset value range corresponding to the panorama scene is 0.0021-0.0044.
Here, the proportion information of the first image in the photographing preview interface is compared with the preset value ranges to determine an appropriate composition scene.
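Step 102 can then be sketched as a lookup over the preset value ranges. The sketch below reuses the example ranges given above; the table layout and function name are illustrative assumptions, and overlapping ranges are resolved to the first match.

    # Illustrative sketch of step 102, using the example value ranges above.
    SCENE_RANGES = {
        "close-up":        (0.1404, 0.6833),
        "bust":            (0.0371, 0.1065),
        "half-body":       (0.0231, 0.0353),
        "large half-body": (0.0115, 0.0168),
        "full-body":       (0.0024, 0.0113),
        "panorama":        (0.0021, 0.0044),
    }

    def scene_for_ratio(ratio):
        # Return the composition scene whose preset range contains the ratio,
        # or None if the ratio falls outside every range.
        for scene, (low, high) in SCENE_RANGES.items():
            if low <= ratio <= high:
                return scene
        return None

    print(scene_for_ratio(0.058))  # "bust" for the example ratio computed earlier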
Optionally, the posture of the photographed object may also be taken into account; for example, when the photographed object is in a standing posture, the composition scene can be selected more accurately.
Step 103: displaying a target view frame corresponding to the target composition scene in the photographing preview interface, wherein the areas of the target view frames corresponding to different composition scenes are different.
Because the proportion information of the face image differs between composition scenes, the size of the displayed target view frame also differs. The target view frame may be a face frame: the face frame corresponding to the close-up scene is the largest, and the face frame corresponding to the panorama scene is the smallest.
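One possible way to derive the face-frame size in step 103 is sketched below, reusing the SCENE_RANGES table from the previous sketch; sizing the frame from the midpoint of the scene's preset range is an assumption chosen for illustration, not a rule stated by the embodiment.

    # Illustrative sketch of step 103: the target view frame (face frame) area
    # shrinks from close-up to panorama; the midpoint rule is an assumption.
    def face_frame_area(scene, preview_width, preview_height):
        low, high = SCENE_RANGES[scene]
        target_ratio = (low + high) / 2.0
        return target_ratio * preview_width * preview_height

    # face_frame_area("close-up", 1080, 1920) is the largest,
    # face_frame_area("panorama", 1080, 1920) the smallest.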
Step 104: shooting the photographed object based on the target view frame to obtain a target image.
According to the shooting method, an appropriate composition scene is determined according to the proportion information of the image of the target position of the photographed object in the photographing preview interface; a corresponding target view frame is then displayed according to the determined composition scene, and shooting is performed based on the target view frame, thereby automatically selecting an appropriate composition scene according to the photographed object.
Optionally, the determining a target composition scene according to the relationship between the proportion information and at least two preset value ranges includes:
in a case where the target value corresponding to the proportion information is within a target preset value range, taking the composition scene corresponding to the target preset value range as the target composition scene;
or, in a case where the target value corresponding to the proportion information is outside all of the preset value ranges, selecting, as the target composition scene, the composition scene corresponding to the preset value range having the minimum difference from the target value.
For example, if the proportion information of the first image in the photographing preview interface is 0.2, that is, the proportion information falls within the preset value range corresponding to the close-up scene, the target composition scene is determined to be the close-up composition scene, and the target view frame corresponding to the close-up composition scene is displayed.
For another example, if the proportion information of the first image in the photographing preview interface is 0.005 and does not fall within any of the preset value ranges, the preset value range having the minimum difference from the proportion information is determined, for example 0.006-0.0121; the target composition scene is then determined to be the large half-body composition scene, and the corresponding target view frame is displayed.
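The fallback branch can be sketched as follows, again reusing the helpers above. Measuring the distance to a range as the gap to its nearest endpoint is an assumption made for illustration.

    # Illustrative sketch of the fallback: pick the scene whose preset range
    # is closest to the ratio when no range contains it.
    def nearest_scene(ratio):
        def gap(bounds):
            low, high = bounds
            if ratio < low:
                return low - ratio
            if ratio > high:
                return ratio - high
            return 0.0
        return min(SCENE_RANGES, key=lambda scene: gap(SCENE_RANGES[scene]))

    def target_scene(ratio):
        # In-range lookup first; nearest range otherwise.
        return scene_for_ratio(ratio) or nearest_scene(ratio)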
According to the embodiment of the application, the composition scene type can be determined simply and accurately through the preset value range corresponding to each composition scene type, which makes it convenient for a user to shoot high-quality images according to the composition scene.
Optionally, the shooting of the photographed object based on the target view frame includes:
shooting the photographed object in a case where the overlapping area of the target position image of the photographed object and the target view frame is greater than or equal to a first preset threshold.
For example, the target view frame is an elliptical face frame. When the area of the overlapping region between the face of the photographed object and the face frame is the same as the area of the face frame, that is, when the face and the face frame completely overlap, the photographed object is shot. The specific value of the first preset threshold may be set according to user requirements.
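The shutter-gating check can be sketched as below. For simplicity the face region and the face frame are approximated by axis-aligned rectangles; the elliptical frame described above would need a finer overlap computation, and expressing the first preset threshold as a fraction of the frame area is an assumption.

    # Illustrative sketch of the overlap check before shooting.
    def overlap_area(a, b):
        # a and b are (x, y, w, h) rectangles in preview-pixel coordinates.
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        w = max(0, min(ax + aw, bx + bw) - max(ax, bx))
        h = max(0, min(ay + ah, by + bh) - max(ay, by))
        return w * h

    def ready_to_shoot(face_box, frame_box, first_threshold=0.99):
        # first_threshold is taken here as a fraction of the frame area
        # (e.g. ninety-nine percent, as in the example given later).
        frame_area = frame_box[2] * frame_box[3]
        return overlap_area(face_box, frame_box) >= first_threshold * frame_area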
Optionally, the shooting method in the embodiment of the present application further includes:
receiving a first input to the target view frame under the condition that a partial image of the target position image is located in the target view frame and an overlapping area of the target position image and the target view frame is greater than or equal to a second preset threshold and smaller than a first preset threshold;
responding to the first input, and performing automatic zoom processing until the overlapping area of the target position image of the photographed object and the target view frame is greater than or equal to the first preset threshold.
Optionally, when a partial image of the target position image is located in the target view frame and the overlapping area between the target position image and the target view frame is greater than or equal to the second preset threshold and smaller than the first preset threshold, prompt information may first be displayed, where the prompt information is used to prompt the user to start the automatic zoom function, and the user performs the first input according to the prompt information.
In the embodiment of the present application, as shown in Fig. 3, when a face appears in the photographing preview interface, the face and the face frame are not necessarily completely overlapped; the face may be larger or smaller than the face frame. For example, if the area of the overlapping region between the face and the face frame is larger than the second preset threshold but smaller than the first preset threshold, prompt information such as "double-tap the face to zoom automatically" is displayed. After the user double-taps the face, the mobile phone starts the automatic zoom function and adjusts the focal length so that the overlapping area between the actual face and the face frame in the photographing preview interface reaches the first preset threshold. The second preset threshold may be fifty percent of the area of the face frame, and the first preset threshold may be ninety-nine percent of the area of the face frame.
Here, when the overlapping area between the target position image and the target view frame is smaller than the second preset threshold, the user needs to move the mobile phone left and right so that the overlapping area between the target position image and the target view frame reaches the second preset threshold; otherwise, even if the user double-taps the face frame, the automatic zoom function is not started.
According to the shooting method, in a case where the overlapping area between the target position image of the photographed object and the target view frame has not yet reached the first preset threshold, the user can double-tap the face to trigger automatic zooming, so that the target position image and the target view frame coincide without the user having to move back and forth. This reduces the shooting time and avoids missing the best shooting opportunity because the user is moving back and forth.
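A sketch of the double-tap auto-zoom flow is given below, reusing overlap_area from the previous sketch. The camera-control calls (get_face_box, get_zoom, set_zoom) are hypothetical placeholders standing in for a platform camera API, and the fixed-step zoom loop is a simplification; the gate on the second preset threshold applies to the first-input case described here, while the second-input case described below (subject fully inside the frame) would skip that gate.

    # Illustrative sketch of the auto-zoom triggered by a double-tap input.
    def auto_zoom(camera, frame_box, first_threshold=0.99, second_threshold=0.5,
                  step=0.05, max_iters=40):
        frame_area = frame_box[2] * frame_box[3]
        face_box = camera.get_face_box()
        if overlap_area(face_box, frame_box) < second_threshold * frame_area:
            return False  # the user must re-aim first; the double-tap does not start zooming
        for _ in range(max_iters):
            face_box = camera.get_face_box()
            if overlap_area(face_box, frame_box) >= first_threshold * frame_area:
                return True
            face_area = face_box[2] * face_box[3]
            # zoom in when the face is smaller than the frame, out when it is larger
            direction = 1 if face_area < frame_area else -1
            camera.set_zoom(camera.get_zoom() + direction * step)
        return False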
Optionally, the shooting method in the embodiment of the present application further includes:
receiving a second input to the target view frame under the condition that the target position image is completely positioned in the target view frame and the overlapping area of the target position image and the target view frame is smaller than a first preset threshold value;
responding to the second input, and performing automatic zoom processing until the overlapping area of the target position image of the photographed object and the target view frame is greater than or equal to the first preset threshold.
As shown in Fig. 4, when a distant animal is photographed, the animal may be completely located within the target view frame while the area of the overlapping region is smaller than the first preset threshold, for example when the animal occupies only a small part of the middle of the target view frame. In this case, a second input to the target view frame from the user, such as a double-tap input, can be received, and automatic zooming is performed so that the animal fills the view frame, thereby quickly achieving a well-composed picture.
Optionally, the shooting of the photographed object includes:
receiving a third input to a shooting control in the photographing preview interface;
responding to the third input, and shooting the photographed object.
Here, when the overlapping area between the target position image of the photographed object and the target view frame reaches the first preset threshold, the user taps the shooting control, so that the photographed object is shot based on the target composition scene.
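Putting the pieces together, the flow of Fig. 1 can be sketched end to end using the hypothetical helpers defined above; the camera and UI calls (preview_size, get_face_box, show_face_frame, capture) are placeholders, not an actual API.

    # Illustrative end-to-end sketch of steps 101-104.
    def assisted_capture(camera, ui):
        width, height = camera.preview_size()
        ratio = face_area_ratio(camera.get_face_box(), width, height)   # step 101
        scene = target_scene(ratio)                                     # step 102
        frame_box = ui.show_face_frame(scene)                           # step 103: returns the frame rectangle
        if ready_to_shoot(camera.get_face_box(), frame_box):            # step 104 gate
            return camera.capture()                                     # shoot (in the embodiment, on the user's third input)
        return None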
In conventional composition schemes, composition recommendation is performed only according to a single shooting composition logic, which easily leads to situations such as a half-body composition being recommended during panoramic shooting.
In the shooting method provided by the embodiment of the present application, the executing subject may be a shooting device, or a control module in the shooting device for executing the shooting method. In the embodiment of the present application, a shooting device executing the shooting method is taken as an example to describe the shooting device provided in the embodiment of the present application.
As shown in Fig. 5, an embodiment of the present application provides a shooting device 500, including:
a first obtaining module 501, configured to obtain proportion information of a first image in a photographing preview interface, where the first image is an image of a target position of a photographed object;
a first determining module 502, configured to determine a target composition scene type according to a relationship between the proportion information and at least two preset value ranges, where each preset value range corresponds to one composition scene type;
a first display module 503, configured to display, in the photographing preview interface, a target view frame corresponding to the target composition scene, where the areas of the target view frames corresponding to different composition scenes are different;
a shooting module 504, configured to shoot the photographed object based on the target view frame to obtain a target image.
Optionally, in the shooting device of the embodiment of the present application, the first determining module is configured to, in a case where a target value corresponding to the proportion information is within a target preset value range, take the composition scene corresponding to the target preset value range as the target composition scene;
or, in a case where the target value corresponding to the proportion information is outside all of the preset value ranges, select, as the target composition scene, the composition scene corresponding to the preset value range having the minimum difference from the target value.
Optionally, the shooting module is configured to shoot the object when an overlapping area of the target position image of the object and the target view frame is greater than or equal to a first preset threshold.
Optionally, the shooting device of the embodiment of the present application further includes:
a first receiving module, configured to receive a first input to the target view frame in a case where a partial image of the target position image is located in the target view frame and the overlapping area of the target position image and the target view frame is greater than or equal to a second preset threshold and smaller than a first preset threshold;
a first response module, configured to respond to the first input and perform automatic zoom processing until the overlapping area of the target position image of the photographed object and the target view frame is greater than or equal to the first preset threshold.
Optionally, the shooting device of the embodiment of the present application further includes:
a second receiving module, configured to receive a second input to the target view frame in a case where the target position image is completely located in the target view frame and the overlapping area of the target position image and the target view frame is smaller than the first preset threshold;
a second response module, configured to respond to the second input and perform automatic zoom processing until the overlapping area of the target position image of the photographed object and the target view frame is greater than or equal to the first preset threshold.
The shooting device of the embodiment of the application determines an appropriate composition scene according to the proportion information of the image of the target position of the photographed object in the photographing preview interface, displays a corresponding target view frame according to the determined composition scene, and shoots based on the target view frame, thereby automatically selecting an appropriate composition scene according to the photographed object.
The shooting device provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 to 4, and is not described here again to avoid repetition.
The shooting device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a Personal Digital Assistant (PDA), and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine, or a self-service machine, which is not specifically limited in the embodiments of the present application.
The shooting device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
Optionally, as shown in Fig. 6, an embodiment of the present application further provides an electronic device 600, including a processor 601, a memory 602, and a program or an instruction stored in the memory 602 and executable on the processor 601, where the program or the instruction, when executed by the processor 601, implements each process of the foregoing shooting method embodiment and can achieve the same technical effect; details are not repeated here to avoid repetition.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 7 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 700 includes, but is not limited to: a radio frequency unit 701, a network module 702, an audio output unit 703, an input unit 704, a sensor 705, a display unit 706, a user input unit 707, an interface unit 708, a memory 709, and a processor 710.
Those skilled in the art will appreciate that the electronic device 700 may also include a power supply (e.g., a battery) for powering the various components, and the power supply may be logically coupled to the processor 710 via a power management system, so that functions such as managing charging, discharging, and power consumption are performed via the power management system. The electronic device structure shown in Fig. 7 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than those shown, combine some components, or use a different arrangement of components, and details are not repeated here.
The processor 710 is configured to obtain proportion information of a first image in a photographing preview interface, where the first image is an image of a target position of a photographed object; determining a target composition scene type according to the relation between the proportion information and at least two preset value ranges, wherein each preset value range corresponds to one composition scene type;
a display unit 706, configured to display, in the photographing preview interface, a target view frame corresponding to the target composition scene, where the areas of the target view frames corresponding to different composition scenes are different; and the processor 710 is configured to shoot the photographed object based on the target view frame to obtain a target image.
Optionally, the processor 710 is configured to, when a target value corresponding to the proportion information is within a target preset value range, take a composition scene corresponding to the target preset value range as the target composition scene;
or, in a case where the target value corresponding to the proportion information is outside all of the preset value ranges, select, as the target composition scene, the composition scene corresponding to the preset value range having the minimum difference from the target value.
Optionally, the processor 710 is configured to capture the subject when an overlapping area of the target position image of the subject and the target view frame is greater than or equal to a first preset threshold.
Optionally, the user input unit 707 is configured to receive a first input to the target view frame when a partial image of the target position image is located in the target view frame and the overlapping area of the target position image and the target view frame is greater than or equal to a second preset threshold and smaller than the first preset threshold;
the processor 710 is configured to perform, in response to the first input, an automatic zoom process until an overlapping area of the target position image of the subject and the target view frame is greater than or equal to the first preset threshold.
Optionally, the user input unit 707 is configured to receive a second input to the target view frame if the target position image is completely located in the target view frame and the overlapping area of the target position image and the target view frame is smaller than the first preset threshold;
the processor 710 is configured to perform, in response to the second input, an automatic zoom process until an overlapping area of the target position image of the subject and the target view frame is greater than or equal to the first preset threshold.
The electronic device of the embodiment of the application determines an appropriate composition scene according to the proportion information of the image of the target position of the photographed object in the photographing preview interface, displays a corresponding target view frame according to the determined composition scene, and shoots based on the target view frame, thereby automatically selecting an appropriate composition scene according to the photographed object.
It should be understood that in the embodiment of the present application, the input Unit 704 may include a Graphics Processing Unit (GPU) 7041 and a microphone 7042, and the Graphics Processing Unit 7041 processes image data of still pictures or videos obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 706 may include a display panel 7061, and the display panel 7061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 707 includes a touch panel 7071 and other input devices 7072. The touch panel 7071 is also referred to as a touch screen. The touch panel 7071 may include two parts of a touch detection device and a touch controller. Other input devices 7072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. Memory 709 may be used to store software programs as well as various data, including but not limited to applications and operating systems. Processor 710 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 710.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the above shooting method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above shooting method embodiment, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved; e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.