The terms "first," "second," "third," and "fourth," etc. in the description and in the claims of the present invention are used for distinguishing between different objects and not for describing a particular order of the objects. For example, the first input, the second input, the third input, the fourth input, etc. are used to distinguish between different inputs, rather than to describe a particular order of inputs.
In the embodiments of the present invention, words such as "exemplary" or "for example" are used to mean serving as an example, illustration, or description. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present invention is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the words "exemplary" or "for example" is intended to present related concepts in a concrete fashion.
In the description of the embodiments of the present invention, unless otherwise specified, "a plurality" means two or more, for example, a plurality of processing units means two or more processing units; plural elements means two or more elements, and the like.
The embodiment of the present invention provides an image display method in which an electronic device can receive a first input of a user, the first input being used for triggering the electronic device to shoot a first image; in response to the first input, the electronic device displays the first image in a first area of the electronic device and displays a second image in a second area of the electronic device, where the second image is an image obtained after the first image is processed in a target image processing mode. With this scheme, in response to the first input for shooting the first image, the electronic device can display the first image and also display the second image obtained after the first image is processed in the target image processing mode. Therefore, the operation process by which the user processes the shot image can be simplified, time can be saved, and human-machine interaction performance can be improved. Meanwhile, since the electronic device displays the first image and the second image simultaneously, the user can conveniently compare the display effects of the first image and the second image.
The following describes a software environment to which the image display method provided by the embodiment of the present invention is applied, taking the Android operating system as an example.
Fig. 1 is a schematic diagram of an architecture of a possible Android operating system according to an embodiment of the present invention. In Fig. 1, the architecture of the Android operating system includes four layers, which are respectively: an application layer, an application framework layer, a system runtime layer, and a kernel layer (specifically, a Linux kernel layer).
The application layer includes the various applications (including system applications and third-party applications) in the Android operating system.
The application framework layer is the framework of these applications, and a developer can develop applications based on the application framework layer, provided that the development principles of the framework are complied with.
The system runtime layer includes libraries (also called system libraries) and the Android operating system runtime environment. The libraries mainly provide various resources required by the Android operating system. The Android operating system runtime environment is used for providing a software environment for the Android operating system.
The kernel layer is the operating system layer of the Android operating system and is the bottommost layer of the Android operating system software layers. Based on the Linux kernel, the kernel layer provides core system services and hardware-related drivers for the Android operating system.
Taking the Android operating system as an example, in the embodiment of the present invention, a developer may develop, based on the system architecture of the Android operating system shown in Fig. 1, a software program implementing the image display method provided in the embodiment of the present invention, so that the image display method may run on the Android operating system shown in Fig. 1. That is, the processor or the electronic device may implement the image display method provided by the embodiment of the present invention by running the software program in the Android operating system.
The electronic device in the embodiment of the present invention may be a mobile electronic device or a non-mobile electronic device. The mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted terminal, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), or the like; the non-mobile electronic device may be a personal computer (PC), a television (TV), an automated teller machine, a self-service machine, or the like; the embodiment of the present invention is not particularly limited.
The execution subject of the image display method provided in the embodiment of the present invention may be the electronic device (including a mobile electronic device or a non-mobile electronic device), or may be a functional module and/or a functional entity in the electronic device capable of implementing the method, which may be determined according to actual use requirements; the embodiment of the present invention is not limited. The following takes an electronic device as an example to exemplarily describe the image display method provided by the embodiment of the present invention.
Referring to fig. 2, an embodiment of the present invention provides an image display method applied to an electronic device, which may include steps 201 to 203 described below.
Step 201, the electronic device receives a first input of a user.
The first input is used for triggering the electronic device to shoot a first image.
In a case that the electronic device displays a shooting preview interface, the first input may be a touch input of the user on a shooting control, a specific gesture input of the user, a combined key input of the user (e.g., a combined input of a power key and a volume key), or another feasible input, which is not limited in the embodiment of the present invention.
Step 202, the electronic device responds to the first input and displays a first image in a first area of the electronic device.
Step 203, the electronic device displays a second image in a second area of the electronic device.
And the second image is an image obtained after the first image is processed in a target image processing mode.
In the embodiment of the present invention, the electronic device, in response to the first input, shoots to obtain the first image, performs image processing on the first image according to the target image processing mode, displays the first image in the first area, and displays the second image in the second area.
In the embodiment of the present invention, the image processing mode may be any processing mode for beautifying the display effect of an image, for example, a filter processing mode, a beautification processing mode, or the like; the embodiment of the present invention is not limited.
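As a concrete illustration of such a processing mode, the following is a minimal Python sketch of a toy brightness-lift filter over a grayscale pixel grid. The pixel values, gain, and function name are illustrative assumptions and do not appear in the disclosure:

```python
# A toy "target image processing mode": a brightness-lift filter over a
# grayscale pixel grid (values 0-255). Real filter or beautification modes
# would operate on actual bitmaps; this only illustrates the idea.

def apply_brightness_filter(pixels, gain=1.2):
    """Return a new pixel grid with each value scaled and clamped to 0-255."""
    return [[min(255, round(p * gain)) for p in row] for row in pixels]

first_image = [[100, 200], [50, 250]]
second_image = apply_brightness_filter(first_image)
print(second_image)  # [[120, 240], [60, 255]]
```

The first image is left untouched and a new second image is produced, matching the scheme in which both images are displayed side by side.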
In this embodiment of the present invention, the target image processing manner may be an image processing manner pre-stored in the electronic device, or an image processing manner acquired by the electronic device in real time according to an actual use requirement, which is not limited in this embodiment of the present invention.
The specific process of the electronic device performing image processing on the first image according to the target image processing mode may refer to any related technology, which is not described herein again.
Optionally, the first area and the second area may be different areas on one display screen of the electronic device, may also be different areas on different display screens of the electronic device, and may also be other feasible situations, which is not limited in the embodiment of the present invention.
For example, if the first area and the second area are different areas on one display screen, the first image and the second image may be displayed on one display screen of the electronic device in a split-screen manner.
For example, if the first area and the second area are different areas on different display screens, the first image may be displayed on a first display screen of the electronic device, and the second image may be displayed on a second display screen of the electronic device.
It should be noted that, in the embodiment of the present invention, the electronic device may be a single-screen electronic device (for example, a flat-screen electronic device, a curved-screen electronic device, a flexible-screen electronic device, or the like), a multi-screen electronic device (for example, a double-screen electronic device, a triple-screen electronic device, or the like), a foldable-screen electronic device, or another feasible electronic device, which is not limited in the embodiment of the present invention.
The embodiment of the present invention provides an image display method in which an electronic device can receive a first input of a user, the first input being used for triggering the electronic device to shoot a first image; in response to the first input, the electronic device displays the first image in a first area of the electronic device and displays a second image in a second area of the electronic device, where the second image is an image obtained after the first image is processed in a target image processing mode. With this scheme, in response to the first input for shooting the first image, the electronic device can display the first image and also display the second image obtained after the first image is processed in the target image processing mode. Therefore, the operation process by which the user processes the shot image can be simplified, time can be saved, and human-machine interaction performance can be improved. Meanwhile, since the electronic device displays the first image and the second image simultaneously, the user can conveniently compare the display effects of the first image and the second image. Furthermore, by comparing the display effects of the first image and the second image, the user can learn the advantages and disadvantages of each, and may further perform image processing on the second image to obtain an image with a better display effect.
Optionally, before the electronic device displays the first image and the second image, the electronic device may perform image processing on the first image according to the image quality of the first image.
Illustratively, before the step 202, the image display method provided by the embodiment of the present invention may further include the following steps 204 to 206.
And step 204, the electronic equipment responds to the first input and obtains a first image through shooting.
And step 205, the electronic device performs image quality evaluation on the first image to obtain an evaluation result.
And step 206, the electronic device performs image processing on the first image according to the target image processing mode corresponding to the evaluation result to obtain a second image.
In the embodiment of the present invention, for the specific method by which the electronic device performs image quality evaluation on the first image, reference may be made to any related technology; the embodiment of the present invention is not limited.
Optionally, the evaluation result may be an image quality score, an image quality percentage, or another feasible evaluation result; the embodiment of the present invention is not limited.
In the embodiment of the invention, different evaluation result ranges can correspond to different image processing modes. It can be understood that if the evaluation result is in different evaluation result ranges, the evaluation result corresponds to different image processing modes.
For example, the electronic device may calculate an image quality score for the first image via an image quality evaluation neural network. Specifically, the image quality evaluation neural network may: 1. judge the reasonableness of the light and shadow of the first image through a light-and-shadow evaluation neural network; 2. judge the sharpness of the first image through a sharpness neural network; 3. integrate the light-and-shadow reasonableness of the first image and the sharpness of the first image to obtain the image quality score of the first image. Reference may be made to related technologies, which are not described herein in detail.
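The "integration" in step 3 above could take many forms; one minimal Python sketch is a weighted fusion of the two per-aspect scores. The 0-1 score scale and the weight values are assumptions for illustration only; the disclosure does not specify how the two judgments are combined:

```python
# Hedged sketch: fusing a light-and-shadow (lighting) reasonableness score
# and a sharpness score into one image quality score. Weights are assumed.

def image_quality_score(lighting_score, sharpness_score,
                        lighting_weight=0.4, sharpness_weight=0.6):
    """Weighted fusion of two per-aspect scores, each assumed in [0, 1]."""
    return lighting_weight * lighting_score + sharpness_weight * sharpness_score

print(round(image_quality_score(0.5, 0.9), 2))  # 0.74
```

In practice the two sub-scores would come from the respective neural networks; here they are plain numbers.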
Following the above example, it is assumed that a first score range corresponds to a first image processing mode and a second score range corresponds to a second image processing mode, where any value in the first score range is smaller than any value in the second score range. The electronic device judges which range the image quality score of the first image falls in: if the image quality score is within the first score range, the electronic device performs image processing on the first image according to the first image processing mode; if the image quality score is within the second score range, the electronic device performs image processing on the first image according to the second image processing mode. That is, in a case that the image quality score of the first image is within the first score range or the second score range (indicating poor image quality, possibly with serious distortion, tilt, or exposure non-uniformity problems), the electronic device may perform step 206; in a case that the image quality score of the first image is larger than any value in the second score range (indicating good image quality), the electronic device may directly display the first image as the second image in the second area without performing image processing on the first image, or may perform image processing on the first image according to another image processing mode; the embodiment of the present invention is not limited.
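The score-range dispatch described above can be sketched as follows in Python. The concrete range boundaries (0.3 and 0.7 on a 0-1 scale) and the mode names are hypothetical; the text only requires that the first range lie entirely below the second:

```python
# Hedged sketch of mapping an image quality score to a processing mode.
# Boundaries 0.3 / 0.7 are illustrative assumptions.

def select_processing_mode(score, first_upper=0.3, second_upper=0.7):
    """Map an image quality score to a target image processing mode."""
    if score < first_upper:       # first score range: heavier correction
        return "first_processing_mode"
    if score < second_upper:      # second score range: lighter correction
        return "second_processing_mode"
    return None                   # good quality: display the image as-is

print(select_processing_mode(0.2))  # first_processing_mode
print(select_processing_mode(0.9))  # None
```

Returning `None` models the case where the first image is displayed as the second image without further processing.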
In the embodiment of the invention, the electronic equipment performs image processing on the first image according to the image quality of the first image, so that the display effect of the first image and the display effect of the second image can be better shown for a user.
It should be noted that, in the embodiment of the present invention, the execution sequence of steps 205 to 206 and step 202 is not limited. For example, the electronic device may perform steps 205 to 206 first and then perform step 202; may perform step 202 first and then perform steps 205 to 206; or may perform steps 205 to 206 and step 202 simultaneously.
Optionally, in the embodiment of the present invention, in response to the first input, the electronic device may further display a first control; thus, in a case that the user is still not satisfied with the display effect of the second image, the user may further perform image processing on the second image by operating the first control.
Illustratively, in conjunction with fig. 2, as shown in fig. 3, after step 203, the image display method provided by the embodiment of the present invention may further include step 207 as described below.
Step 207, the electronic device responds to the first input, and displays a first control on a preset area of the electronic device in a floating mode.
The first control includes a first type of option and a second type of option. Each option in the first type of option is used for determining an image to be processed, and different options in the second type of option are used for indicating different image processing modes (one option is used for indicating one image processing mode). The image to be processed is a partial image of the second image, and the preset area is any area on the electronic device.
In the embodiment of the present invention, the number of options of the first type of options and the number of options of the second type of options are not limited, and may be specifically determined according to actual use requirements. Other options may also be included in the first control, and the embodiment of the present invention is not limited.
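One way to model the first control's two option categories is a simple mapping, sketched below in Python. The individual option names are hypothetical; the disclosure fixes only the two categories themselves:

```python
# Hedged sketch of the first control's structure. Option names are assumed.
first_control = {
    # First type of option: each determines how the image to be processed
    # (a partial image of the second image) is selected.
    "region_options": ["grid", "selection_box"],
    # Second type of option: each indicates one image processing mode.
    "processing_options": ["filter", "beautify", "sharpen"],
}

print(len(first_control["processing_options"]))  # 3
```

As the text notes, the number of options in each category is not limited; this dict could hold any number of entries.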
Optionally, in the embodiment of the present invention, the first control may be displayed in a floating manner on the first image (that is, the preset area is an area in the first area); on the second image (that is, the preset area is an area in the second area); on both the first image and the second image (that is, a part of the preset area is an area in the first area and another part is an area in the second area); on an area other than the first area and the second area (that is, the preset area is neither in the first area nor in the second area); or at another feasible position, which is not limited in the embodiment of the present invention.
Optionally, in the embodiment of the present invention, a user may trigger the electronic device through input to adjust the display position of the first control, adjust the size of the first control, rotate the direction of the first control, change the arrangement order of the options in the first control, or switch the display state of the first control, which is not limited in the embodiment of the present invention.
The display state of the first control may include a floating display state, an adsorption display state (in which the control is adsorbed at an edge of a screen of the electronic device), a minimized display state, a hidden state, and the like, which is not limited in the embodiment of the present invention.
For example, the user may trigger the electronic device to change the display state of the first control through an input: the user may trigger the electronic device to switch the display state of the first control from the floating display state to the adsorption display state through an input of dragging the first control to an edge of the screen; the user may trigger the electronic device to switch the display state of the first control from the minimized display state (or the adsorption display state) to the floating display state through a click input on the first control in the minimized display state (or the adsorption display state).
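The display-state switching above can be sketched as a small transition table in Python. The state and input names are illustrative assumptions; unlisted inputs simply leave the state unchanged:

```python
# Hedged sketch of control display-state transitions as a lookup table.
TRANSITIONS = {
    ("floating", "drag_to_edge"): "adsorbed",   # drag control to screen edge
    ("adsorbed", "click"): "floating",          # click the adsorbed control
    ("minimized", "click"): "floating",         # click the minimized control
}

def next_display_state(state, user_input):
    """Return the new display state; unknown inputs keep the current state."""
    return TRANSITIONS.get((state, user_input), state)

print(next_display_state("floating", "drag_to_edge"))  # adsorbed
```

A real implementation would attach these transitions to touch-event handlers; the table only captures the state logic.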
It is to be understood that, in the embodiment of the present invention, different options in the first type of option are used to indicate different forms of determining the image to be processed (in the second image), with one option indicating one form. For example, the image to be processed may be determined in a grid form or in a selection-box form; the embodiment of the present invention is not limited.
It should be noted that, in the embodiment of the present invention, the execution sequence of the above steps 202 to 203 and 207 is not limited. For example, the electronic device may perform step 202-step 203 first, and then perform step 207; step 207 may be performed first, and then step 202 to step 203 may be performed; step 202-step 203 and step 207 may also be performed simultaneously.
In the embodiment of the present invention, the electronic device displays the first control in a floating manner in the preset area, so that in a case that the user needs to perform further image processing on the whole or a partial area of the second image, the user's operation process can be simplified, operation time can be saved, and human-machine interaction performance can be improved.
Optionally, in a case that the user is still not satisfied with the display effect of the second image, the user may, through an input on an option in the first type of option in the first control, trigger the electronic device to display, in the second image, a control corresponding to that option.
Illustratively, in conjunction with fig. 3, as shown in fig. 4, after step 207, the image display method provided by the embodiment of the present invention may further include steps 208 to 209 described below.
Step 208, the electronic device receives a second input of the first option in the first category of options from the user.
Optionally, the second input may be a click input of the user on the first option, a slide input of the user on the first option, or another feasible input, which is not limited in the embodiment of the present invention.
For example, the click input may be a click input of any number of clicks, such as a single-click input, a double-click input, or a triple-click input; the slide input may be a slide input in any direction, for example, a slide input in a counterclockwise direction, a slide input in a clockwise direction, an upward slide input, a downward slide input, a leftward slide input, a rightward slide input, or the like.
And step 209, the electronic device responds to the second input and displays a second control corresponding to the first option on the second image.
Optionally, the second control is an N × M grid or a selection box, where N and M are positive integers. The second control may also be a control in another form; the embodiment of the present invention is not limited.
Optionally, in the embodiment of the present invention, the number of the first options is not limited, that is, the first option may be one option or may be multiple options; the number of the second controls is not limited, that is, the second control may be one control or a plurality of controls. For example, if the first option is an option, the second control is a control; if the first option is multiple options, then the second control is multiple controls.
For example, if the second control is an N × M grid, the second control may include at least one of the following picture segmentation forms: a 2 × 3 grid, a 3 × 4 grid, a nine-square (3 × 3) grid, a sixteen-square (4 × 4) grid, a honeycomb grid, and the like.
It should be noted that, if the second control is an N × M grid, the electronic device may determine one to-be-processed image in the second image according to each grid in the N × M grid, and may determine at most N × M to-be-processed images in the second image, which may be determined specifically according to actual usage requirements, and the embodiment of the present invention is not limited.
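Determining the images to be processed from an N × M grid can be sketched as splitting the second image's bounding box into N × M cell rectangles, as below. Whether N counts columns or rows is an assumption here:

```python
# Hedged sketch: split a width x height image into an n-column, m-row grid;
# each cell box can determine one image to be processed, up to n*m in total.

def grid_cells(width, height, n, m):
    """Return (left, top, right, bottom) boxes for an n-by-m grid split."""
    cell_w, cell_h = width // n, height // m
    return [(col * cell_w, row * cell_h,
             (col + 1) * cell_w, (row + 1) * cell_h)
            for row in range(m) for col in range(n)]

cells = grid_cells(90, 60, 3, 3)   # nine-square grid over a 90 x 60 image
print(len(cells), cells[0])        # 9 (0, 0, 30, 20)
```

Integer division is used for simplicity; a production version would distribute any remainder pixels among the edge cells.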
For example, if the second control is a selection box, the second control may include at least one of a circular selection box, a polygonal selection box (including a triangular selection box, a quadrilateral selection box (including a parallelogram selection box, a rectangular selection box, a diamond selection box, a trapezoid selection box, etc.), a pentagonal selection box, etc.), and the like.
Optionally, the shape of the selection frame may be preset by the electronic device, or may be customized by the user according to actual use requirements, and the embodiment of the present invention is not limited.
In the embodiment of the invention, the user triggers the electronic device to display the second control in the second image through the second input of the first option, so that the user can determine the image to be processed in the second image according to the second control, and further can perform corresponding image processing on the image to be processed through the input of the option in the second type of options.
Optionally, after the step 209, the user may adjust the second control through input, or may zoom the to-be-processed image determined by the second control through input, so as to observe a display effect of the to-be-processed image.
Illustratively, in conjunction with fig. 4, as shown in fig. 5, after step 209, the image display method provided by the embodiment of the present invention may further include steps 210 to 211 described below.
And step 210, the electronic device receives a third input of the user to the second control.
Optionally, the third input may be a click input of the user on the second control, a drag input of the user on the second control, or another feasible input, which is not limited in the embodiment of the present invention.
For example, for the specific description of the click input, reference may be made to the description of the click input for the second input in step 208, which is not repeated herein; the drag input may be a drag input in any direction, for example, a clockwise drag input, a counterclockwise drag input, an upward drag input, a downward drag input, a leftward drag input, a rightward drag input, or the like.
And step 211, the electronic device responds to the third input and executes a target operation corresponding to the third input.
The target operation includes at least one of: and adjusting the position of the second control on the second image, adjusting the size of the second control, rotating the direction of the second control, switching the display state of the second control, and deleting the second control.
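The target operations listed above can be sketched in Python against a second control modeled as a dict. The operation names and fields are hypothetical illustrations of the adjust/rotate/switch/delete behaviors, not an implementation from the disclosure:

```python
# Hedged sketch of applying a target operation to a selection-box control.

def apply_target_operation(control, op, **kwargs):
    if op == "move":                  # adjust position on the second image
        control["x"], control["y"] = kwargs["x"], kwargs["y"]
    elif op == "resize":              # adjust the size of the control
        control["w"], control["h"] = kwargs["w"], kwargs["h"]
    elif op == "rotate":              # rotate the direction of the control
        control["angle"] = (control.get("angle", 0) + kwargs["degrees"]) % 360
    elif op == "set_state":           # switch the display state
        control["state"] = kwargs["state"]
    elif op == "delete":              # delete the control
        control["deleted"] = True
    return control

box = {"x": 0, "y": 0, "w": 100, "h": 80, "state": "floating"}
print(apply_target_operation(box, "rotate", degrees=90)["angle"])  # 90
```

Each third-input gesture (drag, pinch, double-click, etc.) would map to one of these operations in a real event handler.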
Optionally, the display state of the second control may include a floating display state, an adsorption display state, a minimized display state, and the like; the embodiment of the present invention is not limited.
For example, the user may trigger the electronic device to change the display state of the second control through an input: the user may trigger the electronic device to switch the display state of the second control from the floating display state to the adsorption display state through an input of dragging the second control to an edge of the screen; the user may trigger the electronic device to switch the display state of the second control from the floating display state to the minimized display state through an input of minimizing the second control; the user may trigger the electronic device to switch the display state of the second control from the minimized display state (or the adsorption display state) to the floating display state through a double-click input on the second control in the minimized display state (or the adsorption display state).
It should be noted that, in a case that the display state of the second control is the adsorption display state or the minimized display state, the second control can be prevented from obstructing the user's view, so that the user can compare the display effects of the first image and the second image and a more reasonable image to be processed can be obtained.
In the embodiment of the invention, the user triggers the electronic equipment to adjust the second control through the third input, so that the electronic equipment can determine a more reasonable image to be processed according to the user requirement.
Optionally, in a case that, after at least one round of image processing on the second image, the user still considers the display effect of a partial region of the second image not ideal, the user may trigger the electronic device through an input to recommend images whose similarity to that partial region is greater than a certain threshold and whose display effect is better, so that the user can select an image for replacing the partial region of the second image.
Illustratively, in conjunction with fig. 5, as shown in fig. 6, after step 211, the image display method provided by the embodiment of the present invention may further include the following steps 212 to 215.
In step 212, the electronic device receives a fourth input of the first image to be processed determined by the user on the second control.
Optionally, the fourth input may be a click input of the user on the first image to be processed, a slide input of the user on the first image to be processed, or another feasible input, which is not limited in the embodiment of the present invention.
For example, the detailed description of the click input and the slide input may refer to the description of the click input and the slide input in the description of the second input in step 208, and will not be repeated here.
Step 213, the electronic device displays at least one third image on the third area of the electronic device in response to the fourth input.
Each third image is an image which has similarity (including scene similarity, person similarity and the like) with the first image to be processed greater than or equal to a first threshold and meets preset conditions, wherein the preset conditions include at least one of the following: the frequency of use (i.e., the frequency with which the user uses the corresponding third image) is greater than or equal to the second threshold value, and the user score (i.e., the score of the display effect of the user on the corresponding third image) is greater than or equal to the third threshold value.
In the embodiment of the present invention, values of the first threshold, the second threshold, and the third threshold may be determined according to actual use requirements, and the embodiment of the present invention is not limited.
In response to the fourth input, the electronic device screens, from images on a network, images stored in the electronic device, and the like, at least one third image whose similarity to the first image to be processed is greater than or equal to the first threshold and which satisfies the preset condition, and then displays the at least one third image in the third area.
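The screening condition can be sketched as a filter in Python. The threshold values and the candidate fields are illustrative assumptions; per the text, a candidate qualifies when its similarity meets the first threshold and at least one preset condition (usage frequency or user score) holds:

```python
# Hedged sketch of screening candidate third images. Thresholds are assumed.

def screen_candidates(candidates, sim_threshold=0.8,
                      freq_threshold=10, score_threshold=4.0):
    """Keep candidates that are similar enough AND meet a preset condition."""
    return [c for c in candidates
            if c["similarity"] >= sim_threshold
            and (c["use_frequency"] >= freq_threshold
                 or c["user_score"] >= score_threshold)]

pool = [
    {"name": "a", "similarity": 0.9, "use_frequency": 12, "user_score": 3.0},
    {"name": "b", "similarity": 0.9, "use_frequency": 2, "user_score": 2.0},
    {"name": "c", "similarity": 0.5, "use_frequency": 20, "user_score": 5.0},
]
print([c["name"] for c in screen_candidates(pool)])  # ['a']
```

Computing the similarity itself (scene similarity, person similarity, and the like) is left to any related technology, as the text notes.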
Optionally, the third area may be any area on the electronic device, and reference may be specifically made to the description of the preset area in step 207, which is not described herein again.
For example, the third area may be an area of the electronic device different from the first area and the second area; that is, the third area, the first area, and the second area may be displayed in a split-screen manner on one screen of the electronic device; or two of the third area, the first area, and the second area may be displayed in a split-screen manner on one screen of the electronic device while the remaining area is displayed on another screen of the electronic device; or the third area, the first area, and the second area may be displayed on different screens of the electronic device, respectively.
It should be noted that the electronic device may partition the third area for displaying the at least one third image in response to the first input of the user, or may partition the third area in response to the fourth input; other feasible cases are also possible, and the embodiment of the present invention is not limited.
In step 214, the electronic device receives a fifth input from the user to the target third image of the at least one third image.
Optionally, the fifth input may be a click input of the user on the target third image, a slide input of the user on the target third image, a drag input of the user on the target third image, or another feasible input, which is not limited in the embodiment of the present invention.
For example, for the detailed description of the click input and the slide input, reference may be made to the related description for the second input in step 208, and for the detailed description of the drag input, reference may be made to the related description for the third input in step 210, which are not repeated herein.
Step 215, the electronic device updates the first to-be-processed image to the target third image in response to the fifth input.
For example, in response to the fifth input, the electronic device removes the first to-be-processed image from the second image, obtains a fourth image (i.e., the second image after removing the first to-be-processed image, that is, the second image without including the first to-be-processed image), and synthesizes (or splices) the fourth image with the target third image to obtain a fifth image after updating the first to-be-processed image into the target third image.
In the process of synthesizing the second image from which the first image to be processed is removed and the target third image, the electronic device may perform processing such as cropping on the target third image as needed to obtain an image matching the second image from which the first image to be processed is removed.
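As a non-limiting sketch of the removal-and-splicing of step 215 (the function names, the list-of-lists pixel representation, and the nearest-neighbour resizing are assumptions of this illustration, not part of the claimed method), removing the first to-be-processed image from the second image and splicing in the resized target third image might look like:

```python
def nearest_resize(img, out_h, out_w):
    """Nearest-neighbour resize of a 2D pixel grid (list of lists)."""
    in_h, in_w = len(img), len(img[0])
    return [[img[r * in_h // out_h][c * in_w // out_w] for c in range(out_w)]
            for r in range(out_h)]

def replace_region(second_image, box, target_third_image):
    """Return a new image in which the region `box` (top, left, bottom, right)
    of `second_image` is replaced by `target_third_image`, resized to match
    the removed region."""
    top, left, bottom, right = box
    # Crop/scale the target third image to fit the removed region.
    patch = nearest_resize(target_third_image, bottom - top, right - left)
    # Copy of the second image with the region cleared ("fourth image").
    result = [row[:] for row in second_image]
    for r in range(top, bottom):
        for c in range(left, right):
            result[r][c] = patch[r - top][c - left]
    return result  # the "fifth image" after updating
```

The original second image is left untouched; the function returns a new grid, which matches the patent's description of obtaining a fifth image rather than mutating the second image in place.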
In the embodiment of the present invention, the electronic device recommends at least one third image to the user, so that the user can select a target third image to replace the first to-be-processed image in the second image and obtain an image with a better display effect. Therefore, the operation process of the user for performing image processing on the second image can be simplified, the operation time can be saved, and the man-machine interaction performance can be improved.
Optionally, after step 215, the user may further trigger the electronic device to perform further image processing on the fifth image through an input, so as to obtain an image with better display effect.
Illustratively, in conjunction with fig. 6, as shown in fig. 7, after step 215, the image display method provided by the embodiment of the present invention may further include the following steps 216 to 217.
Step 216, the electronic device receives a sixth input of the second option in the second category of options from the user.
Optionally, the sixth input may be a click input of the user on the second option, a slide input of the user on the second option, or other feasibility inputs, which is not limited in the embodiment of the present invention.
For example, the detailed description of the click input and the slide input may refer to the description of the click input and the slide input in the description of the second input in step 208, and will not be repeated here.
In this embodiment of the present invention, the number of the second options is not limited, that is, the second option may be one option (the second option indicates one image processing manner) or may be multiple options (the second option indicates multiple image processing manners), and this embodiment of the present invention is not limited.
Step 217, in response to the sixth input, the electronic device performs image processing on the target third image displayed in the second area according to the image processing mode indicated by the second option.
It is understood that the target third image displayed in the second area is the target third image in the fifth image in step 215.
In the embodiment of the present invention, after the electronic device performs image processing on the target third image displayed in the second area according to the image processing mode indicated by the second option, the display effect of the synthesized fifth image may be more natural and more beautiful.
In this embodiment of the present invention, the user may trigger the electronic device to store the second image obtained through the processing, or may store a partial image of the second image obtained through the processing.
Illustratively, as shown in fig. 8, the electronic device currently displays a shooting preview interface indicated by the mark "1". After the user clicks the shooting control, as shown in fig. 9, the electronic device displays the first image (the original image) in the first area indicated by the mark "2", displays the second image (the processed image) in the second area indicated by the mark "3", partitions the third area indicated by the mark "4" (for displaying at least one intelligently recommended third image), and displays, in a floating manner in the preset area, the first control indicated by the mark "5". The first control includes first-type options such as "nine-square grid, sixteen-square grid, custom grid, circular selection box, rectangular selection box, and custom selection box" and second-type options such as "filter 1, filter 2, beauty 1, beauty 2". After comparing the display effects of the first image and the second image, if the user is still unsatisfied with the display effect of the second image, the user may trigger, through an input to at least one option of the first-type options in the first control, the electronic device to display at least one control corresponding to the at least one option. For example, if the user clicks the "custom selection box" option in the first control and customizes a "diamond selection box", an "oval selection box" and a "pentagon selection box", the electronic device displays, on the second image, the diamond selection box indicated by the mark "6", the oval selection box indicated by the mark "7", and the pentagon selection box indicated by the mark "8", as shown in fig. 10. For another example, if the user clicks the "nine-square grid" option in the first control, the electronic device displays, on the second image, the nine-square grid indicated by the mark "9", as shown in fig. 11.
The user can trigger the electronic device, through an input to any one of the at least one control, to adjust the position, size or direction of the control, to switch the display state of the control, or to delete the control. For example, as shown in fig. 12, the user triggers the electronic device to switch the display state of the diamond selection box from the floating display state to the minimized display state indicated by the mark "10" through a two-finger zoom input; the user triggers the electronic device to switch the display state of the pentagon selection box from the floating display state to the adsorbed display state indicated by the mark "11" through a drag input; and the user triggers the electronic device to delete the oval selection box through an input of dragging the oval selection box out of the second area. As shown in fig. 13, the user may trigger the electronic device, through a double-click input, to enlarge and display any to-be-processed image determined by the nine-square grid, so that the user can better view the display effect of the to-be-processed image. If the user clicks the "filter 1" option in the first control while the to-be-processed image indicated by the mark "12" is selected, the electronic device performs filter processing on the to-be-processed image indicated by the mark "12" according to filter 1. If the display effect of the to-be-processed image indicated by the mark "12" is still unsatisfactory after the filter processing, the user can trigger the electronic device, through a slide input on the to-be-processed image indicated by the mark "12", to display the intelligently recommended "third image 1, third image 2 and third image 3" in the third area, so that the user can select an image to replace the to-be-processed image indicated by the mark "12".
The user can trigger the electronic device to zoom the third image 1 through a double-click input on the third image 1, so that the user can better observe the display effect of the third image 1. After the electronic device, in response to a user input, replaces the to-be-processed image indicated by the mark "12" with the third image 1, the user can further trigger the electronic device through an input to perform image processing on the replacement image, so that the display effect of the spliced image is more natural. After the user is satisfied with the display effect of the processed second image, the user can store the processed second image through an input, and can share it with others through an instant social application.
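The input-to-operation mapping in the example above can be sketched as follows; the gesture names and the control fields are illustrative assumptions of this sketch, not terms disclosed by the embodiment:

```python
def handle_control_input(control, gesture):
    """Return an updated copy of `control` for a recognized gesture, or None
    when the gesture deletes the control (e.g. dragging it out of the area).

    `control` is a plain dict (hypothetical representation of a selection box
    or grid); `gesture` is a hypothetical gesture label."""
    if gesture == "pinch":            # two-finger zoom input: minimize
        return {**control, "state": "minimized"}
    if gesture == "drag_to_edge":     # drag input: adsorbed (snapped) state
        return {**control, "state": "adsorbed"}
    if gesture == "drag_out":         # dragged out of the second area
        return None                   # control is deleted
    if gesture == "double_tap":       # enlarge the selected to-be-processed image
        return {**control, "zoomed": True}
    return control                    # unrecognized input: no change
```

Returning a fresh dict per event keeps the original control state untouched, which makes the handler easy to unit-test; a real implementation would instead update on-screen view objects.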
The drawings in the embodiments of the present invention are each illustrated as an independent embodiment. In specific implementation, each of the drawings may also be implemented in combination with any other drawing that can be combined with it, and the embodiment of the present invention is not limited thereto. For example, in conjunction with fig. 4, after step 209, the image display method provided by the embodiment of the present invention may further include the above-mentioned step 212 to step 215.
As shown in fig. 14, an embodiment of the present invention provides an electronic device 120, where the electronic device 120 includes: a receiving module 121 and a display module 122. The receiving module 121 is configured to receive a first input of a user, where the first input is used to trigger the electronic device to capture a first image. The display module 122 is configured to display a first image in a first area of the electronic device and a second image in a second area of the electronic device in response to the first input received by the receiving module 121; the second image is an image obtained after the first image is processed in a target image processing mode.
Optionally, the electronic device 120 further includes: a photographing module 123, an evaluation module 124, and a processing module 125; the shooting module 123 is configured to obtain a first image by shooting before the display module 122 displays a first image in a first area of the electronic device and displays a second image in a second area of the electronic device; the evaluation module 124 is configured to perform image quality evaluation on the first image captured by the capturing module 123 to obtain an evaluation result; the processing module 125 is configured to perform image processing on the first image according to the target image processing manner corresponding to the evaluation result obtained by the evaluation module 124, so as to obtain a second image.
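A minimal illustration of how the evaluation module 124 and the processing module 125 could cooperate follows. The concrete metrics (mean brightness, value range), thresholds, and mode names are assumptions of this sketch, not the evaluation algorithm of the embodiment:

```python
def evaluate_image(pixels):
    """Toy image-quality evaluation on a 2D grid of grayscale values (0-255):
    returns mean brightness and a crude contrast measure (value range)."""
    flat = [p for row in pixels for p in row]
    return {
        "mean": sum(flat) / len(flat),
        "contrast": max(flat) - min(flat),
    }

def target_processing_mode(result):
    """Map an evaluation result to a target image processing mode.
    The thresholds (60, 40) are arbitrary illustrative values."""
    if result["mean"] < 60:        # too dark overall
        return "brighten"
    if result["contrast"] < 40:    # values squeezed into a narrow band
        return "boost_contrast"
    return "none"
```

In this flow, the photographing module supplies `pixels`, the evaluation module computes `result`, and the processing module applies the mode returned by `target_processing_mode` to produce the second image.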
Optionally, the display module 122 is further configured to, after the receiving module 121 receives the first input of the user, respond to the first input received by the receiving module 121 and display a first control in a floating manner in the preset area of the electronic device. The first control includes a first type of option and a second type of option; each option in the first type of option is used for determining an image to be processed, different options in the second type of option are used for indicating different image processing modes, the image to be processed is a partial image of the second image, and the preset area is any area on the electronic device. The receiving module 121 is further configured to receive a second input of the first option in the first category of options from the user. The display module 122 is further configured to display, in response to the second input received by the receiving module 121, a second control corresponding to the first option on the second image, where the second control is an N × M grid or a selection box, and N and M are positive integers.
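As a sketch of how an N × M grid of the second control could partition the second image into candidate to-be-processed sub-images (assuming N columns and M rows; all names here are illustrative, not part of the claims):

```python
def grid_cells(width, height, n, m):
    """Split a width x height image into the cells of an N x M grid.
    Each cell is a (left, top, right, bottom) rectangle; any one cell is a
    candidate to-be-processed partial image selectable through the grid."""
    cells = []
    for row in range(m):
        for col in range(n):
            cells.append((
                width * col // n,         # left
                height * row // m,        # top
                width * (col + 1) // n,   # right
                height * (row + 1) // m,  # bottom
            ))
    return cells
```

Integer division keeps the cells gap-free even when the image dimensions are not exact multiples of N or M; a selection-box control would instead produce a single user-drawn rectangle rather than a fixed partition.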
Optionally, the electronic device 120 further includes: an execution module 126; the receiving module 121 is further configured to receive a third input of the second control from the user after the displaying module 122 displays the second control corresponding to the first option on the second image; the executing module 126 is configured to, in response to the third input received by the receiving module 121, execute a target operation corresponding to the third input, where the target operation includes at least one of: and adjusting the position of the second control on the second image, adjusting the size of the second control, rotating the direction of the second control, switching the display state of the second control, and deleting the second control.
Optionally, the receiving module 121 is further configured to receive, after the displaying module 122 displays the second control corresponding to the first option on the second image, a fourth input of the first image to be processed, which is determined by the user for the second control; the display module 122 is further configured to display at least one third image on the third area of the electronic device in response to the fourth input received by the receiving module 121, where each third image is an image whose similarity to the first image to be processed is greater than or equal to the first threshold and satisfies a preset condition, where the preset condition includes at least one of: the use frequency is greater than or equal to a second threshold value, and the user score is greater than or equal to a third threshold value; the receiving module 121 is further configured to receive a fifth input from the user to a target third image in the at least one third image displayed by the displaying module 122; the display module 122 is further configured to update the first to-be-processed image to the target third image in response to a fifth input received by the receiving module 121.
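The recommendation rule described above — similarity to the first to-be-processed image at least the first threshold, plus at least one of the preset conditions (use frequency, user score) — can be sketched as a simple filter; the dictionary fields and how similarity is computed are assumptions of this illustration:

```python
def recommend_third_images(candidates, first_th, second_th, third_th):
    """Return the candidate images eligible for display in the third area:
    similarity must reach the first threshold, and at least one of the
    preset conditions (frequency >= second threshold, score >= third
    threshold) must hold."""
    return [
        c for c in candidates
        if c["similarity"] >= first_th
        and (c["frequency"] >= second_th or c["score"] >= third_th)
    ]
```

Note the `or` between the two preset conditions, matching "at least one of" in the description; a stricter variant would use `and`.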
Optionally, the electronic device 120 further includes: a processing module 125; the receiving module 121 is further configured to receive a sixth input of the second option in the second category of options from the user after the display module 122 updates the first to-be-processed image to the target third image; the processing module 125 is configured to, in response to the sixth input received by the receiving module 121, perform image processing on the target third image displayed in the second area according to the image processing manner indicated by the second option.
It should be noted that, as shown in fig. 14, modules that are necessarily included in the electronic device 120 are illustrated by solid line boxes, such as a receiving module 121 and a display module 122; modules that may or may not be included in the electronic device 120 are illustrated with dashed boxes, such as a capture module 123, an evaluation module 124, a processing module 125, and an execution module 126.
The electronic device provided in the embodiment of the present invention is capable of implementing each process shown in any one of fig. 2 to 13 in the above method embodiments, and details are not described here again to avoid repetition.
The embodiment of the invention provides electronic equipment, which can receive a first input of a user, wherein the first input is used for triggering the electronic equipment to shoot a first image; in response to a first input, displaying a first image in a first area of the electronic device and displaying a second image in a second area of the electronic device; and the second image is an image obtained after the first image is processed in a target image processing mode. By the scheme, the electronic equipment responds to the first input of shooting the first image, and can display the first image and display a second image obtained after the first image is processed in the target image processing mode. Therefore, the operation process of the user for processing the shot image can be simplified, the time is saved, and the man-machine interaction performance is improved. Meanwhile, the electronic equipment simultaneously displays the first image and the second image, so that the user can compare the display effect of the first image and the second image conveniently.
Fig. 15 is a schematic diagram of a hardware structure of an electronic device implementing various embodiments of the present invention. As shown in fig. 15, the electronic device 100 includes, but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 15 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted electronic device, a wearable device, a pedometer, and the like.
The user input unit 107 is configured to receive a first input of a user, where the first input is used to trigger the electronic device to capture a first image; a display unit 106 for displaying a first image in a first area of the electronic device and a second image in a second area of the electronic device in response to a first input; and the second image is an image obtained after the first image is processed in a target image processing mode.
According to the electronic device provided by the embodiment of the invention, the electronic device can receive a first input of a user, wherein the first input is used for triggering the electronic device to shoot a first image; in response to a first input, displaying a first image in a first area of the electronic device and displaying a second image in a second area of the electronic device; and the second image is an image obtained after the first image is processed in a target image processing mode. By the scheme, the electronic equipment responds to the first input of shooting the first image, and can display the first image and display a second image obtained after the first image is processed in the target image processing mode. Therefore, the operation process of the user for processing the shot image can be simplified, the time is saved, and the man-machine interaction performance is improved. Meanwhile, the electronic equipment simultaneously displays the first image and the second image, so that the user can compare the display effect of the first image and the second image conveniently.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 101 may be used for receiving and sending signals during a message transmission or call process. Specifically, the radio frequency unit 101 delivers downlink data received from a base station to the processor 110 for processing, and transmits uplink data to the base station. Typically, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 102, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102 or stored in the memory 109 into an audio signal and output as sound. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the electronic apparatus 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used to receive an audio or video signal. The input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of a still picture or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or another storage medium) or transmitted via the radio frequency unit 101 or the network module 102. The microphone 1042 may receive sound and process it into audio data. In a phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station and output via the radio frequency unit 101.
The electronic device 100 also includes at least one sensor 105, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or the backlight when the electronic device 100 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 105 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 106 is used to display information input by the user or information provided to the user. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, can collect touch operations by the user on or near it (e.g., operations performed by the user on or near the touch panel 1071 using a finger, a stylus, or any suitable object or attachment). The touch panel 1071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch orientation of the user, detects a signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 110, and receives and executes commands sent by the processor 110. The touch panel 1071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072, which may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, a switch key), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 1071 may be overlaid on the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in fig. 15, the touch panel 1071 and the display panel 1061 are two independent components to implement the input and output functions of the electronic device, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the electronic device, and is not limited herein.
The interface unit 108 is an interface for connecting an external device to the electronic apparatus 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic apparatus 100 or may be used to transmit data between the electronic apparatus 100 and the external device.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, performs various functions of the electronic device and processes data by operating or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the electronic device. Processor 110 may include one or more processing units; alternatively, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The electronic device 100 may further include a power supply 111 (e.g., a battery) for supplying power to each component, and optionally, the power supply 111 may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the electronic device 100 includes some functional modules that are not shown, and are not described in detail herein.
Optionally, an embodiment of the present invention further provides an electronic device, which may include the processor 110 shown in fig. 15, the memory 109, and a computer program stored on the memory 109 and capable of being executed on the processor 110, where the computer program, when executed by the processor 110, implements each process of the image display method shown in any one of fig. 2 to 13 in the foregoing method embodiments, and can achieve the same technical effect, and details are not described here to avoid repetition.
An embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the image display method shown in any one of fig. 2 to 13 in the foregoing method embodiments, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling an electronic device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.