
CN111405180A - Photographing method, photographing device, storage medium and mobile terminal - Google Patents


Info

Publication number
CN111405180A
CN111405180A
Authority
CN
China
Prior art keywords
shooting
determining
photographing
scene
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010192948.7A
Other languages
Chinese (zh)
Inventor
许玉新
张刘哲
曾剑青
张永兴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huizhou TCL Mobile Communication Co Ltd
Original Assignee
Huizhou TCL Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huizhou TCL Mobile Communication Co Ltd filed Critical Huizhou TCL Mobile Communication Co Ltd
Priority to CN202010192948.7A
Publication of CN111405180A
Legal status: Pending

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/62Control of parameters via user interfaces
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses a photographing method, a photographing device, a mobile terminal and a storage medium, applied to the mobile terminal. When the mobile terminal starts its photographing function, a preview image and shooting environment information are obtained; a target scene is determined from a plurality of preset scenes according to the preview image and the shooting environment information; a description sentence corresponding to the target scene is determined; and a photo is then generated according to the description sentence and the preview image. A photo carrying a description sentence is thereby produced, the description sentence being matched with the photographed object and the shooting environment.

Description

Photographing method, photographing device, storage medium and mobile terminal
Technical Field
The invention relates to the technical field of terminals, in particular to a photographing method, a photographing device, a storage medium and a mobile terminal.
Background
With the continuous development of terminal technology, mobile terminals have become smaller and more capable; for example, the body of a mobile terminal is provided with a camera for the user to take pictures. Compared with digital cameras, single-lens reflex cameras and other shooting equipment, a mobile terminal is easier to carry, and in recent years the pixel count of its camera has become fully sufficient for people's daily needs, so it is no surprise that the mobile terminal has become the shooting device most favored by users.
However, when a user takes a picture with a mobile terminal, only the scenery and people of a single moment are recorded, so the content of the photo is limited. When the user later looks at such a plain old photo, he or she may no longer remember the place and mood of the moment it was taken, which runs counter to the original purpose of taking the picture and reduces the storage value of the photo.
Disclosure of Invention
The invention provides a photographing method, and aims to solve the technical problems that the existing mobile terminal is single in photo content and low in photo storage value during photographing.
The technical scheme provided by the application is as follows:
the invention provides a photographing method, which is applied to a mobile terminal and comprises the following steps:
when the mobile terminal starts a shooting function, acquiring a preview image and shooting environment information;
determining a target scene from a plurality of preset scenes according to the preview image and the shooting environment information;
determining a descriptive statement corresponding to the target scene;
and generating a photo according to the descriptive statement and the preview image.
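Under stated assumptions, the four claimed steps can be sketched as follows. The data structures and function names below are hypothetical illustrations, not part of the disclosed implementation; the scene-scoring shown here is simplified (the full weighted matching is detailed in the embodiments).

```python
def determine_target_scene(object_types, environment, preset_scenes):
    """Pick the preset scene whose keyword set best matches the preview
    image's object types plus the shooting environment (simplified scoring)."""
    def score(scene):
        return len(preset_scenes[scene] & (object_types | environment))
    return max(preset_scenes, key=score)

def take_photo(preview, object_types, environment, preset_scenes, sentences):
    # Claimed flow: target scene -> descriptive sentence -> photo carrying it.
    scene = determine_target_scene(object_types, environment, preset_scenes)
    return {"image": preview, "caption": sentences[scene]}
```

For instance, a preview recognized as containing a mountain, shot in snowy evening conditions, would be captioned with the sentence associated with a snow-mountain scene.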
The application also provides a photographing device, applied to a mobile terminal, which comprises:
the acquisition module is used for acquiring the preview image and the shooting environment information when the mobile terminal starts a shooting function;
the first determining module is used for determining a target scene from a plurality of preset scenes according to the preview image and the shooting environment information;
the second determining module is used for determining the descriptive statement corresponding to the target scene;
and the generating module is used for generating a photo according to the descriptive statement and the preview image.
In the photographing apparatus provided by the present application, the first determining module includes:
the processing submodule is used for processing the preview image through the trained model so as to obtain a plurality of shooting object types in the preview image;
the third determining submodule is used for determining at least one scene to be screened which is matched with the shooting object type and the shooting environment information and the matching weight of each scene to be screened from a plurality of preset scenes;
and the target submodule is used for taking the scene to be screened with the highest matching weight as a target scene.
In the photographing apparatus provided by the present application, the third determining sub-module includes:
the fourth determining unit is used for determining a first keyword group corresponding to each preset scene in a plurality of preset scenes, and the first keyword group comprises at least one first keyword;
a fifth determining unit configured to determine, as matching keywords, the first keywords that match the shooting object type and the shooting environment information in each of the first keyword groups, and determine the number of words of the matching keywords in each of the first keyword groups;
and the sixth determining unit is used for determining at least one scene to be screened and the matching weight of each scene to be screened from the plurality of preset scenes according to the matching keywords and the number of the words.
In the photographing device provided by the application, the sixth determining unit is specifically configured to:
determining the preset scenes of which the number of matched words is greater than a preset number as the scenes to be screened;
acquiring a preset weight of each matched keyword;
and determining the matching weight of each scene to be screened according to the preset weight and the matching keywords corresponding to each scene to be screened.
In the photographing device provided by the application, the photographing environment information includes a photographing place, a photographing time, and photographing weather, and the fifth determining unit is specifically configured to:
taking the shooting place, the shooting time, the shooting weather and the shooting object type as second keywords;
and determining a first keyword matched with the second keyword in the first keyword group to obtain the first keyword matched with the shooting object type and the shooting environment information.
In the photographing device provided by the application, the generation module is specifically configured to:
displaying the descriptive sentence and the preview image in a preview interface;
detecting whether the descriptive statement is modified by a user;
if yes, updating the description statement according to the modification operation;
and when a photographing instruction is detected, generating a photo according to the preview image and the updated description statement.
In the photographing device provided by the application, the generation module is specifically configured to:
determining a display area of the descriptive sentence in the preview interface;
detecting whether the display area is pressed and whether the pressing time exceeds a preset duration;
if the display area is pressed and the pressing time exceeds the preset time, displaying an edit bar;
when the user edits the descriptive statement through the edit bar, judging that the user modifies the descriptive statement;
and when the display area is not pressed, or the pressing time does not exceed the preset time, or the user does not edit the descriptive sentence through the edit bar, judging that the user does not modify the descriptive sentence.
The present application further provides a computer readable storage medium having stored therein a plurality of instructions adapted to be loaded by a processor to perform the method of taking a picture as described in any of the above.
The application also provides a mobile terminal, which comprises a processor and a memory, wherein the processor is electrically connected with the memory, the memory is used for storing instructions and data, and the processor is used for executing any step of the photographing method described above.
The beneficial effects of this application are as follows: the application discloses a photographing method, a photographing device, a mobile terminal and a storage medium, applied to the mobile terminal. When the mobile terminal starts its photographing function, a preview image and shooting environment information are obtained; a target scene is determined from a plurality of preset scenes according to the preview image and the shooting environment information; a description sentence corresponding to the target scene is determined; and a photo is then generated according to the description sentence and the preview image. A photo carrying a description sentence is thereby produced, the description sentence being matched with the photographed object and the shooting environment.
Drawings
The technical solution and other advantages of the present invention will become apparent from the following detailed description of specific embodiments of the present invention, which is to be read in connection with the accompanying drawings.
Fig. 1 is a schematic view of an application scenario of a photographing method provided in an embodiment of the present application.
Fig. 2 is a schematic flowchart of a photographing method according to an embodiment of the present application.
Fig. 3 is another schematic flow chart of the photographing method according to the embodiment of the present application.
Fig. 4 is a schematic structural diagram of a photographing device according to an embodiment of the present application.
Fig. 5 is another schematic structural diagram of a photographing device according to an embodiment of the present application.
Fig. 6 is a schematic structural diagram of a mobile terminal according to an embodiment of the present application.
Fig. 7 is another schematic structural diagram of a mobile terminal according to an embodiment of the present application.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. In the drawings, elements having similar structures are denoted by the same reference numerals. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the application provides a photographing method, a photographing device, a mobile terminal and a storage medium.
Referring to fig. 1, fig. 1 provides an application scenario schematic diagram of a photographing system, where the photographing system may include any one of the photographing devices provided in the embodiments of the present application, the photographing device may be integrated in a mobile terminal, and the mobile terminal may include a smart phone, a tablet computer, and other devices having a mobile communication function.
When the mobile terminal starts a shooting function, a preview image and shooting environment information are obtained, a target scene is determined from a plurality of preset scenes according to the preview image and the shooting environment information, then description sentences corresponding to the target scene are determined, and then a photo is generated according to the description sentences and the preview image.
For example, as shown in fig. 1, the mobile terminal starts the shooting function and a preview image is displayed on the terminal screen, with button A as the shooting button. The mobile terminal then acquires the preview image and the shooting environment information, determines a target scene from them, and determines a description sentence according to the target scene. When the user presses button A, a photographing instruction is triggered and a photo is generated directly from the preview image and the description sentence ("cumin tree") shown at the upper left corner.
Referring to fig. 2, fig. 2 is a schematic flowchart of a photographing method according to an embodiment of the present disclosure, and as shown in fig. 2, the photographing method is applied to a mobile terminal, and the mobile terminal may be any intelligent electronic device with a mobile communication function, such as a smart phone, a tablet computer, and a smart watch. The specific flow of the photographing method provided by this embodiment may be as follows:
s101, when the mobile terminal starts a shooting function, acquiring a preview image and shooting environment information.
When the user wants to take a picture, he or she can open the camera application of the mobile terminal; once it is opened, the user interface displays the scene scanned by the camera, namely the preview image.
Specifically, the shooting environment information may be acquired through system settings of the mobile terminal, for example, the shooting environment information may be system time information, system date information, system location information, or the like; in addition, the shooting environment information may also be acquired by some software, for example, weather information is acquired by weather software.
And S102, determining a target scene from a plurality of preset scenes according to the preview image and the shooting environment information.
In some embodiments, step S102 may specifically include the following sub-steps:
1-1, processing the preview image through the trained model to obtain a plurality of photographic subject types in the preview image.
Specifically, the trained model may be a neural network model, and the most common function of the neural network model is object recognition, so that the preview image may be recognized through the neural network model to recognize whether a scene or a person exists in the preview image, for example, the type of the shot object may be a mountain, a lake, a sea, or the like.
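How the model's raw outputs become the "plurality of shooting object types" is not specified; a minimal sketch, assuming the model returns one confidence score per label, might simply threshold them. The threshold value and function name are illustrative assumptions.

```python
def recognize_object_types(scores, labels, threshold=0.3):
    """Keep every label whose confidence meets the threshold, so a single
    preview image can yield several object types (e.g. mountain AND lake)."""
    return {label for label, s in zip(labels, scores) if s >= threshold}
```

A multi-label threshold (rather than taking only the top class) matches the text's requirement that one preview image yields a plurality of object types.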
1-2, determining at least one scene to be screened matched with the type of the shooting object and the shooting environment information and the matching weight of each scene to be screened from a plurality of preset scenes;
in some embodiments, step 1-2 may specifically include the following sub-steps:
1-2-1, determining a first keyword group corresponding to each preset scene in the plurality of preset scenes, wherein each first keyword group comprises at least one first keyword.
For example, the preset scene may include a peak of a mountain, and the first key phrase corresponding to the scene is a mountain, a tree, a cloud, or the like.
1-2-2, determining first keywords matched with the shooting object type and the shooting environment information in each first keyword group as matching keywords, and determining the number of words of the matching keywords in each first keyword group.
In some embodiments, the shooting environment information may include a shooting location, a shooting time, and a shooting weather, and the step "determining the first keyword matched with the shooting object type and the shooting environment information in each first keyword group" may specifically include:
1-2-2-1, the shooting place, the shooting time, the shooting weather and the shooting object type are taken as second keywords.
Specifically, the shooting place can be determined through system position information, the shooting time through the system time, and the shooting weather through weather software, while the type of the shot object can be determined through the trained model.
For example, if the shooting location is Yulong snow mountain, the shooting time is 18:00, the shooting weather is snowing, and the shooting object type is mountain, then the second keyword may be: yulong snow mountain, evening, snow and mountain.
In some embodiments, when the above-mentioned type of the photographic subject includes a person and there are a plurality of persons, it is necessary to determine the relationship between the persons and determine the relationship between the persons as keywords, such as mother and daughter, father and daughter, and the like.
Specifically, the person relationship may be determined by means of a preset database, for example, a face image and a relationship network are stored in the database, and then the person relationship is determined by means of face recognition.
1-2-2-2, determining a first keyword matched with the second keyword in the first keyword group to obtain the first keyword matched with the shooting object type and the shooting environment information.
In some embodiments, a first keyword in the first keyword group, which is the same as the second keyword, may be determined as the first keyword that matches the photographic subject type and the photographic environment information. For example, the second keywords include yulong snow mountain, evening, snow and mountain, and the first keywords included in the first keyword group include evening, mountain and tree, so that the first keywords matching the shooting object type and the shooting environment information are evening and mountain.
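Since a match here is defined as identity between a first keyword and a second keyword, the matching step reduces to a set intersection. The sketch below assumes keywords are plain strings; the representation is illustrative.

```python
def matching_keywords(first_keyword_group, second_keywords):
    # A first keyword "matches" exactly when it also appears among the
    # second keywords derived from the object types and environment info.
    return set(first_keyword_group) & set(second_keywords)
```

Running the text's own example, the group {evening, mountain, tree} against the second keywords {yulong snow mountain, evening, snow, mountain} yields the two matching keywords evening and mountain.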
1-2-3, determining at least one scene to be screened and the matching weight of each scene to be screened from a plurality of preset scenes according to the matching keywords and the number of words.
In some embodiments, step 1-2-3 may specifically include the following sub-steps:
1-2-3-1, determining the preset scenes with the number of words larger than the preset number as the scenes to be screened.
Specifically, the preset number may be 3, and then the preset scene in which the number of words matching the keyword is greater than 3 in the plurality of preset scenes is determined as the scene to be screened.
1-2-3-2, acquiring a preset weight of each matched keyword;
specifically, each matching keyword may have different preset weights in different preset scenes, for example, in a snow mountain scene, there are two matching keywords, which are snow and mountain respectively, and the preset weights of the two matching keywords are 0.5, but in a common high mountain scene, there are three matching keywords, which are mountain, tree and cloud respectively, and then the preset weights of the three matching keywords may be 0.4, 0.4 and 0, 2 respectively.
In this embodiment, the preset weight of each matching keyword is obtained, and the preset weight is used for determining the matching weight of each scene to be screened in the subsequent steps.
And 1-2-3-3, determining the matching weight of each scene to be screened according to the preset weight and the matching keywords corresponding to each scene to be screened.
In this embodiment, the preset weights of all the matching keywords corresponding to each scene to be screened may be accumulated, and the value obtained after accumulation is used as the matching weight.
And 1-3, taking the scene to be screened with the highest matching weight as a target scene.
Specifically, the matching weight represents the matching degree, so that the scene to be screened with the highest matching weight is the target scene which is most matched with the preview image and the shooting environment information, and the description sentence determined according to the target scene is more accurate and more accords with the mind of the user.
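Steps 1-2-3-1 through 1-3 combine into a filter-then-score pass. The sketch below assumes each preset scene maps its keywords to per-scene preset weights; that representation and the function name are hypothetical.

```python
def select_target_scene(preset_scenes, second_keywords, preset_count=3):
    """Filter scenes by matched-keyword count, score the survivors by the
    sum of their matched keywords' preset weights, return the best one."""
    candidates = {}
    for scene, keyword_weights in preset_scenes.items():
        matched = set(keyword_weights) & set(second_keywords)
        if len(matched) > preset_count:  # step 1-2-3-1: word-count filter
            # step 1-2-3-3: accumulate the matched keywords' preset weights
            candidates[scene] = sum(keyword_weights[k] for k in matched)
    # step 1-3: the scene with the highest matching weight is the target
    # (None if no scene passed the filter)
    return max(candidates, key=candidates.get) if candidates else None
```

Note that the filter and the score serve different purposes: the word count gates whether a scene is considered at all, while the accumulated preset weights rank the surviving candidates.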
And S103, determining a description sentence corresponding to the target scene.
In some embodiments, each scene to be screened corresponds to a plurality of descriptive sentences; any one of them may be selected here, and in a subsequent step the user may choose another sentence according to his or her mood.
And S104, generating a photo according to the descriptive statement and the preview image.
In some embodiments, step S104 may specifically include the following sub-steps:
2-1, displaying the descriptive sentence and the preview image in the preview interface.
Specifically, when the descriptive sentence is displayed on the preview interface, it should be located at an edge or corner of the preview image, so that the user can judge the shooting effect and can conveniently modify the descriptive sentence later according to his or her mind.
And 2-2, detecting whether the user modifies the description sentence, if so, executing the step 2-3, and if not, executing the step 2-5.
In this embodiment, the step 2-2 may specifically include:
2-2-1, determining a display area of the descriptive sentence in the preview interface.
Specifically, the display area is determined so that subsequent steps can tell whether the descriptive sentence has been selected.
And 2-2-3, detecting whether the display area is pressed and whether the pressing time exceeds the preset duration; if so, executing step 2-2-4, and if not, executing step 2-2-6.
In some embodiments, the preset duration may be 3 seconds. Specifically, whether the user mistakenly touches or really wants to select the descriptive statement is distinguished by pressing a preset time length.
2-2-4, displaying an edit bar.
Specifically, the edit bar may be displayed in the lowermost region of the terminal screen, and the edit bar may include a rotation operation, a modification operation, a deletion operation, and the like.
2-2-5, when the user edits the descriptive statement through the edit bar, judging that the user modifies the descriptive statement.
For example, if the user performs a rotation operation or a modification operation on the descriptive sentence, it is determined that the user has modified the descriptive sentence.
2-2-6, when the display area is not pressed, or the pressing time does not exceed the preset time length, or the user does not edit the descriptive sentence through the edit bar, judging that the user does not modify the descriptive sentence.
In some embodiments, the user may select the descriptive sentence, but does not perform the editing operation, or after performing the editing operation, the original effect is considered to be better, and the editing operation is not saved, and it is determined that the user has not modified the descriptive sentence.
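The long-press test in steps 2-2-3 and 2-2-4 above is a simple predicate. In the sketch below, the 3-second value comes from the text's example; the function and parameter names are assumptions.

```python
PRESET_DURATION = 3.0  # seconds; the example value given in the text

def should_show_edit_bar(pressed_in_display_area, press_duration):
    # A long press inside the sentence's display area is treated as an
    # intentional selection; anything shorter is likely an accidental touch.
    return pressed_in_display_area and press_duration > PRESET_DURATION
```

Requiring both conditions is what lets the terminal distinguish a deliberate selection of the descriptive sentence from a stray tap elsewhere on the preview.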
And 2-3, updating the description statement according to the modification operation.
For example, the description sentence is "three or two branches of the peach blossom outside the bamboo, the duck in the spring river is known first", the user modifies the description sentence into "three or two branches of the peach blossom outside the bamboo, the duck in the spring lake is known first" through modification operation, the modified description sentence is used as a new description sentence, and the updated description sentence is stored.
And 2-4, when a photographing instruction is detected, generating a photo according to the preview image and the updated description statement.
Specifically, through the above steps, the user confirms or modifies the shooting effect, and therefore, when a shooting instruction is detected, the preview image and the updated description sentence are directly generated into a picture. Further, descriptive sentences may also be saved in the shooting attributes of the photograph.
In some embodiments, when a photographing instruction is detected, two photos may be generated: one is the preview image alone, i.e. the original image; the other is generated from the preview image and the updated descriptive sentence.
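The two-photo behavior can be sketched as follows; the dictionary representation of a photo and the function name are purely illustrative.

```python
def generate_photos(preview_image, sentence, keep_original=True):
    """On a photographing instruction, return the photos to store: the
    captioned photo, optionally preceded by the untouched original."""
    captioned = {"pixels": preview_image, "caption": sentence}
    if keep_original:
        return [{"pixels": preview_image, "caption": None}, captioned]
    return [captioned]
```

Saving the uncaptioned original alongside the captioned copy means the user never loses the plain photo if the generated sentence later seems unwanted.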
As can be seen from the above, the photographing method provided by this embodiment is applied to a mobile terminal. When the photographing function of the mobile terminal is started, a preview image and shooting environment information are obtained; a target scene is determined from a plurality of preset scenes according to the preview image and the shooting environment information; a description sentence corresponding to the target scene is determined; and a photo is then generated according to the description sentence and the preview image. A photo carrying a description sentence is thereby produced, the description sentence being matched with the photographed object and the shooting environment.
Referring to fig. 3, fig. 3 is another schematic flow chart of a photographing method according to an embodiment of the present application, where the photographing method is applied to a mobile terminal, and the mobile terminal may be any intelligent electronic device with a mobile communication function, such as a smart phone, a tablet computer, and a smart watch. The specific flow of the photographing method provided by this embodiment may be as follows:
s201, when the mobile terminal starts a shooting function, acquiring a preview image and shooting environment information.
When the user wants to take a picture, he or she can open the camera application of the mobile terminal; once it is opened, the user interface displays the scene scanned by the camera, namely the preview image.
Specifically, the shooting environment information may be acquired through system settings of the mobile terminal, for example, the shooting environment information may be system time information, system date information, system location information, or the like; in addition, the shooting environment information may also be acquired by some software, for example, weather information is acquired by weather software.
S202, processing the preview image through the trained model to obtain a plurality of shooting object types in the preview image.
Specifically, the trained model may be a neural network model, and the most common function of the neural network model is object recognition, so that the preview image may be recognized through the neural network model to recognize whether a scene or a person exists in the preview image, for example, the type of the shot object may be a mountain, a lake, a sea, or the like.
S203, determining a first keyword group corresponding to each preset scene in the plurality of preset scenes, wherein each first keyword group comprises at least one first keyword.
Specifically, the preset scene may include a peak of a mountain, and the first key phrase corresponding to the scene is a mountain, a tree, a cloud, or the like.
And S204, determining a second keyword according to the shooting environment information and the type of the shooting object.
In some embodiments, the photographing environment information may include a photographing place, a photographing time, and photographing weather, and the photographing place, the photographing time, the photographing weather, and the photographing object type may be used as the second keyword.
Specifically, the shooting place can be determined through system position information, the shooting time through the system time, and the shooting weather through weather software, while the type of the shot object can be determined through the trained model.
For example, if the shooting location is Yulong snow mountain, the shooting time is 18:00, the shooting weather is snowing, and the shooting object type is mountain, then the second keyword may be: yulong snow mountain, evening, snow and mountain.
In some embodiments, when the above-mentioned type of the photographic subject includes a person and there are a plurality of persons, it is necessary to determine the relationship between the persons and determine the relationship between the persons as keywords, such as mother and daughter, father and daughter, and the like.
Specifically, the person relationship may be determined by means of a preset database, for example, a face image and a relationship network are stored in the database, and then the person relationship is determined by means of face recognition.
S205, determining the first keywords in each first keyword group that match the second keywords as matching keywords, thereby obtaining the first keywords matched with the shooting object type and the shooting environment information, and determining the number of matching keywords in each first keyword group.
In some embodiments, a first keyword in the first keyword group, which is the same as the second keyword, may be determined as the first keyword that matches the photographic subject type and the photographic environment information. For example, the second keywords include yulong snow mountain, evening, snow and mountain, and the first keywords included in the first keyword group include evening, mountain and tree, so that the first keywords matching the shooting object type and the shooting environment information are evening and mountain.
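The matching in step S205 amounts to intersecting each first keyword group with the second keywords; a minimal sketch, using the Yulong snow mountain example above, is:

```python
def matching_keywords(first_keyword_group, second_keywords):
    """Return the first keywords that also occur among the second keywords,
    together with their count (the number of matching keywords)."""
    second = set(second_keywords)
    matched = [kw for kw in first_keyword_group if kw in second]
    return matched, len(matched)

second_keywords = ["Yulong snow mountain", "evening", "snow", "mountain"]
first_keyword_group = ["evening", "mountain", "tree"]
matched, count = matching_keywords(first_keyword_group, second_keywords)
# matched is ["evening", "mountain"] and count is 2
```

Running this over every first keyword group yields, for each preset scene, the matching keywords and their count used in the following steps.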
S206, determining the preset scenes in which the number of matching keywords is greater than a preset number as scenes to be screened.
Specifically, the preset number may be 3; the preset scenes, among the plurality of preset scenes, in which the number of matching keywords is greater than 3 are then determined as the scenes to be screened.
And S207, acquiring the preset weight of each matched keyword.
Specifically, each matching keyword may have different preset weights in different preset scenes. For example, in a snow mountain scene there are two matching keywords, snow and mountain, and the preset weight of each is 0.5; in a common high mountain scene there are three matching keywords, mountain, tree, and cloud, whose preset weights may be 0.4, 0.4, and 0.2 respectively.
In this embodiment, the preset weight of each matching keyword is obtained, and the preset weight is used for determining the matching weight of each scene to be screened in the subsequent steps.
And S208, determining the matching weight of each scene to be screened according to the preset weight and the matching keywords corresponding to each scene to be screened.
In this embodiment, the preset weights of all the matching keywords corresponding to each scene to be screened may be accumulated, and the value obtained after accumulation is used as the matching weight.
And S209, taking the scene to be screened with the highest matching weight as a target scene.
Specifically, the matching weight represents the degree of matching, so the scene to be screened with the highest matching weight is the target scene that best matches the preview image and the shooting environment information, and the descriptive sentence determined according to this target scene is more accurate and better fits the user's intent.
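Steps S206 through S209 — filtering by matching-keyword count and scoring by accumulated preset weights — can be sketched as one function. The scene table and weights below are illustrative assumptions, not values fixed by the method:

```python
def select_target_scene(preset_scenes, second_keywords, preset_number=3):
    """preset_scenes maps a scene name to {keyword: preset weight}. Scenes whose
    matching-keyword count exceeds preset_number become scenes to be screened;
    each is scored by summing the preset weights of its matching keywords, and
    the highest-scoring scene is returned as the target scene."""
    candidates = {}
    for scene, weights in preset_scenes.items():
        matched = [kw for kw in weights if kw in second_keywords]
        if len(matched) > preset_number:
            candidates[scene] = sum(weights[kw] for kw in matched)
    return max(candidates, key=candidates.get) if candidates else None

preset_scenes = {
    "snow mountain": {"Yulong snow mountain": 0.2, "evening": 0.2,
                      "snow": 0.3, "mountain": 0.3},
    "high mountain": {"mountain": 0.4, "tree": 0.4, "cloud": 0.2},
}
second_keywords = {"Yulong snow mountain", "evening", "snow", "mountain"}
target = select_target_scene(preset_scenes, second_keywords)
# "snow mountain" matches 4 keywords (> 3) and accumulates the highest
# weight; "high mountain" matches only 1 and is filtered out
```

With these sample values the target scene is "snow mountain", whose descriptive sentences are then used in step S210.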
And S210, determining a description sentence corresponding to the target scene.
In some embodiments, each scene to be screened corresponds to a plurality of descriptive sentences; any one of them may be selected as the determined sentence, and in the subsequent steps the user may select another descriptive sentence according to his or her mood.
And S211, displaying the descriptive sentence and the preview image in a preview interface.
Specifically, when the descriptive sentence is displayed on the preview interface, it should be located at the edge or in a corner of the preview image, so that the user can assess the shooting effect and subsequently modify the descriptive sentence as desired.
S212, detecting whether the user modifies the description sentence, if so, executing step S213, and if not, executing step S215.
In this embodiment, step S212 may specifically include the following sub-steps:
Substep A: determining a display area of the descriptive sentence in the preview interface.
Specifically, the display area is used in the subsequent step to determine whether the descriptive sentence has been selected.
Substep B: detecting whether the display area is pressed and whether the pressing time exceeds a preset duration; if so, executing substep C, otherwise executing substep E.
In some embodiments, the preset duration may be 3 seconds. Specifically, requiring the press to last the preset duration distinguishes an accidental touch from a deliberate selection of the descriptive sentence.
Substep C: displaying an edit bar.
Specifically, the edit bar may be displayed in the lowermost region of the terminal screen, and the edit bar may include a rotation operation, a modification operation, a deletion operation, and the like.
Substep D: when the user edits the descriptive sentence through the edit bar, judging that the user has modified the descriptive sentence.
For example, if the user performs a rotation operation or a modification operation on the descriptive sentence, it is determined that the user has modified it.
Substep E: when the display area is not pressed, or the pressing time does not exceed the preset duration, or the user does not edit the descriptive sentence through the edit bar, judging that the user has not modified the descriptive sentence.
In some embodiments, the user may select the descriptive sentence but perform no editing operation, or may perform an editing operation, decide the original effect is better, and not save the edit; in either case it is determined that the user has not modified the descriptive sentence.
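The long-press discrimination in substeps B through E can be sketched as a small decision function; the 3-second threshold follows the example above, and the return values are illustrative labels rather than platform APIs:

```python
def handle_press(in_display_area, press_seconds, preset_duration=3.0):
    """Decide whether a press on the sentence's display area is a deliberate
    selection (show the edit bar) or an accidental touch (ignore it)."""
    if in_display_area and press_seconds > preset_duration:
        return "show_edit_bar"
    return "ignore"

# A 4-second press inside the display area brings up the edit bar,
# while a quick tap or a press outside the area is ignored.
```

Whether the user then actually edits through the edit bar decides between steps S213 and S215.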
S213, updating the description statement according to the modification operation.
For example, the descriptive sentence is "Beyond the bamboo, two or three branches of peach blossom; when the spring river warms, the ducks are the first to know". The user modifies it through the modification operation into "Beyond the bamboo, two or three branches of peach blossom; when the spring lake warms, the ducks are the first to know"; the modified sentence is taken as the new descriptive sentence, and the updated descriptive sentence is stored.
And S214, when the photographing instruction is detected, generating a photo according to the preview image and the updated description statement.
Specifically, through the above steps, the user confirms or modifies the shooting effect, and therefore, when a shooting instruction is detected, the preview image and the updated description sentence are directly generated into a picture. Further, descriptive sentences may also be saved in the shooting attributes of the photograph.
In some embodiments, when a photographing instruction is detected, two photos may be generated: one is the preview image itself, i.e., the original image; the other is generated from the preview image and the updated descriptive sentence.
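The shutter-time behavior above can be sketched as follows. The Photo structure is a hypothetical stand-in for the terminal's real image objects; as the text notes, the descriptive sentence may also be kept in the photo's shooting attributes:

```python
from dataclasses import dataclass, field

@dataclass
class Photo:
    image: bytes                                     # encoded image data
    attributes: dict = field(default_factory=dict)   # shooting attributes

def on_shutter(preview_image, descriptive_sentence, keep_original=True):
    """On the photographing instruction, generate the annotated photo and,
    optionally, the unmodified original alongside it."""
    annotated = Photo(image=preview_image,
                      attributes={"description": descriptive_sentence})
    photos = [annotated]
    if keep_original:
        photos.append(Photo(image=preview_image))    # the original image
    return photos
```

In a real implementation the descriptive sentence would additionally be rendered onto the image pixels; here only the bookkeeping of the two generated photos is shown.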
S215, when the photographing instruction is detected, generating a photo according to the preview image and the description sentence.
Specifically, if the user does not modify the descriptive statement, the descriptive statement is in accordance with the shooting condition at that time and in accordance with the mood of the user, and the preview image and the descriptive statement are generated into a photo.
As can be seen from the above, the photographing method provided in this embodiment is applied to a mobile terminal and includes: when the mobile terminal starts the shooting function, acquiring a preview image and shooting environment information; processing the preview image through the trained model to obtain a plurality of shooting object types in the preview image; determining a first keyword group corresponding to each of a plurality of preset scenes, the first keyword group including at least one first keyword; determining second keywords according to the shooting environment information and the shooting object types; determining the first keywords in each first keyword group that match the second keywords as matching keywords, and determining the number of matching keywords in each first keyword group; determining the preset scenes whose number of matching keywords is greater than a preset number as scenes to be screened; acquiring the preset weight of each matching keyword; determining the matching weight of each scene to be screened according to the preset weights of its corresponding matching keywords; taking the scene to be screened with the highest matching weight as the target scene; determining a descriptive sentence corresponding to the target scene and displaying the descriptive sentence and the preview image on the preview interface; and detecting whether the user modifies the descriptive sentence — if so, updating the descriptive sentence according to the modification operation and, when a photographing instruction is detected, generating a photo from the preview image and the updated descriptive sentence; if not, when a photographing instruction is detected, generating the photo from the preview image and the descriptive sentence.
A photo is thus generated carrying a descriptive sentence matched with the shooting object and the shooting environment, so the photo content is not monotonous; even after time has passed, the descriptive sentence serves as a reminder when the user reviews the photo, making it easy to recall the good memories of that moment and increasing the keepsake value of the photo.
According to the method described in the foregoing embodiment, the embodiment will be further described from the perspective of a photographing device, which may be specifically implemented as an independent entity or integrated in a mobile terminal, where the mobile terminal may include a mobile phone, a tablet computer, and the like.
Referring to fig. 4, fig. 4 specifically illustrates that the photographing apparatus provided in the embodiment of the present application is applied to a mobile terminal, and the mobile terminal may be any intelligent electronic device with a mobile communication function, such as a smart phone, a tablet computer, a notebook computer, and the like. The photographing apparatus may include: an obtaining module 10, a first determining module 20, a second determining module 30 and a generating module 40, wherein:
(1) acquisition module 10
The acquisition module 10 is used for acquiring the preview image and the shooting environment information when the mobile terminal starts the shooting function.
When the user wants to take a picture, the shooting software of the mobile terminal can be opened, and when the shooting software is opened, the user interface can display a scene scanned by the camera, namely a preview image.
Specifically, the shooting environment information may be acquired through system settings of the mobile terminal, for example, the shooting environment information may be system time information, system date information, system location information, or the like; in addition, the shooting environment information may also be acquired by some software, for example, weather information is acquired by weather software.
(2) First determination module 20
And a first determining module 20, configured to determine a target scene from a plurality of preset scenes according to the preview image and the shooting environment information.
Referring to fig. 5, fig. 5 is another schematic structural diagram of a photographing apparatus according to an embodiment of the present disclosure, in some embodiments, the first determining module 20 includes a processing sub-module 21, a third determining sub-module 22, and a target sub-module 23:
the processing submodule 21 is configured to process the preview image through the trained model to obtain multiple shooting object types in the preview image;
specifically, the trained model may be a neural network model, whose most common function is object recognition; the preview image can therefore be fed to the neural network model to identify whether scenery or persons are present in the preview image. For example, the shooting object type may be a mountain, a lake, a sea, or the like.
And a third determining submodule 22, configured to determine at least one scene to be filtered that matches the shooting object type and the shooting environment information, and a matching weight of each scene to be filtered, from among the plurality of preset scenes.
In some embodiments, the third determination submodule 22 may include a fourth determination unit 221, a fifth determination unit 222, and a sixth determination unit 223:
the fourth determining unit 221 is configured to determine a first keyword group corresponding to each preset scene in the multiple preset scenes, where the first keyword group includes at least one first keyword.
Specifically, the preset scenes may include a high mountain scene, and the first keyword group corresponding to this scene includes mountain, tree, cloud, and the like.
The fifth determining unit 222 is configured to determine, as a matching keyword, a first keyword in each first keyword group, which matches the shooting object type and the shooting environment information, and determine the number of words of the matching keyword in each first keyword group.
In some embodiments, the shooting environment information includes a shooting location, a shooting time, and shooting weather, and the fifth determining unit 222 is specifically configured to:
taking a shooting place, shooting time, shooting weather and a shooting object type as second keywords;
and determining a first keyword matched with the second keyword in the first keyword group to obtain the first keyword matched with the shooting object type and the shooting environment information.
Specifically, the shooting place can be determined through the system position information, the shooting time through the system time, and the shooting weather through weather software; meanwhile, the shooting object type can be determined through the trained model.
For example, if the shooting location is Yulong snow mountain, the shooting time is 18:00, the shooting weather is snowing, and the shooting object type is mountain, then the second keyword may be: yulong snow mountain, evening, snow and mountain.
In some embodiments, when the shooting object types include persons and there are a plurality of persons, it is necessary to determine the relationships between the persons and use those relationships as keywords, such as mother and daughter or father and daughter.
Specifically, the person relationship may be determined by means of a preset database, for example, a face image and a relationship network are stored in the database, and then the person relationship is determined by means of face recognition.
In some embodiments, a first keyword in the first keyword group, which is the same as the second keyword, may be determined as the first keyword that matches the photographic subject type and the photographic environment information. For example, the second keywords include yulong snow mountain, evening, snow and mountain, and the first keywords included in the first keyword group include evening, mountain and tree, so that the first keywords matching the shooting object type and the shooting environment information are evening and mountain.
A sixth determining unit 223, configured to determine at least one scene to be screened from a plurality of preset scenes according to the matching keyword and the number of words, and a matching weight of each scene to be screened.
In some embodiments, the sixth determining unit 223 is specifically configured to:
determining the preset scenes of which the number of matching keywords is greater than the preset number as the scenes to be screened;
acquiring a preset weight of each matched keyword;
and determining the matching weight of each scene to be screened according to the preset weight and the matching keywords corresponding to each scene to be screened.
Specifically, the preset number may be 3; the preset scenes, among the plurality of preset scenes, in which the number of matching keywords is greater than 3 are then determined as the scenes to be screened.
Specifically, each matching keyword may have different preset weights in different preset scenes. For example, in a snow mountain scene there are two matching keywords, snow and mountain, and the preset weight of each is 0.5; in a common high mountain scene there are three matching keywords, mountain, tree, and cloud, whose preset weights may be 0.4, 0.4, and 0.2 respectively.
In this embodiment, the preset weight of each matching keyword is obtained, and the preset weight is used for determining the matching weight of each scene to be screened in the subsequent steps.
Specifically, the preset weights of all the matching keywords corresponding to each scene to be screened may be accumulated, and the value obtained after accumulation is used as the matching weight.
And the target submodule 23 is configured to use the scene to be screened with the highest matching weight as the target scene.
Specifically, the matching weight represents the degree of matching, so the scene to be screened with the highest matching weight is the target scene that best matches the preview image and the shooting environment information, and the descriptive sentence determined according to this target scene is more accurate and better fits the user's intent.
(3) Second determination module 30
And a second determining module 30, configured to determine a descriptive statement corresponding to the target scene.
In some embodiments, each scene to be screened corresponds to a plurality of descriptive sentences; any one of them may be selected as the determined sentence, and in the subsequent steps the user may select another descriptive sentence according to his or her mood.
(4) Generating module 40
And the generating module 40 is used for generating a photo according to the descriptive sentence and the preview image.
In some embodiments, the generating module 40 may be specifically configured to:
displaying the descriptive sentence and the preview image in a preview interface;
detecting whether the user modifies the descriptive statement;
if so, updating the description statement according to the modification operation;
and when a photographing instruction is detected, generating a photo according to the preview image and the modified descriptive statement.
Specifically, when the descriptive sentence is displayed on the preview interface, it should be located at the edge or in a corner of the preview image, so that the user can assess the shooting effect and subsequently modify the descriptive sentence as desired.
In some embodiments, the generation module may be specifically configured to:
determining a display area of the descriptive sentence in the preview interface;
detecting whether the display area is pressed and whether the pressing time exceeds a preset duration;
if the display area is pressed and the pressing time exceeds the preset time, displaying an edit bar;
when the user edits the descriptive statement through the edit bar, judging that the user modifies the descriptive statement;
and when the display area is not pressed, or the pressing time does not exceed the preset time length, or the user does not edit the descriptive sentence through the edit bar, judging that the user does not modify the descriptive sentence.
Specifically, the determination display area is used for determining whether the descriptive sentence is selected in the subsequent step. The preset time length can be 3 seconds, and whether the user mistakenly touches or really selects the descriptive statement is distinguished by pressing the preset time length. The edit bar may be displayed in the lowermost region of the terminal screen, and the edit bar may include a rotation operation, a modification operation, a deletion operation, and the like. If the user performs a rotation operation, a modification operation, or the like on the descriptive sentence, it is determined that the user has modified the descriptive sentence. In some embodiments, the user may select the descriptive sentence, but does not perform the editing operation, or after performing the editing operation, the original effect is considered to be better, and the editing operation is not saved, and it is determined that the user has not modified the descriptive sentence.
For example, the descriptive sentence is "Beyond the bamboo, two or three branches of peach blossom; when the spring river warms, the ducks are the first to know". The user modifies it through the modification operation into "Beyond the bamboo, two or three branches of peach blossom; when the spring lake warms, the ducks are the first to know"; the modified sentence is taken as the new descriptive sentence, and the updated descriptive sentence is stored.
Specifically, through the above steps, the user confirms or modifies the shooting effect, and therefore, when a shooting instruction is detected, the preview image and the updated descriptive sentence are directly generated into a picture, and furthermore, the descriptive sentence can be saved in the shooting attribute of the picture.
In some embodiments, when a photographing instruction is detected, two photos may be generated: one is the preview image itself, i.e., the original image; the other is generated from the preview image and the updated descriptive sentence.
In specific implementation, the above modules may be implemented as independent entities, or may be combined arbitrarily to be implemented as the same or several entities, and specific implementation of the above modules may refer to the foregoing method embodiments, which are not described herein again.
As can be seen from the above description, the photographing apparatus provided in this embodiment is applied to a mobile terminal, and when the mobile terminal starts a photographing function, the obtaining module 10 obtains a preview image and photographing environment information, then the first determining module 20 determines a target scene from a plurality of preset scenes according to the preview image and the photographing environment information, the second determining module 30 determines a description sentence corresponding to the target scene, and then the generating module 40 generates a photo according to the description sentence and the preview image, so as to generate a photo with the description sentence, and the description sentence is matched with a photographing object and a photographing environment.
In addition, the embodiment of the application further provides a mobile terminal, and the mobile terminal can be a smart phone, a tablet computer and other devices. As shown in fig. 6, the mobile terminal 500 includes a processor 501, a memory 502. The processor 501 is electrically connected to the memory 502.
The processor 501 is a control center of the mobile terminal 500, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by running or loading an application stored in the memory 502 and calling data stored in the memory 502, thereby integrally monitoring the mobile terminal.
In this embodiment, the processor 501 in the mobile terminal 500 loads instructions corresponding to processes of one or more application programs into the memory 502 according to the following steps, and the processor 501 runs the application programs stored in the memory 502, so as to implement various functions:
when the mobile terminal starts a shooting function, acquiring a preview image and shooting environment information;
determining a target scene from a plurality of preset scenes according to the preview image and the shooting environment information;
determining a description sentence corresponding to a target scene;
and generating a picture according to the descriptive sentence and the preview image.
Fig. 7 is a block diagram illustrating a specific structure of a mobile terminal according to an embodiment of the present application, where the mobile terminal may be used to implement the photographing method provided in the foregoing embodiment. The mobile terminal 300 may be a smart phone or a tablet computer.
The RF circuit 310 is used for receiving and transmitting electromagnetic waves and performs interconversion between electromagnetic waves and electrical signals, thereby communicating with a communication network or other devices. The RF circuitry 310 may include various existing circuit elements for performing these functions, such as an antenna, a radio frequency transceiver, a digital signal processor, an encryption/decryption chip, a Subscriber Identity Module (SIM) card, memory, and so forth. The RF circuit 310 may communicate with various networks such as the internet, an intranet, or a wireless network, or with other devices over a wireless network. The wireless network may comprise a cellular telephone network, a wireless local area network, or a metropolitan area network. The wireless network may use various communication standards, protocols, and technologies, including but not limited to Global System for Mobile Communications (GSM), Enhanced Data rates for GSM Evolution (EDGE), Wideband Code Division Multiple Access (WCDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11b, IEEE 802.11g and/or IEEE 802.11n), Voice over Internet Protocol (VoIP), Worldwide Interoperability for Microwave Access (WiMAX), protocols for short message communication, any other suitable communication protocol, and even protocols that have not yet been developed.
The memory 320 can be used for storing software programs and modules, such as the photographing method and the corresponding program instructions/modules in the above embodiments, and the processor 380 executes various functional applications and data processing, i.e., realizes the communication data saving function, by running the software programs and modules stored in the memory 320. The memory 320 may include high speed random access memory and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 320 may further include memory located remotely from the processor 380, which may be connected to the mobile terminal 300 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input unit 330 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control. In particular, the input unit 330 may include a touch-sensitive surface 331 as well as other input devices 332. The touch-sensitive surface 331, also referred to as a touch screen or touch pad, may collect touch operations by a user on or near the touch-sensitive surface 331 (e.g., operations by a user on or near the touch-sensitive surface 331 using a finger, a stylus, or any other suitable object or attachment), and drive the corresponding connection device according to a predetermined program. Alternatively, the touch sensitive surface 331 may comprise two parts, a touch detection means and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 380, and can receive and execute commands sent by the processor 380. In addition, the touch-sensitive surface 331 may be implemented using various types of resistive, capacitive, infrared, and surface acoustic waves. The input unit 330 may comprise other input devices 332 in addition to the touch sensitive surface 331. In particular, other input devices 332 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 340 may be used to display information input by or provided to a user, as well as various graphical user interfaces of the mobile terminal 300, which may be composed of graphics, text, icons, video, and any combination thereof. The display unit 340 may include a display panel 341; optionally, the display panel 341 may be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode) display, or the like. Further, the touch-sensitive surface 331 may overlay the display panel 341; upon the touch-sensitive surface 331 detecting a touch operation on or near it, the touch operation is communicated to the processor 380 to determine the type of the touch event, and the processor 380 then provides a corresponding visual output on the display panel 341 according to the type of the touch event.
The mobile terminal 300 may also include at least one sensor 350, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor that may adjust the brightness of the display panel 341 according to the brightness of ambient light, and a proximity sensor that may turn off the display panel 341 and/or the backlight when the mobile terminal 300 is moved to the ear. As one of the motion sensors, the gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when the mobile phone is stationary, and can be used for applications of recognizing the posture of the mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which may be further configured on the mobile terminal 300, detailed descriptions thereof are omitted.
Audio circuitry 360, speaker 361, and microphone 362 may provide an audio interface between a user and the mobile terminal 300. The audio circuit 360 may transmit the electrical signal converted from the received audio data to the speaker 361, and the audio signal is converted by the speaker 361 and output; on the other hand, the microphone 362 converts the collected sound signal into an electrical signal, which is received by the audio circuit 360 and converted into audio data, which is then processed by the audio data output processor 380 and then transmitted to, for example, another terminal via the RF circuit 310, or the audio data is output to the memory 320 for further processing. The audio circuit 360 may also include an earbud jack to provide communication of a peripheral headset with the mobile terminal 300.
Through the transmission module 370 (e.g., a Wi-Fi module), the mobile terminal 300 provides the user with wireless broadband internet access, which may assist the user in sending and receiving e-mail, browsing web pages, accessing streaming media, and the like. Although fig. 7 shows the transmission module 370, it is understood that it does not belong to the essential constitution of the mobile terminal 300 and may be omitted entirely as needed within the scope not changing the essence of the invention.
The processor 380 is the control center of the mobile terminal 300. It connects the various parts of the entire mobile phone using various interfaces and lines, and performs the various functions of the mobile terminal 300 and processes data by running or executing software programs and/or modules stored in the memory 320 and calling data stored in the memory 320, thereby monitoring the mobile phone as a whole. Optionally, the processor 380 may include one or more processing cores; in some embodiments, the processor 380 may integrate an application processor, which primarily handles the operating system, user interfaces, and applications, and a modem processor, which primarily handles wireless communication. It will be appreciated that the modem processor need not be integrated into the processor 380.
The mobile terminal 300 also includes a power supply 390 (e.g., a battery) that provides power to the various components. In some embodiments, the power supply 390 may be logically coupled to the processor 380 via a power management system, which manages charging, discharging, and power consumption. The power supply 390 may also include one or more of a DC or AC power source, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
Although not shown, the mobile terminal 300 may further include a camera (e.g., a front camera, a rear camera), a bluetooth module, etc., which will not be described herein. Specifically, in this embodiment, the display unit of the mobile terminal is a touch screen display, the mobile terminal further includes a memory, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the one or more processors, and the one or more programs include instructions for:
when the mobile terminal starts a shooting function, acquiring a preview image and shooting environment information;
determining a target scene from a plurality of preset scenes according to the preview image and the shooting environment information;
determining a descriptive statement corresponding to the target scene;
and generating a photo according to the descriptive statement and the preview image.
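As a rough, purely illustrative sketch of the four instruction steps above (not part of the disclosure; the fake camera, the caption table, and all function names here are hypothetical stand-ins):

```python
# Hypothetical end-to-end sketch of the four-step photographing flow.
# FakeCamera stands in for the real camera and sensor stack; the scene
# decision and caption lookup are simplified placeholders.

class FakeCamera:
    """Stands in for the camera hardware and environment sensors."""
    def capture_preview(self):
        return {"pixels": "<preview-frame>"}

    def environment(self):
        # shooting place, shooting time and shooting weather
        return {"place": "park", "time": "morning", "weather": "sunny"}

def determine_target_scene(preview, env):
    # In the disclosure this step uses a trained model plus keyword
    # matching over preset scenes; here we just key off the environment.
    return "park_morning" if env["place"] == "park" else "default"

CAPTIONS = {
    "park_morning": "A fresh morning in the park",
    "default": "A lovely moment",
}

def take_photo(camera):
    preview = camera.capture_preview()            # step 1: preview image
    env = camera.environment()                    # step 1: environment info
    scene = determine_target_scene(preview, env)  # step 2: target scene
    caption = CAPTIONS[scene]                     # step 3: descriptive statement
    preview["caption"] = caption                  # step 4: generate the photo
    return preview

photo = take_photo(FakeCamera())
```

In a real implementation the caption would be rendered onto the image buffer or stored as metadata; here it is attached to a dictionary only to keep the sketch self-contained.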
In a specific implementation, the above modules may be implemented as independent entities, or combined arbitrarily and implemented as one or several entities; for the specific implementation of the above modules, reference may be made to the foregoing method embodiments, which are not described herein again.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions, or by associated hardware controlled by instructions, and that the instructions may be stored in a computer-readable storage medium and loaded and executed by a processor. To this end, an embodiment of the present application provides a storage medium storing mobile-terminal-executable instructions. When executed by a processor of a mobile terminal, the instructions perform the steps of any of the photographing methods provided by the embodiments of the present application.
The storage medium may include: a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
Since the instructions stored in the storage medium can execute the steps of any photographing method provided by the embodiments of the present application, they can achieve the beneficial effects achievable by any such photographing method, which are detailed in the foregoing embodiments and not described herein again.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
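The scene-determination step above, which claims 2 to 4 below recite as keyword matching with per-keyword weights, can be illustrated with a short hypothetical sketch. All scene data, keyword sets, and weights here are invented for illustration and are not from the disclosure:

```python
# Illustrative sketch (not the patented implementation) of keyword-based
# scene selection: each preset scene has a first keyword group; keywords
# matching the shooting-object types and environment info are counted,
# scenes above a word-count threshold are kept as "scenes to be screened",
# and the candidate with the highest summed keyword weight wins.

PRESET_SCENES = {
    "beach_sunset": {"keywords": {"beach", "sea", "sunset", "evening"}},
    "city_night":   {"keywords": {"building", "night", "neon"}},
    "park_day":     {"keywords": {"tree", "grass", "daytime", "sunny"}},
}
KEYWORD_WEIGHTS = {"sea": 2.0, "sunset": 2.0, "beach": 1.5}  # default 1.0

def pick_target_scene(object_types, environment, min_matches=2):
    # Second keywords: shooting place, time, weather plus detected objects
    second_keywords = set(object_types) | set(environment.values())
    best_scene, best_weight = None, float("-inf")
    for scene, info in PRESET_SCENES.items():
        matches = info["keywords"] & second_keywords  # matching keywords
        if len(matches) < min_matches:    # word-count threshold not met
            continue                      # not a scene to be screened
        weight = sum(KEYWORD_WEIGHTS.get(k, 1.0) for k in matches)
        if weight > best_weight:          # keep highest matching weight
            best_scene, best_weight = scene, weight
    return best_scene

scene = pick_target_scene(
    ["beach", "sea"],
    {"place": "coast", "time": "sunset", "weather": "sunny"})
# "beach_sunset" matches {beach, sea, sunset}, weight 1.5 + 2.0 + 2.0
```

Using set intersection keeps the matching step linear in the number of preset scenes, which matters if the scene library grows large on a mobile device.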
In summary, although the present application has been described with reference to preferred embodiments, the preferred embodiments described above are not intended to limit the present application. Those skilled in the art can make various changes and modifications without departing from the spirit and scope of the present application, and the scope of the present application shall therefore be determined by the appended claims.

Claims (10)

1. A photographing method, applied to a mobile terminal, characterized by comprising the following steps:
when the mobile terminal starts a shooting function, acquiring a preview image and shooting environment information;
determining a target scene from a plurality of preset scenes according to the preview image and the shooting environment information;
determining a descriptive statement corresponding to the target scene;
and generating a photo according to the descriptive statement and the preview image.
2. The photographing method according to claim 1, wherein the determining a target scene from a plurality of preset scenes according to the preview image and the photographing environment information specifically includes:
processing the preview image through a trained model to obtain a plurality of shooting object types in the preview image;
determining at least one scene to be screened matched with the shooting object type and the shooting environment information and the matching weight of each scene to be screened from a plurality of preset scenes;
and taking the scene to be screened with the highest matching weight as a target scene.
3. The photographing method according to claim 2, wherein the determining, from a plurality of preset scenes, at least one scene to be screened that matches the shooting object type and the shooting environment information and the matching weight of each scene to be screened specifically includes:
determining a first keyword group corresponding to each preset scene in a plurality of preset scenes, wherein the first keyword group comprises at least one first keyword;
determining the first keywords matched with the shooting object type and the shooting environment information in each first keyword group as matching keywords, and determining the number of words of the matching keywords in each first keyword group;
and determining, according to the matching keywords and the number of words, at least one scene to be screened from the plurality of preset scenes and the matching weight of each scene to be screened.
4. The photographing method according to claim 3, wherein determining at least one scene to be screened from the plurality of preset scenes and a matching weight of each scene to be screened according to the matching keywords and the number of words specifically comprises:
determining the preset scenes with the number of words larger than the preset number as scenes to be screened;
acquiring a preset weight of each matched keyword;
and determining the matching weight of each scene to be screened according to the preset weight and the matching keywords corresponding to each scene to be screened.
5. The photographing method according to claim 3, wherein the shooting environment information includes a shooting place, a shooting time, and shooting weather, and the determining of the first keyword matched with the shooting object type and the shooting environment information in each of the first keyword groups specifically includes:
taking the shooting place, the shooting time, the shooting weather and the shooting object type as second keywords;
and determining a first keyword matched with the second keyword in the first keyword group to obtain the first keyword matched with the shooting object type and the shooting environment information.
6. The photographing method according to claim 1, wherein generating a photograph according to the descriptive statement and the preview image specifically includes:
displaying the descriptive statement and the preview image in a preview interface;
detecting whether the descriptive statement is modified by a user;
if yes, updating the descriptive statement according to the modification operation;
and when a photographing instruction is detected, generating a photo according to the preview image and the updated descriptive statement.
7. The photographing method according to claim 6, wherein the detecting whether the user has modified the descriptive statement specifically includes:
determining a display area of the descriptive statement in the preview interface;
detecting whether the display area is pressed and whether the pressing time exceeds a preset time;
if the display area is pressed and the pressing time exceeds the preset time, displaying an edit bar;
when the user edits the descriptive statement through the edit bar, judging that the user modifies the descriptive statement;
and when the display area is not pressed, or the pressing time does not exceed the preset time, or the user does not edit the descriptive statement through the edit bar, judging that the user has not modified the descriptive statement.
8. A photographing device, applied to a mobile terminal, characterized in that the photographing device comprises:
the acquisition module is used for acquiring the preview image and the shooting environment information when the mobile terminal starts a shooting function;
the first determining module is used for determining a target scene from a plurality of preset scenes according to the preview image and the shooting environment information;
the second determining module is used for determining the descriptive statement corresponding to the target scene;
and the generating module is used for generating a photo according to the descriptive statement and the preview image.
9. A computer-readable storage medium having stored thereon a plurality of instructions adapted to be loaded by a processor to perform the photographing method of any one of claims 1 to 7.
10. A mobile terminal, comprising a processor and a memory, wherein the processor is electrically connected to the memory, the memory is used for storing instructions and data, and the processor is used for executing the steps of the photographing method according to any one of claims 1 to 7.
CN202010192948.7A 2020-03-18 2020-03-18 Photographing method, photographing device, storage medium and mobile terminal Pending CN111405180A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010192948.7A CN111405180A (en) 2020-03-18 2020-03-18 Photographing method, photographing device, storage medium and mobile terminal


Publications (1)

Publication Number Publication Date
CN111405180A true CN111405180A (en) 2020-07-10

Family

ID=71430954

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010192948.7A Pending CN111405180A (en) 2020-03-18 2020-03-18 Photographing method, photographing device, storage medium and mobile terminal

Country Status (1)

Country Link
CN (1) CN111405180A (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1720549A (en) * 2003-02-05 2006-01-11 精工爱普生株式会社 image processing device
CN101296293A (en) * 2008-05-30 2008-10-29 宇龙计算机通信科技(深圳)有限公司 Photo editing method, system and shooting device
CN106161935A (en) * 2016-07-12 2016-11-23 佛山杰致信息科技有限公司 A kind of photo remarks display system
CN106231198A (en) * 2016-08-17 2016-12-14 北京小米移动软件有限公司 The method and device of shooting image
CN106534688A (en) * 2016-11-18 2017-03-22 上海传英信息技术有限公司 Watermarked photo acquisition method and mobile terminal
CN108090866A (en) * 2017-12-13 2018-05-29 重庆越畅汽车科技有限公司 Method, system and the equipment of art photograph are converted to based on photo
CN109660728A (en) * 2018-12-29 2019-04-19 维沃移动通信有限公司 A kind of photographic method and device
CN110493517A (en) * 2019-08-14 2019-11-22 广州三星通信技术研究有限公司 Auxiliary shooting method of image capture device and image capture device


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111966254A (en) * 2020-08-06 2020-11-20 惠州Tcl移动通信有限公司 Image shooting method and device, storage medium and terminal
CN111966254B (en) * 2020-08-06 2022-06-10 惠州Tcl移动通信有限公司 Image shooting method and device, storage medium and terminal
CN112464053A (en) * 2020-12-04 2021-03-09 珠海格力电器股份有限公司 VR scene interaction method and device, electronic equipment and storage medium
CN115118840A (en) * 2021-03-22 2022-09-27 Oppo广东移动通信有限公司 A shooting method, device, electronic device and storage medium
CN113505259A (en) * 2021-06-28 2021-10-15 惠州Tcl云创科技有限公司 Media file labeling method, device, equipment and medium based on intelligent identification
WO2023273432A1 (en) * 2021-06-28 2023-01-05 惠州Tcl云创科技有限公司 Intelligent identification-based media file labeling method and apparatus, device, and medium
WO2023005882A1 (en) * 2021-07-29 2023-02-02 华为技术有限公司 Photographing method, photographing parameter training method, electronic device, and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200710