
CN108449546B - A kind of photographing method and mobile terminal - Google Patents


Info

Publication number
CN108449546B
CN108449546B (application number CN201810299768.1A)
Authority
CN
China
Prior art keywords
input
image
target
shooting
vertex
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810299768.1A
Other languages
Chinese (zh)
Other versions
CN108449546A (en)
Inventor
刘长铕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority claimed from application CN201810299768.1A
Publication of application CN108449546A
Application granted; publication of granted patent CN108449546B
Current legal status: Active


Classifications

    • H: ELECTRICITY; H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; control thereof
    • H04N 23/57: Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/62: Control of parameters via user interfaces
    • H04N 23/63: Control of cameras or camera modules by using electronic viewfinders
    • H04N 5/222: Studio circuitry; studio devices; studio equipment
    • H04N 5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; cameras specially adapted for the electronic generation of special effects
    • H04N 5/265: Mixing
    • H04M: TELEPHONIC COMMUNICATION
    • H04M 1/00: Substation equipment, e.g. for use by subscribers
    • H04M 1/02: Constructional features of telephone sets
    • H04M 1/0202: Portable telephone sets, e.g. cordless phones, mobile phones or bar type handsets
    • H04M 1/026: Details of the structure or mounting of specific components
    • H04M 1/0264: Details of the structure or mounting of specific components for a camera module assembly

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

The invention provides a photographing method and a mobile terminal. The method includes: while a preview image of a first shooting field of view is displayed on a shooting preview interface, receiving a first input from a user on the interface, the first input being used to adjust the shooting field of view; in response to the first input, moving a target assembly to each of at least two preset positions and shooting to obtain at least two images; and outputting a target image synthesized from the at least two images. The at least two preset positions correspond to the operation track of the first input, and the second shooting field of view of the target image is larger than the first shooting field of view. With this photographing method, an image with a larger field of view can be captured without the user moving the mobile terminal; the operation is more convenient, and obvious stitching traces are unlikely to appear.

Description

Photographing method and mobile terminal
Technical Field
The embodiment of the invention relates to the technical field of communication, in particular to a photographing method and a mobile terminal.
Background
At present, when a user needs to capture an image with a larger field of view (FOV) while photographing with a mobile terminal, the user generally has to switch to a wide-angle lens with a larger FOV, use an image sensor with a larger format, or use a panoramic shooting mode.
However, the camera of a mobile terminal does not support lens replacement; although some mobile terminals carry two cameras with lenses of different FOVs, the cost is high and the wide-angle lens tends to introduce large distortion. Large-format image sensors are likewise expensive and require larger lenses to match. As for the panoramic shooting mode, the user must hold the mobile terminal and sweep it up and down or left and right to obtain several pictures at different positions, which are then stitched into a picture with a large field angle; this requires the user to keep the device steady, otherwise obvious stitching traces easily appear.
Disclosure of Invention
The embodiments of the present invention provide a photographing method and a mobile terminal, aiming to solve the problems that photographing an image with a large field of view on a mobile terminal is cumbersome and prone to obvious stitching traces.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides a photographing method, which is applied to a mobile terminal, where a camera of the mobile terminal includes a target component having a movement attribute, and the target component includes a lens of the camera or an image sensor, and the method includes:
receiving a first input of a user on a shooting preview interface in a state that a preview image of a first shooting view field is displayed on the shooting preview interface, wherein the first input is used for adjusting the shooting view field;
responding to the first input, respectively moving the target assembly to at least two preset positions for shooting to obtain at least two images;
outputting a target image obtained by synthesizing the at least two images;
the at least two preset positions and the operation track of the first input have corresponding relation; the second shooting visual field of the target image is larger than the first shooting visual field.
In a second aspect, an embodiment of the present invention further provides a mobile terminal. The camera of the mobile terminal comprises a target component with a movement attribute, the target component comprises a lens or an image sensor of the camera, and the mobile terminal comprises:
the receiving module is used for receiving a first input of a user on the shooting preview interface in a state that a preview image of a first shooting view field is displayed on the shooting preview interface, wherein the first input is used for adjusting the shooting view field;
the moving module is used for responding to the first input, and respectively moving the target assembly to at least two preset positions for shooting to obtain at least two images;
the output module is used for outputting a target image obtained by synthesizing the at least two images;
the at least two preset positions and the operation track of the first input have corresponding relation; the second shooting visual field of the target image is larger than the first shooting visual field.
In a third aspect, an embodiment of the present invention further provides a mobile terminal, including a processor, a memory, and a computer program stored in the memory and capable of running on the processor, where the computer program, when executed by the processor, implements the steps of the above-mentioned photographing method.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the photographing method are implemented.
In the embodiment of the invention, while a preview image of a first shooting field of view is displayed on the shooting preview interface, a first input from the user on the interface is received, the first input being used to adjust the shooting field of view; in response to the first input, the target assembly is moved to each of at least two preset positions to shoot, obtaining at least two images; and a target image synthesized from the at least two images is output. Thus an image with a larger field of view can be captured without the user moving the mobile terminal, the operation is convenient, and obvious stitching traces are unlikely to appear. Moreover, because the target assembly is moved and shot based on the drag track, convenience of operation and flexibility of shooting control are enhanced.
Drawings
FIG. 1 is one of the schematic diagrams of the relative positions of a lens and an image sensor provided by the embodiments of the present invention;
FIG. 2 is a second schematic diagram illustrating relative positions of a lens and an image sensor according to an embodiment of the present invention;
FIG. 3 is a third schematic diagram illustrating relative positions of a lens and an image sensor according to an embodiment of the present invention;
FIG. 4 is one of the schematic diagrams of relative positions of an image circle and an image sensor provided by the embodiment of the invention;
FIG. 5 is a second schematic diagram of the relative positions of the image circle and the image sensor provided by the embodiment of the present invention;
FIG. 6 is a third schematic diagram of the relative positions of the image circle and the image sensor provided by the embodiment of the invention;
FIG. 7 is a fourth schematic diagram of the relative positions of the image circle and the image sensor provided by the embodiment of the invention;
FIG. 8 is a fifth schematic diagram of the relative positions of the image circle and the image sensor provided by the embodiment of the invention;
FIG. 9 is a flowchart of a photographing method according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of the intersection of the image circle and the diagonal line of the image sensor provided by the embodiment of the present invention;
FIG. 11 is a diagram illustrating a drag input for a photo button according to an embodiment of the present invention;
FIG. 12 is a schematic diagram of the intersection of the vertex at the upper left corner of the image sensor with the point A on the image circle provided by the embodiment of the invention;
FIG. 13 is a schematic diagram of the intersection of the vertex at the upper right corner of the image sensor with point B on the image circle provided by the embodiment of the invention;
FIG. 14 is a schematic diagram of the intersection of the vertex of the lower right corner of the image sensor and the point C on the image circle provided by the embodiment of the invention;
FIG. 15 is a schematic diagram of the intersection of the vertex of the lower left corner of the image sensor and the point D on the image circle provided by the embodiment of the invention;
FIG. 16 is one of the schematic diagrams of a target image provided by an embodiment of the invention;
FIG. 17 is a schematic diagram of an image circle intersecting straight lines of four sides of an image sensor according to an embodiment of the present invention;
FIG. 18 is a second schematic diagram of a drag input for the photo button according to the embodiment of the present invention;
FIG. 19 is a second schematic diagram of a target image according to an embodiment of the present invention;
fig. 20 is a block diagram of a mobile terminal provided in an embodiment of the present invention;
fig. 21 is a schematic diagram of a hardware structure of a mobile terminal for implementing various embodiments of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort shall fall within the protection scope of the present invention.
For convenience of description, some terms related to the embodiments of the present invention are explained below:
a shooting view field: which is used to indicate the maximum range that the camera can take.
OIS: optical Image Stabilization, refers to compensating for Image blur caused by motion through relative movement of a lens and an Image sensor. For example, referring to fig. 1 to 3, the light 30 is emitted to the image sensor 10 through the lens 20, and the image sensor 10 and the lens 20 may be relatively translated, so that an image shift caused by hand trembling or the like may be compensated.
Specifically, the relative movement between the lens and the image sensor may include lens fixation, image sensor movement, or image sensor fixation, lens movement, and the like.
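The compensation idea above can be sketched in a few lines of Python (a toy illustration, not the patent's implementation; the 0.1 mm stroke limit is an assumed value):

```python
# Toy illustration of the OIS idea (not the patent's implementation):
# counteract a measured shake displacement by translating the lens or
# sensor the opposite way, clamped to an assumed actuator stroke.
def ois_compensation(shake_dx, shake_dy, max_stroke=0.1):
    """Return the compensating translation (same units as the input),
    limited to +/- max_stroke (a hypothetical 0.1 mm here)."""
    clamp = lambda v: max(-max_stroke, min(max_stroke, v))
    return (clamp(-shake_dx), clamp(-shake_dy))

print(ois_compensation(0.03, -0.05))  # -> (-0.03, 0.05)
```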
Image circle: the effective imaging area of the optical image of the subject being photographed. For example, referring to fig. 4, when an image is captured, the image sensor 10 (i.e., the image recording area) is usually located in the central area of the image circle 40, i.e., the center point of the image sensor 10 coincides with the center point of the image circle 40. Specifically, by moving the image sensor 10 or the lens, the relative positions of the image sensor 10 and the image circle 40 can be changed, as shown in figs. 5 to 8.
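As an illustration of this geometry (all dimensions below are assumed, not taken from the patent), one can check whether the four corners of the recording area still fall inside the image circle after a relative translation:

```python
import math

# Geometry sketch (all dimensions assumed, not from the patent): check
# whether the four corners of a w x h recording area, whose centre is
# offset (dx, dy) from the centre of an image circle of radius r, still
# fall inside the circle.
def sensor_inside_circle(w, h, r, dx=0.0, dy=0.0):
    corners = [(dx + sx * w / 2, dy + sy * h / 2)
               for sx in (-1, 1) for sy in (-1, 1)]
    return all(math.hypot(x, y) <= r for x, y in corners)

# A centred 4:3 sensor whose half-diagonal equals the radius just fits;
# any translation then pushes a corner outside the image circle.
print(sensor_inside_circle(4, 3, 2.5))          # -> True
print(sensor_inside_circle(4, 3, 2.5, 0.5, 0))  # -> False
```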
The embodiment of the invention provides a photographing method which is applied to a mobile terminal, wherein a camera of the mobile terminal comprises a target component with a moving property, namely the position of the target component can be moved, and the target component comprises a lens or an image sensor of the camera. Referring to fig. 9, fig. 9 is a flowchart of a photographing method according to an embodiment of the present invention, as shown in fig. 9, including the following steps:
step 901, receiving a first input of a user on a shooting preview interface in a state that a preview image of a first shooting field of view is displayed on the shooting preview interface, wherein the first input is used for adjusting the shooting field of view.
In an embodiment of the present invention, the first input may be a drag input, a slide input, or the like. Specifically, while the preview image of the first shooting field of view is displayed on the shooting preview interface of the mobile terminal, the first input received from the user on the interface may be, for example, an input dragging the photographing identifier on the interface, an input sliding on the interface, or an input of multiple taps on the interface.
Optionally, the step 901, that is, the receiving of the first input of the user on the shooting preview interface, includes:
receiving a first input of dragging the photographing identifier on the photographing preview interface by a user;
or receiving a first input of sliding of a user in a preset area on the shooting preview interface.
In the embodiment of the present invention, the photographing identifier may be a photographing button, a photographing control, and the like on the photographing preview interface, and the preset area on the photographing preview interface may be reasonably set according to an actual situation, for example, a rectangular area at a lower left corner, a rectangular area at a lower right corner, or a central area on the photographing preview interface. Specifically, the first input may be an input of dragging the photographing identifier on the photographing preview interface by the user, or an input of sliding the user in a preset area on the photographing preview interface.
In practical application, when the mobile terminal displays the shooting preview interface, if an image with a larger view field needs to be shot, the user may drag the shooting identifier, or slide in a preset area on the shooting preview interface, so that the mobile terminal may receive a first input of the user on the shooting preview interface, and execute step 902 to adjust the shooting view field.
According to the embodiment of the invention, the shooting field of view is adjusted by receiving a first input in which the user drags the photographing identifier on the shooting preview interface, or slides within a preset area on the interface; the operation is simple and convenient, and it also adds interest to shooting.
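As a rough sketch of how such a first input might be recognized (the screen coordinates and region rectangles below are purely hypothetical), the touch-down point can be hit-tested against the photographing identifier and the preset area:

```python
# Rough sketch of recognizing the first input (all screen coordinates and
# region rectangles below are hypothetical): hit-test the touch-down point
# against the photographing identifier and a preset slide area.
PHOTO_BUTTON = (440, 1700, 640, 1900)  # assumed (x0, y0, x1, y1)
PRESET_AREA = (0, 1500, 300, 1920)     # assumed lower-left rectangle

def in_rect(pt, rect):
    x, y = pt
    x0, y0, x1, y1 = rect
    return x0 <= x <= x1 and y0 <= y <= y1

def classify_touch_down(pt):
    if in_rect(pt, PHOTO_BUTTON):
        return "drag-photo-identifier"
    if in_rect(pt, PRESET_AREA):
        return "slide-preset-area"
    return "other"

print(classify_touch_down((540, 1800)))  # -> drag-photo-identifier
print(classify_touch_down((100, 1600)))  # -> slide-preset-area
```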
Step 902: in response to the first input, moving the target assembly to each of at least two preset positions to shoot, obtaining at least two images.
In the embodiment of the present invention, the at least two preset positions have a corresponding relationship with the operation track of the first input. Specifically, after receiving a first input of the user on the shooting preview interface, an operation track of the first input may be obtained, where the operation track of the first input may be a track formed from a start position point of the first input to an end position point of the first input, and optionally, the operation track of the first input may be a triangle, a rectangle, a circle, an ellipse, an arc, and the like.
In this step, the operation track of the first input may be matched against a preset drag track: if they match, the target assembly is moved to at least two preset positions to shoot, obtaining at least two images; otherwise the process may end, or a conventional shooting process may be executed. Alternatively, the operation track of the first input may be matched against preset drag tracks to determine the at least two preset positions corresponding to it, and the target assembly is then moved to those positions to shoot, obtaining at least two images. As yet another option, for each vertex of the operation track of the first input, the target assembly may be moved to a preset position corresponding to that vertex's location and a shot taken, again obtaining at least two images.
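The dispatch described above can be sketched as a toy model (the track-shape names, the corner offsets, and the `capture_at` stub are all assumptions, not the patent's actual values):

```python
# Toy model of step 902 (track-shape names, corner offsets, and the
# capture_at stub are assumptions): match the recognized track shape
# against preset tracks, then shoot once per associated preset position.
PRESET_POSITIONS = {
    "flag": [(-1, 1), (1, 1), (1, -1), (-1, -1)],  # four corner offsets
    "circle": [(1, 0), (0, 1), (-1, 0), (0, -1)],  # four arc samples
}

def capture_at(position):
    """Stand-in for moving the target assembly and taking one frame."""
    return f"image@{position}"

def shoot_for_track(track_shape):
    positions = PRESET_POSITIONS.get(track_shape)
    if positions is None:  # no match: fall back to a conventional shot
        return [capture_at((0, 0))]
    return [capture_at(p) for p in positions]

print(shoot_for_track("flag"))   # four images, one per corner position
print(shoot_for_track("swipe"))  # unmatched track: one conventional image
```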
It can be understood that, in response to the first input, the preview image with the updated shooting field of view may be displayed on the shooting preview interface; that is, each time the target assembly moves to a preset position, the preview image corresponding to that position is displayed, so that the user can view it in real time. Alternatively, the preview image displayed when the first input was received may be kept on the interface until step 902 is completed, to avoid flicker of the displayed image. Optionally, after step 902 ends, the target assembly of the mobile terminal may be controlled to return to the position it occupied before the shooting field of view was adjusted.
Optionally, in the embodiment of the present invention, the lens or the image sensor may be driven to the different positions by the motor in the OIS module of the mobile terminal's camera, which saves cost. It can be understood that a dedicated motor may instead be provided to drive the lens or the image sensor to the different positions.
Step 903: outputting a target image obtained by synthesizing the at least two images.
In the embodiment of the invention, the second shooting view field of the target image is larger than the first shooting view field, namely the shooting view field of the synthesized image is larger than the first shooting view field before the shooting view field is adjusted.
In this step, the at least two images may be registered and fused to remove their overlapping portions, thereby obtaining the target image. The image registration algorithm may be based on gray-level information, on the transform domain, or on features; the image fusion algorithm may be based on wavelet transform, pyramid transform, color space, and the like, which is not limited in the embodiments of the present invention. Optionally, after the target image is obtained, it may be displayed directly on the shooting preview interface of the mobile terminal, and it may also be saved to the album of the mobile terminal for the user to view.
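When the per-image offsets are known, the synthesis step can be illustrated with a toy compositor (a real pipeline would use one of the registration and fusion algorithms named above; here overlapping pixels are simply overwritten):

```python
# Toy compositor for the synthesis step (a real pipeline would register
# and blend the frames; here the per-image offsets are known and
# overlapping pixels are simply overwritten).
def composite(tiles, canvas_w, canvas_h):
    """tiles: list of (offset_x, offset_y, 2-D list of pixel values)."""
    canvas = [[0] * canvas_w for _ in range(canvas_h)]
    for ox, oy, img in tiles:
        for r, row in enumerate(img):
            for c, px in enumerate(row):
                canvas[oy + r][ox + c] = px
    return canvas

# Two 2x2 tiles overlapping by one column yield a 2x3 target image.
left = [[1, 1], [1, 1]]
right = [[2, 2], [2, 2]]
print(composite([(0, 0, left), (1, 0, right)], 3, 2))  # -> [[1, 2, 2], [1, 2, 2]]
```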
Specifically, because the at least two images are taken with the target assembly at different preset positions, the target image obtained by combining them has a larger field of view than an image taken with the target assembly at a single position. Thus, without switching to a lens with a larger FOV, enlarging the image sensor format, or requiring the user to move the mobile terminal, an image with a larger field of view can be obtained at lower cost and with more convenient operation.
In the embodiment of the present invention, the mobile terminal may be a mobile phone, a tablet personal computer, a laptop computer, a personal digital assistant (PDA), a wearable device, or the like.
According to the photographing method above, while a preview image of a first shooting field of view is displayed on the shooting preview interface, a first input from the user on the interface is received, the first input being used to adjust the shooting field of view; in response to the first input, the target assembly is moved to each of at least two preset positions to shoot, obtaining at least two images; and a target image synthesized from the at least two images is output. The at least two preset positions correspond to the operation track of the first input, and the second shooting field of view of the target image is larger than the first. An image with a larger field of view can thus be captured without the user moving the mobile terminal, the operation is convenient, and obvious stitching traces are unlikely to appear. In addition, because the target assembly is moved and shot based on the user's first input on the shooting preview interface, convenience of operation and flexibility of shooting control are enhanced.
Optionally, in step 902, that is, in response to the first input, moving the target assembly to at least two preset positions respectively for shooting to obtain at least two images, where the step includes:
under the condition that the first input operation track is matched with a first preset track, the target assembly is moved to at least two first preset positions respectively to be shot, and at least two images are obtained;
when the target assembly moves to the first preset position, the vertex of the image sensor intersects with a first target point on the image circle of the lens; the first target point is an intersection point of straight lines where the image circle of the lens and the diagonal line of the image sensor are located when the central point of the image circle of the lens and the central point of the image sensor are overlapped.
In the embodiment of the present invention, the first preset track may be set according to actual requirements; for example, it may include one or at least two of a triangle, a rectangle, a flag shape, and the like. Specifically, when the operation track of the first input matches the first preset track, the target assembly may be moved to at least two first preset positions to shoot, obtaining at least two images. When it does not match, the process may end, a conventional shooting process may be executed, or the operation track may be further matched against a second preset track different from the first preset track. The first preset position may be a position at which a vertex of the image sensor intersects a first target point on the image circle of the lens; that is, when the target assembly moves to the first preset position, a vertex of the image sensor intersects a first target point on the image circle of the lens.
For ease of understanding, embodiments of the present invention are described below with reference to fig. 10 to 16:
referring to fig. 10, points a to D are respectively intersections of straight lines where diagonal lines of the image circle 40 and the image sensor 10 are located when a center point of the image circle 40 and a center point of the image sensor 10 (i.e., an image recording area) are overlapped (i.e., overlapped with the center point 0), and the first target point may include at least two points among the points a to D.
In practical applications, when the operation track of the first input (for example, the first operation track 1 shown in fig. 11, i.e., the flag-shaped operation track abcd) matches the first preset track, the target assembly (e.g., the lens or the image sensor) may be moved to at least two first preset positions for shooting; when the target assembly is at a first preset position, a vertex of the image sensor 10 intersects a first target point (one of the points A to D) on the image circle 40. For example, referring to figs. 12 to 15, each of the four vertices of the image sensor 10 in turn intersects one of the points A to D on the image circle 40.
It should be noted that, when the image sensor or the lens is moved by the OIS module and the movable stroke of the OIS module is too limited for a vertex of the image sensor to reach the first target point on the image circle of the lens, the target assembly may still be moved as far as possible so that the vertex approaches the first target point on the image circle 40, thereby enlarging the shooting field of view.
Specifically, after the four images are captured with the four vertices of the image sensor 10 each intersecting one of the points A to D on the image circle 40, the four images may be combined to obtain the target image 50 shown in fig. 16, i.e., the rectangular region formed by the points A to D.
According to the embodiment of the invention, when the target assembly moves to the first preset position, a vertex of the image sensor intersects a first target point on the image circle of the lens, the first target point being an intersection of the image circle with a straight line on which a diagonal of the image sensor lies when the center points of the image circle and the image sensor coincide. In this way, an image that has the same aspect ratio as the original image but a larger field of view can be stitched together.
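The positions of the first target points follow from elementary geometry (the sensor and circle dimensions below are assumed for illustration): each diagonal passes through the center, so it meets the circle at distance r along its direction, and the sensor-center translation that brings a vertex onto its point is the difference of the two coordinates:

```python
import math

# Geometry sketch for the first target points (sensor and circle sizes
# assumed for illustration): each diagonal of the centred w x h sensor
# meets the image circle of radius r at distance r along its direction,
# giving the points A..D.
def first_target_points(w, h, r):
    d = math.hypot(w, h)
    cos_t, sin_t = w / d, h / d
    return {
        "A": (-r * cos_t, r * sin_t),   # upper-left
        "B": (r * cos_t, r * sin_t),    # upper-right
        "C": (r * cos_t, -r * sin_t),   # lower-right
        "D": (-r * cos_t, -r * sin_t),  # lower-left
    }

def centre_offset_for(point, vertex):
    """Sensor-centre translation that puts `vertex` onto `point`."""
    return (point[0] - vertex[0], point[1] - vertex[1])

pts = first_target_points(4, 3, 5)  # a 3-4-5 setup keeps the numbers simple
print(pts["A"])
print(centre_offset_for(pts["A"], (-2, 1.5)))  # upper-left vertex of 4 x 3
```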
Optionally, in step 902, that is, in response to the first input, moving the target assembly to at least two preset positions respectively for shooting to obtain at least two images, where the step includes:
under the condition that the first input operation track is matched with a second preset track, the target assembly is moved to at least two second preset positions respectively to be shot, and at least two images are obtained;
when the target assembly moves to the second preset position, a vertex of the image sensor intersects a second target point on the image circle of the lens; the second target point is a point on a target arc of the image circle; the target arc is an arc between two adjacent points among the intersections of the image circle of the lens with the straight lines on which the four sides of the image sensor lie, when the center point of the image circle of the lens coincides with the center point of the image sensor.
In the embodiment of the present invention, the second preset track may be set reasonably according to the actual situation; for example, it may include one or at least two of a circle, an ellipse, an arc, and the like. It can be understood that, when the operation track of the first input does not match the second preset track, the process may end, or a conventional photographing operation may be performed.
The second preset position may be a position at which a vertex of the image sensor intersects a second target point on the image circle of the lens; that is, when the target assembly moves to the second preset position, a vertex of the image sensor intersects a second target point on the image circle. The second target point is a point on a target arc of the image circle; the target arc may include one arc or at least two arcs on the image circle, and the second target point may include at least two points on the target arc.
For ease of understanding, embodiments of the present invention are described below with reference to fig. 17 to 19:
referring to fig. 17, points E to L are respectively intersections of straight lines where four sides of the image sensor 10 are located and the image ring 40 when the central point of the image sensor 10 (i.e., the image recording area) and the central point of the image ring 40 coincide with each other, and the target arc may include one or at least two of an arc LE (i.e., an arc between the points L and E), an arc FG (i.e., an arc between the points F and G), an arc HI (i.e., an arc between the points H and I) and an arc JK (i.e., an arc between the points J and K) on the image ring 40.
Taking the target arcs including the arc LE, the arc FG, the arc HI, and the arc JK as an example, the second target point may include at least two points on each of the arcs; for example, it may include the two end points and the middle point of each arc. It should be noted that the more points on the arcs the second target point includes, the larger the field of view of the resulting stitched image.
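Selecting the second target points on an arc amounts to sampling the arc at equal angular steps; the end points come out for free, and any midpoints follow from the step count. A minimal sketch under the same assumed geometry (parameter names are illustrative; the patent only requires at least two points per arc):

```python
import math

def sample_arc(R, theta_start, theta_end, n):
    """n equally spaced second target points on the image circle
    (radius R, centred at the origin) between the two end angles
    (radians), end points included. Requires n >= 2."""
    step = (theta_end - theta_start) / (n - 1)
    return [(R * math.cos(theta_start + k * step),
             R * math.sin(theta_start + k * step))
            for k in range(n)]
```

For instance, `sample_arc(5, 0, math.pi / 2, 3)` yields the two end points (5, 0) and (0, 5) plus the 45° midpoint; raising `n` enlarges the field of view of the stitched result, per the note above.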
In practical applications, when the operation trajectory of the first input (for example, the second operation trajectory shown in fig. 18, that is, the trajectory abcde) matches the second preset trajectory, the target assembly (for example, the lens or the image sensor) may be moved to at least two second preset positions for shooting; when the target assembly moves to a second preset position, a vertex of the image sensor 10 intersects a second target point (that is, a point on a target arc) on the image circle 40. Alternatively, the target assembly may be moved to the second preset positions in a preset moving sequence.
For example, the image sensor 10 may first be moved in the arrow direction shown in fig. 17 with the point L on the arc LE as a starting point; that is, the upper-left vertex of the image sensor 10 is moved along the arc LE in the arrow direction corresponding to that arc, shooting one image at each position; next, the upper-right vertex of the image sensor 10 is moved along the arc FG in the arrow direction corresponding to that arc, shooting one image at each position; then, the lower-right vertex of the image sensor 10 is moved along the arc HI in the arrow direction corresponding to that arc, shooting one image at each position; finally, the lower-left vertex of the image sensor 10 is moved along the arc JK in the arrow direction corresponding to that arc, shooting one image at each position.
Specifically, after the images captured while the four vertexes of the image sensor 10 move along their corresponding arcs of the image circle 40 are obtained, the captured images may be stitched to obtain the target image 50 shown in fig. 19, that is, the region bounded by the arcs from point E to point L.
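The four-corner walk above reduces to a loop over ordered (corner, arc-points) pairs, shooting one frame at each sampled second target point; `move_and_shoot` here stands in for the hardware call that positions the target assembly and captures a frame (a hypothetical callback, not an API named by the patent):

```python
def capture_along_arcs(arcs, move_and_shoot):
    """arcs: ordered list of (corner_name, arc_point_list) pairs, e.g.
    the upper-left corner along LE, upper-right along FG, lower-right
    along HI, lower-left along JK (cf. fig. 17). One frame is shot at
    every point; the frames are then stitched into the target image."""
    frames = []
    for corner, points in arcs:
        for p in points:
            frames.append(move_and_shoot(corner, p))
    return frames
```

Stitching the returned frames then yields the arc-bounded region of fig. 19.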
In the embodiment of the present invention, when the target assembly moves to the second preset position, a vertex of the image sensor intersects a second target point on the image circle of the lens; the second target point is a point on a target arc of the image circle, and the target arc is an arc between two adjacent intersection points among the intersection points of the image circle with the straight lines on which the four sides of the image sensor lie when the central point of the image circle coincides with the central point of the image sensor; in this way, the field of view of the combined image can be maximized.
Optionally, the embodiment of the present invention may combine the two implementation manners: in a case that the operation trajectory of the first input matches the first preset trajectory, the target assembly is moved to at least two first preset positions respectively for shooting, obtaining at least two images; in a case that the operation trajectory of the first input matches the second preset trajectory, the target assembly is moved to at least two second preset positions respectively for shooting, obtaining at least two images. In this way, a user can select, by inputting different operation trajectories of the first input, different implementation manners for expanding the shooting field of view, which improves convenience of operation and flexibility of shooting control.
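The combined scheme is a dispatch on which preset trajectory the input matches. A sketch, where `matches` stands in for a hypothetical trajectory-matching predicate (e.g. a gesture recogniser) and the preset-position lists are supplied by the caller:

```python
def choose_presets(trajectory, first_presets, second_presets, matches):
    """Select the preset positions to visit according to which preset
    trajectory the first input's operation trajectory matches.
    An empty result means: end the flow or take a regular photograph."""
    if matches(trajectory, "first"):
        return first_presets
    if matches(trajectory, "second"):
        return second_presets
    return []
```

The empty-list fallback mirrors the note above that a non-matching trajectory may end the flow or trigger a regular photographing operation.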
Optionally, step 902, that is, moving the target assembly to at least two preset positions respectively for shooting according to the operation trajectory of the first input to obtain at least two images, includes:
acquiring each vertex of the first input operation track;
and moving the target assembly to a preset position corresponding to the position of each vertex of the first input operation track respectively to shoot to obtain at least two images.
In the embodiment of the present invention, a correspondence between the positions of the vertexes of the operation trajectory of the first input and the preset positions may be established in advance. For example, the upper-left vertex of the trajectory corresponds to a preset position a, the lower-left vertex to a preset position b, the upper-right vertex to a preset position c, and the lower-right vertex to a preset position d, where the upper-left vertex of the image sensor intersects a point A on the image circle when the target assembly moves to the preset position a, the lower-left vertex of the image sensor intersects a point D on the image circle when the target assembly moves to the preset position b, the upper-right vertex of the image sensor intersects a point B on the image circle when the target assembly moves to the preset position c, and the lower-right vertex of the image sensor intersects a point C on the image circle when the target assembly moves to the preset position d.
Optionally, when the operation trajectory of the first input formed by the acquired drag input includes a lower-left vertex and a lower-right vertex (for example, the trajectory is an upright triangle whose uppermost vertex is the starting drag point of the photographing button), the target assembly may be moved to the preset position b and the preset position d respectively for shooting; when the trajectory includes an upper-left vertex and an upper-right vertex (for example, the trajectory is an inverted triangle whose lowermost vertex is the starting drag point of the photographing button), the target assembly may be moved to the preset position a and the preset position c respectively for shooting; and when the trajectory includes an upper-left vertex, an upper-right vertex, a lower-left vertex, and a lower-right vertex (for example, the trajectory is a rectangle), the target assembly may be moved to the preset position a, the preset position c, the preset position b, and the preset position d respectively for shooting. In this way, the user can control the target assembly to move to different preset positions for shooting by inputting different operation trajectories of the first input.
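The correspondence above can be modelled as a lookup from detected trajectory corners to preset positions. A minimal sketch (the corner names and the labels a to d follow the example in the text and are illustrative, not fixed identifiers from the patent):

```python
# Illustrative correspondence table, per the example above.
PRESET_FOR_CORNER = {
    "top_left": "a",
    "bottom_left": "b",
    "top_right": "c",
    "bottom_right": "d",
}

def presets_for_trajectory(corners):
    """Map the corners detected in the first input's operation
    trajectory to the preset positions the target assembly visits;
    the input order of the corners is preserved."""
    return [PRESET_FOR_CORNER[c] for c in corners]
```

An upright triangle thus contributes its lower-left and lower-right corners and yields presets b and d; a rectangle contributes all four corners.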
According to the embodiment of the invention, each vertex of the operation trajectory of the first input is acquired, and the target assembly is moved to the preset position corresponding to the position of each vertex for shooting to obtain at least two images, so that the user can control the target assembly to move to different preset positions for shooting by inputting different operation trajectories of the first input, which improves the flexibility of shooting control and can meet the user's requirements for shooting images with different field-of-view sizes.
Optionally, the obtaining each vertex in the operation trajectory of the first input includes:
acquiring each vertex of the first input operation track and the operation sequence of each vertex;
correspondingly, the moving the target assembly to the preset position corresponding to the position of each vertex of the first input operation track respectively to shoot to obtain at least two images includes:
and moving the target assembly to a preset position corresponding to the position of each vertex of the first input operation track in sequence according to the operation sequence of each vertex, and shooting to obtain at least two images.
For example, if the operation sequence of the vertexes of the operation trajectory of the first input is, in turn, the lower-left vertex, the upper-left vertex, the upper-right vertex and the lower-right vertex, the target assembly may be moved in turn to the preset position b, the preset position a, the preset position c and the preset position d for shooting; if the operation sequence is, in turn, the lower-right vertex, the lower-left vertex, the upper-left vertex and the upper-right vertex, the target assembly may be moved in turn to the preset position d, the preset position b, the preset position a and the preset position c for shooting.
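The operation-sequence behaviour can be sketched as sorting the traced vertexes by trace order and then applying a corner-to-preset lookup; the timestamps, corner names, and preset labels a to d below are illustrative stand-ins, not identifiers from the patent:

```python
# Illustrative corner-to-preset table, per the earlier example.
PRESET_FOR_CORNER = {"top_left": "a", "bottom_left": "b",
                     "top_right": "c", "bottom_right": "d"}

def move_order(traced_vertices):
    """traced_vertices: (timestamp, corner_name) pairs in any order.
    Returns the preset positions in the order the user traced the
    corresponding vertexes of the operation trajectory."""
    return [PRESET_FOR_CORNER[c] for _, c in sorted(traced_vertices)]
```

Tracing lower-left, upper-left, upper-right, lower-right thus yields the move order b, a, c, d, matching the first example above.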
In the embodiment of the invention, the target assembly is moved to the preset position corresponding to the position of each vertex of the operation trajectory of the first input for shooting according to the operation sequence of the vertexes, so that the user can flexibly control the movement of the target assembly through different drag inputs, which enriches the control manners of image shooting.
It should be noted that, in the embodiment of the present invention, the different implementation manners may be combined according to actual requirements, for example, when the operation trajectory of the first input matches a first preset trajectory, each vertex in the operation trajectory of the first input is obtained; respectively moving the target assembly to a first preset position corresponding to the position of each vertex in the first input operation track for shooting to obtain at least two images; under the condition that the first input operation track is matched with a second preset track, each vertex in the first input operation track is obtained; and moving the target assembly to a second preset position corresponding to the position of each vertex in the first input operation track respectively to shoot to obtain at least two images.
Referring to fig. 20, fig. 20 is a structural diagram of a mobile terminal according to an embodiment of the present invention, where a camera of the mobile terminal includes a target component having a movement attribute, and the target component includes a lens or an image sensor of the camera, as shown in fig. 20, a mobile terminal 2000 includes: a receiving module 2001, a moving module 2002, and an output module 2003, wherein:
a receiving module 2001, configured to receive a first input of a user on a shooting preview interface in a state where a preview image of a first shooting field of view is displayed on the shooting preview interface, where the first input is used to adjust the shooting field of view;
a moving module 2002, configured to, in response to the first input, move the target component to at least two preset positions respectively for shooting, so as to obtain at least two images;
an output module 2003, configured to output a target image obtained by synthesizing the at least two images;
there is a correspondence between the at least two preset positions and the operation trajectory of the first input; and the second shooting field of view of the target image is larger than the first shooting field of view.
Optionally, the moving module 2002 includes:
the first moving unit is used for respectively moving the target assembly to at least two first preset positions to shoot under the condition that the first input operation track is matched with a first preset track, so as to obtain at least two images;
when the target assembly moves to the first preset position, a vertex of the image sensor intersects a first target point on the image circle of the lens; the first target point is an intersection point of the image circle of the lens with the straight lines on which the diagonals of the image sensor lie when the central point of the image circle of the lens coincides with the central point of the image sensor.
Optionally, the moving module 2002 includes:
the second moving unit is used for respectively moving the target assembly to at least two second preset positions to shoot under the condition that the first input operation track is matched with a second preset track, so as to obtain at least two images;
when the target assembly moves to the second preset position, a vertex of the image sensor intersects a second target point on the image circle of the lens; the second target point is a point on a target arc of the image circle of the lens; the target arc is an arc between two adjacent intersection points among the intersection points of the image circle of the lens with the straight lines on which the four sides of the image sensor lie when the central point of the image circle coincides with the central point of the image sensor.
Optionally, the moving module 2002 includes:
an acquisition unit configured to acquire each vertex of the first input operation trajectory;
and the third moving unit is used for respectively moving the target assembly to a preset position corresponding to the position of each vertex of the first input operation track for shooting to obtain at least two images.
Optionally, the obtaining unit is specifically configured to:
acquiring each vertex of the first input operation track and the operation sequence of each vertex;
the third mobile unit is specifically configured to:
and moving the target assembly to a preset position corresponding to the position of each vertex of the first input operation track in sequence according to the operation sequence of each vertex, and shooting to obtain at least two images.
Optionally, the receiving module 2001 is specifically configured to:
receiving a first input of dragging the photographing identifier on the photographing preview interface by a user;
or receiving a first input of sliding of a user in a preset area on the shooting preview interface.
The mobile terminal 2000 provided in the embodiment of the present invention can implement each process implemented by the mobile terminal in the method embodiment of fig. 9, and is not described herein again to avoid repetition.
The mobile terminal 2000 of the embodiment of the present invention includes: the receiving module 2001, configured to receive a first input of a user on a shooting preview interface in a state where a preview image of a first shooting field of view is displayed on the shooting preview interface, where the first input is used to adjust the shooting field of view; the moving module 2002, configured to, in response to the first input, move the target component to at least two preset positions respectively for shooting to obtain at least two images; and the output module 2003, configured to output a target image obtained by synthesizing the at least two images. In this way, an image with a larger field of view can be captured without the user moving the mobile terminal, so the operation is convenient and obvious stitching traces are unlikely to appear.
Fig. 21 is a schematic diagram of a hardware structure of a mobile terminal for implementing various embodiments of the present invention. Referring to fig. 21, the mobile terminal 2100 includes, but is not limited to: a radio frequency unit 2101, a network module 2102, an audio output unit 2103, an input unit 2104, a sensor 2105, a display unit 2106, a user input unit 2107, an interface unit 2108, a memory 2109, a processor 2110, a power supply 2111, a camera 2112, and the like. Those skilled in the art will appreciate that the mobile terminal configuration shown in fig. 21 is not intended to be limiting of mobile terminals, and that a mobile terminal may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the mobile terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like. The above-described camera 2112 includes a target component having a movement property, which includes a lens or an image sensor of the camera.
The processor 2110 is configured to: receive a first input of a user on a shooting preview interface in a state where a preview image of a first shooting field of view is displayed on the shooting preview interface, where the first input is used to adjust the shooting field of view; in response to the first input, move the target assembly to at least two preset positions respectively for shooting to obtain at least two images; and output a target image obtained by synthesizing the at least two images; where there is a correspondence between the at least two preset positions and the operation trajectory of the first input, and a second shooting field of view of the target image is larger than the first shooting field of view.
The embodiment of the invention can capture an image with a larger field of view without the user moving the mobile terminal, which is convenient to operate and unlikely to produce obvious stitching traces; in addition, since the target assembly is moved for shooting based on the operation trajectory of the first input, convenience of operation and flexibility of shooting control can be enhanced.
Optionally, the processor 2110 is further configured to:
under the condition that the operation trajectory of the first input matches a first preset trajectory, the target assembly is moved to at least two first preset positions respectively for shooting, so as to obtain at least two images;
when the target assembly moves to the first preset position, a vertex of the image sensor intersects a first target point on the image circle of the lens; the first target point is an intersection point of the image circle of the lens with the straight lines on which the diagonals of the image sensor lie when the central point of the image circle of the lens coincides with the central point of the image sensor.
Optionally, the processor 2110 is further configured to:
under the condition that the operation trajectory of the first input matches a second preset trajectory, the target assembly is moved to at least two second preset positions respectively for shooting, so as to obtain at least two images;
when the target assembly moves to the second preset position, a vertex of the image sensor intersects a second target point on the image circle of the lens; the second target point is a point on a target arc of the image circle of the lens; the target arc is an arc between two adjacent intersection points among the intersection points of the image circle of the lens with the straight lines on which the four sides of the image sensor lie when the central point of the image circle coincides with the central point of the image sensor.
Optionally, the processor 2110 is further configured to:
acquiring each vertex of the first input operation track;
and moving the target assembly to a preset position corresponding to the position of each vertex of the first input operation track respectively to shoot to obtain at least two images.
Optionally, the processor 2110 is further configured to:
acquiring each vertex of the first input operation track and the operation sequence of each vertex;
accordingly, the processor 2110 is further configured to:
and moving the target assembly to a preset position corresponding to the position of each vertex of the first input operation track in sequence according to the operation sequence of each vertex, and shooting to obtain at least two images.
Optionally, the processor 2110 is further configured to:
receiving a first input of dragging the photographing identifier on the photographing preview interface by a user;
or receiving a first input of sliding of a user in a preset area on the shooting preview interface.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 2101 may be used to receive and transmit signals during information transmission and reception or during a call; specifically, it receives downlink data from a base station and delivers it to the processor 2110 for processing, and transmits uplink data to the base station. In general, the radio frequency unit 2101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. The radio frequency unit 2101 may also communicate with networks and other devices via a wireless communication system.
The mobile terminal provides wireless broadband internet access to the user, such as assisting the user in emailing, browsing web pages, and accessing streaming media, via the network module 2102.
The audio output unit 2103 can convert audio data received by the radio frequency unit 2101 or the network module 2102 or stored in the memory 2109 into an audio signal and output as sound. Also, the audio output unit 2103 may provide audio output related to a specific function performed by the mobile terminal 2100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 2103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 2104 is used to receive audio or video signals. The input unit 2104 may include a Graphics Processing Unit (GPU) 21041 and a microphone 21042; the graphics processor 21041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 2106. The image frames processed by the graphics processor 21041 may be stored in the memory 2109 (or other storage medium) or transmitted via the radio frequency unit 2101 or the network module 2102. The microphone 21042 may receive sounds and process them into audio data. In a phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station and output via the radio frequency unit 2101.
The mobile terminal 2100 also includes at least one sensor 2105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that adjusts the brightness of the display panel 21061 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 21061 and/or a backlight when the mobile terminal 2100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the mobile terminal (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 2105 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 2106 is used to display information input by the user or information provided to the user. The Display unit 2106 may include a Display panel 21061, and the Display panel 21061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 2107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 2107 includes a touch panel 21071 and other input devices 21072. The touch panel 21071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations performed on or near the touch panel 21071 with a finger, a stylus, or any suitable object or accessory). The touch panel 21071 may include two portions: a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 2110, and receives and executes commands sent by the processor 2110. In addition, the touch panel 21071 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 21071, the user input unit 2107 may include other input devices 21072. Specifically, the other input devices 21072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 21071 can be overlaid on the display panel 21061, and when the touch panel 21071 detects a touch operation on or near the touch panel 21071, the touch operation can be transmitted to the processor 2110 to determine the type of the touch event, and then the processor 2110 can provide a corresponding visual output on the display panel 21061 according to the type of the touch event. Although the touch panel 21071 and the display panel 21061 are shown in fig. 21 as two separate components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 21071 and the display panel 21061 may be integrated to implement the input and output functions of the mobile terminal, and the implementation is not limited herein.
The interface unit 2108 is an interface for connecting an external device to the mobile terminal 2100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 2108 may be used to receive input from external devices (e.g., data information, power, etc.) and transmit the received input to one or more elements within the mobile terminal 2100 or may be used to transmit data between the mobile terminal 2100 and external devices.
The memory 2109 may be used for storing software programs as well as various data. The memory 2109 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 2109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 2110 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 2109 and calling data stored in the memory 2109, thereby integrally monitoring the mobile terminal. Processor 2110 may include one or more processing units; preferably, the processor 2110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 2110.
The mobile terminal 2100 may also include a power supply 2111 (e.g., a battery) for powering the various components, and preferably, the power supply 2111 is logically connected to the processor 2110 via a power management system that provides power management functions, including charging, discharging, and power consumption management.
In addition, the mobile terminal 2100 includes some functional modules that are not shown, and thus will not be described in detail herein.
Preferably, an embodiment of the present invention further provides a mobile terminal, including a processor 2110, a memory 2109, and a computer program stored in the memory 2109 and capable of running on the processor 2110, where the computer program is executed by the processor 2110 to implement each process of the above-mentioned photographing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements the processes of the above-mentioned photographing method embodiment, and can achieve the same technical effects, and in order to avoid repetition, the descriptions thereof are omitted here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (13)

1. A photographing method, applied to a mobile terminal, wherein a camera of the mobile terminal comprises a target assembly having a movement attribute, the target assembly comprises a lens or an image sensor of the camera, and the method comprises:
receiving a first input of a user on a shooting preview interface in a state that a preview image of a first shooting view field is displayed on the shooting preview interface, wherein the first input is used for adjusting the shooting view field;
responding to the first input, respectively moving the target assembly to at least two preset positions for shooting to obtain at least two images;
outputting a target image obtained by synthesizing the at least two images;
the at least two preset positions and the operation track of the first input have corresponding relation; the second shooting visual field of the target image is larger than the first shooting visual field;
the at least two preset positions include positions corresponding to the intersection of the vertex of the image sensor and the target point on the image circle of the lens.
2. The method according to claim 1, wherein the moving the target component to each of at least two preset positions for shooting in response to the first input, to obtain at least two images, comprises:
in a case that the operation trajectory of the first input matches a first preset trajectory, moving the target component to each of at least two first preset positions for shooting, to obtain at least two images;
wherein, when the target component moves to a first preset position, a vertex of the image sensor intersects a first target point on the image circle of the lens; and the first target point is an intersection point of the image circle of the lens and a straight line along which a diagonal of the image sensor lies, when a center point of the image circle of the lens coincides with a center point of the image sensor.
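The geometry behind the first target points can be made concrete. Under the stated condition (image-circle center coincident with the sensor center), each sensor diagonal is a line through the origin, so it meets the image circle of radius R at the two points ±R along the diagonal's unit direction. The sketch below is an illustrative computation, not text from the patent; the function name and coordinate convention are assumptions.

```python
import math
from typing import List, Tuple

def first_target_points(sensor_w: float, sensor_h: float,
                        circle_radius: float) -> List[Tuple[float, float]]:
    """Intersections of the image circle (centered at the origin) with the
    lines along the sensor's two diagonals, when the circle center and the
    sensor center coincide."""
    d = math.hypot(sensor_w, sensor_h)       # sensor diagonal length
    ux, uy = sensor_w / d, sensor_h / d      # unit vector of one diagonal
    r = circle_radius
    return [( r * ux,  r * uy), (-r * ux, -r * uy),   # first diagonal
            ( r * ux, -r * uy), (-r * ux,  r * uy)]   # second diagonal
```

For a 4:3 sensor these four points sit at the corners of a rectangle inscribed in the image circle, which is where a sensor vertex lands when the component is shifted to a first preset position.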
3. The method according to claim 1, wherein the moving the target component to each of at least two preset positions for shooting in response to the first input, to obtain at least two images, comprises:
in a case that the operation trajectory of the first input matches a second preset trajectory, moving the target component to each of at least two second preset positions for shooting, to obtain at least two images;
wherein, when the target component moves to a second preset position, a vertex of the image sensor intersects a second target point on the image circle of the lens; the second target point is a point on a target arc on the image circle of the lens; and the target arc is an arc between two adjacent intersection points among the intersection points of the image circle of the lens and the straight lines along which the four sides of the image sensor lie, when the center point of the image circle of the lens coincides with the center point of the image sensor.
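The endpoints of the target arcs in claim 3 are also computable. With the centers coincident, the lines containing the sensor's four sides are x = ±w/2 and y = ±h/2; each meets an image circle of radius R in two points, giving eight intersections, and a target arc lies between two angularly adjacent ones. This is a hedged geometric sketch under that assumption; the names are illustrative.

```python
import math
from typing import List, Tuple

def side_line_intersections(w: float, h: float, r: float) -> List[Tuple[float, float]]:
    """Eight intersections of the image circle (radius r, centered at the
    origin) with the lines containing the sensor's four sides, sorted by
    polar angle so that adjacent entries bound the target arcs."""
    assert r > max(w, h) / 2, "image circle must reach past every side line"
    yx = math.sqrt(r * r - (w / 2) ** 2)   # |y| where the vertical side lines hit
    xy = math.sqrt(r * r - (h / 2) ** 2)   # |x| where the horizontal side lines hit
    pts = [( w / 2,  yx), ( w / 2, -yx), (-w / 2,  yx), (-w / 2, -yx),
           ( xy,  h / 2), (-xy,  h / 2), ( xy, -h / 2), (-xy, -h / 2)]
    return sorted(pts, key=lambda p: math.atan2(p[1], p[0]))
```

Any point on an arc between two adjacent intersections is a valid second target point for a sensor vertex.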
4. The method according to claim 1, wherein the moving the target component to each of at least two preset positions for shooting in response to the first input, to obtain at least two images, comprises:
acquiring each vertex of the operation trajectory of the first input; and
moving the target component to a preset position corresponding to the position of each vertex of the operation trajectory of the first input for shooting, to obtain at least two images.
5. The method according to claim 4, wherein the acquiring each vertex of the operation trajectory of the first input comprises:
acquiring each vertex of the operation trajectory of the first input and an operation sequence of the vertices; and
the moving the target component to a preset position corresponding to the position of each vertex of the operation trajectory of the first input for shooting, to obtain at least two images, comprises:
moving the target component in turn, according to the operation sequence of the vertices, to the preset position corresponding to the position of each vertex of the operation trajectory of the first input for shooting, to obtain at least two images.
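Claims 4 and 5 together describe a vertex-ordered capture loop, which can be sketched as follows. The mapping from a trajectory vertex to a preset position is left abstract (`vertex_to_preset`), and all names here are illustrative assumptions rather than terms from the patent.

```python
from typing import Callable, List, Tuple

Point = Tuple[float, float]

def shoot_along_trajectory(vertices_in_order: List[Point],
                           vertex_to_preset: Callable[[Point], Point],
                           move_component: Callable[[Point], None],
                           capture: Callable[[], object]) -> List[object]:
    """Visit the preset position corresponding to each trajectory vertex,
    following the user's operation sequence, and capture one image per
    position."""
    images = []
    for vertex in vertices_in_order:              # honor the operation sequence
        move_component(vertex_to_preset(vertex))  # preset position for this vertex
        images.append(capture())
    if len(images) < 2:
        raise ValueError("at least two vertices are required")
    return images
```

Preserving the operation sequence matters because the composite stage can then assume the captured frames tile the wider field in the order the user traced it.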
6. The method according to any one of claims 1 to 5, wherein the receiving a first input of a user on a shooting preview interface comprises:
receiving a first input of a user dragging a shooting identifier on the shooting preview interface; or
receiving a first input of a user sliding in a preset area on the shooting preview interface.
7. A mobile terminal, wherein a camera of the mobile terminal comprises a target component having a movement attribute, the target component comprising a lens or an image sensor of the camera, and the mobile terminal comprises:
a receiving module, configured to receive a first input of a user on a shooting preview interface in a state in which a preview image of a first shooting field of view is displayed on the shooting preview interface, wherein the first input is used to adjust the shooting field of view;
a moving module, configured to move, in response to the first input, the target component to each of at least two preset positions for shooting, to obtain at least two images; and
an output module, configured to output a target image obtained by synthesizing the at least two images;
wherein the at least two preset positions correspond to an operation trajectory of the first input, and a second shooting field of view of the target image is larger than the first shooting field of view; and
the at least two preset positions include positions at which a vertex of the image sensor intersects a target point on an image circle of the lens.
8. The mobile terminal according to claim 7, wherein the moving module comprises:
a first moving unit, configured to move, in a case that the operation trajectory of the first input matches a first preset trajectory, the target component to each of at least two first preset positions for shooting, to obtain at least two images;
wherein, when the target component moves to a first preset position, a vertex of the image sensor intersects a first target point on the image circle of the lens; and the first target point is an intersection point of the image circle of the lens and a straight line along which a diagonal of the image sensor lies, when a center point of the image circle of the lens coincides with a center point of the image sensor.
9. The mobile terminal according to claim 7, wherein the moving module comprises:
a second moving unit, configured to move, in a case that the operation trajectory of the first input matches a second preset trajectory, the target component to each of at least two second preset positions for shooting, to obtain at least two images;
wherein, when the target component moves to a second preset position, a vertex of the image sensor intersects a second target point on the image circle of the lens; the second target point is a point on a target arc on the image circle of the lens; and the target arc is an arc between two adjacent intersection points among the intersection points of the image circle of the lens and the straight lines along which the four sides of the image sensor lie, when the center point of the image circle of the lens coincides with the center point of the image sensor.
10. The mobile terminal according to claim 7, wherein the moving module comprises:
an acquiring unit, configured to acquire each vertex of the operation trajectory of the first input; and
a third moving unit, configured to move the target component to a preset position corresponding to the position of each vertex of the operation trajectory of the first input for shooting, to obtain at least two images.
11. The mobile terminal according to claim 10, wherein the acquiring unit is specifically configured to:
acquire each vertex of the operation trajectory of the first input and an operation sequence of the vertices; and
the third moving unit is specifically configured to:
move the target component in turn, according to the operation sequence of the vertices, to the preset position corresponding to the position of each vertex of the operation trajectory of the first input for shooting, to obtain at least two images.
12. The mobile terminal according to any one of claims 7 to 11, wherein the receiving module is specifically configured to:
receive a first input of a user dragging a shooting identifier on the shooting preview interface; or
receive a first input of a user sliding in a preset area on the shooting preview interface.
13. A mobile terminal, characterized in that it comprises a processor, a memory and a computer program stored on said memory and executable on said processor, said computer program, when executed by said processor, implementing the steps of the photographing method according to any one of claims 1 to 6.
CN201810299768.1A 2018-04-04 2018-04-04 A kind of photographing method and mobile terminal Active CN108449546B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810299768.1A CN108449546B (en) 2018-04-04 2018-04-04 A kind of photographing method and mobile terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810299768.1A CN108449546B (en) 2018-04-04 2018-04-04 A kind of photographing method and mobile terminal

Publications (2)

Publication Number Publication Date
CN108449546A CN108449546A (en) 2018-08-24
CN108449546B true CN108449546B (en) 2020-03-31

Family

ID=63198251

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810299768.1A Active CN108449546B (en) 2018-04-04 2018-04-04 A kind of photographing method and mobile terminal

Country Status (1)

Country Link
CN (1) CN108449546B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110933303B (en) * 2019-11-27 2021-05-18 维沃移动通信(杭州)有限公司 Photographing method and electronic device
CN111010510B (en) * 2019-12-10 2021-11-16 维沃移动通信有限公司 Shooting control method and device and electronic equipment
CN111654620B (en) * 2020-05-26 2021-09-17 维沃移动通信有限公司 Shooting method and device
WO2022022715A1 (en) * 2020-07-30 2022-02-03 华为技术有限公司 Photographing method and device
CN112492215B (en) * 2020-12-09 2022-04-12 维沃移动通信有限公司 Shooting control method and device and electronic equipment
CN112995467A (en) * 2021-02-05 2021-06-18 深圳传音控股股份有限公司 Image processing method, mobile terminal and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010050521A (en) * 2008-08-19 2010-03-04 Olympus Corp Imaging device
CN102645836A (en) * 2012-04-20 2012-08-22 中兴通讯股份有限公司 Photograph shooting method and electronic apparatus
CN102739961A (en) * 2011-04-06 2012-10-17 卡西欧计算机株式会社 Image processing device capable of generating wide-range image
CN105120179A (en) * 2015-09-22 2015-12-02 三星电子(中国)研发中心 Shooting method and device
CN106231181A (en) * 2016-07-29 2016-12-14 广东欧珀移动通信有限公司 Panorama shooting method, device and terminal unit
CN107466474A (en) * 2015-05-26 2017-12-12 谷歌公司 Omni-directional stereo capture for mobile devices

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5493942B2 (en) * 2009-12-15 2014-05-14 ソニー株式会社 Imaging apparatus and imaging method
KR101784176B1 (en) * 2011-05-25 2017-10-12 삼성전자주식회사 Image photographing device and control method thereof
CN204408486U (en) * 2015-03-03 2015-06-17 深圳市宏天威科技有限公司 176 ° of super wide viewing angle video cameras
US9667848B2 (en) * 2015-04-22 2017-05-30 Qualcomm Incorporated Tiltable camera module
US20160353012A1 (en) * 2015-05-25 2016-12-01 Htc Corporation Zooming control method for camera and electronic apparatus with camera
CN105516676A (en) * 2015-12-29 2016-04-20 武汉光电工业技术研究院有限公司 Optical monitoring system
CN106101506A (en) * 2016-07-29 2016-11-09 广东欧珀移动通信有限公司 Camera control method and device
CN106550181B (en) * 2016-11-09 2020-01-03 华为机器有限公司 Camera module and terminal equipment

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010050521A (en) * 2008-08-19 2010-03-04 Olympus Corp Imaging device
CN102739961A (en) * 2011-04-06 2012-10-17 卡西欧计算机株式会社 Image processing device capable of generating wide-range image
CN102645836A (en) * 2012-04-20 2012-08-22 中兴通讯股份有限公司 Photograph shooting method and electronic apparatus
CN107466474A (en) * 2015-05-26 2017-12-12 谷歌公司 Omni-directional stereo capture for mobile devices
CN105120179A (en) * 2015-09-22 2015-12-02 三星电子(中国)研发中心 Shooting method and device
CN106231181A (en) * 2016-07-29 2016-12-14 广东欧珀移动通信有限公司 Panorama shooting method, device and terminal unit

Also Published As

Publication number Publication date
CN108449546A (en) 2018-08-24

Similar Documents

Publication Publication Date Title
CN111541845B (en) Image processing method and device and electronic equipment
CN108449546B (en) A kind of photographing method and mobile terminal
CN109361869B (en) Shooting method and terminal
CN108513070B (en) Image processing method, mobile terminal and computer-readable storage medium
US11451706B2 (en) Photographing method and mobile terminal
CN109660723B (en) Panoramic shooting method and device
WO2021051995A1 (en) Photographing method and terminal
CN109246360B (en) Prompting method and mobile terminal
CN111064895B (en) Virtual shooting method and electronic equipment
CN109474786B (en) A kind of preview image generation method and terminal
CN110445984B (en) Shooting prompting method and electronic equipment
CN107248137B (en) Method for realizing image processing and mobile terminal
CN110213485B (en) An image processing method and terminal
CN108881733A (en) A kind of panorama shooting method and mobile terminal
CN110266957B (en) Image shooting method and mobile terminal
CN108259743A (en) Panoramic image shooting method and electronic device
US20220086365A1 (en) Photographing method and terminal
CN107948505A (en) A kind of panorama shooting method and mobile terminal
CN110602389A (en) Display method and electronic equipment
CN108174110B (en) A kind of photographing method and flexible screen terminal
CN108833796A (en) An image capturing method and terminal
CN111447365A (en) A shooting method and electronic device
CN108156386B (en) Panoramic photographing method and mobile terminal
KR20220123077A (en) Image processing method and electronic device
CN108391050B (en) An image processing method and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant