Specific embodiment
Referring to Fig. 1, which is a structural schematic diagram of a first embodiment of a mobile terminal provided by the present application, the mobile terminal 10 includes a camera module 11, a controller 12, and a first sensor component 13.
The camera module 11 is specifically used for shooting, i.e., for taking photos or recording videos. The controller 12 may be a CPU (Central Processing Unit) or a GPU (Graphics Processing Unit), and is used for processing data or for issuing control instructions to control the operation of other components.
The first sensor component 13 is used to obtain the position of the target to be captured and the changes in that position. The controller 12 is used to determine the motion state of the target to be captured from these position changes and, according to that motion state, to perform motion compensation when the camera module shoots.
The motion state may include the moving direction, speed, and acceleration of the target to be captured; the description here focuses on speed. It should be understood that a feasible way to detect the speed of the target is to calculate it indirectly from the change in its position over time, as measured by an optical sensor. Detecting the position change of the target means obtaining the changes in the distance and the direction between the target and the capture apparatus. For example, in one case the direction of the line between the target and the mobile terminal is constant but the distance between them is changing; in another case the distance between them is constant but the direction of the line is changing. Several ways of detecting the distance between the object to be shot and the capture apparatus are introduced below.
Optionally, the first sensor component 13 may be a laser sensor assembly including a laser emitter and a laser receiver, and may specifically perform distance detection using TOF (Time of Flight). In time-of-flight 3D imaging, the laser emitter continuously transmits light pulses toward the object to be measured, the laser receiver receives the light returned from the object, and the object distance is obtained from the round-trip flight time of the detected light pulses.
The ToF ranging method belongs to two-way ranging technology: it measures the distance between nodes mainly by using the round-trip flight time of a signal between two asynchronous transceivers (or via a reflecting surface). Traditional ranging technology is divided into two-way ranging and one-way ranging. When the signal is well modulated or in non-line-of-sight environments, the result estimated by the RSSI (Received Signal Strength Indication) ranging method is relatively satisfactory; in line-of-sight environments, the ToF-based range estimation method can make up for the deficiencies of the RSSI-based method.
Optionally, the first sensor component 13 may be a 3D structured-light sensor assembly including a projector and a camera. The projector projects specific optical information onto the surface of the object to be measured and its background, and the camera then captures it. From the change in the optical signal caused by the object, the position, depth, and other information of the object are calculated, the entire three-dimensional space is reconstructed, and the distance to the object to be measured is thereby obtained.
Optionally, in other embodiments, distance detection may also be implemented with an ultrasonic sensor or a lidar. In addition to the distance sensor, an inertial sensor, an accelerometer, a gyroscope, or the like may also be used to further correct the detected distance.
Distance detection is introduced below through a specific embodiment.
As shown in Fig. 2, which is a structural schematic diagram of the first sensor component in the first embodiment of the mobile terminal provided by the present application, the first sensor component 13 is a multi-point laser ranging module including laser emitters 131 and corresponding laser receivers 132.
The laser emitter 131 is used to emit laser light toward the target to be captured, and the laser receiver 132 is used to receive the reflected laser light. The controller (not shown in Fig. 2) obtains the distance between the first sensor component 13 and the object to be shot A from the time difference between emission at the laser emitter 131 and reception at the corresponding laser receiver 132. It should be understood that, in a practical structure, the distance between the laser emitter 131 and the laser receiver 132 is so small that it can be ignored, so the emitted light and the reflected light can be regarded as approximately coincident. The distance between the first sensor component 13 and the object to be shot A can then be calculated with the following formula:

L = c·t / 2

where L is the distance between the first sensor component 13 and the object to be shot A, c is the speed of light, and t is the time difference between the moment the laser emitter 131 emits the laser and the moment the laser receiver 132 receives it.
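The round-trip time-of-flight calculation described above can be sketched as follows. This is a minimal illustration only; the function name and the use of seconds and metres are assumptions, not part of the embodiment.

```python
C = 299_792_458.0  # speed of light, m/s

def tof_distance(t_emit: float, t_receive: float) -> float:
    """Distance from the round-trip flight time of a laser pulse: L = c * t / 2.

    t_emit and t_receive are the emission and reception timestamps in seconds;
    halving accounts for the pulse travelling to the target and back.
    """
    t = t_receive - t_emit  # round-trip flight time
    return C * t / 2.0
```

For example, a pulse that returns roughly 6.67 ns after emission corresponds to a target about one metre away.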
Optionally, the laser emitters 131 and the corresponding laser receivers 132 may be distributed in an array, for example a 4×4 array.
As shown in Fig. 3, which is a schematic diagram of the movement of the target to be captured, the movement of the target at three moments t0, t1, and t2 is shown. It should be understood that, since the position of the mobile terminal generally does not change, the positions of the sensors can be considered fixed; then, if the distances detected by different sensors change, the target to be captured can be considered to have moved. Further, the movement of the target can be determined from the distance changes obtained by each sensor.
As shown in Fig. 4, which is a schematic diagram of the moving distance of the target to be captured, two adjacent sensors measure distances a and b, respectively, within a time interval t. Since the sensor angle θ can be accurately set in the design, production, and calibration process, the moving distance s of the target to be captured can be calculated:

s = √(a² + b² − 2ab·cos θ)

Further, the speed v of the target to be captured is:

v = s / t = √(a² + b² − 2ab·cos θ) / t

where v is the speed of the target to be captured, a is the distance between the first laser emitter and a target point on the target at the first moment, b is the distance between the second laser emitter and the target point at the second moment, θ is the angle between the first laser emitter and the second laser emitter, and t is the time span between the first moment and the second moment.
Referring to Fig. 5, which is a structural schematic diagram of a second embodiment of a mobile terminal provided by the present application, the mobile terminal 50 includes a camera module 51, a controller 52, a first sensor component 53, and a second sensor component 54.
The camera module 51 is specifically used for shooting, i.e., for taking photos or recording videos. The controller 52 may be a CPU (Central Processing Unit) or a GPU (Graphics Processing Unit), and is used for processing data or for issuing control instructions to control the operation of other components.
The first sensor component 53 is used to obtain the position changes of the target to be captured, and the controller 52 determines the motion state of the target from those position changes. The second sensor component 54 is used to obtain the motion state of the mobile terminal 50, and the controller 52 performs motion compensation when the camera module 51 shoots, according to both the motion state of the target and the motion state of the mobile terminal 50.
The motion state obtained with the first sensor component 53 is the moving direction, speed, acceleration, and the like of the target to be captured. The motion state obtained with the second sensor component 54 is the moving direction, speed, and acceleration of the mobile terminal 50, and may also be the jitter frequency, jitter amplitude, and the like of the mobile terminal 50.

Specifically, the second sensor component 54 may include two gyroscopes (or accelerometers) that detect the left-right and front-back tilt angles, respectively, and transfer these two angles to the controller 52; the controller 52 then obtains the motion state of the mobile terminal 50 from this angle information.
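The embodiment does not specify how the controller 52 combines the two tilt angles, so the following is only one hedged illustration: collapsing the two axes into a single overall tilt magnitude. The function name and the metric itself are assumptions.

```python
import math

def terminal_tilt(left_right_deg: float, front_back_deg: float) -> float:
    """Combine the two gyroscope tilt angles (degrees) into one overall tilt
    magnitude, treating the two axes as orthogonal components.

    This is an illustrative metric only; the actual processing performed by
    controller 52 is not specified in the embodiment.
    """
    return math.hypot(left_right_deg, front_back_deg)
```

A terminal tilted 3° left-right and 4° front-back would report an overall tilt of 5° under this metric.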
Referring to Fig. 6, which is a structural schematic diagram of a third embodiment of a mobile terminal provided by the present application, the mobile terminal 60 includes a camera module 61, a controller 62, and a first sensor component 63.

The camera module 61 is specifically used for shooting, i.e., for taking photos or recording videos. The controller 62 may be a CPU (Central Processing Unit) or a GPU (Graphics Processing Unit), and is used for processing data or for issuing control instructions to control the operation of other components.
The first sensor component 63 is used to obtain the position changes of the target to be captured. The controller 62 determines the motion state of the target from those position changes and, according to that motion state, issues a control instruction to the camera module 61 so as to perform motion compensation when the camera module shoots.
As shown in Fig. 7, which is a structural schematic diagram of the camera module in the third embodiment of the mobile terminal provided by the present application, the camera module 61 specifically includes a camera motor 611, a lens 612, and a driver 613.
The camera module 61 may specifically be arranged on a circuit board 60a in the mobile terminal 60. The circuit board 60a may specifically be a flexible printed circuit (FPC) connected to the mainboard of the electronic device through a BTB (board-to-board) connector. In addition, the camera module 61 may also include other components such as a light sensor and a flash, or may be combined with other modules such as an earpiece to form a multi-functional module, which is not repeated here.
The camera motor 611 may be a mechanical motor, an electronic motor, a ring-shaped ultrasonic motor, or the like, which is not limited here. Optionally, in one embodiment, a voice coil motor can be used to adjust the position of the lens 612. A voice coil motor (Voice Coil Actuator/Voice Coil Motor) is a device that converts electrical energy into mechanical energy and realizes linear motion and motion over a limited swing angle.
Referring also to Fig. 8, which is a structural schematic diagram of the camera motor in the third embodiment of the mobile terminal provided by the present application, the camera motor 611 includes an anti-magnetic cover 611a, a magnet 611b, an upper spring sheet 611c, a lens carrier 611d, a motor coil 611e, a lower spring sheet 611f, and a motor base 611g. The lens 612 is fixed on the lens carrier 611d. The driver 613 is specifically used to obtain a driving instruction so as to control the current strength in the motor coil 611e and thereby control the movement of the lens carrier 611d within the motor 611.
Optionally, the camera motor 611 may be a voice coil motor, whose working principle is as follows: in the permanent magnetic field formed by the magnet 611b, changing the magnitude of the direct current in the motor coil 611e controls the stretched position of the spring sheets (i.e., the upper spring sheet 611c and the lower spring sheet 611f), thereby driving the lens carrier 611d, and in turn the lens 612, to move back and forth.
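The relation between drive current and lens position described above can be illustrated with a simple equilibrium model: the coil's force is taken as proportional to current and balanced against the spring sheets' restoring force. The linear model and the placeholder constants below are assumptions for illustration, not values from the embodiment.

```python
def vcm_lens_position(current_a: float,
                      force_constant: float = 0.5,    # N per ampere (placeholder)
                      spring_constant: float = 100.0  # N per metre (placeholder)
                      ) -> float:
    """Equilibrium displacement of a voice-coil-motor lens carrier.

    The coil force F = K * I (K: force constant) balances the spring force
    F = k * x (k: combined stiffness of the spring sheets), so x = K * I / k.
    """
    return force_constant * current_a / spring_constant
```

Under these placeholder constants, a 0.1 A drive current would hold the carrier about 0.5 mm from its rest position; doubling the current doubles the displacement, which is the proportional control the driver 613 relies on.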
In combination with the above embodiments, after the controller 62 in this embodiment obtains the motion state of the target to be captured (for example, its movement speed), motion compensation can be carried out in two ways.
The first way may also be called optical image stabilization. Optical image stabilization suspends the lens with magnetic force, thereby effectively overcoming image blur caused by camera vibration; the effect is more obvious for digital cameras with large zoom lenses. In general, a gyroscope in the lens detects small movements and immediately passes the signal to a microprocessor, which calculates the displacement that needs to be compensated; the compensation lens group then compensates according to the jitter direction and displacement of the lens, adjusting its position and angle accordingly so that the optical path remains stable, thereby effectively overcoming image blur caused by camera vibration.
In this embodiment, the optical image stabilization technique is applied to the movement of the target to be captured. When the laser array detects that the target has moved, the microprocessor determines its movement speed from the change in its distance; the compensation lens group then compensates according to the moving direction and speed of the target, adjusting its position and angle accordingly so that the optical path remains stable, thereby effectively overcoming image blur.
Specifically, the movement speed of the lens can be adjusted in a certain proportion. For example, the distance between the mobile terminal and the target to be captured was obtained in the foregoing embodiment; assuming here that the distances from the lens and from the sensor to the target are the same, the movement speed of the lens can be adjusted according to the following formula:

v2 = v1 · s2 / s1

where v1 is the movement speed of the target to be captured, v2 is the movement speed of the lens, s1 is the distance between the target and the CCD (image sensor), and s2 is the distance between the lens and the CCD.
The second way may also be called electronic image stabilization. Electronic image stabilization mainly refers to forcibly raising the sensitivity parameter of the CCD while speeding up the shutter, analyzing the image obtained on the CCD, and then compensating using the edge of the image. Electronic stabilization is actually a technique that compensates for jitter by reducing image quality; it tries to strike a balance between image quality and jitter.
Electronic image stabilization produces its anti-shake effect by processing the picture with a digital circuit. When the stabilization circuit works, the captured picture is only about 90% of the full frame; the digital circuit then performs blur judgment on the jitter of the target to be captured and carries out jitter compensation with the remaining roughly 10% of the picture. This approach is low-cost, but it reduces the utilization of the CCD and brings some loss of image sharpness.
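One common way to realize the 90%-frame scheme described above is to slide a crop window against the measured jitter, using the reserved margin to absorb the shake. The sketch below is an illustration of that idea under assumed names and a 5%-per-side margin, not the embodiment's exact circuit logic.

```python
def stabilized_crop(frame_w: int, frame_h: int,
                    jitter_dx: int, jitter_dy: int,
                    margin: float = 0.05):
    """Return (x0, y0, width, height) of a stabilized crop window.

    A margin of 5% per side leaves a 90%-of-frame crop on each axis.  The
    window is shifted opposite to the measured jitter, clamped so it never
    leaves the sensor area.
    """
    mx, my = int(frame_w * margin), int(frame_h * margin)
    # Shift against the jitter, limited by the available margin.
    dx = max(-mx, min(mx, -jitter_dx))
    dy = max(-my, min(my, -jitter_dy))
    return (mx + dx, my + dy, frame_w - 2 * mx, frame_h - 2 * my)
```

On a 1000×1000 frame, a jitter of (+20, −60) pixels moves the 900×900 crop left by 20 pixels and down by the full 50-pixel margin (the 60-pixel request exceeds what the margin can absorb), which is the sharpness-for-stability trade the text describes.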
In contrast to the prior art, the mobile terminal in this embodiment comprises: a camera module; a first sensor component for obtaining the distance of the target to be captured; and a controller for determining the motion state of the target according to changes in that distance and, according to the motion state of the target, performing motion compensation when the camera module shoots. In this way, motion compensation can be carried out during shooting based on the movement speed of the target, which ensures the clarity of the moving target and improves the quality of shooting moving objects.
Referring to Fig. 9, which is a flow diagram of a first embodiment of a photographing method for a mobile terminal provided by the present application, the method includes:
Step 91: Obtain the position changes of the target to be captured.
In one case, the direction of the line between the target to be captured and the mobile terminal is constant but the distance between them is changing; in another case, the distance between them is constant but the direction of the line is changing.
Optionally, a laser sensor assembly can be used to obtain the position changes of the target to be captured. Specifically, the laser sensor assembly includes a laser emitter and a laser receiver and performs distance detection using TOF (Time of Flight). In time-of-flight 3D imaging, the laser emitter continuously transmits light pulses toward the object to be measured, the laser receiver receives the light returned from the object, and the object distance is obtained from the round-trip flight time of the detected light pulses.
The ToF ranging method belongs to two-way ranging technology: it measures the distance between nodes mainly by using the round-trip flight time of a signal between two asynchronous transceivers (or via a reflecting surface). Traditional ranging technology is divided into two-way ranging and one-way ranging. When the signal is well modulated or in non-line-of-sight environments, the result estimated by the RSSI (Received Signal Strength Indication) ranging method is relatively satisfactory; in line-of-sight environments, the ToF-based range estimation method can make up for the deficiencies of the RSSI-based method.
Optionally, the distance of the target to be captured can also be obtained with a structured-light sensor assembly. Specifically, the structured-light sensor assembly includes a projector and a camera: the projector projects specific optical information onto the surface of the object to be measured and its background, and the camera then captures it. From the change in the optical signal caused by the object, the position, depth, and other information of the object are calculated, the entire three-dimensional space is reconstructed, and the distance of the object to be measured is thereby obtained.
Step 92: Determine the motion state of the target to be captured according to its position changes.
Optionally, the distance detection can be carried out with a multi-point laser matrix. Optionally, the multi-point laser matrix may adopt a 4×4 array distribution, i.e., 16 laser emitters and 16 corresponding laser receivers. The distance of the object to be shot can then be calculated with the following formula:

L = c·t / 2

where L is the distance between the mobile terminal and the object to be shot, c is the speed of light, and t is the time difference between the moment the laser emitter emits the laser and the moment the laser receiver receives it.
It should be understood that, since the position of the mobile terminal generally does not change, the positions of the sensors can be considered fixed; then, if the distances detected by different sensors change, the target to be captured can be considered to have moved. Further, the movement of the target can be determined from the distance changes obtained by each sensor.
With reference to Fig. 4, two adjacent sensors measure distances a and b, respectively, within a time interval t. Since the sensor angle θ can be accurately designed in the design, production, and calibration process, the moving distance s of the target to be captured can be calculated:

s = √(a² + b² − 2ab·cos θ)

Further, the speed v of the target to be captured is:

v = s / t = √(a² + b² − 2ab·cos θ) / t

where v is the speed of the target, a is the distance between the first laser emitter and a target point on the target at the first moment, b is the distance between the second laser emitter and the target point at the second moment, θ is the angle between the first laser emitter and the second laser emitter, and t is the time span between the first moment and the second moment.
Step 93: When the camera module shoots, perform motion compensation according to the motion state of the target to be captured.
Optionally, step 93 may specifically be: when the camera module shoots, adjust the position of the lens in the camera module according to the motion state of the target to be captured, so as to perform motion compensation while the camera module shoots.
This way may also be called optical image stabilization. Optical image stabilization suspends the lens with magnetic force, thereby effectively overcoming image blur caused by camera vibration; the effect is more obvious for digital cameras with large zoom lenses. In general, a gyroscope in the lens detects small movements and immediately passes the signal to a microprocessor, which calculates the displacement that needs to be compensated; the compensation lens group then compensates according to the jitter direction and displacement of the lens, adjusting its position and angle accordingly so that the optical path remains stable, thereby effectively overcoming image blur caused by camera vibration.
In this embodiment, the optical image stabilization technique is applied to the movement of the target to be captured. When the laser array detects that the target has moved, the microprocessor determines its movement speed from the change in its distance; the compensation lens group then compensates according to the moving direction and speed of the target, adjusting its position and angle accordingly so that the optical path remains stable, thereby effectively overcoming image blur.
Specifically, the movement speed of the lens can be adjusted in a certain proportion. For example, the distance between the mobile terminal and the target to be captured was obtained in the foregoing embodiment; assuming here that the distances from the lens and from the sensor to the target are the same, the movement speed of the lens can be adjusted according to the following formula:

v2 = v1 · s2 / s1

where v1 is the movement speed of the target to be captured, v2 is the movement speed of the lens, s1 is the distance between the target and the CCD (image sensor), and s2 is the distance between the lens and the CCD.
Optionally, step 93 may specifically be: when the camera module shoots, adjust the exposure time of the camera module so as to perform motion compensation while the camera module shoots.
This way may also be called electronic image stabilization. Electronic image stabilization mainly refers to forcibly raising the sensitivity parameter of the CCD while speeding up the shutter, analyzing the image obtained on the CCD, and then compensating using the edge of the image. Electronic stabilization is actually a technique that compensates for jitter by reducing image quality; it tries to strike a balance between image quality and jitter.
Electronic image stabilization produces its anti-shake effect by processing the picture with a digital circuit. When the stabilization circuit works, the captured picture is only about 90% of the full frame; the digital circuit then performs blur judgment on the jitter of the target to be captured and carries out jitter compensation with the remaining roughly 10% of the picture. This approach is low-cost, but it reduces the utilization of the CCD and brings some loss of image sharpness.
Referring to Fig. 10, which is a flow diagram of a second embodiment of the photographing method for a mobile terminal provided by the present application, the method includes:
Step 101: Obtain the position changes of the target to be captured.

Step 102: Determine the motion state of the target to be captured according to its position changes.

Step 103: Obtain the motion state of the mobile terminal.
In general, the gyroscope in the lens detects small movements, from which the motion state of the mobile terminal is obtained.
It should be understood that steps 101 and 102 are used to detect the motion state of the target to be captured, while step 103 is used to detect the motion state of the mobile terminal; the order of these steps can be exchanged, or they may be performed simultaneously, which is not restricted here.
Step 104: When the camera module shoots, perform motion compensation according to the motion state of the target to be captured and the motion state of the mobile terminal.
Optionally, compared with compensating for a single movement, the motion states of both the target to be captured and the mobile terminal are obtained here and compensated for simultaneously.

For example, if the target and the mobile terminal move in the same direction, the offset can be smaller than for the target's movement alone; if they move in opposite directions, the offset can be larger than for the target's movement alone.
Specifically, an offset can be obtained separately for each type of motion. For example, a first offset is obtained for the speed of the target to be captured, and a second offset is obtained for the speed of the mobile terminal. It is then judged whether the moving direction of the target and the moving direction of the mobile terminal are the same: if they are the same, the sum of the first offset and the second offset is taken as the final offset; if not, the difference between the first offset and the second offset is taken as the final offset.
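The combination rule just described can be sketched directly. The function follows the text's stated rule (sum when the directions match, difference otherwise); the function name and the use of magnitudes are assumptions.

```python
def combined_offset(target_offset: float, terminal_offset: float,
                    same_direction: bool) -> float:
    """Final compensation offset from the target's and the terminal's offsets.

    Per the described rule: when the target and the mobile terminal move in
    the same direction, the two offsets are summed; otherwise the (absolute)
    difference of the two is used.
    """
    if same_direction:
        return target_offset + terminal_offset
    return abs(target_offset - terminal_offset)
```

For example, with a first offset of 3 and a second offset of 1, matching directions yield a final offset of 4, while opposite directions yield 2.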
Referring to Fig. 11, which is a structural schematic diagram of an embodiment of a computer storage medium provided by the present application, the computer storage medium 110 stores a computer program 111, and the computer program 111, when executed by a processor, implements the following method:
obtaining the position changes of the target to be captured; determining the motion state of the target according to those position changes; and, when the camera module shoots, performing motion compensation according to the motion state of the target.
Optionally, the computer program 111, when executed by a processor, is also used to implement the following method: emitting laser light toward the target to be captured with a multi-point laser ranging module; receiving the reflected laser light with the multi-point laser ranging module; and determining the position changes of the target based on the time difference between emitting and receiving the laser light.
Optionally, the computer program 111, when executed by a processor, is also used to implement the following method: calculating the speed of the target to be captured with the formula

v = √(a² + b² − 2ab·cos θ) / t

where v is the speed of the target, a is the distance between the first laser emitter and a target point on the target at the first moment, b is the distance between the second laser emitter and the target point at the second moment, θ is the angle between the first laser emitter and the second laser emitter, and t is the time span between the first moment and the second moment.
Optionally, the computer program 111, when executed by a processor, is also used to implement the following method: when the camera module shoots, performing motion compensation according to the motion state of the target to be captured and the motion state of the mobile terminal.
Optionally, the computer program 111, when executed by a processor, is also used to implement the following method: when the camera module shoots, adjusting the position of the lens in the camera module according to the motion state of the target to be captured, so as to perform motion compensation while the camera module shoots.
Optionally, the computer program 111, when executed by a processor, is also used to implement the following method: when the camera module shoots, adjusting the exposure time of the camera module so as to perform motion compensation while the camera module shoots.
If the embodiments of the present application are realized in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application in essence, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The foregoing is merely an embodiment of the present application and is not intended to limit its patent scope; any equivalent structure or equivalent process transformation made using the contents of the specification and drawings of the present application, whether applied directly or indirectly in other related technical fields, is likewise included within the patent protection scope of the present application.