Detailed Description of the Embodiments
Embodiments of the present application are described in detail below with reference to the accompanying drawings, in which the same or similar reference numerals denote, throughout, the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary, are intended only to explain the present application, and should not be construed as limiting the present application.
Referring to Fig. 1 and Fig. 2, an electronic device 100 according to an embodiment of the present application includes a body 10, time-of-flight assemblies 20, camera assemblies 30, microprocessors 40, and an application processor 50.
The body 10 has a plurality of different orientations. As shown in Fig. 1, the body 10 may, for example, have four different orientations, which are, in clockwise order: a first orientation, a second orientation, a third orientation, and a fourth orientation, the first orientation being opposite to the third orientation and the second orientation being opposite to the fourth orientation. The first orientation corresponds to the upper side of the body 10, the second orientation corresponds to the right side of the body 10, the third orientation corresponds to the lower side of the body 10, and the fourth orientation corresponds to the left side of the body 10.
The time-of-flight assemblies 20 are arranged on the body 10. The number of time-of-flight assemblies 20 may be more than one, the multiple time-of-flight assemblies 20 being located at the multiple different orientations of the body 10. Specifically, the number of time-of-flight assemblies 20 may be two, namely a time-of-flight assembly 20a and a time-of-flight assembly 20b. The time-of-flight assembly 20a is arranged at the first orientation, and the time-of-flight assembly 20b is arranged at the third orientation. Of course, the number of time-of-flight assemblies 20 may also be four (or any other number greater than two), in which case the two additional time-of-flight assemblies 20 may be arranged at the second orientation and the fourth orientation, respectively. The embodiments of the present application are described taking two time-of-flight assemblies 20 as an example. It will be understood that two time-of-flight assemblies 20 suffice to obtain a panoramic depth image (a panoramic depth image being one whose field of view is greater than or equal to 180 degrees; for example, the field of view of the panoramic depth image may be 180 degrees, 240 degrees, 360 degrees, 480 degrees, 720 degrees, and so on), which helps save the manufacturing cost of the electronic device 100 and reduce its volume and power consumption. The electronic device 100 of this embodiment may be a portable electronic device provided with multiple time-of-flight assemblies 20, such as a mobile phone, a tablet computer, or a laptop computer, in which case the body 10 may be a mobile phone body, a tablet computer body, a laptop computer body, or the like. For an electronic device 100 with strict thickness requirements, such as a mobile phone, whose body is required to be thin, the sides of the body usually cannot accommodate a time-of-flight assembly 20; arranging two time-of-flight assemblies 20 to obtain the panoramic depth image then solves this problem, the two time-of-flight assemblies 20 being mounted on the front and the back of the mobile phone body, respectively. In addition, obtaining the panoramic depth image with only two time-of-flight assemblies 20 also helps reduce the amount of computation needed for the panoramic depth image.
Each time-of-flight assembly 20 includes two light emitters 22 and one light receiver 24. The light emitters 22 are configured to emit laser pulses outward from the body 10, and the light receiver 24 is configured to receive the laser pulses, emitted by the corresponding two light emitters 22, that are reflected by a target subject. Specifically, the time-of-flight assembly 20a includes a light emitter 222a, a light emitter 224a, and a light receiver 24a, and the time-of-flight assembly 20b includes a light emitter 222b, a light emitter 224b, and a light receiver 24b. The light emitters 222a and 224a both emit laser pulses toward the first orientation outside the body 10, and the light emitters 222b and 224b both emit laser pulses toward the third orientation outside the body 10. The light receiver 24a receives the laser pulses emitted by the light emitters 222a and 224a and reflected by the target subject of the first orientation, and the light receiver 24b receives the laser pulses emitted by the light emitters 222b and 224b and reflected by the target subject of the third orientation, so that the different regions outside the body 10 can all be covered. Compared with an existing device that must be rotated through 360 degrees to obtain reasonably comprehensive depth information, the electronic device 100 of this embodiment obtains comprehensive depth information in a single pass without rotating, is simple to operate, and responds quickly.
The light emitters 22 of the multiple time-of-flight assemblies 20 (for example, the two time-of-flight assemblies 20) emit laser pulses simultaneously, and, correspondingly, the light receivers 24 of the multiple time-of-flight assemblies 20 are exposed simultaneously, so as to obtain the panoramic depth image. Specifically, the light emitters 222a, 224a, 222b, and 224b emit laser pulses simultaneously, and the light receivers 24a and 24b are exposed simultaneously. Because the multiple light emitters 22 emit simultaneously and the multiple light receivers 24 are exposed simultaneously, the multiple initial depth images obtained from the laser pulses received by the multiple light receivers 24 share the same timeliness: they reflect the scenes in the respective orientations outside the body 10 at the same moment, that is, they form a panoramic depth image of a single moment.
The field of view of each light emitter 22 is any value from 80 degrees to 120 degrees, and the field of view of each light receiver 24 is any value from 180 degrees to 200 degrees.
In one embodiment, the field of view of each light emitter 22 is any value from 80 degrees to 90 degrees; for example, the fields of view of the light emitters 222a, 224a, 222b, and 224b are all 80 degrees, and the fields of view of the light receivers 24a and 24b are both 180 degrees. When the field of view of a light emitter 22 is smaller, its manufacturing process is simpler, its manufacturing cost is lower, and the uniformity of the emitted laser is improved. When the field of view of a light receiver 24 is smaller, the lens distortion is smaller, the quality of the acquired initial depth images is better, the quality of the resulting panoramic depth image is better, and more accurate depth information can be obtained.
In one embodiment, the fields of view of the light emitters 222a, 224a, 222b, and 224b sum to 360 degrees, and the fields of view of the light receivers 24a and 24b sum to 360 degrees. Specifically, the fields of view of the light emitters 222a, 224a, 222b, and 224b may each be 90 degrees, and the fields of view of the light receivers 24a and 24b may each be 180 degrees, with the fields of view of the four light emitters 22 not overlapping one another and the fields of view of the two light receivers 24 not overlapping each other, so that a panoramic depth image of 360 degrees or approximately 360 degrees is obtained. Alternatively, the fields of view of the light emitters 222a and 224a may each be 80 degrees, the fields of view of the light emitters 222b and 224b each 100 degrees, the fields of view of the light receivers 24a and 24b each 180 degrees, and so on; the four light emitters 22 complement one another in angle, and the two light receivers 24 complement each other in angle, to obtain a panoramic depth image of 360 degrees or approximately 360 degrees.
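By way of illustration only, the angular complementarity described above can be checked mechanically. The following Python sketch (the orientation angles and the function itself are illustrative assumptions, not part of the embodiment) tests whether a set of fields of view, each given as a center direction and an angular span around the body 10, jointly covers the full 360 degrees:

```python
def covers_full_circle(fovs):
    """fovs: list of (center_deg, span_deg). Returns True if the union
    of the angular intervals covers all 360 degrees of the circle."""
    events = []
    for center, span in fovs:
        start = (center - span / 2) % 360
        end = start + span
        events.append((start, end))
        events.append((start + 360, end + 360))  # unwrap once around
    events.sort()
    # Chain overlapping intervals until 360 degrees have been covered.
    covered_until = events[0][0]
    for start, end in events:
        if start > covered_until:
            return False  # gap in coverage
        covered_until = max(covered_until, end)
        if covered_until >= events[0][0] + 360:
            return True
    return False

# Two receivers with 180-degree fields of view facing opposite orientations:
print(covers_full_circle([(90, 180), (270, 180)]))  # True: complementary
print(covers_full_circle([(90, 180), (200, 100)]))  # False: leaves a gap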
In one embodiment, the fields of view of the light emitters 222a, 224a, 222b, and 224b sum to more than 360 degrees, and the fields of view of the light receivers 24a and 24b sum to more than 360 degrees, the fields of view of at least two of the four light emitters 22 overlapping each other and the fields of view of the two light receivers 24 overlapping each other. Specifically, the fields of view of the light emitters 222a, 224a, 222b, and 224b may each be 100 degrees, with the fields of view of the four light emitters 22 overlapping pairwise, and the fields of view of the light receivers 24a and 24b may each be 200 degrees, with the fields of view of the two light receivers 24 overlapping each other. When obtaining the panoramic depth image, the overlapping edge portions of the two initial depth images may first be identified, and the two initial depth images then stitched into a 360-degree panoramic depth image. Because the fields of view of the four light emitters 22 overlap pairwise and the fields of view of the two light receivers 24 overlap each other, the resulting panoramic depth image is guaranteed to cover the full 360 degrees of depth information around the body 10.
Of course, the specific values of the fields of view of each light emitter 22 and each light receiver 24 are not limited to the examples above. A person skilled in the art may, as needed, set the field of view of a light emitter 22 to any value from 80 degrees to 120 degrees and the field of view of a light receiver 24 to any value from 180 degrees to 200 degrees, for example: 80, 82, 84, 86, 90, 92, 94, 96, 98, 104, or 120 degrees, or any value in between, for the light emitter 22; and 180, 181, 182, 187, 188, 193.2, 195, or 200 degrees, or any value in between, for the light receiver 24. No restriction is imposed here.
Referring to Fig. 3, each light emitter 22 includes a light source 222 and a diffuser 224. The light source 222 emits laser light (for example infrared laser light, in which case the light receiver 24 is an infrared camera), and the diffuser 224 diffuses the laser light emitted by the light source 222.
In general, the laser pulses emitted by the adjacent light emitters 22 of two adjacent time-of-flight assemblies 20 are prone to interfering with each other, for example when the fields of view of the light emitters 22 of the two adjacent time-of-flight assemblies 20 overlap each other. Therefore, in order to improve the accuracy of the acquired depth information, the wavelengths of the laser pulses emitted by the adjacent light emitters 22 of two adjacent time-of-flight assemblies 20 may be different, so that the initial depth images can be distinguished and computed.
Specifically, suppose the wavelength of the laser pulses emitted by the light emitter 222a of the first orientation is λ1, the wavelength of the laser pulses emitted by the light emitter 224a of the first orientation is λ2, the wavelength of the laser pulses emitted by the light emitter 222b of the third orientation is λ3, and the wavelength of the laser pulses emitted by the light emitter 224b of the third orientation is λ4. It is then sufficient that λ1 ≠ λ3 and λ2 ≠ λ4. Here λ1 may or may not be equal to λ2 (since the light emitters 222a and 224a are located in the same orientation and belong to the same time-of-flight assembly 20a, having λ1 equal to λ2 with mutually overlapping fields of view has little effect on the acquisition of depth information, so λ1 and λ2 may be equal or unequal); likewise λ3 may or may not be equal to λ4 (for the same reason, having λ3 equal to λ4 with mutually overlapping fields of view has little effect on the acquisition of depth information); λ1 may or may not be equal to λ4; and λ2 may or may not be equal to λ3. Preferably, the wavelength of the laser pulses emitted by each light emitter 22 is different, to further improve the accuracy of the acquired depth information. In other words, when λ1 ≠ λ2 ≠ λ3 ≠ λ4, the laser pulses emitted by the multiple light emitters 22 do not interfere with one another, which makes the computation of the initial depth images simplest. In addition, each light receiver 24 is configured to receive only the laser pulses of the corresponding wavelengths emitted by its corresponding light emitters 22. For example, the light receiver 24a receives the laser pulses of the corresponding wavelengths emitted by the light emitters 222a and 224a and cannot receive the laser pulses of the corresponding wavelengths emitted by the light emitters 222b and 224b. Similarly, the light receiver 24b only receives the laser pulses of the corresponding wavelengths emitted by the light emitters 222b and 224b.
Taking as an example the case where the laser pulses emitted by the light emitters 22 are infrared light, whose wavelength lies between 770 nanometers and 1 millimeter, λ1 may be any value between 770 nm and 1000 nm, λ2 any value between 1000 nm and 1200 nm, λ3 any value between 1200 nm and 1400 nm, and λ4 any value between 1400 nm and 1600 nm. The light receiver 24a receives the 770 nm to 1000 nm laser pulses emitted by the light emitter 222a and the 1000 nm to 1200 nm laser pulses emitted by the light emitter 224a, and the light receiver 24b receives the 1200 nm to 1400 nm laser pulses emitted by the light emitter 222b and the 1400 nm to 1600 nm laser pulses emitted by the light emitter 224b.
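The band assignment above can be sketched as a simple lookup. The following Python fragment is a minimal illustration; the band edges follow the example values in this paragraph, while the data structures and function are assumptions about one possible organization:

```python
# Wavelength band (in nanometers) assigned to each light emitter, using
# the example values above: λ1..λ4 in disjoint infrared sub-bands.
EMITTER_BANDS_NM = {
    "222a": (770, 1000),    # λ1, first orientation
    "224a": (1000, 1200),   # λ2, first orientation
    "222b": (1200, 1400),   # λ3, third orientation
    "224b": (1400, 1600),   # λ4, third orientation
}

# Each light receiver accepts only the bands of its own two emitters.
RECEIVER_ACCEPTS = {
    "24a": ("222a", "224a"),
    "24b": ("222b", "224b"),
}

def receiver_accepts(receiver: str, wavelength_nm: float) -> bool:
    """True if a pulse of this wavelength falls inside one of the bands
    of the emitters paired with this receiver."""
    return any(
        lo <= wavelength_nm < hi
        for lo, hi in (EMITTER_BANDS_NM[e] for e in RECEIVER_ACCEPTS[receiver])
    )

print(receiver_accepts("24a", 850))    # True: within λ1's band
print(receiver_accepts("24a", 1300))   # False: λ3 belongs to receiver 24b
```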
It should be pointed out that, besides making the wavelengths of the laser pulses emitted by the light emitters 22 different, a person skilled in the art may use other means to prevent different time-of-flight assemblies 20 from interfering with one another when operating simultaneously; no restriction is imposed here. Alternatively, if the degree of interference is small, it may be ignored and the initial depth images computed directly; or, when computing the initial depth images, the influence of the interference may be filtered out by appropriate algorithmic processing.
Referring again to Fig. 1 and Fig. 2, the camera assemblies 30 are arranged on the body 10. The number of camera assemblies 30 may be more than one, each camera assembly 30 corresponding to one time-of-flight assembly 20. For example, when there are two time-of-flight assemblies 20, there are also two camera assemblies 30, arranged at the first orientation and the third orientation, respectively. The multiple camera assemblies 30 are all connected to the application processor 50. Each camera assembly 30 captures a scene image of the target subject and outputs it to the application processor 50. In this embodiment, the two camera assemblies 30 capture the scene image of the target subject of the first orientation and the scene image of the target subject of the third orientation, respectively, and output them to the application processor 50. It will be appreciated that the field of view of each camera assembly 30 is the same as, or approximately the same as, the field of view of the light receiver 24 of the corresponding time-of-flight assembly 20, so that each scene image can be better matched with the corresponding initial depth image.
A camera assembly 30 may be a visible light camera 32 or an infrared camera 34. When the camera assembly 30 is a visible light camera 32, the scene image is a visible light image; when the camera assembly 30 is an infrared camera 34, the scene image is an infrared light image.
Referring to Fig. 2, a microprocessor 40 may be a processing chip. The number of microprocessors 40 may be more than one, each microprocessor 40 corresponding to one time-of-flight assembly 20. For example, in this embodiment there are two time-of-flight assemblies 20 and therefore two microprocessors 40. Each microprocessor 40 is connected to both the light emitters 22 and the light receiver 24 of its corresponding time-of-flight assembly 20. Each microprocessor 40 can drive its corresponding light emitters 22 to emit laser pulses via a drive circuit, and the control exercised by the multiple microprocessors 40 causes the four light emitters 22 to emit laser pulses simultaneously. Each microprocessor 40 also provides its corresponding light receiver 24 with the clock information for receiving the laser pulses so that the light receiver 24 is exposed, and the control exercised by the two microprocessors 40 causes the two light receivers 24 to be exposed simultaneously. Each microprocessor 40 further obtains an initial depth image from the laser pulses emitted by the light emitters 22 of the corresponding time-of-flight assembly 20 and the laser pulses received by the corresponding light receiver 24. For example, the two microprocessors 40 respectively obtain an initial depth image P1 from the laser pulses emitted by the light emitters of the time-of-flight assembly 20a and the laser pulses received by the light receiver 24a, and an initial depth image P2 from the laser pulses emitted by the light emitters of the time-of-flight assembly 20b and the laser pulses received by the light receiver 24b (as shown in the upper part of Fig. 4). Each microprocessor 40 may also perform processing such as tiling, distortion correction, and self-calibration on the initial depth images to improve their quality.
It will be appreciated that there may instead be a single microprocessor 40, in which case the microprocessor 40 must obtain the initial depth images one at a time, sequentially, from the laser pulses emitted by the light emitters 22 and received by the light receiver 24 of each corresponding time-of-flight assembly 20. Compared with one microprocessor 40, two microprocessors 40 process faster and with lower latency.
The two microprocessors 40 are both connected to the application processor 50 and transmit the initial depth images to the application processor 50. In one embodiment, a microprocessor 40 may be connected to the application processor 50 through a Mobile Industry Processor Interface (MIPI); specifically, the microprocessor 40 is connected through the MIPI to a Trusted Execution Environment (TEE) of the application processor 50, so that the data in the microprocessor 40 (the initial depth images) are transmitted directly into the TEE, improving the security of the information within the electronic device 100. The code and memory regions in the TEE are controlled by an access control unit and cannot be accessed by programs in the Rich Execution Environment (REE); both the TEE and the REE may be formed in the application processor 50.
The application processor 50 may serve as the system of the electronic device 100. The application processor 50 can reset, wake, and debug the microprocessors 40. The application processor 50 may also be connected to multiple electronic components of the electronic device 100 and control them to operate in predetermined modes. For example, the application processor 50 is connected to the visible light camera 32 and the infrared camera 34 to control them to capture visible light images and infrared light images and to process those images; when the electronic device 100 includes a display screen, the application processor 50 can control the display screen to display predetermined content; and the application processor 50 can control an antenna of the electronic device 100 to send or receive predetermined data, and so on.
Referring to Fig. 4, in one embodiment the application processor 50 is configured to synthesize the two initial depth images obtained by the two microprocessors 40 into one frame of panoramic depth image according to the fields of view of the light receivers 24.

Specifically, referring also to Fig. 1, a rectangular coordinate system XOY is established with the center of the body 10 as the origin O, the horizontal axis as the X axis, and the vertical axis as the Y axis. In the coordinate system XOY, the field of view of the light receiver 24a lies between 190 degrees and 350 degrees (rotating clockwise, the same applies below), the field of view of the light emitter 222a lies between 190 degrees and 90 degrees, the field of view of the light emitter 224a lies between 90 degrees and 350 degrees, the field of view of the light receiver 24b lies between 10 degrees and 170 degrees, the field of view of the light emitter 222b lies between 270 degrees and 170 degrees, and the field of view of the light emitter 224b lies between 10 degrees and 270 degrees. The application processor 50 then stitches the initial depth image P1 and the initial depth image P2 into one frame of 360-degree panoramic depth image P12 according to the fields of view of the two light receivers 24, after which the depth information can be used.
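A minimal sketch of this kind of angle-indexed stitching is given below, assuming each initial depth image is a two-dimensional array whose columns are spread evenly over its receiver's angular span; this is a deliberate simplification (the actual projection model of the receivers is not specified here), and all names and values beyond the receiver spans quoted above are illustrative:

```python
import numpy as np

def stitch_by_fov(p1, p1_span, p2, p2_span, columns_per_degree=2):
    """Place two initial depth images into a 360-degree panorama strip.
    p1_span / p2_span: (start_deg, end_deg) of each receiver's field of
    view, measured clockwise as in the coordinate system XOY above."""
    width = 360 * columns_per_degree
    pano = np.full((p1.shape[0], width), np.nan, dtype=np.float32)
    for img, (start, end) in ((p1, p1_span), (p2, p2_span)):
        span = (end - start) % 360 or 360
        # Map each image column to its panorama column by angle.
        for col in range(img.shape[1]):
            angle = (start + span * col / img.shape[1]) % 360
            pano[:, int(angle * columns_per_degree) % width] = img[:, col]
    return pano

# Receiver 24a spans 190..350 degrees, receiver 24b spans 10..170 degrees:
p1 = np.random.rand(4, 320).astype(np.float32)  # stand-in for P1
p2 = np.random.rand(4, 320).astype(np.float32)  # stand-in for P2
p12 = stitch_by_fov(p1, (190, 350), p2, (10, 170))
```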
In the initial depth image that each microprocessor 40 obtains from the laser pulses emitted by the light emitters 22 and received by the light receiver 24 of the corresponding time-of-flight assembly 20, the depth information of each pixel is the distance between the target subject of the corresponding orientation and the light receiver 24 of that orientation. That is, the depth information of each pixel in the initial depth image P1 is the distance between the target subject of the first orientation and the light receiver 24a, and the depth information of each pixel in the initial depth image P2 is the distance between the target subject of the third orientation and the light receiver 24b. In the course of stitching the multiple initial depth images of the multiple orientations into one frame of 360-degree panoramic depth image, the depth information of each pixel in each initial depth image must first be converted into unified depth information, the unified depth information expressing the distance between each target subject of each orientation and one common reference position. After the depth information has been converted into unified depth information, it is convenient for the application processor 50 to stitch the initial depth images according to the unified depth information.
Specifically, a reference coordinate system is selected. The reference coordinate system may be the image coordinate system of the light receiver 24 of one of the orientations, or some other coordinate system may be selected as the reference coordinate system. Taking Fig. 5 as an example, the coordinate system xo-yo-zo is the reference coordinate system. The coordinate system xa-ya-za shown in Fig. 5 is the image coordinate system of the light receiver 24a, and the coordinate system xb-yb-zb is the image coordinate system of the light receiver 24b. The application processor 50 converts the depth information of each pixel in the initial depth image P1 into unified depth information according to the rotation matrix and translation matrix between the coordinate system xa-ya-za and the reference coordinate system xo-yo-zo, and converts the depth information of each pixel in the initial depth image P2 into unified depth information according to the rotation matrix and translation matrix between the coordinate system xb-yb-zb and the reference coordinate system xo-yo-zo.
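This conversion can be sketched as applying a rigid transform to each pixel's back-projected three-dimensional point. The following Python fragment assumes the depth image has already been back-projected into camera-frame points (the intrinsics and back-projection step are omitted) and that the rotation matrix R and translation vector t between the receiver's coordinate system and the reference coordinate system xo-yo-zo are known from calibration; the example calibration values are hypothetical:

```python
import numpy as np

def to_reference_frame(points_cam, R, t):
    """Transform an (N, 3) array of points from a receiver's image
    coordinate system (e.g. xa-ya-za) into the reference coordinate
    system xo-yo-zo:  p_o = R @ p_cam + t."""
    return points_cam @ R.T + t

# Hypothetical calibration for receiver 24b: rotated 180 degrees about
# the vertical axis relative to the reference frame, no translation.
R_b = np.array([[-1.0, 0.0,  0.0],
                [ 0.0, 1.0,  0.0],
                [ 0.0, 0.0, -1.0]])
t_b = np.zeros(3)

pts_b = np.array([[0.1, 0.2, 1.5]])         # one back-projected pixel of P2
print(to_reference_frame(pts_b, R_b, t_b))  # its unified coordinates
```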
Once the depth information conversion is complete, the multiple initial depth images all lie in one unified reference coordinate system, each pixel of each initial depth image having a corresponding coordinate (xo, yo, zo), and the initial depth images can then be stitched by coordinate matching. For example, suppose a pixel Pa in the initial depth image P1 has the coordinate (xo1, yo1, zo1), and a pixel Pb in the initial depth image P2 also has the coordinate (xo1, yo1, zo1). Because Pa and Pb have the same coordinate value in the current reference coordinate system, the pixels Pa and Pb are in fact the same point, and when the initial depth image P1 and the initial depth image P2 are stitched, the pixel Pa must coincide with the pixel Pb. In this way, the application processor 50 can stitch the multiple initial depth images through the matching relationship of the coordinates and obtain the 360-degree panoramic depth image.
It should be noted that stitching the initial depth images by the matching relationship of coordinates requires the resolution of the initial depth images to exceed a preset resolution. It will be appreciated that if the resolution of the initial depth images is low, the accuracy of the coordinates (xo, yo, zo) is correspondingly low; in that case, matching directly by coordinates may leave the points Pa and Pb not actually coincident but separated by an offset whose value exceeds the error bound. If the resolution of the images is high, the accuracy of the coordinates (xo, yo, zo) is correspondingly high; then, when matching directly by coordinates, even if the points Pa and Pb do not coincide exactly and are separated by an offset, the value of the offset remains below the error bound, that is, within the permitted error range, and does not unduly affect the stitching of the initial depth images.
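A sketch of coordinate matching with an explicit error bound follows, assuming the points of both initial depth images have already been converted into the unified coordinates; the tolerance-based nearest-neighbour test, the bound value, and the point values are illustrative assumptions standing in for exact coordinate equality:

```python
import numpy as np

def match_points(pts_a, pts_b, error_bound=0.01):
    """Pair points of P1 with points of P2 whose unified coordinates
    differ by less than error_bound (here in meters); such pairs are
    treated as the same physical point when the images are stitched."""
    matches = []
    for i, pa in enumerate(pts_a):
        offsets = np.linalg.norm(pts_b - pa, axis=1)
        j = int(np.argmin(offsets))
        if offsets[j] < error_bound:   # offset within the error bound
            matches.append((i, j))
    return matches

pts_a = np.array([[1.00, 0.50, 2.00], [0.30, 0.10, 1.20]])
pts_b = np.array([[1.004, 0.498, 2.003]])  # Pa and Pb differ by a small offset
print(match_points(pts_a, pts_b))          # [(0, 0)]
```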
It will be appreciated that the subsequent embodiments may use the above approach to stitch or synthesize two or more initial depth images; this is not described again case by case.
The application processor 50 may also synthesize the two initial depth images with the corresponding two visible light images into a three-dimensional scene image for display for the user to view. For example, with the two visible light images being a visible light image V1 and a visible light image V2, the application processor 50 synthesizes the initial depth image P1 with the visible light image V1 and the initial depth image P2 with the visible light image V2, and then stitches the two synthesized images to obtain one frame of 360-degree three-dimensional scene image. Alternatively, the application processor 50 first stitches the initial depth image P1 and the initial depth image P2 to obtain one frame of 360-degree panoramic depth image and stitches the visible light image V1 and the visible light image V2 to obtain one frame of 360-degree panoramic visible light image, and then synthesizes the panoramic depth image and the panoramic visible light image into a 360-degree three-dimensional scene image.
Referring to Fig. 6, in one embodiment the application processor 50 is configured to identify the target subject from the two initial depth images obtained by the two microprocessors 40 and the two scene images captured by the two camera assemblies 30.

Specifically, when the scene images are infrared light images, the two infrared light images may be an infrared light image I1 and an infrared light image I2, respectively. The application processor 50 identifies the target subject of the first orientation from the initial depth image P1 and the infrared light image I1, and identifies the target subject of the third orientation from the initial depth image P2 and the infrared light image I2. When the scene images are visible light images, the two visible light images are a visible light image V1 and a visible light image V2, respectively. The application processor 50 identifies the target subject of the first orientation from the initial depth image P1 and the visible light image V1, and identifies the target subject of the third orientation from the initial depth image P2 and the visible light image V2.
When the identification of the target subject is face recognition, the application processor 50 achieves higher accuracy by using infrared light images as the scene images. The process by which the application processor 50 performs face recognition from an initial depth image and an infrared light image may be as follows:
First, face detection is performed on the infrared light image to determine a target face region. Because the infrared light image contains the detail of the scene, after the infrared light image has been acquired, face detection can be performed on it to detect whether the infrared light image contains a face. If the infrared light image contains a face, the target face region in which the face is located is extracted from the infrared light image.
Then, liveness detection is performed on the target face region according to the initial depth image. Since each initial depth image corresponds to an infrared light image and contains the depth information of the corresponding infrared light image, the depth information corresponding to the target face region can be obtained from the initial depth image. Further, since a living face is three-dimensional while a face displayed in a picture or on a screen is planar, the depth information of the acquired target face region can be used to judge whether the target face region is three-dimensional or planar, thereby performing liveness detection on the target face region.
If the liveness detection succeeds, the target face attribute parameters corresponding to the target face region are obtained, and face matching is performed on the target face region in the infrared light image according to the target face attribute parameters to obtain a face matching result. The target face attribute parameters are parameters capable of characterizing the attributes of the target face; the target face can be identified and matched according to them. The target face attribute parameters include, but are not limited to, face deflection angle, face luminance parameters, facial feature parameters, skin quality parameters, geometric feature parameters, and the like. The electronic device 100 may store in advance the face attribute parameters used for matching. After the target face attribute parameters have been obtained, they can be compared with the pre-stored face attribute parameters. If the target face attribute parameters match the pre-stored face attribute parameters, the face recognition passes.
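The three steps above can be sketched as a small pipeline. In the Python fragment below, detect_face, extract_attributes, match, and the planarity threshold are hypothetical stand-ins supplied by the caller (the embodiment does not prescribe specific detectors or thresholds); only the overall order, detection, then depth-based liveness, then attribute matching, follows the description:

```python
import numpy as np

def is_live(depth_roi, flatness_threshold_m=0.005):
    """Liveness check: a living face is three-dimensional, so the depth
    inside the face region should vary; a photo or screen is planar."""
    return float(np.nanstd(depth_roi)) > flatness_threshold_m

def recognize_face(ir_image, depth_image, stored_attrs,
                   detect_face, extract_attributes, match):
    region = detect_face(ir_image)                # 1. face detection on IR image
    if region is None:
        return False
    x, y, w, h = region
    if not is_live(depth_image[y:y+h, x:x+w]):    # 2. depth-based liveness
        return False
    attrs = extract_attributes(ir_image, region)  # 3. attribute extraction
    return match(attrs, stored_attrs)             #    and comparison

# Toy usage with stand-in callables:
ok = recognize_face(
    ir_image=np.zeros((8, 8)), depth_image=np.random.rand(8, 8),
    stored_attrs={"id": 1},
    detect_face=lambda img: (0, 0, 8, 8),
    extract_attributes=lambda img, roi: {"id": 1},
    match=lambda a, b: a == b)
print(ok)
```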
It should be pointed out that the specific process by which the application processor 50 performs face recognition from the initial depth image and the infrared light image is not limited to the above; for example, the application processor 50 may also use the initial depth image to assist in detecting the facial contour, thereby improving the face recognition accuracy. The process by which the application processor 50 performs face recognition from an initial depth image and a visible light image is similar to the process using an initial depth image and an infrared light image, and is not described separately here.
Referring to Fig. 6 and Fig. 7, the application processor 50 is further configured such that, when identification of the target subject from the two initial depth images and the two scene images fails, it synthesizes the two initial depth images obtained by the two microprocessors 40 into one frame of merged depth image according to the fields of view of the light receivers 24, synthesizes the two scene images captured by the two camera assemblies 30 into one frame of merged scene image, and identifies the target subject from the merged depth image and the merged scene image.
Specifically, in the embodiment shown in Fig. 6 and Fig. 7, because the field of view of the light receiver 24 of each time-of-flight assembly 20 is limited, it may happen that half of a face lies in the initial depth image P1 and the other half in the initial depth image P2. The application processor 50 therefore synthesizes the initial depth image P1 and the initial depth image P2 into one frame of merged depth image P12 and, correspondingly, synthesizes the infrared light image I1 and the infrared light image I2 (or the visible light image V1 and the visible light image V2) into one frame of merged scene image I12 (or V12), and then identifies the target subject again from the merged depth image P12 and the merged scene image I12 (or V12).
Referring to Fig. 8 and Fig. 9, in one embodiment the application processor 50 is configured to judge, from multiple initial depth images, the change of the distance between the target subject and the electronic device 100.

Specifically, each light emitter 22 can emit laser pulses multiple times and, correspondingly, each light receiver 24 can be exposed multiple times. For example, at a first moment the light emitters of the time-of-flight assembly 20a and of the time-of-flight assembly 20b emit, the light receivers 24a and 24b receive the laser pulses, and the two microprocessors 40 correspondingly obtain an initial depth image P11 and an initial depth image P21; at a second moment the light emitters of the time-of-flight assembly 20a and of the time-of-flight assembly 20b emit, the light receivers 24a and 24b receive the laser pulses, and the two microprocessors 40 correspondingly obtain an initial depth image P12 and an initial depth image P22. The application processor 50 then judges the change of the distance between the target subject of the first orientation and the electronic device 100 from the initial depth image P11 and the initial depth image P12, and judges the change of the distance between the target subject of the third orientation and the electronic device 100 from the initial depth image P21 and the initial depth image P22.

It will be appreciated that, since the initial depth images contain the depth information of the target subject, the application processor 50 can judge, from the depth information at multiple consecutive moments, the change of the distance between the target subject of the corresponding orientation and the electronic device 100.
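A minimal sketch of this judgment follows, assuming the subject's distance in each frame is summarized by the median depth of its region; the summary statistic and the sample values are assumptions, since the embodiment only requires comparing the depth information across consecutive moments:

```python
import numpy as np

def distance_change(depth_t1, depth_t2):
    """Compare the subject distance in two initial depth images of the
    same orientation taken at consecutive moments; a negative result
    means the subject has moved closer to the electronic device 100."""
    d1 = float(np.nanmedian(depth_t1))
    d2 = float(np.nanmedian(depth_t2))
    return d2 - d1

p11 = np.full((4, 4), 2.0)        # first moment: subject about 2.0 m away
p12 = np.full((4, 4), 1.6)        # second moment: about 1.6 m away
print(distance_change(p11, p12))  # -0.4: the distance is decreasing
```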
Referring to Fig. 10, the application processor 50 is further configured such that, when judging the distance change from the multiple initial depth images fails, it synthesizes the two initial depth images obtained by the two microprocessors 40 into one frame of merged depth image according to the fields of view of the light receivers 24, performs this synthesis step repeatedly to obtain multiple consecutive frames of merged depth images, and judges the distance change from the multiple frames of merged depth images.

Specifically, in the embodiment shown in Fig. 10, because the field of view of the light receiver 24 of each time-of-flight assembly 20 is limited, it may happen that half of a face lies in the initial depth image P11 and the other half in the initial depth image P21. The application processor 50 therefore synthesizes the initial depth image P11 and the initial depth image P21 of the first moment into one frame of merged depth image P121 and, correspondingly, synthesizes the initial depth image P12 and the initial depth image P22 of the second moment into one frame of merged depth image P122, and then judges the distance change again from the two merged frames P121 and P122.
Referring again to Fig. 9, when the distance change judged from the multiple initial depth images is a decrease in distance, or when the distance change judged from the multiple frames of merged depth images is a decrease in distance, the application processor 50 may increase the frame rate at which the initial depth images used for judging the distance change are selected from the multiple initial depth images transmitted by at least one microprocessor 40.

It will be appreciated that, when the distance between the target subject and the electronic device 100 decreases, the electronic device 100 cannot predict whether the decrease carries a risk. The application processor 50 therefore increases the frame rate at which the initial depth images used for judging the distance change are selected from the multiple initial depth images transmitted by at least one microprocessor 40, so as to follow the distance change more closely. Specifically, when the distance corresponding to some orientation is judged to decrease, the application processor 50 may increase the frame rate at which the initial depth images used for judging the distance change are selected from the multiple initial depth images transmitted by the microprocessor 40 of that orientation.
For example, at a first moment the two microprocessors 40 respectively obtain an initial depth image P11 and an initial depth image P21; at a second moment they respectively obtain an initial depth image P12 and an initial depth image P22; at a third moment they respectively obtain an initial depth image P13 and an initial depth image P23; and at a fourth moment they respectively obtain an initial depth image P14 and an initial depth image P24.

Normally, the application processor 50 selects the initial depth images P11 and P14 to judge the change of the distance between the target subject of the first orientation and the electronic device 100, and selects the initial depth images P21 and P24 to judge the change of the distance between the target subject of the third orientation and the electronic device 100. The frame rate at which the application processor 50 selects initial depth images in each orientation is one frame for every two frames skipped, that is, one frame out of every three.
When the distance corresponding to the first orientation is judged from the initial depth images P11 and P14 to decrease, the application processor 50 may instead select the initial depth images P11 and P13 to judge the change of the distance between the target subject of the first orientation and the electronic device 100. The frame rate at which the application processor 50 selects initial depth images of the first orientation becomes one frame for every frame skipped, that is, one frame out of every two, while the frame rate of the other orientations remains unchanged, that is, the application processor 50 still selects the initial depth images P21 and P24 to judge the distance change.
When the distance corresponding to the first orientation is judged from the initial depth images P11 and P14 to decrease, and at the same time the distance corresponding to the third orientation is judged from the initial depth images P21 and P24 to decrease, the application processor 50 may select the initial depth images P11 and P13 to judge the change of the distance between the target subject of the first orientation and the electronic device 100, and select the initial depth images P21 and P23 to judge the change of the distance between the target subject of the third orientation and the electronic device 100. The frame rate at which the application processor 50 selects initial depth images of the first orientation and the third orientation becomes one frame for every frame skipped, that is, one frame out of every two.
Of course, when the distance corresponding to any one orientation is judged to decrease, the application processor 50 may also increase the frame rate at which the initial depth images used for judging the distance change are selected from the multiple initial depth images transmitted by every microprocessor 40. That is, when the distance between the target subject of the first orientation and the electronic device 100 is judged from the initial depth images P11 and P14 to decrease, the application processor 50 may select the initial depth images P11 and P13 to judge the change of the distance between the target subject of the first orientation and the electronic device 100, and select the initial depth images P21 and P23 to judge the change of the distance between the target subject of the third orientation and the electronic device 100.
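One way to sketch this frame-rate adaptation is as a per-orientation selection stride over the stream of initial depth images; the stride values follow the example above (every third frame normally, every second frame once the distance is decreasing), while the function itself is only an illustrative assumption:

```python
def select_frames(frames, approaching):
    """frames: initial depth images of one orientation in time order,
    e.g. [P11, P12, P13, P14]. Normally one frame in three is used
    (P11, P14, ...); when the distance is decreasing the stride drops
    to one frame in two (P11, P13, ...)."""
    stride = 2 if approaching else 3
    return frames[::stride]

frames = ["P11", "P12", "P13", "P14"]
print(select_frames(frames, approaching=False))  # ['P11', 'P14']
print(select_frames(frames, approaching=True))   # ['P11', 'P13']
```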
When the distance decreases, the application processor 50 may also judge the distance change in combination with the visible light images or the infrared light images. Specifically, the application processor 50 first identifies the target subject from the visible light images or the infrared light images, and then judges the distance change from the initial depth images of the multiple moments, so as to control the electronic device 100 to perform different operations for different target subjects and different distances. Alternatively, when the distance decreases, the microprocessor 40 may control the corresponding light emitters 22 to increase the frequency at which they emit laser pulses and the light receivers 24 to increase the frequency at which they are exposed, and so on.
It should be noted that the electronic device 100 of this embodiment may also serve as an external terminal, fixedly or detachably mounted outside a portable electronic device such as a mobile phone, a tablet computer, or a laptop computer, or fixedly mounted on a movable object such as a vehicle body (as shown in Fig. 7 and Fig. 8), an unmanned aerial vehicle body, a robot body, or a ship body. In concrete use, when the electronic device 100 synthesizes one frame of panoramic depth image from multiple initial depth images as described above, the panoramic depth image can be used for three-dimensional modeling, simultaneous localization and mapping (SLAM), augmented reality display, and the like. When the electronic device 100 identifies the target subject as described above, it can be applied to face recognition unlocking and payment on portable electronic devices, or to obstacle avoidance for robots, vehicles, unmanned aerial vehicles, ships, and the like. When the electronic device 100 judges the change of the distance between the target subject and the electronic device 100 as described above, it can be applied to automatic driving and object tracking for robots, vehicles, unmanned aerial vehicles, ships, and the like.
Referring to Fig. 2 and Fig. 11, an embodiment of the present application also provides a mobile platform 300. The mobile platform 300 includes a body 10 and a plurality of time-of-flight assemblies 20 arranged on the body 10. The plurality of time-of-flight assemblies 20 are located at a plurality of different orientations of the body 10. Each time-of-flight assembly 20 includes two light emitters 22 and one light receiver 24. The field of view of each light emitter 22 is any value from 80 degrees to 120 degrees, and the field of view of each light receiver 24 is any value from 180 degrees to 200 degrees. The light emitters 22 are configured to emit laser pulses outward from the body 10, and the light receiver 24 is configured to receive the laser pulses, emitted by the corresponding two light emitters 22, that are reflected by the target subject. The light emitters 22 of the plurality of time-of-flight assemblies 20 emit laser pulses simultaneously, and the light receivers 24 of the plurality of time-of-flight assemblies 20 are exposed simultaneously, so as to obtain a panoramic depth image.
Specifically, the body 10 may be a vehicle body, an unmanned aerial vehicle body, a robot body, or a ship body.
Referring to Fig. 11, when the body 10 is a vehicle body, there are two time-of-flight assemblies 20, mounted on two sides of the vehicle body, for example the front and the rear, or the left side and the right side of the vehicle body. The vehicle body can carry the two time-of-flight assemblies 20 as it moves along the road, constructing a 360-degree panoramic depth image along the travel route to serve as a reference map and the like; or it can obtain the initial depth images of two different orientations so as to identify the target subject and judge the change of the distance between the target subject and the mobile platform 300, thereby controlling the vehicle body to accelerate, decelerate, stop, or detour, and realizing unmanned obstacle avoidance. For example, while the vehicle is moving on the road, if the distance between a recognized target subject and the vehicle is decreasing and the target subject is a pit in the road, the vehicle decelerates with a first acceleration; if the distance between a recognized target subject and the vehicle is decreasing and the target subject is a person, the vehicle decelerates with a second acceleration, the absolute value of the first acceleration being smaller than the absolute value of the second acceleration. Performing different operations for different target subjects when the distance decreases in this way makes the vehicle more intelligent.
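The different responses to different recognized subjects can be sketched as a small policy table; the class names and acceleration magnitudes below are illustrative assumptions, the embodiment fixing only that the absolute value of the first acceleration is smaller than that of the second:

```python
# Deceleration (m/s^2, negative = braking) chosen by recognized subject
# class when its distance to the vehicle is decreasing. The pit gets the
# gentler first acceleration, a person the stronger second acceleration.
DECELERATION_BY_SUBJECT = {
    "pit":    -1.0,   # first acceleration (smaller absolute value)
    "person": -3.0,   # second acceleration (larger absolute value)
}

def braking_command(subject_class, distance_decreasing):
    if not distance_decreasing:
        return 0.0                                            # no braking needed
    return DECELERATION_BY_SUBJECT.get(subject_class, -2.0)   # default brake

print(braking_command("pit", True))      # -1.0
print(braking_command("person", True))   # -3.0
```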
Referring to Fig. 12, when the body 10 is an unmanned aerial vehicle body, there are two time-of-flight assemblies 20, mounted on two opposite sides of the unmanned aerial vehicle body, for example the front and rear sides or the left and right sides, or mounted on two opposite sides of a gimbal carried on the unmanned aerial vehicle body. The unmanned aerial vehicle body can carry the multiple time-of-flight assemblies 20 in flight for aerial photography, inspection, and the like; the unmanned aerial vehicle can return the acquired panoramic depth image to a ground control terminal, or perform SLAM directly. The multiple time-of-flight assemblies 20 enable the unmanned aerial vehicle to accelerate, decelerate, stop, avoid obstacles, and track objects.
Referring to Fig. 13, when the body 10 is a robot body, for example of a sweeping robot, there are two time-of-flight assemblies 20, mounted on two opposite sides of the robot body. The robot body can carry the multiple time-of-flight assemblies 20 as it moves around a home, obtaining the initial depth images of multiple different orientations so as to identify the target subject and judge the change of the distance between the target subject and the mobile platform 300, thereby controlling the movement of the robot body and enabling the robot to remove refuse, avoid obstacles, and the like.
Referring to Fig. 14, when the body 10 is a ship body, there are two time-of-flight assemblies 20, mounted on two opposite sides of the ship body. The ship body can carry the time-of-flight assemblies 20 as it moves, obtaining the initial depth images of multiple different orientations so as to identify the target subject accurately even in adverse environments (for example in fog), judge the change of the distance between the target subject and the mobile platform 300, improve the safety of navigation, and the like.
The mobile platform 300 of the embodiment of the present application is a platform capable of moving independently, the multiple time-of-flight assemblies 20 being mounted on the body 10 of the mobile platform 300 to obtain a panoramic depth image. The body of the electronic device 100 of the embodiment of the present application, by contrast, generally cannot move independently; the electronic device 100 may further be mounted on a device capable of moving, similar to the mobile platform 300, thereby helping that device obtain a panoramic depth image.
It should be pointed out that the above descriptions of the body 10, the time-of-flight assemblies 20, the camera assemblies 30, the microprocessors 40, and the application processor 50 of the electronic device 100 apply equally to the mobile platform 300 of the embodiment of the present application, and are not repeated here.
Although the embodiments of the present application have been shown and described above, it is to be understood that the above embodiments are exemplary and should not be construed as limiting the present application. A person of ordinary skill in the art may make changes, modifications, replacements, and variants to the above embodiments within the scope of the present application, and the scope of the present application is defined by the claims and their equivalents.