CN103051705A - Method and device for determining target person and mobile terminal - Google Patents
- Publication number: CN103051705A (application CN201210554578A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
Abstract
The invention discloses a method and device for determining a target person, and a mobile terminal. The method comprises the following steps: acquiring first characteristic information of a person in image data; matching the first characteristic information against pre-acquired second characteristic information of the target person; and determining, according to the matching result, whether the person in the image data is the target person. The method and device overcome the problems of existing person-searching approaches and improve the accuracy and effectiveness of person searching.
Description
Technical field
The present invention relates to the field of communications, and in particular to a method and device for determining a target person, and to a mobile terminal.
Background technology
In real life, there is often a need to search for people: for example, abducted persons, missing relatives, or wanted suspects.
Because information about a sought person reaches only a small audience and comes from few sources, and because the population is highly mobile, finding a person through existing approaches (for example, putting up posters, placing newspaper notices, or broadcasting television appeals) is often very difficult.
No effective solution has yet been proposed for the low efficiency, inconvenience, and inaccuracy of these person-searching approaches in the related art.
Summary of the invention
In view of the low efficiency, inconvenience, and inaccuracy of person-searching approaches in the related art, the present invention provides a method and device for determining a target person, and a mobile terminal, so as to at least solve the above problems.
According to one aspect of the present invention, a method for determining a target person is provided, comprising: acquiring first characteristic information of a person in image data; matching the first characteristic information against pre-acquired second characteristic information of the target person; and determining, according to the matching result, whether the person in the image data is the target person.
Preferably, the first characteristic information comprises first facial information of the person in the image data, and the second characteristic information comprises second facial information of the target person.
Preferably, determining according to the matching result whether the person in the image data is the target person comprises: judging whether the matching degree between the first facial information and the second facial information satisfies a preset condition; and if so, determining that the person in the image data is the target person.
Preferably, the method further comprises: when the person in the image data is determined to be the target person, reporting to a server the first characteristic information of the person in the image data and first location information of the person in the image data.
Preferably, the second characteristic information further comprises second location information of the target person, and matching the first characteristic information against the second characteristic information comprises: matching the first location information against the second location information; judging whether the first location information intersects with the second location information; and if so, matching the first facial information against the second facial information.
Preferably, after judging whether the first location information intersects with the second location information, the method further comprises: determining a matching data set according to the judgment result, wherein the matching data set is the set of target persons whose second location information intersects with the first location information; matching the first facial information against the second facial information then comprises: matching the first facial information against the second facial information of the target persons in the matching data set.
Preferably, acquiring the first characteristic information of the person in the image data comprises: detecting whether newly added image data exists in a storage device and, if so, acquiring the first facial information from the newly added image data; and/or acquiring image data directly through a camera and acquiring the first facial information from the acquired image data.
Preferably, before matching the first characteristic information against the pre-acquired second characteristic information of the target person, the method further comprises: sending a request to the server to request the second characteristic information, and receiving the second characteristic information returned by the server in response to the request; and/or directly receiving the second characteristic information pushed by the server.
Preferably, after determining according to the matching result whether the person in the image data is the target person, the method further comprises: presenting the determination result and/or the matching result to the user.
According to another aspect of the present invention, a device for determining a target person is also provided, comprising: an acquisition module, configured to acquire first characteristic information of a person in image data; a matching module, configured to match the first characteristic information against pre-acquired second characteristic information of the target person; and a determination module, configured to determine, according to the matching result of the matching module, whether the person in the image data is the target person.
Preferably, the acquisition module is configured to acquire first facial information of the person in the image data.
Preferably, the determination module comprises: a first judging unit, configured to judge whether the matching degree between the first facial information and second facial information in the second characteristic information satisfies a preset condition; and a first determining unit, configured to determine, when the judgment result of the first judging unit is yes, that the person in the image data is the target person.
Preferably, the device further comprises: a reporting module, configured to report to a server, when the judgment result of the first judging unit is yes, the first characteristic information of the person in the image data and first location information of the person in the image data.
Preferably, the matching module comprises: a first matching unit, configured to match the first location information against second location information in the second characteristic information; a second judging unit, configured to judge whether the first location information intersects with the second location information; and a second matching unit, configured to match, when the judgment result of the second judging unit is yes, the first facial information against the second facial information.
Preferably, the matching module further comprises: a second determining unit, configured to determine a matching data set according to the judgment result of the second judging unit, wherein the matching data set is the set of target persons whose second location information intersects with the first location information; the second matching unit is then configured to match the first facial information against the second facial information of the target persons in the matching data set.
Preferably, the acquisition module comprises: a first acquiring unit, configured to detect whether newly added image data exists in a storage device and, if so, to acquire the first facial information from the newly added image data; and/or a second acquiring unit, configured to acquire image data directly through a camera and to acquire the first facial information from the acquired image data.
According to a further aspect of the present invention, a mobile terminal is provided, comprising any of the above devices for determining a target person provided by the present invention.
Through the present invention, first characteristic information of a person in image data is acquired, the first characteristic information is matched against pre-acquired second characteristic information of a target person, and whether the person in the image data is the target person is determined according to the matching result. This solves the problems of existing person-searching approaches and improves the accuracy and effectiveness of person searching.
Description of drawings
The accompanying drawings described herein are provided for a further understanding of the present invention and form a part of this application. The illustrative embodiments of the present invention and their descriptions serve to explain the present invention and do not unduly limit it. In the drawings:
Fig. 1 is a schematic diagram of a system for determining a target person according to an embodiment of the present invention;
Fig. 2 is a structural block diagram of a device for determining a target person according to an embodiment of the present invention;
Fig. 3 is a flowchart of a method for determining a target person according to an embodiment of the present invention;
Fig. 4 is a flowchart of a method for determining a target person according to Embodiment One of the present invention; and
Fig. 5 is a flowchart of a method for determining a target person according to Embodiment Two of the present invention.
Embodiment
The present invention is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments. It should be noted that, provided they do not conflict, the embodiments in this application and the features within them may be combined with one another.
To address the problems of existing person-searching approaches, consider that mobile terminals are in widespread use, forming a vast interconnected mobile network, and that the terminal nodes of this network mostly have photographing and positioning functions. In the embodiments of the present invention, using the network formed by mobile terminals to search for people is more convenient, more efficient, and more accurate than traditional approaches.
In the embodiments of the present invention, a database of target persons can be established to store target-person information, for example, image data of each target person and information about locations where the target person may appear. The target-person information is pushed to mobile terminals equipped with cameras and geo-positioning systems; a mobile terminal analyzes the people in the image data it captures against the target-person information and determines, according to the analysis result, whether a person in the captured image data is a target person.
Fig. 1 is a schematic diagram of a system for determining a target person according to an embodiment of the present invention. As shown in Fig. 1, the system includes a server and a mobile terminal. A target-person information database is established on the server to store target-person information. The mobile terminal obtains the target-person information from the server, analyzes it against the information of people in the image data on the mobile terminal, and determines, according to the analysis result, whether a person in that image data is a target person.
The mobile terminal obtains the person-search database information on the server through a wireless connection. Each entry of the person-search database may include characteristic information of the person to be found, together with locations where that person may appear and other relevant information, and is synchronized into a local person-search database. The mobile terminal can be configured to automatically analyze captured photos in the background after they are saved: for example, using facial recognition, geographic positioning, and pattern recognition techniques, it matches the information of people in a photo against the entries in the local person-search database and determines, according to the matching result, whether a person in the image data on the mobile terminal is a target person.
In the embodiments of the present invention, the mobile terminal can send the search result to the server through a wireless connection. Moreover, the mobile terminal can prompt the user with the search result in real time (for example, by sound, vibration, or text and graphics display).
The embodiments of the present invention are described in detail below.
According to an embodiment of the present invention, a device for determining a target person is provided.
Fig. 2 is a structural block diagram of a device for determining a target person according to an embodiment of the present invention. As shown in Fig. 2, the device mainly includes: an acquisition module 10, a matching module 20, and a determination module 30. The acquisition module 10 is configured to acquire first characteristic information of a person in image data; the matching module 20 is connected to the acquisition module 10 and is configured to match the first characteristic information against pre-acquired second characteristic information of a target person; the determination module 30 is connected to the matching module 20 and is configured to determine, according to the matching result of the matching module, whether the person in the image data is the target person.
In the embodiments of the present invention, the acquisition module 10 can acquire first facial information of the person in the image data. The matching module 20 can match the first facial information against second facial information of the target person. The determination module 30 can determine, according to the matching result of the first facial information and the second facial information, whether the person in the image data is the target person. In one implementation of the embodiments of the present invention, the determination module 30 may include: a first judging unit, configured to judge whether the matching degree between the first facial information and the second facial information in the second characteristic information satisfies a preset condition; and a first determining unit, configured to determine, when the judgment result of the first judging unit is yes, that the person in the image data is the target person.
When the person in the image data is the target person, the search result can also be reported to the server. In one implementation of the embodiments of the present invention, the device may further include: a reporting module, configured to report to the server, when the judgment result of the first judging unit is yes, the first characteristic information of the person in the image data and first location information of the person in the image data. Preferably, the reported information may also include the matching result and the like.
The second characteristic information may further include second location information of the target person, which can be information about locations where the target person may appear. In the process of determining the target person, matching can be restricted to the second characteristic information of only those target persons whose location information intersects with the first location information. In one implementation of the embodiments of the present invention, the matching module may include: a first matching unit, configured to match the first location information against the second location information in the second characteristic information; a second judging unit, configured to judge whether the first location information intersects with the second location information; and a second matching unit, configured to match, when the judgment result of the second judging unit is yes, the first facial information against the second facial information.
Further, the matching module may also include: a second determining unit, configured to determine a matching data set according to the judgment result of the second judging unit, where the matching data set is the set of target persons whose second location information intersects with the first location information; the second matching unit is then configured to match the first facial information against the second facial information of the target persons in the matching data set.
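As a minimal sketch of the location-based pre-filter described above (not taken from the patent text), locations can be modeled as sets of coarse region identifiers, so the intersection test reduces to a set intersection. All field names and region codes here are hypothetical.

```python
# Hypothetical sketch: the "second determining unit" builds the matching
# data set by keeping only target persons whose possible locations
# intersect the location where the photo was taken.

def build_matching_data_set(first_location, target_persons):
    """Return the subset of target persons whose second location
    information has a non-empty intersection with first_location."""
    return [
        person for person in target_persons
        if first_location & person["second_location"]  # set intersection
    ]

targets = [
    {"id": "A", "second_location": {"district-3", "district-7"}},
    {"id": "B", "second_location": {"district-9"}},
]

# Photo taken in district-7: only target A survives the location filter.
matched = build_matching_data_set({"district-7"}, targets)
print([p["id"] for p in matched])  # ['A']
```

Face matching then only needs to run against the filtered list, which is the efficiency gain the embodiment describes.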
In the embodiments of the present invention, the acquisition module 10 can acquire image data directly through a camera and acquire the first facial information of the person from the acquired image data. The acquisition module 10 can also detect, in real time or periodically, whether newly added image data exists in a storage device and, if so, acquire the first facial information from the newly added image data. In one implementation of the embodiments of the present invention, the acquisition module may include: a first acquiring unit, configured to detect whether newly added image data exists in the storage device and, if so, to acquire the first facial information from the newly added image data; and/or a second acquiring unit, configured to acquire image data directly through the camera and to acquire the first facial information from the acquired image data.
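The "detect newly added image data" behavior of the first acquiring unit can be sketched as a periodic directory scan; the directory layout and the face-extraction callback are illustrative assumptions, not part of the patent.

```python
# Hypothetical sketch of the first acquiring unit: scan a storage
# directory, pass any image file not seen before to a face-extraction
# callback, and remember what has been seen.
import os
import tempfile

def scan_for_new_images(directory, seen, extract_faces):
    """Call extract_faces on each newly added image; return updated set."""
    current = {f for f in os.listdir(directory)
               if f.lower().endswith((".jpg", ".jpeg", ".png"))}
    for new_file in sorted(current - seen):
        extract_faces(os.path.join(directory, new_file))
    return seen | current

# Demonstration with a temporary directory standing in for the camera
# roll: the first scan reports both photos, a rescan reports none.
with tempfile.TemporaryDirectory() as d:
    for name in ("a.jpg", "b.png"):
        open(os.path.join(d, name), "w").close()
    found = []
    seen = scan_for_new_images(d, set(), found.append)
    seen = scan_for_new_images(d, seen, found.append)  # no new files now
    print(len(found))  # 2
```

In a real terminal this loop would run in the background, as the embodiment describes, rather than being driven manually.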
Before the first characteristic information is matched against the pre-acquired second characteristic information of the target person, the second characteristic information of the target person can be obtained from the server. In one implementation of the embodiments of the present invention, a request can be sent to the server to request the second characteristic information, and the second characteristic information returned by the server in response to the request is received; for example, the user can configure which target-person information to follow, such as receiving only the information of target persons in the user's own region. In another implementation of the embodiments of the present invention, the second characteristic information pushed by the server can be received directly; the server can push the second characteristic information of target persons periodically.
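The pull-with-filter synchronization described above can be sketched as follows. The server is faked as an in-memory list and every record field is a hypothetical stand-in; no real protocol or API is implied.

```python
# Hypothetical sketch: the client asks the "server" only for records
# matching its area of concern and merges them into the local
# person-search database, keyed by record id.

SERVER_RECORDS = [
    {"id": 1, "city": "city-A", "face": "...", "locations": {"district-3"}},
    {"id": 2, "city": "city-B", "face": "...", "locations": {"district-9"}},
]

def pull_second_characteristic_info(local_db, city_filter=None):
    """Merge server records into local_db, optionally restricted to
    one city (the user's configured region of concern)."""
    for rec in SERVER_RECORDS:
        if city_filter is None or rec["city"] == city_filter:
            local_db[rec["id"]] = rec
    return local_db

# A user who only follows city-A ends up with a smaller local database.
local = pull_second_characteristic_info({}, city_filter="city-A")
print(sorted(local))  # [1]
```

The push variant would run the same merge, but triggered by the server rather than by a client request.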
In the embodiments of the present invention, after whether the person in the image data is the target person is determined according to the matching result, the method further includes: presenting the determination result and/or the matching result to the user.
According to an embodiment of the present invention, a mobile terminal is also provided, comprising any of the above devices for determining a target person of the embodiments of the present invention.
According to an embodiment of the present invention, a method for determining a target person is also provided.
Fig. 3 is a flowchart of a method for determining a target person according to an embodiment of the present invention. As shown in Fig. 3, the method mainly includes steps S302 to S306.
Step S302: acquire first characteristic information of a person in image data.
In the embodiments of the present invention, the first characteristic information may include first facial information of the person in the image data. The image data can be acquired directly through a camera, and the first facial information of the person is acquired from the acquired image data; alternatively, whether newly added image data exists in a storage device can be detected in real time or periodically, and if so, the first facial information is acquired from the newly added image data.
Further, the first characteristic information may also include first location information of the person in the image data. For example, the location at the time the image data is captured is obtained through a positioning device, thereby determining the location of the person in the image data.
Step S304: match the first characteristic information against pre-acquired second characteristic information of a target person.
In the embodiments of the present invention, the second characteristic information may include second facial information of the target person, and matching the first characteristic information against the second characteristic information includes: matching the first facial information against the second facial information.
Further, the second characteristic information may also include second location information of the target person; matching the first characteristic information against the second characteristic information then includes: matching the first location information against the second location information; judging whether the first location information intersects with the second location information; and if so, matching the first facial information against the second facial information. This preferred implementation narrows the scope of matching and thereby improves matching efficiency.
In one implementation of the embodiments of the present invention, a matching data set can be determined according to the above judgment result, where the matching data set is the set of target persons whose second location information intersects with the first location information, and the first facial information is matched against the second facial information of the target persons in the matching data set.
Step S306: determine, according to the matching result, whether the person in the image data is the target person.
In the embodiments of the present invention, whether the matching degree between the first facial information and the second facial information satisfies a preset condition can be judged; if so, the person in the image data is determined to be the target person. For example, a matching-degree threshold is set in advance; this threshold can be set on the client side or on the server side. If the matching degree reaches the threshold, the person in the image data is determined to be the target person.
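The preset-condition check in step S306 can be sketched as below. The face "features" are stand-in vectors and cosine similarity is an assumed matching-degree measure; the patent does not prescribe a specific metric or threshold value.

```python
# Illustrative sketch of the threshold test: compute a matching degree
# between two face feature vectors and compare it to a preset threshold.
import math

def matching_degree(first_face, second_face):
    """Cosine similarity, used here as a stand-in matching degree."""
    dot = sum(a * b for a, b in zip(first_face, second_face))
    norm = math.sqrt(sum(a * a for a in first_face)) * \
           math.sqrt(sum(b * b for b in second_face))
    return dot / norm if norm else 0.0

def is_target_person(first_face, second_face, threshold=0.9):
    """The preset condition: matching degree reaches the threshold."""
    return matching_degree(first_face, second_face) >= threshold

print(is_target_person([1.0, 0.0], [1.0, 0.0]))  # True
print(is_target_person([1.0, 0.0], [0.0, 1.0]))  # False
```

Making the threshold a parameter reflects the text's point that it can be configured on either the client side or the server side.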
Further, when the person in the image data is determined to be the target person, the first characteristic information of the person in the image data and the first location information of the person in the image data are reported to the server.
In one implementation of the embodiments of the present invention, before the first characteristic information is matched against the pre-acquired second characteristic information of the target person, a request can be sent to the server to request the second characteristic information, and the second characteristic information returned by the server in response to the request is received. Alternatively, the second characteristic information pushed by the server can be received directly.
After whether the person in the image data is the target person is determined according to the matching result, the method further includes: presenting the determination result and/or the matching result to the user.
Through the embodiments of the present invention, first characteristic information of a person in image data is acquired, the first characteristic information is matched against pre-acquired second characteristic information of a target person, and whether the person in the image data is the target person is determined according to the matching result. This solves the problems of existing person-searching approaches and improves the accuracy and effectiveness of person searching.
The above method is described in detail below through specific examples.
In the embodiments of the present invention, depending on whether the mobile terminal supports positioning, two cases are distinguished: matching with geographic location and matching without geographic location.
Embodiment one
Embodiment One of the present invention is described taking a mobile terminal that supports positioning as an example. In this embodiment, the mobile terminal actively obtains data from the server (periodically or manually), or passively receives data pushed by the server, and adds the data to the local person-search database. Each entry may include a photo of a target person and the geographic locations where that person may appear.
Fig. 4 is a flowchart of a method for determining a target person according to Embodiment One of the present invention. As shown in Fig. 4, the method includes steps S402 to S420.
Steps S402 to S404: the mobile terminal takes a photo and stores the picture.
Step S406: the mobile terminal automatically analyzes the photo and performs face recognition.
Step S408: determine whether a facial image exists in the photo; if so, execute step S410; if not, end.
Step S410: determine the current location through the geographic positioning function.
Step S412: search the person-search database by geographic location.
Step S414: judge whether a matching data set is obtained; if so, execute step S416; if not, end.
Step S416: match the recognized face against the matching data set and determine the similarity.
Step S418: judge whether the similarity satisfies the preset condition; if so, execute step S420; if not, end.
Step S420: report the matching result to the server, and end.
The reported information may include: the photo, the geographic location at the time, the matching information, the phone number of the mobile terminal, and so on.
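The flow of Embodiment One can be sketched end to end as follows. Face detection and face similarity are stubbed out with hypothetical callables, since the patent names no concrete recognition library; all identifiers and the toy database are illustrative assumptions.

```python
def process_photo(photo, current_location, search_db,
                  detect_faces, face_similarity, threshold=0.9):
    """Hypothetical sketch of steps S406-S420: recognize faces, filter
    the database by location, match faces, and report on success."""
    faces = detect_faces(photo)                      # S406-S408
    if not faces:
        return None
    candidates = [t for t in search_db               # S410-S414
                  if current_location & t["locations"]]
    if not candidates:
        return None
    for face in faces:                               # S416-S418
        for target in candidates:
            score = face_similarity(face, target["face"])
            if score >= threshold:
                return {"target": target["id"],      # S420: report payload
                        "location": current_location,
                        "similarity": score}
    return None

# Toy run: one target expected in district-7, with a "detector" and
# "matcher" that simply compare labels.
db = [{"id": "elder-1", "locations": {"district-7"}, "face": "F1"}]
report = process_photo("photo-bytes", {"district-7"}, db,
                       detect_faces=lambda p: ["F1"],
                       face_similarity=lambda a, b: 1.0 if a == b else 0.0)
print(report["target"])  # elder-1
```

Embodiment Two is the same pipeline with the location filter (steps S410 to S414) removed.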
Embodiment two
Embodiment Two of the present invention is described taking a mobile terminal that does not support positioning as an example. In this embodiment, the mobile terminal actively obtains data from the server (periodically or manually), or passively receives data pushed by the server, and adds the data to the local person-search database. Each entry may include a photo of a target person and the geographic locations where that person may appear.
Fig. 5 is a flowchart of a method for determining a target person according to Embodiment Two of the present invention. As shown in Fig. 5, the method includes steps S502 to S514.
Steps S502 to S504: the mobile terminal takes a photo and stores the picture.
Step S506: the mobile terminal automatically analyzes the photo and performs face recognition.
Step S508: determine whether a facial image exists in the photo; if so, execute step S510; if not, end.
Step S510: match the recognized face against the data set in the database and determine the similarity.
Step S512: judge whether the similarity satisfies the preset condition; if so, execute step S514; if not, end.
Step S514: report the matching result to the server, and end.
The reported information may include: the photo, the matching information, the phone number of the mobile terminal, and so on.
A typical application case of the embodiments of the present invention is described below, taking the search for an elderly person who has wandered off as an example.
After the elderly person is found to be missing, the relatives pass the person's basic information to the administrator of the server, a role generally taken on by an administrative department such as a public security organ or a neighborhood committee, or by a community service department.
The elderly person's basic information is entered into the person-search database of the server (it may include the person's name, photo, the place and time of going missing, the family members' contact information, and so on). Because a photo usually only needs to capture basic facial features, a high-resolution photo is not required, so the data volume of each entry is not too large. The database may even replace the picture or photo with another representation of the facial information, which can equally support the pattern-matching operation.
A mobile phone on which the target-person determination device is installed includes a local person-search database. This database can be configured by the user to follow person-search information within a particular scope: for example, only the information of elderly persons missing in the user's city, only person-search information published within the last week, or only the information of wanted criminals with large rewards. This reduces the storage footprint of the local person-search database and speeds up local matching operations. Specific users, for example public security personnel, can choose to follow all person-search information.
The data of the missing elderly person is synchronized, via push, to the mobile phones that have the target-person determination device installed and follow the relevant information, or the phone user manually synchronizes the database, thereby obtaining the missing person's information.
When a phone user who has installed the target-person determination device takes photos at places such as stations, scenic spots, or shopping malls, the phone can automatically schedule the determination device in the background after each photo is taken. Without affecting the user's normal use of the phone, it automatically performs face recognition, geographic location acquisition, information matching, judgment, and related work.
When "suspected target found" is judged, the current phone user can be prompted explicitly. The user can then immediately approach the elderly person to confirm, and report the information to the police or the family members, effectively completing the search. Alternatively, without an explicit prompt, the "suspected target found" result and the relevant information (for example, the geographic location, the original photo, and the phone number of the reporting terminal) can be reported to the server for analysis and confirmation by the police or the family members.
Once the missing elderly person is found, the search task is complete. The entry is then deleted from the server's person-search database and, actively or passively, the deletion is synchronized to the local person-search databases on the mobile phones to prevent false reports.
Obviously, those skilled in the art should be understood that, above-mentioned each module of the present invention or each step can realize with general calculation element, they can concentrate on the single calculation element, perhaps be distributed on the network that a plurality of calculation elements form, alternatively, they can be realized with the executable program code of calculation element, thereby, they can be stored in the storage device and be carried out by calculation element, and in some cases, can carry out step shown or that describe with the order that is different from herein, perhaps they are made into respectively each integrated circuit modules, perhaps a plurality of modules in them or step are made into the single integrated circuit module and realize.Like this, the present invention is not restricted to any specific hardware and software combination.
The above descriptions are only preferred embodiments of the present invention and are not intended to limit the present invention. For those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.
Claims (17)
1. A method for determining a target person, comprising:
acquiring first characteristic information of a person in image data;
matching the first characteristic information with second characteristic information, acquired in advance, of the target person; and
determining, according to the matching result, whether the person in the image data is the target person.
2. The method according to claim 1, wherein the first characteristic information comprises first facial information of the person in the image data, and the second characteristic information comprises second facial information of the target person.
3. The method according to claim 2, wherein determining, according to the matching result, whether the person in the image data is the target person comprises:
judging whether the matching degree between the first facial information and the second facial information satisfies a preset condition; and
if so, determining that the person in the image data is the target person.
4. The method according to claim 3, further comprising:
in the case where the person in the image data is determined to be the target person, sending the first characteristic information of the person in the image data and first position information of the person in the image data to a server.
5. The method according to any one of claims 2 to 4, wherein the second characteristic information further comprises second position information of the target person, and matching the first characteristic information with the second characteristic information comprises:
matching the first position information with the second position information;
judging whether the first position information and the second position information intersect; and
if so, matching the first facial information with the second facial information.
6. The method according to claim 5, wherein
after judging whether the first position information and the second position information intersect, the method further comprises: determining a matching data set according to the judgment result, wherein the matching data set is the set of target persons whose second position information intersects the first position information; and
matching the first facial information with the second facial information comprises: matching the first facial information with the second facial information of the target persons in the matching data set.
7. The method according to any one of claims 1 to 4, wherein acquiring the first characteristic information of the person in the image data comprises:
detecting whether newly added image data exists in a storage device, and if so, acquiring the first facial information from the newly added image data; and/or
acquiring image data directly through a camera, and acquiring the first facial information from the acquired image data.
8. The method according to any one of claims 1 to 4, wherein before matching the first characteristic information with the second characteristic information, acquired in advance, of the target person, the method further comprises:
sending a request to a server to request the second characteristic information, and receiving the second characteristic information returned by the server according to the request; and/or
directly receiving the second characteristic information pushed by the server.
9. The method according to any one of claims 1 to 4, wherein after determining, according to the matching result, whether the person in the image data is the target person, the method further comprises:
displaying the determination result and/or the matching result to a user.
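The claim-7-style detection of newly added image data in a storage device can be sketched as a simple scan of a directory against a set of already-processed files. The function name, the file-extension filter, and the `seen`-set bookkeeping are illustrative assumptions, not details from the claims:

```python
import os


def new_images(storage_dir, seen):
    """Return paths of image files in storage_dir that have not been processed yet.

    `seen` is the set of already-handled filenames; it is updated in place,
    so a second call with the same directory returns nothing new.
    """
    fresh = []
    for name in sorted(os.listdir(storage_dir)):
        if name.lower().endswith((".jpg", ".jpeg", ".png")) and name not in seen:
            fresh.append(os.path.join(storage_dir, name))
            seen.add(name)
    return fresh
```

Each path returned would then be handed to the face-recognition step to extract the first facial information.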
10. A device for determining a target person, comprising:
an acquisition module, configured to acquire first characteristic information of a person in image data;
a matching module, configured to match the first characteristic information with second characteristic information, acquired in advance, of the target person; and
a determination module, configured to determine, according to the matching result of the matching module, whether the person in the image data is the target person.
11. The device according to claim 10, wherein the acquisition module is configured to acquire first facial information of the person in the image data.
12. The device according to claim 11, wherein the determination module comprises:
a first judging unit, configured to judge whether the matching degree between the first facial information and second facial information in the second characteristic information satisfies a preset condition; and
a first determining unit, configured to determine, in the case where the judgment result of the first judging unit is positive, that the person in the image data is the target person.
13. The device according to claim 12, further comprising:
a reporting module, configured to send, in the case where the judgment result of the first judging unit is positive, the first characteristic information of the person in the image data and first position information of the person in the image data to a server.
14. The device according to any one of claims 11 to 13, wherein the matching module comprises:
a first matching unit, configured to match the first position information with second position information in the second characteristic information;
a second judging unit, configured to judge whether the first position information and the second position information intersect; and
a second matching unit, configured to match, in the case where the judgment result of the second judging unit is positive, the first facial information with the second facial information.
15. The device according to claim 14, wherein
the matching module further comprises a second determining unit, configured to determine a matching data set according to the judgment result of the second judging unit, wherein the matching data set is the set of target persons whose second position information intersects the first position information; and
the second matching unit is configured to match the first facial information with the second facial information of the target persons in the matching data set.
16. The device according to any one of claims 10 to 13, wherein the acquisition module comprises:
a first acquiring unit, configured to detect whether newly added image data exists in a storage device and, if so, acquire the first facial information from the newly added image data; and/or
a second acquiring unit, configured to acquire image data directly through a camera and acquire the first facial information from the acquired image data.
17. A mobile terminal, comprising the device for determining a target person according to any one of claims 10 to 16.
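The acquisition/matching/determination module structure of claims 10 to 16 can be sketched as a minimal composition of three objects. The class names, the pass-through feature extraction, and the equality-based stand-in for the matching degree are illustrative assumptions only, not an implementation from the patent:

```python
class AcquisitionModule:
    """Acquires first characteristic (facial) information from image data."""

    def acquire(self, image_data):
        # Stand-in: treat the image data itself as the feature vector.
        return image_data


class MatchingModule:
    """Matches first characteristic information against pre-acquired targets."""

    def __init__(self, targets):
        self.targets = targets

    def match(self, features):
        # Toy matching degree: exact equality stands in for face comparison.
        return [t for t in self.targets if t["face"] == features]


class DeterminationModule:
    """Determines from the matching result whether the person is a target."""

    def determine(self, matches):
        return matches[0] if matches else None


class TargetPersonDevice:
    """Claim-10-style composition of the three modules."""

    def __init__(self, targets):
        self.acq = AcquisitionModule()
        self.mat = MatchingModule(targets)
        self.det = DeterminationModule()

    def run(self, image_data):
        return self.det.determine(self.mat.match(self.acq.acquire(image_data)))
```

A mobile terminal per claim 17 would simply host such a device and feed it newly taken or newly stored photographs.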
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2012105545782A CN103051705A (en) | 2012-12-19 | 2012-12-19 | Method and device for determining target person and mobile terminal |
PCT/CN2013/078185 WO2013182101A1 (en) | 2012-12-19 | 2013-06-27 | Method, device and mobile terminal for determining target person |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2012105545782A CN103051705A (en) | 2012-12-19 | 2012-12-19 | Method and device for determining target person and mobile terminal |
Publications (1)
Publication Number | Publication Date |
---|---|
CN103051705A true CN103051705A (en) | 2013-04-17 |
Family
ID=48064199
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2012105545782A Pending CN103051705A (en) | 2012-12-19 | 2012-12-19 | Method and device for determining target person and mobile terminal |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN103051705A (en) |
WO (1) | WO2013182101A1 (en) |
Cited By (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2013182101A1 (en) * | 2012-12-19 | 2013-12-12 | 中兴通讯股份有限公司 | Method, device and mobile terminal for determining target person |
CN103632141A (en) * | 2013-11-28 | 2014-03-12 | 小米科技有限责任公司 | Method, device and terminal equipment for figure identifying |
CN103744895A (en) * | 2013-12-24 | 2014-04-23 | 深圳先进技术研究院 | Method and device for obtaining resident identity information |
CN103945001A (en) * | 2014-05-05 | 2014-07-23 | 百度在线网络技术(北京)有限公司 | Picture sharing method and device |
CN104580121A (en) * | 2013-10-28 | 2015-04-29 | 腾讯科技(深圳)有限公司 | People search/people information matching and pushing method, system, client and server |
CN104901867A (en) * | 2015-04-30 | 2015-09-09 | 广东欧珀移动通信有限公司 | Message interaction method and related device and communication system |
CN105222774A (en) * | 2015-10-22 | 2016-01-06 | 广东欧珀移动通信有限公司 | A kind of indoor orientation method and user terminal |
CN105869015A (en) * | 2016-03-28 | 2016-08-17 | 联想(北京)有限公司 | Information processing method and system |
CN106375380A (en) * | 2016-08-27 | 2017-02-01 | 蔡璟 | An intelligent big data management system for searching lost objects based on lost information |
CN106528864A (en) * | 2016-11-30 | 2017-03-22 | 天脉聚源(北京)科技有限公司 | Intelligent transportation method and device |
CN106681000A (en) * | 2016-11-22 | 2017-05-17 | 宇龙计算机通信科技(深圳)有限公司 | Augmented reality registration device and method thereof |
CN106897726A (en) * | 2015-12-21 | 2017-06-27 | 北京奇虎科技有限公司 | The finding method and device of Missing Persons |
WO2017117879A1 (en) * | 2016-01-08 | 2017-07-13 | 中兴通讯股份有限公司 | Personal identification processing method, apparatus and system |
CN107172198A (en) * | 2017-06-27 | 2017-09-15 | 联想(北京)有限公司 | A kind of information processing method, apparatus and system |
CN107221151A (en) * | 2016-03-21 | 2017-09-29 | 滴滴(中国)科技有限公司 | Order driver based on image recognition recognizes the method and device of passenger |
CN107278369A (en) * | 2016-12-26 | 2017-10-20 | 深圳前海达闼云端智能科技有限公司 | Method, device and the communication system of people finder |
CN107295294A (en) * | 2016-03-30 | 2017-10-24 | 杭州海康威视数字技术股份有限公司 | A kind of intelligent looking-for-person method, apparatus and system |
WO2018010652A1 (en) * | 2016-07-12 | 2018-01-18 | 腾讯科技(深圳)有限公司 | Callback notification method in image identification, server, and computer readable storage medium |
CN108038468A (en) * | 2017-12-26 | 2018-05-15 | 北斗七星(重庆)物联网技术有限公司 | A kind of security terminal based on recognition of face |
CN108039007A (en) * | 2017-12-26 | 2018-05-15 | 北斗七星(重庆)物联网技术有限公司 | A kind of safety protection method and device |
CN108064388A (en) * | 2017-11-16 | 2018-05-22 | 深圳前海达闼云端智能科技有限公司 | Personage's method for searching, device, terminal and cloud server |
CN110223493A (en) * | 2019-05-29 | 2019-09-10 | 李成 | A kind of wander away personnel's mutual assistance searching system and method based on big data |
CN110555876A (en) * | 2018-05-30 | 2019-12-10 | 百度在线网络技术(北京)有限公司 | Method and apparatus for determining position |
CN113990104A (en) * | 2021-10-19 | 2022-01-28 | 马瑞利汽车零部件(芜湖)有限公司 | System and method for recognizing external environment by vehicle |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015151155A1 (en) | 2014-03-31 | 2015-10-08 | 株式会社日立国際電気 | Personal safety verification system and similarity search method for data encrypted for confidentiality |
CN111723618A (en) * | 2019-03-21 | 2020-09-29 | 浙江莲荷科技有限公司 | Information processing method and device |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002314984A (en) * | 2001-04-09 | 2002-10-25 | Fuji Photo Film Co Ltd | Monitoring camera system |
CN101093542A (en) * | 2006-02-15 | 2007-12-26 | 索尼株式会社 | Inquiry system, imaging device, inquiry device, information processing method, and program thereof |
CN102521621A (en) * | 2011-12-16 | 2012-06-27 | 上海合合信息科技发展有限公司 | Method for acquiring information based on image coupling and geographical position information and system thereof |
CN103186590A (en) * | 2011-12-30 | 2013-07-03 | 牟颖 | A method of obtaining the identity information of fugitive wanted persons through mobile phones |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101989300A (en) * | 2010-10-15 | 2011-03-23 | 江苏省莱科信息技术有限公司 | Missing person seeking system and implementation method for seeking missing person |
CN103051705A (en) * | 2012-12-19 | 2013-04-17 | 中兴通讯股份有限公司 | Method and device for determining target person and mobile terminal |
- 2012-12-19 CN CN2012105545782A patent/CN103051705A/en active Pending
- 2013-06-27 WO PCT/CN2013/078185 patent/WO2013182101A1/en active Application Filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002314984A (en) * | 2001-04-09 | 2002-10-25 | Fuji Photo Film Co Ltd | Monitoring camera system |
CN101093542A (en) * | 2006-02-15 | 2007-12-26 | 索尼株式会社 | Inquiry system, imaging device, inquiry device, information processing method, and program thereof |
CN102521621A (en) * | 2011-12-16 | 2012-06-27 | 上海合合信息科技发展有限公司 | Method for acquiring information based on image coupling and geographical position information and system thereof |
CN103186590A (en) * | 2011-12-30 | 2013-07-03 | 牟颖 | A method of obtaining the identity information of fugitive wanted persons through mobile phones |
Cited By (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2013182101A1 (en) * | 2012-12-19 | 2013-12-12 | 中兴通讯股份有限公司 | Method, device and mobile terminal for determining target person |
CN104580121B (en) * | 2013-10-28 | 2019-03-15 | 腾讯科技(深圳)有限公司 | Missing/personal information matching push method, system, client and server |
CN104580121A (en) * | 2013-10-28 | 2015-04-29 | 腾讯科技(深圳)有限公司 | People search/people information matching and pushing method, system, client and server |
CN103632141A (en) * | 2013-11-28 | 2014-03-12 | 小米科技有限责任公司 | Method, device and terminal equipment for figure identifying |
CN103744895A (en) * | 2013-12-24 | 2014-04-23 | 深圳先进技术研究院 | Method and device for obtaining resident identity information |
CN103945001A (en) * | 2014-05-05 | 2014-07-23 | 百度在线网络技术(北京)有限公司 | Picture sharing method and device |
CN104901867B (en) * | 2015-04-30 | 2017-11-14 | 广东欧珀移动通信有限公司 | Message interaction method and related device and communication system |
CN104901867A (en) * | 2015-04-30 | 2015-09-09 | 广东欧珀移动通信有限公司 | Message interaction method and related device and communication system |
CN105222774A (en) * | 2015-10-22 | 2016-01-06 | 广东欧珀移动通信有限公司 | A kind of indoor orientation method and user terminal |
CN105222774B (en) * | 2015-10-22 | 2019-04-16 | Oppo广东移动通信有限公司 | A kind of indoor orientation method and user terminal |
CN106897726A (en) * | 2015-12-21 | 2017-06-27 | 北京奇虎科技有限公司 | The finding method and device of Missing Persons |
CN106960172A (en) * | 2016-01-08 | 2017-07-18 | 中兴通讯股份有限公司 | Personal identification processing method, apparatus and system |
WO2017117879A1 (en) * | 2016-01-08 | 2017-07-13 | 中兴通讯股份有限公司 | Personal identification processing method, apparatus and system |
CN107221151A (en) * | 2016-03-21 | 2017-09-29 | 滴滴(中国)科技有限公司 | Order driver based on image recognition recognizes the method and device of passenger |
CN105869015A (en) * | 2016-03-28 | 2016-08-17 | 联想(北京)有限公司 | Information processing method and system |
CN107295294A (en) * | 2016-03-30 | 2017-10-24 | 杭州海康威视数字技术股份有限公司 | A kind of intelligent looking-for-person method, apparatus and system |
WO2018010652A1 (en) * | 2016-07-12 | 2018-01-18 | 腾讯科技(深圳)有限公司 | Callback notification method in image identification, server, and computer readable storage medium |
CN106375380A (en) * | 2016-08-27 | 2017-02-01 | 蔡璟 | An intelligent big data management system for searching lost objects based on lost information |
CN106681000A (en) * | 2016-11-22 | 2017-05-17 | 宇龙计算机通信科技(深圳)有限公司 | Augmented reality registration device and method thereof |
CN106528864A (en) * | 2016-11-30 | 2017-03-22 | 天脉聚源(北京)科技有限公司 | Intelligent transportation method and device |
CN107278369A (en) * | 2016-12-26 | 2017-10-20 | 深圳前海达闼云端智能科技有限公司 | Method, device and the communication system of people finder |
CN107278369B (en) * | 2016-12-26 | 2020-10-27 | 深圳前海达闼云端智能科技有限公司 | Personnel search method, device and communication system |
CN107172198A (en) * | 2017-06-27 | 2017-09-15 | 联想(北京)有限公司 | A kind of information processing method, apparatus and system |
CN108064388A (en) * | 2017-11-16 | 2018-05-22 | 深圳前海达闼云端智能科技有限公司 | Personage's method for searching, device, terminal and cloud server |
CN108039007A (en) * | 2017-12-26 | 2018-05-15 | 北斗七星(重庆)物联网技术有限公司 | A kind of safety protection method and device |
CN108038468A (en) * | 2017-12-26 | 2018-05-15 | 北斗七星(重庆)物联网技术有限公司 | A kind of security terminal based on recognition of face |
CN110555876A (en) * | 2018-05-30 | 2019-12-10 | 百度在线网络技术(北京)有限公司 | Method and apparatus for determining position |
CN110555876B (en) * | 2018-05-30 | 2022-05-03 | 百度在线网络技术(北京)有限公司 | Method and apparatus for determining position |
CN110223493A (en) * | 2019-05-29 | 2019-09-10 | 李成 | A kind of wander away personnel's mutual assistance searching system and method based on big data |
CN113990104A (en) * | 2021-10-19 | 2022-01-28 | 马瑞利汽车零部件(芜湖)有限公司 | System and method for recognizing external environment by vehicle |
Also Published As
Publication number | Publication date |
---|---|
WO2013182101A1 (en) | 2013-12-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103051705A (en) | Method and device for determining target person and mobile terminal | |
US7373109B2 (en) | System and method for registering attendance of entities associated with content creation | |
US8392957B2 (en) | Automatic content tagging, such as tagging digital images via a wireless cellular network using metadata and facial recognition | |
US9485404B2 (en) | Timing system and method with integrated event participant tracking management services | |
US8185596B2 (en) | Location-based communication method and system | |
US20160191434A1 (en) | System and method for improved capture, storage, search, selection and delivery of images across a communications network | |
US20110256886A1 (en) | System and method for providing automatic location-based imaging using mobile and stationary cameras | |
CN109543566B (en) | Information processing method and device, electronic equipment and storage medium | |
US20110115915A1 (en) | System and method for providing automatic location-based imaging | |
US20200029173A1 (en) | Method for recording attendance using bluetooth enabled mobile devices | |
CN101535996A (en) | Method and apparatus for identifying an object captured by a digital image | |
US9122910B2 (en) | Method, apparatus, and system for friend recommendations | |
WO2014166133A1 (en) | Method, apparatus, and system for friend recommendations | |
US20080273087A1 (en) | Method for gathering and storing surveillance information | |
CN103347032A (en) | Method and system for making friends | |
US8768377B2 (en) | Portable electronic device and method of providing location-based information associated with an image | |
GB2517944A (en) | Locating objects using images from portable devices | |
KR101729206B1 (en) | System and method for image sharing | |
CN109151733B (en) | A method and device for locating a criminal suspect | |
US20200026869A1 (en) | Systems and methods for identification of a marker in a graphical object | |
JP2007086902A (en) | Content providing apparatus, content providing method, and content providing processing program | |
EP4020939B1 (en) | Evaluating ip location on a client device | |
KR100853379B1 (en) | Location based image file conversion service method and service server | |
Liu et al. | Seva: Sensor-enhanced video annotation | |
JP2020009087A (en) | Information collection server, information collection system, and information collection method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20130417 |