CN114330523A - Classification method, classification device, classification equipment and storage medium
- Publication number: CN114330523A
- Application number: CN202111588053.6A
- Authority: CN (China)
- Prior art keywords: vector, label, class cluster, name, positioning
- Legal status: Pending
Landscapes
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The present disclosure provides a classification method, apparatus, device, and storage medium, relating to the field of computer technology and in particular to big data, data mining, machine learning, and the like. The specific implementation is as follows: a feature vector of an object is obtained according to the position information of the object appearing in a target area, and clustering training is performed on the feature vector of the object to obtain the class cluster to which the object belongs. The object can thus be accurately classified using the position information of the object appearing in the target area.
Description
Technical Field
The present disclosure relates to the field of computer technology, and more particularly, to the fields of big data, data mining, machine learning, and the like.
Background
When labeling users of various platforms, many types of labels rely on manually collected label samples, so the collection process takes a long time and the labor cost is high.
Disclosure of Invention
The present disclosure provides a classification method, apparatus, device, and storage medium.
According to an aspect of the present disclosure, there is provided a classification method including:
obtaining a characteristic vector of an object according to the position information of the object appearing in the target area;
and carrying out clustering training on the characteristic vector of the object to obtain a class cluster to which the object belongs.
According to another aspect of the present disclosure, there is provided a classification apparatus including:
the characteristic vector module is used for obtaining the characteristic vector of the object according to the position information of the object appearing in the target area;
and the clustering training module is used for carrying out clustering training on the characteristic vector of the object to obtain a cluster to which the object belongs.
According to another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the method of any of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any of the embodiments of the present disclosure.
According to another aspect of the present disclosure, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the method of any of the embodiments of the present disclosure.
The object can be accurately classified by utilizing the position information of the object appearing in the target area.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic flow diagram of a classification method according to an embodiment of the present disclosure;
FIG. 2 is a schematic flow chart diagram of a classification method according to another embodiment of the present disclosure;
FIG. 3 is a schematic flow chart diagram of a classification method according to another embodiment of the present disclosure;
FIG. 4 is a schematic flow chart diagram of a classification method according to another embodiment of the present disclosure;
FIG. 5 is a schematic flow chart diagram of a classification method according to another embodiment of the present disclosure;
FIG. 6 is a schematic structural diagram of a classification apparatus according to an embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of a classification apparatus according to another embodiment of the present disclosure;
FIG. 8 is a schematic structural diagram of a classification apparatus according to another embodiment of the present disclosure;
FIG. 9 is a schematic flow chart of feature generation in an embodiment in accordance with the present disclosure;
FIG. 10a is a schematic diagram of generating a location name vector in accordance with an embodiment of the present disclosure;
FIG. 10b is a schematic diagram of generating a feature vector according to an embodiment of the present disclosure;
FIG. 10c is a schematic diagram of an example of vector stitching according to an embodiment of the present disclosure;
FIG. 10d is a schematic diagram of another example of vector stitching in an embodiment in accordance with the present disclosure;
FIG. 11 is a schematic flow chart of category classification according to an embodiment of the present disclosure;
FIG. 12 is a schematic flow chart diagram of cluster training according to an embodiment of the present disclosure;
fig. 13 is a block diagram of an electronic device for implementing a classification method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 is a flow chart diagram of a classification method according to an embodiment of the present disclosure. The method can comprise the following steps:
s101, obtaining a characteristic vector of an object according to position information of the object appearing in a target area;
and S102, carrying out clustering training on the characteristic vector of the object to obtain a class cluster to which the object belongs.
In the embodiments of the present disclosure, the target area may include an area in which objects need to be classified. The specific range of the area can be selected manually, or can be determined automatically according to the characteristics of the area. For example, if the target area is a school, a mall, a dining hall, or the like, the location range covered by the target area may be determined automatically using an application such as a map. If each object appearing in the target area needs to be classified, the position information of each object appearing in the target area can be acquired through a positioning service such as a map application or a communications carrier. The object may include a user who is able to use a location service, for example a user of an application or network platform that can use location services. An object may have a plurality of pieces of position information within the target area, for example because the object appears at different positions within the target area at different times, or because the object is positioned at the same position a plurality of times. Word segmentation and other processing are performed on each piece of position information of the object to obtain a feature vector of the object. If there are multiple objects, a feature vector can be obtained for each of them. Then, clustering training is performed on the feature vectors of the one or more objects to obtain the class cluster to which each object belongs.
The classification method of the embodiment of the disclosure can accurately classify the object by using the position information of the object appearing in the target area.
Fig. 2 is a flow chart diagram of a classification method according to another embodiment of the present disclosure. The classification method may include one or more features of the method embodiments described above. In one embodiment, the location information of the object includes a location name of the object, and S101 obtains a feature vector of the object according to the location information of the object appearing in the target area, including:
s201, obtaining a positioning name vector of the object according to the positioning name of the object;
s202, obtaining a feature vector of the object according to the positioning name vector of the object in the target area within the first time range.
In the embodiment of the present disclosure, the location information of the object may include a location name of the object. If an object is located at a Point of Interest (POI) or a relatively fixed location within a target area, the name of the POI or the relatively fixed location, for example, the name of a building, a sign, etc., may be used as the location name of the object. Each location name may be converted to a location name vector. If an object has a plurality of location names in the target area in the first time range, a feature vector of the object can be obtained based on a location name vector obtained by converting the plurality of location names.
In the embodiment of the present disclosure, the first time range may be flexibly selected according to the requirements of the actual application scenario, for example, one year, half year, 1 month, 1 day, and the like.
In the embodiment of the disclosure, the location names of the object appearing in the target area within the first time range can be used to obtain positioning name vectors and then the feature vector of the object, which helps classify the object accurately in subsequent steps and improves classification efficiency.
Fig. 3 is a flow chart diagram of a classification method according to another embodiment of the present disclosure. The classification method may include one or more features of the method embodiments described above. In one embodiment, the step S201 of obtaining a location name vector of the object according to the location name of the object includes:
s301, summing and averaging word vectors of all the participles of the positioning name of the object to obtain the positioning name vector of the object.
In embodiments of the present disclosure, a localization name may be segmented into one or more participles, each participle having a corresponding word vector. And summing and averaging word vectors of one or more participles of a certain positioning name to obtain a positioning name vector corresponding to the positioning name.
For example, a certain location name "first canteen" may be split into "first" and "canteen," where "first" corresponds to word vector 1 and "canteen" corresponds to word vector 2, and averaging the sum of word vector 1 and word vector 2 may result in the location name vector for "first canteen".
For another example, a location name "XX second library" may be split into "XX" and "second" and "libraries", where "XX" corresponds to word vector 1, "second" corresponds to word vector 2, and "library" corresponds to word vector 3, and summing and averaging word vector 1, word vector 2, and word vector 3 may result in the location name vector of "XX second library".
In the embodiment of the disclosure, by segmenting the positioning name and summing and averaging the word vectors of the word segments, a more accurate positioning name vector can be obtained. Using this accurate positioning name vector of the object to derive the feature vector helps classify the object more accurately and improves classification efficiency.
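As an illustration of this step, the following is a minimal Python sketch. The word segmenter `segment()` and the pre-trained word embedding lookup `word_vectors` are hypothetical helpers, not part of the disclosure.

```python
# Minimal sketch of S301/S401: average the word vectors of the word segments of a
# positioning name. `segment` and `word_vectors` are assumed, illustrative helpers.
import numpy as np

def location_name_vector(name, segment, word_vectors):
    """Sum and average the word vectors of all word segments of a positioning name."""
    segments = segment(name)                      # e.g. "first canteen" -> ["first", "canteen"]
    vectors = [word_vectors[w] for w in segments]
    return np.mean(vectors, axis=0)               # sum of the word vectors divided by their count
```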
In one embodiment, the first time range includes a plurality of time periods, and the step S202 of obtaining the feature vector of the object according to the location name vector of the object appearing in the target area in the first time range includes:
s302, summing and averaging a plurality of positioning name vectors of the object included in each time period to obtain a sub-vector of the object corresponding to each time period;
and S303, splicing the sub-vectors of the object corresponding to the time periods to obtain the characteristic vector of the object.
In the disclosed embodiments, the first time range may be divided into a plurality of time periods. For example, if the first time range is one day, 1 to 2 hours may be selected as one time period in the morning, at noon, and in the evening. As another example, if the first time range is one week, each day may be taken as a time period. The embodiment of the disclosure does not specifically limit the specific length of the first time range, nor the specific manner of dividing the first time range into a plurality of time periods, and can be flexibly selected according to the requirements of the actual application scenario.
In this embodiment of the present disclosure, a plurality of location name vectors of the object included in each time period of the first time range may be obtained, and the location name vectors are summed and averaged to obtain a sub-vector of the object corresponding to each time period. And then splicing the sub-vectors of the object corresponding to all time periods in the first time range to obtain the characteristic vector of the object.
For example, a plurality of location name vectors of an object are obtained at two time periods in a day. In the time period 1, summing and averaging a positioning name vector 1, a positioning name vector 2 and a positioning name vector 3 of the object to obtain a sub-vector 1; in the time period 2, the positioning name vector 3, the positioning name vector 4 and the positioning name vector 5 of the object are summed and averaged to obtain a sub-vector 2, and then the sub-vector 1 and the sub-vector 2 can be used for splicing to obtain a feature vector of the object.
In the embodiment of the disclosure, by summing and averaging the positioning name vectors of the object within each time period and then splicing the resulting sub-vectors, a more accurate feature vector of the object is obtained, so the object can be classified more accurately and classification efficiency is improved.
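A minimal sketch of the per-period averaging and splicing follows; the function names and the list-of-lists input layout are illustrative assumptions rather than the disclosed implementation.

```python
# Sketch of S302/S303: average the positioning name vectors within each time period to get
# per-period sub-vectors, then splice the sub-vectors into the object's feature vector.
import numpy as np

def period_sub_vector(name_vectors):
    """Sum and average the positioning name vectors observed in one time period."""
    return np.mean(np.stack(name_vectors), axis=0)

def object_feature_vector(name_vectors_by_period):
    """Splice the per-period sub-vectors into the feature vector of the object."""
    sub_vectors = [period_sub_vector(v) for v in name_vectors_by_period]
    return np.concatenate(sub_vectors)

# e.g. feature = object_feature_vector([[v1, v2, v3], [v3, v4, v5]])  # two time periods
```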
Fig. 4 is a flow chart diagram of a classification method according to another embodiment of the present disclosure. The classification method may include one or more features of the method embodiments described above. In one embodiment, the step S201 of obtaining a location name vector of the object according to the location name of the object includes:
s401, summing and averaging word vectors of all the participles of the positioning name of the object to obtain the positioning name vector of the object.
In one embodiment, the first time range includes a plurality of time periods, and S202 obtains a feature vector of the object according to a plurality of location name vectors of the object in the first time range, further including:
s402, summing and averaging a plurality of positioning name vectors of the object included in each time period to obtain a sub-vector of the object corresponding to each time period;
and S403, splicing the sub-vectors of the object corresponding to the time periods with the age characteristics of the object to obtain the characteristic vector of the object.
In this embodiment, S401 is the same as S301 of the previous embodiment, and S402 is the same as S302 of the previous embodiment, which can refer to the related description of the previous embodiment, and is not repeated herein.
In S403, the sub-vectors of the object corresponding to the multiple time periods included in the first time range may be spliced with the age characteristic of the object to obtain a feature vector that takes the age characteristic of the object into account. In this way, the age characteristic and the position information are combined, and objects in different age ranges can be classified accurately.
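A short sketch of appending the age characteristic to the spliced sub-vectors is given below; representing the age as a single scalar feature is an assumption of this example.

```python
# Sketch of S403: splice the per-period sub-vectors and append the age characteristic.
import numpy as np

def feature_vector_with_age(sub_vectors, age):
    """sub_vectors: list of per-period sub-vectors; age: scalar age characteristic."""
    return np.concatenate(sub_vectors + [np.array([float(age)])])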
In one embodiment, the step S102 of performing cluster training on the feature vector of the object to obtain a cluster to which the object belongs includes: s404, performing clustering training on the feature vector of the object by adopting a Gaussian mixture model to obtain a class cluster to which the object belongs.
In the disclosed embodiment, a Gaussian Mixture Model (GMM) is a clustering algorithm. The Gaussian mixture model may use the Gaussian distribution as a parametric model and is trained using the Expectation Maximization (EM) algorithm. The feature vector of the object is input into the Gaussian mixture model, and the Gaussian mixture model outputs the class cluster to which the object belongs. Specifically, the output of the Gaussian mixture model may include the identifier of an object and the identifier of the class cluster corresponding to that object. The identifier of the object may include an identifier of the user, such as one or more of a username, a nickname, a registration number, etc., of a user of an application or network platform. For example, object ID1 corresponds to class cluster A and object ID2 corresponds to class cluster B. Through the Gaussian mixture model, the class cluster to which the object belongs can be obtained simply and quickly, a large amount of data can be processed quickly, and processing efficiency is improved.
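As one possible realization of this step, the following sketch uses scikit-learn's `GaussianMixture`, which is trained with the EM algorithm; the number of class clusters and the feature-matrix layout are illustrative choices, not requirements of the disclosure.

```python
# Sketch of S404: cluster object feature vectors with a Gaussian mixture model.
import numpy as np
from sklearn.mixture import GaussianMixture

def cluster_objects(object_ids, feature_matrix, n_clusters=4):
    """Return a mapping from object identifier to the class cluster it belongs to."""
    gmm = GaussianMixture(n_components=n_clusters, covariance_type="full", random_state=0)
    cluster_ids = gmm.fit_predict(feature_matrix)       # EM training + hard cluster assignment
    return dict(zip(object_ids, cluster_ids))

# e.g. assignments = cluster_objects(["ID1", "ID2"], np.stack([vec1, vec2]))
```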
Fig. 5 is a flow chart diagram of a classification method according to another embodiment of the present disclosure. The classification method may include one or more features of the method embodiments described above. In one embodiment, the method further comprises: s501, sampling and analyzing the class cluster to which the object belongs to obtain the label of the object. In the embodiment of the present disclosure, after the class cluster to which each object belongs is determined, sampling analysis may be performed on the class clusters. For example, an object (the identity may be object ID1) belongs to class cluster A. For the class cluster A, 1000 objects are included (the identification may include object ID1, object ID2, object ID3, etc.). 100 objects are sampled from 1000 objects in the class cluster A for analysis, and the class cluster corresponding label to which the object belongs can be determined. A certain cluster corresponds to a label, and the label can be the labels of all objects in the cluster. By sampling and analyzing the class cluster to which a certain object belongs, the label of the object is obtained, and the labeling processing of the object can be supported.
In one embodiment, the step S501 of performing sampling analysis on the class cluster to which the object belongs to obtain the tag of the object includes: and sampling and analyzing the class cluster to which the object belongs according to the time characteristics, determining a label corresponding to the class cluster to which the object belongs as the label of the object, wherein the time characteristics comprise the positioning frequency in a second time range.
In embodiments of the present disclosure, the temporal characteristic used for sampling analysis may include the positioning frequency of an object within a certain second time range. The second time range may be selected based on the characteristics of the desired label. For example, if labels such as student and teacher are required, the winter and summer vacations and the normal school months may be selected as second time ranges. If the positioning frequency of most objects in a certain class cluster during vacation months is much lower than in the other months, the class cluster can be preliminarily judged to be a student cluster or a teacher cluster.
The second time range may be the same as or different from the first time range and the time period used for obtaining the feature vector in the above embodiment. The first time range for deriving the feature vector may or may not be in the second time range for sample analysis. For example, the first time range for deriving the feature vectors may be one day and the second time range for sampling analysis may be one month. As another example, the first time range for deriving the feature vectors may be one week and the second time range for sampling analysis may be one year.
In the embodiment of the present disclosure, the positioning frequency of each object in the target area may be obtained from the positioning service. For example, the positioning frequency of each object within the second time range and within the target area may be obtained. The positioning frequency may include a number of positionings and/or a positioning rate. For example, if the number of times a certain object is positioned in a certain area in a month (assuming the month has 30 days) is N, the positioning rate may be N/30. For another example, if the number of times a certain object is positioned in a certain area in a week is N, the positioning rate may be N/7.
In the embodiment of the present disclosure, one or more tags corresponding to the class cluster to which each object belongs may be determined according to the positioning frequency of each object in the second time range and in the target area. If the class cluster to which the object belongs corresponds to a label, the label can be added to the object. If the class cluster to which the object belongs corresponds to a plurality of labels, it is necessary to further determine a more appropriate label for the class cluster to which the object belongs. For example, the object ID1 corresponds to the class cluster a, and after sampling analysis is performed on the class cluster a according to the time characteristics, the label of the class cluster a is label a 1. For another example, the object ID1 corresponds to the class cluster a, and after sampling analysis is performed on the class cluster a according to the time characteristics, two possible tags a1 and a2 of the class cluster a are obtained, so that whether the tag of the class cluster a is the tag a1 or the tag a2 can be further determined.
In the embodiment of the disclosure, one or more labels corresponding to the class cluster to which the object belongs can be determined according to the positioning frequency in the time characteristic, so that the object can be supported to be labeled, and the label identification efficiency and accuracy are improved.
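The following is an illustrative sketch of sampling analysis based on the time characteristic: the positioning frequency of sampled cluster members during vacation months is compared with the frequency in the remaining months. The month sets, day counts, and the 0.5 ratio threshold are assumptions made for the example, not values prescribed by the disclosure.

```python
# Sketch of labeling a class cluster from the positioning frequency in a second time range.
# `timestamps` are assumed to be datetime.date / datetime.datetime objects.
import numpy as np

VACATION_MONTHS = {1, 2, 7, 8}          # assumed winter and summer vacation months

def positioning_frequency(timestamps, months, days):
    """Number of positionings falling in the given months divided by the number of days."""
    count = sum(1 for t in timestamps if t.month in months)
    return count / days

def coarse_cluster_label(sampled_objects):
    """sampled_objects: one list of positioning timestamps per sampled object in the cluster."""
    ratios = []
    for ts in sampled_objects:
        vacation = positioning_frequency(ts, VACATION_MONTHS, days=120)
        other = positioning_frequency(ts, set(range(1, 13)) - VACATION_MONTHS, days=245)
        ratios.append(vacation / other if other > 0 else 1.0)
    # A much lower vacation-time positioning frequency suggests a teacher or student cluster.
    return "teacher_or_student" if np.median(ratios) < 0.5 else "resident_or_visitor"
```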
In one embodiment, the step S501 of performing sampling analysis on the class cluster to which the object belongs to obtain the tag of the object includes: and sampling and analyzing the class cluster to which the object belongs according to the age characteristics, and determining a label corresponding to the class cluster to which the object belongs as the label of the object.
In one example, the label corresponding to the class cluster to which the object belongs may be determined as the label of the object according to both the time characteristic and the age characteristic. For example, sampling analysis may be performed on the class cluster to which the object belongs according to the time characteristic to determine a plurality of candidate labels corresponding to the class cluster; then, according to the age characteristic, a unique label corresponding to the class cluster is determined from those candidate labels and used as the label of the object. In the embodiments of the present disclosure, the age characteristic may be acquired from information provided by a registered user of an application program, a network platform, or the like. If it is not provided, the acquired age characteristic may be null. Users of different age groups may belong to different class clusters, and the age characteristic can be used to distinguish such clusters. For example, the age characteristics of students and teachers may differ: teachers are mostly over 30 years old, while students are mostly under 30 years old. If the median age of the objects in a class cluster is greater than or equal to 30, the cluster is labeled as teacher; if the median age is less than 30, the cluster is labeled as student. The value of the age characteristic used for sampling analysis is only an example and is not limiting; it may also be 28, 32, and so on, and can be flexibly selected according to the requirements of the actual application scenario. The age characteristic allows the label of the object to be obtained more accurately.
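A small sketch of the age-based refinement described above; the threshold of 30 follows the example in the text, and the handling of missing ages is an assumption.

```python
# Sketch: label a candidate class cluster by the median age of sampled members.
import numpy as np

def refine_by_age(sampled_ages, age_threshold=30):
    ages = [a for a in sampled_ages if a is not None]   # the age characteristic may be null
    if not ages:
        return None          # fall back to e.g. reference-vector similarity (see below)
    return "teacher" if np.median(ages) >= age_threshold else "student"
```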
In one embodiment, the step S501 of performing sampling analysis on the class cluster to which the object belongs to obtain the tag of the object includes: and sampling and analyzing the class cluster to which the object belongs according to the positioning times, and determining a label corresponding to the class cluster to which the object belongs as the label of the object.
In an example, a label corresponding to a class cluster to which the object belongs may be determined as the label of the object according to the time characteristic and the positioning frequency. For example, sampling analysis is performed on the class cluster to which the object belongs according to the time characteristics, and a plurality of labels corresponding to the class cluster to which the object belongs are determined; and then according to the positioning times, determining a unique label corresponding to the class cluster to which the object belongs from a plurality of labels corresponding to the class cluster to which the object belongs, and using the unique label as the label of the object.
In an example, the label corresponding to the class cluster to which the object belongs may also be determined as the label of the object according to the time characteristic, the age characteristic, and the number of times of positioning. The use sequence of the time characteristic, the age characteristic and the positioning frequency can be flexibly set according to the requirements of the actual application scene, and the embodiment of the disclosure is not limited.
In the embodiments of the present disclosure, the number of times of positioning may also be referred to as a positioning number. For example, the number of times of positioning of the object may be acquired from a positioning service. Sampling analysis is carried out on the positioning times of each object in the class cluster to which a certain object belongs, so that a more accurate label of the class cluster can be obtained, and a more accurate label of the object can be obtained. For example, 300 objects are sampled from 2000 objects in the class cluster B, the positioning times of the 300 objects are analyzed, and if the sum of the positioning times is higher, the class cluster B can be judged to be a resident cluster; if the sum of the positioning times is lower, the cluster B can be judged as a visitor cluster.
In one embodiment, the step S501 of performing sampling analysis on the class cluster to which the object belongs to obtain the tag of the object includes: and according to the similarity between the reference vector and the characteristic vector of the object, sampling and analyzing the class cluster to which the object belongs, and determining a label corresponding to the class cluster to which the object belongs as the label of the object.
In an example, a label corresponding to a class cluster to which the object belongs may be determined as the label of the object according to the time characteristic and the similarity. For example, sampling analysis may be performed on the class cluster to which the object belongs according to the time characteristics, and a plurality of tags corresponding to the class cluster to which the object belongs may be determined; and according to the similarity between the reference vector and the feature vector of the object, determining a unique label corresponding to the class cluster to which the object belongs from a plurality of labels corresponding to the class cluster to which the object belongs, and using the unique label as the label of the object.
In an example, the label corresponding to the class cluster to which the object belongs may also be determined according to the time characteristic, the age characteristic and the similarity, as the label of the object. The time characteristic, the age characteristic and the use sequence of the similarity can be flexibly set according to the requirements of practical application, and the embodiment of the disclosure is not limited.
In an example, the label corresponding to the class cluster to which the object belongs may also be determined as the label of the object according to the time characteristic, the age characteristic, the positioning frequency and the similarity. The time characteristic, the age characteristic, the positioning frequency and the use sequence of the similarity can be flexibly set according to the requirements of practical application, and the embodiment of the disclosure is not limited.
In the embodiment of the present disclosure, reference vectors of various tags, for example, a student reference vector, a teacher reference vector, a resident reference vector, and the like may be set in advance. For example, similarity of the feature vector of the object to the reference vectors of the plurality of tags is calculated, thereby determining a more accurate tag of the object. For another example, after the class cluster to which the object belongs is sampled and analyzed according to the time characteristics to obtain a plurality of labels of the object, the similarity between the characteristic vector of the object and the reference vectors of the plurality of labels can be calculated, and then a more accurate label of the object is determined.
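A possible sketch of the similarity-based refinement, comparing the object's feature vector with preset reference vectors; the use of cosine similarity and the dictionary layout are assumptions of this example.

```python
# Sketch: pick the label whose preset reference vector is most similar to the feature vector.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def label_by_reference(feature_vector, reference_vectors):
    """reference_vectors: dict mapping a candidate label to its preset reference vector."""
    return max(reference_vectors,
               key=lambda label: cosine_similarity(feature_vector, reference_vectors[label]))

# e.g. label_by_reference(obj_vec, {"student": student_ref, "teacher": teacher_ref})
```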
In one embodiment, the method may further comprise: and S502, adding the label of the object according to the identification of the object. For example, if a certain object is a user of an application program or a network platform, after the tag of the user is determined according to the classification method of the embodiment of the present disclosure, the tag may be added to the user according to the identifier of the user. In the embodiment of the disclosure, after the label of the object is determined, the label can be automatically added to the object, so that the efficiency and the accuracy of label identification are improved.
Fig. 6 is a schematic structural diagram of a classification apparatus according to an embodiment of the present disclosure. The apparatus may include:
the feature vector module 601 is configured to obtain a feature vector of an object according to location information of the object appearing in the target area;
the cluster training module 602 is configured to perform cluster training on the feature vector of the object to obtain a cluster to which the object belongs.
Fig. 7 is a schematic structural diagram of a classification apparatus according to another embodiment of the present disclosure. In one embodiment, the location information of the object includes a location name of the object, and the feature vector module 601 includes:
a first vector sub-module 701, configured to obtain a positioning name vector of the object according to the positioning name of the object;
a second vector sub-module 702, configured to obtain a feature vector of the object according to the positioning name vectors of the object appearing in the target area within the first time range.
In an embodiment, the first vector sub-module 701 is further configured to sum and average the word vectors of all the word segments of the positioning name of the object to obtain the positioning name vector of the object.
In an embodiment, the first time range includes a plurality of time periods, and the second vector sub-module 702 is further configured to sum and average a plurality of location name vectors of the object included in each time period, to obtain a sub-vector of the object corresponding to each time period; and splicing the sub-vectors of the object corresponding to the multiple time periods to obtain the characteristic vector of the object.
In an embodiment, the first time range includes a plurality of time periods, and the second vector sub-module 702 is further configured to sum and average a plurality of location name vectors of the object included in each time period, to obtain a sub-vector of the object corresponding to each time period; and splicing the sub-vectors of the object corresponding to the time periods with the age characteristics of the object to obtain the characteristic vector of the object.
In an embodiment, the cluster training module 602 is further configured to perform cluster training on the feature vector of the object by using a gaussian mixture model, so as to obtain a cluster to which the object belongs.
Fig. 8 is a schematic structural diagram of a classification apparatus according to another embodiment of the present disclosure. In one embodiment, the apparatus further comprises:
and the sampling analysis module 801 is configured to perform sampling analysis on the class cluster to which the object belongs to obtain the tag of the object.
In one embodiment, the sample analysis module 801 includes:
the time sampling sub-module 8011 is configured to perform sampling analysis on the class cluster to which the object belongs according to a time characteristic, and determine a tag corresponding to the class cluster to which the object belongs, as the tag of the object, where the time characteristic includes a positioning frequency in a second time range.
In one embodiment, the sample analysis module 801 includes:
the age sampling sub-module 8012 is configured to sample and analyze the class cluster to which the object belongs according to the age characteristics, and determine a label corresponding to the class cluster to which the object belongs, as the label of the object.
In one embodiment, the sample analysis module 801 includes:
the quantity sampling sub-module 8013 is configured to sample and analyze the class cluster to which the object belongs according to the positioning times, and determine a label corresponding to the class cluster to which the object belongs, as the label of the object.
In one embodiment, the sample analysis module 801 includes:
the similarity sub-module 8014 is configured to, according to the similarity between the reference vector and the feature vector of the object, perform sampling analysis on the class cluster to which the object belongs, and determine a label corresponding to the class cluster to which the object belongs, as the label of the object.
In this embodiment, the sampling analysis module 801 may include one or more of the time sampling sub-module 8011, the age sampling sub-module 8012, the quantity sampling sub-module 8013, and the similarity sub-module 8014, which may be flexibly configured according to the requirements of the actual application scenario; this is not limited in the embodiments of the disclosure. For example, where the sampling analysis module 801 includes the time sampling sub-module 8011 and the age sampling sub-module 8012, the two sub-modules may cooperate to perform the steps of sampling analysis. For example, the time sampling sub-module 8011 performs sampling analysis on the class cluster to which the object belongs according to the time characteristic and determines a plurality of candidate labels corresponding to that class cluster; the age sampling sub-module 8012 then determines, according to the age characteristic, a unique label corresponding to the class cluster from those candidate labels as the label of the object. Cooperation among the other sub-modules of the sampling analysis module 801 is similar; reference may be made to the example of cooperation between the time sampling sub-module 8011 and the age sampling sub-module 8012, or to the related description of the corresponding steps in the classification method embodiments, which is not repeated here.
In one embodiment, the apparatus further comprises:
a tag module 802, configured to add a tag of the object according to the identifier of the object.
For a description of specific functions and examples of each module of the classification device of the present disclosure, reference may be made to the related description of the corresponding step in the foregoing classification method embodiment, which is not repeated herein.
In one example, the classification method of the embodiments of the present disclosure may be used to assign labels to objects such as students. The student population is usually active in schools, so the range of trajectories in which students frequently move can be mined based on the positioning data and ages of users, thereby identifying which users are students. The positioning data may be obtained from a platform with a positioning service, such as an operator or mapping software.
With the classification method of the embodiments of the present disclosure, a student label can be generated based on the positioning of an object such as a user. The specific process may include feature generation and category classification.
I. Feature generation: an exemplary flow is as follows, see fig. 9:
s901, extracting the identification of all users and the positioning names of the users which are positioned in a target area, such as a school, for a period of time, such as the last 1 year. Wherein, the time range of the extraction can be selected. Schools are only one example of a target area and may be other areas as well. For example, acquisition of the identity of a user included in an application or network platform is minimized. The location name may include a school name and a building name, etc. For example, a certain canteen of a certain school.
And S902, generating a feature vector for each user. An example of the vector construction process is as follows:
and S902, 902a, dividing time periods. Each day was divided into 3 time periods by hour: 8: 00-18: 00, 18: 00-23: 00, 23: 00-8: 00. The division of the 3 time periods is merely an example and not a limitation. The number and the division range of the specific time periods can be flexibly selected according to the requirements of application scenes.
And S902b, summing and averaging the positioning name vectors covered by each time segment to obtain a corresponding sub-vector of each time segment.
The method for generating the location name vector may include: word segmentation of the location name, and summing and averaging of the word vectors. For example, as shown in fig. 10a, a certain location name N may be split into a plurality of word segments: segment 1, segment 2, ..., segment n, where each segment corresponds to a word vector: word vector 1, word vector 2, ..., word vector n. Summing and averaging these word vectors yields the location name vector N corresponding to the location name N.
The positioning name vectors within each time period are summed and averaged to obtain the sub-vector corresponding to that time period. For example, as shown in fig. 10b, location name 1 corresponds to location name vector 1, location name 2 corresponds to location name vector 2, ..., and location name N corresponds to location name vector N. For the positionings in the first time period, 8:00-18:00, location name vector 1, location name vector 2, ... are summed and averaged to obtain sub-vector 1 corresponding to the first time period. For the positionings in the second time period, 18:00-23:00, the corresponding location name vectors are summed and averaged to obtain sub-vector 2 corresponding to the second time period. For the positionings in the third time period, 23:00-8:00, the corresponding location name vectors are summed and averaged to obtain sub-vector 3 corresponding to the third time period.
S902c, vector splicing.
The sub-vectors of the multiple time periods are spliced to obtain the feature vector of the object, for example the feature vector of the user. For example, referring to fig. 10c, assuming that each sub-vector includes 128 features, sub-vector 1 of the first time period includes w1, w2, ..., w128, sub-vector 2 of the second time period includes w1, w2, ..., w128, and sub-vector 3 of the third time period includes w1, w2, ..., w128. Concatenation yields a vector comprising 384 features.
Alternatively, the sub-vectors of the multiple time periods are spliced and the age feature is appended to obtain the feature vector of the object, for example a user feature vector. Referring to fig. 10d, in addition to the features of sub-vector 1, sub-vector 2, and sub-vector 3, 1 age feature is appended, so a vector comprising 385 features can be spliced.
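Putting the worked example together, the following sketch buckets a user's positioning records into the three hourly periods, averages the 128-dimensional location name vectors per period, splices them, and appends the age feature to obtain 3 × 128 + 1 = 385 features. The record format (hour, name_vector) and the zero-vector fallback for empty periods are assumptions made for illustration.

```python
# End-to-end sketch of the worked feature-generation example.
import numpy as np

DIM = 128                                 # dimension of each location name vector

def period_index(hour):
    if 8 <= hour < 18:
        return 0                          # 8:00-18:00
    if 18 <= hour < 23:
        return 1                          # 18:00-23:00
    return 2                              # 23:00-8:00 (wraps around midnight)

def user_feature_vector(records, age):
    """records: list of (hour, location_name_vector) pairs for one user."""
    buckets = [[], [], []]
    for hour, name_vector in records:
        buckets[period_index(hour)].append(name_vector)
    sub_vectors = [np.mean(b, axis=0) if b else np.zeros(DIM) for b in buckets]
    return np.concatenate(sub_vectors + [np.array([float(age)])])   # 385 features
```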
II. Category classification: since the people positioned at the school include teachers, residents, and visitors in addition to students, further distinction is needed, and unsupervised clustering can be used for this. Referring to fig. 11, the flow of category classification is as follows:
s1101, carrying out clustering training on the feature vectors of the users (the feature vectors can be called as user vectors or feature vectors for short) by adopting a Gaussian mixture model. For example, referring to fig. 12, the number of class clusters may be set to 4. The feature vector of the user is used as an input of the gaussian mixture model, and an output of the gaussian mixture model may include an identifier of the user and an identifier of a class cluster corresponding to the identifier. The number of the above-mentioned clusters is merely an example and is not limited, and may be flexibly selected according to the requirements of the application scenario.
S1102, the resulting class clusters, for example 4 of them, are sampled and analyzed using the time characteristic. The candidate labels that each class cluster may correspond to can be distinguished first: for example, a first category of clusters includes teachers or students, and a second category of clusters includes residents or visitors.
For example, if the objects (such as users) in a certain class cluster are positioned at the school much less frequently in certain months, such as January-February and July-August, than in other months, then that class cluster is a teacher cluster or a student cluster; otherwise it is a resident cluster or a visitor cluster.
In one embodiment, the positioning frequency may be counted from the positioning data and may include the number of positionings and/or a positioning rate. The positioning rate may be obtained by dividing the number of positionings by the length of the time range, for example by 30 days.
S1103, the labels of the class clusters are further distinguished using the age characteristic. The age medians of the teacher and student candidate clusters are calculated: a class cluster with a median lower than 30 is the student cluster, and a class cluster with a median higher than 30 is the teacher cluster. If there is no age feature, a reference vector can also be used to distinguish the student cluster from the teacher cluster; for example, the similarity between the cluster and the reference vector of a known student cluster is calculated to determine whether it is a student cluster, and the teacher cluster may be treated similarly.
S1104, the labels of the class clusters are further distinguished using the positioning quantity characteristic. For example, the positioning quantity characteristic (e.g., the number of positionings) of the resident and visitor candidate clusters is counted: the cluster with the higher total number of positionings at the school is the resident cluster, and the cluster with the lower total is the visitor cluster.
Using this example method, it can be determined which class cluster is the student cluster, and thus which user IDs should carry the student label. Further, the corresponding label may be added to the user IDs registered with the application, the network platform, or the like.
Determining labels with the classification method of the embodiments of the present disclosure achieves high accuracy and does not require collecting samples. For example, automatically identifying user labels, such as student labels, from positioning data can improve the efficiency and accuracy of label identification.
In the technical solution of the present disclosure, the acquisition, storage, and application of the personal information of the users involved all comply with the relevant laws and regulations and do not violate public order and good morals.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 13 illustrates a schematic block diagram of an example electronic device 1300 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 13, the device 1300 includes a computing unit 1301 that can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 1302 or a computer program loaded from a storage unit 1308 into a random access memory (RAM) 1303. Various programs and data necessary for the operation of the device 1300 can also be stored in the RAM 1303. The computing unit 1301, the ROM 1302, and the RAM 1303 are connected to each other via a bus 1304. An input/output (I/O) interface 1305 is also connected to the bus 1304.
A number of components in the device 1300 connect to the I/O interface 1305, including: an input unit 1306 such as a keyboard, a mouse, or the like; an output unit 1307 such as various types of displays, speakers, and the like; storage unit 1308, such as a magnetic disk, optical disk, or the like; and a communication unit 1309 such as a network card, modem, wireless communication transceiver, etc. The communication unit 1309 allows the device 1300 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special- or general-purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server with a combined blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel or sequentially or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.
Claims (21)
1. A method of classification, comprising:
obtaining a characteristic vector of an object according to position information of the object appearing in a target area;
and performing clustering training on the characteristic vectors of the objects to obtain the class cluster to which the objects belong.
2. The method of claim 1, wherein the location information of the object comprises a location name of the object, and deriving the feature vector of the object according to the location information of the object appearing in the target area comprises:
obtaining a positioning name vector of the object according to the positioning name of the object;
and obtaining a characteristic vector of the object according to the positioning name vector of the object appearing in the target area in the first time range.
3. The method of claim 2, wherein obtaining the location name vector of the object according to the location name of the object comprises:
and summing and averaging word vectors of all the participles of the positioning name of the object to obtain the positioning name vector of the object.
4. The method of claim 2 or 3, wherein the first time range comprises a plurality of time periods, and deriving the feature vector of the object according to the location name vector of the object appearing in the target area within the first time range comprises:
summing and averaging a plurality of positioning name vectors of the object included in each time period to obtain a sub-vector of the object corresponding to each time period;
and splicing the sub-vectors of the object corresponding to the time periods to obtain the characteristic vector of the object.
5. The method of claim 2 or 3, wherein the first time range comprises a plurality of time periods, and deriving the feature vector of the object according to the location name vector of the object appearing in the target area within the first time range comprises:
summing and averaging a plurality of positioning name vectors of the object included in each time period to obtain a sub-vector of the object corresponding to each time period;
and splicing the sub-vectors of the object corresponding to the time periods with the age characteristics of the object to obtain the characteristic vector of the object.
6. The method according to any one of claims 1 to 5, wherein performing cluster training on the feature vectors of the object to obtain a cluster to which the object belongs comprises:
and performing clustering training on the characteristic vectors of the objects by adopting a Gaussian mixture model to obtain the class cluster to which the objects belong.
7. The method of any of claims 1 to 6, further comprising:
and sampling and analyzing the class cluster to which the object belongs to obtain the label of the object.
8. The method of claim 7, wherein performing sampling analysis on the class cluster to which the object belongs to obtain the label of the object comprises at least one of:
performing sampling analysis on the class cluster to which the object belongs according to a time feature, and determining a label corresponding to the class cluster to which the object belongs as the label of the object, wherein the time feature comprises a positioning frequency within a second time range;
performing sampling analysis on the class cluster to which the object belongs according to an age feature, and determining a label corresponding to the class cluster to which the object belongs as the label of the object;
performing sampling analysis on the class cluster to which the object belongs according to a number of positioning times, and determining a label corresponding to the class cluster to which the object belongs as the label of the object;
and performing sampling analysis on the class cluster to which the object belongs according to a similarity between a reference vector and the feature vector of the object, and determining a label corresponding to the class cluster to which the object belongs as the label of the object.
9. The method of any one of claims 1 to 8, further comprising:
adding the label to the object according to an identifier of the object.
10. A classification apparatus, comprising:
a feature vector module configured to obtain a feature vector of an object according to position information of the object appearing in a target area;
and a clustering training module configured to perform clustering training on the feature vector of the object to obtain a class cluster to which the object belongs.
11. The apparatus of claim 10, wherein the position information of the object comprises a location name of the object, and the feature vector module comprises:
a first vector sub-module configured to obtain a location name vector of the object according to the location name of the object;
and a second vector sub-module configured to obtain the feature vector of the object according to the location name vector of the object appearing in the target area within a first time range.
12. The apparatus of claim 11, wherein the first vector sub-module is further configured to sum and average word vectors of all word segments of the location name of the object to obtain the location name vector of the object.
13. The apparatus of claim 11 or 12, wherein the first time range comprises a plurality of time periods, and the second vector sub-module is further configured to sum and average a plurality of location name vectors of the object within each time period to obtain a sub-vector of the object corresponding to each time period, and to concatenate the sub-vectors of the object corresponding to the time periods to obtain the feature vector of the object.
14. The apparatus of claim 11 or 12, wherein the first time range comprises a plurality of time periods, and the second vector sub-module is further configured to sum and average a plurality of location name vectors of the object within each time period to obtain a sub-vector of the object corresponding to each time period, and to concatenate the sub-vectors of the object corresponding to the time periods with an age feature of the object to obtain the feature vector of the object.
15. The apparatus according to any one of claims 10 to 14, wherein the clustering training module is further configured to perform clustering training on the feature vector of the object by using a Gaussian mixture model to obtain the class cluster to which the object belongs.
16. The apparatus of any one of claims 10 to 15, further comprising:
a sampling analysis module configured to perform sampling analysis on the class cluster to which the object belongs, to obtain a label of the object.
17. The apparatus of claim 16, wherein the sampling analysis module comprises at least one of:
a time sampling sub-module configured to perform sampling analysis on the class cluster to which the object belongs according to a time feature, and determine a label corresponding to the class cluster to which the object belongs as the label of the object, wherein the time feature comprises a positioning frequency within a second time range;
an age sampling sub-module configured to perform sampling analysis on the class cluster to which the object belongs according to an age feature, and determine a label corresponding to the class cluster to which the object belongs as the label of the object;
a quantity sampling sub-module configured to perform sampling analysis on the class cluster to which the object belongs according to a number of positioning times, and determine a label corresponding to the class cluster to which the object belongs as the label of the object;
and a similarity sub-module configured to perform sampling analysis on the class cluster to which the object belongs according to a similarity between a reference vector and the feature vector of the object, and determine a label corresponding to the class cluster to which the object belongs as the label of the object.
18. The apparatus of any one of claims 10 to 17, further comprising:
a label module configured to add the label to the object according to an identifier of the object.
19. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-9.
20. A non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-9.
21. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-9.
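The feature-vector construction recited in claims 2 to 5 can be illustrated with a short sketch. The following Python example is purely illustrative and is not part of the claims: it assumes a pre-trained word-vector lookup and a tokenizer supplied by the caller, and all function and parameter names are hypothetical.

```python
# Illustrative sketch of the feature-vector construction in claims 2-5.
# Assumptions (not taken from the patent): word vectors come from a pre-trained
# lookup (e.g. a gensim KeyedVectors object), location names are tokenized by a
# caller-supplied function, and the first time range is split into fixed periods.
import numpy as np

def location_name_vector(name, tokenize, word_vectors, dim=100):
    """Sum and average the word vectors of all word segments of a location name (claim 3)."""
    tokens = [t for t in tokenize(name) if t in word_vectors]
    if not tokens:
        return np.zeros(dim)
    return np.mean([word_vectors[t] for t in tokens], axis=0)

def feature_vector(visits, periods, tokenize, word_vectors, dim=100, age=None):
    """Build an object's feature vector from its visits within the first time range.

    visits  -- list of (timestamp, location_name) records for the object
    periods -- list of (start, end) tuples partitioning the first time range
    age     -- optional age feature appended to the concatenated sub-vectors (claim 5)
    """
    sub_vectors = []
    for start, end in periods:
        name_vecs = [location_name_vector(n, tokenize, word_vectors, dim)
                     for t, n in visits if start <= t < end]
        # Sum and average the location-name vectors falling into this period (claim 4).
        sub = np.mean(name_vecs, axis=0) if name_vecs else np.zeros(dim)
        sub_vectors.append(sub)
    vec = np.concatenate(sub_vectors)          # concatenate the per-period sub-vectors
    if age is not None:                        # optionally append the age feature (claim 5)
        vec = np.concatenate([vec, [float(age)]])
    return vec
```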
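The clustering step of claim 6 maps naturally onto an off-the-shelf Gaussian mixture model. The sketch below uses scikit-learn purely as an example; the patent does not prescribe any particular library, and the number of clusters shown is an arbitrary assumption.

```python
# Illustrative sketch of the clustering training in claim 6 using a Gaussian
# mixture model. Component count and covariance type are example assumptions.
from sklearn.mixture import GaussianMixture

def cluster_objects(feature_matrix, n_clusters=8, seed=0):
    """Cluster object feature vectors with a Gaussian mixture model.

    feature_matrix -- array of shape (n_objects, n_features), one row per object
    Returns the fitted model and the class-cluster index assigned to each object.
    """
    gmm = GaussianMixture(n_components=n_clusters, covariance_type="diag",
                          random_state=seed)
    labels = gmm.fit_predict(feature_matrix)   # class cluster to which each object belongs
    return gmm, labels
```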
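For claim 8, one of the listed sampling-analysis options labels a class cluster by comparing sampled cluster members with a reference vector. The sketch below shows one plausible reading of that option; the sample size, similarity measure (cosine), and threshold are assumptions for illustration only.

```python
# Illustrative sketch of the similarity-based sampling analysis option in claim 8.
# The reference vector is assumed to be built from a few known examples of a label.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def label_cluster_by_similarity(feature_matrix, cluster_ids, cluster,
                                reference_vector, label, sample_size=50,
                                threshold=0.8, seed=0):
    """Sample members of one class cluster and return `label` for the cluster
    if the sampled members are, on average, similar enough to the reference vector."""
    members = np.flatnonzero(cluster_ids == cluster)
    if len(members) == 0:
        return None
    rng = np.random.default_rng(seed)
    sample = rng.choice(members, size=min(sample_size, len(members)), replace=False)
    sims = [cosine(feature_matrix[i], reference_vector) for i in sample]
    return label if np.mean(sims) >= threshold else None
```

Under these assumptions, a cluster whose sampled members are sufficiently similar to the reference vector receives the corresponding label, and the label can then be attached to each member object via its identifier, as in claim 9.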
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202111588053.6A CN114330523A (en) | 2021-12-23 | 2021-12-23 | Classification method, classification device, classification equipment and storage medium |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202111588053.6A CN114330523A (en) | 2021-12-23 | 2021-12-23 | Classification method, classification device, classification equipment and storage medium |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN114330523A (en) | 2022-04-12 |
Family
ID=81055105
Family Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202111588053.6A Pending CN114330523A (en) | 2021-12-23 | 2021-12-23 | Classification method, classification device, classification equipment and storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN114330523A (en) |
Patent Citations (12)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20110029469A1 (en) * | 2009-07-30 | 2011-02-03 | Hideshi Yamada | Information processing apparatus, information processing method and program |
| CN108647200A (en) * | 2018-04-04 | 2018-10-12 | 顺丰科技有限公司 | Talk with intent classifier method and device, equipment and storage medium |
| CN111954874A (en) * | 2018-04-11 | 2020-11-17 | 诺基亚技术有限公司 | Identify ribbons within a geographic area |
| CN109086265A (en) * | 2018-06-29 | 2018-12-25 | 厦门快商通信息技术有限公司 | A kind of semanteme training method, multi-semantic meaning word disambiguation method in short text |
| CN111125550A (en) * | 2018-11-01 | 2020-05-08 | 百度在线网络技术(北京)有限公司 | Interest point classification method, device, equipment and storage medium |
| CN110020022A (en) * | 2019-01-03 | 2019-07-16 | 阿里巴巴集团控股有限公司 | Data processing method, device, equipment and readable storage medium storing program for executing |
| CN112884390A (en) * | 2019-11-29 | 2021-06-01 | 北京三快在线科技有限公司 | Order processing method and device, readable storage medium and electronic equipment |
| CN111210269A (en) * | 2020-01-02 | 2020-05-29 | 平安科技(深圳)有限公司 | Object identification method based on big data, electronic device and storage medium |
| CN113496236A (en) * | 2020-03-20 | 2021-10-12 | 北京沃东天骏信息技术有限公司 | User tag information determination method, device, equipment and storage medium |
| CN112488384A (en) * | 2020-11-27 | 2021-03-12 | 香港理工大学深圳研究院 | Method, terminal and storage medium for predicting target area based on social media sign-in |
| CN113360602A (en) * | 2021-06-22 | 2021-09-07 | 北京百度网讯科技有限公司 | Method, apparatus, device and storage medium for outputting information |
| CN113704436A (en) * | 2021-09-02 | 2021-11-26 | 宁波深擎信息科技有限公司 | User portrait label mining method and device based on session scene |
Non-Patent Citations (1)
| Title |
|---|
| 刘金花 (LIU Jinhua): 《文本挖掘与Python实践》 (Text Mining and Python Practice), vol. 1, 31 August 2021, 四川大学出版社 (Sichuan University Press), pages 59-60 * |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US20200326197A1 (en) | Method, apparatus, computer device and storage medium for determining poi alias | |
| CN111212383B (en) | Method, device, server and medium for determining number of regional permanent population | |
| US20230049839A1 (en) | Question Answering Method for Query Information, and Related Apparatus | |
| CN108182253B (en) | Method and apparatus for generating information | |
| CN109492066B (en) | Method, device, equipment and storage medium for determining branch names of points of interest | |
| CN109658033B (en) | Method, system, device and storage medium for calculating similarity of goods source route | |
| Xu et al. | A supervoxel approach to the segmentation of individual trees from LiDAR point clouds | |
| CN109684624B (en) | A method and device for automatically identifying order address road areas | |
| CN112417274A (en) | Message pushing method and device, electronic equipment and storage medium | |
| CN111310961A (en) | Data prediction method, data prediction device, electronic equipment and computer readable storage medium | |
| CN112949784B (en) | Resident trip chain model construction method and resident trip chain acquisition method | |
| US11468349B2 (en) | POI valuation method, apparatus, device and computer storage medium | |
| CN111209351B (en) | Object relation prediction method, object recommendation method, object relation prediction device, object recommendation device, electronic equipment and medium | |
| CN111125272B (en) | Regional characteristic acquisition method, regional characteristic acquisition device, computer equipment and medium | |
| KR102850113B1 (en) | Signal processing method, apparatus, device and storage medium | |
| CN110650170A (en) | Method and apparatus for pushing information | |
| CN115146653B (en) | Dialogue scenario construction method, device, equipment and storage medium | |
| CN112052848A (en) | Method and device for acquiring sample data in street labeling | |
| CN114330523A (en) | Classification method, classification device, classification equipment and storage medium | |
| US20240362552A1 (en) | Estimation device, estimation method, and estimation program | |
| CN113704314A (en) | Data analysis method and device, electronic equipment and storage medium | |
| CN111954154B (en) | Positioning method and device, computer readable storage medium and electronic device | |
| CN119150169A (en) | Address type identification method, correlation analysis method and device | |
| US20210110317A1 (en) | Summarizing business process models | |
| CN113111229A (en) | Regular expression-based method and device for extracting track-to-ground address of alarm receiving and processing text |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||