Disclosure of Invention
The technical problem to be solved by the invention is to provide a method for classifying geographic information service metadata texts in a multi-level and multi-label manner aiming at the defects in the prior art.
The technical scheme adopted by the invention for solving the technical problems is as follows: a geographic information service metadata text multi-level multi-label classification method comprises the following steps:
1) acquiring a geographic information service metadata text set containing unmarked samples and marked samples to perform text preprocessing, and dividing each data sample into text feature word combinations;
2) defining a primary classification catalogue based on the domain application theme category of the geographic information resource, and generating a typical word list which is closely associated with the semantics of the classification category (hereinafter referred to as theme);
3) screening text characteristic words according to the typical word list, filtering out the characteristics of which the distance from the typical words is greater than a threshold value, and obtaining a characteristic subset screened according to the theme classification;
4) selecting the classical multi-label classification algorithm ML-KNN (Multi-Label K-Nearest Neighbors) as a base model H1 for co-training;
5) calculating the semantic distance from the features to the topics according to the corpus, and establishing a topic prediction model ML-CSW (Multi-label Classification based on SWEET & WordNet), using this model as the other co-training base model H2;
6) Designing a cooperative mechanism based on the two basic models, and matching a multi-label theme for the metadata text to serve as a primary coarse-grained theme classification result;
7) selecting a metadata text corresponding to a certain classification label according to a primary coarse-grained theme classification result, extracting a text theme to serve as a fine-grained theme of a next level, and simultaneously obtaining a matching relation between the metadata text and a double-layer theme catalog;
8) repeating step 7) to obtain fine-grained topic category catalogs at different levels and the matching relation between the metadata texts and the topic catalogs.
According to the scheme, the step 2) of defining the primary classification directory based on the domain application theme categories of the geographic information resources is to obtain primary classification by expanding the social benefit fields SBAs proposed by the international earth observation organization aiming at the field of geology.
According to the scheme, the typical vocabulary generation mode in the step 2) is as follows:
taking the SBAs as the topic classification directory, extracting the hypernyms, hyponyms and synonyms of each topic in the SWEET and WordNet definitions as typical words related to the topic semantics, and generating a typical word list.
According to the scheme, the text characteristic words are screened according to the typical word list in the step 3), which specifically comprises the following steps:
s31, representing the typical words and the text feature words into two-dimensional space Word vectors based on the Word2vec algorithm;
s32, calculating the cosine distance between the typical word and the text feature word vector;
and S33, setting a distance threshold T, and filtering out text characteristic words with the cosine distance with the typical word larger than T.
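A minimal sketch of the filtering in steps S31–S33, using invented toy vectors in place of real Word2vec output (the words, vector values, and threshold here are illustrative assumptions, not values from the invention):

```python
import numpy as np

def cosine_distance(u, v):
    """Cosine distance = 1 - cosine similarity."""
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def filter_features(feature_vecs, typical_vecs, threshold):
    """Keep a feature word only if its cosine distance to at least one
    typical word does not exceed the threshold T (step S33)."""
    kept = []
    for word, fv in feature_vecs.items():
        if any(cosine_distance(fv, tv) <= threshold for tv in typical_vecs.values()):
            kept.append(word)
    return kept

# Toy 2-D vectors standing in for Word2vec output (hypothetical values).
typical = {"agriculture": np.array([1.0, 0.1])}
features = {
    "crop":    np.array([0.9, 0.2]),   # close to "agriculture" -> kept
    "volcano": np.array([-0.2, 1.0]),  # far from "agriculture" -> filtered
}
print(filter_features(features, typical, threshold=0.3))  # -> ['crop']
```

The surviving feature subset is what step 3) feeds into the classification models.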
According to the scheme, the method for establishing the topic model in the step 5) is as follows:
according to the network definitions of the SWEET ontology library and the WordNet English lexical network, calculating the semantic distance d_pi between a text feature f and each topic pi;
finding the minimum of the semantic distances d_pi from feature f to each topic pi, and taking it as the maximum semantic relevance s_f of the text feature f to all topics P, wherein P is the set of all topics;
defining feature weight based on the shortest distance between the text feature and the theme, establishing a theme prediction model, and predicting a multi-label theme for the unmarked sample;
assuming that the training set contains n text features in total, the vector S = [s1, s2, …, sn] of the maximum semantic relevance from all features to all topics in the training set can be calculated; the weight w(x) of a single data item x is defined as a 1 × n vector whose entries correspond to the weights of the n text features, the entry for feature f being defined as s_f if f appears in sample x, and 0 otherwise;
and establishing a theme prediction model Y, wherein F is the adjustment vector of the features and α is a smoothing parameter. Based on the labeled sample data, a BP neural network is used to iteratively optimize model Y, computing the optimal solution of F and α under minimum loss to obtain the final model, from which the category set of an unlabeled sample t is predicted;
Y=w(x)*F+α。
according to the scheme, the step 6) designs a cooperation mechanism, and matches a multi-label theme for the metadata text as a primary coarse-grained theme classification result; the method comprises the following specific steps:
S61, generating two subsets L1 and L2 from the labeled samples in the geographic information service metadata text set, to serve respectively as the training sets of the co-training base models H1 and H2;
S62, training the base models H1 and H2 with the training sets, and predicting the category vectors of the unlabeled samples with the trained base models;
S63, selecting from the unlabeled samples those for which classifiers H1 and H2 give the same prediction result and assigning them pseudo labels; adding the pseudo-labeled samples to the two training subsets L1 and L2 respectively to update the training sets, and repeating steps S62-S63 until the classification results of the two classifiers no longer change appreciably, thereby obtaining the category sets of all unlabeled samples and the final updated training set;
S64, training classifier H1 on all labeled samples and matching a topic category set for the test samples.
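The agreement-based loop of steps S61–S64 can be sketched as follows. The stand-in classifier (a single-label 1-nearest-neighbor model) and the toy data are illustrative assumptions; the invention uses ML-KNN and ML-CSW as H1 and H2 on multi-label data:

```python
import numpy as np

class OneNN:
    """Minimal 1-nearest-neighbor classifier standing in for H1 / H2."""
    def fit(self, X, y):
        self.X, self.y = np.asarray(X, float), np.asarray(y)
        return self
    def predict(self, X):
        X = np.asarray(X, float)
        d = np.linalg.norm(X[:, None, :] - self.X[None, :, :], axis=2)
        return self.y[d.argmin(axis=1)]

def co_train(L1, L2, U, max_rounds=5):
    """Co-training skeleton: samples on which both models agree receive a
    pseudo-label and are appended to both training subsets (S62-S63)."""
    X1, y1 = list(L1[0]), list(L1[1])
    X2, y2 = list(L2[0]), list(L2[1])
    U = list(U)
    for _ in range(max_rounds):
        if not U:
            break
        h1, h2 = OneNN().fit(X1, y1), OneNN().fit(X2, y2)
        p1, p2 = h1.predict(U), h2.predict(U)
        agree = [i for i in range(len(U)) if p1[i] == p2[i]]
        if not agree:
            break
        for i in agree:                       # pseudo-label agreed samples
            X1.append(U[i]); y1.append(int(p1[i]))
            X2.append(U[i]); y2.append(int(p1[i]))
        U = [u for i, u in enumerate(U) if i not in agree]
    return OneNN().fit(X1 + X2, y1 + y2)      # final model on all labels (S64)

# Toy data: two clusters labeled 0 and 1 (hypothetical points).
L1 = ([[0.0, 0.0], [5.0, 5.0]], [0, 1])
L2 = ([[0.2, 0.1], [4.8, 5.1]], [0, 1])
U  = [[0.1, 0.3], [5.2, 4.9]]
model = co_train(L1, L2, U)
print(model.predict([[0.0, 0.2], [5.0, 5.0]]))   # -> [0 1]
```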
According to the scheme, the classic multi-label classification algorithm ML-KNN is selected as a base model for collaborative training in the step 4), and the method specifically comprises the following steps:
S41, selecting the ML-KNN algorithm as the co-training base model H1: specify the number k of neighbor samples, let N(x) denote the set of the k neighbor samples of sample x in the training set, count the number c[j] of samples in N(x) that belong to topic category l, and count the number c'[j] of samples in N(x) that do not belong to topic category l. In the formulas below, the indicator y_x(l) is 1 when sample x belongs to topic category l, and 0 otherwise;
S42, calculating the prior probability P(H_l^1) that an unlabeled sample t belongs to topic category l, and the posterior probability P(E_l^j | H_l^b), where b takes the values 0 and 1, H_l^1 denotes the event that sample t belongs to topic category l, H_l^0 denotes the event that sample t does not belong to topic category l, s is a smoothing parameter, m is the number of training samples, and E_l^j denotes the event that exactly j of the k neighbor samples of sample t belong to category l;
S43, predicting the category set of the unlabeled sample t according to the maximum posterior probability and the Bayes principle.
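A toy sketch of the estimates in steps S41–S43, using the standard ML-KNN smoothed frequency counts; the data (4 samples, 1 label, k = 1) and the precomputed neighbor lists are invented for illustration:

```python
import numpy as np

def mlknn_prior(Y, l, s=1.0):
    """Prior P(H_l^1) = (s + sum_i y_i(l)) / (2s + m), the standard
    smoothed label frequency used by ML-KNN."""
    m = Y.shape[0]
    return (s + Y[:, l].sum()) / (2 * s + m)

def mlknn_posteriors(Y, neighbors, l, k, s=1.0):
    """Posterior P(E_l^j | H_l^b): probability that exactly j of a sample's
    k neighbors carry label l, given the sample does (b=1) / does not (b=0)."""
    c  = np.zeros(k + 1)    # counts c[j] for samples having label l
    cp = np.zeros(k + 1)    # counts c'[j] for samples lacking label l
    for i in range(Y.shape[0]):
        j = int(Y[neighbors[i], l].sum())   # neighbors of i with label l
        if Y[i, l] == 1:
            c[j] += 1
        else:
            cp[j] += 1
    post1 = (s + c)  / (s * (k + 1) + c.sum())
    post0 = (s + cp) / (s * (k + 1) + cp.sum())
    return post1, post0

# Toy label matrix (4 samples, 1 topic) and 1-NN neighbor lists.
Y = np.array([[1], [1], [0], [0]])
neighbors = [[1], [0], [3], [2]]
prior = mlknn_prior(Y, l=0)
post1, post0 = mlknn_posteriors(Y, neighbors, l=0, k=1)
print(prior, post1)   # prior 0.5; positive samples mostly have a positive neighbor
```

Prediction (S43) then assigns label l to t when P(H_l^1)·P(E_l^j | H_l^1) exceeds P(H_l^0)·P(E_l^j | H_l^0) for the observed neighbor count j.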
According to the scheme, the text topics in step 7) are extracted based on the Latent Dirichlet Allocation (LDA) algorithm.
The invention has the following beneficial effects: the invention provides a novel multi-level multi-label classification process for OGC Web Map Service (WMS) and other geographic information web resource metadata texts. The process introduces the geoscience ontology library SWEET and the general English lexical network WordNet into the classification flow, and combines the traditional classification algorithm ML-KNN with ML-CSW, a classification algorithm that closely fits domain characteristics and text semantics, for co-training, so as to obtain the matching relation between geographic information service metadata texts and a multi-level topic directory. By considering the domain characteristics and text semantics of the geographic information service metadata, the method depends on only a small number of labeled data samples; meanwhile, compared with traditional multi-label classification algorithms such as classifier chains and voting classifiers, the method achieves better overall classification performance.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
There are 46000 pieces of Web Map Service (WMS) text data, 400 of which are labeled with SBAs topics, with the topics uniformly distributed. The text content comes from the URL, Abstract, Keywords and Title fields in the Service tag of the WMS GetCapabilities document. Because the text content is mixed and heterogeneous, the passages vary in length, a single data item corresponds to multiple topic categories, and the number of topic-labeled samples is small, traditional multi-label classification algorithms struggle to classify accurately and comprehensively and cannot produce multi-level topic matching results.
The invention combines the theoretical basis of cooperative training in semi-supervised learning and introduces a geoscience ontology library and a basic classification model of general English vocabulary net design and fitting with characteristics of the geoscience field. And performing collaborative training in combination with a widely-applied classical multi-label classification model in the classification process, and extracting a multi-level fine-grained theme to match the multi-level multi-label theme with the WMS metadata text.
The algorithm process of the present invention will be described in detail below with reference to the accompanying drawings, in which:
as shown in fig. 1 and 2, a method for multi-level and multi-label classification of meta-data text of geographic information service includes the following steps:
1) performing text preprocessing on all WMS metadata, comprising the three steps of word segmentation, stop-word removal and lemmatization, and segmenting each text into text feature word combinations;
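The three preprocessing steps can be sketched as below. The stop-word list is abbreviated and the plural-stripping rule is a crude stand-in for a real lemmatizer (e.g. a WordNet-based one); both are illustrative assumptions:

```python
import re

STOPWORDS = {"the", "of", "and", "a", "in", "for", "is"}  # abbreviated list

def preprocess(text):
    """Tokenize, drop stop words, and crudely normalize plurals -- a
    stand-in for the segmentation / stop-word removal / lemmatization steps."""
    tokens = re.findall(r"[a-z]+", text.lower())
    tokens = [t for t in tokens if t not in STOPWORDS]
    # naive plural stripping in place of a real lemmatizer
    return [t[:-1] if t.endswith("s") and len(t) > 3 else t for t in tokens]

print(preprocess("Maps of global agricultural soils and crops"))
# -> ['map', 'global', 'agricultural', 'soil', 'crop']
```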
2) the primary classes are obtained by expanding the Social Benefit Areas (SBAs) proposed by the Group on Earth Observations (GEO), which comprise 9 major topics of interest: Agriculture, Biodiversity, Climate, Disaster, Ecosystem, Energy, Health, Water and Weather. The topic classification catalog of this embodiment is expanded on the basis of the SBAs by adding Geology as the 10th topic, so all topic classification catalogs and primary topic classification catalogs referred to in this embodiment denote these 10 topics.
Using SBAs as a topic classification directory, extracting hypernyms, hyponyms and synonyms of topics in the SWEET and WordNet definitions as typical words related to topic semantics, and generating a typical word list, wherein a diagram in FIG. 3(a) is a typical word example corresponding to a topic "Agriculture" extracted from the SWEET, a diagram in FIG. 3(b) is a typical word example corresponding to a topic "Agriculture" extracted from the WordNet, and different colors represent different semantic sets;
3) the CBOW model based on the Word2vec algorithm represents the typical words and the text characteristic words as two-dimensional space Word vectors, and calculates cosine distances between the typical words and the text characteristic Word vectors;
4) setting a distance threshold, screening text feature words based on the distance threshold, and filtering features with the distance from the typical words larger than the threshold, thereby obtaining a feature subset with larger contribution to topic classification as model input of a classification algorithm;
5) designing a multi-label classification algorithm ML-CSW that fits the WMS domain characteristics and takes text semantics into account, as the co-training base model H1, and training a topic prediction model using the semantic relevance between text features and topics computed from the corpus as feature weights:
5.1) taking the network definition of SWEET as a main part and WordNet as an auxiliary part to calculate the semantic shortest distance between text features and a theme;
if a text feature word is included in SWEET, the shortest distance between the feature word and the topic is determined directly from the SWEET network; as shown in fig. 4(a), the distance between the feature "Glacier" and the topic "Water" is 3;
if the text feature is not included in SWEET, hypernyms are searched upward layer by layer in WordNet as substitute words for the text feature until a substitute word included in SWEET is found, and the shortest distance D1 from the feature to the substitute word under the WordNet definition is calculated; as shown in fig. 4(b), the substitute word of the feature "new (snow)" is "Ice", with shortest distance 1. The shortest distance D2 between the substitute word and the topic is then calculated with the Dijkstra algorithm over the SWEET network definition; as in fig. 4(b), the shortest distance from the substitute word "Ice" to the topic "Water" is 2. The final distance between the text feature and the topic is the sum of the distance from the feature to the substitute word and the distance from the substitute word to the topic, i.e. D = D1 + D2; as in fig. 4(b), the shortest distance from the feature "new" to the topic "Water" is 3.
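The D2 computation can be sketched with a plain Dijkstra search over a concept graph. The edges below are invented for illustration (the real distances come from SWEET's own links), but they reproduce the two distances in the worked example:

```python
import heapq

def dijkstra(graph, src, dst):
    """Shortest hop count between two concepts in an undirected term graph."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v in graph.get(u, ()):
            if d + 1 < dist.get(v, float("inf")):
                dist[v] = d + 1
                heapq.heappush(heap, (d + 1, v))
    return float("inf")

# Hypothetical fragment of the SWEET concept network (edges invented).
sweet = {
    "Glacier":     ["Ice"],
    "Ice":         ["Glacier", "FrozenWater"],
    "FrozenWater": ["Ice", "Water"],
    "Water":       ["FrozenWater"],
}
d2 = dijkstra(sweet, "Ice", "Water")   # D2: substitute word -> topic = 2
d1 = 1                                 # D1: WordNet hops to substitute word
print(d1 + d2)                         # total distance D = D1 + D2 = 3
```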
5.2) defining feature weight based on the shortest distance between text features and topics, establishing a topic prediction model, and predicting multi-label topics for unmarked samples;
a) according to step 5.1), the semantic distance d_pi between a text feature f and each topic pi can be calculated; the shortest of these distances is taken as the maximum semantic relevance s_f of the text feature f to all topics P, wherein P is the set of all topics;
b) if all texts contain n text features in total, the maximum semantic relevance vector S = [s1, s2, …, sn] from all features to all topics in the training set can be calculated. The weight w(x) of a single data item x is defined as a 1 × n vector whose entries correspond to the weights of the n text features, the entry for feature f being defined as s_f if f appears in sample x, and 0 otherwise.
c) establishing the theme prediction model Y, wherein F is the adjustment vector of the features and α is a smoothing parameter. Based on the labeled sample data, a BP neural network is used to iteratively optimize the topic prediction model, computing the optimal solution of F and α under minimum loss to obtain the final model, from which the category set of an unlabeled sample t is predicted;
Y=w(x)*F+α
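The fitting of F and α can be sketched with plain gradient descent on a squared loss, a simple stand-in for the BP-network optimization described above. The data shapes and values are invented, and F is assumed to be an n × |P| matrix with α a per-topic term:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: weight vectors W (m x n) and binary topic
# labels T (m x p) for the labeled samples.
m, n, p = 50, 6, 3
W = rng.random((m, n))
F_true = rng.normal(size=(n, p))
T = (W @ F_true + 0.1 > np.median(W @ F_true)).astype(float)

F = np.zeros((n, p))
alpha = np.zeros(p)
loss0 = float((T ** 2).mean())        # loss of the untrained model (F=0, alpha=0)

lr = 0.05
for _ in range(2000):                 # gradient descent on mean squared error
    err = W @ F + alpha - T
    F -= lr * W.T @ err / m
    alpha -= lr * err.mean(axis=0)

loss = float(((W @ F + alpha - T) ** 2).mean())
print(loss < loss0)                   # training reduced the loss
```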
6) selecting the widely applied classical multi-label classification algorithm ML-KNN as the co-training base model H2:
The number k of neighbor samples is specified; N(x) denotes the set of the k neighbor samples of sample x in the training set L1. The number c[j] of samples in N(x) belonging to topic category l and the number c'[j] of samples in N(x) not belonging to topic category l are counted. In the formulas below, the indicator y_x(l) is 1 when sample x belongs to topic category l, and 0 when it does not;
The prior probability P(H_l^1) that an unlabeled sample t belongs to topic category l and the posterior probability P(E_l^j | H_l^b) are calculated, where s is a smoothing parameter, m is the number of training samples, H_l^1 denotes the event that sample t belongs to topic category l, H_l^0 denotes the event that sample t does not belong to topic category l, and E_l^j denotes the event that exactly j of the k neighbor samples of sample t belong to category l;
predicting the category set of unlabeled samples t according to the maximum posterior probability and Bayesian principle
7) 80% of all labeled samples are divided by repeated random sampling into two subsets L1 and L2, used respectively as the training sets of classifiers H1 and H2, and the class sets of all unlabeled samples are predicted with the two classifiers;
8) the unlabeled samples for which classifiers H1 and H2 give the same prediction result are assigned pseudo labels; the pseudo-labeled samples are added to the two training subsets L1 and L2 respectively to update the training sets, and step 7) is repeated until the classification results of the two classifiers no longer change appreciably, thereby obtaining the class sets of the unlabeled samples.
9) With 10% of all labeled samples as test samples, the trained classifier matches a topic class set for each test sample; for example, the SBAs class labels of the example text in fig. 5 contain Biodiversity, Climate, Disaster, Ecosystem, Water and Weather.
10) A number of topic layers N is specified; for each layer, the metadata texts of a single topic category are selected and fine-grained text topics are extracted based on the Latent Dirichlet Allocation (LDA) algorithm, until an N-layer topic directory is generated and the WMS metadata texts are matched with N layers of topics. In fig. 5, the secondary topics corresponding to Biodiversity are wildlife, species and diversity; the secondary topics corresponding to Climate are forest and meteorology; the secondary topic corresponding to Disaster is polarization; the secondary topics corresponding to Ecosystem are habitat, resource and containment; the secondary topic corresponding to Water is rain; and the secondary topic corresponding to Weather is meteorology.
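The fine-grained topic extraction can be sketched with a tiny collapsed-Gibbs LDA sampler over a toy corpus; the two mini-documents and their 4-word vocabulary are invented for illustration, and a production run would use a full LDA implementation:

```python
import numpy as np

def lda_gibbs(docs, n_topics, vocab_size, iters=200, alpha=0.1, beta=0.01, seed=0):
    """Tiny collapsed-Gibbs LDA; returns the topic-word count matrix,
    from which each topic's top words can be read off."""
    rng = np.random.default_rng(seed)
    ndk = np.zeros((len(docs), n_topics))     # document-topic counts
    nkw = np.zeros((n_topics, vocab_size))    # topic-word counts
    nk = np.zeros(n_topics)                   # topic totals
    z = [[int(rng.integers(n_topics)) for _ in d] for d in docs]
    for d, doc in enumerate(docs):            # initialize counts
        for i, w in enumerate(doc):
            t = z[d][i]
            ndk[d, t] += 1; nkw[t, w] += 1; nk[t] += 1
    for _ in range(iters):                    # resample each token's topic
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                t = z[d][i]
                ndk[d, t] -= 1; nkw[t, w] -= 1; nk[t] -= 1
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + beta * vocab_size)
                t = int(rng.choice(n_topics, p=p / p.sum()))
                z[d][i] = t
                ndk[d, t] += 1; nkw[t, w] += 1; nk[t] += 1
    return nkw

# Two hypothetical mini-documents over a 4-word vocabulary:
# 0=habitat 1=species 2=rain 3=flood
docs = [[0, 1, 0, 1, 0], [2, 3, 2, 3, 2]]
nkw = lda_gibbs(docs, n_topics=2, vocab_size=4)
print(nkw)   # each row: word counts assigned to one inferred fine-grained topic
```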
The method considers the field characteristics and text semantics of the geographic information service metadata, and only depends on a small number of marked data samples; as shown in fig. 6, compared with the conventional multi-label classification algorithm such as a classifier chain and a voting classifier, the classification result of the method of the present invention is better in overall performance.
As shown in fig. 7, the text feature selection process of the present invention can filter out features that do not contribute to the classification result compared to the chi-square test and WordNet-based feature selection method. The method can be popularized and applied to geographic information portals and data directory services, and assists in the retrieval and discovery of various geographic information resources.
It will be understood that modifications and variations can be made by persons skilled in the art in light of the above teachings and all such modifications and variations are intended to be included within the scope of the invention as defined in the appended claims.