CN113704363B - Weight determining method, device, equipment and storage medium - Google Patents
Weight determining method, device, equipment and storage medium
- Publication number
- CN113704363B (application CN202010443529.6A)
- Authority
- CN
- China
- Prior art keywords
- data
- target
- factor
- attribute
- scene
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/28—Databases characterised by their database models, e.g. relational or object models
- G06F16/284—Relational databases
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The present application discloses a weight determining method, apparatus, device, and storage medium, and relates to the technical field of artificial intelligence. The specific implementation scheme is as follows: obtaining factor association data of a structured representation between a target reference factor and each adjacent reference factor, wherein the target reference factor and the adjacent reference factors affect attribute data of a target object; determining demand tightness data between the target reference factor and a target scene, wherein the target scene has a use requirement for the target reference factor; and determining, according to the factor association data and the demand tightness data, the attribute dominance weight of the target reference factor among the reference factors of the target object in the target scene. The embodiments of the present application separate, in the target scene, the degrees of influence that mutually influencing reference factors exert on the attribute data of the target object, thereby quantifying the degree of influence of a single reference factor on that attribute data in the target scene.
Description
Technical Field
The present application relates to data processing technologies, in particular to artificial intelligence technology, and specifically to a weight determining method, apparatus, device, and storage medium.
Background
Objects are generally linked to one another to varying degrees, and the attribute data of an object, i.e., the attribute values of its attribute elements, may differ significantly due to the influence of at least two reference factors.
Since different reference factors may influence the attribute data of an object to different extents, and some correlation may exist between the reference factors themselves, the degree of influence of each individual reference factor on the attribute data cannot be readily isolated.
At present, no mature solution exists for using computer technology to quantify the extent of influence of each reference factor on the attribute data of an object.
Disclosure of Invention
According to a first aspect of the present application, an embodiment of the present application provides a weight determining method, including:
Obtaining factor association data of a structured representation between a target reference factor and each adjacent reference factor; wherein the target reference factor and the adjacent reference factors affect attribute data of a target object;
Determining demand tightness data between the target reference factor and a target scene; wherein the target scene has a use requirement for the target reference factor; and
Determining, according to the factor association data and the demand tightness data, the attribute dominance weight of the target reference factor among the reference factors of the target object in the target scene.
According to a second aspect of the present application, an embodiment of the present application further provides a weight determining apparatus, including:
a factor association data acquisition module, configured to acquire factor association data of a structured representation between a target reference factor and each adjacent reference factor; wherein the target reference factor and the adjacent reference factors affect attribute data of a target object;
a demand tightness data determining module, configured to determine demand tightness data between the target reference factor and a target scene; wherein the target scene has a use requirement for the target reference factor; and
an attribute dominance weight determining module, configured to determine, according to the factor association data and the demand tightness data, the attribute dominance weight of the target reference factor among the reference factors of the target object in the target scene.
According to a third aspect of the present application, an embodiment of the present application further provides an electronic device, including:
at least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform a weight determination method as provided by embodiments of the first aspect.
According to a fourth aspect of the present application, embodiments of the present application also provide a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform a weight determining method provided by the embodiments of the first aspect.
By adopting the above technical solution, the attribute dominance weight of the target reference factor among the reference factors of the target object in the target scene is determined.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the application or to delineate the scope of the application. Other features of the present application will become apparent from the description that follows.
Drawings
The drawings are included to provide a better understanding of the present application and are not to be construed as limiting the application. Wherein:
FIG. 1 is a flow chart of a weight determination method provided by an embodiment of the present application;
FIG. 2A is a flow chart of another weight determination method provided by an embodiment of the present application;
FIG. 2B is a block diagram of a factor managed network provided by an embodiment of the present application;
FIG. 3A is a flowchart of another weight determination method provided by an embodiment of the present application;
FIG. 3B is a block diagram of a scene perception network according to an embodiment of the present application;
FIG. 4A is a flow chart of a method of skill value data processing based on dominant weights provided by an embodiment of the present application;
FIG. 4B is a skill diagram provided by an embodiment of the present application;
FIG. 4C is a schematic diagram of a skill pricing model according to an embodiment of the application;
FIG. 4D is a schematic diagram of a skill pricing network according to an embodiment of the application;
FIG. 4E is a schematic diagram of a skill management network provided by an embodiment of the present application;
FIG. 5 is a block diagram of a weight determining apparatus according to an embodiment of the present application;
FIG. 6 is a block diagram of an electronic device for implementing a weight determining method of an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application will now be described with reference to the accompanying drawings, in which various details of the embodiments of the present application are included to facilitate understanding, and are to be considered merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The weight determining method provided by the embodiments of the present application is suitable for quantifying, in a specific application scene, the degree to which each reference factor that affects the attribute data of a target object influences that attribute data. The method is executed by a weight determining apparatus, which is implemented in software and/or hardware and configured in an electronic device.
Because the influence of different reference factors on the attribute data of the same target object may be different, and the influence of the same reference factors on the attribute data of different target objects may also be different, the weight determination method provided by the application tries to quantitatively determine the influence degree of each reference factor on the attribute data of the target object.
Illustratively, the target object may be a job post, the attribute data of the target object may be a post compensation, the reference factor may be a work skill, and for example, may be at least one of the required job skills for a certain job post, such as an algorithm skill, a programming language skill, and the like.
Take the case where the post salary of a professional post is affected by at least two work skills. For example, a Java engineer needs to be proficient in the Java language and may also need database processing capabilities such as MySQL, so the Java skill and the MySQL skill influence the post salary of the Java engineer to different degrees. As another example, for an engineer who has both Java skills and C++ skills, those two skills influence the post salary of a Java engineer post and the post salary of a C++ engineer post to different degrees.
Illustratively, the target object may be a tourist attraction, the attribute data of the target object may be the ticket price of the tourist attraction, and the reference factors may include the attraction location, road conditions around the attraction, traffic flow around the attraction, the attraction's historical revenue, and the like. These factors influence one another, yet different reference factors affect the ticket price of the tourist attraction to different degrees.
In view of this, the embodiments of the present application provide at least one weight determining method for quantifying the degree of influence of each reference factor that affects the attribute data of the target object.
Fig. 1 is a flowchart of a weight determining method according to an embodiment of the present application, where the method includes:
s101, obtaining factor correlation data of structural representation between target reference factors and adjacent reference factors; wherein the target reference factor and the proximity reference factor affect attribute data of a target object.
The target reference factor may be understood as a reference factor of the weight to be determined, and the adjacent reference factor may be understood as a reference factor having an association with the target reference factor and capable of affecting attribute data of the target object together with the target reference factor. The factor association data can be understood as data capable of representing the association relationship between the target reference factor and the adjacent reference factor.
When at least two reference factors affect the attribute data of the target object, certain associations may exist between them, and associated reference factors are adjacent to each other. Continuing with the example in which the post salary of a professional post is affected by at least two work skills: a Java engineer needs not only proficiency in the Java language but also database processing capabilities such as MySQL, so the skill values of the Java skill and the MySQL skill influence each other. Likewise, for an engineer with both Java skills and C++ skills, the two skills influence each other while remaining distinct; when they affect the post salary of a Java engineer post, the Java skill and the C++ skill are adjacent reference factors of each other.
In an optional implementation manner of the embodiment of the present application, factor association data between reference factors that are mutually adjacent reference factors and have an influence on attribute data of each target object may be stored for each target object in advance in a storage device local to or associated with the electronic device. Correspondingly, when needed, searching and acquiring factor-related data in a storage device local to the electronic device or associated with the electronic device through the target object, the target reference factor and the adjacent reference factor.
In another optional implementation manner of the embodiment of the present application, a plurality of reference factors influencing the attribute data of the target object may be obtained in advance, and co-occurrence frequencies of different reference factors may be counted. Correspondingly, constructing a matrix according to the co-occurrence frequency of the target reference factors and the co-occurrence frequency of each adjacent reference factor to obtain factor correlation data.
To facilitate lookup of co-occurrence frequencies, after the co-occurrence frequencies of different reference factors are counted, each reference factor may be used as a node, mutually adjacent nodes may be connected by edges, and the co-occurrence frequency may be used as attribute information of the connecting edge, thereby generating and storing a reference factor graph. The graph is subsequently kept up to date through create, delete, update, and query operations on its data. Accordingly, when factor association data is acquired, the co-occurrence frequencies between the target reference factor and its adjacent reference factors are determined by searching the reference factor graph, a matrix containing these co-occurrence frequencies is generated, and the generated matrix is used as the factor association data.
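The matrix construction described above can be sketched as follows. This is a minimal illustration rather than the patent's implementation; the skill names and co-occurrence counts are invented for the example.

```python
from collections import defaultdict

def build_cooccurrence_matrix(pairs, target, neighbors):
    """Build a co-occurrence matrix between a target reference factor
    and its adjacent reference factors from observed factor pairs."""
    # Count how often each unordered pair of factors appears together.
    freq = defaultdict(int)
    for a, b in pairs:
        freq[frozenset((a, b))] += 1
    factors = [target] + list(neighbors)
    # Row i / column j holds the co-occurrence frequency of factors i and j.
    return [[freq[frozenset((fi, fj))] if fi != fj else 0
             for fj in factors] for fi in factors]

# Illustrative job-posting data: each pair co-occurs in one posting.
pairs = [("Java", "MySQL"), ("Java", "MySQL"),
         ("Java", "C++"), ("MySQL", "C++")]
matrix = build_cooccurrence_matrix(pairs, "Java", ["MySQL", "C++"])
```

In practice the pairs would be read from the stored reference factor graph, where the co-occurrence frequency is kept as the attribute of each edge, rather than from a flat list.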
It will be appreciated that to facilitate subsequent calculations based on the factor-associated data, unstructured factor-associated data is typically converted into structured factor-associated data according to set conversion rules. Wherein, the set conversion rule can be determined by a skilled person according to the requirement or an empirical value.
S102, determining demand compactness data between the target reference factors and a target scene; wherein the target scene has a need for use of the target reference factor.
For example, if the target reference factor is a work skill, the target scenario may be a skill-requiring party such as an enterprise, a public institution, or a scientific laboratory.
For example, if the target reference factor is a factor related to a tourist attraction, the target scene may be a attraction opening period or the like.
For example, a multi-layer perceptron, a factorization machine, or another feature extraction network may be employed to perform feature extraction on the attribute data of the target reference factor and the attribute data of the target scene, so as to determine the demand tightness data between the target reference factor and the target scene.
To improve the accuracy and comprehensiveness of the extracted demand tightness data, in an optional implementation of the embodiments of the present application, a pre-trained scene perception network may be used to perform feature extraction on the attribute data of the target reference factor and the attribute data of the target scene, so as to obtain the demand tightness data. The scene perception network may be trained by feeding the attribute data of a large number of sample reference factors, together with the attribute data of the sample scenes associated with them, as training samples into a pre-constructed first neural network model, and optimizing the network parameters of the first neural network model according to the deviation between the model output and the sample attribute values that the sample reference factors contribute to the attribute data of the sample objects, thereby obtaining the trained scene perception network.
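As a rough sketch of the feature extraction just described, the following single-hidden-layer perceptron concatenates the two attribute vectors and emits a demand tightness feature vector. The layer sizes, weights, and tanh activation are all illustrative assumptions, not details fixed by this application.

```python
import math

def scene_perception(factor_attrs, scene_attrs, w_hidden, w_out):
    """Toy scene perception network: concatenate the attribute vectors of a
    target reference factor and a target scene, then map them through one
    hidden layer to a demand-tightness feature vector."""
    x = factor_attrs + scene_attrs  # feature concatenation
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)))
              for row in w_hidden]
    return [sum(wi * hi for wi, hi in zip(row, hidden)) for row in w_out]
```

A real implementation would learn `w_hidden` and `w_out` from sample scenes as described above; here they are plain nested lists so the sketch stays self-contained.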
The attribute data of the target reference factor and the attribute data of the target scene may be stored in advance in the electronic device or other storage devices associated with the electronic device, and data acquisition may be performed when needed.
The acquired attribute data of the target reference factor and the attribute data of the target scene may be data of a structured representation for facilitating subsequent calculations. It can be understood that when the acquired attribute data of the target reference factor and the attribute data of the target scene are unstructured data, the acquired data may be converted into structured data.
It should be noted that the attribute data of the target scene may include discrete scene data and/or continuous scene data.
The continuous scene data carries manually established associations between the target scene and the reference factors, so that when the demand tightness data is determined, features of the set dimensions are extracted according to these associations, i.e., feature patterns are mined from the continuous scene data. Taking post salary affected by at least two work skills as an example, the continuous scene data may include intrinsic attributes of the enterprise or institution, such as unit type, years in operation, city, and detailed address. It may further include historical statistical attributes, such as city-level historical monthly salary statistics, specifically at least one of the mean, variance, maximum, and minimum of historical monthly salary data. For example, if Internet company B, a listed company founded 13 years ago and located in City A, recruits algorithm engineers, and the historical monthly salary of the post has a mean of 10000 yuan, a variance of 500 yuan, a maximum of 12000 yuan, and a minimum of 8000 yuan, then the continuous scene data corresponding to the target scene may be {[10000, 500, 12000, 8000], [City A, Company B, listed company, 13 years, Internet]}.
The discrete scene data, by contrast, expresses only the target object's requirements on the reference factors, so that features of non-set dimensions can be mined from it. Still taking post salary affected by at least two work skills as an example, the discrete scene data may include skill demand attributes, specifically at least one of work experience, graduation year, and the like. For example, if Company B recruits algorithm engineers, requires the work site to be in City C, and requires fresh graduates, the discrete scene data corresponding to the target scene may be [City C, fresh graduate].
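Following the salary example above, the two kinds of scene data might be encoded as follows before being fed to a network. The vocabulary and the one-hot scheme are assumptions made for illustration; this application does not prescribe a specific encoding.

```python
def encode_scene(continuous_stats, categorical, vocab):
    """Encode a target scene: numeric salary statistics are kept as-is,
    while categorical fields are one-hot encoded against a fixed vocabulary."""
    one_hot = [1 if term in categorical else 0 for term in vocab]
    return list(continuous_stats) + one_hot

# Illustrative values following the example above (salary figures in yuan).
vocab = ["City A", "listed company", "Internet", "fresh graduate"]
vec = encode_scene([10000, 500, 12000, 8000],
                   ["City A", "listed company", "Internet"], vocab)
```

Discrete scene data such as [City C, fresh graduate] would be encoded against the same vocabulary, yielding a vector with only the matching positions set.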
S103, determining attribute dominant weights of the target reference factors in all reference factors of the target object under the target scene according to the factor association data and the demand compactness data.
For example, a graph network may be employed to determine, according to the factor association data and the demand tightness data, the attribute dominance weight of the target reference factor among the reference factors of the target object in the target scene. The graph network may be, for example, a graph convolutional neural network or another neural network.
To improve the efficiency and accuracy of determining the attribute dominance weight, in an optional implementation of the embodiments of the present application, a pre-trained factor-dominated network may be employed to determine, according to the factor association data and the demand tightness data, the attribute dominance weight of the target reference factor among the reference factors of the target object in the target scene. The factor-dominated network may be trained as follows: obtain sample factor association data between a large number of sample reference factors and their sample adjacent reference factors, together with demand tightness data between the sample reference factors and sample scenes; input these as training samples into a pre-constructed graph convolutional neural network; determine the weighted sum of the model output and the sample attribute values corresponding to the reference factors; and optimize the network parameters of the graph convolutional neural network according to the deviation between this sum and the sample salary, thereby obtaining the trained factor-dominated network.
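The supervision just described — comparing the weighted sum of per-factor attribute values with the observed sample salary — can be sketched as a squared-error objective. The plain gradient step below stands in for the full graph convolutional network training and is only illustrative.

```python
def weighted_sum_loss(weights, skill_values, observed_salary):
    """Supervision signal for the factor-dominated network: the weighted sum
    of per-factor attribute values should match the observed sample salary."""
    predicted = sum(w * v for w, v in zip(weights, skill_values))
    return (predicted - observed_salary) ** 2

def sgd_step(weights, skill_values, observed_salary, lr=1e-9):
    """One gradient-descent update of the weights for the squared error."""
    err = sum(w * v for w, v in zip(weights, skill_values)) - observed_salary
    return [w - lr * 2 * err * v for w, v in zip(weights, skill_values)]

# Perfect weights reproduce the observed salary exactly: zero loss.
loss = weighted_sum_loss([0.5, 0.5], [10000, 6000], 8000)
```

In the actual network the weights are not free parameters but outputs of the graph convolutional layers, so the gradient flows back through those layers instead of updating the weights directly.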
In the embodiments of the present application, factor association data of a structured representation between a target reference factor and each adjacent reference factor is obtained, wherein the target reference factor and the adjacent reference factors affect attribute data of a target object; demand tightness data between the target reference factor and a target scene is determined, wherein the target scene has a use requirement for the target reference factor; and the attribute dominance weight of the target reference factor among the reference factors of the target object in the target scene is determined according to the factor association data and the demand tightness data. In this technical solution, the attribute dominance weight is determined by introducing factor association data between the target reference factor and its adjacent reference factors, so that the degrees of influence that mutually influencing reference factors exert on the attribute data of the target object are separated in the target scene, and the degree of influence of a single reference factor on that attribute data in the target scene is quantified. Meanwhile, by introducing the demand tightness data between the target reference factor and the target scene, the relationship between the reference factor and the target scene is fully considered in determining the attribute dominance weight, which guarantees its accuracy.
On the basis of the above technical solutions, after the attribute dominance weight of the target reference factor among the reference factors of the target object is determined according to the factor association data and the demand tightness data, a target attribute value of the target object generated by the target reference factor in the target scene may be determined, and the target attribute value may be weighted with the attribute dominance weight to update it.
It can be appreciated that, by updating the target attribute value determined when the proximity reference factor is not considered through the attribute dominant weight of the target reference factor, the determination of finer granularity of the attribute data of the target object can be realized, and the accuracy of the determined target attribute value is improved.
Illustratively, determining the target attribute value of the attribute data of the target object generated by the target reference factor in the target scene may be: adopting the trained scene perception network to perform feature extraction on the attribute data of the target reference factors and the attribute data of the target scene to obtain the demand compactness data; and determining a target attribute value of the generated target object under the target scene by the target reference factor according to the demand compactness data.
Illustratively, according to the demand compactness data, determining the target attribute value of the target object generated by the target reference factor in the target scene may be: and a second neural network model trained in advance can be adopted, and the target attribute value of the generated target object under the target scene of the target reference factor is determined according to the demand compactness data. Wherein the network structures of the second neural network model and the first neural network model may be the same or different. The second neural network may be a fully connected neural network, for example.
When the attribute data of the target object is non-negative, optionally, non-negativity of the determined target attribute value may be ensured by introducing a non-negative activation function in the second neural network model.
Optionally, when the attribute data of the target object is a numerical interval including two boundary values, training the second neural network model may be performed for each boundary value, and correspondingly, determining the target attribute value by using each trained second neural network model.
It will be appreciated that the attribute-dominant weights of the target reference factors are used to weight the target attribute values for updating the target attribute values. When the target attribute value is a numerical interval including two boundary values, the attribute dominance weight of the target reference factor may be two weight values, which may be the same or different. The difference of the weight values can be ensured by the difference of network parameters of the adopted models in the process of determining the demand compactness data by using the models or determining the attribute dominant weight by using the models.
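A minimal sketch of the interval case discussed above: separate lower- and upper-bound dominance weights applied to a predicted salary interval. The numeric values are invented for illustration.

```python
def update_salary_interval(lower, upper, w_lower, w_upper):
    """Apply separate lower- and upper-bound attribute dominance weights to a
    salary interval predicted for a target reference factor (values in yuan)."""
    return (lower * w_lower, upper * w_upper)

# Illustrative: a skill predicted to span an 8000-12000 yuan interval,
# scaled by its two dominance weights in this target scene.
interval = update_salary_interval(8000, 12000, 0.25, 0.5)
```

The two weights come from the two sets of network parameters trained for the upper and lower boundary values, so they may coincide or differ depending on the learned parameters.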
Fig. 2A is a flowchart of another weight determining method according to an embodiment of the present application, where the embodiment of the present application is optimized and improved based on the technical solutions of the foregoing embodiments.
Further, the operation of determining the attribute dominance weight of the target reference factor among the reference factors of the target object in the target scene according to the factor association data and the demand tightness data is refined into: employing a trained factor-dominated network to determine, according to the factor association data and the demand tightness data, the attribute dominance weight of the target reference factor among the reference factors of the target object in the target scene, so as to perfect the mechanism for determining the attribute dominance weight of the target reference factor.
A weight determination method as shown in fig. 2A, comprising:
S201, obtaining factor correlation data of structural representation between target reference factors and adjacent reference factors; wherein the target reference factor and the proximity reference factor have an effect on attribute data of a target object.
S202, determining demand compactness data between the target reference factors and a target scene; wherein the target scene has a need for use of the target reference factor.
S203, determining attribute dominant weights of the target reference factors in all reference factors of the target object under the target scene according to the factor association data and the demand compactness data by adopting a trained factor dominant network.
Referring to the block diagram of the factor dominated network shown in fig. 2B, the factor dominated network includes a self-influence extraction layer, a mutual influence extraction layer, and a dominated weight activation layer; the self-influence extraction layer is used for extracting self-influence characteristics related to the target reference factors in the demand compactness data; the mutual influence extraction layer is used for extracting mutual influence characteristics among all reference factors in the demand compactness data according to the factor association data; the dominant weight activation layer is configured to determine an attribute dominant weight of the target reference factor according to the self-influence feature and the interaction feature.
The self-influence characteristics characterize the influence of the target reference factors and the target scenes on the importance of the target reference factors; the interaction force features characterize the impact of adjacent reference factors on the importance of target reference factors. The attribute dominant weight of the target reference factors is determined through the self-influencing force characteristics and the mutual-influencing force characteristics, and the attribute dominant weight of the target reference factors can be determined from three layers of the target reference factors, namely the target scene where the target reference factors are located and the adjacent reference factors associated with the target reference factors, so that the accuracy of the attribute dominant weight is improved.
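The three layers described above might compose as in the toy forward pass below. The mean over neighbor features and the softmax normalization are assumptions made for the sketch, not details fixed by this application.

```python
import math

def dominance_weights(self_features, neighbor_features):
    """Toy factor-dominated network: each factor gets a self-influence score
    plus the mean influence of its adjacent factors, and the scores are
    softmax-normalized so the dominance weights sum to one."""
    scores = []
    for own, neigh in zip(self_features, neighbor_features):
        mutual = sum(neigh) / len(neigh) if neigh else 0.0
        scores.append(own + mutual)  # self-influence + mutual influence
    exp = [math.exp(s) for s in scores]
    total = sum(exp)
    return [e / total for e in exp]  # dominance weight activation
```

Here `self_features` plays the role of the self-influence extraction layer's output and `neighbor_features` that of the mutual influence extraction layer; in the described network both would be learned from the demand tightness data and the factor association data.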
In an optional implementation of the embodiments of the present application, the mutual influence extraction layer may include a multi-layer perceptron, which extracts, based on the trained network parameters, the influence features related to the target reference factor from the demand tightness data and uses them as the mutual influence features. The multi-layer perceptron used to extract the mutual influence features may be the same as, or different from, the multi-layer perceptron used to extract the self-influence features.
In order to improve the comprehensiveness and accuracy of the extracted mutual-influence features, in another optional implementation manner of the embodiment of the present application, the mutual-influence extraction layer may further include a graph convolutional neural network, configured to extract, from the demand compactness data, local influence features between the target reference factor and each adjacent reference factor according to the global influence features and the factor association data. Either the local influence features alone, or the local influence features together with the global influence features, are then taken as the mutual-influence features.
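As a concrete illustration, a single graph-convolution step over the factor association graph can be sketched as follows; the symmetric normalisation, the ReLU activation and all layer sizes are common graph-convolution conventions assumed here for illustration, not details specified by this embodiment.

```python
import numpy as np

# One graph-convolution layer over the factor association graph: A is the
# adjacency matrix, H the per-factor input features, W a trainable weight
# matrix. Output rows are the local influence features of each factor,
# aggregated from its adjacent factors.
def gcn_layer(A, H, W):
    A_hat = A + np.eye(A.shape[0])                        # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # symmetric normalisation
    return np.maximum(A_norm @ H @ W, 0.0)                # ReLU activation
```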
Optionally, determining the attribute dominance weight of the target reference factor among the reference factors of the target object according to the self-influence features and the mutual-influence features may be: determining the attribute dominance weight of the target reference factor according to the local influence features and the self-influence features. For example, an attention mechanism may be employed to process the local influence features and the self-influence features to obtain the attribute dominance weight of the target reference factor.
In order to further improve the accuracy of the determined attribute dominance weight, optionally, determining the attribute dominance weight of the target reference factor according to the self-influence features and the mutual-influence features may be: performing feature fusion on the local influence features and the self-influence features, and then processing the fused features and the mean of the global influence features with an attention mechanism to obtain the attribute dominance weight of the target reference factor.
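A minimal sketch of this dominance-weight activation step might look as follows; the sigmoid mapping and all feature dimensions are illustrative assumptions, not details taken from this embodiment.

```python
import numpy as np

# Fuse local and self-influence features, append the mean of the global
# influence features, and map the attended score to a weight in (0, 1).
def attribute_dominance_weight(local_feat, self_feat, global_feats, w):
    fused = np.concatenate([local_feat, self_feat])        # feature fusion
    mean_global = global_feats.mean(axis=0)                # mean influence feature
    z = float(w @ np.concatenate([fused, mean_global]))    # attention-style score
    return 1.0 / (1.0 + np.exp(-z))                        # sigmoid -> weight in (0, 1)
```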
When the attribute dominance weight includes an upper limit attribute dominance weight and a lower limit attribute dominance weight, two different sets of network parameters are trained for the upper limit attribute dominance weight and the lower limit attribute dominance weight, respectively, in a model training stage of the factor dominance network. Correspondingly, in the model using stage of the factor-dominated network, network parameters corresponding to the upper limit attribute dominated weight and network parameters corresponding to the lower limit attribute dominated weight are adopted respectively to determine the upper limit attribute dominated weight and the lower limit attribute dominated weight.
In an alternative implementation of the embodiment of the present application, after the attribute dominance weight is determined, the method may further include: determining the target attribute value of the target object generated by the target reference factor in the target scene, and weighting the target attribute value with the attribute dominance weight to update the target attribute value.
When the attribute dominance weight includes an upper limit attribute dominance weight and a lower limit attribute dominance weight, in order to improve accuracy of the determined target attribute value, the target attribute value of the target object generated by the target reference factor in the target scene may also include an upper limit target attribute value and a lower limit target attribute value. Correspondingly, the upper limit target attribute value is weighted by adopting upper limit attribute dominant weight so as to update the upper limit target attribute value; the lower limit target attribute value is weighted with a lower limit attribute dominance weight to update the lower limit target attribute value.
In the embodiment of the present application, the determination of the attribute dominance weight is refined into using a trained factor dominance network, which determines, according to the factor association data and the demand compactness data, the attribute dominance weight of the target reference factor among the reference factors of the target object in the target scene, thereby perfecting the way the attribute dominance weight is determined. Meanwhile, the use of the factor dominance network improves the efficiency of determining the attribute dominance weight. In addition, the factor association data and the demand compactness data allow the association between the target reference factor and its adjacent reference factors to be fully considered, which guarantees the accuracy of the attribute dominance weight of the target reference factor in the target scene.
Fig. 3A is a flowchart of another weight determining method according to an embodiment of the present application, where the embodiment of the present application performs optimization and improvement based on the above technical solutions.
Further, the operation of determining the demand compactness data between the target reference factor and the target scene is refined into using a trained scene perception network to perform feature extraction on the attribute data of the target reference factor and the attribute data of the target scene to obtain the demand compactness data, thereby perfecting the mechanism for determining the demand compactness data.
A weight determination method as shown in fig. 3A, comprising:
S301, obtaining factor association data that structurally represents the relationship between the target reference factor and adjacent reference factors; wherein both the target reference factor and the adjacent reference factors affect the attribute data of the target object.
S302, adopting a trained scene perception network to perform feature extraction on the attribute data of the target reference factor and the attribute data of the target scene to obtain the demand compactness data; wherein the target scene is a scene in which the target reference factor is needed and used.
In order to clearly introduce the determining process of the demand compactness data, a detailed description will be given of a model training process of the scene-aware network with reference to a structural diagram of the scene-aware network shown in fig. 3B.
The scene perception network comprises a factor embedding layer, a scene embedding layer and a feature extraction layer.
In the model training stage, the factor embedding layer is used for performing dimension reduction processing on the attribute data of the sample reference factors to obtain sample factor embedding vectors; the scene embedding layer is used for performing dimension reduction processing on the attribute data of the sample application scene to obtain a sample scene embedding vector; and the feature extraction layer is used for extracting features from at least one of the attribute data of the sample scene, the sample factor embedding vector and the sample scene embedding vector, to obtain sample demand compactness data.
It should be noted that, in order to implement supervised learning on the scene-aware network, an attribute value determining network may be further added to the scene-aware network, for determining an attribute value prediction result according to the sample demand compactness data. Correspondingly, according to the attribute value prediction result and the sample attribute value, the network parameters of the scene sensing network and the attribute value determining network are adjusted, so that the accuracy of sample demand compactness data extracted by the scene sensing network is indirectly improved.
It can be understood that the determination of the embedding vector is performed by the factor embedding layer and the scene embedding layer, so that the data operand when the scene perception network performs feature extraction can be reduced. The feature extraction layer extracts features based on at least one of attribute data of the sample scene, sample factor embedded vectors and sample scene embedded vectors, so that the comprehensiveness and accuracy of the extracted features are improved, and further, the performance of the model is improved.
The influence of the reference factors on the attribute data of the target scene evolves over time. For example, there is a significant difference between the salary of a Java engineer in 2010 and in 2020. As another example, ticket prices for the same tourist attraction may differ significantly between holidays and non-holidays. Therefore, in order to avoid the influence of the passage of time on the accuracy of the determined attribute values, and further on the accuracy of the extracted sample demand compactness data, in the model training stage the factor embedding layer performs dimension reduction processing on the attribute data of each sample reference factor in different time periods to obtain the sample factor embedding vectors.
In order to reduce the complexity of the model, in an optional implementation manner of the embodiment of the present application, when determining the sample factor embedding vector for the attribute data of the sample reference factor in a certain period, the attribute data of the sample reference factor in the period is subjected to dimension reduction processing to obtain a sample factor low-rank embedding vector and a shared potential projection matrix respectively; and determining the sample factor embedded vector according to the product of the sample factor low-rank embedded vector and the shared potential projection matrix.
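The low-rank scheme described above can be sketched as follows; all sizes, and the use of NumPy, are illustrative assumptions.

```python
import numpy as np

# Each time period t keeps only a small low-rank matrix U_t, while a single
# latent projection matrix P is shared across all periods; the period's
# factor embeddings are the product U_t @ P.
n_factors, rank, emb_dim, n_periods = 100, 8, 32, 5
rng = np.random.default_rng(0)
P = rng.normal(size=(rank, emb_dim))                    # shared latent projection matrix
U = [rng.normal(size=(n_factors, rank)) for _ in range(n_periods)]
embeddings = [U_t @ P for U_t in U]                     # per-period sample factor embeddings
# Parameter count: n_periods*n_factors*rank + rank*emb_dim,
# versus n_periods*n_factors*emb_dim for fully independent per-period embeddings.
```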
In order to avoid the higher model complexity caused by determining sample factor embedding vectors separately for each time period, and further to avoid model overfitting, in an optional implementation manner of the embodiment of the application, when adjusting the network parameters of the scene perception network and the attribute value determination network, a loss function can be constructed according to the distance between the sample factor embedding vectors of adjacent time periods, and the network parameters of the scene perception network are optimized according to this loss function.
It can be appreciated that the model is constrained by the distance between the embedded vectors of the sample factors of adjacent time periods, so that the abrupt change of the demand compactness data extracted by the model along with time is limited, the complexity of the model is reduced, and the occurrence of the condition of overfitting of the model is avoided.
Illustratively, the distance between the sample factor embedding vectors of adjacent time periods is determined by a norm calculation. Optionally, the norm employed may be the F-norm (Frobenius norm).
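The resulting smoothness term can be sketched as follows; the squared Frobenius distance summed over adjacent periods is one plausible reading of the constraint, assumed here for illustration.

```python
import numpy as np

# Squared Frobenius distance between factor embeddings of adjacent time
# periods, summed over all adjacent pairs; adding this to the training loss
# discourages abrupt drift of the embeddings over time.
def temporal_smoothness(embeddings):
    return sum(np.linalg.norm(e2 - e1, 'fro') ** 2
               for e1, e2 in zip(embeddings, embeddings[1:]))
```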
Optionally, the attribute data of the sample scene of the sample reference factor includes continuous scene data and/or discrete scene data, so the scene embedding layer may include a discrete scene embedding layer, configured to perform dimension reduction processing on the discrete scene data to obtain a sample discrete scene embedding vector; the scene embedding layer can also comprise a continuous scene embedding layer, which is used for carrying out dimension reduction processing on continuous scene data to obtain sample continuous scene embedding vectors.
In order to improve the comprehensiveness and accuracy of the demand compactness data and further improve the model precision, in an optional implementation manner of the embodiment of the application, the feature extraction layer may simultaneously extract the demand compactness data of different levels between the sample reference factors and the sample scene.
Optionally, the first-order sample demand compactness data is determined according to the sample factor embedding vector and the attribute data of the sample scene, so as to determine the direct connection of the single feature of the input data to the attribute data of the sample object; determining second-order sample demand compactness data according to the sample factor embedding vector and the sample scene embedding vector so as to determine the association relation of the binary characteristic combination of the input data to the attribute data of the sample object; and determining high-order sample demand compactness data according to the attribute data of the sample target scene, the sample scene embedding vector and the sample factor embedding vector so as to determine high-order association of the multi-element characteristic combination of the input data to the attribute data of the sample object.
For example, the determination of the high-order sample demand compactness data may be implemented using a deep neural network. For example, the deep neural network may be an MLP (multi-layer perceptron).
Optionally, performing feature fusion on attribute data of the sample target scene, the sample scene embedding vector and the sample factor embedding vector by adopting MLP; and activating the fused features by adopting a plurality of sequentially connected activating layers to finally obtain high-order sample demand compactness data.
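The three feature levels can be sketched together as follows, in the spirit of DeepFM-style models; every weight matrix and dimension here is an illustrative assumption.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

# first order  - linear map of raw scene attributes and the factor embedding;
# second order - element-wise interaction of factor and scene embeddings;
# higher order - MLP fusion followed by stacked activation layers.
def compactness_features(scene_attr, factor_emb, scene_emb, W1, mlp_weights):
    first_order = W1 @ np.concatenate([scene_attr, factor_emb])
    second_order = factor_emb * scene_emb
    h = np.concatenate([scene_attr, factor_emb, scene_emb])
    for W in mlp_weights:             # sequentially connected activation layers
        h = relu(W @ h)
    return first_order, second_order, h
```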
Optionally, the attribute value determining network may be provided with at least one non-negative activation function, and the activating process is performed on the sample demand compactness data including at least one of the first-order sample demand compactness data, the second-order sample demand compactness data and the higher-order sample demand compactness data by the non-negative activation function, so as to obtain an attribute value prediction result. Correspondingly, network parameters in the attribute value determining network and the scene-aware network are optimized according to the difference between the attribute value predicting result and the sample attribute value.
It will be appreciated that the attribute value determination network, trained jointly with the scene perception network, may also be used to predict the attribute value of the sample object for each sample reference factor.
In an optional implementation manner of the embodiment of the present application, when the attribute value of the sample object is a numerical interval including two boundary values, in order to ensure that the attribute value determining network has an attribute value interval prediction function, two non-negative activation functions may be set in the attribute value determining network when predicting different boundary values, and activation processing may be performed on the sample demand compactness data respectively to obtain an upper limit attribute value prediction result and a lower limit attribute value prediction result.
Optionally, two non-negative activation functions are set in the attribute value determining network, and activation processing is performed on the sample demand compactness data to obtain an upper limit attribute value prediction result and a lower limit attribute value prediction result, where the upper limit attribute value prediction result and the lower limit attribute value prediction result may be: activating the sample demand compactness data through one non-negative activation function to obtain a lower limit attribute value prediction result; activating the sample demand compactness data through another non-negative activation function to obtain interval length; and taking the sum of the interval length and the lower limit attribute value prediction result as an upper limit attribute value prediction result.
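This two-activation construction can be sketched as follows; softplus is used as the non-negative activation purely for illustration, and the weight vectors are assumptions.

```python
import numpy as np

def softplus(z):                       # a common non-negative activation
    return np.log1p(np.exp(z))

# One non-negative activation predicts the lower bound; the other predicts the
# interval length; their sum gives the upper bound. By construction the
# prediction satisfies v_l >= 0 and v_l <= v_u.
def predict_interval(x, w_l, w_u):
    v_l = float(softplus(w_l @ x))     # lower-limit prediction
    length = float(softplus(w_u @ x))  # non-negative interval length
    return v_l, v_l + length           # (lower limit, upper limit)
```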
Correspondingly, the network parameters in the attribute value determination network and the scene perception network are optimized according to the difference between the upper limit attribute value prediction result and the upper limit of the sample attribute value, and the difference between the lower limit attribute value prediction result and the lower limit of the sample attribute value. The trained scene perception network can then be used to extract the demand compactness data; correspondingly, the trained attribute value determination network can be used to predict the attribute value of the target object generated by each reference factor in the target scene.
It should be noted that the training process and the usage process of the scene-aware network and the attribute value determination network may be implemented by using the same or different electronic devices. The use stages of the scene-aware network and the attribute value determination network will be described in detail later.
The scene perception network comprises an embedded layer and a feature extraction layer; the embedding layer is used for respectively carrying out dimension reduction processing on the attribute data of the target reference factors and the attribute data of the target scene to obtain factor embedding vectors and scene embedding vectors; and the feature extraction layer is used for extracting features of at least one of the attribute data of the target scene, the factor embedded vector and the scene embedded vector to obtain the demand compactness data.
Specifically, the embedding layer comprises a factor embedding layer, which is used for performing dimension reduction processing on attribute data of target reference factors to obtain factor embedding vectors; the embedding layer also comprises a scene embedding layer, which is used for performing dimension reduction processing on the attribute data of the target scene to obtain a scene embedding vector.
It can be appreciated that performing the determination of the factor embedding vector and the scene embedding vector in the embedding layer reduces the amount of data computation when the scene perception network performs feature extraction. The feature extraction layer extracts features based on at least one of the attribute data of the target scene, the factor embedding vector and the scene embedding vector, which improves the comprehensiveness and accuracy of the extracted features.
Illustratively, the attribute data of the target reference factor is subjected to dimension reduction processing by adopting the trained network parameters in the factor embedding layer to obtain the factor embedding vector. Optionally, the attribute data of the target reference factors can be subjected to dimension reduction processing to obtain a target factor low-rank embedded vector and a target sharing potential projection matrix respectively; and determining the factor embedding vector according to the product of the target factor low-rank embedding vector and the target shared potential projection matrix.
Illustratively, if the attribute data of the target scene includes target discrete scene data, performing dimension reduction processing on the target discrete scene data by adopting trained network parameters in the discrete scene embedding layer to obtain a discrete scene embedding vector; if the attribute data of the target scene comprise the target continuous scene data, performing dimension reduction processing on the target continuous scene data by adopting trained network parameters in the continuous scene embedding layer to obtain continuous scene embedding vectors.
Illustratively, at least one of the target discrete scene data, the target continuous scene data, the factor embedded vector, the discrete scene embedded vector and the continuous scene embedded vector is subjected to feature extraction by adopting trained network parameters in the feature extraction layer, so that the demand compactness data is obtained. The network parameters in the feature extraction layer are parameters after model training is completed.
In order to improve the comprehensiveness and accuracy of the demand compactness data, in an alternative implementation manner of the embodiment of the present application, the feature extraction layer may simultaneously perform extraction of the demand compactness data of different levels between the target reference factor and the target scene.
Illustratively, the demand compactness data includes at least one of first-order demand compactness data, second-order demand compactness data, and high-order demand compactness data.
Optionally, determining first-order demand compactness data according to the target discrete scene data and/or the target continuous scene data and the factor embedding vector, wherein the first-order demand compactness data is used for representing direct connection of single characteristics of the target scene and the target reference factor to attribute data of the target object; determining second-order demand compactness data according to the discrete scene embedding vector and/or the continuous scene embedding vector and the factor embedding vector, wherein the second-order demand compactness data is used for representing association of the binary characteristic combination of the target scene and the target reference factor to attribute data of the target object; and determining high-order demand compactness data according to the target continuous scene data and the continuous scene embedded vector and/or the target discrete scene data and the discrete scene embedded vector and the factor embedded vector, wherein the high-order demand compactness data is used for representing high-order association of the target reference factor and the multi-element characteristic combination of the target scene to the attribute data of the target object.
In the usage stage of the attribute value determination network, optionally, at least one non-negative activation function set in the network performs activation processing on the demand compactness data to obtain the target attribute value of the target object generated by the target reference factor in the target scene. The network parameters in the non-negative activation function are the parameters obtained after model training is completed.
In an optional implementation manner of the embodiment of the present application, if the attribute value determination network has a value interval prediction function, then correspondingly, two non-negative activation functions set in the network may respectively perform activation processing on the demand compactness data to obtain the upper limit target attribute value and the lower limit target attribute value of the target object generated by the target reference factor in the target scene.
Optionally, if two non-negative activation functions are set in the attribute value determination network, the demand compactness data can be activated through one of the non-negative activation functions to obtain the lower limit target attribute value of the target reference factor in the target scene; the demand compactness data is activated through the other non-negative activation function to obtain the interval length; and the sum of the interval length and the lower limit target attribute value is taken as the upper limit target attribute value of the target object generated by the target reference factor in the target scene.
S303, determining the attribute dominant weight of the target reference factors in all the reference factors of the target object under the target scene according to the factor association data and the demand compactness data.
In an optional implementation manner of the embodiment of the present application, the attribute dominant weight may be further used to weight the target attribute value to update the target attribute value, so as to implement a finer granularity determination of the target attribute value of the generated target object under the target scene by the single influencing factor.
Wherein the number of attribute dominant weights is at least 1. It will be appreciated that to ensure accuracy of the attribute-dominant weights, in general, the number of attribute-dominant weights corresponds one-to-one to the number of target attribute values.
Optionally, when the target attribute value is a single attribute value, the determined single attribute dominant weight is directly adopted to weight the target attribute value so as to update the target attribute value.
Optionally, when the target attribute value includes an upper limit target attribute value and a lower limit target attribute value, the attribute dominance weight includes an upper limit attribute dominance weight and a lower limit attribute dominance weight, respectively. Correspondingly, the upper limit target attribute value is weighted by adopting upper limit attribute dominant weight so as to update the upper limit target attribute value; the lower limit target attribute value is weighted with a lower limit attribute dominance weight to update the lower limit target attribute value.
In the embodiment of the present application, the operation of determining the demand compactness data is refined into using a trained scene perception network to perform feature extraction on the attribute data of the target reference factor and the attribute data of the target scene. This perfects the mechanism for determining the demand compactness data, improves the efficiency of extracting it through the use of a machine learning model, and at the same time improves the accuracy and comprehensiveness of the extracted demand compactness data.
Fig. 4A is a flowchart of a method for processing skill value data based on dominance weights according to an embodiment of the present application. Based on the above technical solutions, the embodiment provides a preferred implementation for determining the influence of each working skill on post salary in recruitment data issued by enterprises, so as to strip out the skill values of different working skills. This in turn enables prediction of the overall post salary for recruitment data covering multiple working skills, or quantification of the personal value of a user who possesses multiple working skills.
A method for processing skill value data as shown in fig. 4A, comprising: a data preprocessing stage 410, a model training stage 420, and a model use stage 430.
Wherein the data preprocessing stage 410 comprises:
S411, acquiring a plurality of recruitment data; wherein the recruitment data includes skill data, work scenario data, and supervisory data.
Wherein the skill data includes skill names, the co-occurrence frequency of skills within the same working scene, and skill proficiency. The proficiency may be, for example, proficient, skilled, or known.
Wherein the working scene data includes the skill demander's own attributes, historical statistical attributes, and skill requirement attributes. For example, the own attributes of an enterprise or public institution include at least one of its listing status, business type, year of establishment, and the like. The historical statistical attributes are the historical monthly salary statistics of the city where the skill demander is located, including at least one of the mean, variance, maximum and minimum. The skill requirement attributes include at least one of required work experience, graduation year, and the like.
Wherein the supervision data comprises an upper salary limit and a lower salary limit.
S412, generating a skill graph according to the skill data.
Referring to the skill diagram shown in FIG. 4B, skills are represented by nodes; connecting skills which appear simultaneously in the same recruitment data through edges; taking the skill level of each skill as the description data of the corresponding node of the skill; and taking the co-occurrence frequency of the skill connected through the edges in all recruitment data as the description data of the edges.
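The graph construction of S412 can be sketched as follows; the postings below are hypothetical examples, not data from this embodiment.

```python
from collections import Counter
from itertools import combinations

# Each skill becomes a node annotated with its proficiency levels; two skills
# appearing in the same recruitment data are connected by an edge whose weight
# is their co-occurrence frequency across all recruitment data.
postings = [
    {"Java": "proficient", "SQL": "skilled"},
    {"Java": "skilled", "SQL": "skilled", "Linux": "known"},
]
nodes = {}            # skill -> proficiency labels (node description data)
edges = Counter()     # (skill_a, skill_b) -> co-occurrence frequency (edge data)
for posting in postings:
    for skill, level in posting.items():
        nodes.setdefault(skill, []).append(level)
    for a, b in combinations(sorted(posting), 2):
        edges[(a, b)] += 1
```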
S413, encoding the skill requirement attributes and the city of the skill demander in the working scene data, and obtaining a structured discrete scene vector through one-hot encoding.
Illustratively, if the city code in a piece of recruitment data is XXX and 3 years of work experience are required, the resulting discrete scene vector may be [XXX, 1]. In this example, recruiting fresh graduates is denoted by "0", 1-3 years of work experience by "1", and 3-4 years by "2".
S414, generating two structured continuous scene vectors respectively according to the self attribute and the historical statistical attribute of the skill demander in the working scene data.
Illustratively, suppose a piece of recruitment data reads: listed Internet company B, established 13 years ago and located in city A, recruits algorithm engineers, and the historical monthly salary of the position has a mean of 10000 yuan, a variance of 500 yuan, a maximum of 12000 yuan and a minimum of 8000 yuan. Then the continuous scene vector corresponding to the skill demander's own attributes may be [city A, company B, listed, 13 years, Internet]; the continuous scene vector corresponding to the historical statistical attributes may be [10000, 500, 12000, 8000]. Of course, the vector elements in the continuous scene vectors can also be determined by other encoding methods, and the specific encoding method can be chosen by the skilled person according to requirements or empirical values.
S415, generating a structured supervision vector from the supervision data.
If the range of compensation provided in a particular recruitment data is 8000-10000, then the resulting supervision vector may be [8000,10000].
Referring to the schematic structure of the skill pricing model shown in FIG. 4C, the skill pricing model is a two-part neural network: a skill pricing network responsible for scene perception and skill pricing, and a compensation prediction network responsible for skill weight calculation and compensation determination.
Wherein the model training phase 420 comprises:
S421, generating a sample skill vector comprising the skill name of the sample skill and the skill level according to the skill diagram;
S422, the sample skill vector of the sample skill and sample scene data of the sample scene are used as training samples to be input into a scene perception network, and interaction data of the sample skill and the sample scene are obtained.
Specifically, the interaction data of the sample skill and the sample scene is determined according to the following formula:
x=f(C,s,l|Θ);
Wherein x is interaction data, C is sample scene data comprising discrete scene vectors and continuous scene vectors, s is skill name, l is skill proficiency, and Θ is a parameter to be trained. Wherein, f () is an information extraction function, and is implemented by using a scene-aware network.
S423, inputting the interaction data into the pricing network to predict the value interval of the sample skill, obtaining a sample predicted value interval.
Specifically, the sample predicted value interval is determined according to the following formulas:

v_l = g_l(x|Φ_l), v_u = v_l + g_u(x|Φ_u);

wherein [v_l, v_u] is the sample predicted value interval, Φ_l and Φ_u are parameters to be trained, and g_l() and g_u() are non-negative activation functions, implemented by the pricing network.
Wherein the predicted value lower limit v_l and upper limit v_u satisfy the following conditions: v_l ≥ 0, v_u ≥ 0 and v_l ≤ v_u.
S424, according to the skill diagram, the co-occurrence frequency of the sample skill and the sample adjacent skill is obtained, and an adjacent matrix is generated according to the obtained co-occurrence frequencies.
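Building the adjacency matrix of S424 from such co-occurrence frequencies can be sketched as follows; the skill names and counts are hypothetical.

```python
import numpy as np

# Symmetric adjacency matrix built from the co-occurrence frequencies stored
# on the skill graph's edges.
skills = ["Java", "SQL", "Linux"]
index = {s: i for i, s in enumerate(skills)}
cooccurrence = {("Java", "SQL"): 2, ("Java", "Linux"): 1, ("Linux", "SQL"): 1}
A = np.zeros((len(skills), len(skills)))
for (a, b), freq in cooccurrence.items():
    A[index[a], index[b]] = A[index[b], index[a]] = freq   # undirected graph
```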
S425, inputting the adjacency matrix and the interaction data into a skill dominance network to obtain a skill weight section of the sample skill.
Specifically, the skill weight interval of the sample skill is determined according to the following formula:

[d_l, d_u] = h(A, X|Ψ_l, Ψ_u);

wherein [d_l, d_u] is the skill weight interval, A is the adjacency matrix, X is part or all of the interaction data, or the hidden layer vectors produced when the interaction data are generated, Ψ_l and Ψ_u are two sets of parameters to be trained, and h() is a weight calculation function implemented by the skill dominance network.
S426, for each sample skill in the same recruitment data, weighting the upper and lower limits of the sample prediction value interval by the upper and lower limits of the skill weight interval, to obtain a final prediction value interval for each sample skill.
S427, for each recruitment data, summing the final prediction value intervals of the sample skills in the data to obtain a predicted compensation interval.
Specifically, the predicted compensation interval is determined according to the following formula:
ŷ = Σ_{i=1..N} [d_l^i · v_l^i, d_u^i · v_u^i];
Wherein ŷ is the predicted compensation interval corresponding to one recruitment data; [d_l^i, d_u^i] is the skill weight interval of the i-th sample skill in the recruitment data; [v_l^i, v_u^i] is the sample prediction value interval of the i-th sample skill in the recruitment data; and N is the number of skills included in the recruitment data.
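The weighting of S426 and the summation of S427 reduce to two weighted sums, one per interval bound; the numbers below are hypothetical:

```python
import numpy as np

# Hypothetical per-skill quantities for one recruitment posting (N = 3 skills):
d_l = np.array([0.5, 0.3, 0.2])    # lower-limit skill weights
d_u = np.array([0.6, 0.4, 0.3])    # upper-limit skill weights
v_l = np.array([10.0, 20.0, 5.0])  # lower value limits (in salary units)
v_u = np.array([12.0, 25.0, 8.0])  # upper value limits

# S426: weight each skill's value interval by its weight interval;
# S427: sum the weighted intervals to get the predicted compensation interval.
y_l = float(np.sum(d_l * v_l))
y_u = float(np.sum(d_u * v_u))
```

Because each weight and value is non-negative and d_l ≤ d_u, v_l ≤ v_u per skill, the resulting bounds satisfy y_l ≤ y_u.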
And S428, optimizing the network parameters of each network in the skill pricing combination model according to the interval difference between the predicted compensation interval and the actual compensation interval corresponding to the supervision vector of the recruitment data.
Optimizing network parameters of each network in the skill pricing combination model according to the following loss function:
Wherein L_s is the loss function value of the current model training, J is the set of recruitment data used in the current model training, ŷ^j is the predicted compensation interval corresponding to the j-th recruitment data, y^j is the actual compensation interval corresponding to the j-th recruitment data, and λ_l and λ_u are hyperparameters.
When optimizing the network parameters of each network in the skill pricing combination model, an optimization method may be used to find parameters that achieve a local minimum of the loss function, including but not limited to stochastic gradient descent and other optimization methods.
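The exact form of the loss is not reproduced above; the sketch below assumes a squared-error comparison of the two interval bounds, weighted by λ_l and λ_u, purely as an illustration:

```python
import numpy as np

def interval_loss(pred_l, pred_u, true_l, true_u, lam_l=1.0, lam_u=1.0):
    """Hedged sketch of the interval-difference loss L_s.

    The text states only that the loss measures the difference between the
    predicted and actual compensation intervals, with hyperparameters
    lambda_l and lambda_u; the squared-error form here is an assumption.
    """
    return float(np.mean(lam_l * (pred_l - true_l) ** 2
                         + lam_u * (pred_u - true_u) ** 2))

# Two hypothetical recruitment-data samples: predicted vs. actual bounds.
loss = interval_loss(np.array([10.0, 8.0]), np.array([15.0, 12.0]),
                     np.array([11.0, 8.0]), np.array([14.0, 13.0]))
```

Such a loss is differentiable in the network outputs, so stochastic gradient descent applies directly.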
The model training process is described in detail below in connection with the structural schematic of the skill pricing network shown in fig. 4D and the structural schematic of the skill dominance network shown in fig. 4E.
Referring to fig. 4D, the skill pricing network includes a scene perception network and a pricing network. Wherein the scene perception network includes: a skill embedding layer, a discrete scene embedding layer, a continuous scene embedding layer and a feature extraction layer.
Specifically, the skill embedding layer extracts embedding features of the skills in different time periods. To reduce model complexity, the skill embedding is assumed to be composed of a low-rank embedding and a potential projection matrix, written as:
E_vs^(t) = E_l^(t) · W_vs;
Wherein E_vs^(t) ∈ R^{N_s×d_e} is the skill embedding matrix for the t-th time period, t = 1, …, T, T is the number of time periods, and N_s is the number of sample skills; E_l^(t) ∈ R^{N_s×d_l} is a low-rank embedding, W_vs ∈ R^{d_l×d_e} is the potential projection matrix shared over all time periods, d_e is the dimension of the embedding, and d_l is the number of hidden variables. The length of a time period is set by the skilled person according to requirements or empirical values.
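The low-rank factorization can be sketched directly; the dimensions below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
T, n_skills, d_l, d_e = 4, 100, 8, 32  # periods, skills, hidden vars, embed dim

# One low-rank factor per time period, one projection matrix shared by all.
low_rank = [rng.normal(size=(n_skills, d_l)) for _ in range(T)]  # to be trained
W_vs = rng.normal(size=(d_l, d_e))                               # shared projection

# The skill embedding of period t is the product of its low-rank factor and
# W_vs, so each extra period adds N_s * d_l parameters instead of N_s * d_e.
embeddings = [L @ W_vs for L in low_rank]
```

With d_l < d_e this is exactly the complexity reduction the factorization is meant to provide.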
This dynamic modeling over different time periods improves the ability of the skill pricing model to model and evaluate the value of skills in different time periods, but brings higher model complexity and easily causes model overfitting.
In order to avoid model overfitting, a regularization term in the time dimension is introduced into the loss function during training of the skill pricing combination model, specifically:
L_reg = Σ_{t=1..T−1} ‖E_l^(t+1) − E_l^(t)‖_F²;
Wherein ‖·‖_F denotes the F-norm (Frobenius norm); this regularization term constrains the skill representations so that they do not change dramatically over time.
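A minimal numeric sketch of such a time-dimension regularizer, assuming the common sum of squared F-norms of consecutive-period differences (an assumption, since the exact expression is not reproduced in the text):

```python
import numpy as np

def temporal_regularizer(period_embeddings):
    # Sum of squared Frobenius norms of consecutive-period differences:
    # penalizes skill representations that change sharply between periods.
    return float(sum(np.linalg.norm(b - a, ord="fro") ** 2
                     for a, b in zip(period_embeddings, period_embeddings[1:])))

E0 = np.zeros((3, 2))
E1 = np.ones((3, 2))  # every entry drifts by 1 between the two periods
reg = temporal_regularizer([E0, E1])
```

Identical embeddings across periods incur zero penalty; the larger the drift, the larger the term added to the loss.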
Specifically, the discrete scene embedding layer allocates an embedding for each discrete scene vector according to the following formula, to obtain the discrete scene embedding:
e_i = W_i^T · c_i;
Wherein e_i is the discrete scene embedding, c_i is a discrete scene vector, m_i is the vector dimension of the discrete scene vector, and W_i is a parameter to be trained; i ∈ D, where D is the set of discrete scene vectors.
Specifically, the continuous scene embedding layer allocates an embedding for each continuous scene vector according to the following formula, to obtain the continuous scene embedding:
e_i = W_i^T · c_i + b_i;
Wherein e_i is the continuous scene embedding, c_i is a continuous scene vector, d_i is the vector dimension of the continuous scene vector, and W_i and b_i are parameters to be trained; i ∈ C, where C is the set of continuous scene vectors.
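The two embedding layers can be sketched side by side; the parameterizations (a one-hot lookup for discrete vectors, a linear map with bias for continuous ones) are assumptions consistent with the descriptions above:

```python
import numpy as np

rng = np.random.default_rng(2)
d_e = 4  # embedding dimension

# Discrete scene vector: a one-hot encoding (here m_i = 3 categories); its
# embedding is a table lookup, implemented as a projection by a trainable matrix.
c_discrete = np.array([0.0, 1.0, 0.0])
W_disc = rng.normal(size=(3, d_e))  # parameters to be trained
e_disc = c_discrete @ W_disc        # selects row 1 of W_disc

# Continuous scene vector: embedded with a trainable linear map plus bias.
c_cont = np.array([0.7, 0.1])
W_cont = rng.normal(size=(2, d_e))
b_cont = rng.normal(size=d_e)
e_cont = c_cont @ W_cont + b_cont
```

Both paths end in the same d_e-dimensional space, so the feature extraction layer can combine them freely.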
In order to extract as much information as possible, the skill pricing model extracts deep and shallow interactions between sample scenes and sample skills through the feature extraction layer: discrete scene vectors and continuous scene vectors are first processed in different ways, and interaction data is then extracted at each level through linear projection, multiplication operations and an MLP (multi-layer perceptron). The interaction data comprises first-order interaction data, second-order interaction data and high-order interaction data.
Specifically, the first-order interaction data is determined according to the following formula:
h_1 = W_1 · [e_s, c] + b_1;
Wherein h_1 is the first-order interaction data; e_s ∈ R^{d_e} is the skill embedding vector of the sample skill, i.e. the row vector corresponding to the sample skill in the skill embedding matrix; c is the scene data; W_1 and b_1 are parameters to be trained; and d_o1 is the output dimension.
Specifically, the second-order interaction data is determined according to the following formula:
h_2 = e_s ⊙ e_c;
Wherein ⊙ denotes element-wise multiplication, e_c is a scene embedding vector, and h_2 is the second-order interaction data.
Specifically, the high-order interaction data is determined according to the following formulas:
x^(k) = σ(W^(k) · x^(k−1) + b^(k)), k = 1, …, K;
Wherein K is the depth of the MLP, W^(k) and b^(k) are parameters to be trained, x^(k) is the output of the k-th layer, x^(0) is the concatenation of the skill embedding vector and the scene embedding vectors, and σ() is the activation function; the final output x^(K) is used as the high-order interaction data h_3.
The interaction data x is obtained by combining the first-order interaction data h 1, the second-order interaction data h 2 and the higher-order interaction data h 3.
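The three levels of interaction data can be sketched together; the dimensions, the tanh activation and the concatenation-based combination are illustrative assumptions:

```python
import numpy as np

def mlp(x0, weights, biases):
    # K-layer perceptron: x_k = sigma(W_k x_{k-1} + b_k), with tanh as sigma.
    x = x0
    for W, b in zip(weights, biases):
        x = np.tanh(W @ x + b)
    return x

rng = np.random.default_rng(3)
d_e, d_o1 = 4, 3
e_s = rng.normal(size=d_e)    # skill embedding vector
e_c = rng.normal(size=d_e)    # a scene embedding vector
c_raw = rng.normal(size=5)    # raw scene attributes

# First order: linear projection of the skill embedding and raw scene data.
W1 = rng.normal(size=(d_o1, d_e + 5))
h1 = W1 @ np.concatenate([e_s, c_raw])

# Second order: element-wise product of skill and scene embeddings.
h2 = e_s * e_c

# Higher order: an MLP over the concatenated embeddings; its final output is h3.
Ws = [rng.normal(size=(6, 2 * d_e)), rng.normal(size=(6, 6))]
bs = [rng.normal(size=6), rng.normal(size=6)]
h3 = mlp(np.concatenate([e_s, e_c]), Ws, bs)

# The interaction data x combines all three levels.
x = np.concatenate([h1, h2, h3])
```

The combined vector x is what the pricing network consumes to produce [v_l, v_u].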
x is input into the pricing network, and the sample prediction value interval is determined according to the formulas for v_l and v_u described above, which are not repeated here.
Referring to fig. 4E, the skill dominance network comprises: a self-influence extraction layer, a mutual influence extraction layer, a local influence extraction layer and a dominance weight activation layer.
Specifically, the self-influence extraction layer uses a multi-layer perceptron to extract, for each skill, a feature representation related to the skill's own importance, obtaining the self-influence feature X_imp ∈ R^{N×d_p}. Wherein N is the number of skills involved and d_p is a hyperparameter representing the extracted feature dimension. The multi-layer perceptron comprises a plurality of linear processing layers and nonlinear processing layers connected in sequence.
Specifically, the mutual influence extraction layer uses another multi-layer perceptron to extract, for each skill, a feature representation related to its importance to other skills, obtaining the mutual influence feature X_inf ∈ R^{N×d_i}. Wherein d_i is a hyperparameter representing the extracted feature dimension. The multi-layer perceptron comprises a plurality of linear processing layers and nonlinear processing layers connected in sequence. The structures of the multi-layer perceptrons determining X_imp and X_inf may be the same or different.
Specifically, the local influence extraction layer extracts the influence of adjacent skills on each skill through a graph convolutional neural network to obtain local influence features, realized with the following formula:
U=GCN(Xinf,A);
Wherein U is a local influence characteristic, A is an adjacency matrix of sample skills, and GCN () is a function corresponding to a graph convolution neural network.
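GCN() is not spelled out above; the sketch below assumes a standard single graph-convolution layer with self-loops and symmetric degree normalization as one possible realization:

```python
import numpy as np

def gcn_layer(X, A, W):
    """One graph-convolution layer: H = ReLU(D^-1/2 (A + I) D^-1/2 X W).

    Adding the identity gives each skill a self-loop; the symmetric degree
    normalization keeps neighbor influence on a comparable scale.
    """
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = (A_hat * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ X @ W, 0.0)  # ReLU activation

rng = np.random.default_rng(4)
A = np.array([[0.0, 2.0], [2.0, 0.0]])  # co-occurrence adjacency for 2 skills
X_inf = rng.normal(size=(2, 5))         # mutual influence features
W = rng.normal(size=(5, 3))             # parameters to be trained
U = gcn_layer(X_inf, A, W)              # local influence features
```

Stacking several such layers lets influence propagate over multi-hop skill neighborhoods.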
Specifically, the dominance weight activation layer introduces an attention mechanism and determines the skill weight interval of each sample skill according to the following formula:
Wherein Q is the mean feature of the mutual influence feature X_inf, used as its global characterization; [d_l, d_u] is the predicted skill weight interval of the sample skill; W_q^l, W_k^l and W_v^l are the model parameters to be trained corresponding to the lower-limit skill weight d_l; W_q^u, W_k^u and W_v^u are the model parameters to be trained corresponding to the upper-limit skill weight d_u; the two groups of parameters are trained separately with the same skill dominance network structure; and softmax() is the activation function.
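A minimal sketch of the attention step; the fusion of X_imp and U into a single feature Z, the score shape and the absence of scaling are all assumptions, since only the roles of Q and the W_q/W_k/W_v parameters are stated above:

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def dominance_weights(Z, Q, W_q, W_k, W_v):
    """Attention over fused per-skill features Z with a global query Q.

    Q (the mean of the mutual influence features) queries the keys Z W_k,
    producing one normalized score per skill, which scales the per-skill
    values Z W_v into one dominance weight per skill.
    """
    scores = softmax((Q @ W_q) @ (Z @ W_k).T)  # one score per skill
    return scores * (Z @ W_v).squeeze()        # weighted per-skill values

rng = np.random.default_rng(5)
N, d = 3, 4
Z = rng.normal(size=(N, d))  # fusion of X_imp and the local influence U
Q = Z.mean(axis=0)           # global characterization (here built from Z)
d_low = dominance_weights(Z, Q, rng.normal(size=(d, d)),
                          rng.normal(size=(d, d)), rng.normal(size=(d, 1)))
```

The upper-limit weights d_u would be computed the same way with the second group of parameters W_q^u, W_k^u and W_v^u.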
Wherein the model use phase 430 comprises:
S431, acquiring one skill in recruitment data as a target skill;
S432, inputting the skill data of the target skill and the work scene data of the target scene associated with the target skill into the skill pricing network to obtain the value interval and the interaction data of the target skill;
S433, determining the adjacency matrix of the target skill by taking the other skills in the recruitment data, other than the target skill, as adjacent skills;
S434, inputting the adjacency matrix of the target skill and the interaction data into the compensation prediction network to obtain the compensation interval of the target skill;
S435, adding the compensation intervals corresponding to the skills in the recruitment data to obtain the comprehensive compensation corresponding to the recruitment data.
Fig. 5 is a block diagram of a weight determining apparatus according to an embodiment of the present application, where the weight determining apparatus 500 includes: factor association data acquisition module 501, demand affinity data determination module 502, and attribute dominance weight determination module 503. Wherein,
A factor-associated data acquisition module 501 for acquiring factor-associated data of a structured representation between a target reference factor and each neighboring reference factor; wherein the target reference factor and the proximity reference factor have an effect on attribute data of a target object;
a demand tightness data determining module 502, configured to determine demand tightness data between the target reference factor and a target scene; wherein the target scene has a use requirement for the target reference factor;
an attribute dominance weight determining module 503, configured to determine, according to the factor association data and the demand compactness data, an attribute dominance weight of the target reference factor among the reference factors of the target object in the target scene.
According to the embodiment of the application, the factor association data of the structured representation between the target reference factor and each adjacent reference factor is acquired through the factor association data acquisition module; wherein the target reference factor and the adjacent reference factors have an effect on the attribute data of the target object; the demand compactness data between the target reference factor and the target scene is determined through the demand compactness data determining module; wherein the target scene has a use requirement for the target reference factor; and the attribute dominant weight of the target reference factor among the reference factors of the target object under the target scene is determined according to the factor association data and the demand compactness data by the attribute dominant weight determining module. According to the technical scheme, the attribute dominant weight is determined by introducing the factor association data between the target reference factor and the adjacent reference factors, so that the influence degree of the mutually influencing reference factors on the attribute data of the target object is stripped under the target scene, and the influence degree of a single reference factor on the attribute data of the target object under the target scene is quantized. Meanwhile, by introducing the demand compactness data between the target reference factor and the target scene, the relation between the reference factors and the target scene is fully considered in the determination of the attribute dominant weight, guaranteeing the accuracy of the attribute dominant weight.
Further, the attribute dominance weight determination module 503 includes:
And the attribute dominant weight determining unit is used for determining the attribute dominant weight of the target reference factors in each reference factor of the target object under the target scene according to the factor association data and the demand compactness data by adopting a trained factor dominant network.
Further, the factor dominance network comprises a self-influence extraction layer, a mutual influence extraction layer and an attribute dominance weight activation layer;
the self-influence extraction layer is used for extracting self-influence characteristics related to the target reference factors in the demand compactness data;
the mutual influence extraction layer is used for extracting mutual influence characteristics among all reference factors in the demand compactness data according to the factor association data;
the dominant weight activation layer is configured to determine an attribute dominant weight of the target reference factor according to the self-influence feature and the interaction feature.
Further, the mutual influence extraction layer includes:
the foreign influence feature extraction unit, configured to extract foreign influence features related to the target reference factor in the demand compactness data;
the local influence feature extraction unit, configured to extract local influence features between the target reference factor and each adjacent reference factor in the demand compactness data according to the foreign influence features and the factor association data;
and the mutual influence feature extraction unit, configured to take the local influence features and/or the foreign influence features as the mutual influence features.
Further, the dominant weight activation layer includes:
The feature fusion unit is used for carrying out feature fusion on the local influence features and the self influence features;
and the attribute dominant weight determining unit is used for processing the fused characteristic and the mean characteristic of the influence characteristic by adopting an attention mechanism to obtain the attribute dominant weight of the target reference factor.
Further, the demand tightness data determining module 502 includes:
The demand compactness data determining unit is used for performing feature extraction on the attribute data of the target reference factors and the attribute data of the target scenes by adopting a trained scene perception network to obtain the demand compactness data.
Further, the scene perception network comprises an embedding layer and a feature extraction layer;
the embedding layer is used for respectively carrying out dimension reduction processing on the attribute data of the target reference factors and the attribute data of the target scene to obtain factor embedding vectors and scene embedding vectors;
And the feature extraction layer is used for extracting features of at least one of the attribute data of the target scene, the factor embedded vector and the scene embedded vector to obtain the demand compactness data.
Further, the demand affinity data includes at least one of first order demand affinity data, second order demand affinity data, and higher order demand affinity data;
Correspondingly, the feature extraction layer comprises:
A first-order demand compactness data determining unit, configured to determine first-order demand compactness data according to the factor embedding vector and attribute data of the target scene;
The second-order demand compactness data determining unit is used for determining the second-order demand compactness data according to the factor embedded vector and the scene embedded vector;
and the high-order demand compactness data determining unit is used for determining the high-order demand compactness data according to the attribute data of the target scene, the scene embedding vector and the factor embedding vector.
Further, the device also comprises a model training module for training the model of the scene perception network;
Wherein, the model training module includes:
and the sample factor embedding vector determining unit, configured to respectively perform dimension reduction processing on the attribute data of each sample reference factor in each time period, to obtain the sample factor embedding vectors.
Further, the model training module includes:
A loss function construction unit for constructing a loss function according to the distance between the sample factor embedded vectors of adjacent time periods;
and the model training unit is used for optimizing the network parameters of the scene perception network according to the loss function.
Further, the factor correlation data is a matrix constructed from co-occurrence frequencies of the target reference factor and the neighboring reference factors.
Further, the apparatus further comprises:
The target attribute value determining module is used for determining a target attribute value of the target object generated by the target reference factors under the target scene after determining the attribute dominant weight of the target reference factors in the reference factors of the target object according to the factor association data and the demand compactness data;
And the target attribute value updating module is used for weighting the target attribute value by adopting the attribute dominant weight so as to update the target attribute value.
Further, the reference factor is a working skill, the attribute data of the target object is post compensation, and the target scene is an enterprise.
The weight determining device can execute any weight determining method provided by the embodiment of the application, and has the functional module and beneficial effects of executing the weight determining method.
According to an embodiment of the present application, the present application also provides an electronic device and a readable storage medium.
As shown in fig. 6, a block diagram of an electronic device implementing a weight determining method according to an embodiment of the present application is shown. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the applications described and/or claimed herein.
As shown in fig. 6, the electronic device includes: one or more processors 601, memory 602, and interfaces for connecting the components, including high-speed interfaces and low-speed interfaces. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executed within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output device, such as a display device coupled to the interface. In other embodiments, multiple processors and/or multiple buses may be used, if desired, along with multiple memories. Also, multiple electronic devices may be connected, each providing a portion of the necessary operations (e.g., as a server array, a set of blade servers, or a multiprocessor system). One processor 601 is illustrated in fig. 6.
The memory 602 is a non-transitory computer readable storage medium provided by the present application. Wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the weight determination method provided by the present application. The non-transitory computer readable storage medium of the present application stores computer instructions for causing a computer to execute the weight determination method provided by the present application.
The memory 602 is used as a non-transitory computer readable storage medium, and may be used to store a non-transitory software program, a non-transitory computer executable program, and modules, such as program instructions/modules corresponding to the weight determining method in the embodiment of the present application (e.g., the factor association data obtaining module 501, the demand compactness data determining module 502, and the attribute domination weight determining module 503 shown in fig. 5). The processor 601 executes various functional applications of the server and data processing, i.e., implements the weight determination method in the above-described method embodiments, by running non-transitory software programs, instructions, and modules stored in the memory 602.
The memory 602 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, at least one application program required for a function; the storage data area may store data created by the use of the electronic device implementing the weight determination method, and the like. In addition, the memory 602 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 602 may optionally include memory remotely located with respect to the processor 601, which may be connected to the electronic device implementing the weight determination method via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device implementing the weight determining method may further include: an input device 603 and an output device 604. The processor 601, memory 602, input device 603 and output device 604 may be connected by a bus or otherwise, for example in fig. 6.
The input device 603 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device implementing the weight determination method, such as a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointer stick, one or more mouse buttons, a track ball, a joystick, etc. input devices. The output means 604 may include a display device, auxiliary lighting means (e.g., LEDs), tactile feedback means (e.g., vibration motors), and the like. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device may be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor, and that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
These computing programs (also referred to as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical scheme of the embodiment of the application, the factor association data of the structural representation between the target reference factors and each adjacent reference factor is obtained; wherein the target reference factor and the proximity reference factor have an effect on attribute data of the target object; determining demand tightness data between a target reference factor and a target scene; wherein the target scene has a use requirement for a target reference factor; and determining that the attribute of the target reference factor in each reference factor of the target object in the target scene governs the weight according to the factor association data and the demand compactness data. According to the technical scheme, the attribute dominant weight is determined by introducing factor association data between the target reference factors and the adjacent reference factors, so that the influence degree of the reference factors which are mutually influenced on the attribute data of the target object is stripped under the target scene, and the influence degree of the single reference factor on the attribute data of the target object under the target scene is quantized. Meanwhile, by introducing the demand compactness data between the target reference factors and the target scene, the relation between the reference factors and the target scene is fully considered in the determination process of the attribute dominant weight, and the accuracy of the attribute dominant weight is guaranteed.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution disclosed in the present application can be achieved, and are not limited herein.
The above embodiments do not limit the scope of the present application. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present application should be included in the scope of the present application.
Claims (15)
1. A weight determination method, comprising:
obtaining factor correlation data of a structured representation between a target reference factor and each adjacent reference factor; wherein the target reference factor and the proximity reference factor have an effect on attribute data of a target object; the factor correlation data is a matrix constructed according to co-occurrence frequencies of the target reference factor and the adjacent reference factor;
Determining demand tightness data between the target reference factors and a target scene; wherein the target scene has a use requirement for the target reference factor;
Determining attribute dominant weights of the target reference factors in all reference factors of the target object under the target scene according to the factor association data and the demand compactness data by adopting a trained factor dominant network;
Wherein the factor dominance network comprises a self-influence extraction layer, a mutual influence extraction layer and a dominance weight activation layer; the self-influence extraction layer is used for extracting self-influence characteristics related to the target reference factors in the demand compactness data; the mutual influence extraction layer is used for extracting mutual influence characteristics among all reference factors in the demand compactness data according to the factor association data; the dominant weight activation layer is configured to determine an attribute dominant weight of the target reference factor according to the self-influence feature and the interaction feature.
2. The method of claim 1, wherein extracting the interaction force characteristics between the reference factors in the demand compactness data according to the factor association data comprises:
Extracting a foreign influence characteristic related to the target reference factor in the demand compactness data;
Extracting local influence characteristics between the target reference factor and each adjacent reference factor in the demand compactness data according to the foreign influence characteristic and the factor association data;
And taking the local influence characteristic and/or the foreign influence characteristic as the mutual influence characteristic.
3. The method of claim 2, wherein determining the attribute dominance weight of the target reference factor according to the self-influence feature and the mutual-influence features comprises:
performing feature fusion on the local-influence features and the self-influence feature;
and processing the fused feature together with the mean of the external-influence features by using an attention mechanism to obtain the attribute dominance weight of the target reference factor.
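Claims 2-3 describe fusing local-influence features with the self-influence feature and scoring the result, together with the mean external-influence feature, via attention. The sketch below is one plausible instantiation under stated assumptions (element-wise-sum fusion, additive attention with a single projection vector `w_att`); all names are hypothetical and the trained parameters are placeholders:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def dominance_weights(self_feat, local_feats, external_feats, w_att):
    """Attention-style activation over fused influence features.

    self_feat:      (d,)   self-influence feature of the target factor
    local_feats:    (n, d) local influence between the target and each adjacent factor
    external_feats: (n, d) external-influence features of all factors
    w_att:          (d,)   hypothetical attention projection vector

    Fuses each local feature with the self feature (element-wise sum here),
    adds the mean external-influence feature, scores with w_att, and
    normalizes the scores with a softmax to obtain attention weights.
    """
    fused = local_feats + self_feat          # feature fusion (claim 3, step 1)
    mean_ext = external_feats.mean(axis=0)   # mean of external-influence features
    scores = (fused + mean_ext) @ w_att      # additive attention score
    return softmax(scores)                   # normalized attribute dominance weights
```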
4. The method of claim 1, wherein determining the demand compactness data between the target reference factor and the target scene comprises:
performing feature extraction on the attribute data of the target reference factor and the attribute data of the target scene by using a trained scene-aware network to obtain the demand compactness data.
5. The method of claim 4, wherein the scene-aware network comprises an embedding layer and a feature extraction layer;
the embedding layer is configured to perform dimension reduction on the attribute data of the target reference factor and the attribute data of the target scene respectively to obtain a factor embedding vector and a scene embedding vector;
and the feature extraction layer is configured to perform feature extraction on at least one of the attribute data of the target scene, the factor embedding vector and the scene embedding vector to obtain the demand compactness data.
6. The method of claim 5, wherein the demand compactness data comprises at least one of first-order demand compactness data, second-order demand compactness data and high-order demand compactness data;
correspondingly, performing feature extraction on at least one of the attribute data of the target scene, the factor embedding vector and the scene embedding vector to obtain the demand compactness data comprises at least one of the following steps:
determining the first-order demand compactness data according to the factor embedding vector and the attribute data of the target scene;
determining the second-order demand compactness data according to the factor embedding vector and the scene embedding vector;
and determining the high-order demand compactness data according to the attribute data of the target scene, the scene embedding vector and the factor embedding vector.
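One way to read claim 6 is in the spirit of factorization-machine-style models: a first-order (linear) term, a second-order (embedding-interaction) term, and a high-order (MLP) term. The sketch below is an assumption-laden illustration, not the patented network; every matrix in `p` is a placeholder standing in for trained parameters:

```python
import numpy as np

def demand_compactness(factor_attr, scene_attr, p):
    """Sketch of the feature-extraction layer's three outputs.

    p holds hypothetical placeholder parameters:
      p['E_f']: (d, n_f) factor embedding matrix (dimension reduction)
      p['E_s']: (d, n_s) scene embedding matrix
      p['W1'] : (d, n_s) projection of raw scene attributes for the 1st-order term
      p['Wh'] : (h, n_s + 2*d) hidden layer of the high-order MLP
      p['wo'] : (h,) output weights of the high-order MLP
    """
    f_emb = p['E_f'] @ factor_attr   # factor embedding vector
    s_emb = p['E_s'] @ scene_attr    # scene embedding vector
    # 1st order: factor embedding against raw scene attribute data
    first = float(f_emb @ (p['W1'] @ scene_attr))
    # 2nd order: interaction of factor and scene embeddings
    second = float(f_emb @ s_emb)
    # high order: small MLP over scene attributes and both embeddings
    x = np.concatenate([scene_attr, s_emb, f_emb])
    high = float(p['wo'] @ np.tanh(p['Wh'] @ x))
    return first, second, high
```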
7. The method of claim 5, wherein, in a model training stage of the scene-aware network, performing dimension reduction on attribute data of sample reference factors to obtain sample factor embedding vectors comprises:
performing dimension reduction on the attribute data of each sample reference factor within each time period respectively to obtain the sample factor embedding vectors.
8. The method of claim 7, further comprising:
constructing a loss function according to the distance between the sample factor embedding vectors of adjacent time periods;
and optimizing network parameters of the scene-aware network according to the loss function.
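The loss term of claim 8 penalizes distance between a factor's embeddings in adjacent time periods, encouraging embeddings to evolve smoothly over time. A minimal sketch, assuming a mean squared L2 distance (the patent does not specify the distance metric):

```python
import numpy as np

def temporal_smoothness_loss(embeddings_by_period):
    """Loss built from distances between sample-factor embedding vectors of
    adjacent time periods: the same factor's embedding should drift slowly
    from one period to the next.

    embeddings_by_period: list of (n_factors, d) arrays, one per time period,
    with rows aligned so row i is the same factor in every period.
    Returns the mean squared L2 distance between consecutive periods.
    """
    total, count = 0.0, 0
    for prev, curr in zip(embeddings_by_period, embeddings_by_period[1:]):
        diff = curr - prev
        total += float((diff ** 2).sum())
        count += diff.shape[0]
    return total / max(count, 1)
```

In training, this term would be added to the main objective before optimizing the scene-aware network's parameters.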
9. The method of any of claims 1-8, wherein, after determining the attribute dominance weight of the target reference factor among the reference factors of the target object under the target scene according to the factor association data and the demand compactness data, the method further comprises:
determining a target attribute value of the target object generated by the target reference factor under the target scene;
and weighting the target attribute value with the attribute dominance weight to update the target attribute value.
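The weighted update of claim 9 reduces to scaling each factor's contributed attribute value by its dominance weight; in the example of claim 10 this would be weighting the salary each skill generates by that skill's importance. A trivial sketch (function name hypothetical):

```python
def update_target_attribute(attr_values, dominance_weights):
    """Weight each reference factor's contributed attribute value by its
    attribute dominance weight and aggregate, yielding the updated target
    attribute value of the target object.
    """
    return sum(v * w for v, w in zip(attr_values, dominance_weights))
```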
10. The method of any of claims 1-8, wherein the reference factor is a work skill, the attribute data of the target object is a position salary, and the target scene is an enterprise.
11. A weight determining apparatus, comprising:
a factor association data acquisition module, configured to obtain factor association data that structurally represents the relationship between a target reference factor and each adjacent reference factor; wherein the target reference factor and the adjacent reference factors affect attribute data of a target object; the factor association data is a matrix constructed from the co-occurrence frequencies of the target reference factor and the adjacent reference factors;
a demand compactness data determining module, configured to determine demand compactness data between the target reference factor and a target scene; wherein the target scene has a use requirement for the target reference factor;
an attribute dominance weight determining module, comprising:
an attribute dominance weight determining unit, configured to determine, by using a trained factor dominance network, an attribute dominance weight of the target reference factor among all reference factors of the target object under the target scene according to the factor association data and the demand compactness data;
wherein the factor dominance network comprises a self-influence extraction layer, a mutual-influence extraction layer and a dominance weight activation layer; the self-influence extraction layer is configured to extract a self-influence feature related to the target reference factor from the demand compactness data; the mutual-influence extraction layer is configured to extract mutual-influence features among the reference factors from the demand compactness data according to the factor association data; and the dominance weight activation layer is configured to determine the attribute dominance weight of the target reference factor according to the self-influence feature and the mutual-influence features.
12. The apparatus of claim 11, wherein the mutual-influence extraction layer comprises:
an external-influence feature extraction unit, configured to extract an external-influence feature related to the target reference factor from the demand compactness data;
a local-influence feature extraction unit, configured to extract local-influence features between the target reference factor and each adjacent reference factor from the demand compactness data according to the external-influence feature and the factor association data;
and a mutual-influence feature extraction unit, configured to take the local-influence features and/or the external-influence feature as the mutual-influence features.
13. The apparatus of claim 12, wherein the dominance weight activation layer comprises:
a feature fusion unit, configured to perform feature fusion on the local-influence features and the self-influence feature;
and an attribute dominance weight determining unit, configured to process the fused feature together with the mean of the external-influence features by using an attention mechanism to obtain the attribute dominance weight of the target reference factor.
14. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the weight determining method of any one of claims 1-10.
15. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the weight determining method of any one of claims 1-10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010443529.6A CN113704363B (en) | 2020-05-22 | 2020-05-22 | Weight determining method, device, equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113704363A CN113704363A (en) | 2021-11-26 |
CN113704363B true CN113704363B (en) | 2024-04-30 |
Family
ID=78646468
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010443529.6A Active CN113704363B (en) | 2020-05-22 | 2020-05-22 | Weight determining method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113704363B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2003271652A (en) * | 2002-03-13 | 2003-09-26 | Hokkaido Technology Licence Office Co Ltd | Similar image retrieval device, similar image retrieval method and similar image retrieval program |
WO2010077223A1 (en) * | 2008-12-30 | 2010-07-08 | Tele Atlas North America, Inc. | A method and system for transmitting and/or receiving at least one location reference, enhanced by at least one focusing factor |
CN105760381A (en) * | 2014-12-16 | 2016-07-13 | 深圳市腾讯计算机系统有限公司 | Search result processing method and device |
CN108280124A (en) * | 2017-12-11 | 2018-07-13 | 北京三快在线科技有限公司 | Product classification method and device, ranking list generation method and device, electronic equipment |
CN108879692A (en) * | 2018-06-26 | 2018-11-23 | 湘潭大学 | A kind of regional complex energy resource system energy flow distribution prediction technique and system |
CN110659799A (en) * | 2019-08-14 | 2020-01-07 | 深圳壹账通智能科技有限公司 | Attribute information processing method and device based on relational network, computer equipment and storage medium |
CN111091196A (en) * | 2019-11-15 | 2020-05-01 | 佳都新太科技股份有限公司 | Passenger flow data determination method and device, computer equipment and storage medium |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9461876B2 (en) * | 2012-08-29 | 2016-10-04 | Loci | System and method for fuzzy concept mapping, voting ontology crowd sourcing, and technology prediction |
US10636044B2 (en) * | 2016-03-15 | 2020-04-28 | Accenture Global Solutions Limited | Projecting resource demand using a computing device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7262539B2 (en) | Conversation recommendation method, device and equipment | |
CN111523597B (en) | Target recognition model training method, device, equipment and storage medium | |
EP3979178A1 (en) | Method, apparatus, and electronic device for collecting loan and storage medium | |
CN110766142A (en) | Model generation method and device | |
CN113553864A (en) | Training method, device, electronic device and storage medium for translation model | |
CN111311321B (en) | User consumption behavior prediction model training method, device, equipment and storage medium | |
US11719550B2 (en) | Method and apparatus for building route time consumption estimation model, and method and apparatus for estimating route time consumption | |
CN112541302A (en) | Air quality prediction model training method, air quality prediction method and device | |
CN110427524B (en) | Method, device, electronic device and storage medium for knowledge graph completion | |
CN111611808B (en) | Method and device for generating natural language model | |
US11830207B2 (en) | Method, apparatus, electronic device and readable storage medium for point cloud data processing | |
CN113723278B (en) | Training method and device for form information extraction model | |
US20240104403A1 (en) | Method for training click rate prediction model | |
US20210239480A1 (en) | Method and apparatus for building route time consumption estimation model, and method and apparatus for estimating route time consumption | |
CN112561031A (en) | Model searching method and device based on artificial intelligence and electronic equipment | |
CN113722368B (en) | Data processing method, device, equipment and storage medium | |
CN116030235A (en) | Target detection model training method, target detection device and electronic equipment | |
CN111753759A (en) | Model generation method, device, electronic device and storage medium | |
CN111753758A (en) | Model generation method, device, electronic device and storage medium | |
CN113032443B (en) | Method, device, device and computer-readable storage medium for processing data | |
CN113704363B (en) | Weight determining method, device, equipment and storage medium | |
CN114331380A (en) | Prediction method, system, device and storage medium for occupational mobility relationship | |
CN112734454B (en) | User information determining method and device, electronic equipment and storage medium | |
CN111311000B (en) | User consumption behavior prediction model training method, device, equipment and storage medium | |
CN112580723A (en) | Multi-model fusion method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||