Detailed Description
The present specification is described in further detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein merely illustrate the relevant invention and do not restrict it. The described embodiments are only a subset of the possible embodiments, not all of them. All other embodiments obtained by a person skilled in the art based on the embodiments in the present specification, without any inventive step, fall within the scope of the present application.
It should be noted that, for convenience of description, only the portions related to the invention are shown in the drawings. The embodiments in the present description, and the features of those embodiments, may be combined with each other as long as they do not conflict.
As mentioned above, enterprises generally need data to improve their service quality and efficiency, but their own data is often insufficient or incomplete. Their suppliers, in turn, generally hold a large amount of data but, for reasons of data security and user privacy, have no way to output that data online, efficiently, and in compliance with regulations.
Based on this, some embodiments of the present specification provide a method for processing a computing task that helps a supplier output its own data legitimately and in compliance, so as to provide services for an enterprise. In particular, FIG. 1 illustrates an exemplary system architecture diagram suitable for use with these embodiments.
As shown in FIG. 1, the system architecture may include a customer's terminal device, an intermediate service platform, and a server for each of at least one supplier of the customer. A single customer may be an organization, which may include, but is not limited to, an enterprise. The intermediate service platform may be, for example, a sub-platform of a cloud communication platform, and may communicate with both the terminal device and the servers.
For any of the at least one supplier, at least one scoring model may be deployed in the supplier's server, and the at least one scoring model may be associated with different tags. It should be noted that a supplier typically has its own user data, such as user attribute information and/or historical behavior data of multiple users, and may be able to score users based on that data: for example, to predict a loan intention score, a credit card application intention score, a repayment intention score, an automobile purchase intention score, and/or a financial product purchase intention score. The tags respectively associated with the at least one scoring model may be obtained by packaging these capabilities of the supplier.
It should be noted that the tags obtained by packaging the loan intention scoring capability, the credit card application intention scoring capability, the repayment intention scoring capability, the automobile purchase intention scoring capability, and the financial product purchase intention scoring capability may be referred to, in sequence, as the loan intention tag, the credit card application intention tag, the repayment intention tag, the automobile purchase intention tag, and the financial product purchase intention tag.
In practice, the tags respectively associated with the at least one scoring model can be presented to customers through the intermediate service platform. For a tag it needs, a customer may first apply to the intermediate service platform for permission to use the tag; after determining that the customer is allowed to use the tag, the intermediate service platform may generate an authorization code associated with the tag for the customer, thereby granting the customer permission to use the tag. This process may be referred to as the tag application process.
After the customer obtains permission to use the tag, if a business need related to the tag arises, such as a risk assessment need or an information pushing need, the customer may submit a computing task related to the tag to the intermediate service platform through its terminal device. According to the computing task, the intermediate service platform obtains score data related to the tag from a server of a target supplier of the customer and generates a computing result corresponding to the computing task from that score data. A scoring model related to the tag is deployed in the server, and the score data is predicted by that scoring model. It should be noted that the processing of the computing task may be referred to as the tag usage process.
Taking as an example a customer that is bank A, the at least one supplier including suppliers B1, …, BN (N may be a natural number greater than 1), with suppliers B1 and BN each deploying a scoring model associated with the repayment intention tag, the following further describes the tag usage process after bank A obtains permission to use the repayment intention tag.
Assume that user C applies to bank A for a loan. Bank A typically needs to assess user C's repayment ability before approving the loan application, to avoid risks such as capital loss. In practice, user C's repayment ability may be measured based on user C's willingness to repay. Accordingly, bank A may submit, through its terminal device, a computing task related to the repayment intention tag to the intermediate service platform. The computing task may include a user identifier of user C, such as the mobile phone number shown in FIG. 1, and an authorization code associated with the repayment intention tag.
The intermediate service platform may then determine the repayment intention tag from the authorization code in the computing task, determine that suppliers B1 and BN have deployed scoring models associated with the repayment intention tag, and accordingly send scoring query requests to the servers of suppliers B1 and BN. Each scoring query request may include user C's mobile phone number and the repayment intention tag.
The servers of suppliers B1 and BN may then each return a query result in response to the scoring query request. A query result may include, for example, user C's mobile phone number and user C's score under the repayment intention tag, where the score is predicted by the scoring model. The score may have been predicted in advance by the scoring model, or may be predicted on the spot by the scoring model based on user C's user data; this is not specifically limited herein.
The intermediate service platform can then generate a computing result corresponding to the computing task from the query results. The computing result may include user C's mobile phone number and a target score. In one example, the target score may be the score in the query result returned by the server of either supplier B1 or BN. In another example, the target score may be the highest score among the query results returned by suppliers B1 and BN. In yet another example, the target score may be the average of the scores in the query results returned by suppliers B1 and BN.
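As a hypothetical sketch, the three ways of deriving the target score just described (take a single supplier's score, take the highest score, or take the average) might look like the following. The function and strategy names are illustrative assumptions, not defined by the specification.

```python
def combine_scores(scores, strategy="max"):
    """Combine per-supplier scores for one user into a single target score.

    scores: list of numeric scores, one per supplier query result.
    strategy: "any"  - take the first supplier's score,
              "max"  - take the highest score,
              "mean" - average the scores.
    """
    if not scores:
        raise ValueError("no supplier returned a score")
    if strategy == "any":
        return scores[0]
    if strategy == "max":
        return max(scores)
    if strategy == "mean":
        return sum(scores) / len(scores)
    raise ValueError(f"unknown strategy: {strategy}")
```

Which strategy the platform uses could, for instance, be a platform-wide configuration or a per-tag setting; the specification leaves this open.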
Subsequently, the intermediate service platform may actively return the computing result to bank A, or return it in response to bank A's request to obtain it. For example, when the computing task is a real-time computing task, the computing result may be returned to bank A immediately after it is generated. When the computing task is an offline computing task, the computing result may be returned to bank A, after it is generated, in response to bank A's acquisition request.
Through the tag usage process described above, i.e., the computing task processing process, the intermediate service platform can connect customer needs with the compliant use of supplier data, helping suppliers output their own data legitimately and in compliance so as to provide services for enterprises.
The following describes specific implementation steps of the above method with reference to specific examples.
Referring to FIG. 2, a diagram of one embodiment of a computing task processing method is shown. The method comprises the following steps:
Step 202: the intermediate service platform receives a computing task submitted by a customer, wherein the computing task includes an authorization code and at least one user identifier, the authorization code indicates a target tag that the customer is authorized to use, and the target tag is associated with a target supplier of the customer;
Step 204: the intermediate service platform sends a scoring query request to a server of the target supplier, wherein the scoring query request includes the at least one user identifier and the target tag;
Step 206: the intermediate service platform receives a query result returned by the server, wherein the query result includes a user identifier among the at least one user identifier and the score, under the target tag, of the user indicated by that identifier, the score being predicted by a scoring model associated with the target tag in the server;
Step 208: the intermediate service platform generates a computing result corresponding to the computing task according to the query result.
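In outline, steps 202 through 208 can be sketched as the following minimal, hypothetical flow. The in-memory registries and the stubbed supplier query stand in for the platform's real storage and network calls; all names and values are illustrative assumptions.

```python
# Illustrative registries: authorization code -> tag, and tag -> suppliers.
TAG_BY_AUTH_CODE = {"AUTH-001": "repayment_intention"}
SUPPLIERS_BY_TAG = {"repayment_intention": ["B1", "BN"]}

def query_supplier(supplier, user_ids, tag):
    # Stub for steps 204/206: a real platform would call the supplier's
    # server; the fixed score of 75 is purely illustrative.
    return [{"user_id": uid, "tag": tag, "score": 75} for uid in user_ids]

def process_task(auth_code, user_ids):
    tag = TAG_BY_AUTH_CODE[auth_code]            # step 202: resolve the tag
    results = []
    for supplier in SUPPLIERS_BY_TAG[tag]:       # step 204: query suppliers
        results.extend(query_supplier(supplier, user_ids, tag))  # step 206
    return results                               # step 208: computing result
```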
The above steps are further explained below.
In step 202, the intermediate service platform may receive a computing task submitted by a customer; the computing task may include an authorization code and at least one user identifier. Each of the at least one user identifier indicates a user, typically a user of the customer.
A single customer may be an organization, which may include, but is not limited to, an enterprise. An individual user identifier may be, for example, a numeric identifier. Further, an individual user identifier may be, for example, a mobile phone number, an International Mobile Equipment Identity (IMEI), an Identifier for Advertising (IDFA), an Open Anonymous Device Identifier (OAID), or the like.
The authorization code may indicate a target tag that the customer has been authorized to use, and the target tag may be associated with a target supplier of the customer. In practice, the target supplier has the capability to score users with respect to the target tag, and the target tag can be obtained by packaging that capability.
In some embodiments, the authorization code may be generated upon approval of the customer's application to use the target tag. Specifically, the application process for use of the target tag may be as shown in FIG. 3. The application process includes the following steps:
Step 302: the intermediate service platform receives a use application submitted by a customer for the target tag;
Step 304: the intermediate service platform sends a first approval request to a first approval end according to the use application;
Step 306: the intermediate service platform receives an approval result returned by the first approval end;
Step 308: in response to the approval result indicating that the customer is allowed to use the target tag, the intermediate service platform generates an authorization code associated with the target tag for the customer;
Step 310: the intermediate service platform returns the approval result to the customer.
In practice, the intermediate service platform may provide customers with an interface that presents labels, which may be referred to as the label square interface. Alongside each label, the label square interface may present a corresponding use application entry. The customer can access the label square interface through its terminal device, for example through a browser in the terminal device. On finding the required target label, the customer can create and submit a use application through the use application entry corresponding to the target label.
Referring to FIG. 4, a schematic diagram of the label application submission process is shown. The label square interface indicated by reference numeral 401 in FIG. 4 presents a loan intention label and a credit card application intention label, each with a corresponding use application entry, i.e., an application opening button. Assuming the loan intention label is the target label the customer needs, the customer may click the application opening button corresponding to the loan intention label to enter the application opening window indicated by reference numeral 402. There, the customer may enter the specified information, such as the application title, the application reason, and the application material, and check the box agreeing to open and use the loan intention label service. The customer may then click the submit button to submit a use application for the loan intention label to the intermediate service platform.
In some embodiments, after scoring capabilities are packaged into labels as described above, a label may further be attributed to one or more industries, and within each industry to a specific scenario. For example, the loan intention label may be attributed to the finance and real estate industries, and in both industries it may belong to a loan scenario. As another example, the credit card application intention label may be attributed to the financial industry, where it may belong to a credit card scenario.
Based on this, the label square interface can also present the industry and scene to which the label belongs, and the label square interface pointed to by reference numeral 401 in fig. 4 can be further optimized into the interface shown in fig. 5. It should be noted that the same labels of multiple different industries/scenarios may correspond to different scoring models or different data sources, and are not specifically limited herein.
It should be understood that the label square interface and the application opening window shown in fig. 4, and the label square interface shown in fig. 5 are only exemplary contents, and the label square interface and the application opening window may be designed according to actual requirements, and are not specifically limited herein.
Based on the above-described tag application submission process, in step 302, the intermediate service platform may receive a use application created and submitted by a customer through a use application entry corresponding to the target tag.
Next, in step 304, the intermediate service platform may send a first approval request to the first approval end according to the use application. The first approval request may be a request to approve whether the customer is allowed to use the target tag. The first approval request may include, for example, a customer identifier of the customer, the target tag, and at least part of the content the customer entered in the use application, such as the application reason and the application material.
It should be noted that, the first approval end may be, for example, a client or a server used by an operation department of the intermediate service platform for using application approval, and is not limited specifically herein. In one example, the first approval end may automatically approve whether the customer is allowed to use the target label according to the first approval request and the deployed application approval algorithm. In another example, on the first approval side, a manual intervention may be employed to approve whether the customer is allowed to use the target label. It should be understood that various approval methods may be adopted upon approval of the application for use, and are not specifically limited herein.
After completing approval according to the first approval request, the first approval end may return an approval result to the intermediate service platform. The approval result may be that the customer is allowed, or not allowed, to use the target tag. If the approval result is that the customer is allowed to use the target tag, the intermediate service platform may generate an authorization code associated with the target tag for the customer by performing step 308. The intermediate service platform may then return the approval result to the customer by performing step 310. In some embodiments, the intermediate service platform may also return the authorization code to the customer in step 310.
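Steps 306 through 310 might be sketched as follows. This is a hypothetical illustration: the `AUTH_CODES` record, the function name, and the use of a random hex string as the authorization code are all assumptions, not details given in the specification.

```python
import uuid

AUTH_CODES = {}  # auth_code -> (customer_id, tag): the platform's record

def handle_approval(customer_id, tag, approved):
    """Process an approval result; generate an authorization code if allowed."""
    if not approved:
        # Step 310 only: return the (negative) approval result.
        return {"approved": False, "auth_code": None}
    code = uuid.uuid4().hex                 # step 308: generate the code
    AUTH_CODES[code] = (customer_id, tag)   # associate it with tag and customer
    return {"approved": True, "auth_code": code}
```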
After obtaining an approval result indicating that it is allowed to use the target tag, the customer knows that it has obtained permission to use the target tag and can subsequently use it, for example by submitting a computing task related to the target tag to the intermediate service platform.
Returning to FIG. 2, after receiving the computing task submitted by the client, the intermediate service platform may obtain, from the authorization code in the computing task, the target tag associated with the authorization code and the target provider associated with the target tag. For example, the intermediate service platform may maintain a first data table characterizing the association between authorization codes and tags, and a second data table characterizing the association between tags and providers. The intermediate service platform may first look up, in the first data table, the target tag associated with the authorization code in the computing task, and then look up, in the second data table, the target provider associated with the target tag.
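The two-table lookup described above can be sketched with plain dictionaries standing in for the platform's data tables. All keys and values here are illustrative assumptions.

```python
FIRST_TABLE = {"AUTH-001": "repayment_intention"}      # auth code -> tag
SECOND_TABLE = {"repayment_intention": ["B1", "BN"]}   # tag -> providers

def resolve_task(auth_code):
    """Resolve an authorization code to its tag and that tag's providers."""
    tag = FIRST_TABLE.get(auth_code)
    if tag is None:
        raise KeyError("unknown or unauthorized code")
    return tag, SECOND_TABLE.get(tag, [])
```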
The intermediate service platform may then send a scoring query request to the target provider's server by performing step 204. The scoring query request may include the at least one user identification and the target tag. Further, when the target tag belongs to a certain scene of a certain industry, the scoring query request may further include the industry and the scene to which the target tag belongs.
In practice, a scoring model associated with the target tag is deployed in the server of the target provider. The scoring model may be a model owned by the target provider, or a model provided to the target provider through the intermediate service platform. The scoring model may be a scoring rule or a machine learning model for score prediction, which is not limited herein.
For a user identifier among the at least one user identifier, if the server has already predicted, using the scoring model, the score under the target tag of the user indicated by that identifier, the server can directly obtain that ready score; otherwise, the server can obtain the user's user data and use the scoring model to predict, on the spot, the user's score under the target tag from that data.
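The ready-score-or-on-the-spot logic just described can be sketched as follows. The simple linear scoring rule is a stand-in for whatever model the provider actually deploys; the field name `on_time_repayments` and all values are hypothetical.

```python
PRECOMPUTED = {"user1": 82}   # scores predicted in advance by the model

def score_user(user_id, user_data, cache=PRECOMPUTED):
    """Return a ready score if one exists, else predict on the spot."""
    if user_id in cache:
        return cache[user_id]          # ready score available
    # On-the-spot prediction from user data (illustrative rule, capped at 100).
    score = min(100, 50 + 10 * user_data.get("on_time_repayments", 0))
    cache[user_id] = score
    return score
```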
In one example, the target label may be a training label used in the training process of the scoring model, and based on this, the scoring model may predict a score corresponding to the target label for a user corresponding to the user data according to the input user data. In this case, the score of the user under the target tag is usually one score.
In another example, the target tag may have multiple sub-tags. For example, when the target tag is a user profile tag, its sub-tags may include, but are not limited to, male, female, prefers shopping, prefers financial management, and the like. The sub-tags may be the training labels used when training the scoring model; on that basis, given input user data, the scoring model may predict, for the corresponding user, a score for each of the sub-tags. In such a case, the user's score under the target tag is typically a plurality of scores.
After obtaining scores corresponding to each of the at least one user identifier according to the scoring query request, the server of the target provider can return a query result to the intermediate service platform. The query result may include a user identifier among the at least one user identifier and the score, under the target tag, of the user indicated by that identifier. Further, when the target tag has multiple sub-tags and the scoring model associated with the target tag predicts a score for each sub-tag, the query result may also include the sub-tag corresponding to each score.
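The two shapes of query result just described (a single score for the target tag, or one score per sub-tag) can be sketched as follows. The field names are illustrative assumptions about the structure, not a format defined by the specification.

```python
def build_query_result(user_id, tag, scores, sub_tags=None):
    """Build one query-result entry for a user.

    With no sub_tags, the single score applies to the target tag itself;
    with sub_tags, scores are paired with their corresponding sub-tags.
    """
    if sub_tags is None:
        return {"user_id": user_id, "tag": tag, "score": scores[0]}
    return {
        "user_id": user_id,
        "tag": tag,
        "scores": dict(zip(sub_tags, scores)),  # sub-tag -> score
    }
```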
Thereafter, the intermediate service platform may receive the query result returned by the server of the target provider by performing step 206.
Then, the intermediate service platform may generate a calculation result corresponding to the calculation task according to the query result by executing step 208.
Specifically, in one example, if the target vendor is a single vendor, the intermediate service platform may directly take the query result as the calculation result in step 208. In another example, if the target provider is a plurality of providers, in step 208, the intermediate service platform may employ various processing means to generate a calculation result corresponding to the calculation task according to the query result returned by the plurality of providers.
For example, the intermediate service platform may take the query result of any one of the plurality of providers as the computing result. Alternatively, the intermediate service platform may perform the following generating steps:
Step a: merge the query results of the plurality of providers into a first query result;
Step b: deduplicate the first query result; for a tag corresponding to a plurality of different scores (e.g., the target tag, or a sub-tag of the target tag), perform either of the following operations: retain the maximum of the plurality of different scores in the first query result and delete the others; or average the plurality of different scores and replace them with the single average;
Step c: take the processed first query result as the computing result.
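The generating steps a through c above can be sketched as the following in-memory illustration. The function name, the `mode` parameter, and the row structure are assumptions made for the sketch.

```python
def merge_results(per_provider_results, mode="max"):
    """Merge providers' query results, resolving duplicate scores per tag."""
    merged = {}                                   # (user_id, tag) -> [scores]
    for results in per_provider_results:          # step a: merge
        for row in results:
            merged.setdefault((row["user_id"], row["tag"]), []).append(row["score"])
    out = []
    for (user_id, tag), scores in merged.items():  # step b: deduplicate
        if mode == "max":
            final = max(scores)                    # keep the maximum score
        else:
            final = sum(scores) / len(scores)      # or replace with the average
    # step c: the processed result is the computing result
        out.append({"user_id": user_id, "tag": tag, "score": final})
    return out
```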
In some embodiments, the computing task may further include a task type, and in step 208, the intermediate service platform may generate a computing result corresponding to the computing task according to the task type and the query result.
In general, a single task type may be, for example, user ranking, user filtering, normalization calculations, or sub-label score predictions, among others. In practice, when the target tag does not have a sub-tag, i.e., when the scoring model associated with the target tag is used to predict the score corresponding to the target tag, the task type of the computing task may be, for example, user ranking, user filtering, or normalization computing. When the target tag has a sub-tag, i.e. when the scoring model associated with the target tag is used to predict the score corresponding to the sub-tag, the task type of the computation task may be, for example, a sub-tag score prediction.
When the task type of the computing task is normalization computing or sub-tag score prediction, the intermediate service platform may perform the specific implementation described above with respect to step 208 to generate the computing result.
When the at least one user identifier comprises a plurality of user identifiers and the task type of the computing task is user ranking, the computing task may further include an upper limit on the number of users, which may be given either as a percentage or as a natural number, and is not specifically limited herein. In this case, in step 208, the intermediate service platform may first determine, from the query result, the final score under the target tag of the user indicated by each of the plurality of user identifiers. The intermediate service platform may then sort the plurality of user identifiers in descending order of score to obtain a user identifier sequence. Finally, starting from the position of the identifier with the highest score, the intermediate service platform may select at least some of the user identifiers from the sequence according to the upper limit, and take those identifiers as the computing result.
Specifically, when the upper limit is given as a percentage, the intermediate service platform may determine the product of the upper limit and the number of user identifiers as a target number, and select the first target-number user identifiers from that position in the sequence. When the upper limit is a natural number, the intermediate service platform may select that many user identifiers from that position. In some embodiments, when the number of user identifiers is less than or equal to the target number or the upper limit, the intermediate service platform may directly take the whole user identifier sequence as the computing result.
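The ranking task can be sketched as follows. Whether a fractional limit is floored or rounded is not stated in the specification; flooring is an assumption of this sketch, as is using a Python `float` versus `int` to distinguish the percentage form from the natural-number form.

```python
import math

def rank_users(scores, limit):
    """Rank user identifiers by final score and keep the top portion.

    scores: dict mapping user_id -> final score under the target tag.
    limit:  a float in (0, 1] (percentage form) or an int (natural number).
    """
    ranked = sorted(scores, key=scores.get, reverse=True)  # descending order
    if isinstance(limit, float):
        target = math.floor(limit * len(ranked))  # percentage-form limit
    else:
        target = limit                            # natural-number limit
    # If there are no more identifiers than the limit, return them all.
    return ranked[:target] if target < len(ranked) else ranked
```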
When the task type of the computing task is user filtering, in step 208 the intermediate service platform may first determine, from the query result, the final score under the target tag of the user indicated by each of the at least one user identifier, then determine whether that score exceeds a score threshold, and generate the computing result from the determination. The computing result may include a user identifier among the at least one user identifier and a flag indicating whether the final score under the target tag of the indicated user exceeds the score threshold. By way of example, when the score exceeds the score threshold, the flag may be, for example, "yes", "Y", or "YES"; when the score does not exceed the score threshold, the flag may be, for example, "no", "N", or "NO", and is not particularly limited herein.
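The filtering task reduces to a threshold comparison per user. In this sketch, "exceeds" is read as a strict comparison, and "Y"/"N" are one of the example flag forms mentioned above; the function name is an assumption.

```python
def filter_users(scores, threshold):
    """Flag each user by whether the final score exceeds the threshold.

    scores: dict mapping user_id -> final score under the target tag.
    Returns a dict mapping user_id -> "Y" or "N".
    """
    return {uid: ("Y" if s > threshold else "N") for uid, s in scores.items()}
```

The threshold itself may be preset by the platform or supplied in the computing task, as noted below.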
It should be noted that the score threshold may be preset by the intermediate service platform, or may be specified by the client (for example, the score threshold is included in the calculation task), and is not specifically limited herein.
In some embodiments, after step 208, the intermediate service platform may then perform step 212 as shown in FIG. 2, returning the results of the computation to the customer.
In practice, the computing task may be a real-time computing task or an offline computing task. The intermediate service platform may provide the customer with a first creation entry for real-time computing tasks and a second creation entry for offline computing tasks, and the computing task in step 202 may be created and submitted by the customer through either entry.
By way of example, the first creation entry may be presented on the label square interface described above, and the second creation entry may be presented on a dedicated offline task management interface, which is not specifically limited herein.
Further, while each label on the label square interface corresponds to the application entry, each label may also correspond to the first creation entry, for example, the immediate use button shown in fig. 6. Wherein, fig. 6 is another schematic diagram of the label square interface. It should be noted that, for any one of the labels presented on the label square interface, when the client has obtained the usage right of the label, the immediate use button corresponding to the label is generally in a clickable state; when the customer does not obtain the usage right of the label, the immediate use button corresponding to the label is generally in a non-clickable state.
When the computing task is a real-time computing task, the intermediate service platform may actively return the computing result, that is, after the intermediate service platform performs step 208, the intermediate service platform may immediately perform step 212 to return the computing result to the client. When the computing task is an offline computing task, the intermediate service platform can passively return a computing result. For example, after performing step 208, the intermediary service platform may first receive a request for obtaining the calculation result from the client by performing step 210 as shown in fig. 2, and then return the calculation result to the client by performing step 212.
It should be noted that, when the calculation task is a real-time calculation task, the aforementioned scoring model associated with the target tag may be specifically a scoring model associated with the target tag in a real-time calculation service. When the calculation task is an offline calculation task, the scoring model associated with the target tag may specifically be a scoring model associated with the target tag in an offline calculation service. It should be noted that the scoring models respectively associated with the target tag in the real-time computing service and the offline computing service may be the same model or different models, and are not specifically limited herein.
In addition, the server of the target provider may provide the real-time query interface and the offline query interface to the intermediate service platform. When the computing task is a real-time computing task, in step 204, the intermediate service platform may send a scoring query request to the server through the real-time query interface. When the computing task is an offline computing task, in step 204, the intermediate service platform may send a scoring query request to the server through the offline query interface.
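Choosing between the two query interfaces in step 204 is a simple dispatch on the task type. The endpoint paths below are illustrative placeholders; the specification does not define any concrete interface addresses.

```python
def query_endpoint(task_type):
    """Select the provider's query interface for the given task type."""
    if task_type == "real_time":
        return "/api/score/realtime"   # real-time query interface
    if task_type == "offline":
        return "/api/score/offline"    # offline query interface
    raise ValueError(f"unknown task type: {task_type}")
```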
In some embodiments, an offline computing task may be a single task or a looping task. As the names imply, a single task needs to be executed only once, while a looping task needs to be executed periodically. When the offline computing task is a looping task, after step 208 has been executed for it in the current computing period, the intermediate service platform may execute step 204 for it again when the next computing period arrives.
With the computing task processing method provided by the embodiment corresponding to FIG. 2, the intermediate service platform can receive a computing task submitted by a customer, obtain, according to the computing task, score data output by the customer's target provider in an online, efficient, and compliant manner, and generate the computing result corresponding to the computing task based on that score data. The computing result may subsequently be returned to the customer. The intermediate service platform thus connects customer needs with the compliant use of supplier data, and can help suppliers output their own data legitimately and in compliance so as to provide services for enterprises.
In addition, the method reduces the difficulty for customers of using provider data: it shields them from the access complexity of different providers' data sources and from differences in the providers' structured data. Customers can also use provider data efficiently; when data of a new provider needs to be accessed, the platform as a whole does not need major modification, and once the provider completes access, customers can use the new data tags on their own.
In practice, under the offline computing service, the intermediate service platform may allow a customer to specify a model identifier when creating an offline computing task, so that the score data predicted by the specified scoring model (the one indicated by the model identifier) can be obtained when the task is executed. It should be noted that, if the scoring model indicated by the model identifier is not pre-deployed on the server of the target provider, the intermediate service platform may first deploy the scoring model to that server after receiving the offline computing task submitted by the customer.
Based on this, in some embodiments, the computing task in step 202 may be an offline computing task that further includes a model identifier of a scoring model. After step 202 and before step 204, when the scoring model is stored in the intermediate service platform and not yet deployed to the server of the target provider, a model deployment flow as shown in FIG. 7 may be performed. The model deployment flow comprises the following steps:
step 2032, the intermediate service platform sends a second approval request to the second approval end according to the offline computing task, so that, after the offline computing task passes the task feasibility approval, the second approval end deploys the scoring model to the server of the target provider through the model processing end;
step 2034, the intermediate service platform receives the model deployment completion notification information returned by the second approval end.
Specifically, in step 2032, the second approval request may be a request to approve the feasibility of the task. It may include, for example, the offline computing task itself or at least part of it, and is not specifically limited herein. The second approval end may be, for example, a client or a server used for task approval by the aforementioned operation department; the second approval end and the first approval end may be the same approval end or different ones, which is not limited herein.
In one example, the second approval end may automatically perform a task feasibility approval on the offline computing task according to the second approval request and a deployed task approval algorithm. In another example, the task feasibility approval may be performed on the offline computing task at the second approval end with manual intervention. It should be understood that various approval methods may be employed for the task feasibility approval, and it is not specifically limited herein.
After the approval according to the second approval request is completed, if the offline computing task does not pass the task feasibility approval, the second approval end may return a task creation failure result to the intermediate service platform, and the intermediate service platform may return it to the customer. If the offline computing task passes the task feasibility approval, the second approval end may send a model deployment request to the model processing end.
The model deployment request may include, but is not limited to, the model identifier. Further, the offline computing task may also include a provider identifier of the target provider, in which case the model deployment request may include the provider identifier as well. It should be noted that the model processing end may be a server used for model generation and model deployment by a research and development department of the intermediate service platform, or a server used for the same purposes by a third-party research and development team; it is not specifically limited herein.
After receiving the model deployment request, the model processing end may deploy the scoring model to the server of the target provider using various deployment methods. In one example, the model processing end may automatically obtain the scoring model indicated by the model identifier according to the model deployment request and a deployed model deployment algorithm, and deploy it to the server of the target provider. In another example, manual intervention may be used: the model deployment requirement is further discussed with the customer, and model deployment is then performed based on the final requirement.
It should be noted that modeling is a labor-intensive task; the modeling service may therefore be provided to customers of the intermediate service platform by a third-party research and development team. In this way, the intermediate service platform can quickly serve customers at scale with the help of third-party research and development teams.
After the scoring model is successfully deployed, the model processing end may return model deployment completion notification information to the second approval end, and the second approval end may return it to the intermediate service platform. Based on this, in step 2034, the intermediate service platform receives the model deployment completion notification information returned by the second approval end. Optionally, after step 2034, the intermediate service platform may return a model deployment completion notification to the customer.
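The deployment flow of steps 2032 and 2034 can be condensed into a short sketch. The function signature, the `approve` callback, and the dictionary shapes are illustrative assumptions standing in for the second approval end, the model processing end, and the provider's server:

```python
# Hypothetical sketch of steps 2032-2034: the platform asks the second
# approval end to vet the offline task; on approval, the model processing
# end deploys the scoring model to the target provider's server.
def deploy_model(task, approve, model_store, provider_server):
    """approve: callable playing the second approval end's feasibility check."""
    if not approve(task):
        return "task creation failed"            # result returned to the customer
    model = model_store[task["model_id"]]         # model processing end fetches it
    provider_server.setdefault("models", {})[task["model_id"]] = model
    return "model deployment complete"            # notification back to the platform
```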
In practice, under the offline computing service, the intermediate service platform may also support customer-defined scoring models. Specifically, the customer may provide a modeling requirement to the intermediate service platform, which may create a scoring model based on that requirement.
Based on this, in some embodiments, the scoring model indicated by the model identifier may be a model owned by the intermediate service platform, or a model generated according to the modeling requirement of the customer. When the scoring model is generated according to the customer's modeling requirement, a model creation flow as shown in FIG. 7 may also be performed before step 202. The model creation flow comprises the following steps:
step 2012, the intermediate service platform receives a model creation request from the customer, the model creation request including a modeling requirement;
step 2014, the intermediate service platform sends a third approval request to the second approval end according to the model creation request, so that, after the modeling requirement passes the feasibility approval, the second approval end obtains a scoring model through the model processing end at least according to the modeling requirement;
step 2016, the intermediate service platform receives the scoring model returned by the second approval end;
step 2018, the intermediate service platform generates a model identifier for the scoring model.
Specifically, in step 2012, the model creation request may be submitted by the customer through, for example, a model creation portal provided by the intermediate service platform. The model creation request includes at least the modeling requirement. The modeling requirement may include, for example, the use of the model and the applicable industries and scenarios, and is not specifically limited herein. Optionally, the model creation request may also include a positive sample set, a negative sample set, and/or feature screening rules, or identification information of at least one of the three. When the model creation request includes the identification information, the customer has uploaded the corresponding item(s) to the intermediate service platform in advance. It is noted that the positive samples in the positive sample set and the negative samples in the negative sample set may both be user identifiers, such as mobile phone numbers, IMEIs, IDFAs, OAIDs, and the like.
Next, in step 2014, the intermediate service platform may send a third approval request to the second approval end according to the model creation request. Wherein the third approval request may be a request to approve feasibility of the modeling requirement. The third approval request may include modeling requirements. Optionally, the third approval request may further include a positive sample set, a negative sample set, and/or feature screening rules as described above.
In one example, the second approval end may automatically approve the feasibility of the modeling requirement according to the third approval request and a deployed feasibility approval algorithm for modeling requirements. In another example, the feasibility of the modeling requirement may be approved at the second approval end with manual intervention. It should be understood that various approval methods may be employed for the feasibility approval of modeling requirements, and it is not specifically limited herein.
After the approval according to the third approval request is completed, if the modeling requirement does not pass the feasibility approval, the second approval end may return a model creation failure result to the intermediate service platform, and the intermediate service platform may return it to the customer. If the modeling requirement passes the feasibility approval, the second approval end may send a modeling request to the model processing end as described above.
The modeling request may include the modeling requirement. Optionally, it may also include the positive sample set, negative sample set, and/or feature screening rules described above. After receiving the modeling request, the model processing end may create a scoring model using various modeling methods. In one example, the model processing end may automatically create the scoring model according to the modeling request and a deployed modeling algorithm. In another example, the scoring model may be created from the modeling request with manual intervention.
After the scoring model is successfully created, the model processing end may return the scoring model to the second approval end, which in turn returns it to the intermediate service platform. Based on this, the intermediate service platform may receive the scoring model returned by the second approval end by executing step 2016, and may then generate a model identifier for the scoring model by executing step 2018. Subsequently, the intermediate service platform may also return model creation completion notification information to the customer. The notification information may include, but is not limited to, the model identifier.
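The model creation flow of steps 2012 through 2018 can be sketched end to end. The callbacks stand in for the second approval end and the model processing end, and the use of a random hex string as the model identifier is purely an illustrative assumption:

```python
# Hypothetical sketch of steps 2012-2018: the platform forwards the modeling
# requirement for feasibility approval; on success the model processing end
# builds a scoring model and the platform assigns it a model identifier.
import uuid


def create_model(request, approve_requirement, build_model, registry):
    requirement = request["modeling_requirement"]         # step 2012
    if not approve_requirement(requirement):              # step 2014
        return None, "model creation failed"
    model = build_model(requirement,                      # model processing end
                        request.get("positive_samples"),
                        request.get("negative_samples"),
                        request.get("feature_rules"))
    model_id = uuid.uuid4().hex                           # step 2018
    registry[model_id] = model                            # step 2016: platform keeps it
    return model_id, "model creation complete"
```

The returned `model_id` corresponds to the identifier the customer would later embed in an offline computing task.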
With further reference to FIG. 8, the present specification provides an embodiment of a computing task processing device that may be applied to the intermediate service platform shown in FIG. 1.
As shown in FIG. 8, the computing task processing device 800 of this embodiment may include: a first receiving unit 801, a first sending unit 802, a second receiving unit 803, and a generating unit 804. The first receiving unit 801 is configured to receive a computing task submitted by a customer, where the computing task includes an authorization code and at least one user identifier, the authorization code indicates that the customer has been authorized to use a target tag, and the target tag is associated with a target provider of the customer; the first sending unit 802 is configured to send a scoring query request to a server of the target provider, where the scoring query request includes the at least one user identifier and the target tag; the second receiving unit 803 is configured to receive a query result returned by the server, where the query result includes a user identifier in the at least one user identifier and a score of the user indicated by that user identifier under the target tag, the score being predicted by a scoring model associated with the target tag in the server; the generating unit 804 is configured to generate a computation result corresponding to the computing task according to the query result.
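The four units of device 800 can be chained into one pipeline, sketched below. The class name, data shapes, and the `provider_server` callback are illustrative assumptions; only the unit roles mirror the text:

```python
# Hypothetical sketch of device 800: the four units as one processing pipeline.
class ComputingTaskDevice:
    def __init__(self, provider_server):
        self.provider_server = provider_server    # callable standing in for the server

    def receive_task(self, task):                 # first receiving unit 801
        assert "authorization_code" in task and task["user_ids"]
        return task

    def send_scoring_query(self, task):           # first sending unit 802
        return {"user_ids": task["user_ids"], "tag": task["target_tag"]}

    def receive_query_result(self, query):        # second receiving unit 803
        return self.provider_server(query)

    def generate_result(self, task, result):      # generating unit 804
        return {"task": task["target_tag"], "scores": result}

    def process(self, task):
        task = self.receive_task(task)
        query = self.send_scoring_query(task)
        result = self.receive_query_result(query)
        return self.generate_result(task, result)
```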
In some embodiments, the device 800 may further include a second sending unit for sending information to the customer, a third sending unit for sending information to the first approval end, a third receiving unit for receiving information returned by the first approval end, a fourth sending unit for sending information to the second approval end, a fourth receiving unit for receiving information returned by the second approval end, and so on.
In some embodiments, the first receiving unit 801 may be further configured to: receive a use application submitted by the customer for a target tag before receiving the computing task submitted by the customer; the third sending unit may be configured to: send a first approval request to the first approval end according to the use application; the third receiving unit may be configured to: receive the approval result returned by the first approval end; the generating unit 804 may be further configured to: generate the authorization code for the customer in response to the approval result being that the customer is allowed to use the target tag; and the second sending unit may be configured to: return the approval result to the customer.
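The tag-use application flow described above can be sketched in a few lines. The `first_approval_end` callback and the use of a random hex token as the authorization code are illustrative assumptions:

```python
# Hypothetical sketch of the tag-use application flow: forward the application
# to the first approval end and, if allowed, mint an authorization code tied
# to the customer and the target tag.
import secrets


def apply_for_tag(customer, target_tag, first_approval_end, codes):
    result = first_approval_end(customer, target_tag)    # first approval request
    if result == "allowed":
        code = secrets.token_hex(8)                      # generating unit 804
        codes[code] = (customer, target_tag)             # code later accompanies tasks
        return result, code
    return result, None                                  # only the result is returned
```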
In some embodiments, the computing task is a real-time computing task or an offline computing task.
In some embodiments, the computing task is a real-time computing task; and the second transmitting unit may be configured to: after the generating unit 804 generates the calculation result corresponding to the calculation task, the calculation result is returned to the client.
In some embodiments, the computing task is an offline computing task; and the first receiving unit 801 may be further configured to: after the generating unit 804 generates the calculation result corresponding to the calculation task, receiving an acquisition request of a client for the calculation result; the second transmitting unit may be configured to: and returning the calculation result to the client.
In some embodiments, the offline computing task is a looping task; and the first sending unit 802 may be further configured to: send the scoring query request to the server of the target provider again when the next computing period after the current computing period of the offline computing task arrives.
In some embodiments, the computing task is an offline computing task that further includes a model identifier of a scoring model, the scoring model being stored in the intermediate service platform; and the fourth sending unit may be configured to: send a second approval request to the second approval end according to the offline computing task before the first sending unit 802 sends the scoring query request to the server of the target provider, so that the second approval end deploys the scoring model to the server through the model processing end after the offline computing task passes the task feasibility approval; the fourth receiving unit may be configured to: receive the model deployment completion notification information returned by the second approval end.
In some embodiments, the scoring model is generated based on a modeling requirement of the customer; and the first receiving unit 801 may be further configured to: receive a model creation request of the customer, including the modeling requirement, before receiving the computing task submitted by the customer; the fourth sending unit may be configured to: send a third approval request to the second approval end according to the model creation request, so that the second approval end obtains the scoring model through the model processing end at least according to the modeling requirement after the modeling requirement passes the feasibility approval; the fourth receiving unit may be configured to: receive the scoring model returned by the second approval end; and the generating unit 804 may be further configured to: generate a model identifier for the scoring model.
In some embodiments, the computing task further includes a task type; and the generating unit 804 may be further configured to: and generating a calculation result according to the task type and the query result.
In some embodiments, the task type may be user ranking, user filtering, normalization, or sub-label score prediction, among others.
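How the generating unit might act on the task type can be sketched as follows. The task-type strings and the 0.5 filtering threshold are illustrative assumptions; the specification does not fix either:

```python
# Hypothetical sketch of the generating unit: turn the query result into a
# computation result according to the task type.
def generate_result(task_type, scores):
    """scores: mapping of user identifier -> score under the target tag."""
    if task_type == "user_ranking":
        # Order user identifiers from highest to lowest score.
        return sorted(scores, key=scores.get, reverse=True)
    if task_type == "user_filtering":
        # Keep only users whose score clears an assumed threshold.
        return [uid for uid, s in scores.items() if s >= 0.5]
    if task_type == "normalization":
        # Min-max normalize the scores into [0, 1].
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0
        return {uid: (s - lo) / span for uid, s in scores.items()}
    raise ValueError(f"unsupported task type: {task_type}")
```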
In some embodiments, the target tag may belong to a certain scenario of a certain industry; and/or each user identifier may include a mobile phone number, an international mobile equipment identity, an advertisement identifier, or an anonymous device identifier, among others; and/or the scoring model may be provided to the target provider by the intermediate service platform.
For the device embodiment corresponding to FIG. 8, the detailed processing of each unit and its technical effects can be found in the related description of the foregoing method embodiments, and are not repeated herein.
An embodiment of the present specification further provides a computing task processing method based on cloud communication, which is applied to an intermediate service platform in a cloud communication platform, and includes: receiving a computing task submitted by a customer, wherein the computing task comprises an authorization code and at least one user identifier, the authorization code indicates a target label authorized to be used by the customer, and the target label is associated with a target supplier of the customer; sending a scoring query request to a server of a target provider, wherein the scoring query request comprises the at least one user identifier and a target tag; receiving a query result returned by the server, wherein the query result comprises a user identifier in the at least one user identifier and a score of a user indicated by the user identifier under a target label, and the score is predicted by a scoring model associated with the target label in the server; and generating a calculation result corresponding to the calculation task according to the query result.
An embodiment of the present specification further provides a computing task processing device based on cloud communication, which is applied to an intermediate service platform in a cloud communication platform, and includes: a first receiving unit configured to receive a computing task submitted by a customer, the computing task including an authorization code and at least one user identifier, the authorization code indicating a target tag that the customer has been authorized to use, the target tag being associated with a target provider of the customer; a first sending unit configured to send a scoring query request to a server of a target provider, the scoring query request including the at least one user identifier and a target tag; the second receiving unit is configured to receive a query result returned by the server, wherein the query result comprises a user identifier in the at least one user identifier and a score of a user indicated by the user identifier under the target label, and the score is predicted by a scoring model associated with the target label in the server; and the generating unit is configured to generate a calculation result corresponding to the calculation task according to the query result.
An embodiment of the present specification further provides a use application method for a tag, comprising: receiving a use application submitted by a customer for a target tag; sending a first approval request to a first approval end according to the use application; receiving an approval result returned by the first approval end; and in response to the approval result being that the customer is allowed to use the target tag, generating an authorization code associated with the target tag for the customer and returning the approval result to the customer.
An embodiment of the present specification further provides a device for applying for use of a tag, comprising: a first receiving unit configured to receive a use application submitted by a customer for a target tag; a sending unit configured to send a first approval request to a first approval end according to the use application; a second receiving unit configured to receive an approval result returned by the first approval end; and a processing unit configured to, in response to the approval result being that the customer is allowed to use the target tag, generate an authorization code associated with the target tag for the customer and return the approval result to the customer.
The present specification also provides a computer-readable storage medium on which a computer program is stored, wherein, when the computer program is executed in a computer, it causes the computer to execute the computing task processing method and the tag use application method respectively described in the above method embodiments.
An embodiment of the present specification further provides a computing device comprising a memory and a processor, where the memory stores executable code, and when the processor executes the executable code, the computing task processing method and the tag use application method described in the above method embodiments are implemented.
An embodiment of the present specification further provides a computer program which, when executed in a computer, causes the computer to execute the computing task processing method and the tag use application method respectively described in the above method embodiments.
Those skilled in the art will recognize that, in one or more of the examples described above, the functions described in the embodiments disclosed herein may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium.
In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The foregoing further describes in detail the objects, technical solutions, and advantages of the embodiments disclosed in the present specification. It should be understood that the above are only specific embodiments of the present disclosure and are not intended to limit its scope; any modifications, equivalent substitutions, improvements, and the like made on the basis of the technical solutions of these embodiments shall fall within the scope of the embodiments disclosed in the present specification.