Detailed Description
The following description of the embodiments of the present invention is made clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, but not all, embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
The invention provides a picture transaction platform management system based on artificial intelligence, which comprises a picture display uploading module, a usage scenario dynamic updating module, a picture retrieval module, a picture first screening module, a picture second screening module, a picture recommendation and display module and a scene configuration library.
Referring to fig. 1, the picture display uploading module is connected with the usage scenario dynamic updating module; the usage scenario dynamic updating module and the picture retrieval module are both connected with the picture first screening module; the picture first screening module is connected with the picture second screening module; the picture second screening module is connected with the picture recommendation and display module; and the scene configuration library is connected with the usage scenario dynamic updating module.
The picture display uploading module is used for setting an uploading entry in the picture display page of the transaction platform, allowing historical purchasers to upload a usage state image and a usage evaluation of a picture on its picture detail page.
In a specific implementation of the above scheme, users are guided to upload their usage state images and usage evaluations through a functional element, such as a "share use case" button or link, integrated in the picture detail page of the transaction platform. The usage evaluation includes the historical purchaser's evaluation content and the evaluation star level generated by the system for that content.
The usage scenario dynamic updating module is used for identifying effective positive evaluations from the usage state images and usage evaluations uploaded by users on the picture detail page according to a set update period, extracting usage scenarios from the usage state images of the effective positive evaluations, and updating the extracted usage scenarios on the picture detail page. See fig. 2.
The update period can be dynamically adjusted, specifically according to the frequency at which transaction records are generated on the transaction platform, i.e. the transaction frequency. When the transaction frequency is high, purchasers upload usage state images and evaluations more quickly, so a short update period can be set to reflect the latest user feedback sooner. When the transaction frequency is low, uploads arrive more slowly, and a longer update period avoids unnecessary resource consumption. In this way the usage evaluations retrieved from the picture detail page are always the latest ones while excessive resource consumption is avoided, the proportion of outdated information is reduced, and the retrieved usage evaluations are more accurate.
In the implementation of this scheme, the identification process first maps each usage evaluation uploaded on a picture detail page within the update period to its corresponding usage state image, obtaining the usage state image associated with each usage evaluation.
As a specific example of the above implementation, the mapping may associate each usage evaluation with its usage state image through a composite identifier (e.g., purchaser ID + picture ID + rating ID).
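A minimal sketch of this mapping, assuming hypothetical record fields (`purchaser_id`, `picture_id`, `rating_id`) chosen purely for illustration, might look like:

```python
# Illustrative sketch: associating each usage evaluation with its usage
# state image via a composite identifier (purchaser ID + picture ID +
# rating ID). The record fields used here are hypothetical placeholders.

def make_key(purchaser_id: str, picture_id: str, rating_id: str) -> str:
    """Build the composite identifier linking an evaluation to its image."""
    return f"{purchaser_id}+{picture_id}+{rating_id}"

def map_evaluations_to_images(evaluations: list, images: list) -> dict:
    """Return {composite_key: (evaluation, image)} for records sharing a key."""
    image_index = {
        make_key(i["purchaser_id"], i["picture_id"], i["rating_id"]): i
        for i in images
    }
    mapping = {}
    for ev in evaluations:
        key = make_key(ev["purchaser_id"], ev["picture_id"], ev["rating_id"])
        if key in image_index:
            mapping[key] = (ev, image_index[key])
    return mapping
```

Keying both record sets by the same composite identifier lets the update job pair evaluations with images in a single pass, without relying on upload order.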
Each usage evaluation is then segmented into a number of word segments, and each word segment is tagged with its part of speech.
The Chinese word segmentation can be performed with a Chinese word segmentation tool, and the part-of-speech tagging can be implemented with a part-of-speech tagging tool, where the parts of speech include but are not limited to nouns, adjectives and adverbs.
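In practice a mature segmentation library (for example jieba in part-of-speech mode) would be used; the toy dictionary-based tagger below is only a stand-in to illustrate the shape of the output, with a hypothetical lexicon fragment:

```python
# Toy stand-in for a Chinese word segmentation + part-of-speech tagging
# tool (a real system would use a library such as jieba). The lexicon is
# a hypothetical fragment for illustration only.

LEXICON = {  # word -> part of speech
    "颜色": "noun", "对比度": "noun",
    "比较": "adverb", "非常": "adverb",
    "鲜艳": "adjective", "强烈": "adjective",
}

def segment_and_tag(text: str) -> list:
    """Greedy longest-match segmentation over the lexicon; characters not
    covered by the lexicon are emitted one by one with the tag 'other'."""
    result, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try the longest word first
            word = text[i:j]
            if word in LEXICON:
                result.append((word, LEXICON[word]))
                i = j
                break
        else:
            result.append((text[i], "other"))
            i += 1
    return result
```

The downstream steps only need the `(word, part_of_speech)` pairs, so any real segmenter producing that shape can be substituted.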
Evaluation characterization phrases are extracted based on the part of speech of each word segment; if no evaluation characterization phrase can be extracted from a usage evaluation, that evaluation is rejected, and the retained usage evaluations are marked as valid evaluations.
In the preferred implementation of this scheme, the extraction of evaluation characterization phrases treats adjectives and adverbs collectively as descriptive words. Word segments whose part of speech is a noun and word segments whose part of speech is descriptive are therefore screened out according to the part of speech of each segment and marked as noun segments and descriptive segments, respectively.
It should be appreciated that adjectives and adverbs may be referred to as descriptive words because their primary role in language is to describe and modify nouns, verbs or other words, providing detailed information about objects or actions. In usage evaluations, adjectives and adverbs are important components of expressed opinion: through them an evaluator can convey attitudes and feelings more precisely, making the evaluation more expressive and convincing.
Adjacent noun segments and descriptive segments are then combined into phrases, which serve as the evaluation characterization phrases.
As an example of the above implementation, suppose a picture's usage evaluation is "this picture is used for office wall decoration, the color is relatively vivid, and the contrast is very strong". The Chinese word segmentation tool divides it into "this", "picture", "is used for", "office", "wall", "decoration", "color", "relatively", "vivid", "contrast", "very" and "strong". The descriptive segments are "relatively", "vivid", "very" and "strong", and the noun segments are "office", "wall", "decoration", "color" and "contrast". Combining each noun segment with the adjacent descriptive segments yields the evaluation characterization phrases "color relatively vivid" and "contrast very strong" in this example.
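The phrase-combination step above can be sketched as follows, using illustrative English tokens and tags in place of real segmenter output:

```python
# Sketch of evaluation characterization phrase extraction: merge each noun
# segment with the descriptive segments (adjectives/adverbs) that
# immediately follow it. Tokens and tags here are illustrative assumptions.

DESCRIPTIVE = {"adjective", "adverb"}

def extract_phrases(tagged: list) -> list:
    """Combine adjacent noun + descriptive segments into phrases.
    `tagged` is a list of (word, part_of_speech) pairs."""
    phrases = []
    for idx, (word, pos) in enumerate(tagged):
        if pos != "noun":
            continue
        descriptors = []
        for nxt_word, nxt_pos in tagged[idx + 1:]:
            if nxt_pos in DESCRIPTIVE:
                descriptors.append(nxt_word)
            else:
                break
        if descriptors:
            phrases.append(word + " " + " ".join(descriptors))
    return phrases
```

An evaluation yielding an empty phrase list would be rejected as invalid under the rule stated above.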
According to the invention, valid evaluations are identified according to whether an evaluation characterization phrase can be extracted from the usage evaluation, and usage evaluations without such phrases are removed. This ensures that the remaining evaluations contain substantive information, improves the accuracy and reliability of subsequent analysis, and guarantees that analytical conclusions are based on valuable user feedback. Removing invalid evaluations also reduces the computing resources and storage space required to process and store useless data.
The evaluation star level is extracted from each valid evaluation, and the emotional tendency corresponding to the valid evaluation is obtained from the correspondence between evaluation star levels and emotional tendencies. In a specific implementation, a mapping table of star levels and emotional tendencies can be established; as shown in Table 1, when the evaluation star level is 3 stars or more, the emotional tendency of the evaluation is positive.
TABLE 1

Evaluation star level | Emotional tendency
----------------------|-------------------
3 stars and above     | Positive
Below 3 stars         | Negative
It is to be appreciated that on many platforms such as e-commerce sites and application stores, a 3-star rating is generally regarded as the boundary of neutral preference: evaluations below 3 stars are generally considered negative feedback, while 3 stars and above are considered positive feedback.
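The Table 1 correspondence reduces to a one-line rule, sketched here for illustration:

```python
def emotional_tendency(star_rating: int) -> str:
    """Map an evaluation star level to an emotional tendency per Table 1:
    3 stars and above are treated as positive, below 3 stars as negative."""
    return "positive" if star_rating >= 3 else "negative"
```
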
Valid evaluations whose emotional tendency is positive are then screened out from the valid evaluations to obtain the effective positive evaluations.
According to the invention, by identifying effective positive evaluations among the usage evaluations uploaded by historical purchasers, it can be known more accurately which pictures perform well in practical applications, so that more accurate recommendations are provided for new users.
In a further preferred implementation, the extraction of the usage scenario from the usage state image of an effective positive evaluation is performed by separating the background of the usage state image and identifying background objects in the separated background.
The purpose of the background separation is to separate the foreground of the usage state image (i.e. the picture being used) from the background, so that background objects can be better identified and the specific scene of the image understood. Illustratively, the background separation may be performed by edge detection, background subtraction or similar techniques.
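A toy grayscale background-subtraction sketch is shown below; a real system would use an image library (e.g. OpenCV) and a learned or reference background model, and the images here are small hypothetical brightness grids:

```python
# Toy grayscale background subtraction. Pixels whose brightness differs
# from a reference background frame by more than a threshold are marked
# foreground (the picture in use); the remaining pixels form the
# background region passed on to background-object recognition.

def separate_background(image, background, threshold=30):
    """Return a mask the same shape as `image`: True = foreground pixel.
    `image` and `background` are equally sized 2-D lists of brightness
    values in 0..255."""
    return [
        [abs(p - b) > threshold for p, b in zip(img_row, bg_row)]
        for img_row, bg_row in zip(image, background)
    ]
```
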
The number of recognized background objects is counted, and each background object is matched against the background objects conventionally configured for different scenes in the scene configuration library, so as to obtain the scene matched by each background object.
For the above embodiment, the different scenes in the scene configuration library are typically configured with specific background objects; for example, objects typically configured for an indoor design scene are a sofa, a television, a tea table and the like; objects typically configured for an office design scene are desks, computers, file cabinets and the like; and objects typically configured for a commercial display scene are display tables, shelves and the like.
The scenes matched by the background objects are grouped by identical scene and the occurrence frequency of each scene is counted; the scene with the maximum occurrence frequency is then taken as the usage scenario of the usage state image of the corresponding effective positive evaluation.
As an example of the above scheme, if objects such as a sofa, a television, a tea table and a computer are identified, the image is most likely an indoor design scene.
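The grouping-and-voting step can be sketched as follows, using as the library contents the example objects given above:

```python
# Sketch of scene inference: match each recognized background object
# against the scene configuration library, then take the scene matched
# by the most objects. Library contents follow the examples in the text.

from collections import Counter

SCENE_LIBRARY = {
    "indoor design": {"sofa", "television", "tea table"},
    "office design": {"desk", "computer", "file cabinet"},
    "commercial display": {"display table", "shelf"},
}

def infer_usage_scenario(background_objects):
    """Return the scene matched by the largest number of recognized
    background objects, or None if nothing matches."""
    votes = Counter(
        scene
        for obj in background_objects
        for scene, objs in SCENE_LIBRARY.items()
        if obj in objs
    )
    return votes.most_common(1)[0][0] if votes else None
```

In the sofa/television/tea table/computer example, "indoor design" receives three votes against one for "office design", so it is selected.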
The picture retrieval module is used for providing a usage scenario retrieval condition on the picture retrieval page of the transaction platform, allowing users to input a usage scenario for picture retrieval.
The picture first screening module is used for preliminarily screening pictures meeting the user's requirements from the picture detail pages in the picture display page according to the usage scenario input by the user, marking them as candidate pictures, and then retrieving the usage evaluations of the candidate pictures so as to deeply screen out available pictures. See fig. 3.
In a possible implementation of the above scheme, the screening process of the candidate pictures compares the usage scenario updated on each picture detail page with the usage scenario input by the user, and extracts the pictures matching the user's usage scenario as candidate pictures.
It should be appreciated that the candidate pictures are screened based on the usage scenario input by the user, which ensures that the selected pictures are based on the user's actual needs.
In a further possible implementation, the available-picture screening process counts, for each candidate picture, the proportion of effective positive evaluations among the usage evaluations uploaded by historical purchasers, and compares it with a set qualifying proportion, for example 80%; the candidate pictures reaching the qualifying proportion are extracted as reference pictures.
It is to be appreciated that after the candidate pictures are selected, the reference pictures are further selected according to the proportion of effective positive evaluations, so that the available pictures are obtained on the basis of effective positive evaluations, which can improve the reliability of the recommendation results and user satisfaction.
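The proportion test can be sketched as follows, with the 80% figure taken from the example above and the counting structure an assumption for illustration:

```python
# Sketch of the deep-screening step: keep candidate pictures whose share
# of effective positive evaluations reaches a qualifying proportion
# (0.8 here, following the 80% example in the description).

def select_reference_pictures(candidates: dict, threshold: float = 0.8) -> list:
    """`candidates` maps picture ID -> (positive_count, total_count);
    returns the IDs qualifying as reference pictures."""
    return [
        pid
        for pid, (positive, total) in candidates.items()
        if total > 0 and positive / total >= threshold
    ]
```
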
The evaluation star levels are extracted from the usage evaluations of the reference pictures, and the star levels of each reference picture's usage evaluations are compared to obtain the median evaluation star level and the average evaluation star level of each reference picture.
It should be added that the average evaluation star level reflects the overall level, while the median evaluation star level reflects the typical central value and is robust to extreme ratings.
The absolute value of the difference between the median evaluation star level and the average evaluation star level of each reference picture is divided by the median evaluation star level to obtain the median-average deviation degree of each reference picture.
The median-average deviation degree of each reference picture is compared with the system limiting deviation degree, which is illustratively 0.3. If the median-average deviation degree of a reference picture reaches the system limiting deviation degree, the difference between the median and average evaluation star levels is large, reflecting that the star levels of that reference picture's usage evaluations are scattered and there is a risk of polarized evaluations, so the reference picture is eliminated. Conversely, if the median-average deviation degree of a reference picture does not reach the system limiting deviation degree, the difference between the median and average evaluation star levels is small, reflecting that the star level distribution is relatively even and the evaluations of historical purchasers tend to be consistent, which means a consensus on the picture has formed among historical purchasers; such a reference picture is retained as an available picture.
According to the method, the available pictures obtained through this screening have an even distribution of evaluation star levels, which improves the quality of the recommended pictures and ensures that they are well rated by the majority of users. The recommendation is thus more stable, and the overall recommendation result is not affected by fluctuations in individual evaluations. Conversely, by removing pictures with scattered star level distributions, the risk of extreme evaluations appearing in the recommendation results is reduced, as is user confusion caused by inconsistent evaluations.
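The median-average deviation check described above can be sketched as follows (0.3 is the example limiting deviation; the input structure is an assumption):

```python
# Sketch of the consensus check: for each reference picture, compute the
# median-average deviation |median - mean| / median of its evaluation
# star levels and discard pictures reaching the limiting deviation
# (0.3 in the example), whose ratings are too polarized to recommend.

from statistics import mean, median

def select_available_pictures(star_ratings: dict, limit: float = 0.3) -> list:
    """`star_ratings` maps picture ID -> list of evaluation star levels;
    returns the IDs retained as available pictures."""
    available = []
    for pid, stars in star_ratings.items():
        med = median(stars)
        deviation = abs(med - mean(stars)) / med
        if deviation < limit:
            available.append(pid)
    return available
```

For example, ratings [4, 4, 5, 4] give a deviation of 0.0625 and are kept, while the polarized ratings [1, 1, 5, 5, 5] give 0.32 and are eliminated.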
The picture second screening module is used for performing attention feature identification on the effective positive evaluations of the available pictures, extracting the user's historical transaction records and analyzing the user tendency attention feature from them, then matching the attention features of the available pictures' usage evaluations against the user tendency attention feature and screening out adaptive pictures from the successfully matched available pictures.
In a preferred implementation, the attention feature identification on the effective positive evaluations of the available pictures first extracts the effective positive evaluations from the usage evaluations of the available pictures, then splits each evaluation characterization phrase extracted from those evaluations into an evaluation feature and an evaluation description.
In the invention, the evaluation feature is the evaluation object. Under the example "this picture is used for office wall decoration, the color is relatively vivid, and the contrast is very strong", splitting the characterization phrase "color relatively vivid" gives the evaluation feature "color" and the evaluation description "relatively vivid", and splitting "contrast very strong" gives the evaluation feature "contrast" and the evaluation description "very strong".
The extracted evaluation characterization phrases are numbered according to their order of appearance in the usage evaluation, and the number of each phrase is taken as the appearance number of the evaluation feature obtained by splitting that phrase.
In the above example, "color relatively vivid" appears first and "contrast very strong" appears second, so the appearance numbers of the evaluation features "color" and "contrast" are 1 and 2, respectively.
It should be noted that, because the evaluation characterization phrase and the evaluation feature are in one-to-one correspondence, the appearance number of the evaluation characterization phrase in the use evaluation is the appearance number of the evaluation feature.
Degree words are extracted from the evaluation description split from each evaluation characterization phrase, and the weight value corresponding to each degree word is obtained from the predefined weight correspondence between degree words and emotion intensities.
It should be noted that degree words in an evaluation description are words used to express the degree or intensity of a characteristic, state or behavior, illustratively "extremely", "relatively", "very" and "slightly". Under the above example, the degree words present in the evaluation characterization phrases "color relatively vivid" and "contrast very strong" are "relatively" and "very", respectively.
It should be further understood that the higher the emotion intensity of a degree word, the larger its corresponding weight value. Specifically, the weight correspondence between degree words and emotion intensities may be set according to the conventional intensity relationships of degree words in the Chinese language; for example, the emotion intensity relationship of "extremely", "relatively", "very" and "slightly" is slightly < relatively < very < extremely. With the weights ranged between 0 and 1 under this relationship, the weights of "extremely", "relatively", "very" and "slightly" are 0.8, 0.4, 0.6 and 0.2, respectively.
The weight values of the degree words in the evaluation description corresponding to each evaluation characterization phrase are accumulated to obtain the emotion intensity coefficient of the evaluation feature corresponding to that phrase.
In the above example, since "color relatively vivid" and "contrast very strong" each contain only one degree word, the emotion intensity coefficients of the corresponding evaluation features "color" and "contrast" are 0.4 and 0.6, respectively.
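The weight lookup and accumulation can be sketched as follows, using the example weights stated above (the English degree-word spellings are illustrative stand-ins):

```python
# Sketch of emotion intensity scoring: accumulate the weights of the
# degree words found in an evaluation description, using the example
# weights from the text (slightly 0.2 < relatively 0.4 < very 0.6 <
# extremely 0.8). Words without a weight contribute nothing.

DEGREE_WEIGHTS = {"slightly": 0.2, "relatively": 0.4, "very": 0.6, "extremely": 0.8}

def emotion_intensity(description_words: list) -> float:
    """Sum the weight of every degree word appearing in the description."""
    return sum(DEGREE_WEIGHTS.get(w, 0.0) for w in description_words)
```
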
The appearance number and emotion intensity coefficient corresponding to each evaluation feature in the available pictures are substituted into the formula
G = q × (n − p + 1) / n
to calculate the attention degree G corresponding to each evaluation feature in the effective positive evaluations of the available pictures, where p denotes the appearance number of the evaluation feature, n denotes the number of evaluation characterization phrases (which equals the number of evaluation features), and q denotes the emotion intensity coefficient of the evaluation feature.
From the calculation formula of the attention degree, it can be seen that the earlier an evaluation feature appears and the larger its emotion intensity coefficient, the greater its attention degree. This is because users typically mention their most interesting or most impressive features first when composing an evaluation, so evaluation features appearing near the front tend to be the ones users consider more important. The emotion intensity coefficient reflects the strength of the user's feeling toward the feature: the larger the coefficient, the stronger the feeling.
By quantifying the attention degree from the order of appearance and the emotion intensity of each evaluation feature in the effective positive evaluations, the invention can analyze users' evaluation content more carefully and learn which specific features users focus on.
The evaluation features appearing in all effective positive evaluations of the available pictures are grouped by identical feature to obtain the effective positive evaluations corresponding to each evaluation feature.
The attention degrees of each evaluation feature in the different effective positive evaluations are accumulated to obtain the total attention degree of each evaluation feature, and the evaluation feature with the maximum total attention degree is extracted as the attention feature of the available picture.
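A sketch of the attention scoring and accumulation is given below. It assumes, purely for illustration, the scoring attention = q × (n − p + 1) / n, which grows with the emotion intensity coefficient q and decays with the appearance number p, consistent with the behavior described above; the exact formula may differ in a real implementation.

```python
# Sketch of attention scoring and accumulation. Each effective positive
# evaluation is a list of (feature, appearance_number, intensity) triples;
# the per-evaluation score is q * (n - p + 1) / n (an illustrative
# assumption), and scores are summed per feature across evaluations.

from collections import defaultdict

def total_attention(evaluations: list) -> dict:
    """Return the total attention degree accumulated per evaluation feature."""
    totals = defaultdict(float)
    for phrases in evaluations:
        n = len(phrases)  # number of evaluation characterization phrases
        for feature, p, q in phrases:
            totals[feature] += q * (n - p + 1) / n
    return dict(totals)

def attention_feature(evaluations: list) -> str:
    """Return the evaluation feature with the maximum total attention."""
    totals = total_attention(evaluations)
    return max(totals, key=totals.get)
```
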
In a further preferred implementation, when the user has historical transaction records, the user tendency attention feature is analyzed by extracting the transaction pictures from the user's historical transaction records and retrieving the usage evaluations of each transaction picture from its detail page.
The effective positive evaluations are screened out from the usage evaluations of each transaction picture, and attention feature identification is performed on these effective positive evaluations to obtain the attention feature of each transaction picture.
The attention features of the transaction pictures corresponding to the historical transaction records are compared, and the attention feature with the highest occurrence frequency is extracted as the user tendency attention feature.
In a further implementation, when the user has no historical transaction records, the analysis of the user tendency attention feature retrieves historical transaction records of the platform based on the usage scenario input by the user, and extracts from the retrieved records the historical transaction records conforming to that usage scenario as associated historical transaction records.
The user tendency attention feature is then analyzed from the associated historical transaction records following the same analysis process used when the user has historical transaction records.
In a further preferred implementation, the adaptive picture screening matches the attention feature of each available picture with the user tendency attention feature and screens out the successfully matched available pictures as adaptive pictures.
The user tendency attention feature is analyzed precisely because users are more willing to accept recommendations that match their own interests; identifying it therefore allows more personalized recommendations to be provided and improves user satisfaction.
It should be noted that when adaptive picture screening is performed based on the usage scenario input by the user, a set of initially eligible pictures is first determined according to the user-specified usage scenario and the evaluation star levels of the historical usage evaluations. This stage mainly screens the formal properties of the pictures to ensure that the selected pictures meet the user's preliminary requirements in terms of basic conditions. On the basis of this preliminary screening, the evaluation features in the historical usage evaluations are then further analyzed and used for content-level matching, ensuring that the finally recommended pictures not only meet the user's requirements in form but are also highly matched to the user's attention features in content.
The picture recommendation and display module is used for interactively displaying the screened adaptive pictures. Specifically, the screened adaptive pictures are arranged in descending order of evaluation star level, the first adaptive picture in the descending order is extracted and displayed in the retrieval results of the retrieval page, and a navigation button is provided for switching to the other adaptive pictures.
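The ordering and navigation behavior can be sketched as follows (the tuple layout and wrap-around navigation are illustrative assumptions):

```python
# Sketch of the interactive display step: sort adaptive pictures by
# evaluation star level in descending order, show the first in the
# search results, and let a "next" action step through the rest.

def order_for_display(adaptive: list) -> list:
    """`adaptive` is a list of (picture_id, star_level); highest star first."""
    return sorted(adaptive, key=lambda item: item[1], reverse=True)

def navigate(ordered: list, index: int) -> tuple:
    """Return the picture shown after pressing 'next' from position
    `index`, wrapping around at the end of the list."""
    return ordered[(index + 1) % len(ordered)]
```
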
According to the invention, displaying the top-ranked adaptive picture in the search results of the retrieval page ensures that the user first sees the best-rated picture, improving satisfaction with the recommendation results. The user does not need to browse all pictures one by one but can directly see the screened and sorted best options, saving time and effort. In addition, the navigation button lets the user conveniently browse the other adaptive pictures, increasing the depth of interaction; more importantly, switching pictures with the navigation button gives the user a greater sense of participation, which increases the user's stickiness to the platform. In combination, through this interactive display mode the user can conveniently view the first picture and switch to the others with simple input, achieving an efficient browsing and decision-making process.
The scene configuration library is used for storing the background objects conventionally configured for different scenes.
The foregoing is merely illustrative of the principles of the invention. Various modifications and additions may be made to the specific embodiments described, or similar arrangements may be substituted by those skilled in the art, without departing from the principles of the invention or exceeding the scope of the invention as defined in the claims.