
HK1177021B - Crowd-sourcing and contextual reclassification of rated content - Google Patents

Crowd-sourcing and contextual reclassification of rated content

Info

Publication number
HK1177021B
Authority
HK
Hong Kong
Prior art keywords
user
content
rating
content item
demographic
Prior art date
2010-03-23
Application number
HK13104184.3A
Other languages
Chinese (zh)
Other versions
HK1177021A1 (en)
Inventor
M. E. Mercuri
Original Assignee
Microsoft Technology Licensing, LLC
Priority claimed from US12/729,243 (published as US20110238670A1)
Application filed by Microsoft Technology Licensing, LLC
Publication of HK1177021A1
Publication of HK1177021B


Description

Crowd-sourcing and contextual reclassification of rated content
Background
The internet is full of many different types of content, such as text, video, audio, and so forth. Many sources produce content, including traditional media channels (e.g., news sites), personal blogs, retail stores, product manufacturers, and so forth. Some websites aggregate information from other sites. For example, using Really Simple Syndication (RSS) feeds, website authors make content available for consumption by other sites or users, and syndication sites can consume various RSS feeds to provide syndicated content.
Content publishers often provide tools for rating content or receiving opinions from users about the content (e.g., along a positive, negative, or intermediate dimension). For example, a video may include a display of five stars that a user can click to rate the video from one to five stars. The publisher may also display ratings based on input from multiple users and use the ratings in a search (e.g., to return the highest-rated content or to order content by rating) or other workflow. An organization may also rate content internally or externally, such as to determine which of several candidate advertising campaigns will be most effective for a target demographic. In the real-time web world, it is beneficial for organizations to receive contextually relevant assessments of content.
One area in which content opinions matter is protecting the reputation of an organization. An organization's reputation may be one of its most important assets. For example, a company's sales may be determined in part by how much customers trust the company to deliver high-quality products in a timely manner. Many customers decide whether they will continue to do business with a particular company based on how its customer service handles errors (e.g., lost shipments, damaged goods, etc.). Many organizations have built significant reputations around the quality of their customer service, while others have suffered losses due to negative impressions of their customer service. Customers may upload content to various sites that affects the reputation of an organization.
Given the amount of data, most content can only be evaluated by automated algorithms, which provide mixed performance. Algorithms are often trained on generic result sets, so their accuracy may vary widely when they are applied in various contexts, such as generational perceptions, geographically specific slang, geographically specific cultural beliefs, business verticals, and so forth. An organization may initially rate content automatically and then follow with a manual process to adjust the rating or interpret its meaning.
Unfortunately, opinions vary from person to person. Simply because millions of teenagers like a particular content item does not guarantee that the elderly will like it. Likewise, content that is humorous in one country or language may seem boring, or even worse, impolite, in other regions or languages. In the real-time web world, organizations need to be able to easily identify content opinions for a variety of different groups and for a variety of different purposes. In addition, organizations need to be able to validate automatic opinion algorithms and adjust those algorithms based on experience.
Disclosure of Invention
A content evaluation system is described herein that enables end users and organizations to share their interpretation of automatically generated opinion scores. The system may provide a simple visual mechanism, such as a slider bar, that the user can move to indicate approval or disapproval of the automatic scoring. The system adds metadata to the revised score that tracks information about the user providing the feedback, to account for different demographic contexts. The system re-scores content using the user-provided scores, taking context into account, and then presents the re-scored values through context-specific endpoints. The content evaluation system provides a crowd-sourcing scheme that is very scalable, adds precision (because the scoring is performed by individuals within known demographic categories/contexts), and generates value-added data products that can be sold or resold. In addition, the resulting data set can be used to improve automated content evaluation algorithms, thereby increasing the accuracy of the algorithms and providing context-specific variants. Thus, the content evaluation system provides a mechanism for individuals and organizations to override the values assigned by the automated content evaluation process, while providing context about the individual or organization overriding the algorithm's score.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Drawings
FIG. 1 is a block diagram that illustrates components of the content evaluation system, in one embodiment.
FIG. 2 is a flow diagram that illustrates the processing of the content evaluation system to rate content in one embodiment.
FIG. 3 is a flow diagram that illustrates the processing of the system to receive opinion rating overrides for content items from users in one embodiment.
FIG. 4 is a flow diagram that illustrates the processing of the system to re-evaluate aggregate scores in one embodiment.
FIG. 5 is a block diagram that illustrates an operating environment of the content evaluation system, in one embodiment.
Detailed Description
A content evaluation system is described herein that enables end users and organizations to share their interpretation of automatically generated opinion scores. The system may provide a simple visual mechanism, such as a slider bar, that the user can move to indicate approval or disapproval of the automatic scoring. The system adds metadata to the revised score that tracks information about the user providing the feedback, to account for different demographic contexts. For example, the system allows an administrator to later determine the impression of content among users of a particular age range, gender, social status, and so forth. The system re-scores content using the user-provided scores, taking context into account, and then presents the re-scored values through context-specific endpoints. The content evaluation system provides a crowd-sourcing scheme that is very scalable, adds precision (because the scoring is performed by individuals within a known demographic category/context), and generates value-added data products that can be sold or resold. In addition, the resulting data set can be used to improve automated content evaluation algorithms, thereby increasing the accuracy of the algorithms and providing context-specific variants. Thus, the content evaluation system provides a mechanism for individuals and organizations to override the values assigned by the automated content evaluation process, while providing context about the individual or organization overriding the algorithm's score. The revised score has a context-specific metadata tag associated with it and is evaluated quantitatively together with the revised scores of other individuals. The system then recalculates the context-specific score and exposes the score through a web service for consumption by websites, web services, and applications.
In some embodiments, the content evaluation system provides a mechanism for using human and demographic context to contextually re-score information. As described herein, the system may present to the user an automatic score that reflects a positive or negative impression of a content item, and allow the user to indicate agreement or disagreement with the automatic score. The user has an associated user profile, previously created and stored by the system, that captures demographic information about the user, so that when the user overrides the content score, the system can store both the modified score and the demographics associated with the user modifying the score. After many such users perform similar actions, the system can accumulate statistics describing modifications made by users with similar demographic characteristics to identify trends in content evaluation within particular demographic categories.
In some embodiments, the content evaluation system collects and aggregates user score modifications from many different users to identify trends. For example, the system may provide a website through which a user may view and evaluate content. The website may provide an indication of an automatic score for the content, or of a score that reflects historical user feedback received over time about the content item. The system stores data points according to demographic tags so that an administrator can later generate statistical analysis of the scoring data split according to various demographic combinations. For example, an administrator may wish to know the impression of a particular content item by females aged 15-25 years, and then wish to know the impression of that particular content item by females of all ages living on the west coast. By storing impression information associated with known demographic traits as each impression is received, the system facilitates later analysis according to a variety of different criteria.
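As a concrete illustration of the kind of demographic split described above, consider the following minimal Python sketch, which averages stored rating records matching arbitrary demographic criteria. The record fields and sample data here are hypothetical, not drawn from the patent.

```python
from statistics import mean

# Hypothetical rating records: each stores a score plus demographic metadata,
# mirroring the per-impression storage described above.
ratings = [
    {"item": "video-1", "score": 4, "age": 17, "gender": "F", "region": "west-coast"},
    {"item": "video-1", "score": 2, "age": 64, "gender": "F", "region": "midwest"},
    {"item": "video-1", "score": 5, "age": 22, "gender": "F", "region": "west-coast"},
    {"item": "video-1", "score": 3, "age": 41, "gender": "M", "region": "west-coast"},
]

def average_score(records, predicate):
    """Average the scores of records matching a demographic predicate."""
    matching = [r["score"] for r in records if predicate(r)]
    return mean(matching) if matching else None

# Females aged 15-25, then females of all ages living on the west coast.
print(average_score(ratings, lambda r: r["gender"] == "F" and 15 <= r["age"] <= 25))
print(average_score(ratings, lambda r: r["gender"] == "F" and r["region"] == "west-coast"))
```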
In some embodiments, the content evaluation system exposes an Application Programming Interface (API) for users, services, and applications to access content evaluation information compiled by the system based on user impressions and to generate reports and statistical analysis based on the collected data. The system may provide a website, web service, or other interface to provide broad access to the data collected by the system, and so that other applications and systems may identify and use the data variants identified by the system to drive larger solutions and workflows.
In some embodiments, the content evaluation system embeds a mechanism for opinion override (e.g., a slider control) into an application or website. Upon receiving an opinion override, the website invokes a web service and provides the content identifier, the revised score, and demographic information (e.g., age, geographic location, business vertical, etc.) of the individual/organization providing the revised score. The web service stores the revised score in a hosted data store (e.g., an online database or cloud-based storage service). The service evaluates the demographic data of the individual/organization that provided the revised score, assigns appropriate metadata tags to the content to track the demographic data, and creates a record in the database for the revision. Software periodically evaluates the crowd-sourced scores using the context of the metadata tags and re-scores the content along multiple dimensions (e.g., age, geographic location, business vertical, etc.) for different contexts. The revised score is then stored in the hosted database. The web service exposes updated context-specific scores for the content, which are then consumed by websites, services, and applications accessing the content evaluation system.
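The override web service described above might be sketched as follows, using Flask purely as an illustrative assumption (the patent names no implementation technology); the endpoint path, JSON field names, and in-memory store are all hypothetical.

```python
from flask import Flask, jsonify, request  # third-party: pip install flask

app = Flask(__name__)

# Hypothetical in-memory stand-in for the hosted data store.
override_records = []

@app.route("/override", methods=["POST"])
def receive_override():
    """Accept a revised opinion score plus demographic context from a client."""
    payload = request.get_json()
    record = {
        "content_id": payload["content_id"],
        "revised_score": payload["revised_score"],
        # Demographic context kept for later metadata tagging and re-scoring.
        "demographics": {
            "age": payload.get("age"),
            "location": payload.get("location"),
            "vertical": payload.get("vertical"),
        },
    }
    override_records.append(record)
    return jsonify({"status": "stored", "records": len(override_records)}), 201

if __name__ == "__main__":
    app.run(port=8080)
```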
FIG. 1 is a block diagram that illustrates components of the content evaluation system, in one embodiment. The system 100 includes a publisher interface component 110, a baseline evaluation component 120, an opinion data store 130, a user interface component 140, a user feedback component 150, a user demographics component 160, an auto-adjustment component 170, and a data consumer interface component 180. Each of these components is discussed in more detail herein.
The publisher interface component 110 provides an interface that can be used by publishers to add content to the system that is to be automatically and manually rated. For example, a publisher may use a publisher interface to post a new video to a website. The publisher interface component 110 also provides a way for publishers to view the current rating status of one or more content items and obtain reports related to various demographic profiles.
The baseline evaluation component 120 automatically determines a sentiment rating for a content item. The component 120 may use a variety of automated rating algorithms to develop a baseline rating for the content item. Users of the system 100 adjust the baseline rating by providing feedback expressing their opinion of the accuracy of the automatic rating. The baseline evaluation component 120 can employ a variety of automated methods of rating content, and the scores of the various methods can be combined (e.g., averaged). In addition, the baseline evaluation component 120 receives adjustment information based on user ratings received over time, which the component 120 can use to improve the quality and accuracy of baseline automatic sentiment ratings.
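A minimal sketch of the combination step mentioned above: several automated scorers are averaged into one baseline. The individual heuristics below are invented stand-ins for whatever rating algorithms component 120 actually employs.

```python
def length_heuristic(item):
    # Hypothetical scorer: assume shorter items tend to score higher.
    return max(0.0, 1.0 - len(item["text"]) / 10000)

def keyword_heuristic(item):
    # Hypothetical scorer: penalize items containing blocked words.
    blocked = {"spam", "scam"}
    return 0.0 if blocked & set(item["text"].lower().split()) else 1.0

def baseline_rating(item, scorers=(length_heuristic, keyword_heuristic)):
    """Average the scores of several automated methods into one baseline."""
    scores = [scorer(item) for scorer in scorers]
    return sum(scores) / len(scores)

print(baseline_rating({"text": "A short, friendly product review."}))
```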
The opinion data store 130 stores rating information for one or more content items. The data store may include disk drives, file systems, databases, Storage Area Networks (SANs), cloud-based storage services, or other tools for persisting data. For example, the system 100 may use a database that includes a table in which each row stores a particular user rating along with demographic metadata identifying the demographic characteristics of the user providing the opinion rating. Other components may query the opinion data store 130 in a variety of ways to extract information relevant to a particular report or other objective. For example, a component may query for ratings from users of a particular age range or geographic residence.
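One possible shape for such rows, using SQLite purely for illustration; the table and column names are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE opinion_ratings (
           content_id TEXT NOT NULL,
           rating     REAL NOT NULL,  -- the user's opinion score
           age_band   TEXT,           -- demographic metadata tags
           gender     TEXT,
           region     TEXT
       )"""
)
conn.execute(
    "INSERT INTO opinion_ratings VALUES (?, ?, ?, ?, ?)",
    ("video-1", 4.0, "15-25", "F", "west-coast"),
)

# A query like the one described above: ratings from users of a
# particular age range and geographic residence.
rows = conn.execute(
    "SELECT rating FROM opinion_ratings WHERE age_band = ? AND region = ?",
    ("15-25", "west-coast"),
).fetchall()
print(rows)  # [(4.0,)]
```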
The user interface component 140 provides a user interface that can be used by users of the system 100 to provide manual opinion ratings through user interface controls. For example, the user interface may display content items to the user and provide a slider control near each content item through which the user can specify, on a scale, his or her opinion of the content item (e.g., likes it, dislikes it). The user interface component 140 may also provide other controls, pages, or interfaces to the user for searching for content items, specifying profile/demographic information, receiving credit for rating content items, and so forth.
The user feedback component 150 receives user feedback from the user interface and stores the user feedback in the opinion data store 130. For example, if the user slides the slider control one way to a negative value, the component 150 may record a data row indicating that the user dislikes the content item. The row may include a content identifier, the user's particular sentiment rating for the item, and the demographics associated with the user.
The user demographics component 160 tracks user demographic information to be used when a user rates a content item and when a data consumer receives reports about user opinion ratings. The user demographics component 160 may maintain a stored profile for each user that includes information about the user (e.g., age, place of residence, gender, affiliations, etc.). Alternatively or additionally, the component 160 may obtain similar information from the user upon receiving the rating indication. For example, a user may access the system 100 anonymously, but the system may request that the user give their age or other demographic information before providing the content item for rating by the user.
The auto-adjustment component 170 creates a feedback loop between the automatic evaluation and the actual rating values received from users. The automatic evaluation attempts to determine a baseline quality level for a content item, but may not accurately predict what users will like. If user ratings indicate strong disagreement with, or an inclination opposite to, the automated evaluation results, the component 170 may incorporate the user feedback to adjust the automated algorithm to produce better results. For example, the adjustment may soften an assumption of the automatic algorithm (e.g., that longer content will not be rated highly), or adjust a parameter of the automatic algorithm (e.g., a threshold level before a content item is determined to be objectionable, generally or in a particular context). Over time, the user ratings fed back to the automatic evaluation by the auto-adjustment component 170 improve the accuracy of the automatic evaluation, providing a better initial baseline result (which users may then further adjust).
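One way such a feedback loop might adjust an algorithm parameter is sketched below: an objectionability threshold is nudged toward crowd consensus. The parameter, update rule, and learning rate are illustrative assumptions, not the patent's prescribed method.

```python
def adjust_threshold(threshold, auto_score, user_scores, learning_rate=0.1):
    """Nudge an automatic-algorithm threshold toward crowd consensus.

    If users consistently rate an item higher than the automatic
    evaluation did, relax the threshold; if lower, tighten it.
    """
    consensus = sum(user_scores) / len(user_scores)
    error = consensus - auto_score  # positive: the algorithm was too harsh
    return threshold - learning_rate * error

threshold = 0.5
threshold = adjust_threshold(threshold, auto_score=0.3, user_scores=[0.8, 0.9, 0.7])
print(threshold)  # relaxed, because users disagreed with the low automatic score
```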
The data consumer interface component 180 provides aggregated data regarding content item opinions to one or more data consumers. For example, the component 180 may provide an API (e.g., a web services API or other protocol) that a data consumer can use to submit a data query and receive matching results. For example, a data consumer may request the opinions of users of a particular demographic, or of users from all groups, with respect to a particular content item. The system 100 may automatically identify trends and create data groups that data consumers can enumerate and around which they can query for additional information. For example, the system 100 may determine that a particular age group has a much more positive opinion of a particular content item (or type of content item) than other age groups. If the content item is an advertisement, a data consumer may use this information to better target the advertisement to the age group that will respond most positively.
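A sketch of the kind of trend detection component 180 might expose; the grouping logic and sample data are hypothetical simplifications.

```python
from collections import defaultdict
from statistics import mean

def opinions_by_group(records, group_key):
    """Average opinion per demographic group for a single content item."""
    groups = defaultdict(list)
    for record in records:
        groups[record[group_key]].append(record["score"])
    return {group: mean(scores) for group, scores in groups.items()}

records = [  # hypothetical stored ratings for one advertisement
    {"score": 5, "age_band": "15-25"},
    {"score": 4, "age_band": "15-25"},
    {"score": 2, "age_band": "45-55"},
]
by_age = opinions_by_group(records, "age_band")
best = max(by_age, key=by_age.get)
print(f"Most receptive age band: {best} ({by_age[best]:.1f})")
```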
The computing device on which the content evaluation system is implemented may include a central processing unit, memory, input devices (e.g., keyboard and pointing devices), output devices (e.g., display devices), and storage devices (e.g., disk drives or other non-volatile storage media). The memory and storage devices are computer-readable storage media that may be encoded with computer-executable instructions (e.g., software) that implement or enable the system. In addition, the data structures and message structures may be stored or transmitted via a data transmission medium, such as a signal on a communications link. Various communication links may be used, such as the Internet, a local area network, a wide area network, a point-to-point dial-up connection, a cellular telephone network, and so forth.
Embodiments of the system may be implemented in various operating environments that include personal computers, server computers, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, programmable consumer electronics, digital cameras, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and so on. The computer system may be a cellular telephone, personal digital assistant, smart phone, personal computer, programmable consumer electronics, digital camera, or the like.
The system may be described in the general context of computer-executable instructions, such as program modules, executed by one or more computers or other devices. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Typically, the functionality of the program modules may be combined or distributed as desired in various embodiments.
FIG. 2 is a flow diagram that illustrates the processing of the content evaluation system to rate content in one embodiment. These steps occur after the system receives a new content item for which a publisher or other party wants to determine and track an opinion rating that indicates the content item's appeal to an audience of users. Beginning in block 210, the system receives a content item for which a publisher wants to determine and track a sentiment rating. For example, a publisher may upload content items to a web service through a publisher interface, and the web service may implement the system described herein and provide ratings for content items through automated and crowd-sourced tools.
Continuing in block 220, the system determines a baseline automatic sentiment rating for the received content item. The system may use one or more well-known automatic content rating algorithms to determine a baseline rating, or may set an initial default rating (e.g., 50%, three stars, or a similar neutral value). The system may also incorporate adjustment feedback, derived from prior iterations of user feedback that overrode the baseline rating, to improve the baseline rating. Continuing in block 230, the system receives a request to access the received content item. For example, a content distributor may place a content item on a website or other distribution source so that a user can access the content item. The content item may include any type of content, such as text, images, video, audio, movies, presentation data, and so forth. The system may receive a content access request from a client web browser in response to a user directing the browser to access a website.
Continuing in block 240, the system provides the requested content item for display to the user, along with a control for receiving a user rating of the content item. For example, the system may provide an embeddable object, such as a web control or a MICROSOFT™ SILVERLIGHT™ application, that displays the requested content together with a slider or other control the user can manipulate to score his or her opinion of the content item. For example, the user may slide the slider to the left when the user dislikes the content item, or to the right when the user likes the content item.
Continuing in block 250, the system receives an opinion rating override from the user, as further described with reference to FIG. 3. Continuing with the previous example, user manipulation of the slider control may cause the system to receive an HTTP POST or other data upload specifying the identity of the content item, the identity or characteristics of the user, and the user's score for the content item. Continuing in block 260, the system waits for the next request to access the content item and then loops back to block 230 to receive the request. The system may make the content item available for rating indefinitely, or for as long as the publisher requests that the content item be available. After block 260, these steps conclude.
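From the client side, the upload in block 250 might look like the following, using the requests library; the URL and JSON fields are hypothetical and mirror the service sketch given earlier.

```python
import requests  # third-party: pip install requests

# Hypothetical payload produced when the user releases the slider.
payload = {
    "content_id": "video-1",
    "revised_score": -0.6,  # negative: the user slid toward "dislike"
    "age": 23,
    "location": "Seattle, WA",
    "vertical": "retail",
}
response = requests.post("http://localhost:8080/override", json=payload, timeout=5)
print(response.status_code, response.json())
```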
FIG. 3 is a flow diagram that illustrates the processing of the system to receive opinion rating overrides from users for content items in one embodiment. Beginning in block 310, the system receives a rating for a content item from a user. For example, as described with reference to FIG. 2, a user may view a web page or other site containing the content item and, upon viewing the item, may provide a rating score for the content item that specifies the user's opinion of it. Continuing in block 320, the system stores the revised score in a data store for subsequent analysis and reporting. For example, the system may store the score in a database that includes individual and/or aggregate scoring information for one or more content items provided by publishers. The score may include a numerical value, an enumerated value, a Boolean indication of whether the user likes the content, or any other scoring paradigm for content (e.g., x out of 5 stars, etc.).
Continuing in block 330, the system determines a demographic profile of the user that provided the received rating for the content item. For example, the system may determine the user's age, geographic location (e.g., based on coordinate information from a GPS module, a software-provided geographic location API, or the IP address of the user's client machine), business vertical, or other characteristics related to the user. The system tracks demographic data, specified by publishers or determined by the system, that can potentially distinguish the perspectives of one user group from another. Continuing in block 340, the system assigns metadata tags to the record associated with the user's revised score for the content item based on the determined demographic profile of the user. The system may store the user's raw demographic information (e.g., age) or may associate tags that specify particular relevant demographic categories (e.g., an age 25-35 category). The record may contain a number of categories applicable to the user, such as age, location, gender, and so forth.
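Mapping raw demographics to category tags, as block 340 describes, might look like the following sketch; the band boundaries and tag format are arbitrary examples.

```python
AGE_BANDS = [(15, 25), (25, 35), (35, 45), (45, 55)]

def demographic_tags(profile):
    """Translate a raw user profile into metadata tags for the rating record."""
    tags = []
    for low, high in AGE_BANDS:
        if low <= profile["age"] < high:
            tags.append(f"age:{low}-{high}")
            break
    if profile.get("region"):
        tags.append(f"region:{profile['region']}")
    if profile.get("gender"):
        tags.append(f"gender:{profile['gender']}")
    return tags

print(demographic_tags({"age": 28, "region": "west-coast", "gender": "M"}))
# ['age:25-35', 'region:west-coast', 'gender:M']
```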
Continuing in block 350, the system stores the assigned metadata tags in association with the user's revised score so that subsequent reporting and analysis can process revised content item ratings by demographic profile. For example, a particular publisher may want to know the opinions of males aged 30 to 40 regarding a particular content item, and may access the system to retrieve ratings for that and other demographic data. After block 350, these steps conclude.
FIG. 4 is a flow diagram that illustrates the processing of the system to re-evaluate aggregate scores in one embodiment. The following steps occur periodically, after a sufficient number of override ratings have been received for the system to update the aggregated data for a particular demographic. The system may track aggregated data for specified demographics or based on dynamically determined demographics. Beginning in block 410, the system identifies content items for which the system is tracking opinion rating information. For example, the system may include a database of items for which it is tracking rating information and may periodically iterate through each content item to update the aggregate statistics.
Continuing in block 420, the system evaluates the received crowd-sourced ratings for the identified content items based on the metadata tags that identify the demographic profiles of the users who revised the ratings of the content items. For example, the system may determine that updated scores are available from users of various genders and ages. Continuing in block 430, the system re-scores the content item based on the demographic contexts for which the system has received revised ratings. For example, if the system determined a baseline score for a content item during the last iteration for users satisfying a demographic profile, the system may re-score the content item based on the override rating information received since then from users satisfying that demographic profile. If the user ratings differ significantly from the results of the automatic scoring algorithm, the system may store tuning parameters (not shown) to modify the behavior of the automatic algorithm in order to improve future results.
Continuing in block 440, the system stores the revised aggregate score for the content item in a data store according to one or more demographic contexts. For example, the system may update a score in the database of aggregated content rating information for one or more content items. Continuing in block 450, the system publishes the stored scores so that data consumers can determine user ratings of the content items for one or more demographic profiles. For example, the system may provide a data consumer interface (e.g., a web service or other programmatic API, or a user-accessible web page) through which a data consumer can submit a query for the identified content items and receive results based on the data recorded by the system. After block 450, these steps conclude.
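Putting blocks 410-450 together, a periodic re-scoring pass might be sketched as follows; blending the automatic baseline with crowd overrides via a fixed weight is an illustrative assumption, not the patent's prescribed formula.

```python
from collections import defaultdict
from statistics import mean

def rescore(baseline, overrides, weight=0.7):
    """Blend the automatic baseline with crowd overrides per demographic tag.

    `overrides` is a list of (tag, score) pairs collected since the last
    pass; the result is one context-specific score per demographic tag.
    """
    by_tag = defaultdict(list)
    for tag, score in overrides:
        by_tag[tag].append(score)
    return {
        tag: (1 - weight) * baseline + weight * mean(scores)
        for tag, scores in by_tag.items()
    }

overrides = [("age:15-25", 0.9), ("age:15-25", 0.8), ("age:45-55", 0.2)]
print(rescore(baseline=0.5, overrides=overrides))
# approximately {'age:15-25': 0.745, 'age:45-55': 0.29}
```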
FIG. 5 is a block diagram that illustrates an operating environment of the content evaluation system, in one embodiment. The server computer 510 comprises one implementation of the content evaluation system. The server computer 510 provides a crowd-sourced opinion service 520 to one or more clients, such as client 530. The client provides the user with an experience that includes displaying a content item and an opinion indicator 540, which the user can manipulate to indicate the user's opinion of the content item. For example, the user may slide the illustrated slider to the left to indicate a more negative opinion and to the right to indicate a more positive opinion. The client sends the opinion override 550 to the server computer 510. The server computer 510 provides the opinion override to the rating and reclassification logic 560 of the content evaluation system. The system incorporates the user's evaluation of the content into an aggregate score (or scores) for the content, which includes demographic information about the users who have rated the content item, as described further herein.
In some embodiments, the content evaluation system allows site publishers to resell data. For example, a website such as huffingtonpost.com may resell data about users' views of the site's content to the content creators, so that the creators can improve the appeal of future content. A content creator acting as an advertiser may determine that users of a certain demographic enjoy science fiction videos but not baby videos, and can therefore make more science fiction videos or allocate advertising dollars to advertise in or around science fiction videos. This may allow content creators to produce advertisements that are more attractive, drive brand value, and increase their customer base. Any site displaying content can become a platform for generating approval data for content creators, regardless of who owns the site used to publish the content. The system can then aggregate the approval data across all content providers to obtain a picture of what is happening universally.
In some embodiments, the operator of the content evaluation system provides data back to content sites to encourage adoption of the system. For example, in return for a content site providing rating information about content items to the system, the system may reward the content site by providing a report indicating which of the site's content users like most. The system can retrieve statistical information about users based on demographic profiles so that the content site operator can improve the site's content for a target demographic group.
From the foregoing, it will be appreciated that, although specific embodiments of the content evaluation system have been described herein for purposes of illustration, various modifications may be made without deviating from the spirit and scope of the invention. Accordingly, the invention is not limited except as by the appended claims.

Claims (14)

1. A computer-implemented method for crowd-sourced rating of online content, the method comprising:
receiving an identification of a content item for which a publisher wants to determine and track an opinion rating;
determining a baseline automatic opinion score for the identified content item using an automatic content evaluation algorithm;
receiving a request from a user to access the identified content item;
providing the requested content item for display to a user with a control for receiving a user rating for the content item;
receiving, by the provided control, a revised rating for the content item;
determining a demographic profile of a user providing the received rating of the content item;
assigning at least one metadata tag to a record associated with the user's revised score for the content item based on the user's determined demographic profile; and
storing the assigned metadata tags in association with the received revised ratings to allow the content items to be re-scored along multiple dimensions of different demographic contexts, wherein the resulting data set of the assigned metadata tags and revised content item ratings is used to refine the automatic content evaluation algorithm;
wherein the foregoing steps are performed by at least one processor.
2. The method of claim 1, wherein receiving the identification of the content item comprises: receiving a content item identifier from the publisher that distinguishes the content item from other content items.
3. The method of claim 1, wherein determining the baseline automatic sentiment rating comprises: incorporating adjustment feedback, derived from a previous iteration of user feedback overriding the baseline rating, to improve the baseline rating.
4. The method of claim 1, wherein receiving the request to access the content item comprises: receiving a content access request from a client web browser in response to a user directing the browser to access a website.
5. The method of claim 1, wherein providing the requested content item comprises: providing an embeddable object that displays the requested content and a control that the user can manipulate to score the user's opinion of the content item.
6. The method of claim 1, wherein receiving the user rating for the content item comprises: receiving an indication that the user manipulated the control to override the original opinion indication provided by the control.
7. The method of claim 1, wherein determining the demographic profile of the user comprises: receiving profile information from the user that describes one or more groups of which the user is a member.
8. The method of claim 1, wherein assigning metadata tags comprises: assigning a plurality of demographic tags corresponding to a group to which the user belongs.
9. The method of claim 1, wherein storing the assigned metadata tag and revised rating comprises: updating a database of content ratings to track user impressions belonging to the user's demographic profile.
10. A computer system for crowd-sourced rating and reporting of online content, the system comprising:
a processor and memory configured to execute software instructions;
a publisher interface component configured to provide an interface that can be used by a publisher to add content to the system that is to be automatically and manually rated;
a baseline evaluation component configured to automatically determine a sentiment rating for a content item using an automatic content evaluation algorithm;
an opinion data store configured to store rating information for one or more content items, the opinion data store further configured to store data rows, each of which stores a particular user rating and demographic metadata identifying demographic characteristics of each user providing an opinion rating, such that the one or more content items can be re-scored along multiple dimensions of different demographic contexts;
a user interface component configured to provide a user interface usable by a user of the system to provide manual opinion ratings through user interface controls;
a user feedback component configured to receive user feedback through the user interface and store the user feedback in the opinion data store;
a user demographic component configured to track user demographic information as a user rates content items and to provide the demographic information to a data consumer, the data consumer receiving a report from the system describing user opinion ratings; and
a data consumer interface component configured to provide aggregated data regarding content item opinions to one or more data consumers;
wherein the automatic content evaluation algorithm is improved using the user demographic information and the user feedback.
11. The system of claim 10, the publisher interface component further configured to provide a tool for the publisher to view a current rating status of one or more content items and to obtain reports related to a demographic profile of a user who has rated the content items.
12. The system of claim 10, the baseline evaluation component further configured to receive adjustment information based on user ratings received over time, and to apply the adjustment information to improve the quality and/or accuracy of baseline automatic sentiment ratings provided by the component.
13. The system of claim 10, wherein the user interface component is further configured to display a content item to the user and provide a slider control in proximity to the content item through which the user can specify his opinion of the content item.
14. The system of claim 10, further comprising an auto-adjustment component configured to create a feedback loop between an automated evaluation and an actual rating received from a user by feeding adjustment parameters to the baseline evaluation component based on a received modification of an automatically determined baseline rating by the user.
HK13104184.3A 2010-03-23 2011-03-18 Crowd-sourcing and contextual reclassification of rated content HK1177021B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US12/729,243 US20110238670A1 (en) 2010-03-23 2010-03-23 Crowd-sourcing and contextual reclassification of rated content
US12/729,243 2010-03-23
PCT/US2011/029084 WO2011119440A2 (en) 2010-03-23 2011-03-18 Crowd-sourcing and contextual reclassification of rated content

Publications (2)

Publication Number Publication Date
HK1177021A1 (en) 2013-08-09
HK1177021B true HK1177021B (en) 2016-01-29

