
HK1218006A1 - Method and system for evaluating user satisfaction with respect to a user session

Info

Publication number: HK1218006A1
Authority: HK (Hong Kong)
Application number: HK16105858.2A
Other languages: Chinese (zh)
Inventors: Alyssa Glass, Scott Gaffney, Xing Yi
Original Assignee: Oath Inc.
Application filed by Oath Inc.

Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00: Commerce
    • G06Q 30/02: Marketing; Price estimation or determination; Fundraising
    • G06Q 30/0241: Advertisements
    • G06Q 30/0251: Targeted advertisements
    • G06Q 30/0255: Targeted advertisements based on user history
    • G06Q 30/0256: User search
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20: Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/24: Querying
    • G06F 16/245: Query processing
    • G06F 16/2457: Query processing with adaptation to user needs
    • G06F 16/24578: Query processing with adaptation to user needs using ranking


Abstract

Methods, systems, and programming for evaluating user satisfaction with respect to a user session are presented. In one example, one or more queries in a user session are received from a user. Information about one or more user activities is obtained. Each user activity is related to manipulation of a content item associated with one of the one or more queries. A score associated with the user session is computed based at least partially on the one or more user activities. User satisfaction with respect to the user session is determined based on the score.

Description

Method and system for evaluating user satisfaction with respect to user sessions
Technical Field
The present teachings relate to methods, systems, and programs for user satisfaction evaluation. In particular, the present teachings are directed to methods, systems, and programs for evaluating user satisfaction with respect to a user session.
Background
Online content searching is the process of interactively searching an online database and retrieving requested information from the online database via a search application running on a local user device (e.g., a computer or mobile device). Online searches are conducted through search systems that include search engines, which are programs that run at remote servers and search for content based on specific queries or keywords submitted by users. The search results of an online search may include a list of some content items or files that are provided to the user. To improve the user's search experience with search systems, it is critical to assess user satisfaction with respect to the performance of the search system.
However, the prior art is limited to evaluating user satisfaction with respect to a single query, although it is more desirable to evaluate user satisfaction with respect to a user session that includes a set of consecutive queries, particularly when the queries in the user session relate to a topic that may reflect the user's informational needs. Furthermore, conventional search systems do not evaluate user satisfaction based on user activity related to manipulating content items. For example, after a user enters a query and receives a content item related to the query on a touch screen, the user performs some activities related to manipulating the content item, including but not limited to pressing the content item, sliding the content item, and zooming in or out of the content item. While information related to these activities may be used to automatically determine or predict user satisfaction, conventional approaches estimate user satisfaction based solely on tags or bookmarks from the user regarding the content item.
Therefore, there is a need to provide an improved solution for assessing user satisfaction to avoid the above-mentioned drawbacks.
Disclosure of Invention
The present teachings relate to methods, systems, and programs for user satisfaction evaluation. In particular, the present teachings are directed to methods, systems, and programs for evaluating user satisfaction with respect to a user session.
In one example, a method, implemented on at least one machine each of which has at least one processor, a storage device, and a communication platform connected to a network, for evaluating user satisfaction with respect to a user session is presented. One or more queries in a user session are received from a user. Information about one or more user activities is obtained. Each user activity relates to an operation performed on a content item, wherein the content item is associated with one of the one or more queries. A score associated with the user session is calculated based at least in part on the one or more user activities. User satisfaction with respect to the user session is determined based on the score.
In various examples, a system for evaluating user satisfaction with respect to a user session is presented, the system having at least one processor, a storage device, and a communication platform. The system comprises a query analysis unit, a user activity detection unit, a user satisfaction determination unit, and a user satisfaction report generation unit. The query analysis unit is implemented on the at least one processor and configured to receive one or more queries from a user in a user session. The user activity detection unit is implemented on the at least one processor and configured to obtain information about one or more user activities, wherein each user activity is related to an operation performed on a content item, the content item being associated with one of the one or more queries. The user satisfaction determination unit is implemented on the at least one processor and configured to calculate a score associated with the user session based at least in part on the one or more user activities. The user satisfaction report generation unit is implemented on the at least one processor and configured to determine user satisfaction with respect to the user session based on the score.
Other concepts relate to software for evaluating user satisfaction with respect to a user session. According to this concept, a software product includes at least one non-transitory machine-readable medium and information carried by the medium. The information carried by the medium may be executable program code, parameters associated with the executable program code, and/or information related to a user, a request, or a social group.
In one example, a non-transitory machine-readable medium having recorded thereon information for evaluating user satisfaction with respect to a user session is presented. When the recorded information is read by a machine, the machine is caused to perform the following operations. One or more queries in a user session are received from a user. Information about one or more user activities is obtained. Each user activity relates to an operation on a content item, wherein the content item is associated with one of the one or more queries. A score associated with the user session is calculated based at least in part on the one or more user activities. User satisfaction with respect to the user session is determined based on the score.
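By way of illustration only, the recited operations can be sketched in Python as follows. This is a minimal sketch under assumed data structures; the class names, the per-activity weights, and the averaging scheme are expository assumptions, not elements prescribed by the present teachings.

```python
from dataclasses import dataclass, field

# Hypothetical minimal types; the present teachings do not prescribe
# concrete data structures for sessions or activities.
@dataclass
class UserActivity:
    kind: str      # e.g., "scroll", "swipe", "zoom", "rotate", "dismiss"
    weight: float  # assumed per-activity contribution to engagement

@dataclass
class UserSession:
    queries: list = field(default_factory=list)     # queries in the session
    activities: list = field(default_factory=list)  # UserActivity records

def evaluate_session(session: UserSession, threshold: float = 0.5) -> bool:
    """Compute a session score from user activities and map it to satisfaction."""
    if not session.activities:
        return False  # no signal observed; cannot conclude satisfaction
    score = sum(a.weight for a in session.activities) / len(session.activities)
    return score >= threshold
```

Under these assumptions, a session whose average activity weight reaches the threshold would be reported as satisfactory.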
Drawings
The methods, systems, and/or programs described herein are further described in terms of exemplary embodiments. These exemplary embodiments are described in detail with reference to the accompanying drawings. These embodiments are non-limiting exemplary embodiments in which like reference numerals represent similar structures throughout the several views of the drawings and wherein:
FIG. 1 is a high-level depiction of an exemplary networking environment for evaluating user satisfaction with respect to a user session, in accordance with embodiments of the present teachings.
FIG. 2 is a high-level depiction of another exemplary networking environment for evaluating user satisfaction with respect to a user session, according to embodiments of the present teachings.
FIG. 3 illustrates an exemplary block diagram of a user satisfaction rating system for evaluating user satisfaction with respect to a user session, according to embodiments of the present teachings.
FIG. 4 is a flowchart of an exemplary process for evaluating user satisfaction with respect to a user session, according to an embodiment of the present teachings.
FIG. 5 illustrates an exemplary block diagram of a user activity detection unit in a user satisfaction evaluation system, according to embodiments of the present teachings.
FIG. 6 is a flowchart of exemplary processing performed by the user activity detection unit, according to an embodiment of the present teachings.
FIG. 7 illustrates an exemplary block diagram of a user session determination unit in a user satisfaction evaluation system, according to embodiments of the present teachings.
FIG. 8 is a flowchart of exemplary processing performed by the user session determination unit, according to an embodiment of the present teachings.
FIG. 9 illustrates an exemplary block diagram of a user satisfaction score determiner in a user satisfaction evaluation system, according to embodiments of the present teachings.
FIG. 10 is a flowchart of exemplary processing performed by the user satisfaction score determiner, according to an embodiment of the present teachings.
FIGS. 11-14 depict exemplary user sessions, according to various embodiments of the present teachings.
FIG. 15 illustrates a content item on a user interface, according to an embodiment of the present teachings.
FIGS. 16-19 illustrate exemplary user activities with respect to a content item, according to various embodiments of the present teachings.
FIG. 20 depicts a general mobile device architecture on which the present teachings may be implemented; and
FIG. 21 depicts a general computer architecture on which the present teachings may be implemented.
Detailed Description
In the following detailed description, numerous specific details are set forth by way of examples in order to provide a thorough understanding of the relevant teachings. However, it should be apparent to one skilled in the art that the present teachings may be practiced without these specific details. In other instances, well-known methods, procedures, systems, components, and/or circuits have been described at a relatively high level, without detail, in order to avoid unnecessarily obscuring aspects of the present teachings.
The present disclosure describes method, system, and program aspects for efficient and effective user satisfaction assessment. The methods and systems disclosed herein aim to improve the satisfaction of the end user search experience by providing accurate and timely user satisfaction ratings.
In the context of mobile or other similar environments, linked lists of search results may not be applicable. When a different approach than the traditional linked list of search results is used to give a user access to content items related to a query, evaluating user satisfaction based on click-through measurements alone may not be sufficient. For example, search results may be presented as "cards" loaded with content relevant to the user query, thereby reducing the need for the user to click or tap a link to access an external or third-party site. In such scenarios, it is therefore important not to rely solely on click-through activity but to evaluate user satisfaction based on other user activities (e.g., vertically scrolling information, horizontally swiping an information carousel, cropping, zooming, rotating, dismissing, collapsing, or selecting an external application in relation to an information card).
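For concreteness, the card-manipulation activities named above could be enumerated as follows; this is a minimal sketch, and the enumeration and its member names are illustrative assumptions rather than part of any actual implementation.

```python
from enum import Enum

class CardActivity(Enum):
    """Gesture types, following the examples in the text; illustrative only."""
    VERTICAL_SCROLL = "vertical_scroll"
    HORIZONTAL_SWIPE = "horizontal_swipe"
    CROP = "crop"
    ZOOM = "zoom"
    ROTATE = "rotate"
    DISMISS = "dismiss"
    COLLAPSE = "collapse"
    EXTERNAL_APP_SELECT = "external_app_select"
    CLICK = "click"  # still tracked, but no longer the only signal
```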
Further, when a user has an information need related to a topic, the user may enter a series of queries about the topic over a period of time. These queries fall into one user session, during which the user activities described above can be detected and used to determine whether, or to what degree, the user was satisfied with the entire user session.
Additional features will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following and the accompanying drawings or may be learned by production or operation of the examples. The novel features of the present teachings may be realized and attained by practice or use of various aspects of the methodologies, instrumentalities and combinations set forth in the detailed examples discussed below.
FIG. 1 is a high-level depiction of an exemplary networked environment 100 for evaluating user satisfaction with respect to a user session, in accordance with embodiments of the present teachings. In FIG. 1, the exemplary networked environment 100 includes a search engine system 130, a user satisfaction evaluation system 140, one or more users 110, a network 120, and content sources 160. The network 120 may be a single network or a combination of different networks. For example, the network 120 may be a Local Area Network (LAN), a Wide Area Network (WAN), a public network, a private network, a Public Switched Telephone Network (PSTN), the internet, a wireless network, a virtual network, or any combination thereof. In the example of internet advertising, the network 120 may be an online advertising network, or ad network, i.e., a company that connects advertisers to web sites that want to host advertisements. A key function of an ad network is to aggregate the advertising space supplied by publishers and match it to advertiser demand. The network 120 may also include various network access points (e.g., wired or wireless access points such as base stations or internet exchange points 120-1, ..., 120-2) through which data sources may connect to the network 120 in order to transmit information via the network 120.
The users 110 may be of different types, such as users connected to the network 120 via a desktop computer 110-1, a laptop computer 110-2, a built-in device in a motor vehicle 110-3, or a mobile device 110-4. The user 110 may send a query to the search engine system 130 via the network 120 and receive a response to the query from the search engine system 130. The response may include content items and/or search results related to the query.
Based on user activity from the user 110 related to operating search results or content items on the user interface, the user satisfaction evaluation system 140 may evaluate whether the user 110 is satisfied with the search service provided by the search engine system 130. In this embodiment, the user satisfaction evaluation system 140 is directly connected to the network 120 and is able to communicate with the user 110 directly via the network 120.
The content sources 160 include a plurality of content sources 160-1, 160-2, ..., 160-3, such as vertical content sources. A content source 160 may correspond to a website hosted by an entity, whether an individual, business, or organization (e.g., uspto.gov), a content provider (e.g., cnn.com and yahoo.com), a social networking website (e.g., facebook.com), or a content feed source (e.g., Twitter or blogs). The search engine system 130 may access information from any of the content sources 160-1, 160-2, ..., 160-3. For example, the search engine system 130 may retrieve content (e.g., websites) through its web crawler to build a search index.
FIG. 2 is a high-level depiction of another exemplary networked environment 200 for evaluating user satisfaction with respect to a user session, according to embodiments of the present teachings. The exemplary networked environment 200 in this embodiment is similar to the exemplary networked environment 100 in FIG. 1, except that the user satisfaction evaluation system 140 in this embodiment connects to the network 120 via the search engine system 130. For example, the user satisfaction evaluation system 140 may act as a backend of the search engine system 130 to evaluate the satisfaction of users in communication with the search engine system 130.
FIG. 3 illustrates an exemplary block diagram of the user satisfaction evaluation system 140 for evaluating user satisfaction with respect to a user session, according to embodiments of the present teachings. The user satisfaction evaluation system 140 can be in an exemplary networked environment (e.g., the networked environment 100 in FIG. 1 or the networked environment 200 in FIG. 2). The user satisfaction evaluation system 140 in this example includes a user activity detection unit 320, a user engagement assessment unit 330, a user session determination unit 340, a user satisfaction score determiner 350, and a user satisfaction report generation unit 360. The search engine system 130 is also shown in FIG. 3 for reference. Based on a query input by the user 310, the search engine system 130 may send a content item to the user 310. The content item may be presented to the user 310 via a user interface. In some embodiments, the content item is an information card related to the user query, on which content relevant to the query is presented.
For example, FIG. 15 shows a user interface 1510 on a mobile device 1520 after the user has submitted a query term in the query input area 1530. In response to submitting the query term, a stack of content items, or information cards, 1540-1, ..., 1540-n is presented to the user on the user interface 1510. As shown, in some embodiments, the presentation of the information cards is provided to the user without any intermediate result set related to the query between receipt of the query and presentation of the information cards. For example, the information cards are presented without first presenting the user with a list of search result links and requiring the user to select (e.g., by clicking, tapping, etc.) one of the presented search result links before being provided with the content item. As depicted, the information card 1540-1 is presented on top of the other information cards 1540 such that the contents of the information card 1540-1 (e.g., in various portions of the information card 1540-1) appear in the user interface 1510. In some embodiments, the user may view or otherwise access the contents of the other information cards by sliding the information card 1540-1 away, dragging the information card 1540-1 to another location in the stack of information cards 1540, selecting another information card from the information cards 1540, and the like. In some embodiments, each of the information cards 1540 may correspond to a respective domain (e.g., weather, restaurants, movies, music, navigation, calendar, etc.). Viewing or otherwise accessing the contents of other information cards may thus allow a user to view or otherwise access content relating to other domains.
The user session determination unit 340 may determine whether a query from the user belongs to a new user session or the current user session. A user session may be a period of exchange of interaction information between a user and a server. In one example, the user may be the user 110, and the server may be the search engine system 130 and/or the user satisfaction evaluation system 140. Since the user satisfaction evaluation system 140 in this example is configured to estimate user satisfaction with respect to the services provided by the search engine system 130 to the user 310, the definition of a user session with respect to the user 310 may be the same for the search engine system 130 and the user satisfaction evaluation system 140. Thus, the user session determination unit 340 may be located in the user satisfaction evaluation system 140, as in this example, or in the search engine system 130, as in another example. In both cases, information related to the user session may be shared between the search engine system 130 and the user satisfaction evaluation system 140.
In one embodiment, the user session determination unit 340 may even be located at a client device of the user 310, such that the user 310 may manually define the user session or set a configuration for defining the user session on the user side. A user session definition made on the user side may be sent to the search engine system 130 and/or the user satisfaction evaluation system 140 for evaluating user satisfaction with respect to the user session.
A user session may be defined based on a particular user, a start time, an end time, and a session separation mode. The session separation mode may be used to determine a start time and an end time of a user session for a particular user. According to various embodiments, different session separation modes may be used.
FIG. 11 depicts an exemplary user session, according to embodiments of the present teachings. Under the session separation mode illustrated in FIG. 11, user sessions are separated based on the idle period between two consecutive queries. In particular, after a current search query is received at the user session determination unit 340, the idle time between the current query and the most recent previous query received from the same user is compared with a predetermined threshold to determine whether the two consecutive queries belong to the same user session. If the idle time does not exceed the threshold, the current query is recorded in the current user session. Otherwise, a new user session is created and the current query is recorded as the first query of the new user session. For example, in FIG. 11, queries 1-1, 1-2, 1-3, 1-4 belong to one user session 1, while queries 2-1, 2-2 belong to another user session 2, because the idle period 1110 between the consecutive queries 1-4 and 2-1 exceeds the predetermined threshold.
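A minimal sketch of this idle-gap rule follows, assuming timestamps in seconds; the 300-second threshold is an assumed value, since the text specifies only a "predetermined threshold".

```python
from typing import Optional

IDLE_THRESHOLD_SECONDS = 300.0  # assumed; the text says only "predetermined threshold"

def starts_new_session(last_query_time: Optional[float], current_query_time: float) -> bool:
    """Return True if the incoming query should open a new user session (FIG. 11 mode)."""
    if last_query_time is None:
        return True  # first query seen from this user
    return (current_query_time - last_query_time) > IDLE_THRESHOLD_SECONDS
```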
FIG. 12 depicts an exemplary user session, according to another embodiment of the present teachings. Under the session separation mode shown in FIG. 12, user sessions are separated based on the similarity of the related topics of two consecutive queries. Each query may be analyzed and determined to be associated with a related topic (e.g., weather, sports, news, shopping, etc.). After a current search query is received at the user session determination unit 340, the related topics of the current search query and of the most recent previous query received from the same user are compared with each other to determine their similarity. If the similarity is greater than a predetermined threshold, the current query is recorded in the current user session. Otherwise, a new user session is created and the current query is recorded as the first query of the new user session. For example, in FIG. 12, queries 1-1, 1-2, 1-3, 1-4 belong to one user session 1, and queries 2-1, 2-2, 2-3 belong to another user session 2, because the queries in user session 1 relate to the topic "weather" (1210), while the queries in user session 2 relate to another topic, "sports" (1220).
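The similarity check could be realized, for example, with a cosine similarity over weighted topic terms. The representation and the 0.7 threshold below are assumptions; the text requires only some similarity measure over related topics.

```python
def same_session_by_topic(prev_topic: dict, cur_topic: dict, threshold: float = 0.7) -> bool:
    """Topic-based separation, as in FIG. 12; topics are bags of weighted terms."""
    common = set(prev_topic) & set(cur_topic)
    dot = sum(prev_topic[t] * cur_topic[t] for t in common)
    norm_prev = sum(w * w for w in prev_topic.values()) ** 0.5
    norm_cur = sum(w * w for w in cur_topic.values()) ** 0.5
    if norm_prev == 0 or norm_cur == 0:
        return False  # no topic signal for one of the queries
    return dot / (norm_prev * norm_cur) > threshold
```

Under these assumptions, same_session_by_topic({"weather": 1.0}, {"weather": 0.8, "forecast": 0.6}) returns True, so the two queries would be recorded in the same session.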
FIG. 13 depicts an exemplary user session, according to yet another embodiment of the present teachings. Under the session separation mode shown in FIG. 13, user sessions are separated based on both the idle period between two consecutive queries and the similarity of their related topics. For example, as shown in FIG. 13, although all queries in user session 1 and user session 2 relate to the same topic, "sports" (1310), queries 1-1, 1-2, 1-3, 1-4 belong to one user session 1, and queries 2-1, 2-2 belong to another user session 2. This is because the idle period 1320 between the consecutive queries 1-4 and 2-1 exceeds the predetermined threshold.
FIG. 14 depicts an exemplary user session, according to yet another embodiment of the present teachings. Under the session separation mode shown in FIG. 14, a user session may have interleaved queries sharing a common related topic. For example, as shown in FIG. 14, although queries 2-1, 2-2 are submitted between query 1-2 and query 1-3, queries 1-1, 1-2, 1-3, 1-4 belong to one user session 1 (1410) and queries 2-1, 2-2, 2-3, 2-4 belong to another user session 2 (1420). This is because queries 1-1, 1-2, 1-3, 1-4 may relate to one common topic or piece of information desired by the user, while queries 2-1, 2-2, 2-3, 2-4 may relate to another. For example, a user may search for movie tickets or show information in queries 1-1, 1-2, and for some movie stars appearing in movies of interest in queries 2-1, 2-2; the user then searches again for more movie tickets or show information in queries 1-3, 1-4 and for some other movie stars in queries 2-3, 2-4. In this example, queries 1-1, 1-2, 1-3, 1-4 each relate to ticket or show information about particular movies, while queries 2-1, 2-2, 2-3, 2-4 each relate to information about movie stars that is not limited to a particular movie. This may frequently occur in browsers on desktops, laptops, tablets, or smartphones that have a multi-tab setting, in which a user may search for different information simultaneously through multiple browser tabs.
It should be appreciated that in other examples, rather than using idle time or topic similarity to define user sessions, user sessions may be defined in other manners (e.g., through a predetermined time window). The predetermined time window may be, for example, 10 minutes or 30 minutes. That is, any query entered within the predetermined time window after a user accesses the search application is considered to be in the same user session, and the user session ends when the predetermined time window ends.
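A sketch of the time-window mode, assuming epoch-second timestamps; the 600-second default corresponds to the 10-minute example above.

```python
def in_window_session(session_start: float, query_time: float,
                      window_seconds: float = 600.0) -> bool:
    """Time-window separation: any query within the window belongs to the session."""
    return (query_time - session_start) <= window_seconds
```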
Returning to FIG. 3, after the user 310 submits a query, the user session determination unit 340 may receive the query directly via the network 120, as in the environment 100, or forwarded by the search engine system 130, as in the environment 200. In this example, based on any of the session separation modes shown in FIGS. 11-13, the user session determination unit 340 may determine whether the query belongs to a new user session or the current user session. After determining the user session for the query, the user session determination unit 340 may send the user session information to the user activity detection unit 320 for monitoring user activity and/or to the user satisfaction score determiner 350 for determining the user satisfaction score.
In one embodiment, the user session determination unit 340 may send a detection period to the user activity detection unit 320, such that the user activity detection unit 320 may monitor the user activity from the user 310 for the detection period. The detection period may be determined based on the user session information such that the user activity detection unit 320 may monitor the user activity when the user is most active and stop monitoring when the user is relatively inactive.
The user activity detection unit 320 in this example may monitor user activity from different users. The user activity may include an action or no action from the user. The actions from the user may include pressing, sliding, clicking, rotating, zooming, scrolling, and the like. An example of no action from the user may be a dwell time during which the user does not provide any input. In addition to traditional user activities related to searching for links to search results provided to the user (e.g., mouse clicks or keyboard typing), more user activities related to manipulating content items provided to the user may be detected and used to assess user satisfaction. For example, FIGS. 16-19 illustrate exemplary user activities with respect to a content item, according to various embodiments of the present teachings.
FIG. 16 shows a rotation action performed on the content in portion 1 of the information card 1540-1. In one embodiment, the depicted rotation action triggers a modification of the information card 1540-1 (or the portion 1 content) such that an instance of the information card 1540-1 (or the portion 1 content) is modified and then stored for subsequent presentations of the information card 1540-1. In one use case, for example, in response to a subsequent user query, the modified version of the information card 1540-1 (with the rotated portion 1 content) is presented to the user rather than the original version of the information card 1540-1. In another embodiment, the depicted rotation action triggers a modification only of the current presentation of the information card 1540-1 (or the portion 1 content), such that a subsequent presentation of the information card 1540-1 includes the original version (with the non-rotated portion 1 content).
FIG. 17 illustrates a removal action performed on the content in portion 2 of the information card 1540-1. In one embodiment, the depicted removal action triggers a modification of the information card 1540-1 (or the portion 2 content) such that an instance of the information card 1540-1 (or the portion 2 content) is modified and then stored for subsequent presentations of the information card 1540-1. In one scenario, for example, in response to a subsequent user query, the modified version of the information card 1540-1 (e.g., the information card 1540-1 without the portion 2 content) is presented to the user instead of the original version. In another embodiment, the depicted removal action triggers a modification only of the current presentation of the information card 1540-1 (or the portion 2 content), such that a subsequent presentation of the information card 1540-1 includes the original version (e.g., the information card 1540-1 with the portion 2 content).
FIG. 18 illustrates a removal action performed to remove the information card 1540-1 from the presentation of the stack of information cards 1540. The removal action may include, for example, a swipe action, a drag-and-drop action, or another action that triggers the removal of the information card 1540-1 from the presentation of the stack of information cards 1540. After the removal action of FIG. 18 is performed on the information card 1540-1, the information card 1540-2 is presented to the user on top of the stack of information cards 1540 such that the contents of the information card 1540-2 (e.g., in various portions of the information card 1540-2) appear in the user interface 1510.
FIG. 19 illustrates a scrolling action performed with respect to the information card 1540-2. As shown, the user performs a scrolling action on the information card 1540-2 such that the contents of portions 2, 3, and 4 of the information card 1540-2 appear in the user interface 1510 (as opposed to the contents of portions 1 and 2 of the information card 1540-2). It should be understood that although exemplary user activities have been described herein (e.g., with respect to FIGS. 15-19), they are provided by way of example and not limitation. Any other suitable user activity may be performed, monitored, detected, and/or used to provide information related to user engagement with a content item within the scope of the present teachings. Other examples include moving a portion of a content item from one location of the content item to another location of the content item, adding content to the content item, and so on. In some embodiments, each of the user activities described with respect to FIGS. 15-19 is monitored so that user engagement with the information cards 1540 can be analyzed based on those user activities.
Returning to FIG. 3, the detection period in this example is received from the user session determination unit 340 based on the user session information. In another example, the detection period may be determined as an extended period of time (e.g., one month) that may cover a large amount of user activity from many users. In yet another example, the detection period may be received from the user engagement assessment unit 330 and determined based on previous measurements of user engagement.
After detecting user activity from the user, the user activity detection unit 320 may determine user session information related to the user activity (e.g., a user session Identification (ID) associated with the user activity). The user activity detection unit 320 may transmit information related to the user activity and the user session to the user engagement assessment unit 330 for assessing user engagement.
The user engagement assessment unit 330 in this example measures user engagement based on the information received from the user activity detection unit 320 relating to user activities and user sessions. The user engagement level indicates the degree to which the user is engaged in or interested in a content item (e.g., a content item provided to the user in response to a query submitted by the user). Because a user session may include multiple queries, a user engagement score that indicates the user's interest in one query of the user session may not reflect user satisfaction with respect to the entire user session, particularly when the user's information need spans many queries and the user has complex and rich interaction activities during the user session. While user satisfaction with a single query may be evaluated according to various embodiments of the present teachings, user satisfaction with respect to an entire session is often more desirable in practice. In one embodiment, a user may be satisfied with the search results of one query in a user session but dissatisfied with the search service for the entire user session, because the user is disappointed with the results of most other queries in the session. In another embodiment, whether a user is satisfied with one query may be reflected by the user activity for other queries in the same user session. For example, when search results for "Hillary Clinton" are provided to a user in response to the query "Clinton," whether the user is satisfied with the search results may be indicated by the next query the user enters immediately after receiving them. If the next query is "Bill Clinton," the user was probably not satisfied with the "Hillary Clinton" results. But if the next query is "first female president" or "2016 election," the user was likely satisfied with the "Hillary Clinton" search results. Therefore, it is generally more desirable to know whether a user is satisfied with a user session that includes one or more related queries. The user engagement assessment unit 330 in this example sends the user engagement information to the user satisfaction score determiner 350 for assessing user satisfaction with respect to the entire user session.
The user satisfaction score determiner 350 in this example receives user engagement information from the user engagement assessment unit 330 and user session information from the user session determination unit 340, and generates or updates a user satisfaction score associated with the user session. The user satisfaction score determiner 350 may analyze the user engagement information to obtain a user session ID associated with it, and may then retrieve user session information from the user session determination unit 340 based on the user session ID. The user engagement information may include information related to user activities with respect to the content items or search results, and may be used by the user satisfaction score determiner 350 to estimate user satisfaction by generating a user satisfaction score based on a satisfaction evaluation model. The satisfaction evaluation model may be determined based on one or more user satisfaction metrics including, for example, click-through rate (CTR), dwell time, time to first click, number of shares, number of tweets, number of favorites, and the like. The satisfaction evaluation model may use different metrics with different weights based on, for example, the topic of the current user session, the information needs of the user in the current user session, the user activities during the current user session, historical user activities of previous user sessions similar to the current user session, and/or personal information of the user.
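Such a weighted combination might look as follows. The metric names follow the text; the specific weights, and the assumption that each metric value is pre-normalized to [0, 1], are illustrative only.

```python
# Assumed weights over the metrics named in the text; each metric value is
# assumed to be pre-normalized to [0, 1] before scoring.
DEFAULT_WEIGHTS = {
    "ctr": 0.2,
    "dwell_time": 0.25,
    "time_to_first_click": -0.1,  # slower first clicks lower the score
    "shares": 0.2,
    "tweets": 0.15,
    "favorites": 0.3,
}

def session_satisfaction_score(metrics: dict, weights: dict = DEFAULT_WEIGHTS) -> float:
    """Weighted combination of per-session metrics, clamped to [0, 1]."""
    raw = sum(weights.get(name, 0.0) * value for name, value in metrics.items())
    return max(0.0, min(1.0, raw))
```

Different sessions (e.g., different topics or information needs) could be scored simply by swapping in a different weight dictionary, matching the model-selection behavior described above.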
In one example, the score associated with the user session generated by the user satisfaction score determiner 350 may be a binary number "1" or "0" to indicate whether the user is satisfied with the user session. In another example, the score related to the user session generated by the user satisfaction score determiner 350 may be a probability between 0 and 1 to indicate a likelihood that the user is satisfied with the user session (e.g., 80% satisfied). In yet another example, the score related to the user session generated by the user satisfaction score determiner 350 may be a real-valued score indicating a degree of user satisfaction with respect to the user session.
In one embodiment, the user satisfaction score determiner 350 may also generate a confidence level associated with the user satisfaction score. The confidence level indicates how reliably the user satisfaction score can be used to predict the actual satisfaction of the user. In one example, when the detected user activities include many user actions indicating that the user shared results in the user session or marked content items as favorites in the user session, the confidence may be relatively high because these user actions reflect the user's explicit intent. In another example, when the detected user activities include few user actions in the user session and no explicit input from the user, the confidence may be relatively low, because the satisfaction level of the user is harder to predict with less information from the user. The confidence level may also depend on the user's information need. For example, for queries related to the information need "weather," a small amount of user activity (e.g., only some scrolling actions) may already provide sufficient confidence that the user is satisfied, while a large number of query rewrites in this type of session is not a good indication of user satisfaction.
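One possible confidence heuristic is sketched below; the 50/50 blend of explicit-action ratio and activity volume, and the saturation point, are assumptions, since the text specifies no formula.

```python
def score_confidence(explicit_actions: int, total_activities: int,
                     saturation: int = 20) -> float:
    """Heuristic confidence sketch: explicit actions (shares, favorites) and
    overall activity volume raise confidence; parameters are assumed."""
    if total_activities == 0:
        return 0.0  # nothing observed; satisfaction is hard to predict
    explicit_ratio = explicit_actions / total_activities
    volume = min(1.0, total_activities / saturation)
    return 0.5 * explicit_ratio + 0.5 * volume
```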
The user satisfaction score determiner 350 may send one or more scores to the user satisfaction report generating unit 360 to generate a user satisfaction report. The user satisfaction report generating unit 360 in this example receives a score associated with the user session from the user satisfaction score determiner 350 and generates user satisfaction information based on the score. In one example, reports may be sent daily from the user satisfaction report generating unit 360 to the search engine system 130 to indicate a user satisfaction level for each user with respect to each user session. The search engine system 130 may utilize the reports to improve its search services, analyze the user's intentions, and/or propose new products to attract more users. Reports may also be sent to the user for the user to verify or confirm the satisfaction level by giving feedback. In another example, different user satisfaction reports may be generated for different types of information needs of users for the same user, as the user may be satisfied with some sessions and not others.
FIG. 4 is a flowchart of an exemplary process for evaluating user satisfaction with respect to a user session, according to an embodiment of the present teachings. In one example, the exemplary process in FIG. 4 may be performed by the search engine system 130 and the user satisfaction evaluation system 140 shown in FIG. 3. Beginning at 402, a query is received from a user, for example, by the search engine system 130. At 404, content items related to the query are provided to the user, for example, by the search engine system 130. Moving to 406, user activities related to operations performed on the content items are detected, for example, by the user activity detection unit 320 in the user satisfaction evaluation system 140. At 408, user engagement information based on the user activities is generated, for example, by the user engagement assessment unit 330 in the user satisfaction evaluation system 140. At 410, it is determined whether the query belongs to a new user session, e.g., based on the session separation mode at the user session determination unit 340. The result of 410 is checked at 411. If the query belongs to a new session, a new user session is created at 412, a user satisfaction score associated with the new user session is generated at 414, and the process passes to 419. If the query does not belong to a new session but to the current user session, then at 416 a user satisfaction score associated with the current user session is generated or updated based at least in part on the user activities, the score is saved in a score database at 418, and the process passes to 419. The score database may be located in the user satisfaction evaluation system 140 (e.g., in the user satisfaction score determiner 350).
At 419, it is checked whether the current session has ended, e.g., based on user activity or a predetermined time threshold. If so, at 420, the user satisfaction score associated with the current user session may be determined and finalized, for example, along with a confidence level associated with the score. Otherwise, the process branches back to 402. At 422, a user satisfaction report is generated based on the score.
It should be understood that at least some of the steps in fig. 4 mentioned above may occur out of the order shown in fig. 4, in accordance with various embodiments. In one embodiment, some steps may occur after all queries from the user are received and/or after user activity is completely detected. For example, the determination 410 of whether a particular query or set of queries belongs to a new user session may occur offline long after all queries from the user have been entered. Similarly, the determination 410 of whether a query belongs to the same user session as a previous query may involve reviewing all queries received from the user during a particular time period, including queries that may come after the query being considered.
FIG. 5 illustrates an exemplary block diagram of the user activity detection unit 320 in a user satisfaction evaluation system (e.g., the user satisfaction evaluation system 140 in FIG. 3), according to embodiments of the present teachings. The user activity detection unit 320 in this example includes a press action detection unit 502, a rotation action detection unit 504, a click action detection unit 506, a slide action detection unit 508, a zoom action detection unit 510, and a scroll action detection unit 512. The press action detection unit 502 may be configured to detect a press action performed by the user 520 on a touch screen. For example, a press action may be detected when the user 520 presses down on the touch screen for a period of time longer than a predetermined threshold before releasing the pressure. The rotation action detection unit 504 may be configured to detect a rotation action performed by the user 520 on the touch screen. For example, a rotation action may be detected when the user 520 presses and rotates a content item on the touch screen before releasing the press, as shown in FIG. 16. The click action detection unit 506 may be configured to detect a click action performed by the user 520 on the touch screen. For example, a click action may be detected when the user 520 presses down on the touch screen for a period of time shorter than the predetermined threshold before releasing the pressure. The slide action detection unit 508 may be configured to detect a slide action performed by the user 520 on the touch screen. For example, a slide action may be detected when the user 520 presses down on the touch screen with a finger and moves the finger across the surface of the touch screen. The zoom action detection unit 510 may be configured to detect a zoom action performed by the user 520 on the touch screen. For example, a zoom action may be detected when the user 520 presses down on the touch screen with more than one finger at the same time and slides at least one finger. The scroll action detection unit 512 may be configured to detect a scroll action performed by the user 520. For example, a scroll action may be detected when the user 520 moves or scrolls a scroll wheel on a computer mouse, a scrollable button on a laptop or smartphone, or a virtual scrollable icon on a touch screen.
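The duration, finger-count, and movement rules above might be combined as follows; the 0.5-second threshold is an assumed value for the "predetermined threshold" in the text.

```python
PRESS_THRESHOLD_SECONDS = 0.5  # assumed; the text says only "predetermined threshold"

def classify_touch(duration: float, num_fingers: int, moved: bool) -> str:
    """Classify a raw touch event using the rules described for the detection units."""
    if num_fingers > 1 and moved:
        return "zoom"    # multi-finger press with at least one sliding finger
    if moved:
        return "slide"   # single finger moving across the surface
    if duration >= PRESS_THRESHOLD_SECONDS:
        return "press"   # long hold before release
    return "click"       # short tap
```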
It should be appreciated that other user activities may be determined based on the user actions described above. For example, typing on a touch screen may be detected when the user 520 clicks on the corresponding area on the touch screen. For example, the dwell time of the user 520 may be detected when none of the detection units in the user activity detection unit 320 detects any input from the user for a period of time.
It should also be understood that more units related to user actions (e.g., units for detecting keyboard input and/or mouse input when the user 520 is using a desktop or laptop computer) may be included in the user activity detection unit 320.
Each detection unit in the user activity detection unit 320 may detect user activity within a predetermined or received detection period 522 and determine the user session information 524 associated with the detected user activity. The user activity detection unit 320 may send user activity information, including information about the detected user activities and the associated user session information, to the user engagement assessment unit 330.
FIG. 6 is a flowchart of exemplary processing performed by a user activity detection unit (e.g., the user activity detection unit 320 in FIG. 5), according to an embodiment of the present teachings. At 602, user activity is detected during a detection period. At 604, user session information is received. Moving to 606, the user session information related to the user activity is determined. At 608, the user activity is analyzed to obtain activity-related information. At 610, the activity-related information is sent to the user engagement assessment unit 330 along with the related user session information.
FIG. 7 illustrates an exemplary block diagram of the user session determination unit 340 in a user satisfaction evaluation system (e.g., the user satisfaction evaluation system 140), according to embodiments of the present teachings. The user session determination unit 340 in this example comprises a query analysis unit 702, an end-of-session detection unit 704, a separation mode generator/updater 706, and a user session recording unit 710. The query analysis unit 702 may receive and analyze a query from a user to obtain query-related information. The query-related information may include a topic related to the query, user information about the user, and/or a timestamp indicating when the query was received. The end-of-session detection unit 704 in this example determines whether the query received at the query analysis unit 702 belongs to a new user session based on the query-related information. The end-of-session detection unit 704 may select one of the session separation modes 708 stored in the user session determination unit 340, wherein each session separation mode 708 indicates a manner in which queries may be assigned to user sessions. According to the different session separation modes, one or more queries may be determined to be in one user session if the queries are associated with similar topics, if the user's idle period between any two consecutive queries is less than a predetermined threshold, and/or if the queries are received within a predetermined time period from the beginning of the user session. In one embodiment, a user session may have interleaved queries, as shown in FIG. 14. A session separation mode may be generated or updated by the separation mode generator/updater 706 based on an instruction from the end-of-session detection unit 704. Such an instruction may be sent by the end-of-session detection unit 704, for example, when a threshold on the similarity between two consecutive queries should be lowered because the threshold is so high that it is rarely exceeded and too many user sessions are generated.
The end-of-session detection unit 704 may determine to which user session the query belongs, according to the selected session separation mode, based on user session information retrieved from the user session database 705 in the user session determination unit 340. The user session database 705 may store information related to user sessions, including information related to the queries in each user session and the corresponding users.
After the end-of-session detection unit 704 determines the user session to which the query belongs, the user session recording unit 710 may record the query in the user session. In one example, when the query belongs to a new user session, the user session recording unit 710 creates the new user session, records the query as the first query in the new user session, and stores the new user session in the user session database 705. In another example, when the query belongs to a current user session, the user session recording unit 710 retrieves the current user session from the user session database 705, records the query in the current user session, and stores the current user session in the user session database 705. For each query received, the user session recording unit 710 may determine user session information related to the query and send it to the user activity detection unit 320 and/or the user satisfaction score determiner 350. In one example, user session recording unit 710 may also determine a detection period related to the query and send it to user activity detection unit 320.
FIG. 8 is a flowchart of exemplary processing performed by a user session determination unit (e.g., the user session determination unit 340), according to an embodiment of the present teachings. Beginning at 802, a query is received. At 804, the query is analyzed to obtain query-related information. At 806, a session separation mode is determined. At 808, it is determined whether the query belongs to a new user session according to the session separation mode. The result of 808 is checked at 809. If the query belongs to a new session, a new user session is created at 810, the query is recorded in the new user session at 812, and the process passes to 816. If the query does not belong to a new session but to the current user session, the query is recorded in the current user session at 814 and the process passes to 816.
Moving to 816, user session information related to the query is determined and sent to, for example, the user activity detection unit 320 and/or the user satisfaction score determiner 350. Optionally, at 818, information related to the detection period may be determined and sent to the user activity detection unit 320. At 819, it is checked whether the session separation mode needs to be generated or updated. If so, the separation mode is generated or updated at 820, and the process branches back to 802. Otherwise, the process branches directly back to 802.
It should be understood that at least some of the steps of fig. 8 mentioned above may occur out of the order shown in fig. 8, in accordance with various embodiments. In one embodiment, some steps may occur after all queries from the user are received and/or after user activity is completely detected. For example, the determination 808 of whether a particular query or set of queries belongs to a new user session may occur offline long after all queries from the user have been entered. Similarly, the determination 808 of whether the query belongs to the same user session as the previous query may involve reviewing all queries received from the user during a particular time period, including queries that may come after the query under consideration.
FIG. 9 illustrates an exemplary block diagram of the user satisfaction score determiner 350 in a user satisfaction evaluation system (e.g., the user satisfaction evaluation system 140), according to embodiments of the present teachings. The user satisfaction score determiner 350 in this example includes a user engagement analyzer 902, a user session association unit 904, a session-based satisfaction analyzer 906, a user satisfaction score generator 908, and an evaluation model generator/updater 910. The user engagement analyzer 902 in this example is configured to receive and analyze user engagement information to obtain, for example, information related to a user session. The user session association unit 904 may determine a user session ID associated with the user engagement information based on the analyzed user engagement information. The session-based satisfaction analyzer 906 may receive or retrieve user session information, for example, from the user session determination unit 340 based on the user session ID, and analyze the user session information to obtain, for example, the topic of the user session, the information need of the user in the user session, and/or personal information of the user. The session-based satisfaction analyzer 906 may send the analyzed user session information associated with the user engagement information to the user satisfaction score generator 908 to generate a user satisfaction score.
The user satisfaction score generator 908 may generate a user satisfaction score (if there is no score for the current user session) or update an existing user satisfaction score associated with the current user session based at least in part on the analyzed user session information from the session-based satisfaction analyzer 906 and/or the user engagement information from the user engagement analyzer 902. The user satisfaction score indicates a level of user satisfaction with respect to the user session. The user satisfaction score may be generated in the user satisfaction score determiner 350 based on a satisfaction evaluation model 914. The satisfaction evaluation model 914 may be generated or updated by the evaluation model generator/updater 910 based on one or more user satisfaction metrics 912. The one or more user satisfaction metrics 912 may include, for example, CTR, dwell time, time to first click, number of zoom actions, number of click actions, number of shares, number of tweets, number of favorites, and the like. The evaluation model generator/updater 910 may generate the satisfaction evaluation model 914 using different metrics with different weights. The satisfaction evaluation model 914 may be updated by the evaluation model generator/updater 910 based on instructions from the user satisfaction score generator 908. The instructions may include, for example, information related to the topic of the current user session, the information needs of the user in the current user session, the user activities during the current user session, historical user activities of previous user sessions similar to the current user session, and/or personal information of the user. The instructions may be sent by the user satisfaction score generator 908 when the scores generated based on the satisfaction evaluation model 914 consistently fall at the lowest end of the possible range of user satisfaction scores. For example, most of the scores generated for different user sessions may lie between 1% and 2%, while the possible range of scores is from 0% to 100%. In that case, it is difficult to distinguish user satisfaction between different user sessions, and the model should be updated accordingly to avoid the problem.
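The update trigger described in this example might be sketched as follows; the 5% band and the 90% fraction are assumed parameters chosen to match the 1%-2% illustration above.

```python
def needs_model_update(recent_scores: list, low_band: float = 0.05,
                       min_fraction: float = 0.9) -> bool:
    """Trigger an update when most recent scores sit in a narrow band at the
    bottom of the 0-1 range; band and fraction are assumed parameters."""
    if not recent_scores:
        return False
    stuck_low = sum(1 for s in recent_scores if s <= low_band)
    return stuck_low / len(recent_scores) >= min_fraction
```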
Once the user satisfaction score associated with the current user session is generated, it may be saved by the user satisfaction score generator 908 in the score database 909 in the user satisfaction score determiner 350. The user satisfaction score generator 908 may determine whether the current user session has ended, for example, based on the user session information from the session-based satisfaction analyzer 906. If the current user session has ended, the user satisfaction score for the current user session may be finalized and sent to the user satisfaction report generation unit 360 for generating a user satisfaction report. For example, different user satisfaction reports may be generated based on the user satisfaction scores for different user sessions associated with the same user, as the user may be satisfied with some sessions but not others. If the current user session has not ended, the user satisfaction score generator 908 may wait for more user engagement information and/or user session information and update the user satisfaction score before finalizing it.
In one embodiment, the user satisfaction score generator 908 may also generate a confidence level associated with the user satisfaction score and send the confidence level to the user satisfaction report generation unit 360 along with the associated user satisfaction score. The confidence level indicates the degree of confidence that the user satisfaction score predicts the actual satisfaction of the user. In one example, when the detected user activity includes a number of user actions indicating that the user shared results in the user session or marked content items as favorites in the user session, the confidence may be relatively high because such actions reflect the user's explicit intent. In another example, when the detected user activity includes little user action in the user session and no explicit input from the user, the confidence may be relatively low because the satisfaction level of the user is more difficult to predict with less information from the user. The confidence level may also depend on the user's information needs. For example, for a user session related to "weather," a high confidence may be obtained with only a small amount of user activity, whereas for a user session related to "civil war history," a high confidence may require a large amount of user activity.
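As a rough illustration, the heuristic below sketches such a confidence level in Python. The action categories and the per-need activity thresholds are assumptions made for this example.

```python
# Hypothetical confidence heuristic; thresholds are illustrative assumptions.
EXPLICIT_ACTIONS = {"share", "tweet", "favorite"}

# Assumed evidence needed per information need: simple needs (e.g., "weather")
# require little activity; complex ones (e.g., "civil war history") require more.
ACTIVITY_NEEDED = {"simple": 2, "complex": 20}

def confidence_level(actions: list, need: str = "simple") -> float:
    """Return a confidence in [0, 1] that the satisfaction score reflects
    the user's actual satisfaction."""
    explicit = sum(1 for a in actions if a in EXPLICIT_ACTIONS)
    if explicit > 0:
        # Explicit intent (shares, favorites) yields relatively high confidence.
        return min(1.0, 0.7 + 0.1 * explicit)
    # Otherwise confidence grows with the amount of observed activity.
    required = ACTIVITY_NEEDED.get(need, 10)
    return min(1.0, len(actions) / required)
```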
Fig. 10 is a flowchart of exemplary processing performed by a user satisfaction score determiner (e.g., user satisfaction score determiner 350), according to an embodiment of the present teachings. Beginning at 1002, user engagement information is received and analyzed. Moving to 1004, a user session ID is determined based on the user engagement information. At 1006, user session information is retrieved and analyzed based on the user session ID.
At 1020, one or more user satisfaction metrics are selected. At 1022, a satisfaction evaluation model is generated or updated based on the selected one or more user satisfaction metrics. At 1024, the satisfaction evaluation model is saved for future retrieval.
At 1008, the satisfaction evaluation model is retrieved. Moving to 1010, a user satisfaction score for the user session is generated or updated based on the satisfaction evaluation model. Optionally, at 1012, a confidence level associated with the user satisfaction score may be generated or updated based on the satisfaction evaluation model. At 1014, the user satisfaction score is saved to the score database so that the score may be retrieved and updated as more user engagement information and/or user activity is acquired for the same user session.
At 1015, a check is made as to whether the current user session has ended. If so, at 1016, the user satisfaction score is finalized and sent to the user satisfaction report generation unit 360 to generate a user satisfaction report, and the process moves to 1025. Otherwise, the process branches back to 1002 to continue receiving user engagement information related to the current user session.
At 1025, it is determined whether to update the user satisfaction assessment model. If so, the process passes back to 1020 to select metrics and update the model. Otherwise, the process returns to 1002 to receive user engagement information related to another user session.
It should be understood that at least some of the steps of fig. 10 mentioned above may occur out of the order shown in fig. 10, according to various embodiments.
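Putting the steps together, the following compact Python sketch mirrors the Fig. 10 loop. The model, storage, and reporting objects are hypothetical stand-ins supplied by the caller, analyze_engagement and needs_model_update are reused from the earlier sketches, and the step numbers in the comments map to the flowchart.

```python
# Illustrative control flow only; the collaborating objects are assumed,
# not defined by the present teachings.
def run_score_determiner(event_stream, get_session, model, score_db, report):
    for raw_event in event_stream:                  # 1002: receive engagement info
        event = analyze_engagement(raw_event)       # 1002/1004: parse, resolve session ID
        session = get_session(event.session_id)     # 1006: retrieve session info
        score = model.score(session, event)         # 1008/1010: score via the model
        conf = model.confidence(session, event)     # 1012: optional confidence level
        score_db[event.session_id] = (score, conf)  # 1014: save for later updates
        if session.get("ended"):                    # 1015: has the session ended?
            report(event.session_id, score, conf)   # 1016: finalize and report
            recent = [s for s, _ in score_db.values()]
            if needs_model_update(recent):          # 1025: update the model?
                model.retune()                      # 1020-1024: reselect metrics, retune
```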
Fig. 20 depicts a general mobile device architecture on which the present teachings may be implemented. In this example, the user device 110 is a mobile device 2000, including, but not limited to, a smartphone, a tablet computer, a music player, a handheld game console, or a GPS receiver. The mobile device 2000 in this example includes one or more Central Processing Units (CPUs) 2002, one or more Graphics Processing Units (GPUs) 2004, a display 2006, a memory 2008, a communication platform 2010 (e.g., a wireless communication module), storage 2012, and one or more input/output (I/O) devices 2014. Any other suitable components, such as but not limited to a system bus or a controller (not shown), may also be included in the mobile device 2000. As shown in Fig. 20, a mobile operating system 2016 (e.g., iOS, Android, Windows Phone, etc.) and one or more applications 2018 may be loaded from the storage 2012 into the memory 2008 for execution by the CPU 2002. The applications 2018 may include a web browser or any other suitable mobile search application. Running the applications 2018 may cause the mobile device 2000 to perform some of the processing described previously. For example, content items and search results are displayed by the GPU 2004 in conjunction with the display 2006. User input of a search query is received via the I/O devices 2014 and sent to the search engine system 130 via the communication platform 2010.
To implement the present teachings, a computer hardware platform may be used as the hardware platform(s) for one or more of the elements described herein. The hardware elements, operating systems, and programming languages of such computers are conventional in nature, and it is presumed that those skilled in the art are sufficiently familiar with them to adapt those technologies to implement the processing described herein. A computer with user interface elements may be used to implement a personal computer (PC) or other type of workstation or terminal device, although a computer may also act as a server if suitably programmed. It is believed that those skilled in the art are familiar with the structure, programming, and general operation of such computer equipment, and therefore the drawings should be self-explanatory.
Fig. 21 depicts a general computer architecture on which the present teachings may be implemented, and includes a functional block diagram of a computer hardware platform with user interface elements. The computer may be a general-purpose computer or a special-purpose computer. The computer 2100 may be used to implement any component of the user satisfaction assessment architecture described herein. The various components of the systems depicted, e.g., in Figs. 1 and 2, can all be implemented on one or more computers such as the computer 2100, via their hardware, software programs, firmware, or a combination thereof. Although only one such computer is shown for convenience, the computer functionality related to user satisfaction assessment may be implemented in a distributed manner across multiple similar platforms to distribute the processing load.
By way of example, the computer 2100 includes COM ports 2102 connected to a network to facilitate data communications. The computer 2100 also includes a CPU 2104, in the form of one or more processors, for executing program instructions. The exemplary computer platform includes an internal communication bus 2106 and various forms of program storage and data storage (e.g., disk 2108, Read Only Memory (ROM) 2110, Random Access Memory (RAM) 2112) for the various data files to be processed and/or communicated by the computer, as well as possibly program instructions to be executed by the CPU 2104. The computer 2100 also includes I/O components 2114 that support input/output flows between the computer and other components therein (e.g., user interface elements 2116). The computer 2100 may also receive programming and data via network communications.
Thus, as outlined above, aspects of the methods of user satisfaction assessment may be embodied in programming. Program aspects of the technology may generally be thought of as an "article of manufacture" in the form of executable code and/or associated data that is carried on or embodied in a type of machine-readable medium. Tangible, non-transitory "storage"-type media include any or all of the memory or other storage for the computers, processors, or the like, or associated modules thereof, that may provide storage for the software programs at any time, such as various semiconductor memories, tape drives, disk drives, and the like.
All or portions of the software may at times be communicated through a network such as the Internet or various other telecommunication networks. Such communications may enable, for example, loading of the software from one computer or processor into another. Thus, another type of media that may bear the software elements includes optical, electrical, and electromagnetic waves, such as those used across physical interfaces between local devices, through wired and optical landline networks, and over various air links. The physical elements that carry such waves (e.g., wired or wireless links, optical links, etc.) may also be considered media bearing the software. As used herein, unless restricted to a tangible "storage" medium, terms such as computer or machine "readable medium" refer to any medium that participates in providing instructions to a processor for execution.
Thus, a machine-readable medium may take many forms, including but not limited to a tangible storage medium, a carrier-wave medium, or a physical transmission medium. Non-volatile storage media include, for example, optical or magnetic disks, such as any of the storage devices in any computer(s) or the like, which may be used to implement the system shown in the drawings or any of its components. Volatile storage media include dynamic memory, such as the main memory of such a computer platform. Tangible transmission media include coaxial cables, copper wire, and fiber optics, including the wires that form a bus within a computer system. Carrier-wave transmission media can take the form of electric or electromagnetic signals, or acoustic or light waves such as those generated during Radio Frequency (RF) and Infrared (IR) data communications. Common forms of computer-readable media therefore include, for example: a floppy disk, a flexible disk, a hard disk, magnetic tape, any other magnetic medium, a CD-ROM, DVD or DVD-ROM, any other optical medium, punch cards, paper tape, any other physical storage medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave transporting data or instructions, cables or links transporting such a carrier wave, or any other medium from which a computer can read program code and/or data. Many of these forms of computer-readable media may be involved in carrying one or more sequences of one or more instructions to a processor for execution.
Those skilled in the art will recognize that the present teachings are amenable to a variety of modifications and/or enhancements. For example, although the various components described above may be implemented in a hardware device, they may also be implemented as a software-only solution (e.g., an installation on an existing server). Furthermore, the units of the host and client nodes disclosed herein may be implemented as firmware, a firmware/software combination, a firmware/hardware combination, or a hardware/firmware/software combination.
While the foregoing has described what are considered to be the best mode and/or other examples, it is understood that various modifications may be made therein, that the subject matter disclosed herein may be implemented in various forms and examples, and that these teachings may be applied in numerous applications, only some of which have been disclosed herein. The appended claims are intended to protect any and all applications, modifications, and variations that fall within the true scope of the present teachings.

Claims (20)

1. A method for evaluating user satisfaction with respect to a user session, the method being implemented on a machine having at least one processor, a storage device, and a communication platform connected to a network, the method comprising:
receiving one or more queries from a user in a user session;
obtaining information about one or more user activities, each user activity relating to an operation on a content item, wherein the content item is associated with one of the one or more queries;
calculating a score associated with the user session based at least in part on the one or more user activities; and
determining user satisfaction with the user session based on the score.
2. The method of claim 1, further comprising: automatically generating a flag associated with the user session based on the score, wherein the flag indicates whether the user is satisfied with the user session.
3. The method of claim 1, further comprising: predicting, based on the score associated with the user session, a score associated with a new user session of the user prior to detecting user activity in the new user session.
4. The method of claim 1, wherein the one or more user activities comprise an action and/or a non-action of the user.
5. The method of claim 1, wherein the one or more queries are determined to be in the user session based on at least one of:
the one or more queries are associated with similar topics;
an idle period of the user between any two consecutive queries of the one or more queries is less than a predetermined threshold; and
the one or more queries are received within a predetermined time period from the start of the user session.
6. The method of claim 1, wherein the score is calculated based on at least one of: personal information of the user and topics related to the one or more queries.
7. The method of claim 1, wherein:
the score is calculated based on a model; and
the model is generated by a machine learning method and trained with data relating to at least one of: information from one or more tags created by the user, user engagement information related to user activity of the user, and one or more metrics related to the user engagement information.
8. The method of claim 1, further comprising: determining a confidence level associated with the score, wherein the user satisfaction is determined based on the confidence level.
9. A system for assessing user satisfaction with respect to a user session, the system having at least one processor, a storage device, and a communication platform, the system comprising:
a query analysis unit configured to: receiving one or more queries from a user in a user session;
a user activity detection unit configured to: obtaining information about one or more user activities, each user activity relating to an operation on a content item, wherein the content item is associated with one of the one or more queries;
a user satisfaction determination unit configured to: calculating a score associated with the user session based at least in part on the one or more user activities; and
a user satisfaction report generating unit configured to: determining user satisfaction with the user session based on the score.
10. The system of claim 9, wherein:
the user satisfaction report generating unit is further configured to automatically generate a flag associated with the user session based on the score; and
the flag indicates whether the user is satisfied with the user session.
11. The system of claim 9, wherein the user satisfaction report generating unit is further configured to: predict, based on the score associated with the user session, a score associated with a new user session of the user prior to detecting user activity in the new user session.
12. The system of claim 9, wherein the one or more user activities include an action and/or a non-action of the user.
13. The system of claim 9, wherein the one or more queries are determined to be in the user session based on at least one of:
the one or more queries are associated with similar topics;
an idle period of the user between any two consecutive queries of the one or more queries is less than a predetermined threshold; and
the one or more queries are received within a predetermined time period from the start of the user session.
14. The system of claim 9, wherein the score is calculated based on at least one of: personal information of the user and topics related to the one or more queries.
15. The system of claim 9, wherein:
the score is calculated based on a model; and
the model is generated by a machine learning method and trained with data relating to at least one of: information from one or more tags created by the user, user engagement information related to user activity of the user, and one or more metrics related to the user engagement information.
16. The system of claim 9, wherein:
the user satisfaction determination unit is further configured to determine a confidence level associated with the score; and
the user satisfaction report generating unit is further configured to determine user satisfaction based on the confidence level.
17. A non-transitory machine-readable medium for evaluating user satisfaction with respect to a user session, the non-transitory machine-readable medium having information recorded thereon, wherein the information, when read by the machine, causes the machine to perform operations comprising:
receiving one or more queries from a user in a user session;
obtaining information about one or more user activities, each user activity relating to an operation on a content item, wherein the content item is associated with one of the one or more queries;
calculating a score associated with the user session based at least in part on the one or more user activities; and
determining user satisfaction with the user session based on the score.
18. The medium of claim 17, wherein the information, when read by the machine, further causes the machine to perform operations comprising: automatically generating a flag associated with the user session based on the score, wherein the flag indicates whether the user is satisfied with the user session.
19. The medium of claim 17, wherein the information, when read by the machine, further causes the machine to perform operations comprising: predicting, based on the score associated with the user session, a score associated with a new user session of the user prior to detecting user activity in the new user session.
20. The medium of claim 17, wherein the one or more user activities comprise an action and/or a non-action of the user.