HK40048810A - Method, system and article for behavioral pairing in a task assignment system - Google Patents
Description
This application is a divisional of the application having PCT application number PCT/IB2018/000897, international filing date of July 18, 2018, Chinese national application number 201880015611.X, and the invention title "Techniques for behavioral pairing in a task assignment system".
Cross Reference to Related Applications
This international patent application claims priority to U.S. patent application No. 15/837,911, filed December 11, 2017, the entire contents of which are incorporated herein by reference as if fully set forth herein.
Technical Field
The present disclosure relates generally to behavior pairing and, more particularly, to techniques for behavior pairing in a task allocation system.
Background
Typical task assignment systems algorithmically assign tasks arriving at a task assignment center to agents available to process those tasks. At times, an agent may become available and wait to be assigned a task. At other times, the center may have tasks wait in one or more queues until agents become available for assignment.
In some typical task distribution centers, tasks are distributed to ordered agents according to arrival time, and the agents receive ordered tasks based on the time when the agents become available. This strategy may be referred to as a "first-in-first-out", "FIFO", or "round robin" strategy. For example, in the "L2" environment, multiple tasks wait in a queue for allocation to an agent. When an agent becomes available, the task at the head of the queue will be selected for assignment to the agent.
Some task distribution systems prioritize certain types of tasks over other types of tasks. For example, some tasks may be high priority tasks while other tasks are low priority tasks. Under the FIFO strategy, high priority tasks will be allocated before low priority tasks. In some cases, some low priority tasks may have a high average latency, while high priority tasks are processed instead. Furthermore, agents that may be able to process low priority tasks more efficiently may instead end up being allocated to high priority tasks, resulting in sub-optimal overall performance in the task allocation system.
In view of the above, it may be appreciated that there is a need for a system that efficiently optimizes the application of behavioral pairing ("BP") policies in the L2 environment of a task assignment system.
Disclosure of Invention
Techniques for behavioral pairing in a task allocation system are disclosed. In one particular embodiment, the techniques may be realized as a method for behavioral pairing in a task allocation system, the method comprising: determining, by at least one computer processor communicatively coupled to the task distribution system and configured to operate in the task distribution system, a priority for each of a plurality of tasks; determining, by the at least one computer processor, an agent available for assignment to any of the plurality of tasks; and assigning, by the at least one computer processor, a first task of the plurality of tasks to the agent using a task assignment policy, wherein the first task has a lower priority than a second task of the plurality of tasks.
In accordance with other aspects of this particular embodiment, the plurality of tasks may include a number of tasks from a front of a queue of tasks.
In accordance with other aspects of this particular embodiment, the number of tasks is greater than 1 and less than 10.
According to other aspects of this particular embodiment, the method may further comprise: determining, by the at least one computer processor, an amount of choice to optimize for the task assignment policy; and determining, by the at least one computer processor, the number of tasks based on the amount of choice.
According to other aspects of this particular embodiment, the number of tasks is proportional to a size of the queue of tasks.
In accordance with other aspects of this particular embodiment, the number of tasks is proportional to the relative number of tasks of different priorities.
According to other aspects of this particular embodiment, the method may further include determining, by the at least one computer processor, that the first task of the plurality of tasks has exceeded an associated service level agreement.
In accordance with other aspects of this particular embodiment, the service level agreement may be a function of an estimated latency of the first task.
According to other aspects of this particular embodiment, the plurality of tasks may include a number of tasks from a front of a queue of tasks, and the service level agreement may be a function of that number of tasks.
In accordance with other aspects of this particular embodiment, at least one of the plurality of tasks may be a virtual task.
In accordance with other aspects of this particular embodiment, the task allocation policy may be a behavior pairing policy.
In another particular embodiment, the techniques may be realized as a system for behavioral pairing in a task distribution system comprising at least one computer processor communicatively coupled to and configured to operate in the task distribution system, wherein the at least one computer processor is further configured to perform the steps in the above-described method.
In another particular embodiment, the techniques may be realized as an article of manufacture for behavioral pairing in a task distribution system comprising a non-transitory processor-readable medium and instructions stored on the medium; wherein the instructions are configured to be readable from the medium by at least one computer processor communicatively coupled to the task allocation system and configured to operate in the task allocation system and thereby cause the at least one computer processor to operate so as to perform the steps of the above-described method.
The present invention will now be described in more detail with reference to specific embodiments thereof as illustrated in the accompanying drawings. While the present disclosure is described below with reference to specific embodiments, it should be understood that the present disclosure is not limited thereto. Those of ordinary skill in the art having access to the teachings herein will recognize additional implementations, modifications, and embodiments, as well as other fields of use, which are within the scope of the present disclosure as described herein, and with respect to which the present disclosure may be of significant utility.
Drawings
For a more complete understanding of this disclosure, reference is now made to the drawings, wherein like elements are designated by like numerals. These drawings should not be construed as limiting the present disclosure, but are intended to be illustrative only.
FIG. 1 shows a block diagram of a task allocation system according to an embodiment of the present disclosure.
FIG. 2 shows a flow diagram of a task assignment method according to an embodiment of the present disclosure.
Detailed Description
Typical task assignment systems algorithmically assign tasks arriving at a task assignment center to agents available to process those tasks. At times, an agent may become available and wait to be assigned a task. At other times, the center may have tasks wait in one or more queues until agents become available for assignment.
In some typical task distribution centers, tasks are distributed to ordered agents according to arrival time, and the agents receive ordered tasks based on the time when the agents become available. This strategy may be referred to as a "first-in-first-out", "FIFO", or "round robin" strategy. For example, in the "L2" environment, multiple tasks wait in a queue for allocation to an agent. When an agent becomes available, the task at the head of the queue will be selected for assignment to the agent.
Some task distribution systems prioritize certain types of tasks over other types of tasks. For example, some tasks may be high priority tasks while other tasks are low priority tasks. Under the FIFO strategy, high priority tasks will be allocated before low priority tasks. In some cases, some low priority tasks may have a high average latency, while high priority tasks are processed instead. Furthermore, agents that may be able to process low priority tasks more efficiently may instead end up being allocated to high priority tasks, resulting in sub-optimal overall performance in the task allocation system.
In view of the foregoing, it may be appreciated that there is a need for a system that efficiently optimizes the application of behavioral pairing ("BP") policies in the L2 environment of a task assignment system.
FIG. 1 shows a block diagram of a task assignment system 100 according to an embodiment of the present disclosure. The description herein describes network elements, computers, and/or components of systems and methods for benchmarking pairing policies in a task assignment system, which may include one or more modules. As used herein, the term "module" may be understood to refer to computing software, firmware, hardware, and/or various combinations thereof. However, a module is not to be construed as software that is not implemented on hardware or firmware or recorded on a non-transitory processor-readable recordable storage medium (i.e., the module itself is not software). Note that the modules are exemplary. Modules may be combined, integrated, separated, and/or duplicated to support various applications. Further, a function described herein as being performed at a particular module may be performed at one or more other modules and/or by one or more other devices in place of, or in addition to, the function performed at the particular module. Further, modules may be implemented across multiple devices and/or other components, local or remote to each other. Additionally, a module may be moved from one device and added to another device, and/or may be included in both devices.
As shown in FIG. 1, the task assignment system 100 may include a task assignment module 110. The task assignment system 100 may include switches or other types of routing hardware and software to help assign tasks among various agents, including queuing or switching components or other Internet-based, cloud-based, or network-based hardware or software solutions.
The task assignment module 110 may receive incoming tasks. In the example of FIG. 1, the task assignment system 100 receives m tasks over a given period, namely tasks 130A-130m. Each of the m tasks may be assigned to an agent of the task assignment system 100 for servicing or another type of task processing. In the example of FIG. 1, n agents are available during the given period, namely agents 120A-120n. m and n may be arbitrarily large finite integers greater than or equal to one. In a real-world task assignment system, such as a contact center, dozens, hundreds, etc. of agents may log into the contact center to interact with contacts during a shift, and the contact center may receive dozens, hundreds, thousands, etc. of contacts (e.g., calls) during the shift.
In some embodiments, the task allocation policy module 140 may be communicatively coupled to the task allocation system 100 and/or configured to operate in the task allocation system 100. The task assignment policy module 140 may implement one or more task assignment policies (or "pairing policies") for assigning individual tasks to individual agents (e.g., pairing contacts with a contact center agent).
A variety of different task assignment policies may be designed and implemented by the task assignment policy module 140. In some embodiments, a first-in-first-out ("FIFO") policy may be implemented, in which, for example, the longest-waiting agent receives the next available task (in the L1 environment) or the longest-waiting task is assigned to the next available agent (in the L2 environment). Other FIFO and FIFO-like policies may make assignments without relying on information specific to individual tasks or individual agents.
In other embodiments, a performance-based routing ("PBR") policy may be implemented that prioritizes higher-performing agents for task assignment. For example, under PBR, the highest-performing agent among the available agents receives the next available task. Other PBR and PBR-like policies may make assignments using information about specific agents, but they need not rely on information about specific tasks.
In other embodiments, a Behavioral Pairing (BP) policy may be used for optimally assigning tasks to agents using information about both the particular task and the particular agent. Various BP policies may be used, such as a diagonal model BP policy or a network flow BP policy. These task assignment strategies and other strategies are described in detail for the contact center context in, for example, U.S. patent No.9,300,802 and U.S. patent application No.15/582,223, which are incorporated herein by reference.
In some embodiments, the history allocation module 150 may be communicatively coupled to the task allocation system 100 and/or configured to operate in the task allocation system 100 via other modules, such as the task allocation module 110 and/or the task allocation policy module 140. The history allocation module 150 may be responsible for various functions such as monitoring, storing, retrieving, and/or outputting information regarding the agent task allocations that have been made. For example, the history allocation module 150 may monitor the task allocation module 110 to collect information about task allocation over a given period. Each record of historical task assignments may include information such as an agent identifier, a task or task type identifier, result information, or a pairing policy identifier (i.e., an identifier indicating whether the task assignment was made using a BP pairing policy or some other pairing policy such as a FIFO or PBR pairing policy).
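For illustration only, one way such a historical assignment record could be represented is sketched below; the field names (`agent_id`, `task_id`, `outcome`, `pairing_policy`) are assumptions for this sketch, not a schema from this disclosure:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AssignmentRecord:
    """One historical task assignment. Field names and types are
    illustrative assumptions, not a schema from this disclosure."""
    agent_id: str                     # agent identifier
    task_id: str                      # task or task-type identifier
    outcome: Optional[float] = None   # result information, if recorded
    pairing_policy: str = "BP"        # e.g., "BP", "FIFO", or "PBR"

# A record as the history module might store it after one assignment.
record = AssignmentRecord("agent-120A", "task-130A", outcome=1.0, pairing_policy="BP")
print(record.pairing_policy)  # BP
```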
In some embodiments and for some contexts, additional information may be stored. For example, in a contact center context, the history allocation module 150 may also store information about the time the call started, the time the call ended, the dialed phone number, and the caller's phone number. For another example, in a dispatch center (e.g., "truck roll") context, the history allocation module 150 may also store information about the time the driver (i.e., live agent) left the dispatch center, recommended routes, routes taken, estimated travel time, actual travel time, the amount of time spent processing customer tasks at the customer site, and the like.
In some embodiments, the history allocation module 150 may generate a pairing model or similar computer processor-generated model based on a set of history allocations for a period of time (e.g., past week, past month, past year, etc.), which may be used by the task allocation policy module 140 to make task allocation recommendations or instructions to the task allocation module 110. In other embodiments, the historical allocation module 150 may send the historical allocation information to another module, such as the task allocation policy module 140 or the benchmarking module 160.
In some embodiments, the benchmarking module 160 may be communicatively coupled to the task assignment system 100 and/or configured to operate in the task assignment system 100 via other modules, such as the task assignment module 110 and/or the history allocation module 150. The benchmarking module 160 may benchmark the relative performance of two or more pairing policies (e.g., FIFO, PBR, BP, etc.) using historical assignment information, which may be received from, for example, the history allocation module 150. In some embodiments, the benchmarking module 160 may perform other functions, such as establishing a benchmarking schedule for cycling among the various pairing policies, tracking cohorts (e.g., base and measurement groups of historical assignments), and the like. Techniques for benchmarking and other functions performed by the benchmarking module 160 for various task assignment policies and various contexts are described throughout this disclosure. Benchmarking is described in detail for the contact center context in, for example, U.S. patent No. 9,712,676, which is incorporated herein by reference.
In some embodiments, the benchmarking module 160 may output, or otherwise report or use, the relative performance measurements. The relative performance measure may be used to evaluate the quality of a task assignment policy, for example, to determine whether a different task assignment policy (or a different pairing model) should be used, or to measure the overall performance (or performance gain) achieved within the task assignment system 100 while it was optimized or configured to use one task assignment policy instead of another.
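A minimal sketch of how relative performance might be computed from policy-labeled historical assignments; the policy labels and outcome scores below are invented placeholders:

```python
def benchmark(assignments):
    """Average the recorded outcome score per pairing policy so the
    policies' relative performance can be compared."""
    totals, counts = {}, {}
    for policy, outcome in assignments:
        totals[policy] = totals.get(policy, 0.0) + outcome
        counts[policy] = counts.get(policy, 0) + 1
    return {policy: totals[policy] / counts[policy] for policy in totals}

# Hypothetical history: alternating benchmark cycles of BP and FIFO.
history = [("BP", 0.8), ("FIFO", 0.6), ("BP", 0.9), ("FIFO", 0.5), ("BP", 0.7)]
averages = benchmark(history)
print(averages["BP"] > averages["FIFO"])  # True
```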
In some task assignment systems, a relatively large number of tasks may build up in a queue while waiting to be assigned as agents become available. Consider a highly simplified example in which nine tasks wait in the queue. Three of these tasks are high priority tasks: H1, H2, and H3; and six of these tasks are low priority tasks: L1, L2, L3, L4, L5, and L6.
In some task distribution systems, tasks of different priorities may be organized (within the system, or at least conceptually) in different priority queues:
high priority queue: h1, H2, H3
Low priority queue: l1, L2, L3, L4, L5, L6
In this example, each priority queue is ordered chronologically by each task's arrival time (e.g., the arrival of a contact or caller in a contact center system). H1 is the longest-waiting high priority task, H3 is the shortest-waiting high priority task, L1 is the longest-waiting low priority task, L6 is the shortest-waiting low priority task, and so on. In some embodiments, one or more of these tasks may be "virtual tasks". For example, in a contact center context, a caller may request a callback and disconnect from the contact center, while the caller's position and priority are maintained in the queue.
In other task assignment systems, tasks of different priorities may be mixed (within the system, or at least conceptually) in a single chronologically ordered queue, except that higher priority tasks are inserted into the queue ahead of lower priority tasks:
Queue: H1, H2, H3, L1, L2, L3, L4, L5, L6
In this example, even though L1 is the longest-waiting task of all nine tasks, the three high priority tasks that arrived later in time have been inserted into the queue ahead of L1.
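The insertion behavior above, in which a later-arriving higher priority task jumps ahead of all lower priority tasks but stays behind earlier tasks of its own priority, can be sketched as follows; the `Task` structure and the numeric priority encoding are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    priority: int  # smaller number means higher priority (0 = high, 1 = low)
    arrival: int   # arrival order

def insert_task(queue, task):
    """Keep the queue sorted by (priority, arrival): insert the new task
    ahead of every lower priority task, behind earlier equal-priority tasks."""
    i = 0
    while i < len(queue) and (queue[i].priority, queue[i].arrival) <= (task.priority, task.arrival):
        i += 1
    queue.insert(i, task)

queue = []
# Low priority tasks L1-L3 arrive first; high priority H1-H3 arrive later.
for t in [Task("L1", 1, 1), Task("L2", 1, 2), Task("L3", 1, 3),
          Task("H1", 0, 4), Task("H2", 0, 5), Task("H3", 0, 6)]:
    insert_task(queue, t)

print([t.name for t in queue])  # ['H1', 'H2', 'H3', 'L1', 'L2', 'L3']
```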
A typical FIFO strategy may assign all high priority tasks before assigning any low priority tasks, even when an agent capable of processing lower priority tasks more efficiently than higher priority tasks becomes available, leaving the low priority tasks waiting indefinitely in the queue. This drawback may be particularly detrimental if higher priority contacts continue to arrive at the task assignment system.
In some task allocation systems, a Service Level Agreement (SLA) may be in place that imposes a limit on how long any one task is expected to wait to be allocated. Some examples of SLAs include fixed times (e.g., 10 seconds, 30 seconds, 3 minutes, etc.); estimated Wait Time (EWT) plus some fixed time (e.g., 1 minute 45 seconds EWT plus 30 seconds); and multipliers of EWT (e.g., 150% of EWT, or 1.2 × EWT).
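The three example SLA forms listed above might be computed as in this sketch; the function and parameter names are illustrative, and all times are in seconds:

```python
def sla_threshold(kind, ewt=None, fixed=0.0, multiplier=1.0):
    """Return the maximum expected wait (in seconds) under one of the
    SLA forms described above.
    kind: 'fixed'     -> a fixed time (e.g., 30 s)
          'ewt_plus'  -> EWT plus a fixed time (e.g., EWT + 30 s)
          'ewt_times' -> a multiplier of EWT (e.g., 1.2 * EWT)
    """
    if kind == "fixed":
        return fixed
    if kind == "ewt_plus":
        return ewt + fixed
    if kind == "ewt_times":
        return multiplier * ewt
    raise ValueError(f"unknown SLA kind: {kind}")

print(sla_threshold("fixed", fixed=30))              # 30
print(sla_threshold("ewt_plus", ewt=105, fixed=30))  # 135 (1 min 45 s EWT + 30 s)
print(sla_threshold("ewt_times", ewt=100, multiplier=1.2))  # 120.0
```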
In these task assignment systems, if the SLA is exceeded for a certain lower priority task (sometimes referred to as a "blown SLA"), a FIFO policy may eventually assign that task. However, low priority tasks may still end up waiting in the queue far longer than the average expected wait time, and the resulting agent assignments may still be inefficient.
In some embodiments, a more efficient and effective task assignment policy is a BP policy. Under a BP policy, up to all nine tasks may be considered for assignment when an agent becomes available. The BP policy may still take the priority level of each task into account, but it may sometimes prefer to assign a lower priority task before a higher priority task if information about the tasks and the available agent indicates that such a pairing is optimal for the performance of the task assignment system and for achieving a desired target task utilization or assignment rate.
The degree to which a BP policy honors priority levels may be viewed as a spectrum. At one extreme of the spectrum, the BP policy may consider all tasks in the queue (or all tasks across all priority queues), giving relatively little or no weight to each task's priority:
Queue: T1, T2, T3, T4, T5, T6, T7, T8, T9
In this example, the BP policy may enable efficient, optimal task allocation. However, one possible consequence of this strategy is that some high priority tasks may eventually wait longer than they might under the FIFO strategy, since lower priority tasks are assigned first.
Near the other end of the spectrum, the BP policy may consider all tasks in the highest priority queue:
high priority queue: h1, H2, H3
In this example, the BP strategy still enables more efficient, optimal task assignment than the FIFO strategy. Under the FIFO policy, tasks would be assigned in queue order regardless of which agent becomes available: first H1, then H2, and finally H3. The BP policy, by contrast, considers information about the three tasks and the available agent to select the more efficient pairing, even though the assigned high priority task may not be the longest-waiting high priority task. However, one possible consequence of this strategy is that low priority tasks may wait just as long as they would under the FIFO strategy, and opportunities to pair agents effectively with low priority tasks will be missed.
In some embodiments, a hybrid approach may be used that honors task priorities and wait times while also timely processing at least some of the longer-waiting, lower priority tasks. Some of these embodiments may be referred to as "top N" or "head N" because they consider the first N tasks in the priority queue.
For example, if N = 6, such a BP policy would select among the first six tasks in the following queue:
Queue: H1, H2, H3, L1, L2, L3, L4, L5, L6
In this example, the BP policy may assign any one of the three high priority tasks or any one of the three longest-waiting low priority tasks when an agent becomes available.
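A minimal sketch of this top-N selection, assuming a hypothetical per-task pairing score for the newly available agent; the scores themselves are invented for illustration:

```python
def assign_top_n(queue, pairing_score, n):
    """Consider only the first n tasks of the priority-ordered queue and
    assign the one with the best (hypothetical) pairing score."""
    candidates = queue[:n]
    best = max(candidates, key=pairing_score)
    queue.remove(best)
    return best

queue = ["H1", "H2", "H3", "L1", "L2", "L3", "L4", "L5", "L6"]
# Invented scores: this agent is predicted to handle L2 especially well.
scores = {"H1": 0.4, "H2": 0.3, "H3": 0.5, "L1": 0.6, "L2": 0.9, "L3": 0.2}
chosen = assign_top_n(queue, lambda t: scores.get(t, 0.0), 6)
print(chosen)  # L2
```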
In some embodiments, N may be a predetermined and/or fixed value. In other embodiments, N may be dynamically determined for each pairing. For example, the BP policy may determine the size of N so as to represent a selected amount or degree of choice (e.g., 3, 6, 10, 20, etc.). As another example, N may be a function of the number of tasks waiting in the queue (e.g., one quarter, one third, one half, etc., of the number of tasks in the queue). As another example, N may be a function of the relative number of tasks of different priority levels.
As another example, if the BP policy encounters a task whose SLA has been blown at position i in the queue, the BP policy may consider only the first i tasks, for i ≤ N. In this example, if L1 (the fourth task) has waited longer than its SLA allows, the BP policy may consider H1, H2, H3, and L1, ignoring L2 and L3, because it is preferable to pair the longer-waiting L1 before pairing L2 or L3.
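This truncate-at-the-first-blown-SLA rule can be sketched as follows; the `blown` predicate is an illustrative stand-in for a real SLA check:

```python
def candidates_with_sla(queue, blown, n):
    """If the i-th task (i <= n) has blown its SLA, consider only the
    first i tasks, so the longest-waiting blown task is paired sooner."""
    for i, task in enumerate(queue[:n], start=1):
        if blown(task):
            return queue[:i]
    return queue[:n]

queue = ["H1", "H2", "H3", "L1", "L2", "L3", "L4", "L5", "L6"]
# L1 (fourth in the queue) has waited past its SLA, so L2 and L3 are ignored.
print(candidates_with_sla(queue, lambda t: t == "L1", 6))  # ['H1', 'H2', 'H3', 'L1']
```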
In some embodiments, the BP policy may use an SLA based on tracking how many times each task has been proposed for selection (i.e., how many times the task has appeared among the top N tasks):
1. H1(1), H2(1), H3(1), L1(1), L2(1), L3(1) → H3 is selected
2. H1(2), H2(2), L1(2), L2(2), L3(2), L4(1) → L2 is selected
3. H1(3), H2(3), L1(3), L3(3), L4(2), L5(1) → H1 is selected
4. H2(4), L1(4), L3(4), L4(3), L5(2), L6(1)
If the SLA is based on whether a task has appeared more than three times in the top 6, then by the fourth round three tasks have blown their SLAs: H2, L1, and L3 have each now appeared for the fourth time. In these embodiments, the BP policy may preferentially pair these three tasks before the other tasks that have appeared three or fewer times in the top 6 (i.e., L4, L5, and L6).
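The occurrence counting across the four rounds above can be reproduced with this sketch; the sequence of picks (H3, L2, H1) is taken from the example, and everything else is illustrative:

```python
from collections import Counter

def run_rounds(queue, picks, n):
    """Count how many times each task appears among the top n across
    successive rounds; each round ends with one task being assigned."""
    seen = Counter()
    for pick in picks:
        for task in queue[:n]:
            seen[task] += 1
        queue.remove(pick)
    for task in queue[:n]:
        seen[task] += 1  # appearances entering the next round
    return seen

queue = ["H1", "H2", "H3", "L1", "L2", "L3", "L4", "L5", "L6"]
seen = run_rounds(queue, ["H3", "L2", "H1"], 6)
blown = sorted(t for t in queue[:6] if seen[t] >= 4)
print(blown)  # ['H2', 'L1', 'L3'] (each has now appeared a fourth time)
```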
In some embodiments, the top-N-based SLA may be a function of N. For example, a task may appear in the top N up to N/2, 2N, or 5N times, etc., before its SLA is considered blown.
This type of SLA may be particularly useful in real-world scenarios in which higher priority tasks continue to arrive at the queue and would otherwise be assigned ahead of longer-waiting lower priority tasks that have already appeared in the top N more times than the top-N SLA normally expects or allows.
In some embodiments, individual tasks or task types may have different SLAs than other tasks or other types of tasks. The different SLAs may be based on any of the above techniques, such as time-based SLAs or SLAs based on the number of times each task has been included in the top N or otherwise evaluated. For example, a first task in the queue may have an SLA of 2N, while a second task in the queue may have an SLA of 3N. Determining which SLA each task has may be based on information about the task, information about one or more available agents, or both.
In some embodiments, the SLA for a task may be dynamic, changing as the task's wait time increases or as the number of times the task has been evaluated in the top N increases.
FIG. 2 illustrates a task assignment method 200 according to an embodiment of the disclosure.
The task assignment method 200 may begin at block 210. At block 210, the number of tasks to include in a plurality of tasks (i.e., the size of the plurality) may be determined. In some embodiments, the number of tasks may be equal to the size of the queue of tasks. For example, in a contact center context, if 20 contacts are waiting in a queue to connect to an agent, the plurality of tasks will include all 20 contacts from the queue. In other embodiments, the number of tasks may be a fixed or predetermined number of tasks taken from the front or head of the queue. For example, if the number of tasks is 10, the plurality of tasks may include the first ten tasks (e.g., contacts) from a queue of size 20. In other embodiments, the number of tasks may be dynamically determined according to any of the techniques described above, such as a function of queue size (e.g., a fraction, percentage, or proportion), a function of the relative number of tasks of different priority levels, a function of the amount of choice for the behavioral pairing policy, and so on. In some embodiments, this number of tasks may be referred to as "N", and the plurality of tasks may be referred to as the "top N" tasks.
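The alternative ways of determining N described above can be sketched as follows; the mode names and default values are illustrative assumptions:

```python
def determine_n(queue_len, mode="fixed", fixed=10, fraction=0.5):
    """Determine the top-N size: the whole queue, a fixed or
    predetermined number, or a proportion of the queue size."""
    if mode == "all":
        n = queue_len
    elif mode == "fixed":
        n = fixed
    elif mode == "fraction":
        n = max(1, int(queue_len * fraction))
    else:
        raise ValueError(f"unknown mode: {mode}")
    return min(n, queue_len)  # never consider more tasks than exist

print(determine_n(20, "all"))                      # 20
print(determine_n(20, "fixed", fixed=10))          # 10
print(determine_n(20, "fraction", fraction=0.25))  # 5
```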
The task assignment method 200 may proceed to block 220. At block 220, a priority may be determined for each of the plurality of tasks (e.g., the top N tasks). For example, a first portion of the plurality of tasks may be designated as "high priority" and a second portion may be designated as "low priority". In some embodiments, there may be any number of different priorities and identifiers for those priorities. In some embodiments, the task assignment system may maintain a separate queue of tasks for each priority. In other embodiments, the task assignment system may maintain a single queue of tasks ordered first by priority and, in some cases, second by arrival time or another chronological ordering. In either case, the task assignment method 200 may consider all tasks or the first N tasks, whether tasks are held in a single priority queue or in multiple priority queues.
The task assignment method 200 may proceed to block 230. At block 230, in some embodiments, it may be determined, for at least one of the plurality of tasks, whether an SLA has been exceeded. In some embodiments, a task assignment policy or task assignment system will assign an agent to a task that has exceeded its SLA (e.g., the longest-waiting task that has exceeded or blown its SLA). In various embodiments, SLAs may be defined or otherwise determined according to any of the techniques described above, such as a function of a fixed time, EWT, or the number of times a given task has been available for assignment among the top N. In other embodiments, there may not be an SLA associated with the task assignment policy, and the task assignment method 200 may continue without determining or checking for any exceeded SLAs.
The task assignment method 200 may proceed to block 240. At block 240, an agent available for assignment to any of the plurality of tasks may be determined. For example, in the L2 environment, a single agent becomes available for assignment. In other environments, such as the L3 environment, several agents may be available for assignment.
The task assignment method 200 may proceed to block 250. At block 250, a task of the plurality of tasks may be assigned to the agent using a task assignment policy. For example, if the task assignment policy is a BP policy, the BP policy may consider information about each of the plurality of tasks and information about the agent to determine which assignment is expected to optimize the overall performance of the task assignment system. In some cases, the optimal assignment may be the longest-waiting, highest-priority task, as it would be under a FIFO or PBR policy. However, in other cases, the optimal assignment may be a longer-waiting and/or lower-priority task. Even in these instances, a pairing with lower expected instant performance may be expected to result in higher overall performance of the task assignment system, while in some embodiments achieving a balanced or otherwise targeted task utilization (e.g., normalizing or balancing the average wait time across all tasks, or across all tasks within the same priority).
In some embodiments, if any task has exceeded its SLA, the task assignment policy or task assignment system may prioritize the assignment of tasks with exceeded SLAs (such as the longest-waiting and/or highest-priority task whose SLA has been exceeded).
In some embodiments, the task allocation system may cycle between multiple task allocation policies (e.g., cycle between a BP policy and a FIFO or PBR policy). In some of these embodiments, the task allocation system may benchmark the relative performance of multiple task allocation policies.
After assigning the task to the agent, the task assignment method 200 may end.
In this regard, it should be noted that techniques for behavioral pairing in a task allocation system according to the present disclosure as described above may involve, to some extent, the processing of input data and the generation of output data. The input data processing and output data generation may be implemented in hardware or software. For example, certain electronic components may be employed in a behavioral pairing module or similar or related circuitry for implementing the functionality associated with techniques for behavioral pairing in a task allocation system according to the present disclosure as described above. Alternatively, one or more processors operating in accordance with instructions may implement the functionality associated with techniques for behavioral pairing in a task allocation system in accordance with the present invention as described above. If this is the case, it is also within the scope of the disclosure that the instructions may be stored on one or more non-transitory processor-readable storage media (e.g., a magnetic disk or other storage medium) or transmitted to the one or more processors via one or more signals embodied in one or more carrier waves.
The present disclosure is not to be limited in scope by the specific embodiments described herein. Indeed, other various embodiments and modifications of the disclosure, in addition to those described herein, will be apparent to those of ordinary skill in the art from the foregoing description and accompanying drawings. Accordingly, such other embodiments and modifications are intended to fall within the scope of the present disclosure. Moreover, although the present disclosure has been described herein with respect to at least one particular implementation in at least one particular environment for at least one particular purpose, those of ordinary skill in the art will recognize that its usefulness is not limited thereto and that the present disclosure may be beneficially implemented in any number of environments for any number of purposes. Accordingly, the claims set forth below should be construed in view of the full breadth and spirit of the present disclosure as described herein.
Claims (15)
1. A method, comprising:
determining, by at least one computer processor communicatively coupled to a contact center system and configured to operate in the contact center system, an integer greater than 1;
determining, by the at least one computer processor, an ordered set of contacts;
applying, by the at least one computer processor, a pairing policy configured to truncate the ordered set of contacts to a length equal to the integer; and
selecting, by the at least one computer processor, a contact of the truncated ordered set of contacts for pairing to an agent based on the pairing policy.
2. The method of claim 1, wherein a higher value of the determined integer reduces a likelihood that the pairing policy selects a longest-waiting contact of the truncated ordered set of contacts.
3. The method of claim 1, wherein the selecting comprises selecting a lower-priority contact while the truncated ordered set of contacts comprises an available higher-priority contact.
4. The method of claim 1, wherein the integer includes one of 3, 6, 10, and 20.
5. The method of claim 1, wherein the integer comprises at least 10.
6. The method of claim 1, wherein the truncated ordered set of contacts includes at least one contact associated with a lower priority and at least one contact associated with a higher priority.
7. The method of claim 1, wherein the determined integer is based on a total number of contacts in the ordered set of contacts.
8. A system, comprising:
at least one computer processor communicatively coupled to a contact center system and configured to operate in the contact center system, wherein the at least one computer processor is further configured to:
determine an integer greater than 1;
determine an ordered set of contacts;
apply a pairing policy configured to truncate the ordered set of contacts to a length equal to the integer; and
select a contact of the truncated ordered set of contacts for pairing to an agent based on the pairing policy.
9. The system of claim 8, wherein a higher value of the determined integer reduces a likelihood that the pairing policy selects a longest-waiting contact of the truncated ordered set of contacts.
10. The system of claim 8, wherein selecting the contact comprises selecting a lower-priority contact while the truncated ordered set of contacts comprises an available higher-priority contact.
11. The system of claim 8, wherein the integer includes one of 3, 6, 10, and 20.
12. The system of claim 8, wherein the integer comprises at least 10.
13. The system of claim 8, wherein the truncated ordered set of contacts includes at least one contact associated with a lower priority and at least one contact associated with a higher priority.
14. The system of claim 8, wherein the determined integer is based on a total number of contacts in the ordered set of contacts.
15. An article of manufacture, comprising:
a non-transitory computer processor-readable medium; and
instructions stored on the medium;
wherein the instructions are configured to be readable from the medium by at least one computer processor communicatively coupled to a contact center system and configured to operate in the contact center system, and thereby cause the at least one computer processor to operate to:
determine an integer greater than 1;
determine an ordered set of contacts;
apply a pairing policy configured to truncate the ordered set of contacts to a length equal to the integer; and
select a contact of the truncated ordered set of contacts for pairing to an agent based on the pairing policy.
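The claimed method can be illustrated with a minimal sketch. The ordering of the contacts and the policy's scoring function are hypothetical; only the shape of the steps (determine an integer greater than 1, truncate the ordered set of contacts to that length, select a contact from the truncated set) follows the claims.

```python
# Minimal sketch of the claimed pairing method. The ordering key (priority,
# then wait time) and the fit-based scoring function are illustrative
# assumptions, not part of the claims.
def pair_contact(contacts, l_value, score):
    """contacts: list ordered head-first (e.g., by priority, then wait time).
    Truncate to length l_value, then let the pairing policy pick from the
    truncated set -- possibly skipping the longest-waiting head contact."""
    assert l_value > 1, "the determined integer must be greater than 1"
    truncated = contacts[:l_value]      # truncate the ordered set to length L
    return max(truncated, key=score)    # policy selects within the truncation

ordered = [
    {"id": "c1", "wait": 300, "priority": 2, "fit": 0.30},
    {"id": "c2", "wait": 200, "priority": 2, "fit": 0.70},
    {"id": "c3", "wait": 100, "priority": 1, "fit": 0.90},
    {"id": "c4", "wait": 50,  "priority": 1, "fit": 0.95},
]
# With L = 3, contact c4 falls outside the truncated set despite having the
# best fit, so the policy selects c3 (a lower-priority contact even though a
# higher-priority contact, c1 or c2, remains available).
print(pair_contact(ordered, 3, score=lambda c: c["fit"])["id"])  # → c3
```

Note how a larger L makes it more likely the policy passes over the longest-waiting head contact (claim 2), and how a truncated set can mix priorities (claim 6).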
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US15/837,911 | 2017-12-11 | | |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| HK40048810A true HK40048810A (en) | 2021-12-10 |
| HK40048810B HK40048810B (en) | 2025-04-17 |
Similar Documents
| Publication | Title |
|---|---|
| CN110574010B | Techniques for behavior pairing in a task allocation system |
| CN112352222B | Techniques for adapting behavioral pairings to runtime conditions in task assignment systems |
| KR20220140769A | Techniques for sharing control with an internal pairing system to assign tasks between an external pairing system and a task assignment system |
| HK40048810A | Method, system and article for behavioral pairing in a task assignment system |
| HK40048809A | Method, system and article of manufacture for pairing in a contact center system |
| HK40050018A | Method and system for behavioral pairing in a task assignment system |
| KR102847617B1 | Techniques for pairing contacts and agents in a contact center system |
| HK40011158A | Techniques for behavioral pairing in a task assignment system |
| HK40050018B | Method and system for behavioral pairing in a task assignment system |
| HK40048809B | Method, system and article of manufacture for pairing in a contact center system |
| HK40048810B | Method, system and article for behavioral pairing in a task assignment system |
| HK40076444A | Techniques for assigning tasks in a task assignment system with an external pairing system |
| HK40040893A | Techniques for pairing contacts and agents in a contact center system |
| HK40076155A | Techniques for sharing control of assigning tasks between an external pairing system and a task assignment system with an internal pairing system |
| HK40036993A | Techniques for adapting behavioral pairing to runtime conditions in a task assignment system |
| HK40036993B | Techniques for adapting behavioral pairing to runtime conditions in a task assignment system |