HK1182864B - Techniques for dynamic voice menus
Description
Technical Field
The present invention relates to techniques for implementing dynamic voice menus.
Background
A voice menu system may provide a telephone-based method of interacting with a computer system. The use of traditional voice menu systems, in which all users receive the same static menu, has increased with the increased use of computer systems for performing administrative tasks and the increased quality of text-to-speech conversion. However, while static menus may be suitable where it is desirable for calling users to conform to a common set of desired functions, such menus may be difficult to produce in a manner that supports a wide range of functions. Furthermore, making changes to static voice menus may require custom coding, increasing the cost of making changes and reducing the utility of providing customized voice menus to individual users. It is with respect to these and other considerations that the present improvements are needed.
Disclosure of Invention
The following presents a simplified summary in order to provide a basic understanding of some novel embodiments described herein. This summary is not an extensive overview, and it is not intended to identify key/critical elements or to delineate the scope of the invention. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that is presented later.
Embodiments are generally directed to techniques for dynamic voice menus. Some embodiments are particularly directed to techniques for dynamically compiled user-customized voice menus. In one embodiment, for example, an apparatus may comprise: an endpoint component for receiving an incoming call from a user, the endpoint component operative to identify an incoming telephone number of the incoming call; a menu retrieval component for determining a voice menu based on the incoming telephone number; and a menu execution component for executing the voice menu for the user. In some embodiments, the menu retrieval component may be operative to identify a customized voice menu specific to the user and load the customized voice menu. In some embodiments, loading the customized voice menu may include retrieving a customized voice script specific to the user in response to the received incoming call and compiling the customized voice script to generate the customized voice menu. Other embodiments are also described and claimed.
To the accomplishment of the foregoing and related ends, certain illustrative aspects are described herein in connection with the following description and the annexed drawings. These aspects are indicative of various ways in which the principles disclosed herein can be practiced, and all aspects and equivalents thereof are intended to be within the scope of the claimed subject matter. Other advantages and novel features may become apparent from the following detailed description when considered in conjunction with the drawings.
Drawings
FIG. 1 illustrates an embodiment of a voice menu system.
FIG. 2 illustrates one embodiment of a first logic flow of the voice menu system of FIG. 1.
FIG. 3 illustrates one embodiment of a second logic flow of the voice menu system of FIG. 1.
FIG. 4 illustrates an embodiment of a centralized system for the voice menu system of FIG. 1.
FIG. 5 illustrates an embodiment of a distributed system for the voice menu system of FIG. 1.
FIG. 6 illustrates an embodiment of a computing architecture.
FIG. 7 illustrates an embodiment of a communication architecture.
Detailed Description
Embodiments are directed to techniques for dynamic voice menus. Conventional voice menu services or Interactive Voice Response (IVR) systems typically require customized hardware running pre-compiled static voice menus. To the extent that conventional voice menu services can provide user-specific interactions, this is generally the result of a generic voice menu script performing certain data retrieval tasks, such as looking up billing information, based on user-provided information, such as personal identification numbers (PINs) entered through dual-tone multi-frequency (DTMF) signaling or other push-button voice data input techniques. Conventional voice menu systems fail to provide the user with the ability to receive both customized information and customized logic flows.
In contrast, various embodiments may allow a user to utilize a customized voice menu generated from a customized voice script. Both the customized menu and the script may be user-specific, such that the menu and script are only for that particular user. Further, the customized voice script may be created by a user, such as by using a drag-and-drop user interface in which customizable voice script components can be linked together to create the customized voice script. For example, a user may decide that they wish their customized voice menu to have a specific menu option that, when selected, causes the voice menu system to provide them with the most recent order date of their largest customer, such as through a text-to-speech (TTS) conversion. This would give the user the ability to quickly determine whether any large customer has failed to place a recent renewal order, and therefore whether a telephone call is warranted to ask whether resupply is necessary. In some cases, the user may decide to include a second menu option for initiating a call directly to whichever large customer has gone the longest without contact (whether an order received from the customer or a call from the user). It should be appreciated that if the voice menu system has access to data from a Customer Relationship Management (CRM) system that stores customer and business information, then voice menu options of considerable complexity may be constructed that allow the user to perform complex, customized tasks related to their needs.
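By way of illustration only, the following Python sketch shows how a customized menu option of the kind described above might query a stand-in for a CRM data repository for the largest customer's most recent order date and read the result back through text-to-speech. The data layout, function names, and TTS placeholder are hypothetical assumptions and do not describe any particular implementation.

```python
from datetime import date

# Hypothetical in-memory stand-in for a CRM data repository.
CRM_ORDERS = {
    "Contoso Ltd.":  {"total_revenue": 1_200_000, "last_order": date(2012, 11, 2)},
    "Fabrikam Inc.": {"total_revenue": 800_000,   "last_order": date(2012, 9, 17)},
}

def largest_customer_last_order(orders):
    """Return (customer name, last order date) for the highest-revenue customer."""
    name = max(orders, key=lambda c: orders[c]["total_revenue"])
    return name, orders[name]["last_order"]

def speak(text):
    """Placeholder for a text-to-speech (TTS) conversion step."""
    print(f"[TTS] {text}")

def menu_option_largest_customer(orders):
    """Customized menu option: report the most recent order date of the largest customer."""
    name, last_order = largest_customer_last_order(orders)
    speak(f"The most recent order from {name} was placed on {last_order:%B %d, %Y}.")

menu_option_largest_customer(CRM_ORDERS)
```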
In some embodiments, the customized voice script may be compiled only when it is needed. A voice menu system may be distributed to a large number of users, each with customized voice scripts to manage. As changes are made to the various systems to which scripts may be linked (such as data repositories, CRM systems, or telephone networking bridge devices), pre-compiled scripts would need to be recompiled to reflect the changed functionality, which would require recompiling every script for every user. In addition, some menu systems may utilize multiple computing devices to handle the process of receiving or making calls and executing voice menus. In some environments, these computing devices may be heterogeneous, such as where different systems are used to process Plain Old Telephone Service (POTS) and Voice over Internet Protocol (VOIP) telephone calls. On-demand compilation allows each type of system to compile a customized voice script into a customized voice menu linked against the libraries appropriate to that system.
In some embodiments, the system may be used to persist the user's state in the voice menu in the event of a disconnection, so that a second call made after the disconnection may resume the user's interaction in the voice menu. Voice menus are back-and-forth interactions between the voice menu system and the user based on defined logic flows. A user can be said to have a state when using a voice menu, which state consists of the user's position in the defined logic flow and the values of any information retrieved and received by the voice menu system that can still affect the user's path through the voice menu system. For example, a user may initiate a call to a voice menu system and receive prompts to "dial 1 to receive billing information" and "dial 2 to receive delivery status information". A user who enters "1", proceeds through the logic flow to various options related to billing information, and is then disconnected (e.g., due to a lost cellular telephone signal) may reconnect to the system and resume at his location among the billing information options without having to re-enter previous information, such as a selection to continue in Spanish.
As discussed above, in some embodiments, multiple computing devices may be used to handle the process of receiving or making a call and executing a voice menu. Thus, in some embodiments, the persisted state of the disconnected call may be stored at a location accessible to all of the plurality of computing devices, such as by using a cloud storage solution. This may give any of the multiple computing devices the ability to process a second incoming call from the disconnected user and restore the user's position in the logic flow of the voice menu by retrieving the stored state from the common storage.
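A minimal sketch, assuming a dictionary-backed stand-in for the shared cloud storage, of how the user state described above might be represented, persisted, and later restored by any of the call-handling devices; all names are illustrative assumptions.

```python
import json
from dataclasses import dataclass, field, asdict

# Stand-in for a cloud storage location shared by all call-handling devices.
SHARED_STORE = {}

@dataclass
class CallState:
    """User state in a voice menu: position in the logic flow plus collected values."""
    incoming_number: str
    menu_position: str                              # node identifier in the menu's logic flow
    collected: dict = field(default_factory=dict)   # user input and retrieved data still in effect

def persist_state(state: CallState) -> None:
    """Persist state keyed by telephone number so any device can restore it later."""
    SHARED_STORE[state.incoming_number] = json.dumps(asdict(state))

def restore_state(incoming_number: str):
    """Return the persisted state for a reconnecting caller, or None if there is none."""
    raw = SHARED_STORE.get(incoming_number)
    return CallState(**json.loads(raw)) if raw else None

# A caller is disconnected after reaching the billing options with a language preference set.
persist_state(CallState("+15551234567", "billing_options", {"language": "es"}))

# A later call from the same number, possibly handled by a different device, resumes.
resumed = restore_state("+15551234567")
print(resumed.menu_position, resumed.collected)
```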
In some embodiments, multiple forms of user interaction with a voice menu system may be supported. For example, a user may be able to input information over the telephone using DTMF data input or by speaking. Thus, in some embodiments, as part of executing a voice menu, a voice menu system may be used to perform speech-to-text (STT) conversion. Alternatively or additionally, as part of executing the voice menu, the voice menu system may be used to recognize certain phrases, such as where the voice menu specifies that certain phrases match certain menu options. Continuing with the previous example, the user may indicate that he wishes to continue in English by saying "English" and may indicate that he wishes to continue in Spanish by saying "Español". It should be understood that, regardless of which language is used in these specific examples, the voice menu system may be used to implement functionality in a wide variety of languages, and the desired language may be associated with a user profile so that the preferred language is used in preference to a system or installation default when the system receives or initiates a call with the user.
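The following sketch illustrates, under assumed data layouts, how spoken phrases recognized by a speech-to-text step might be matched against menu options and how a per-user language preference might take precedence over a default language; the phrase table and profile structure are hypothetical.

```python
DEFAULT_LANGUAGE = "en"

# Hypothetical user profiles keyed by telephone number; a stored language
# preference overrides the system or installation default.
PROFILES = {"+15551234567": {"language": "es"}}

# Phrases the voice menu specifies as matching particular menu options.
PHRASE_OPTIONS = {
    "english": ("set_language", "en"),
    "español": ("set_language", "es"),
    "billing": ("goto", "billing_menu"),
}

def preferred_language(incoming_number: str) -> str:
    """Use the profile's preferred language in preference to the default."""
    return PROFILES.get(incoming_number, {}).get("language", DEFAULT_LANGUAGE)

def match_phrase(spoken_text: str):
    """Map the output of a speech-to-text (STT) step onto a menu option, if any."""
    return PHRASE_OPTIONS.get(spoken_text.strip().lower())

print(preferred_language("+15551234567"))   # es
print(match_phrase("Español"))              # ('set_language', 'es')
print(match_phrase("billing"))              # ('goto', 'billing_menu')
```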
In some embodiments, the user may be allowed to set a custom trigger that will cause the voice menu system to initiate a telephone call from the voice menu system to the user. For example, a user may create a custom notification requesting that, whenever a customer places a product order above a certain amount, they receive a phone call from the voice menu system notifying them of the order. This may help employees responsible for maintaining the supply of products respond quickly to situations that may require their attention. In general, a user may be able to create a customized notification relating to any element of a business or organization accessible by the voice menu system, such as any data managed by a Customer Relationship Management (CRM) system. Access to data managed by the CRM system can be performed in a modular fashion, whereby various supported CRM systems can be used by loading the module corresponding to a particular system as needed for an installation.
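As one possible illustration, the sketch below models a customized trigger as a condition evaluated against CRM events, producing a notification for the associated user when the condition holds; the event shape, threshold, and field names are assumptions for illustration only.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Trigger:
    """A customized trigger: a condition over CRM events plus the user to notify."""
    name: str
    condition: Callable[[dict], bool]
    notify_user: str        # telephone number of the user to call when activated

# Example: notify whenever any customer places a product order above a threshold.
large_order = Trigger(
    name="large_order",
    condition=lambda event: event["type"] == "order" and event["amount"] > 10_000,
    notify_user="+15551234567",
)

def process_event(event: dict, triggers: list) -> list:
    """Return a notification for every trigger activated by the event."""
    return [
        {"trigger": t.name, "call": t.notify_user, "event": event}
        for t in triggers
        if t.condition(event)
    ]

event = {"type": "order", "customer": "Contoso Ltd.", "amount": 25_000}
print(process_event(event, [large_order]))
```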
In some embodiments, a voice menu system may be used to plug in and connect to a wide variety of data services. For example, a voice menu system may be used to connect to a web search service. In instances in which a web search service is connected, the voice menu system may be used to provide a user with access to that service. For example, a voice menu may have a selectable option to initiate a web search, where the voice menu system is used to record the user's voice and convert the voice to text, and the web search is performed on the converted text. The voice menu system may be used to return the results of the web search using text-to-speech conversion, and may be used to process the results using any of a number of known techniques for answering questions with a web search (or other internet-based data search) to produce answers to the spoken text. In some embodiments, the voice menu system may be used to provide this functionality as part of its customized voice script creation process, where a pre-built web search element may be placed in a user's customized voice menu without the user performing any programming.
It will be appreciated that the above possible embodiments and advantages may be used in various combinations, which may greatly expand the versatility and customizability of a voice menu system. As a result, embodiments may improve the utility, affordability, scalability, modularity, extensibility, and interoperability of a voice menu system for an operator or user.
Reference will now be made to the drawings wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It may be evident, however, that the novel embodiments can be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to facilitate describing the subject invention. The intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the claimed subject matter.
FIG. 1 shows a block diagram of a voice menu system 100. In one embodiment, the voice menu system 100 may comprise a computer-implemented voice menu system 100 having one or more software applications and/or components. Although the voice menu system 100 shown in FIG. 1 has a limited number of elements in a certain topology, it may be appreciated that the voice menu system 100 may include more or fewer elements in alternate topologies as desired for a given implementation.
As shown in the illustrated embodiment of FIG. 1, voice menu system 100 includes an endpoint component 110, a menu retrieval component 120, a menu execution component 130, a call state persistence component 140, and a notification component 150. The endpoint component 110 is generally operable to receive an incoming call from a user 160, and the endpoint component 110 is operable to identify an incoming telephone number for the incoming call. The menu retrieval component 120 is generally operable to determine a voice menu based on the incoming telephone number. The menu execution component 130 is generally operable to execute voice menus for the user 160. The call state persistence component 140 may generally be used to persist user states in executed voice menus. The notification component 150 is generally operable to initiate an outgoing call from the voice menu system 100 to the device 162 of the user 160 to transmit a notification, the outgoing call using a voice menu based on the contacted user and the notification.
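Purely as an illustrative sketch, the components enumerated above might be composed along the following lines; the class and method names are assumptions and are not drawn from the described embodiments.

```python
class EndpointComponent:
    def receive_incoming_call(self, call):
        """Accept an incoming call and identify its incoming telephone number."""
        return call["from_number"]

class MenuRetrievalComponent:
    def determine_menu(self, incoming_number):
        """Determine a voice menu (customized, role-based, or generic) for the number."""
        return {"menu_for": incoming_number}

class MenuExecutionComponent:
    def execute(self, menu, user):
        """Execute the determined voice menu for the user."""
        print(f"Executing {menu} for {user}")

class VoiceMenuSystem:
    """Sketch of a composition of the components shown in FIG. 1."""
    def __init__(self):
        self.endpoint = EndpointComponent()
        self.menu_retrieval = MenuRetrievalComponent()
        self.menu_execution = MenuExecutionComponent()

    def handle_incoming_call(self, call):
        number = self.endpoint.receive_incoming_call(call)
        menu = self.menu_retrieval.determine_menu(number)
        self.menu_execution.execute(menu, user=number)

VoiceMenuSystem().handle_incoming_call({"from_number": "+15551234567"})
```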
In some embodiments, the voice menu system 100 is communicatively connected to the cloud storage system 170. The cloud storage system 170 may be constructed based on any one of a number of known techniques for implementing a cloud storage system. The cloud storage system 170 may generally be used to receive information and store the information on behalf of the voice menu system 100. In various embodiments, the cloud storage system may be used to store customized voice scripts, role-based voice menus, and generic voice menus.
In general, endpoint component 110 may be used to receive incoming calls from device 162 of user 160. Device 162 may include any electronic device capable of initiating and accepting voice calls, such as a call terminal or User Equipment (UE). The endpoint component 110 may identify an incoming phone number for an incoming call. In various embodiments, incoming calls may be received in various ways. In some embodiments, endpoint component 110 is communicatively connected to a computer-accessible private branch exchange (PBX) system that supports traditional telephone calls. In some embodiments, the endpoint component 110 is communicatively connected to a Session Initiation Protocol (SIP) system that supports Voice over Internet Protocol (VOIP) calls, which itself may support bridging from a traditional telephone network. In some embodiments, endpoint component 110 may be communicatively connected to a plurality of devices that are provisioned with the ability to receive and send telephone calls. In general, endpoint component 110 may be used to receive incoming calls through any mechanism that may be used to receive telephone calls. In various embodiments, the endpoint component 110 may use a modular system in which plug-ins for supporting different telephony technologies may be used to customize which telephony technologies are supported by the voice menu system 100. The endpoint component 110 may be used to determine an incoming telephone number using suitable techniques corresponding to the telephony technologies used by the voice menu system 100, such as Caller Identification (CID) techniques supported by conventional analog and digital telephone systems and many VOIP systems.
In various embodiments, endpoint component 110 may be operative to generate outgoing calls to device 162 of user 160 using a preset telephone number associated with user 160. In general, endpoint component 110 may be operative to generate and execute outgoing calls to user 160 using any mechanism that may be used to generate and execute telephone calls. This may use any of the techniques discussed above with respect to receiving an incoming telephone call. In various embodiments, endpoint component 110 may be used to manage an outgoing telephone call queue. This outgoing telephone call queue is particularly advantageous for systems having limited capabilities to generate outgoing telephone calls (such as a limited number of outgoing telephone lines), where the number of outgoing telephone calls requested by the voice menu system 100 may exceed the number of outgoing telephone lines. In general, endpoint component 110 may be operative to receive a request to initiate an outgoing telephone call and place the telephone call request in the outgoing telephone call queue. Endpoint component 110 may be operative to take the next outgoing telephone call request from the outgoing telephone call queue and initiate the outgoing telephone call as resources become available.
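A minimal sketch of the outgoing telephone call queue described above, under the assumption of a fixed number of outgoing lines; the class and method names are illustrative only.

```python
from collections import deque

class OutgoingCallQueue:
    """Queue outgoing call requests when they exceed the available outgoing lines."""

    def __init__(self, outgoing_lines: int):
        self.free_lines = outgoing_lines
        self.pending = deque()

    def request_call(self, number: str) -> None:
        """Place a requested outgoing call in the queue."""
        self.pending.append(number)

    def dial_next(self):
        """Take the next request and initiate the call once a line is available."""
        if self.pending and self.free_lines > 0:
            self.free_lines -= 1
            number = self.pending.popleft()
            print(f"Dialing {number}")
            return number
        return None

    def call_finished(self) -> None:
        """Release the outgoing line when a call completes."""
        self.free_lines += 1

queue = OutgoingCallQueue(outgoing_lines=1)
queue.request_call("+15551230001")
queue.request_call("+15551230002")
queue.dial_next()          # dials the first number
queue.dial_next()          # returns None: no line is free until call_finished()
```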
The menu retrieval component 120 is generally operative to determine a voice menu based on the identified incoming telephone number. In some examples, the identified voice menu may include a customized voice menu specific to the user 160. In some examples, the identified voice menu may include a role-based voice menu 174 for the role associated with the user 160. In some examples, the identified voice menu may include a generic voice menu 176.
The menu retrieval component 120 is operable to identify a customized voice menu specific to the user 160 and load the customized voice menu, wherein loading the customized voice menu comprises retrieving a customized voice script 172 specific to the user 160 in response to the received incoming call and compiling the customized voice script 172 to produce the customized voice menu. The customized voice script may be stored in an intermediate format, such as a markup language (for example, Extensible Markup Language (XML)), that the system cannot use directly to execute the voice menu. In other words, the customized voice script may be stored in an intermediate format that the system needs to compile before the script can be used. In these instances, the menu retrieval component 120 can be operative to compile the script in response to the script being needed. Compiling the script may include operations such as linking the compiled voice menu to various information or control systems (such as a CRM system) that will be used when executing the voice menu.
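As an illustration of an intermediate, markup-based script format and on-demand compilation, the following sketch parses a small hypothetical XML voice script into a menu structure that an execution component could run; the element names are not a defined schema, and the linking step described above is omitted.

```python
import xml.etree.ElementTree as ET

# Hypothetical customized voice script stored in an intermediate XML format.
SCRIPT_XML = """
<voice_script user="+15551234567">
  <prompt text="Dial 1 for your largest customer's latest order date."/>
  <option key="1" action="crm_lookup" query="largest_customer_last_order"/>
  <option key="2" action="call_customer" query="longest_uncontacted_customer"/>
</voice_script>
"""

def compile_script(script_xml: str) -> dict:
    """Compile the intermediate script into a menu the execution component can run.

    A fuller compilation step might also link the menu against the libraries of the
    particular device (POTS vs. VOIP) that will execute it; that linking is omitted here.
    """
    root = ET.fromstring(script_xml)
    return {
        "user": root.get("user"),
        "prompts": [p.get("text") for p in root.findall("prompt")],
        "options": {o.get("key"): (o.get("action"), o.get("query"))
                    for o in root.findall("option")},
    }

# Compiled only when needed, e.g. in response to a received incoming call.
menu = compile_script(SCRIPT_XML)
print(menu["options"]["1"])   # ('crm_lookup', 'largest_customer_last_order')
```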
In some instances, the customized voice script 172 may have been previously created using input from the user 160. In some embodiments, creation of the customized voice script may be accomplished without programming, using a drag-and-drop interface. Drag-and-drop interface elements may be linked together to provide both the logic and the information for a flow. The logic flow may use any conventional techniques for logic flows, including, but not limited to, the following elements: elements that receive user input and convert it into a form usable by the voice menu system 100; elements that access a data repository (such as a CRM) and retrieve information in a form usable by the voice menu system 100; branch points that branch based on user input, such as button presses that generate DTMF tones; branch points that branch based on information retrieved from the data repository; branch points that branch based on a comparison between two or more pieces of information (such as between pieces of retrieved information or between retrieved information and input information); elements that store data in a data repository (such as a CRM); and elements that provide data to the user (such as by reading the data aloud with a text-to-speech component). In various embodiments, the voice menu script may be customized both for incoming calls received from the user 160 and for outgoing notifications, allowing, for example, a customized voice menu that provides a plurality of options for responding to notifications received from the voice menu system 100.
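The element types listed above might be modeled roughly as follows, assuming a simple graph of linked elements of the kind a drag-and-drop interface could produce; the element classes and the flow shown are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class PromptElement:
    """Element that provides data to the user (e.g. read aloud by text-to-speech)."""
    text: str
    next_id: str

@dataclass
class InputBranchElement:
    """Branch point that branches based on user input such as a DTMF button press."""
    choices: dict          # maps a pressed key to the identifier of the next element

@dataclass
class DataLookupElement:
    """Element that accesses a data repository (such as a CRM) and retrieves a value."""
    query: str
    store_as: str
    next_id: str

# A tiny logic flow assembled from linked elements, as a drag-and-drop
# interface might produce without any programming by the user.
FLOW = {
    "start": PromptElement("Dial 1 for billing, 2 for delivery status.", "choose"),
    "choose": InputBranchElement({"1": "billing", "2": "delivery"}),
    "billing": DataLookupElement("billing_balance", "balance", "end"),
    "delivery": DataLookupElement("delivery_status", "status", "end"),
}

print(type(FLOW["choose"]).__name__, FLOW["choose"].choices)
```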
Generally, the menu execution component 130 can be utilized to execute voice menus for the user 160. Executing the voice menu may include executing a pre-compiled generic voice menu, a pre-compiled role-based voice menu, or a customized voice menu compiled on demand from a customized voice script specific to the user 160. Executing the voice menu may include retrieving, storing, or modifying data stored in a data repository, such as a CRM system. Executing the voice menu may include performing text-to-speech conversion on pre-written text or dynamically retrieved data. In various embodiments, the text-to-speech conversion may be for a spoken human language (such as English or Spanish), where the spoken human language used is based on the spoken human language stored in a profile associated with the user 160.
Generally, call state persistence component 140 can be utilized to persist user states in executed voice menus. Persisting user state may include persisting information such as a location in the logic flow of a voice menu, data entered by a user during the course of the voice menu, and data retrieved by voice menu system 100 (such as from a CRM). Call state persistence component 140 may be operative to restore a user state in the executed voice menu based on the persisted user state. For example, if a user connects to the voice menu system 100 and begins navigating through the voice menu, but is subsequently unexpectedly disconnected, the call state persistence component 140 can be used to place the user 160 back at their previous position in the navigation of the voice menu without the user having to retrace their steps. In some embodiments, this restoration may be done automatically after the user reconnects. In other embodiments, the restoration may be an option presented to the user 160 after the user reconnects to the voice menu system 100, where the user is given the option of either restarting (in which case the persisted user state will be deleted) or restoring their state. In the event that the user 160 reconnects after disconnecting, this can be accomplished, for example, by launching a resume-specific script shared by some or all of the users, wherein either selected option (resume or not) will cause the menu execution component 130 to execute a voice menu (such as a customized voice menu specific to the user) from the beginning of the navigation or from the middle of the navigation. In various embodiments, the user's state may be persisted in the cloud storage system 170.
The notification component 150 can be used to initiate an outgoing call from the voice menu system 100 to the device 162 of the user 160 to transmit a notification, the outgoing call using a voice menu based on one or more of the contacted user, the notification, or the activated trigger. The notification may include a notification that a customized trigger (such as a customized trigger created by user 160) has been activated, the notification including one or more pieces of data associated with the customized trigger. The customized trigger may be based on any piece of information available to the voice menu system 100, including any information stored in the CRM system. For example, a custom trigger may specify that if an event (an order, receipt of a bill, receipt of a payment, or any other business or organizational action) is recorded in the CRM system, a notification of the event will be communicated by the voice menu system 100 to the associated user 160. In various embodiments, a data repository (such as a CRM system) may be responsible for activating a trigger as a result of a specified event occurring, with the data repository delivering a notification to notification component 150. In other embodiments, the notification component 150 can be employed to periodically poll the data repository to determine whether a related event has occurred.
Generally, the customized trigger will cause the voice menu system 100 to initiate a telephone call from the voice menu system 100 to the user 160. For example, the user 160 may create a custom notification requesting that, whenever any customer places a product order above a certain amount, they receive a phone call from the voice menu system 100 notifying them of the order. This may help employees responsible for maintaining the supply of products respond quickly to situations that may require their attention.
The user 160 may be able to create a customized notification relating to any element of a business or organization accessible to the voice menu system 100, such as any data managed by the CRM system. A customized notification may have associated with it a customized voice menu that conveys one or more pieces of data related to the notification and includes menu options for responding to the notification.
The process of initiating an outgoing telephone call may have two parts. In the first part, the notification component 150 can place a notification (received or generated by itself) in the outgoing telephone call queue, where the notification contains information identifying the activated trigger and thus the associated outgoing telephone number and voice menu. In the second part, a call component (such as endpoint component 110) may periodically remove notifications from the outgoing telephone call queue, attempt to contact the relevant user 160 via device 162, and execute a voice menu associated with the notification. The endpoint component 110 may be used to obtain pending notifications and initiate outgoing telephone calls to the extent that it has the processing and communication resources to do so.
In some embodiments, the notification component 150 may be used to place failed outgoing calls (i.e., calls in which the user 160 did not answer their phone or otherwise could not be reached) into a failed call queue for eventual reattempt. The voice menu system 100 may retry a failed outgoing call periodically, such as based on a predefined time increment or a user-defined time increment. In some embodiments, notification component 150 can be used to leave a voicemail or answering machine message for the user indicating some portion of the notification.
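A sketch of the two-part process and failed call queue described in the preceding paragraphs: the notification component enqueues a notification identifying the activated trigger, and a call component later removes it, attempts the outgoing call, and requeues it on failure; the queue layout and contact function are illustrative assumptions.

```python
from collections import deque

notification_queue = deque()   # outgoing telephone call queue
failed_call_queue = deque()    # failed calls retried later

def enqueue_notification(trigger_name: str, number: str, menu_id: str) -> None:
    """Part one: the notification component places the notification in the queue."""
    notification_queue.append({"trigger": trigger_name, "number": number, "menu": menu_id})

def attempt_call(number: str) -> bool:
    """Stand-in for actually contacting the user's device; always 'fails' in this sketch."""
    return False

def process_next_notification() -> None:
    """Part two: a call component removes a notification and attempts the outgoing call."""
    if not notification_queue:
        return
    note = notification_queue.popleft()
    if attempt_call(note["number"]):
        print(f"Executing voice menu {note['menu']} for {note['number']}")
    else:
        failed_call_queue.append(note)   # retried later, e.g. after a time increment

enqueue_notification("large_order", "+15551234567", "order_notice_menu")
process_next_notification()
print(len(failed_call_queue))   # 1: the unanswered call awaits a retry
```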
In some embodiments, the voice menu system 100 may be used to provide notifications alternatively or additionally through non-telephonic means. For example, the voice menu system 100 may be used to contact the user 160 using various messaging applications, such as by sending a text message using Short Message Service (SMS), Multimedia Messaging Service (MMS), Instant Messaging (IM), email, a push notification to a desktop or mobile device application, or any other form of communication for contacting the user 160. Generally, the voice menu system 100 may support one or more defined contact methods and may support loading modules that provide additional contact methods. The voice menu system 100 is generally operable to store one or more contact mechanisms for the user 160, to associate the contact mechanisms with a trigger or user profile, and to store one or more preferences indicating which contact mechanisms are required or allowed, which are preferred among them, and how they are preferred relative to using a telephone call. Similarly, a profile or trigger may have multiple telephone numbers of the user 160 associated with it, with stored preferences indicating which contact mechanism is preferred. The user 160 may associate one or more date- and time-based rules for specifying preferred contact forms with their profile.
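One way the stored contact mechanisms, preferences, and date- and time-based rules described above might be evaluated is sketched below; the profile structure and rule format are assumptions for illustration.

```python
from datetime import datetime

# Hypothetical profile: several contact mechanisms with an ordered preference,
# plus a time-based rule selecting a different mechanism outside business hours.
PROFILE = {
    "mechanisms": {"phone": "+15551234567", "sms": "+15551234567", "email": "user@example.com"},
    "preference_order": ["phone", "sms", "email"],
    "rules": [
        {"when": lambda now: now.hour < 8 or now.hour >= 18, "prefer": "email"},
    ],
}

def choose_contact_mechanism(profile: dict, now: datetime) -> tuple:
    """Apply date/time rules first, then fall back to the stored preference order."""
    for rule in profile["rules"]:
        if rule["when"](now):
            mech = rule["prefer"]
            return mech, profile["mechanisms"][mech]
    mech = profile["preference_order"][0]
    return mech, profile["mechanisms"][mech]

print(choose_contact_mechanism(PROFILE, datetime(2012, 11, 2, 21, 0)))  # after hours: email
print(choose_contact_mechanism(PROFILE, datetime(2012, 11, 2, 10, 0)))  # business hours: phone
```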
In some cases, multiple methods of contacting a user at a particular phone number may be possible. For example, a user 160 having a Session Initiation Protocol (SIP) based telephone number may be contacted by an internet connection to the SIP provider of the telephone number or by dialing the telephone number using a POTS network, where the SIP provider will bridge between the POTS network and its VoIP network. Endpoint component 110 may be operative to select a technique for contacting user 160 based on one or more criteria, such as an attempt to maximize voice quality, an attempt to minimize cost, an attempt to minimize local processing resources, or the availability of local communication resources (such as whether an additional outgoing POTS connection is available).
Included herein is a set of flow charts representative of exemplary methodologies for performing novel aspects of the disclosed architecture. While, for purposes of simplicity of explanation, the one or more methodologies shown herein, e.g., in the form of a flow chart or flow diagram, are shown and described as a series of acts, it is to be understood and appreciated that the methodologies are not limited by the order of acts, as some acts may, in accordance with the present invention, occur in a different order and/or concurrently with other acts from that shown and described herein. For example, those skilled in the art will understand and appreciate that a methodology could alternatively be represented as a series of interrelated states or events, such as in a state diagram. Moreover, not all acts illustrated in a methodology may be required for a novel implementation.
FIG. 2 illustrates one embodiment of a first logic flow 200. The first logic flow 200 may be representative of some or all of the operations executed by one or more embodiments described herein, such as, for example, the voice menu system 100.
At block 210, the operations of logic flow 200 are initiated.
At block 220, an incoming call is received from a user. For example, the user 160 may utilize the device 162 to initiate a call received by the endpoint component 110 of the voice menu system 100.
At block 230, an incoming phone number for an incoming call is identified. For example, the endpoint component 110 may be used to determine an incoming telephone number using suitable techniques corresponding to the telephone techniques used by the voice menu system 100, such as Caller Identification (CID) techniques supported by conventional analog and digital telephone systems and many VOIP systems.
At block 240, a voice menu is determined based on the incoming telephone number. Determining the voice menu can include determining to use one of a customized voice menu specific to the user, a role-based voice menu for a role associated with the user, or a generic voice menu. These options may be considered in order of priority, such that the voice menu system 100 selects a customized voice menu over a role-based menu or a generic menu, and selects a role-based menu over a generic menu.
Thus, the voice menu system may first determine whether a customized voice menu specific to the user is available, such as by performing a lookup (such as a table lookup) using the identified incoming telephone number to determine whether the identified incoming telephone number is associated with a particular customized voice menu or customized voice script. This may include an intermediate step of determining a profile for the user based on the identified incoming telephone number, where the profile may specify a customized voice menu or script and may specify a role for the user. If a customized voice menu is identified, the customized voice menu may be used as the voice menu for the incoming call. If a customized voice script is identified, the customized voice script can be compiled on demand into a customized voice menu to be used as the voice menu for the incoming call.
If no customized voice menu or customized script is identified, such that no customized voice menu specific to the user is available, a role associated with the user may be determined. For example, the role associated with the user may be specified in a lookup table based on the incoming telephone number, or may be specified in a profile of the user, the profile being determined based on the incoming telephone number. Roles can include any position in an organization or business that can be shared by multiple individuals, so that a role is not specific to one user, but is more specific than a generic script adapted to be used by anyone who calls the voice menu system. Roles may be specific to the type of work performed, such as, but not limited to, sales, support, management, or development. Roles may be specific to a person's status within the organization, such as distinguishing between interns, junior employees, regular employees, and management. In general, a role-based voice menu can be created for any group of people within an organization whose needs overlap sufficiently that a role-specific voice menu can usefully serve the group jointly. Role-based voice menus may provide particular benefits to members of an organization who have not defined customized voice menus. For example, an employee may be assigned a role when the employee is hired, the role specifying a role-based voice menu for use by the employee, such that the employee immediately experiences a more specialized voice menu than a generic voice menu, with the option of creating a customized voice menu specific to the user remaining available as the user gains experience and develops a need for options specific to his or her work.
If no customized voice menu or script or role-based menu is identified, a generic voice menu may be loaded. The generic voice menu may include any default voice menu that applies to incoming callers and telephone numbers that do not have an associated customized voice menu or role-based voice menu.
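The selection priority described in the preceding paragraphs, a customized menu over a role-based menu over a generic menu, might be expressed as a simple lookup chain such as the following sketch; the lookup tables and return values are hypothetical.

```python
# Hypothetical lookup tables keyed by incoming telephone number.
CUSTOM_SCRIPTS = {"+15551234567": "script_for_sales_manager"}
USER_ROLES = {"+15557654321": "support"}
ROLE_MENUS = {"support": "support_role_menu", "sales": "sales_role_menu"}
GENERIC_MENU = "generic_menu"

def determine_voice_menu(incoming_number: str) -> str:
    """Prefer a customized menu, then a role-based menu, then the generic menu."""
    script = CUSTOM_SCRIPTS.get(incoming_number)
    if script is not None:
        return f"compiled({script})"          # compiled on demand from the user's script
    role = USER_ROLES.get(incoming_number)
    if role in ROLE_MENUS:
        return ROLE_MENUS[role]               # pre-compiled role-based menu
    return GENERIC_MENU                       # default for all other callers

print(determine_voice_menu("+15551234567"))   # compiled(script_for_sales_manager)
print(determine_voice_menu("+15557654321"))   # support_role_menu
print(determine_voice_menu("+15550000000"))   # generic_menu
```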
It should be understood that although the above embodiments are generally described in relation to members of an organization hosting the voice menu system, customized voice menus and role-based voice menus may also be made available to individuals outside of the organization. For example, a telephone number associated with a customer of a service may be associated with a customer-specific role-based voice menu, such that regular customers need not specify that they are customers when they call the system. The role-based customer voice menu may focus on options for placing orders, contacting a sales associate, or other customer-specific requirements. Similarly, a telephone number associated with a supplier of a service may be associated with a supplier-specific role-based voice menu. The role-based supplier voice menu may focus on options for billing queries, contacting a purchasing associate, or other supplier-specific requirements. In addition, a customized voice menu may be created for a particular individual outside of the organization, such as for a particular customer or supplier, or for a particular business contact at a customer, supplier, or other external entity.
At block 250, a voice menu is executed for the user. As discussed previously, this may include using text-to-speech conversion, receiving data input from a user, collecting data from a data repository (such as a CRM system), or any other element in the logic flow.
FIG. 3 illustrates one embodiment of a second logic flow 300. The second logic flow 300 may be representative of some or all of the operations executed by one or more embodiments described herein.
At block 310, the operations of the second logic flow 300 are initiated.
At block 320, a notification of the activated trigger for the user is received. The notification may be received from a system for managing data, such as a CRM system for managing customer data. The triggers may include any customized or standardized triggers suitable for a system for managing data. Triggers may include triggers for notifying one or more users based on the occurrence of any event that the system for managing or monitoring data may monitor. The trigger may be specific to a particular user or may be shared by a group of users. Where the trigger is shared by a group of users, the trigger may include rules for determining which user or users in the group should receive the notification. In some embodiments, the rules may include a ranking priority of one or more users in the group, such that the order in which notifications should be delivered is specified such that if one user cannot be reached, the system attempts to contact the next user in the specified order. In some embodiments, the set of users may include a role, such that a profile associated with the role specifies rules that cause one or more users who are eligible for the role to be sent notifications, or specifies an order of contacts for the users who are eligible for the role.
At block 330, a voice menu is determined based on the activated trigger. The voice menu may be a voice menu specific to the user or a group-based or role-based voice menu. The voice menu may include one or more options for notifying the contacted user or users of the occurrence of the activated trigger and, possibly, for responding to the notification. The user-specific or role-based voice menus may be specific to a particular trigger or may be shared by multiple triggers. For example, a trigger may specify that a developer is to be automatically contacted whenever a particular software development project completes a test run, with an associated voice menu used for any software test run that has ended, but containing a toggle or other configuration element that allows the voice menu to adjust dynamically to the particular test run that ended, such as by taking a test run identifier as an input that may be communicated to the contacted user.
At block 340, an outgoing telephone number is determined. The outgoing telephone number may be explicitly encoded as part of the trigger or of a voice menu associated with the trigger, may be obtained based on a profile associated with a user specified by the trigger, or may be obtained based on a profile associated with a user determined indirectly from the trigger as described above, such as where the trigger specifies a role to contact and the profile associated with the role specifies one or more rules for selecting one or more users to contact.
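The three sources of an outgoing telephone number described above can be illustrated with the following sketch; the profile and trigger structures are assumptions made for the example.

```python
USER_PROFILES = {
    "alice": {"number": "+15551110001"},
    "bob":   {"number": "+15551110002"},
}
ROLE_PROFILES = {
    # Rule: contact users qualifying for the role in this order until one is reached.
    "on_call_developer": {"contact_order": ["bob", "alice"]},
}

def determine_outgoing_number(trigger: dict) -> str:
    """Resolve the outgoing number from the trigger, a named user, or a named role."""
    if "number" in trigger:                              # explicitly encoded in the trigger
        return trigger["number"]
    if "user" in trigger:                                # via the named user's profile
        return USER_PROFILES[trigger["user"]]["number"]
    role = ROLE_PROFILES[trigger["role"]]                # via the role's contact rules
    first_user = role["contact_order"][0]
    return USER_PROFILES[first_user]["number"]

print(determine_outgoing_number({"number": "+15559990000"}))
print(determine_outgoing_number({"user": "alice"}))
print(determine_outgoing_number({"role": "on_call_developer"}))
```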
At block 350, the determined voice menu is executed for the user. As discussed previously, this may include using text-to-speech conversion, receiving data input from a user, collecting data from a data repository (such as a CRM system), or any other element in the logic flow.
FIG. 4 shows a block diagram of a centralized system 400. The centralized system 400 may implement some or all of the structure and/or operation of the voice menu system 100 in a single computing entity, such as entirely within a single computing device 410.
The computing device 410 may use the processing component 430 to execute processing operations or logic of the voice menu system 100. The processing component 430 may include various hardware elements, software elements, or a combination of both. Examples of hardware elements may include devices, components, processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, Application Specific Integrated Circuits (ASIC), Programmable Logic Devices (PLD), Digital Signal Processors (DSP), Field Programmable Gate Array (FPGA), memory units, logic gates, registers, semiconductor device, chips, microchips, chip sets, and so forth. Examples of software elements may include software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, Application Program Interfaces (API), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. Determining whether an embodiment is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds and other design or performance constraints, as desired for a given implementation.
The computing device 410 may use the communication component 440 to perform communication operations or logic of the voice menu system 100. The communications component 440 may implement any well-known communications techniques and protocols, such as techniques suitable for use with packet-switched networks (e.g., public networks such as the Internet, private networks such as an enterprise intranet, and so forth), circuit-switched networks (e.g., the public switched telephone network), or a combination of packet-switched networks and circuit-switched networks (using suitable gateways and translators). Communications component 440 may include various types of standard communications elements, such as one or more communications interfaces, Network Interface Cards (NICs), radios, wireless transmitters/receivers (transceivers), wired and/or wireless communications media, physical connectors, and so forth. By way of example, and not limitation, communication media 420 include wired communication media and wireless communication media. Examples of wired communications media may include a wire, cable, Printed Circuit Board (PCB), backplane, switch fabric, semiconductor material, twisted-pair wire, co-axial cable, fiber optics, a propagated signal, and so forth. Examples of wireless communication media may include acoustic, Radio Frequency (RF) spectrum, infrared, and other wireless media 420.
Computing device 410 may communicate with other devices 450, 460, and 470 over communication medium 420 using communication signals 422 via communication component 440. Similarly, the computing device 410 may communicate with the cloud storage system 170 over the communication medium 420 using the communication signals 422 via the communication component 440.
Device 450 may comprise a first user device for a user to specify a customized voice menu. The signals 422 transmitted over the medium 420 in connection with the device 450 may thus include a network connection to the voice menu system 100 for the purpose of accessing an interface for specifying the customized voice menu. In some embodiments, device 450 may use customization software (such as software specific to the voice menu system 100) to specify a customized voice menu. In some embodiments, the voice menu system 100 can provide a web-based software solution for specifying customized voice menus, such as by using one or both of the processing component 430 and the communication component 440. In either case, the customized voice menu received by the voice menu system 100 can include a customized voice menu, specific to the user, created without the use of programming. Creating a customized voice menu without using programming may correspond to creating a customized voice menu using only drag-and-drop element placements, drag-and-drop links between elements, and input of basic terms (such as numbers, identifiers, or names), none of which requires the use of a specialized programming language. In general, creating a customized voice menu without the use of programming may include any process for creating a customized voice menu that does not rely on the user writing code in a programming language (such as C or JAVA) and does not rely on the user entering information in a schema intended to encode documents in a machine-readable form (such as XML).
The device 460 may comprise a second user device for the user to initiate an incoming telephone call to the voice menu system 100. In response to the voice menu system 100 receiving the incoming telephone call, the signal 422 transmitted over the medium 420 in connection with the device 460 may thus include the initiation and execution of a voice menu for the user, such as a customized voice menu, a role-based voice menu, or a generic voice menu.
The device 470 may include a third user device that receives outgoing telephone calls from the voice menu system 100. The signals 422 transmitted over the medium 420 in relation to the device 470 may thus include initiation and execution of a voice menu (such as a customized voice menu or a role-based voice menu) for the user in response to the voice menu system 100 receiving an activated trigger for the user.
In various embodiments, the cloud storage system 170 may be responsible for storing, managing, and retrieving data for use by the computing device 410 in performing the functions of the voice menu system 100. Further, in various embodiments, the cloud storage system 170 may be responsible for recognizing events corresponding to stored triggers and communicating the activation of the triggers to the voice menu system 100.
FIG. 5 shows a block diagram of a distributed system 500. The distributed system 500 may distribute portions of the structure and/or operation of the systems 100, 400 across multiple computing entities. Examples of distributed system 500 may include, but are not limited to, a client-server architecture, a 3-tier architecture, an N-tier architecture, a tightly coupled or clustered architecture, a peer-to-peer architecture, a master-slave architecture, a shared database architecture, and other types of distributed systems. The embodiments are not limited in this context.
Client system 510 and server system 550 may process information using a processing component 530, which is similar to the processing component 430 described with reference to FIG. 4. Client system 510 and server system 550 may communicate with each other over communication medium 520 using communication signals 522 via a communication component 540, which is similar to the communication component 440 described with reference to FIG. 4.
In one embodiment, for example, distributed system 500 may be implemented as a client-server system. Client system 510 may include a call component 515, the call component 515 implementing endpoint component 110, menu execution component 130, call state persistence component 140, and notification component 150. In general, client system 510 may include a system for initiating and receiving calls to and from users and for performing and managing such calls. The server system 550 may include a voice menu component 555 that implements some or all of the functionality of the menu retrieval component 120 and the cloud storage system 170. It should be understood that other divisions of labor between the client system 510 and the server system 550 are contemplated.
In various embodiments, client system 510 may include or use one or more client computing devices and/or client programs for performing various methods in accordance with the described embodiments. For example, client system 510 may be one of a plurality of client systems, each of which is used to perform call tasks for distributed system 500 as directed by server system 550. As previously discussed, the voice menu system 100 may queue outgoing telephone calls in the event that insufficient resources are available to place all requested outgoing telephone calls simultaneously. In some embodiments, the outgoing telephone call queue may be shared among the client systems, such that an outgoing telephone call in the outgoing telephone call queue may be processed by any one of the plurality of client systems.
In various embodiments where multiple client systems are used, the distributed system 500 may be used to dynamically scale the number of client systems based on demand. In the event that more incoming calls are received or more outgoing calls are needed, additional client systems may be allocated for use by the distributed system 500 to handle the additional traffic. Such scaling up may be limited or modified by specific business relationships where appropriate, such as where a client system is leased. Similarly, such scaling up may be recorded by the distributed system 500 for billing or performance improvement purposes. Conversely, where more client systems are allocated to distributed system 500 than are needed, some portion of the client systems currently being used by distributed system 500 may be deallocated, which may result in, for example, the deallocated systems being moved to a general pool or otherwise becoming available to other users. The deallocation of a system may include an incremental process in which the client system to be deallocated processes any in-progress calls until they complete, while accepting no new incoming calls and pulling no further outgoing calls from the queue, at which point the client system may be fully deallocated. Alternatively, where possible, an in-progress call may be migrated from the client system to be deallocated to another client system, after which the client system may be deallocated.
In various embodiments, the server system 550 may include or use one or more server computing devices and/or server programs for performing various methods in accordance with the described embodiments. For example, when installed and/or deployed, a server program may support one or more server roles for server computing devices that provide particular services and features. The exemplary server system 550 may include, for example, a suitable server-based operating system. Exemplary server programs may include, for example, communications server programs such as Office Communications Server (OCS) for managing incoming and outgoing messages, messaging server programs such as an Exchange server used to provide Unified Messaging (UM) for email, voicemail, VoIP, Instant Messaging (IM), group IM, enhanced presence, and audio-video conferencing, and/or other types of programs, applications, or services in accordance with the described embodiments.
In various embodiments, server system 550 may implement functionality for mediating between one or more client systems (such as client system 510) and one or more data repositories (such as CRM systems). The server system 550 may be implemented as a cloud computing system, where multiple servers operate together to perform computing tasks. The server system 550 may be used to store and manage user profiles, role-based profiles, customized voice scripts specific to a user, customized voice menus specific to a user, role-based voice menus, generic voice menus, persistently stored call states, pending notifications, activated triggers, inactivated triggers, and data necessary to determine whether a trigger should be activated. It should be appreciated that by persisting call state in the server system 550, for example, an incoming telephone call received from a user having persisted state may be handled by a client system that is different from the client system that handled the disconnected call, since the new client system may access the persisted state via the server system 550.
FIG. 6 illustrates one embodiment of an exemplary computing architecture 600 suitable for implementing the aforementioned embodiments, such as the device 162 and various components of the voice menu system 100. As used in this application, the terms "system" and "component" are intended to refer to a computer-related entity, either hardware, a combination of hardware and software, or software in execution, examples of which are provided by the exemplary computing architecture 600. For example, a component may be, but is not limited to being, a process running on a processor, a hard disk drive, multiple storage drives (of optical and/or magnetic storage medium), an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a server and the server can be a component. One or more components can reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers. Further, the components may be communicatively coupled to each other by various types of communications media to coordinate operations. The coordination may involve a one-way or two-way exchange of information. For example, a component may communicate information in the form of signals communicated over the communications media. This information may be implemented as signals assigned to the respective signal lines. In these allocations, each message is a signal. However, other embodiments may alternatively employ data messages. These data messages may be sent across various connections. Exemplary connections include parallel interfaces, serial interfaces, and bus interfaces.
In one embodiment, the computing architecture 600 may comprise or be implemented as part of an electronic device. Examples of an electronic device may include, but are not limited to, a mobile device, a personal digital assistant, a mobile computing device, a smartphone, a cellular telephone, a handset, a one-way pager, a two-way pager, a messaging device, a computer, a Personal Computer (PC), a desktop computer, a laptop computer, a notebook computer, a handheld computer, a tablet computer, a server array or server farm, a web server, a network server, an Internet server, a workstation, a minicomputer, a mainframe computer, a supercomputer, a network appliance, a web appliance, a distributed computing system, a multiprocessor system, a processor-based system, consumer electronics, programmable consumer electronics, a television, a digital television, a set-top box, a wireless access point, a base station, a subscriber station, a mobile subscriber center, a radio network controller, a router, a hub, a gateway, a bridge, a switch, a machine, or a combination thereof. The embodiments are not limited in this context.
Computing architecture 600 includes various common computing elements, such as one or more processors, co-processors, memory units, chipsets, controllers, peripherals, interfaces, oscillators, timing devices, video cards, audio cards, multimedia input/output (I/O) components, and so forth. However, the embodiments are not limited to implementation by the computing architecture 600.
As shown in FIG. 6, the computing architecture 600 includes a processing unit 604, a system memory 606, and a system bus 608. The processing unit 604 can be any of various commercially available processors. Dual microprocessors and other multi-processor architectures may also be employed as the processing unit 604. The system bus 608 provides an interface for system components including, but not limited to, the system memory 606 to the processing unit 604. The system bus 608 can be any of several types of bus structure that may further interconnect to a memory bus (with or without a memory controller), a peripheral bus, and a local bus using any of a variety of commercially available bus architectures.
The computing architecture 600 may comprise or implement various articles of manufacture. An article of manufacture may comprise a computer-readable storage medium to store logic. Examples of a computer-readable storage medium may include any tangible medium capable of storing electronic data, including volatile memory or non-volatile memory, removable or non-removable memory, erasable or non-erasable memory, writeable or re-writeable memory, and so forth. Examples of logic may include executable computer program instructions implemented using any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, visual code, and the like.
The system memory 606 may include various types of computer-readable storage media in the form of one or more higher speed memory units, such as Read Only Memory (ROM), Random Access Memory (RAM), dynamic RAM (DRAM), Double-Data-Rate DRAM (DDRAM), synchronous DRAM (SDRAM), static RAM (SRAM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, polymer memory such as ferroelectric polymer memory, ovonic memory, phase change or ferroelectric memory, silicon-oxide-nitride-oxide-silicon (SONOS) memory, magnetic or optical cards, or any other type of media suitable for storing information. In the illustrated embodiment shown in FIG. 6, the system memory 606 can include non-volatile memory 610 and/or volatile memory 612. A basic input/output system (BIOS) may be stored in the non-volatile memory 610.
The computer 602 may include various types of computer-readable storage media in the form of one or more relatively low-speed memory units, including an internal Hard Disk Drive (HDD) 614, a magnetic Floppy Disk Drive (FDD) 616 for reading from and writing to a removable magnetic disk 618, and an optical disk drive 620 for reading from or writing to a removable optical disk 622 (e.g., a CD-ROM or DVD). The HDD 614, FDD 616 and optical disk drive 620 can be connected to the system bus 608 by an HDD interface 624, an FDD interface 626 and an optical drive interface 628, respectively. The HDD interface 624 for external drive implementations can include at least one or both of Universal Serial Bus (USB) and IEEE 1394 interface technologies.
The drives and associated computer-readable media provide volatile and/or nonvolatile storage of data, data structures, computer-executable instructions, and so forth. For example, a number of program modules can be stored in the drives and memory units 610, 612, including an operating system 630, one or more application programs 632, other program modules 634, and program data 636.
The one or more application programs 632, other program modules 634, and program data 636 can include, for example, the endpoint component 110, the menu retrieval component 120, the menu execution component 130, the call state persistence component 140, or the notification component 150.
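By way of a non-limiting illustration, the following sketch shows one way a menu retrieval component such as the menu retrieval component 120 might select among a customized voice menu, a role-based voice menu, and a generic voice menu based on the incoming telephone number, with the selected menu then executed for the caller. The class names, data structures, and lookups below are assumptions made for illustration only and are not taken from the disclosure.

```python
# Illustrative sketch only: names, signatures, and data structures are assumptions,
# not the actual implementation of components 110, 120, and 130.
from dataclasses import dataclass
from typing import Callable, Dict, Optional


@dataclass
class VoiceMenu:
    name: str
    run: Callable[[], None]  # plays prompts and handles caller input when executed


class MenuRetrievalComponent:
    def __init__(self,
                 customized_menus: Dict[str, VoiceMenu],  # telephone number -> customized menu
                 roles: Dict[str, str],                   # telephone number -> role
                 role_menus: Dict[str, VoiceMenu],        # role -> role-based menu
                 generic_menu: VoiceMenu):
        self.customized_menus = customized_menus
        self.roles = roles
        self.role_menus = role_menus
        self.generic_menu = generic_menu

    def determine_menu(self, incoming_number: str) -> VoiceMenu:
        # Prefer a dynamically compiled menu customized for this caller.
        customized: Optional[VoiceMenu] = self.customized_menus.get(incoming_number)
        if customized is not None:
            return customized
        # Otherwise fall back to a menu associated with the caller's role, if any.
        role = self.roles.get(incoming_number)
        if role is not None and role in self.role_menus:
            return self.role_menus[role]
        # Otherwise load the generic menu shared by all callers.
        return self.generic_menu


def handle_incoming_call(incoming_number: str, retrieval: MenuRetrievalComponent) -> None:
    # The endpoint component identifies the incoming number; the menu execution
    # component then executes whichever menu the retrieval component loaded.
    menu = retrieval.determine_menu(incoming_number)
    menu.run()
```

In this sketch the three dictionaries merely stand in for whatever store a menu retrieval component consults; the fallback order mirrors the customized, role-based, and generic menus described herein.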
A user can enter commands and information into the computer 602 through one or more wired/wireless input devices, e.g., a keyboard 638 and a pointing device, such as a mouse 640. Other input devices may include a microphone, an Infrared (IR) remote control, a joystick, a game pad, a stylus pen, a touch screen, or the like. These and other input devices are often connected to the processing unit 604 through an input device interface 642 that is coupled to the system bus 608, but can be connected by other interfaces, such as a parallel port, an IEEE 1394 serial port, a game port, a USB port, an IR interface, and so forth.
A monitor 644 or other type of display device is also connected to the system bus 608 via an interface, such as a video adapter 646. In addition to the monitor 644, a computer typically includes other peripheral output devices, such as speakers, printers, and so forth.
The computer 602 may operate in a networked environment using logical connections via wired and/or wireless communications to one or more remote computers, such as a remote computer 648. The remote computer 648 can be a workstation, a server computer, a router, a personal computer, a portable computer, a microprocessor-based entertainment appliance, a peer device or other common network node, and typically includes many or all of the elements described relative to the computer 602, although, for purposes of brevity, only a memory/storage device 650 is illustrated. The logical connections depicted include wired/wireless connectivity to a Local Area Network (LAN) 652 and/or larger networks, e.g., a Wide Area Network (WAN) 654. Such LAN and WAN networking environments are commonplace in offices and companies, and facilitate enterprise-wide computer networks, such as intranets, all of which may connect to a global communications network, e.g., the Internet.
When used in a LAN networking environment, the computer 602 is connected to the LAN 652 through a wired and/or wireless communication network interface or adapter 656. The adapter 656 can facilitate wired and/or wireless communications to the LAN 652, which may also include a wireless access point disposed thereon for communicating with the wireless functionality of the adapter 656.
When used in a WAN networking environment, the computer 602 can include a modem 658, or can be connected to a communications server on the WAN 654, or has other means for establishing communications over the WAN 654, such as by way of the Internet. The modem 658, which can be internal or external and a wired and/or wireless device, connects to the system bus 608 via the input device interface 642. In a networked environment, program modules depicted relative to the computer 602, or portions thereof, can be stored in the remote memory/storage device 650. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers may be used.
The computer 602 is operable to communicate with wired and wireless devices or entities using the IEEE 802 family of standards, such as wireless devices operatively disposed in wireless communication (e.g., IEEE 802.11 over-the-air modulation techniques) with, for example, a printer, scanner, desktop and/or portable computer, Personal Digital Assistant (PDA), communications satellite, any piece of equipment or location associated with a wirelessly detectable tag (e.g., a kiosk, news stand, restroom), and telephone. This includes at least Wi-Fi (i.e., Wireless Fidelity), WiMax, and Bluetooth™ wireless technologies. Thus, the communication may be a predefined structure as with a conventional network, or simply an ad hoc communication between at least two devices. Wi-Fi networks use radio technologies called IEEE 802.11x (a, b, g, n, etc.) to provide secure, reliable, fast wireless connectivity. A Wi-Fi network can be used to connect computers to each other, to the Internet, and to wired networks (which use IEEE 802.3-related media and functions).
Fig. 7 illustrates a block diagram of an exemplary communication architecture 700 suitable for implementing various embodiments described above. The communication architecture 700 includes various common communication elements such as a transmitter, receiver, transceiver, radio, network interface, baseband processor, antenna, amplifiers, filters, and so forth. Embodiments, however, are not limited to implementation by communication architecture 700.
As shown in FIG. 7, the communication architecture 700 includes one or more client(s) 702 and server(s) 704. The client 702 may implement the client system 310 or the devices 450, 460, and 470. The server 704 may implement the server system 550. The clients 702 and the servers 704 are operatively connected to one or more respective client data store(s) 708 and server data store(s) 710 that can be employed to store information local to the respective clients 702 and servers 704, such as cookies and/or associated contextual information. In various embodiments, the client data store 708 and/or the server data store 710 may comprise a cloud storage system (such as the cloud storage system 170).
The client(s) 702 and server(s) 704 can communicate information between each other using a communication framework 706. The communication framework 706 may implement any well-known communication techniques and protocols, such as those described with reference to systems 400, 500, and 600. The communications framework 706 may be implemented as a packet-switched network (e.g., a public network such as the internet, a private network such as an enterprise intranet, and so forth), a circuit-switched network (e.g., the public switched telephone network), or a combination of packet-switched and circuit-switched networks (using suitable gateways and translators).
Some embodiments may be described using the expression "one embodiment" and "an embodiment" along with their derivatives. The terms mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment. Furthermore, some embodiments may be described using the expression "coupled" and "connected" along with their derivatives. These terms are not necessarily intended as synonyms for each other. For example, some embodiments may be described using the terms "connected" and/or "coupled" to indicate that two or more elements are in direct physical or electrical contact with each other. The term "coupled," however, may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other.
It is emphasized that the abstract of the disclosure is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing detailed description, it can be seen that various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby incorporated into the detailed description, with each claim standing on its own as a separate embodiment. In the appended claims, the terms "including" and "in which" are used as the plain-English equivalents of the respective terms "comprising" and "characterized by". Moreover, the terms "first," "second," "third," and the like are used merely as labels, and are not intended to impose numerical requirements on their objects.
What has been described above includes examples of the disclosed architecture. It is, of course, not possible to describe every conceivable combination of components and/or methodologies, but one of ordinary skill in the art may recognize that many further combinations and permutations are possible. Accordingly, the novel architecture is intended to embrace all such alterations, modifications and variations that fall within the spirit and scope of the appended claims.
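As a further non-limiting illustration, the sketch below shows one way a previously created, user-specific voice script might be compiled into a customized voice menu only when the incoming call is received, rather than ahead of time. The script format, function name, and prompt wording are assumptions made for illustration and do not limit the embodiments or the claims that follow.

```python
# Illustrative sketch only: the script format and names below are assumptions.
from typing import Dict, List


def compile_voice_script(script: List[Dict[str, str]]) -> Dict[str, Dict[str, str]]:
    """Compile a user-authored voice script (an ordered list of options) into a
    menu keyed by the digit the caller presses."""
    menu: Dict[str, Dict[str, str]] = {}
    for index, option in enumerate(script, start=1):
        menu[str(index)] = {
            "prompt": f"Press {index} {option['label']}.",
            "action": option["action"],
        }
    return menu


# A script the user created earlier, retrieved here by the incoming telephone number.
user_script = [
    {"label": "to hear the status of your request", "action": "read_status"},
    {"label": "to speak with an operator", "action": "transfer_operator"},
]

# Compiled on demand, in response to the incoming call.
customized_menu = compile_voice_script(user_script)
for digit, entry in customized_menu.items():
    print(digit, entry["prompt"], "->", entry["action"])
```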
Claims (6)
1. A system for dynamic voice menus, comprising:
means for receiving an incoming call from a user;
means for identifying, by a processor circuit, a telephone number of the incoming call;
means for determining whether a dynamically compiled customized voice menu specific to the user is available based on the telephone number, and loading the dynamically compiled customized voice menu if it is determined that the dynamically compiled customized voice menu is available;
means for determining a role associated with the user based on the telephone number if it is determined that no dynamically compiled customized voice menu is available, to determine whether a role-based voice menu for the role is available, and to load the role-based voice menu if it is determined that the role-based voice menu is available;
means for loading a generic voice menu if it is determined that no role-based voice menu is available; and
means for executing the loaded voice menu for the user.
2. The system of claim 1, further comprising:
means for retrieving a customized voice script specific to the user; and
means for compiling the customized voice script to generate the customized voice menu in response to the incoming call, wherein the customized voice script was previously created using input from the user.
3. The system of claim 2, further comprising means for persisting user state in the executed voice menu.
4. A computer-implemented method for dynamic voice menus, comprising:
receiving an incoming call from a user;
identifying, by a processor circuit, a telephone number of the incoming call;
determining whether a dynamically compiled customized voice menu specific to the user is available based on the telephone number, and loading the dynamically compiled customized voice menu if it is determined that the dynamically compiled customized voice menu is available;
if it is determined that no dynamically compiled customized voice menu is available, determining a role associated with the user based on the telephone number to determine whether a role-based voice menu for the role is available, and loading the role-based voice menu if it is determined that the role-based voice menu is available;
loading a generic voice menu if it is determined that no role-based voice menu is available; and
executing the loaded voice menu for the user.
5. The computer-implemented method of claim 4, further comprising:
identifying the customized voice menu specific to the user;
loading the customized voice menu;
retrieving a customized voice script specific to the user; and
compiling the customized voice script in response to the incoming call to generate the customized voice menu, wherein the customized voice script was previously created using input from the user.
6. The computer-implemented method of claim 5, further comprising persisting user state in the executed voice menu.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/328,130 | 2011-12-16 | ||
US13/328,130 US9063703B2 (en) | 2011-12-16 | 2011-12-16 | Techniques for dynamic voice menus |
Publications (2)
Publication Number | Publication Date |
---|---|
HK1182864A1 HK1182864A1 (en) | 2013-12-06 |
HK1182864B true HK1182864B (en) | 2016-09-23 |